Search Results

Search found 4503 results on 181 pages for 'black eagle'.

  • WPF MVVM TreeView item source losing context after command

    - by user3955716
    I have a TreeView that displays files. Every view model holds an items source, which is an ObservableCollection of file items:

    ```csharp
    public ObservableCollection<CMItemFileNode> SubItemNode
    ```

    On each item I have context-menu options (Delete, Execute, ...). If I move from one view model to another, the ObservableCollection of files is updated and presented correctly. But when I perform a context-menu command such as deleting a file item, the command executes fine, yet when I then move to another view model (which holds a SubItemNode ObservableCollection of its own), WPF still thinks I am in the last view model I was in, not the one I am really on. It is very important to mention that when I update to .NET 4.5 (which unfortunately I can't do), everything is OK and the ObservableCollection addresses the correct view model. Here is the TreeView:

    ```xml
    <TreeView x:Name="Files" Margin="0,5,5,0" Grid.Row="6" Grid.Column="2"
              ItemsSource="{Binding SubItemNode}"
              HorizontalAlignment="Stretch" HorizontalContentAlignment="Stretch"
              VerticalAlignment="Stretch" Height="300" Grid.RowSpan="6" Width="300"
              dd:DragDrop.IsDragSource="True" dd:DragDrop.IsDropTarget="True"
              dd:DragDrop.DropHandler="{Binding}" dd:DragDrop.UseDefaultDragAdorner="True">
        <TreeView.Resources>
            <Style TargetType="{x:Type TreeView}">
                <Setter Property="local:CMTreeViewFilesBehavior.IsTreeViewFilesBehavior" Value="True"/>
            </Style>
            <Style TargetType="{x:Type TreeViewItem}">
                <Setter Property="IsSelected" Value="{Binding IsSelected}" />
                <Setter Property="local:CMTreeViewFilesItemBehavior.IsTreeViewFilesItemBehavior" Value="True"/>
            </Style>
            <SolidColorBrush x:Key="{x:Static SystemColors.HighlightBrushKey}" Color="Transparent" />
            <SolidColorBrush x:Key="{x:Static SystemColors.HighlightTextBrushKey}" Color="Black" />
        </TreeView.Resources>
        <TreeView.ContextMenu>
            <ContextMenu>
                <MenuItem Header="View File" Command="{Binding ExecuteFileCommand}" />
                <Separator />
                <MenuItem Header="Delete all" Command="{Binding DeleteAllFilesCommand}" />
                <MenuItem Header="Delete selected" Command="{Binding DeleteSelectedFilesCommand}" />
            </ContextMenu>
        </TreeView.ContextMenu>
        <TreeView.ItemTemplate>
            <HierarchicalDataTemplate ItemsSource="{Binding SubItemNode}">
                <Grid>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="*"/>
                    </Grid.ColumnDefinitions>
                    <Image Grid.Column="0" Margin="2" Width="32" Height="18"
                           Source="{Binding Path=Icon}"
                           HorizontalAlignment="Left" VerticalAlignment="Center" />
                    <TextBlock Text="{Binding Path=Name}" Grid.Column="1" Margin="2"
                               VerticalAlignment="Center"
                               Foreground="{Binding Path=Status, Converter={StaticResource ItemFileStatusToColor}}"
                               FontWeight="{Binding Path=IsSelected, Converter={StaticResource BoolToFontWidth}}"/>
                </Grid>
            </HierarchicalDataTemplate>
        </TreeView.ItemTemplate>
    </TreeView>
    ```

    Am I doing something wrong? And why does it work well on .NET 4.5?
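    A plausible lead, given that the question notes .NET 4.5 behaves correctly: 4.5 changed how a ContextMenu picks up its DataContext, and on earlier frameworks the menu (which is not part of the TreeView's visual tree) can keep a stale DataContext from the previous view model. A common workaround is to re-resolve the DataContext from the menu's PlacementTarget every time it opens -- a sketch, reusing the commands from the question:

    ```xml
    <TreeView.ContextMenu>
        <!-- Pull the DataContext from the control the menu is attached to,
             so it is re-evaluated each time the menu opens instead of being
             captured once. -->
        <ContextMenu DataContext="{Binding Path=PlacementTarget.DataContext,
                                           RelativeSource={RelativeSource Self}}">
            <MenuItem Header="View File" Command="{Binding ExecuteFileCommand}" />
            <Separator />
            <MenuItem Header="Delete all" Command="{Binding DeleteAllFilesCommand}" />
            <MenuItem Header="Delete selected" Command="{Binding DeleteSelectedFilesCommand}" />
        </ContextMenu>
    </TreeView.ContextMenu>
    ```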

  • get text from a certain <tr> tag

    - by WideBlade
    Is there a way to dynamically get the text from a certain <tr> tag in the page? E.g., I have a page with a <tr> with the value "a1". I'd like to get only the text from this <tr> tag and echo it into the page. Is this possible? Here is the HTML:

    ```html
    <html>
    <tr id='ieconn2' >
      <td><table width='100%'><tr><td valign='top'><table width='100%'><tr><td>
        <script type="text/javascript"><!--
        google_ad_client = "pub-4503439170693445";
        /* 300x250, created 7/21/10 */
        google_ad_slot = "7608120147";
        google_ad_width = 300;
        google_ad_height = 250;
        //-->
        </script>
        <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js">
        </script><br>When Marshall and Lily fear they will never get pregnant, they see a specialist who can hopefully help move the process along. Meanwhile, Robin starts her new job.<br><br><b>Source: </b>CBS <br>&nbsp;</td></tr>
        <tr><td><b>There are no foreign summaries for this episode:</b> <a href='/edit/shows/3918/episode_foreign_summary/?eid=1065002553&season=6'>Contribute</a></td></tr>
        <tr><td><b>English Recap Available: </b> <a href='/How_I_Met_Your_Mother/episodes/1065002553?show_recap=1'>View Here</a></td></tr></table></td>
        <td valign='top' width='250'><div align='left'>
          <img alt='How I Met Your Mother season 6 episode 13' src="http://images.tvrage.com/screencaps/20/3918/1065002553.jpg" width="248" border='0' >
        </div><div align='center'><a href='/How_I_Met_Your_Mother/episodes/1065002553?gallery=1'>6 gallery images</a></div></td></tr></table></td></tr>
    <tr>
      <td background='/_layout_v3/buttons/title.jpg' height='39' width='631' align='center'>
        <table width='100%' cellpadding='0' cellspacing='0' style='margin: 1px 1px 1px 1px;'>
          <tr>
            <td align='left' style='cursor: pointer;' onclick="SwitchHeader('ieconn3','iehide3','26')" width='90'>&nbsp;<span style='font-size: 15px; font-weight: bold; color: black; padding-left: 8px;' id='iehide3'><img src='/_layout_v3/misc/minus.gif' width='26'></span></td>
            <td align='center' style='cursor: pointer;' onclick="SwitchHeader('ieconn3','iehide3','26')" ><h5 class='nospace'>Sponsored Links</h5><a name=''></a></td>
            <td align='left' width='90' >&nbsp;</td></tr></table></td>
    </tr>
    </html>
    ```

    All I want to get is this text: "When Marshall and Lily fear they will never get pregnant, they see a specialist who can hopefully help move the process along. Meanwhile, Robin starts her new job."
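    If the extraction happens server side (the question wants to echo the text into the page, which suggests PHP), DOMDocument can parse even this malformed markup and return the row's text. A sketch, assuming the page source is already in $html and that the target row is the one with id 'ieconn2':

    ```php
    <?php
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);   // the markup above is not well-formed
    $doc->loadHTML($html);
    $row = $doc->getElementById('ieconn2');
    if ($row !== null) {
        // textContent would also include the <script> bodies, so drop them first
        foreach (iterator_to_array($row->getElementsByTagName('script')) as $s) {
            $s->parentNode->removeChild($s);
        }
        echo trim($row->textContent);
    }
    ```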

  • How can I make the storage of C++ lambda objects more efficient?

    - by Peter Ruderman
    I've been thinking about storing C++ lambdas lately. The standard advice you see on the Internet is to store the lambda in a std::function object. However, none of this advice ever considers the storage implications. It occurred to me that there must be some seriously black voodoo going on behind the scenes to make this work. Consider the following class that stores an integer value:

    ```cpp
    class Simple
    {
    public:
        Simple( int value )
        {
            puts( "Constructing simple!" );
            this->value = value;
        }

        Simple( const Simple& rhs )
        {
            puts( "Copying simple!" );
            this->value = rhs.value;
        }

        Simple( Simple&& rhs )
        {
            puts( "Moving simple!" );
            this->value = rhs.value;
        }

        ~Simple()
        {
            puts( "Destroying simple!" );
        }

        int Get() const { return this->value; }

    private:
        int value;
    };
    ```

    Now, consider this simple program:

    ```cpp
    int main()
    {
        Simple test( 5 );
        std::function<int ()> f = [test] () { return test.Get(); };
        printf( "%d\n", f() );
    }
    ```

    This is the output I would hope to see from this program:

    ```
    Constructing simple!
    Copying simple!
    Moving simple!
    Destroying simple!
    5
    Destroying simple!
    Destroying simple!
    ```

    First, we create the value test. We create a local copy on the stack for the temporary lambda object. We then move the temporary lambda object into memory allocated by std::function. We destroy the temporary lambda. We print our output. We destroy the std::function. And finally, we destroy the test object. Needless to say, this is not what I see. When I compile this on Visual C++ 2010 (release or debug mode), I get this output:

    ```
    Constructing simple!
    Copying simple!
    Copying simple!
    Copying simple!
    Copying simple!
    Destroying simple!
    Destroying simple!
    Destroying simple!
    5
    Destroying simple!
    Destroying simple!
    ```

    Holy crap, that's inefficient! Not only did the compiler fail to use my move constructor, but it generated and destroyed two apparently superfluous copies of the lambda during the assignment. So, here finally are the questions: (1) Is all this copying really necessary? (2) Is there some way to coerce the compiler into generating better code? Thanks for reading!
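    Two ways that typically avoid the extra copies, sketched against the same Simple/test code; how much they help on Visual C++ 2010 specifically is not guaranteed, since its std::function predates complete move support:

    ```cpp
    // 1) Keep the closure's own type: no type erasure, no std::function
    //    storage at all, exactly one copy of 'test' into the closure.
    auto f1 = [test]() { return test.Get(); };

    // 2) If type erasure is required, move the named closure in rather
    //    than copying it during the conversion to std::function.
    auto lam = [test]() { return test.Get(); };
    std::function<int()> f2 = std::move(lam);
    ```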

  • C++ Optimize if/else condition

    - by Heye
    I have a single line of code that consumes 25%-30% of the runtime of my application. It is a less-than comparator for a std::set (the set is implemented with a red-black tree). It is called about 180 million times within 52 seconds.

    ```cpp
    struct Entry {
        const float _cost;
        const long _id;
        // some other vars

        Entry(float cost, long id) : _cost(cost), _id(id) {
        }
    };

    template<class T>
    struct lt_entry : public binary_function<T, T, bool>
    {
        bool operator()(const T &l, const T &r) const
        {
            // Most readable shape
            if (l._cost != r._cost) {
                return r._cost < l._cost;
            } else {
                return l._id < r._id;
            }
        }
    };
    ```

    The entries should be sorted by cost and, if the cost is the same, by their id. I have many insertions for each extraction of the minimum. I thought about using Fibonacci heaps, but I have been told that they are theoretically nice but suffer from high constants and are pretty complicated to implement. And since insert is in O(log(n)), the runtime increase is nearly constant with large n. So I think it's okay to stick to the set. To improve performance, I tried to express it in different shapes:

    ```cpp
    return l._cost < r._cost || r._cost > l._cost || l._id < r._id;
    return l._cost < r._cost || (l._cost == r._cost && l._id < r._id);
    ```

    Even this:

    ```cpp
    typedef union {
        float _f;
        int _i;
    } flint;

    //...

    flint diff;
    diff._f = (l._cost - r._cost);
    return (diff._i && diff._i >> 31) || l._id < r._id;
    ```

    But the compiler seems to be smart enough already, because I haven't been able to improve the runtime. I also thought about SSE, but this problem is really not very applicable to SSE... The assembly looks somewhat like this:

    ```asm
    movss   (%rbx),%xmm1
    mov     $0x1,%r8d
    movss   0x20(%rdx),%xmm0
    ucomiss %xmm1,%xmm0
    ja      0x410600 <_ZNSt8_Rb_tree[..]+96>
    ucomiss %xmm0,%xmm1
    jp      0x4105fd <_ZNSt8_Rb_[..]_+93>
    jne     0x4105fd <_ZNSt8_Rb_[..]_+93>
    mov     0x28(%rdx),%rax
    cmp     %rax,0x8(%rbx)
    jb      0x410600 <_ZNSt8_Rb_[..]_+96>
    xor     %r8d,%r8d
    ```

    I have a very tiny bit of experience with assembly language, but not really much. I thought this would be the best (only?) place to squeeze out some performance, but is it really worth the effort? Can you see any shortcuts that could save some cycles? The platform the code will run on is Ubuntu 12 with gcc 4.6 (-std=c++0x) on a many-core Intel machine. The only libraries available are Boost, OpenMP, and TBB. I am really stuck on this one; it seems so simple yet takes that much time. I have been crunching my head for days thinking how I could improve this line... Can you give me a suggestion how to improve this part, or is it already at its best?
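    One direction worth measuring, not a drop-in answer: precompute a single 64-bit key per Entry so the hot path becomes one integer compare. The sketch below assumes _cost is a non-negative, finite float (so its IEEE-754 bit pattern orders the same way the float value does) and that _id fits in 32 bits; if either assumption fails, the trick is invalid:

    ```cpp
    #include <cstdint>
    #include <cstring>

    struct Entry {
        std::uint64_t _key;
        Entry(float cost, long id) {
            std::uint32_t bits;
            std::memcpy(&bits, &cost, sizeof bits);   // type-pun without UB
            // Invert the cost bits so a higher cost yields a smaller key:
            // the original comparator orders by descending cost, then
            // ascending id.
            _key = (std::uint64_t(~bits) << 32) | std::uint32_t(id);
        }
    };

    struct lt_entry {
        bool operator()(const Entry& l, const Entry& r) const {
            return l._key < r._key;   // one compare instead of two float tests
        }
    };
    ```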

  • onDraw() triggered but results don't show

    - by Don
    I have the following routine in a subclass of View. It calculates an array of points that make up a line, then erases the previous lines, then draws the new lines (impact refers to the width in pixels, drawn with multiple lines). The line is your basic bell curve, squeezed or stretched by variance and xFactor. Unfortunately, nothing shows on the screen. A previous version with drawPoint() and no array worked, I've verified the array contents are being loaded correctly, and I can see that my onDraw() is being triggered. Any ideas why it might not be drawn? Thanks in advance!

    ```java
    protected void drawNewLine(int maxx, int maxy, Canvas canvas, int impact,
                               double variance, double xFactor, int color) {
        // impact = 2 to 8; xFactor between 4 and 20; variance between 0.2 and 5
        double x = 0;
        double y = 0;
        int cx = maxx / 2;
        int cy = maxy / 2;
        int mu = cx;
        int index = 0;
        points[maxx << 1][1] = points[maxx << 1][0];
        for (x = 0; x < maxx; x++) {
            points[index][1] = points[index][0];
            points[index][0] = (float) x;
            Log.i(DEBUG_TAG, "x: " + x);
            index++;
            double root = 1.0 / (Math.sqrt(2 * Math.PI * variance));
            double exponent = -1.0 * (Math.pow(((x - mu) / maxx * xFactor), 2) / (2 * variance));
            double ePow = Math.exp(exponent);
            y = Math.round(cy * root * ePow);
            points[index][1] = points[index][0];
            points[index][0] = (float) (maxy - y - OFFSET);
            index++;
        }
        points[maxx << 1][0] = (float) impact;
        for (int line = 0; line < points[maxx << 1][1]; line++) {
            for (int pt = 0; pt < (maxx << 1); pt++) {
                pointsToPaint[pt] = points[pt][1];
            }
            for (int skip = 1; skip < (maxx << 1); skip = skip + 2)
                pointsToPaint[skip] = pointsToPaint[skip] + line;
            myLinePaint.setColor(Color.BLACK);
            canvas.drawLines(pointsToPaint, bLinePaint); // draw over old lines w/blk
        }
        for (int line = 0; line < points[maxx << 1][0]; line++) {
            for (int pt = 0; pt < maxx << 1; pt++) {
                pointsToPaint[pt] = points[pt][0];
            }
            for (int skip = 1; skip < maxx << 1; skip = skip + 2)
                pointsToPaint[skip] = pointsToPaint[skip] + line;
            myLinePaint.setColor(color);
            canvas.drawLines(pointsToPaint, myLinePaint); // new color
        }
    }
    ```

    Update: I replaced the drawLines() with drawPoint() in a loop; still no joy:

    ```java
    for (int p = 0; p < pointsToPaint.length; p = p + 2) {
        Log.i(DEBUG_TAG, "x " + pointsToPaint[p] + " y " + pointsToPaint[p + 1]);
        canvas.drawPoint(pointsToPaint[p], pointsToPaint[p + 1], myLinePaint);
    }
    /// canvas.drawLines(pointsToPaint, myLinePaint);
    ```
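    One detail worth checking against this code: Canvas.drawLines() does not draw a connected poly-line; it consumes the array as {x0,y0,x1,y1, x2,y2,x3,y3, ...}, four floats per independent segment, and ignores a trailing remainder. Building the array as consecutive (x, y) pairs therefore pairs the wrong coordinates into segments. A sketch of the conversion (xs, ys, and count are hypothetical stand-ins for the computed curve points):

    ```java
    // Duplicate each interior point so every segment gets its own 4 floats.
    float[] pts = new float[(count - 1) * 4];
    for (int i = 0; i < count - 1; i++) {
        pts[i * 4]     = xs[i];      // segment start
        pts[i * 4 + 1] = ys[i];
        pts[i * 4 + 2] = xs[i + 1];  // segment end
        pts[i * 4 + 3] = ys[i + 1];
    }
    canvas.drawLines(pts, myLinePaint);
    ```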

  • Path to background in servlet

    - by kapil chhattani
    The line below is the element of my HTML form which renders the image sent by the servlet shown further below:

    ```html
    <img style="margin-left:91px; margin-top:-6px;" class="image" src="http://www.abcd.com/captchaServlet">
    ```

    I generate a captcha code using the following Java code:

    ```java
    public class captchaServlet extends HttpServlet {

        protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            int width = 150;
            int height = 50;
            int charsToPrint = 6;
            String elegibleChars = "ABCDEFGHJKLMPQRSTUVWXYabcdefhjkmnpqrstuvwxy1234567890";
            char[] chars = elegibleChars.toCharArray();
            StringBuffer finalString = new StringBuffer();
            for (int i = 0; i < charsToPrint; i++) {
                double randomValue = Math.random();
                int randomIndex = (int) Math.round(randomValue * (chars.length - 1));
                char characterToShow = chars[randomIndex];
                finalString.append(characterToShow);
            }
            System.out.println(finalString);

            BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            Graphics2D g2d = bufferedImage.createGraphics();
            Font font = new Font("Georgia", Font.BOLD, 18);
            g2d.setFont(font);
            RenderingHints rh = new RenderingHints(RenderingHints.KEY_ANTIALIASING,
                    RenderingHints.VALUE_ANTIALIAS_ON);
            rh.put(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
            g2d.setRenderingHints(rh);
            GradientPaint gp = new GradientPaint(0, 0, Color.BLUE, 0, height / 2, Color.black, true);
            g2d.setPaint(gp);
            g2d.fillRect(0, 0, width, height);
            g2d.setColor(new Color(255, 255, 0));
            Random r = new Random();
            int index = Math.abs(r.nextInt()) % 5;
            char[] data = new String(finalString).toCharArray();
            String captcha = String.copyValueOf(data);
            int x = 0;
            int y = 0;
            for (int i = 0; i < data.length; i++) {
                x += 10 + (Math.abs(r.nextInt()) % 15);
                y = 20 + Math.abs(r.nextInt()) % 20;
                g2d.drawChars(data, i, 1, x, y);
            }
            g2d.dispose();
            response.setContentType("image/png");
            OutputStream os = response.getOutputStream();
            ImageIO.write(bufferedImage, "png", os);
            os.close();
        }

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            processRequest(request, response);
        }

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            processRequest(request, response);
        }
    }
    ```

    In the code above, the background is generated with the setPaint method, I am guessing. I want the background to be some image from my local machine, whose URL I should be able to specify, like:

    ```java
    URL url = this.getClass().getResource("Desktop/images.jpg");
    BufferedImage bufferedImage = ImageIO.read(url);
    ```

    I am just writing the above two lines to help the reader understand the issue better; I don't want to use exactly these commands. All I want is for the background of the generated captcha code to be an image of my choice.
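    A sketch of the swap: load the image once and draw it scaled across the captcha instead of filling with the gradient. The path here is hypothetical; in a servlet it is more robust to resolve resources through the servlet context (or the classpath) than through an absolute desktop path:

    ```java
    // Replace the GradientPaint/fillRect pair with an image background.
    BufferedImage background = ImageIO.read(
            getServletContext().getResourceAsStream("/images/captcha-bg.jpg"));
    // Scale the image to cover the whole captcha area, whatever its size.
    g2d.drawImage(background, 0, 0, width, height, null);
    // ...then draw the characters on top exactly as before.
    ```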

  • Windows opaque UserControl not refreshing any graphical changes made on it

    - by Debajyoti Das
    I have created a Windows UserControl. It paints a grid (i.e., vertical and horizontal lines) using Graphics. The user can change each cell's height and width, and the grid is refreshed accordingly. Overriding the OnPaint event, I have created the grid. I used SetStyle(ControlStyles.Opaque, true) to make it transparent. I used this control on a form, and from there I change the values of the cell height and width, but due to Opaque the new grid overlaps the previous one, making it clumsy. How do I resolve this?

    UserControl code:

    ```csharp
    public partial class Grid : UserControl
    {
        public Grid()
        {
            InitializeComponent();
            SetStyle(ControlStyles.Opaque, true);
        }

        private float _CellWidth = 10, _CellHeight = 10;
        private Color _GridColor = Color.Black;

        public float CellWidth
        {
            get { return this._CellWidth; }
            set { this._CellWidth = value; }
        }

        public float CellHeight
        {
            get { return this._CellHeight; }
            set { this._CellHeight = value; }
        }

        public Color GridColor
        {
            get { return this._GridColor; }
            set { this._GridColor = value; }
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            Graphics g;
            float iHeight = this.Height;
            float iWidth = this.Width;
            g = e.Graphics;
            Pen myPen = new Pen(GridColor);
            myPen.Width = 1;
            if (this.CellWidth > 0 && this.CellHeight > 0)
            {
                for (float X = 0; X <= iWidth; X += this.CellWidth)
                {
                    g.DrawLine(myPen, X, 0, X, iHeight);
                }
                for (float Y = 0; Y <= iHeight; Y += this.CellHeight)
                {
                    g.DrawLine(myPen, 0, Y, iWidth, Y);
                }
            }
        }

        public override void Refresh()
        {
            base.ResumeLayout(true);
            base.Refresh();
            ResumeLayout(true);
        }
    }
    ```

    Form code:

    ```csharp
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void btnBrowse_Click(object sender, EventArgs e)
        {
            try
            {
                if (ofdImage.ShowDialog() == System.Windows.Forms.DialogResult.OK)
                {
                    pbImage.Image = Image.FromFile(ofdImage.FileName);
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void btnShowGrid_Click(object sender, EventArgs e)
        {
            if (grid1.Visible)
            {
                grid1.Visible = false;
                btnShowGrid.Text = "Show";
            }
            else
            {
                grid1.Visible = true;
                btnShowGrid.Text = "Hide";
            }
        }

        private void btnGridCellMaximize_Click(object sender, EventArgs e)
        {
            grid1.CellHeight += 1;
            grid1.CellWidth += 1;
            grid1.Refresh();
        }

        private void btnGridCellMinimize_Click(object sender, EventArgs e)
        {
            grid1.CellHeight -= 1;
            grid1.CellWidth -= 1;
            grid1.Refresh();
        }
    }
    ```
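    The likely mechanism: ControlStyles.Opaque tells Windows Forms the control paints its entire surface itself, so the background is never erased between paints, and every OnPaint draws a new grid on top of the old one. A sketch of two changes that address it -- clear the surface at the start of OnPaint, and invalidate whenever a cell dimension changes:

    ```csharp
    protected override void OnPaint(PaintEventArgs e)
    {
        // With Opaque set, nothing erases the old grid; do it ourselves.
        // (Using the parent's BackColor approximates transparency; a real
        // transparent control would use SupportsTransparentBackColor.)
        e.Graphics.Clear(Parent != null ? Parent.BackColor : BackColor);
        // ...then draw the grid lines as before...
    }

    public float CellWidth
    {
        get { return _CellWidth; }
        set { _CellWidth = value; Invalidate(); }  // request a clean repaint
    }
    ```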

  • can't find what's wrong with my code :(

    - by blood
    The point of my code is that when I press F1 it scans 500 pixels down and 500 pixels across and puts them in an array (it just captures a 500-by-500 box of the screen). Then, when I hit End, it clicks only on the color black, or whatever I set it to. Anyway, it has been doing odd things and I can't find out why:

    ```cpp
    #include <iostream>
    #include <windows.h>
    using namespace std;

    COLORREF rgb[499][499];
    HDC hDC = GetDC(HWND_DESKTOP);
    POINT main_coner;
    BYTE rVal;
    BYTE gVal;
    BYTE bVal;
    int red;
    int green;
    int blue;
    int ff = 0;

    int main()
    {
        for (;;)
        {
            if (GetAsyncKeyState(VK_F1))
            {
                cout << "started";
                int a1 = 0;
                int a2 = 0;
                GetCursorPos(&main_coner);
                int x = main_coner.x;
                int y = main_coner.y;
                for (;;)
                {
                    //cout << a1 << "___" << a2 << "\n";
                    rgb[a1][a2] = GetPixel(hDC, x, y);
                    a1++;
                    x++;
                    if (x > main_coner.x + 499)
                    {
                        y++;
                        x = main_coner.x;
                        a1 = 0;
                        a2++;
                    }
                    if (y > main_coner.y + 499)
                    {
                        ff = 1;
                        break;
                    }
                }
                cout << "done";
                break;
            }
            if (ff == 1)
                break;
        }

        for (;;)
        {
            if (GetAsyncKeyState(VK_END))
            {
                GetCursorPos(&main_coner);
                int x = main_coner.x;
                int y = main_coner.y;
                int a1 = -1;
                int a2 = -1;
                for (;;)
                {
                    x++;
                    a1++;
                    rVal = GetRValue(rgb[a1][a2]);
                    gVal = GetGValue(rgb[a1][a2]);
                    bVal = GetBValue(rgb[a1][a2]);
                    red = (int)rVal;    // get the colors into __int8
                    green = (int)gVal;  // get the colors into __int8
                    blue = (int)bVal;   // get the colors into __int8
                    if (red == 0 && green == 0 && blue == 0)
                    {
                        SetCursorPos(main_coner.x + x, main_coner.y + y);
                        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
                        Sleep(10);
                        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
                        Sleep(100);
                    }
                    if (x > main_coner.x + 499)
                    {
                        a1 = 0;
                        a2++;
                    }
                    if (y > main_coner.y + 499)
                    {
                        Sleep(100000000000);
                        break;
                    }
                    if (GetAsyncKeyState(VK_CONTROL))
                    {
                        Sleep(100000);
                        break;
                    }
                }
            }
        }

        for (;;)
        {
            if (GetAsyncKeyState(VK_END))
            {
                break;
            }
        }
        return 0;
    }
    ```

    Can anyone see what's wrong with my code? :(
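    A few concrete problems are visible in the code itself; a sketch of the capture loop with the indexing fixed (the behavioral intent is unchanged):

    ```cpp
    // 1) The loops produce indices 0..499 (500 values per axis), so the
    //    array must be [500][500]; rgb[499][499] is overrun on the last
    //    row and column.
    // 2) In the replay loop, a1 and a2 start at -1 and rgb[a1][a2] is read
    //    while a2 is still -1 (and y is never advanced at all).
    // 3) Sleep(100000000000) overflows Sleep's DWORD parameter.
    COLORREF rgb[500][500];

    for (int dy = 0; dy < 500; ++dy)
        for (int dx = 0; dx < 500; ++dx)
            rgb[dx][dy] = GetPixel(hDC, x + dx, y + dy);
    ```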

  • How do I delete files in a class?

    - by user3682906
    I want to have CRUD management for my gallery, but I have no idea how. I have searched the Internet but did not find what I was looking for. This is my class:

    ```php
    class Gallery {
        public function renderImages($map) {
            $images = "";
            foreach (glob($map . "*.{jpeg,jpg,png,gif}", GLOB_BRACE) as $image) {
                $images .= "<a href='" . $image . "' data-lightbox='afbeelding'>"
                         . "<img src='" . $image . "' class='thumbnail'></img></a>";
            }
            return $images;
        }
    }
    ```

    And this is my gallery:

    ```php
    <div class="gallery">
    <center>
    <a href="index.php">Back:</a>
    <style>
    body {
        margin:0;
        padding:0;
        background:#efefef;
        font-family: "Helvetica Neue", helvetica, arial, sans-serif;
        color: black;
        text-align:center; /* used to center div in IE */
    }
    </style>
    <?php
    $gallery = new Gallery();
    echo $gallery->renderImages("../images/");
    $connection = new DatabaseConnection();
    $render = new RenderData();
    $page = 1;
    if (isset($_GET['page'])) {
        $check = new CheckData();
        if ($check->PageCheck($_GET['page'])) {
            $page = $_GET['page'];
        }
    }
    ?>
    <table>
      <div id="Intro">
        <tr><td>Hieronder kan je uploaden.</td></tr>
      </div>
      <tr><td>Max. 500000000Bytes</td></tr>
      <tr>
        <form action="pages/upload_file.php" method="post" enctype="multipart/form-data">
        <td><label for="file">Filename:</label></td>
      </tr>
      <tr><td><input type="file" name="file" id="file"></td></tr>
      <tr><td><input type="submit" name="submit" value="Submit"></td></tr>
      <tr><td><input type="submit" name="delete" value="Delete"></td></tr>
      </form>
    </table>
    ```

    How do I build CRUD management on top of this? I want to select a photo and then delete it from my computer. I hope you can help me out...
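    For the delete side of CRUD, a minimal sketch of a handler (for example pages/delete_file.php); the form field name 'file' is hypothetical -- the gallery would need to submit which image to remove, e.g. one small form or link per thumbnail. basename() keeps the request from escaping the images folder:

    ```php
    <?php
    if (isset($_POST['delete'], $_POST['file'])) {
        $path = '../images/' . basename($_POST['file']);
        if (is_file($path)) {
            unlink($path);              // remove the image from disk
        }
        header('Location: ../index.php');
        exit;
    }
    ```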

  • Invalid Cast Exception with Update Panel

    - by user1593175
    On button click:

    ```vb
    Protected Sub btnSubmit_Click(sender As Object, e As System.EventArgs) Handles btnSubmit.Click
        MsgBox("INSIDE")
        If SocialAuthUser.IsLoggedIn Then
            Dim accountId As Integer = BLL.getAccIDFromSocialAuthSession
            Dim AlbumID As Integer = BLL.createAndReturnNewAlbumId(txtStoryTitle.Text.Trim, "")
            Dim URL As String = BLL.getAlbumPicUrl(txtStoryTitle.Text.Trim)
            Dim dt As New DataTable
            dt.Columns.Add("PictureID")
            dt.Columns.Add("AccountID")
            dt.Columns.Add("AlbumID")
            dt.Columns.Add("URL")
            dt.Columns.Add("Thumbnail")
            dt.Columns.Add("Description")
            dt.Columns.Add("AlbumCover")
            dt.Columns.Add("Tags")
            dt.Columns.Add("Votes")
            dt.Columns.Add("Abused")
            dt.Columns.Add("isActive")
            Dim Row As DataRow
            Dim uniqueFileName As String = ""
            If Session("ID") Is Nothing Then
                lblMessage.Text = "You don't seem to have uploaded any pictures."
                Exit Sub
            Else
                Dim FileCount As Integer = Request.Form(Request.Form.Count - 2) ' line highlighted by the asker
                Dim FileName, TargetName As String
                Try
                    Dim Path As String = Server.MapPath(BLL.getAlbumPicUrl(txtStoryTitle.Text.Trim))
                    If Not IO.Directory.Exists(Path) Then
                        IO.Directory.CreateDirectory(Path)
                    End If
                    Dim StartIndex As Integer
                    Dim PicCount As Integer
                    For i As Integer = 0 To Request.Form.Count - 1
                        If Request.Form(i).ToLower.Contains("jpg") Or Request.Form(i).ToLower.Contains("gif") Or Request.Form(i).ToLower.Contains("png") Then
                            StartIndex = i + 1
                            Exit For
                        End If
                    Next
                    For i As Integer = StartIndex To Request.Form.Count - 4 Step 3
                        FileName = Request.Form(i)
                        '## If part here is not kaam ka..but still using it for worst case scenario
                        If IO.File.Exists(Path & FileName) Then
                            TargetName = Path & FileName
                            'MsgBox(TargetName & "--- 1")
                            Dim j As Integer = 1
                            While IO.File.Exists(TargetName)
                                TargetName = Path & IO.Path.GetFileNameWithoutExtension(FileName) & "(" & j & ")" & IO.Path.GetExtension(FileName)
                                j += 1
                            End While
                        Else
                            uniqueFileName = Guid.NewGuid.ToString & "__" & FileName
                            TargetName = Path & uniqueFileName
                        End If
                        IO.File.Move(Server.MapPath("~/TempUploads/" & Session("ID") & "/" & FileName), TargetName)
                        PicCount += 1
                        Row = dt.NewRow()
                        Row(1) = accountId
                        Row(2) = AlbumID
                        Row(3) = URL & uniqueFileName
                        Row(4) = ""
                        Row(5) = "No Desc"
                        Row(6) = "False"
                        Row(7) = ""
                        Row(8) = "0"
                        Row(9) = "0"
                        Row(10) = "True"
                        dt.Rows.Add(Row)
                    Next
                    If BLL.insertImagesIntoAlbum(dt) Then
                        lblMessage.Text = PicCount & IIf(PicCount = 1, " Picture", " Pictures") & " Saved!"
                        lblMessage.ForeColor = Drawing.Color.Black
                        Dim db As SqlDatabase = Connection.connection
                        Using cmd As DbCommand = db.GetSqlStringCommand("SELECT PictureID,URL FROM AlbumPictures WHERE AlbumID=@AlbumID AND AccountID=@AccountID")
                            db.AddInParameter(cmd, "AlbumID", Data.DbType.Int32, AlbumID)
                            db.AddInParameter(cmd, "AccountID", Data.DbType.Int32, accountId)
                            Using ds As DataSet = db.ExecuteDataSet(cmd)
                                If ds.Tables(0).Rows.Count > 0 Then
                                    ListView1.DataSource = ds.Tables(0)
                                    ListView1.DataBind()
                                Else
                                    lblMessage.Text = "No Such Album Exists."
                                End If
                            End Using
                        End Using
                        'WebNavigator.GoToResponseRedirect(WebNavigator.URLFor.ReturnUrl("~/Memories/SortImages.aspx?id=" & AlbumID))
                    Else
                        'TODO: we'll show some error msg
                    End If
                Catch ex As Exception
                    MsgBox(ex.Message)
                    lblMessage.Text = "Oo Poop!!"
                End Try
            End If
        Else
            WebNavigator.GoToResponseRedirect(WebNavigator.URLFor.LoginWithReturnUrl("~/Memories/CreateAlbum.aspx"))
            Exit Sub
        End If
    End Sub
    ```

    The above code works fine. I added an UpdatePanel to the page to avoid a postback, but when I register the button click as a trigger:

    ```aspx
    <Triggers>
        <asp:AsyncPostBackTrigger ControlID="btnSubmit" />
    </Triggers>
    ```

    I get an InvalidCastException. This happens only when the button click is added as a trigger to the update panel.
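    A possible explanation, not confirmed by the question: during an asynchronous postback the ScriptManager/UpdatePanel machinery adds its own hidden form fields, so positional reads like Request.Form(Request.Form.Count - 2) land on a non-numeric value, and the implicit conversion to Integer throws an InvalidCastException (file uploads also generally require a full postback rather than an async one). A defensive sketch, where the field name "fileCount" is hypothetical:

    ```vb
    ' Read the count by name and parse it explicitly instead of relying on
    ' its position in Request.Form, which shifts between sync and async posts.
    Dim fileCount As Integer
    If Not Integer.TryParse(Request.Form("fileCount"), fileCount) Then
        lblMessage.Text = "Could not read the uploaded file count."
        Exit Sub
    End If
    ```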

  • Making CSS render in a similar way on Firefox 3.0.15 / IE 6.0 & 7.0

    - by R.R
    The following CSS renders differently depending on the browser (mainly in Firefox). In Firefox, border-left-style: dashed does not seem to take effect as desired, and black lines are shown instead. Font sizing is another issue: I use em units because they behave relatively better across browsers; when I used pixels it was a mess, but I am not sure whether em is actually better. I am not a CSS expert, and working with CSS makes me feel worse than dealing with a second-hand car dealer.

    ```css
    .Main {
        font-family: Arial, "Trebuchet MS", Sans-Serif;
        font-size: 0.8em;
        border:0px;
    }
    .Header {
        font-family: Arial, "Trebuchet MS", Sans-Serif;
        font-size: 1.2em;
        color:#666;
        background : url("../images/header.jpg") repeat-x top left;
        padding-left: 10px;
        padding:4px;
        text-transform:uppercase;
        border:1px;
        border-left-style:dashed;
        border-bottom-width:thin;
        border-collapse:collapse
    }
    .Footer {
        color:#666;
        font-family: Arial, "Trebuchet MS", Sans-Serif;
        font-size: 0.7em;
    }
    .Footer td {
        border-style:none;
        text-align:center;
    }
    .Footer span {
        color:#666;
        font-family: Arial, "Trebuchet MS", Sans-Serif;
        font-size: 0.7em;
        font-weight:bold;
        text-decoration:underline;
        border-style:none;
    }
    .Footer a {
        font-family: Arial, "Trebuchet MS", Sans-Serif;
        font-size: 0.7em;
        color:#666;
    }
    .Results-Item td {
        margin-left: 10px;
        vertical-align:middle;
        color:#666;
        background-color: white;
        font-size: 1.2em;
        padding:4px;
        font-family: Arial, "Trebuchet MS", Sans-Serif;
        padding-left: 10px;
        line-height: 20px;
        border:1px;
        border-left-style:dashed;
        border-bottom-width:thin;
        border-collapse:collapse;
    }
    .Results-AltItem td {
        margin-left: 10px;
        vertical-align:middle;
        color:#666;
        font-size: 1.2em;
        /* _font-size: 1.2em; IE6 hack */
        padding:4px;
        font-family: Arial, "Trebuchet MS", Sans-Serif;
        background-color: #ccc;
        padding-left: 10px;
        line-height: 20px;
        border:1px;
        border:1px;
        border-left-style:dashed;
        border-bottom-width:thin;
        border-collapse:collapse;
    }
    Amount {
        text-align:right;
    }
    ```
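    On the dark lines specifically: the shorthand `border: 1px` sets only the width; it resets border-style to none and border-color to the element's current text color, so what ends up rendered depends on which longhands each browser applies afterwards. A sketch that declares all three longhands explicitly (the colors are guesses to match the palette above):

    ```css
    .Header {
        border: none;                     /* start from a clean slate */
        border-left: 1px dashed #666;     /* the dashed separator */
        border-bottom: thin solid #666;   /* the thin rule under the row */
    }
    ```

    Note also that border-collapse applies to tables, not to individual cells or headers, so those declarations can simply be dropped here.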

  • C# - Must declare the scalar variable "@ms_id" - Error

    - by user1075106
    I'm writing a web app that keeps track of deadlines. With this app you have to be able to update records that are saved in a SQL DB. However, I'm having a problem with my update in my aspx file.

    ```aspx
    <asp:GridView ID="gv_editMilestones" runat="server" DataSourceID="sql_ds_milestones"
        CellPadding="4" ForeColor="#333333" GridLines="None" Font-Size="Small"
        AutoGenerateColumns="False" DataKeyNames="id" Visible="false"
        onrowupdated="gv_editMilestones_RowUpdated"
        onrowupdating="gv_editMilestones_RowUpdating"
        onrowediting="gv_editMilestones_RowEditing">
        <RowStyle BackColor="#F7F6F3" ForeColor="#333333" />
        <Columns>
            <asp:CommandField ShowEditButton="True" />
            <asp:BoundField DataField="id" HeaderText="id" SortExpression="id" ReadOnly="True" Visible="false"/>
            <asp:BoundField DataField="ms_id" HeaderText="ms_id" SortExpression="ms_id" ReadOnly="True"/>
            <asp:BoundField DataField="ms_description" HeaderText="ms_description" SortExpression="ms_description"/>
            <%-- <asp:BoundField DataField="ms_resp_team" HeaderText="ms_resp_team" SortExpression="ms_resp_team"/> --%>
            <asp:TemplateField HeaderText="ms_resp_team" SortExpression="ms_resp_team">
                <ItemTemplate>
                    <%# Eval("ms_resp_team") %>
                </ItemTemplate>
                <EditItemTemplate>
                    <asp:DropDownList ID="DDL_ms_resp_team" runat="server"
                        DataSourceID="sql_ds_ms_resp_team"
                        DataTextField="team_name" DataValueField="id">
                        <%-- SelectedValue='<%# Bind("ms_resp_team") %>' --%>
                    </asp:DropDownList>
                </EditItemTemplate>
            </asp:TemplateField>
            <asp:BoundField DataField="ms_focal_point" HeaderText="ms_focal_point" SortExpression="ms_focal_point" />
            <asp:BoundField DataField="ms_exp_date" HeaderText="ms_exp_date" SortExpression="ms_exp_date" DataFormatString="{0:d}"/>
            <asp:BoundField DataField="ms_deal" HeaderText="ms_deal" SortExpression="ms_deal" ReadOnly="True"/>
            <asp:CheckBoxField DataField="ms_active" HeaderText="ms_active" SortExpression="ms_active"/>
        </Columns>
        <FooterStyle BackColor="#CCCC99" />
        <PagerStyle BackColor="#F7F7DE" ForeColor="Black" HorizontalAlign="Right" />
        <SelectedRowStyle BackColor="#CE5D5A" Font-Bold="True" ForeColor="White" />
        <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
        <AlternatingRowStyle BackColor="White" />
        <EditRowStyle BackColor="#999999" />
    </asp:GridView>

    <asp:SqlDataSource ID="sql_ds_milestones" runat="server"
        ConnectionString="<%$ ConnectionStrings:testServer %>"
        SelectCommand="SELECT [id], [ms_id], [ms_description],
                              (SELECT [team_name] FROM [NSBP].[dbo].[tbl_teams] as teams
                                WHERE milestones.[ms_resp_team] = teams.[id]) as 'ms_resp_team',
                              [ms_focal_point], [ms_exp_date],
                              (SELECT [deal] FROM [NSBP].[dbo].[tbl_deals] as deals
                                WHERE milestones.[ms_deal] = deals.[id]) as 'ms_deal',
                              [ms_active]
                         FROM [NSBP].[dbo].[tbl_milestones] as milestones"
        UpdateCommand="UPDATE [NSBP].[dbo].[tbl_milestones]
                          SET [ms_description] = @ms_description,
                              [ms_focal_point] = @ms_focal_point,
                              [ms_active] = @ms_active
                        WHERE [ms_id] = @ms_id">
        <UpdateParameters>
            <asp:Parameter Name="ms_description" Type="String" />
            <%-- <asp:Parameter Name="ms_resp_team" Type="String" /> --%>
            <asp:Parameter Name="ms_focal_point" Type="String" />
            <asp:Parameter Name="ms_exp_date" Type="DateTime" />
            <asp:Parameter Name="ms_active" Type="Boolean" />
            <%-- <asp:Parameter Name="ms_id" Type="String" /> --%>
        </UpdateParameters>
    </asp:SqlDataSource>
    ```

    You can see my complete GridView structure and the data source bound to that GridView. Nothing is written in my onrowupdating function in the code-behind file. Thanks in advance.
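    The immediate cause is visible in the markup: the UpdateCommand references @ms_id, but the ms_id BoundField is ReadOnly (read-only fields are not posted back) and the matching UpdateParameter is commented out, so the parameter never receives a value. Two sketches of a fix:

    ```aspx
    <%-- Option 1: key the update on the column already in DataKeyNames="id";
         the GridView passes key values to the data source automatically. --%>
    UpdateCommand="UPDATE [NSBP].[dbo].[tbl_milestones]
                      SET [ms_description] = @ms_description,
                          [ms_focal_point] = @ms_focal_point,
                          [ms_active]      = @ms_active
                    WHERE [id] = @id"

    <%-- Option 2: keep WHERE [ms_id] = @ms_id, but carry ms_id with the row
         as a key so its value is available at update time. --%>
    DataKeyNames="id,ms_id"
    ```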

  • HTML5 Anti-Aliasing Browser Disable

    - by Tappa Tappa
    I am forced to consider writing a library to handle the fundamental basics of drawing lines, thick lines, circles, squares, etc. on an HTML5 canvas, because I can't disable a feature embedded in the browser's rendering of the core canvas algorithms. Am I forced to build the HTML5 canvas rendering process from the ground up? If I am, who's in with me to do this? Who wants to change the world?

    Imagine a simple drawing application written in HTML5... You draw a shape -- a closed shape like a rudimentary circle, freehand, more like an onion than a circle (well, that's what mine would look like!). Then imagine selecting a paint-bucket icon, clicking inside the shape you drew, and expecting it to be filled with a color of your choice. Imagine your surprise when you selected "Paint Bucket", clicked in the middle of your shape, and it filled your shape with color... but not quite. Hang on, this isn't right! On the inside of the edge of the shape you drew there is a blur between the background color, your fill color, and the edge color... the fill seems to be flawed. You wanted a straightforward "Paint Bucket"/"Fill"... you wanted to draw a shape and fill it with a color, no fuss: fill the whole damned inside of your shape with the color you choose.

    Your web browser has decided that when you draw the lines to define your shape, they will be anti-aliased. If you draw a black line for your shape... well, the browser will draw grey pixels along the edges in places, to make it look like a "better" line. Yeah, a "better" line that wrecks the paint/flood-fill process. How much does it cost to pay off the browser developers to expose a property that disables their anti-aliased rendering? Disabling it would save milliseconds for their rendering engine, surely! Bah, I really don't want to have to build my own canvas rendering engine using Bresenham's line algorithm... What can be done? How can this be changed? Do I need to start a petition aimed at the W3C? Will you include your name if you are interested?

    UPDATED

    ```javascript
    function DrawLine(objContext, FromX, FromY, ToX, ToY) {
        var dx = Math.abs(ToX - FromX);
        var dy = Math.abs(ToY - FromY);
        var sx = (FromX < ToX) ? 1 : -1;
        var sy = (FromY < ToY) ? 1 : -1;
        var err = dx - dy;
        var CurX, CurY;
        CurX = FromX;
        CurY = FromY;
        while (true) {
            objContext.fillRect(CurX, CurY, objContext.lineWidth, objContext.lineWidth);
            if ((CurX == ToX) && (CurY == ToY)) break;
            var e2 = 2 * err;
            if (e2 > -dy) {
                err -= dy;
                CurX += sx;
            }
            if (e2 < dx) {
                err += dx;
                CurY += sy;
            }
        }
    }
    ```
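    Since stroke anti-aliasing cannot be switched off, the usual alternative is to make the fill tolerate the fringe instead: run the bucket fill on the raw pixels with a small per-channel color tolerance so the grey edge pixels count as boundary. A self-contained sketch (4-way, stack-based):

    ```javascript
    function floodFill(ctx, startX, startY, fill, tolerance) {
        var width = ctx.canvas.width, height = ctx.canvas.height;
        var img = ctx.getImageData(0, 0, width, height);
        var data = img.data;
        var start = (startY * width + startX) * 4;
        var target = [data[start], data[start + 1], data[start + 2]];
        var seen = new Uint8Array(width * height);
        var stack = [[startX, startY]];
        function close(i) {   // is this pixel still "background"?
            return Math.abs(data[i]     - target[0]) <= tolerance &&
                   Math.abs(data[i + 1] - target[1]) <= tolerance &&
                   Math.abs(data[i + 2] - target[2]) <= tolerance;
        }
        while (stack.length) {
            var p = stack.pop(), x = p[0], y = p[1];
            if (x < 0 || y < 0 || x >= width || y >= height) continue;
            var n = y * width + x;
            if (seen[n] || !close(n * 4)) continue;  // visited, or a boundary pixel
            seen[n] = 1;
            data[n * 4]     = fill[0];
            data[n * 4 + 1] = fill[1];
            data[n * 4 + 2] = fill[2];
            data[n * 4 + 3] = 255;
            stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
        }
        ctx.putImageData(img, 0, 0);
    }
    ```

    With a tolerance around 30-60, the anti-aliased grey ring is crossed by the fill rather than leaving a halo inside the shape.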

  • Android: how to match text with images by connecting them with lines

    - by Shirisha
    I am trying to create an app that matches text with the appropriate image by drawing a connecting line. I want to create an app exactly like the one shown in the image below. Can anyone please give me an idea?

    This is my main class:

    ```java
    public class MatchActivity extends Activity {

        ArrayAdapter<String> listadapter;
        float x1;
        float y1;
        float x2;
        float y2;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            String[] s1 = { "smiley1", "smiley2", "smiley3" };
            ListView lv = (ListView) findViewById(R.id.text_list);
            ArrayList<String> list = new ArrayList<String>();
            list.addAll(Arrays.asList(s1));
            listadapter = new ArrayAdapter<String>(this, R.layout.rowtext, s1);
            lv.setAdapter(listadapter);

            GridView gv = (GridView) findViewById(R.id.image_list);
            gv.setAdapter(new ImageAdapter(this));

            lv.setOnItemClickListener(new OnItemClickListener() {
                public void onItemClick(AdapterView<?> arg0, View v, int arg2, long arg3) {
                    x1 = v.getX();
                    y1 = v.getY();
                    Log.d("list", "text positions x1:" + x1 + " y1:" + y1);
                }
            });

            gv.setOnItemClickListener(new OnItemClickListener() {
                public void onItemClick(AdapterView<?> arg0, View v, int arg2, long arg3) {
                    DrawView draw = new DrawView(MatchActivity.this);
                    x2 = v.getX();
                    y2 = v.getY();
                    draw.position1.add(x1);
                    draw.position1.add(y1);
                    draw.position2.add(x2);
                    draw.position2.add(y2);
                    Log.d("list", "image positions x2:" + x2 + " y2:" + y2);
                    LinearLayout ll = (LinearLayout) findViewById(R.id.draw_line);
                    ll.addView(draw);
                }
            });
        }
    }
    ```

    This is my drawing class that draws a line:

    ```java
    public class DrawView extends View {

        Paint paint = new Paint();
        private List<Float> position1 = new ArrayList<Float>();
        private List<Float> position2 = new ArrayList<Float>();

        public DrawView(Context context) {
            super(context);
            invalidate();
            Log.d("drawview", "In DrawView class position1:" + position1 + " position2:" + position2);
        }

        @Override
        public void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            Log.d("on draw", "IN onDraw() position1:" + position1 + " position2:" + position2);
            assert position1.size() == position2.size();
            for (int i = 0; i < position1.size(); i += 2) {
                float x1 = position1.get(i);
                float y1 = position1.get(i + 1);
                float x2 = position2.get(i);
                float y2 = position2.get(i + 1);
                paint.setColor(Color.BLACK);
                paint.setStrokeWidth(3);
                canvas.drawLine(x1, y1, x2, y2, paint);
            }
        }
    }
    ```

    Thanks in advance.
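    One structural issue stands out: every image click constructs a brand-new DrawView whose point lists start empty, so previously drawn lines are lost, and the private lists are populated from outside the class. A sketch of keeping a single overlay view that accumulates lines and repaints itself (names are illustrative):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.view.View;

    public class DrawView extends View {
        private final List<float[]> lines = new ArrayList<float[]>();
        private final Paint paint = new Paint();

        public DrawView(Context context) {
            super(context);
            paint.setColor(Color.BLACK);
            paint.setStrokeWidth(3);
        }

        // Called once per matched pair; the view schedules its own redraw.
        public void addLine(float x1, float y1, float x2, float y2) {
            lines.add(new float[] { x1, y1, x2, y2 });
            invalidate();
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            for (float[] l : lines) {
                canvas.drawLine(l[0], l[1], l[2], l[3], paint);
            }
        }
    }
    ```

    The activity would then add one DrawView to R.id.draw_line once (in onCreate) and call addLine(x1, y1, x2, y2) from the grid's click handler.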

  • HTML/CSS - No 100% height on div in IE

    - by Jordan Rynard
    Okay, so I've got a problem, and I'd love to have it fixed. I am using my favourite way of setting up a simple header/content/footer layout. The problem is that any element I add to the 'content' div of my layout cannot be expanded to 100% in Internet Explorer (as far as I know, IE only). I understand there is no height declared on the 'content' element, but because of the style of its positioning (declaring an absolute top AND bottom), the element fills the desired area. (The content element has a background color defined so you can see that the div is in fact filling the space between the header and the footer.) So my problem is: since the div is clearly expanded between the two, why can't a child be set to 100% to fill that area? If anyone has any solutions, I'd love to hear them. (I'm looking for a solution that won't involve designing an entirely different layout... or at least an explanation of why this is happening. I'm assuming at this point it's because of the lack of a height declaration -- but the div is expanded, so I don't get it!) You can view the example page here: http://www.elizabethlouter.com/html/index.html

    And here is the code as used on the page:

    ```html
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <meta name="robots" content="noindex" />
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>No 100% height on 'content' child div in IE</title>
    </head>
    <style>
    html, body {
        width:100%;
        height:100%;
        margin:0px;
        padding:0px;
    }
    body {
        position:relative;
    }
    #wrapper {
        position:absolute;
        top:0px;
        width:960px;
        height:100%;
        left:50%;
        margin-left:-480px;
    }
    #header {
        position:absolute;
        top:0px;
        left:0px;
        width:100%;
        height:200px;
        background-color:#999;
    }
    #content {
        position:absolute;
        top:100px;
        bottom:50px;
        left:0px;
        width:100%;
        background-color:#F7F7F7;
    }
    #content_1 {
        width:200px;
        background-color:black;
        height:100%;
    }
    #footer {
        position:absolute;
        bottom:0px;
        left:0px;
        width:100%;
        height:50px;
        background-color:#999;
    }
    </style>
    <body>
    <div id="wrapper">
        <div id="header">
        </div>
        <div id="content">
            <div id="content_1">
            </div>
        </div>
        <div id="footer">
        </div>
    </div>
    </body>
    </html>
    ```
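    One explanation that fits the symptom: a percentage height resolves against the parent's declared height, and #content declares none (only top/bottom offsets), so height:100% on #content_1 has nothing to resolve against under IE's stricter interpretation. A sketch that sidesteps the percentage entirely by giving the child the same absolute top/bottom treatment (works in IE7 and up; IE6 does not honour simultaneous top and bottom and would need an expression() fallback):

    ```css
    #content_1 {
        position: absolute;   /* positioned against #content */
        top: 0;
        bottom: 0;            /* stretch between the two offsets */
        left: 0;
        width: 200px;
        background-color: black;
    }
    ```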

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I've loved MSBuild and didn't quite understand the reasons for this change. In fact, given that I've been exclusively using CruiseControl for continuous integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I'm starting to become a believer. I'm also starting to see some of the benefits of Workflow Foundation for the overall processing, because it gives you constructs not available in MSBuild such as parallel tasks, better control-flow constructs, and a slightly better customization story.

    The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I'd recommend reading Mike Fourie's well-known post on Versioning Code in TFS before you get started. That post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don't use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync.

    To do this in TFS 2010, the best post I've found has been Jim Lamb's post on building a custom TFS 2010 workflow activity. Overall, that post is excellent, but the primary issue I have with it is that the assembly version numbers produced are based on a date and look like this: "2010.5.15.1". This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the "1.1 release" or "1.2 release" -- which would have an assembly version number of "1.1.317.0", for example. In this post, I'll walk through the process of customizing the assembly version number based on this method -- customizing the concepts in Lamb's post to suit my needs. I'll also be combining this with the concepts of Fourie's post, particularly with regard to the standards around how to version the assemblies.

    The first thing I'll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this:

    ```csharp
    using System;
    using System.Reflection;

    [assembly: AssemblyVersion("1.1.0.0")]
    [assembly: AssemblyFileVersion("1.1.0.0")]
    ```

    I'll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, choosing "Add - Existing Item...", then, when I click the SolutionAssemblyVersionInfo.cs file, making sure I "Add As Link". Now the Solution Explorer will show our file. We can see that it's a "link" file because of the black arrow in the icon within all our projects. Of course you'll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid duplicate attributes, since they now live in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit.

    At this point, we're ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor, and then I want the CI process to add the third number as a unique incremental number. We'll leave the fourth position always "0" for now -- it's held in reserve in case the day ever comes when we need to do an emergency patch to Production based on a branched version.

    Writing the Custom Workflow Activity

    Similar to Lamb's post, I'm going to write two custom workflow activities. The "outer" activity (a XAML activity) will be pretty straightforward. It will check whether the solution version file exists in the solution root and, if so, delegate the replacement of the version to the AssemblyVersionInfo activity, which is a CodeActivity.

    Notice that the arguments of this activity are "solutionVersionFile" and "tfsBuildNumber", which will be passed in. The tfsBuildNumber passed in will look something like this: "CI_MyApplication.4", and we'll need to grab the "4" (i.e., the incremental revision number) and put that in the third position. Then we'll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had "1.1.0.0" for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want the resulting file to have "1.1.4.0".

    Before we do anything, let's put together a unit test for all this so we can know if we get it right:

    ```csharp
    [TestMethod]
    public void Assembly_version_should_be_parsed_correctly_from_build_name()
    {
        // arrange
        const string versionFile = "SolutionAssemblyVersionInfo.cs";
        WriteTestVersionFile(versionFile);
        var activity = new VersionAssemblies();
        var arguments = new Dictionary<string, object> {
            { "tfsBuildNumber", "CI_MyApplication.4"},
            { "solutionVersionFile", versionFile}
        };

        // act
        var result = WorkflowInvoker.Invoke(activity, arguments);

        // assert
        Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]);
        var lines = File.ReadAllLines(versionFile);
        Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]"));
        Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]"));
    }

    private void WriteTestVersionFile(string versionFile)
    {
        var fileContents = "using System.Reflection;\n" +
            "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" +
            "[assembly: AssemblyFileVersion(\"1.2.0.0\")]";
        File.WriteAllText(versionFile, fileContents);
    }
    ```

    At this point, the code for our AssemblyVersionInfo activity is pretty straightforward:

    ```csharp
    [BuildActivity(HostEnvironmentOption.Agent)]
    public class AssemblyVersionInfo : CodeActivity
    {
        [RequiredArgument]
        public InArgument<string> FileName { get; set; }

        [RequiredArgument]
        public InArgument<string> TfsBuildNumber { get; set; }

        public OutArgument<string> NewAssemblyFileVersion { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            var solutionVersionFile = this.FileName.Get(context);

            // Ensure that the file is writeable
            var fileAttributes = File.GetAttributes(solutionVersionFile);
            File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly);

            // Prepare assembly versions
            var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile);
            var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context));
            var newAssemblyVersion = string.Format("{0}.{1}.0.0",
                majorMinor.Item1, majorMinor.Item2);
            var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0",
                majorMinor.Item1, majorMinor.Item2, newBuildNumber);
            this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion);

            // Perform the actual replacement
            var contents = this.GetFileContents(newAssemblyVersion, newAssemblyFileVersion);
            File.WriteAllText(solutionVersionFile, contents);

            // Restore the file's original attributes
            File.SetAttributes(solutionVersionFile, fileAttributes);
        }

        #region Private Methods

        private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion)
        {
            var cs = new StringBuilder();
            cs.AppendLine("using System.Reflection;");
            cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion);
            cs.AppendLine();
            cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion);
            return cs.ToString();
        }

        private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath)
        {
            var lines = File.ReadAllLines(filePath);
            var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault();

            if (versionLine == null)
            {
                throw new InvalidOperationException(
                    "File does not contain [assembly: AssemblyVersion] attribute");
            }

            return ExtractMajorMinor(versionLine);
        }

        private static Tuple<string, string> ExtractMajorMinor(string versionLine)
        {
            var firstQuote = versionLine.IndexOf('"') + 1;
            var secondQuote = versionLine.IndexOf('"', firstQuote);
            var version = versionLine.Substring(firstQuote, secondQuote - firstQuote);
            var versionParts = version.Split('.');
            return new Tuple<string, string>(versionParts[0], versionParts[1]);
        }

        private string GetNewBuildNumber(string buildName)
        {
            return buildName.Substring(buildName.LastIndexOf(".") + 1);
        }

        #endregion
    }
    ```

    At this point, the final step is to incorporate this activity into the overall build template. Make a copy of DefaultTemplate.xaml -- we'll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happen, drag the VersionAssemblies activity in. Then set the LabelName variable to BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion, since newAssemblyFileVersion was produced by our activity.

    Configuring CI

    Once you add your solution to source control, you can configure CI with the build definition window. The main difference is that we'll change the Process tab to reflect a different build number format and choose our custom build process file. When the build completes, we'll see the name of our project with the unique revision number. If we look at the detailed build log for the latest build, we'll see the label being created with our custom task. We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow). Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties.

    Full Traceability

    We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI -- in fact, I've done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. The new features that TFS 2010 gives you to make these types of customizations in your build process are quite easy once you get over the initial curve.

  • Netflix, jQuery, JSONP, and OData

    - by Stephen Walther
    At the last MIX conference, Netflix announced that they are exposing their catalog of movie information using the OData protocol. This is great news! It means that you can take advantage of all of the advanced OData querying features against a live database of Netflix movies. In this blog entry, I'll demonstrate how you can use Netflix, jQuery, JSONP, and OData to create a simple movie lookup form. The form enables you to enter a movie title, or part of a movie title, and display a list of matching movies. For example, Figure 1 illustrates the movies displayed when you enter the value robot into the lookup form.

    Using the Netflix OData Catalog API

    You can learn about the Netflix OData Catalog API at the following website: http://developer.netflix.com/docs/oData_Catalog. The nice thing about this website is that it provides plenty of samples. It also has a good general reference for OData. For example, the website includes a list of OData filter operators and functions.

    The Netflix Catalog API exposes 4 top-level resources:

    Titles – a database of movie information, including interesting movie properties such as Synopsis, BoxArt, and Cast.
    People – a database of people information, including interesting information such as Awards, TitlesDirected, and TitlesActedIn.
    Languages – enables you to get title information in different languages.
    Genres – enables you to get title information for specific movie genres.

    OData is REST based. This means that you can perform queries by putting together the right URL. For example, if you want to get a list of the movies that were released after 2010 and that had an average rating greater than 4, you can enter the following URL in the address bar of your browser:

    http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear gt 2010 and AverageRating gt 4

    Entering this URL returns the movies in Figure 2.

    Creating the Movie Lookup Form

    The complete code for the Movie Lookup form is contained in Listing 1.

    Listing 1 – MovieLookup.htm

    ```html
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Netflix with jQuery</title>
        <style type="text/css">
            #movieTemplateContainer div
            {
                width: 400px;
                padding: 10px;
                margin: 10px;
                border: black solid 1px;
            }
        </style>
        <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>
    </head>
    <body>

    <label>Search Movies:</label>
    <input id="movieName" size="50" />
    <button id="btnLookup">Lookup</button>

    <div id="movieTemplateContainer"></div>

    <script id="movieTemplate" type="text/html">
        <div>
            <img src="<%=BoxArtSmallUrl %>" />
            <strong><%=Name%></strong>
            <p>
                <%=Synopsis %>
            </p>
        </div>
    </script>

    <script type="text/javascript">

        $("#btnLookup").click(function () {
            // Build OData query
            var movieName = $("#movieName").val();
            var query = "http://odata.netflix.com/Catalog" // netflix base url
                + "/Titles" // top-level resource
                + "?$filter=substringof('" + escape(movieName) + "',Name)" // filter by movie name
                + "&$callback=callback" // jsonp request
                + "&$format=json"; // json request

            // Make JSONP call to Netflix
            $.ajax({
                dataType: "jsonp",
                url: query,
                jsonpCallback: "callback",
                success: callback
            });
        });

        function callback(result) {
            // unwrap result
            var movies = result["d"]["results"];

            // show movies in template
            var showMovie = tmpl("movieTemplate");
            var html = "";
            for (var i = 0; i < movies.length; i++) {
                // flatten movie
                movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl;
                // render with template
                html += showMovie(movies[i]);
            }
            $("#movieTemplateContainer").html(html);
        }

    </script>

    </body>
    </html>
    ```

    The HTML page in Listing 1 includes two JavaScript libraries:

    ```html
    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>
    ```

    The first script tag retrieves jQuery from the Microsoft Ajax CDN. You can learn more about the Microsoft Ajax CDN by visiting the following website: http://www.asp.net/ajaxLibrary/cdn.ashx. The second script tag references Resig's micro-templating library. Because I want to use a template to display each movie, I need this library: http://ejohn.org/blog/javascript-micro-templating/

    When you enter a value into the Search Movies input field and click the button, the following JavaScript code is executed:

    ```javascript
    // Build OData query
    var movieName = $("#movieName").val();
    var query = "http://odata.netflix.com/Catalog" // netflix base url
        + "/Titles" // top-level resource
        + "?$filter=substringof('" + escape(movieName) + "',Name)" // filter by movie name
        + "&$callback=callback" // jsonp request
        + "&$format=json"; // json request

    // Make JSONP call to Netflix
    $.ajax({
        dataType: "jsonp",
        url: query,
        jsonpCallback: "callback",
        success: callback
    });
    ```

    This code builds a query that will be executed against the Netflix Catalog API. For example, if you enter the search phrase King Kong, the following URL is created:

    http://odata.netflix.com/Catalog/Titles?$filter=substringof('King%20Kong',Name)&$callback=callback&$format=json

    This query includes the following parameters:

    $filter – you assign a filter expression to this parameter to filter the movie results.
    $callback – you assign the name of a JavaScript callback method to this parameter. OData calls this method to return the movie results.
    $format – you assign either the value json or xml to this parameter to specify the format of the movie results.

    Notice that all of the OData parameters -- $filter, $callback, $format -- start with a dollar sign ($). The Movie Lookup form uses JSONP to retrieve data across the Internet. Because WCF Data Services supports JSONP, and Netflix uses WCF Data Services to expose movies using the OData protocol, you can use JSONP when interacting with the Netflix Catalog API. To learn more about using JSONP with OData, see Pablo Castro's blog: http://blogs.msdn.com/pablo/archive/2009/02/25/adding-support-for-jsonp-and-url-controlled-format-to-ado-net-data-services.aspx

    The actual JSONP call is performed by calling the $.ajax() method. When this call successfully completes, the JavaScript callback() method is called. The callback() method looks like this:

    ```javascript
    function callback(result) {
        // unwrap result
        var movies = result["d"]["results"];

        // show movies in template
        var showMovie = tmpl("movieTemplate");
        var html = "";
        for (var i = 0; i < movies.length; i++) {
            // flatten movie
            movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl;
            // render with template
            html += showMovie(movies[i]);
        }
        $("#movieTemplateContainer").html(html);
    }
    ```

    The movie results from Netflix are passed to the callback method. The callback method takes advantage of Resig's micro-templating library to display each of the movie results. A template used to display each movie is passed to the tmpl() method. The movie template looks like this:

    ```html
    <script id="movieTemplate" type="text/html">
        <div>
            <img src="<%=BoxArtSmallUrl %>" />
            <strong><%=Name%></strong>
            <p>
                <%=Synopsis %>
            </p>
        </div>
    </script>
    ```

    This template looks like a server-side ASP.NET template. However, the template is rendered in the client (browser) instead of on the server.

    Summary

    The goal of this blog entry was to demonstrate how well jQuery works with OData. We managed to use a number of interesting open-source libraries and open protocols while building the Movie Lookup form, including jQuery, JSONP, JSON, and OData.

    Read the article

  • 14+ Real Estate WordPress Themes

    - by Aditi
If you are looking for a great WordPress real estate theme, below is a list of some of the best WordPress real estate themes, so you can find the one best suited to you and keep pace with the increasing demands of the real estate business. We have covered only the best themes available. The themes are flexible and can be used by anybody in the real estate business. Whether you are a realtor, agent, appraiser or realty, these can be modified for your use.

Estate
It is an immensely powerful and simple to manage business theme. It offers advanced SEO control, clean code and styling modification features. It adds a new "Properties" management facility when installed – proving it's far more than just a WordPress theme. It offers flexible page templates, an advanced search facility that allows you to drill down into properties based on very specific criteria, Google Maps integration and smart property images management. It is a complete web solution. It also has IDX functionality due to dsIDXpress plugin integration, which allows multi-listing services.
Price: $200 | View Demo | Download

ElegantEstate
It makes your WordPress blog into a full-feature real estate website. The theme makes browsing your listings easy, and adds special integration features for property info, photos, Google Maps and more. Help increase sales by establishing an elegant and professional online presence today. It offers Opera, Netscape, Safari and WordPress 3.0 compatibility. It comes with five color schemes, threaded comments, optional blog-style structure, Gravatar support, Firefox compatibility, IE8 + IE7 + IE6 compatibility, advertisement readiness, widget-ready sidebars, a theme options page, custom thumbnail images, PSD files, valid XHTML + CSS, smooth tableless design, ePanel theme options, page templates, complete localization and many more features.
Price: $39 (Package includes more than 55 themes) | View Demo | Download

Open House
Open House is fully compatible with WordPress 3.0+ and a highly customizable Real Estate WordPress theme. It has Google Maps integration with Street View. It has a professional look for both Agents and Realtors. It is suited for all markets and countries, with theme localization, translation and internationalization; English, Spanish and Portuguese language files are provided in the Developer Package. It has custom scripts, which make it easy to add/delete/modify listings. It also includes a photo gallery with a lightbox effect, gorgeous photo fade animations and automatic Google Maps integration. The theme can be used as a single- or multi-agent website with individual Agent-Realtor pages with listings and biography information, an Agent photo uploader and a financing calculator. There is Multi-Category search for potential customers to locate the house they want.
Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download

Residence Real Estate
It is a stunning, WordPress 3.0+ compatible real estate theme. It has a dynamic real estate framework management module for easy edit-delete-add more features options, which makes this theme super easy to customize to the market's needs. It allows you to add your own labels and values in your own language, and to switch the theme to your own language, with English and Spanish files included and the ability to add your own language. It offers Multi-Category search with breadcrumb filtered results and easy photo gallery management with drag-drop sorting of images.
It allows you to build your own multi-category search section menu with custom labels-choices and unlimited dropdown menus, presented in a professional module with search results in breadcrumb navigation.
Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download

Smooth
Smooth is a WordPress Real Estate theme. It is a complete theme, which comes with Multi-Category Search, Google Maps integration, and an Agent Photo and Logo uploader, offering a professional and extremely affordable solution for Realtors and Agents to showcase their properties with ease. You can add your listings with the extremely easy and flexible Dynamic Real Estate Framework, edit-add-modify-delete all features, labels and values within the WordPress administration, and upload unlimited photos to your galleries with the latest WordPress 3.0+ features. It is a complete solution for real estate sites.
Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download

Homeowners
It is another WordPress Real Estate theme: a fast-loading, optimized theme with Google Maps integration, fully compatible with WordPress 3.0 features and all real estate markets. It has a professional, clean look and is full of features that are extremely easy to modify. It also provides 12 new styles. English, Spanish and Portuguese language files are provided in the Developer Package. Homeowners features custom scripts that make add/delete/modify listings an easy task, with an included photo gallery with a lightbox effect and automatic Google Maps integration with street view (new). Agents have access only to their own listings and can manage listings from their own account, making this theme an ideal, affordable solution for Realtors and Real Estate agencies. The theme can be used as a single- or multi-agent website with individual Agent-Realtor pages with listings and biography information, an Agent photo uploader and a financing calculator. Multi-category search has also been provided.
Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download

Real Agent Real Estate
This theme is a clean, grid-based, WordPress 3.0+ compatible real estate theme. It has a dynamic real estate framework management module for easy edit-delete-add more features options. It is easy to customize to the market. It allows you to add your own labels and values in your own language, and to switch the theme to your own language, with English and Spanish files included and the ability to add your own language. It offers Multi-Category search with breadcrumb filtered results and easy photo gallery management with drag-drop sorting of images. You can upload property photos in bulk with the native WordPress uploader and the new image editing and resizing options in WordPress 3.0+. The theme features 5 different color styles – blue, black, red, green and purple – with professional layouts, logo and agent photo uploaders. This theme is suited to both individual and multiple agents.
Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download

AgentPress
The AgentPress theme is an ideal solution for real estate agents. It offers multiple page templates that can be used to create a complete real estate website, from single property templates to a custom homepage. It is compatible with WordPress 3.0 and 3.1. It has custom background/header options, a property template, 6 layout options, fixed width, threaded comments and many more features.
Price: $99.95 | View Demo | Download

Real Estate
It is one of the best Real Estate themes. It offers single-click auto install of the site, lets users pay and submit properties on your site, supports a multi-agent site with profiles, is a strategically built real estate site with professional design, and provides a user dashboard to edit/renew submissions, auto-generated Google Maps and image slideshows, and many more unique features. Once users search for property matching their criteria, the properties are listed with all the necessary parameters that let them select the property of their choice. Users can also add a property to favorites so they can check it later from their member-area dashboard. Admins may display a different sidebar on this page and add widgets of their choice. This theme is full of custom, dynamic widgets such as top agents, a finance calculator, user login, advertise blocks, testimonials and so on. There is a property details page where users can see the actual property. The agent details are displayed with full contact details and appropriate links, so the visitor can get all the info about the property being sold and the seller, and may contact them by filling out a simple form. The email is sent directly to the person who listed the property.
Price: $89.95 Single | $159.95 Developer | View Demo | Download

Broker Real Estate
It is also a WordPress 3.0+ compatible real estate theme. It has a featured property slideshow and a dynamic real estate framework management module for easy edit-delete-add more features. You can add your own labels and values in your own language. It offers multi-category search with breadcrumb-filtered results and easy photo gallery management with drag-drop sorting of images. You can also build your own multi-category search section menu with custom labels-choices and unlimited dropdown menus.
Price: $39.95 essential | $69.95 standard | $99.95 premium | View Demo | Download

Decasa
It has a custom search panel that lets your users easily browse your properties by keyword search or category select dropdowns. It offers the property exposé, a user-friendly overview of the most important details of each real estate object. You can easily add this data through a post settings meta box on the post edit screen. You can easily create a real estate image gallery. Its theme options panel makes it easy to make the basic theme settings. It supports the new WordPress post thumbnail feature: when uploading an image file, the theme will automatically create all the necessary image sizes. You can also create your own custom menu easily and fast, with drag and drop, without touching any code.
Price: 39 € | View Demo | Download

RealtorPress
A real estate premium WordPress theme from PremiumPress. It is a versatile WordPress theme that can be used by individual agents or real estate companies. The theme allows you to easily add property listings via the custom backend admin area or import CSV spreadsheets. It features customisable search options, Google Maps integration, a real estate data custom field creator, image management tools and more.
Price: $79 | Premium Collection: $259 (all PremiumPress themes) | View Demo | Download

Related posts:
21+ WordPress Photo Blog & Portfolio Themes
14+ WordPress Portfolio Themes
Professional WordPress Business Themes

    Read the article

  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, and where every element is being processed exactly the same. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing.

Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed.

In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing.

As an example, let's go back to our previous Parallel.ForEach example with contacting a customer. However, this time, we'll change the requirements slightly. In this case, we'll add an extra condition – if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs, it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following:

foreach(var customer in customers)
{
    // Run some process that takes some time...
    DateTime lastContact = theStore.GetLastContact(customer);
    TimeSpan timeSinceContact = DateTime.Now - lastContact;

    // If it's been more than two weeks, send an email, and update...
    if (timeSinceContact.Days > 14)
    {
        // Exit gracefully if we fail to email, since this
        // entire process can be repeated later without issue.
        if (theStore.EmailCustomer(customer) == false)
            break;
        customer.LastEmailContact = DateTime.Now;
    }
}

Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process, and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we're no longer within an iteration statement – we're a delegate running in a method.

This needs to be handled slightly differently when parallelized.
Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState. The ParallelLoopState class is intended to allow concurrently running loop bodies a way to interact with each other, and provides us with a way to break out of a loop. In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>. Using this, we can parallelize the above operation by doing:

Parallel.ForEach(customers, (customer, parallelLoopState) =>
{
    // Run some process that takes some time...
    DateTime lastContact = theStore.GetLastContact(customer);
    TimeSpan timeSinceContact = DateTime.Now - lastContact;

    // If it's been more than two weeks, send an email, and update...
    if (timeSinceContact.Days > 14)
    {
        // Exit gracefully if we fail to email, since this
        // entire process can be repeated later without issue.
        if (theStore.EmailCustomer(customer) == false)
            parallelLoopState.Break();
        else
            customer.LastEmailContact = DateTime.Now;
    }
});

There are a couple of important points here. First, we didn't actually instantiate the ParallelLoopState instance. It was provided directly to us via the Parallel class. All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use. We also needed to change our logic slightly when we call Break(). Since Break() doesn't stop the program flow within our block, we needed to add an else case to only set the property in customer when we succeeded. This same technique can be used to break out of a Parallel.For loop.

That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement. When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement. Calling ParallelLoopState.Break(), however, has a very different behavior.

The issue is that, now, we're no longer processing one element at a time. If we break in one of our threads, there are other threads that will likely still be executing. This leads to an important observation about termination of parallel code:

Early termination in parallel routines is not immediate. Code will continue to run after you request a termination.

This may seem problematic at first, but it is something you just need to keep in mind while designing your routine. ParallelLoopState.Break() should be thought of as a request. We are telling the runtime that no elements that were in the collection past the element we're currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible. Although this may seem problematic at first, it is a good thing. If the runtime tried to immediately stop processing, many of our elements would be partially processed. It would be like putting a return statement in a random location throughout our loop body – which could have horrific consequences to our code's maintainability.

In order to understand and effectively write parallel routines, we, as developers, need a subtle, but profound shift in our thinking. We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we'd first expect.
This is more natural to developers who have dealt with asynchronous models previously, but is an important distinction when moving to concurrent programming models.

As an example, I'll discuss the Break() method. ParallelLoopState.Break() functions in a way that may be unexpected at first. When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called. This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code:

var collection = Enumerable.Range(0, 20);
var pResult = Parallel.ForEach(collection, (element, state) =>
{
    if (element > 10)
    {
        Console.WriteLine("Breaking on {0}", element);
        state.Break();
    }
    Console.WriteLine(element);
});

If we run this, we get a result that may seem unexpected at first:

0
2
1
5
6
3
4
10
Breaking on 11
11
Breaking on 12
12
9
Breaking on 13
13
7
8
Breaking on 15
15

What is occurring here is that we loop until we find the first element where the element is greater than 10. In this case, this was found, the first time, when one of our threads reached element 11. It requested that the loop stop by calling Break() at this point. However, the loop continued processing until all of the elements less than 11 were completed, then terminated. This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing. You can see our other threads that were running each tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed.

If this behavior is not desirable, there is another option. Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop(). The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed. Stop() will not stop the processing within an element, so elements already being processed will continue to be processed. It will prevent new elements, even ones found earlier in the collection, from being processed. Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true. This lets longer running processes poll for this value, and return after performing any necessary cleanup.

The basic rule of thumb for choosing between Break() and Stop() is the following:

Use ParallelLoopState.Stop() when possible, since it terminates more quickly. This is particularly useful in situations where you are searching for an element or a condition in the collection. Once you've found it, you do not need to do any other processing, so Stop() is more appropriate.

Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement.

Both methods behave differently than our C# break statement. Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect. This is due to my second observation:

Parallelizing a routine will almost always change its behavior.

This sounds crazy at first, but it's a concept that's so simple it's easy to forget. We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
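
As a concrete sketch of the Stop() guidance above, the following is one way (my own illustrative example, not from the original post) to search a collection for a match using ParallelLoopState.Stop() together with the IsStopped property; MatchesCriteria is a hypothetical, thread-safe predicate:

Customer match = null;
object resultLock = new object();

Parallel.ForEach(customers, (customer, loopState) =>
{
    // Another thread may have already requested a stop; skip remaining work.
    if (loopState.IsStopped)
        return;

    if (MatchesCriteria(customer)) // hypothetical predicate
    {
        // Multiple threads could match simultaneously, so guard the write.
        lock (resultLock)
        {
            if (match == null)
                match = customer;
        }

        // Request termination as soon as possible. No ordering is guaranteed,
        // so this finds *a* match, not necessarily the first in source order.
        loopState.Stop();
    }
});

Since Stop() makes no guarantees about which elements get processed, this pattern fits "find any match" searches; if the first match in source order is required, Break() is the closer fit.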

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations. Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects. By introducing Parallelism to our declarative model, we add some extra complexity. This, in turn, adds some extra requirements that must be addressed.

In order to illustrate the main differences, and why they exist, let's begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ.

LINQ to Objects is mainly built upon a single class: Enumerable. The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>. Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface. This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ:

double min = collection
    .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
    .Min(item => item.PerformComputation());

Other LINQ variants work in a similar fashion. For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database.

PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable. When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel(). This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ. There is a similar ParallelQuery class for working with non-generic IEnumerable implementations.

This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created.

Typically, the type you'll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>. Instead of dealing with an interface, implemented by an unknown class, we're dealing with a specific class type. This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always "switch back" to an IEnumerable<T>. The difference only arises at the beginning of our parallelization. When we're using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().
There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>. This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning. This can be useful if you have an operation which will not parallelize well or is not thread-safe. For example, the following is perfectly valid, and similar to our previous examples:

double min = collection
    .AsParallel()
    .Select(item => item.SomeOperation())
    .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
    .Min(item => item.PerformComputation());

However, if SomeOperation() is not thread-safe, we could just as easily do:

double min = collection
    .Select(item => item.SomeOperation())
    .AsParallel()
    .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
    .Min(item => item.PerformComputation());

In this case, we're using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel.

PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects. If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method:

double min = collection
    .AsParallel()
    .Select(item => item.SomeOperation())
    .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
    .AsEnumerable()
    .Min(item => item.PerformComputation());

Here, we're converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially.

This could also be written as two statements, which would allow us to use the language integrated syntax for the first portion:

var tempCollection = from item in collection.AsParallel()
                     let e = item.SomeOperation()
                     where (e.SomeProperty > 6 && e.SomeProperty < 24)
                     select e;

double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation());

This allows us to use the standard LINQ style language integrated query syntax, but control whether it's performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately.

The second important difference between PLINQ and LINQ deals with order preservation. PLINQ, by default, does not preserve the order of the source collection. This is by design. In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time. Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary. Therefore, by default, the system is allowed to completely change the order of your sequence during processing. If you are doing a standard query operation, this is usually not an issue. However, there are times when keeping a specific ordering in place is important. If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method. This will cause our sequence ordering to be preserved.

For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.
In LINQ to Objects, our code might look something like:

// Using IEnumerable<SourceClass> collection
IEnumerable<ResultClass> results = collection
    .Select(e => e.CreateResult())
    .Take(100);

If we just converted this to a parallel query naively, like so:

IEnumerable<ResultClass> results = collection
    .AsParallel()
    .Select(e => e.CreateResult())
    .Take(100);

We could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved. To get the same results as our original query, we need to use:

IEnumerable<ResultClass> results = collection
    .AsParallel()
    .AsOrdered()
    .Select(e => e.CreateResult())
    .Take(100);

This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially. This will cause our query to run slower, since there is overhead involved in maintaining the ordering. However, in this case, it is required, since the ordering is required for correctness.

PLINQ is incredibly useful. It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we've used previously. There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything. When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
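
To see the ordering difference in isolation, here is a small, self-contained sketch (my own example, with made-up data, not from the original post) that runs the same projection with and without AsOrdered():

using System;
using System.Linq;

class OrderingDemo
{
    static void Main()
    {
        var source = Enumerable.Range(0, 10);

        // Unordered: results may come back in any order, varying run to run.
        var unordered = source.AsParallel()
                              .Select(i => i * i)
                              .ToArray();
        Console.WriteLine(string.Join(", ", unordered));

        // Ordered: results always match the sequential ordering.
        var ordered = source.AsParallel()
                            .AsOrdered()
                            .Select(i => i * i)
                            .ToArray();
        Console.WriteLine(string.Join(", ", ordered)); // always 0, 1, 4, 9, ...
    }
}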

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
Parallel LINQ and the Task Parallel Library contain many options for configuration. Although the default configuration options are often ideal, there are times when customizing the behavior is desirable. Both frameworks provide full configuration support.

When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine. By default, PLINQ and the TPL both use the ThreadPool to schedule tasks. Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal. However, there are times that the default behavior is not appropriate. For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system. Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches.

In the Task Parallel Library, configuration is handled via the ParallelOptions class. All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument.

We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property. For example, let's revisit one of the simple data parallel examples from Part 2:

Parallel.For(0, pixelData.GetUpperBound(0), row =>
{
    for (int col=0; col < pixelData.GetUpperBound(1); ++col)
    {
        pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
    }
});

Here, we're looping through an image, and calling a method on each pixel in the image. If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system. This could be accomplished easily by doing:

var options = new ParallelOptions();
options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1);

Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
{
    for (int col=0; col < pixelData.GetUpperBound(1); ++col)
    {
        pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
    }
});

Now, we're restricting this routine to using no more than half the cores in our system. Note that I included a check to prevent a single core system from supplying zero; without this check, we'd potentially cause an exception. I also did not hard code a specific value for the MaxDegreeOfParallelism property. One of our goals when parallelizing a routine is allowing it to scale on better hardware. Specifying a hard-coded value would contradict that goal.

Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.
The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads. In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism.

Let's revisit our declarative data parallelism sample from Part 6:

double min = collection.AsParallel().Min(item => item.PerformComputation());

Here, we're performing a computation on each element in the collection, and saving the minimum value of this operation. If we wanted to restrict this to a limited number of threads, we would add our new extension method:

int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
double min = collection
    .AsParallel()
    .WithDegreeOfParallelism(maxThreads)
    .Min(item => item.PerformComputation());

This automatically restricts the PLINQ query to half of the threads on the system.

PLINQ provides some additional configuration options. By default, PLINQ will occasionally revert to processing a query sequentially. This occurs because many queries, if parallelized, typically actually cause an overall slowdown compared to a serial processing equivalent. By analyzing the "shape" of the query, PLINQ often decides to run a query serially instead of in parallel. This can occur for (taken from MSDN):

Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.

Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order.

Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).

Queries that contain Concat, unless it is applied to indexable data sources.

Queries that contain Reverse, unless applied to an indexable data source.

If the specific query follows these rules, PLINQ will run the query on a single thread. However, none of these rules look at the specific work being done in the delegates, only at the "shape" of the query. There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly. In these cases, you can override the default behavior by using the WithExecutionMode extension method. This would be done like so:

var reversed = collection
    .AsParallel()
    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
    .Select(i => i.PerformComputation())
    .Reverse();

Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>. We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain.

Finally, PLINQ has the ability to configure how results are returned. When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result. For example, the method above returns a new, reversed collection. In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread.

This streaming introduces overhead. IEnumerable<T> isn't designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.
There are two extremes of how this could be accomplished, but both extremes have disadvantages.

The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller. This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach. However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually.

On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot. The advantage here is that the least amount of synchronization is added to the system, which means the query will, on a whole, run the fastest. However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned.

The default behavior in PLINQ is actually between these two extremes. By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain. Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks. This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios.

However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes. This can be done by using the WithMergeOptions extension method. For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering. This can be done by changing our above routine to:

var reversed = collection
    .AsParallel()
    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
    .WithMergeOptions(ParallelMergeOptions.NotBuffered)
    .Select(i => i.PerformComputation())
    .Reverse();

On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

var reversed = collection
    .AsParallel()
    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
    .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
    .Select(i => i.PerformComputation())
    .Reverse();

Notice, also, that you can specify multiple configuration options in a parallel query. By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
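
As a closing sketch (my own example, reusing the hypothetical collection and PerformComputation() from above), here is what several of these configuration methods look like chained together in one query, combined with streaming consumption of the results:

int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);

var results = collection
    .AsParallel()
    .WithDegreeOfParallelism(maxThreads)                        // limit the threads used
    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)  // never fall back to serial
    .WithMergeOptions(ParallelMergeOptions.NotBuffered)         // stream results as available
    .Select(i => i.PerformComputation());

foreach (var result in results)
{
    // Each result is consumed as soon as a worker produces it.
    Console.WriteLine(result);
}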

    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine. If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine. The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple.

Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data. Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection. One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism. In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library. This should make it easier to investigate these topics in more detail.

Once we've determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple. Let's take our example from the Data Decomposition discussion – a simple contrast stretching filter. Here, we have a collection of data (pixels), and we need to run a simple operation on each pixel. Once we know the minimum and maximum values, we most likely would have some simple code like the following:

for (int row=0; row < pixelData.GetUpperBound(0); ++row)
{
    for (int col=0; col < pixelData.GetUpperBound(1); ++col)
    {
        pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
    }
}

This simple routine loops through a two-dimensional array of pixelData, and calls the AdjustContrast routine on each pixel.

As I mentioned, when you're decomposing a problem space, most iteration statements are potentially candidates for data decomposition. Here, we're using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns. We then perform one, independent operation on each element based on those loop positions.

This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change. Since we're using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

Parallel.For(0, pixelData.GetUpperBound(0), row =>
{
    for (int col=0; col < pixelData.GetUpperBound(1); ++col)
    {
        pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
    }
});

Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine. Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.
In this case, our for loop iteration block becomes a delegate created via a lambda expression. This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

We could easily do this to our second for loop as well, but that may not be a good idea. There is a balance to be struck when writing parallel code. We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce. In this case, we have an image of data – most likely hundreds of pixels in both dimensions. By just parallelizing our first loop, each row of pixels can be run as a single task. With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy.

If we parallelize both loops, we're potentially creating millions of independent tasks. This introduces extra overhead with no extra gain, and will actually reduce our overall performance. This leads to my first guideline when writing parallel code:

Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

Also note that I parallelized the outer loop. I could have just as easily partitioned the inner loop. However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels). My second guideline when writing parallel code reflects this:

Partition your problem in a way to place the most work possible into each task.

This typically means, in practice, that you will want to parallelize the routine at the "highest" point possible in the routine, typically the outermost loop. If you're looking at parallelizing methods which call other methods, you'll want to try to partition your work high up in the stack – as you get into lower level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

Parallel.For works great for situations where we know the number of elements we're going to process in advance. If we're iterating through an IList<T> or an array, this is a typical approach. However, there are other iteration statements common in C#. In many situations, we'll use foreach instead of a for loop. This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.

As an example, let's take the following situation. Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change. Normally, this might look something like:

foreach(var customer in customers)
{
    // Run some process that takes some time...
    DateTime lastContact = theStore.GetLastContact(customer);
    TimeSpan timeSinceContact = DateTime.Now - lastContact;

    // If it's been more than two weeks, send an email, and update...
    if (timeSinceContact.Days > 14)
    {
        theStore.EmailCustomer(customer);
        customer.LastEmailContact = DateTime.Now;
    }
}

Here, we're doing a fair amount of work for each customer in our collection, but we don't know how many customers exist.
If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread-safe operations, we could parallelize this using Parallel.ForEach:

Parallel.ForEach(customers, customer =>
{
    // Run some process that takes some time...
    DateTime lastContact = theStore.GetLastContact(customer);
    TimeSpan timeSinceContact = DateTime.Now - lastContact;

    // If it's been more than two weeks, send an email, and update...
    if (timeSinceContact.Days > 14)
    {
        theStore.EmailCustomer(customer);
        customer.LastEmailContact = DateTime.Now;
    }
});

Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression. This keeps our new code very similar to our original iteration statement; however, it will now execute in parallel. The same guidelines apply with Parallel.ForEach as with Parallel.For.

The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library. These, however, are very easy to implement using Parallel.ForEach and the yield keyword, as shown in the sketch below.

Most applications can benefit from implementing some form of Data Parallelism. Iterating through collections and performing "work" is a very common pattern in nearly every application. When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls. Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
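
As a rough sketch of the do/while point above (my own example, with hypothetical GetNextCustomer() and ProcessCustomer() methods that are assumed thread-safe), a while loop can be exposed as an IEnumerable<T> via an iterator, then consumed by Parallel.ForEach:

// Expose a while loop's stream of work items as an IEnumerable<T> via yield.
static IEnumerable<Customer> PendingCustomers()
{
    Customer customer;
    while ((customer = GetNextCustomer()) != null) // hypothetical work source
    {
        yield return customer;
    }
}

// The lazily-produced sequence is then processed in parallel.
Parallel.ForEach(PendingCustomers(), customer =>
{
    ProcessCustomer(customer); // hypothetical, assumed thread-safe
});

Parallel.ForEach synchronizes its own enumeration of the source sequence, so the iterator itself runs serially while the per-element work is spread across threads.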

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation.

Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization.

The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

double min = double.MaxValue;
foreach(var item in collection)
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
}

This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here:

// Buggy code! Do not use!
double min = double.MaxValue;
Parallel.ForEach(collection, item =>
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
});

This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, then element 2 sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers.

Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite this like:

// Safe, but slow
double min = double.MaxValue;
// Make a "lock" object
object syncObject = new object();

Parallel.ForEach(collection, item =>
{
    double value = item.PerformComputation();
    lock(syncObject)
        min = System.Math.Min(min, value);
});

This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
This leads to two observations I'll make:

When parallelizing a routine, try to avoid locks.

That being said:

Always add any and all required synchronization to avoid race conditions.

These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead, and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system.

Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results.

This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

public static ParallelLoopResult ForEach<TSource, TLocal>(
    IEnumerable<TSource> source,
    Func<TLocal> localInit,
    Func<TSource, ParallelLoopState, TLocal, TLocal> body,
    Action<TLocal> localFinally
)

The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations.

The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.

The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation.

Now that I've explained how this works, let's look at the code:

// Safe, and fast!
double min = double.MaxValue;
// Make a "lock" object
object syncObject = new object();

Parallel.ForEach(
    collection,
    // First, we provide a local state initialization delegate.
    () => double.MaxValue,
    // Next, we supply the body, which takes the original item, loop state,
    // and local state, and returns a new local state
    (item, loopState, localState) =>
    {
        double value = item.PerformComputation();
        return System.Math.Min(localState, value);
    },
    // Finally, we provide an Action<TLocal>, to "merge" results together
    localState =>
    {
        // This requires locking, but it's only once per used thread
        lock(syncObject)
            min = System.Math.Min(min, localState);
    }
);

Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking. This same approach can be used by Parallel.For, although now, it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.

Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add.

By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
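
In that spirit, here is a hedged sketch (my own example, not the MSDN one) of a parallel sum using Parallel.For<TLocal>, where the final merge uses Interlocked.Add instead of a lock; ComputeValue() is a hypothetical method returning a long:

long total = 0;

Parallel.For(0, items.Length,
    // Local state: a per-thread running sum, starting at zero.
    () => 0L,
    // Body: add this element's computed value to the thread-local sum.
    (i, loopState, localSum) => localSum + items[i].ComputeValue(), // hypothetical method
    // Merge: atomically add each thread's sum into the shared total.
    localSum => Interlocked.Add(ref total, localSum)
);

Note that Interlocked works here because the aggregate is a long sum; there is no Interlocked equivalent for taking the minimum of doubles, which is why the Min example above still needs its once-per-thread lock.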

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
Many algorithms are easily written to work via recursion. For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root and recursively "walking" the tree. Some algorithms work this way on flat data structures, such as arrays, as well. This is a form of divide and conquer: an algorithm design based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily.

Recursive algorithms, especially ones based on a form of divide and conquer, are often very good candidates for parallelization. This is apparent from a common-sense standpoint: since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme. Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

Implementing this type of algorithm is fairly simple. The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke. This method works by taking any number of delegates defined as an Action, and running them all in parallel. The method returns when every delegate has completed:

Parallel.Invoke(
    () =>
    {
        Console.WriteLine("Action 1 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    },
    () =>
    {
        Console.WriteLine("Action 2 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    },
    () =>
    {
        Console.WriteLine("Action 3 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    }
);

Running this simple example demonstrates the ease of using this method. On my system, for example, I get three separate thread IDs when running the above code. By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer. We can divide our work in each step, and execute each task in parallel, recursively.

For example, suppose we wanted to implement our own quicksort routine. The quicksort algorithm lends itself naturally to divide and conquer. In each iteration, we pick a pivot point, and use that to partition the total array. We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
For example, let's look at this simple, sequential implementation of quicksort:

public static void QuickSort<T>(T[] array) where T : IComparable<T>
{
    QuickSortInternal(array, 0, array.Length - 1);
}

private static void QuickSortInternal<T>(T[] array, int left, int right)
    where T : IComparable<T>
{
    if (left >= right)
    {
        return;
    }

    SwapElements(array, left, (left + right) / 2);

    int last = left;
    for (int current = left + 1; current <= right; ++current)
    {
        if (array[current].CompareTo(array[left]) < 0)
        {
            ++last;
            SwapElements(array, last, current);
        }
    }

    SwapElements(array, left, last);

    QuickSortInternal(array, left, last - 1);
    QuickSortInternal(array, last + 1, right);
}

static void SwapElements<T>(T[] array, int i, int j)
{
    T temp = array[i];
    array[i] = array[j];
    array[j] = temp;
}

Here, we implement the quicksort algorithm in a very common divide and conquer approach. Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster). On my system, for example, the framework's sort can sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

Looking at this routine, though, there is a clear opportunity to parallelize. At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen. This can be rewritten to use Parallel.Invoke by simply changing it to:

// Code above is unchanged...
    SwapElements(array, left, last);

    Parallel.Invoke(
        () => QuickSortInternal(array, left, last - 1),
        () => QuickSortInternal(array, last + 1, right)
    );
}

This routine will now run in parallel. When executing, we see the CPU usage across all cores spike while it runs. However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to approximately 14 seconds! We're using more resources, as seen in the CPU usage, but the overall result is a dramatic slowdown in processing time.

This occurs because parallelization adds overhead. Each time we split the array, we spawn two new tasks! This is far, far too many tasks for our cores to operate upon at a single time. In effect, we're "over-parallelizing" this routine. This is a common problem when working with divide and conquer algorithms, and it leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

This can be done with a few different approaches in this case. Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach. Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will run in parallel and can take full advantage of the machine. This also dramatically reduces the overhead added by parallelizing, since we only add overhead for the first few recursive calls. There are two basic approaches we can take here. The first approach is to look at the total work size and, if it's smaller than a specific threshold, revert to our serial implementation. In this case, we could just check right - left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke, as sketched below.
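A minimal sketch of that size-threshold approach (the threshold value and constant name are assumptions of my own; the article doesn't specify one, and the right value depends on your data and hardware):

const int ParallelThreshold = 4096; // assumption: tune for your workload

// Code above is unchanged...
    SwapElements(array, left, last);

    if (right - left < ParallelThreshold)
    {
        // Partition is small: recurse serially to avoid task overhead
        QuickSortInternal(array, left, last - 1);
        QuickSortInternal(array, last + 1, right);
    }
    else
    {
        // Partition is large: worth the cost of two parallel tasks
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right));
    }
}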
The second approach is to track how "deep" in the "tree" we currently are, and once we pass some number of levels, stop parallelizing. This approach is more general-purpose, since it works on routines that process trees as well as routines working off a single array, but it may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

This can be written very easily. If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive calls to:

// Code above is unchanged...
    SwapElements(array, left, last);

    if (maxDepth < 1)
    {
        QuickSortInternal(array, left, last - 1, maxDepth);
        QuickSortInternal(array, last + 1, right, maxDepth);
    }
    else
    {
        --maxDepth;
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1, maxDepth),
            () => QuickSortInternal(array, last + 1, right, maxDepth));
    }
}

We no longer allow this to parallelize indefinitely – only to a specific depth, at which point we revert to a serial implementation. By starting the routine with a maxDepth equal to Environment.ProcessorCount, we restrict the total number of parallel operations significantly, but still provide adequate work for each processing core.

With this final change, my timings are much better. On average, I get the following timings:

Framework via Array.Sort: 7.3 seconds
Serial quicksort implementation: 9.3 seconds
Naive parallel implementation: 14 seconds
Parallel implementation restricting depth: 4.7 seconds

Finally, we are now faster than the framework's Array.Sort implementation.
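For completeness, here is a sketch of the public entry point that seeds that depth budget (the article describes the starting value but doesn't show the wrapper; the method name is my own):

public static void ParallelQuickSort<T>(T[] array) where T : IComparable<T>
{
    // Start with a depth budget of one level per processing element
    QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
}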

    Read the article

  • Building and Deploying Windows Azure Web Sites using Git and GitHub for Windows

    - by shiju
The Microsoft Windows Azure team has released a new version of Windows Azure which provides many excellent features. The new Windows Azure provides Web Sites, which allows you to deploy up to 10 web sites for free in a multitenant shared environment, and you can easily upgrade a web site to a private, dedicated virtual server when its traffic grows. The Meet Windows Azure Fact Sheet provides the following information about a Windows Azure Web Site:

Windows Azure Web Sites enable developers to easily build and deploy websites with support for multiple frameworks and popular open source applications, including ASP.NET, PHP and Node.js. With just a few clicks, developers can take advantage of Windows Azure's global scale without having to worry about operations, servers or infrastructure. It is easy to deploy existing sites, if they run on Internet Information Services (IIS) 7, or to build new sites, with a free offer of 10 websites upon signup, with the ability to scale up as needed with reserved instances.

Windows Azure Web Sites includes support for the following:

Multiple frameworks including ASP.NET, PHP and Node.js
Popular open source software apps including WordPress, Joomla!, Drupal, Umbraco and DotNetNuke
Windows Azure SQL Database and MySQL databases
Multiple types of developer tools and protocols including Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix

Sign Up for Windows Azure and Enable Web Sites

You can sign up for a 90-day free trial account in Windows Azure from here. After creating an account in Windows Azure, go to https://account.windowsazure.com/, and select preview features to view the available previews. In the Web Sites section of the preview features, click "try it now", which will enable the Web Sites feature.

Create a Web Site in Windows Azure

To create a web site, log in to the Windows Azure portal, select Web Sites, and click the New icon in the left corner. Click WEB SITE, QUICK CREATE, and provide values for the URL field and the REGION dropdown. You can see all of your web sites from the dashboard of the Windows Azure portal.

Set Up Git Publishing

Select your web site from the dashboard, and select Set up Git publishing. To enable Git publishing, you must give a user name and password, which will initialize a Git repository.

Clone the Git Repository

We can use GitHub for Windows to publish apps to non-GitHub repositories, which is well explained by Phil Haack on his blog post. Here we are going to deploy the web site using GitHub for Windows. Let's clone a Git repository using the Git URL taken from the Windows Azure portal. Copy the Git URL and execute "git clone" with it. You can use the Git Shell provided by GitHub for Windows; to get it, right-click on GitHub for Windows and select "open shell here" as shown in the picture below. When executing the git clone command, it will ask for a password; give the password you specified in the Windows Azure portal. After cloning the Git repository, you can drag and drop the local Git repository folder onto the GitHub for Windows GUI. This automatically adds the Windows Azure Web Site repository to GitHub for Windows, where you can commit your changes and publish your web site to Windows Azure.

Publish the Web Site using GitHub for Windows

We can add files for multiple frameworks, including ASP.NET, PHP and Node.js, to the local repository folder and easily publish them to Windows Azure from the GitHub for Windows GUI.
For this demo, let me just add a simple Node.js file named Server.js which handles a few request handlers.

var http = require('http');
var port = process.env.PORT;
var querystring = require('querystring');
var utils = require('util');
var url = require("url");

var server = http.createServer(function(req, res) {
    switch (req.url) { // checking the request url
        case '/':
            homePageHandler(req, res); // handler for home page
            break;
        case '/register':
            registerFormHandler(req, res); // handler for register
            break;
        default:
            nofoundHandler(req, res); // handler for 404 not found
            break;
    }
});
server.listen(port);

// function to display the html form
function homePageHandler(req, res) {
    console.log('Request handler home was called.');
    res.writeHead(200, {'Content-Type': 'text/html'});
    var body = '<html>' +
        '<head>' +
        '<meta http-equiv="Content-Type" content="text/html; ' +
        'charset=UTF-8" />' +
        '</head>' +
        '<body>' +
        '<form action="/register" method="post">' +
        'Name:<input type=text value="" name="name" size=15></br>' +
        'Email:<input type=text value="" name="email" size=15></br>' +
        '<input type="submit" value="Submit" />' +
        '</form>' +
        '</body>' +
        '</html>';
    // response content
    res.end(body);
}

// handler for Post request
function registerFormHandler(req, res) {
    console.log('Request handler register was called.');
    var pathname = url.parse(req.url).pathname;
    console.log("Request for " + pathname + " received.");
    var postData = "";
    req.on('data', function(chunk) {
        // append the current chunk of data to the postData variable
        postData += chunk.toString();
    });
    req.on('end', function() {
        // doing something with the posted data
        res.writeHead(200, "OK", {'Content-Type': 'text/html'});
        // parse the posted data
        var decodedBody = querystring.parse(postData);
        // output the decoded data to the HTTP response
        res.write('<html><head><title>Post data</title></head><body><pre>');
        res.write(utils.inspect(decodedBody));
        res.write('</pre></body></html>');
        res.end();
    });
}

// Error handler for 404 not found
function nofoundHandler(req, res) {
    console.log('Request handler nofound was called.');
    res.writeHead(404, {'Content-Type': 'text/plain'});
    res.end('404 Error - Request handler not found');
}

If there is any change in the local repository folder, GitHub for Windows will automatically detect it. In the step above, we have just added a Server.js file, so GitHub for Windows will detect the change. Let's commit the changes to the local repository before publishing the web site to Windows Azure. After committing all the changes, you can click the publish button, which will publish all the changes to the Windows Azure repository.
The following screenshot shows the deployment history in the Windows Azure portal. GitHub for Windows provides a sync button, which can be used to synchronize the local repository with the Windows Azure repository after new commits are made locally. Our web site is running after the deployment via Git.

Summary

Windows Azure Web Sites lets developers easily build and deploy websites with support for multiple frameworks including ASP.NET, PHP and Node.js, and web sites can easily be deployed using Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix. In this demo, we deployed a Node.js web site to Windows Azure using Git. We can use GitHub for Windows to publish apps to non-GitHub repositories, including publishing web sites to Windows Azure.

    Read the article
