Search Results

Search found 757 results on 31 pages for 'grouping'.


  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.

    The VS 2010 debugger has a ton of great new capabilities. Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release. I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well. With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful.

    Breakpoint Labels

    VS 2010 includes new support for better managing debugger breakpoints. One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution. With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects, and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging. Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit.

    Grouping Multiple Breakpoints Together using a Label

    Below is a screenshot of the breakpoints window within Visual Studio 2010. It lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base). The first and last breakpoints in the list above break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage). Below is the dialog that appears when I select the “Edit labels” command. We can use it to create a new string label for our breakpoints or select an existing one we have already defined. In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover. When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label.

    Filtering/Sorting Breakpoints by Label

    We can use the “Search” combobox to quickly filter/sort breakpoints by label. Below we are only showing those breakpoints with the “Lifetime Management” label.

    Toggling Breakpoints On/Off by Label

    We can also toggle sets of breakpoints on/off by label group. We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click.

    Importing/Exporting Breakpoints

    VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later. To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window. Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down).
    I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily set up the debugger in the correct state to investigate it on a separate machine.

    Pinned DataTips

    Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger. Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value) – and then click the new “pin” button on it to make the DataTip always visible. You can “pin” any number of DataTips you want onto the screen. In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well. Below I’ve “pinned” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name”. Note that these last two variables are sub-properties of the “Request” object.

    Associating Comments with Pinned DataTips

    Hovering over a pinned DataTip exposes some additional UI within the debugger. Clicking the comment button at the bottom of this UI expands the DataTip – and allows you to optionally add a comment with it. This makes it really easy to attach and track debugging notes.

    Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions

    Pinned DataTips can be used across multiple debugger sessions. This means that if you stop the debugger, make a code change, and then recompile and start a new debug session – any pinned DataTips will still be there, along with any comments you associate with them. Pinned DataTips can also be used across multiple Visual Studio sessions. This means that if you close your project, shut down Visual Studio, and then later open the project up again – any pinned DataTips will still be there, along with any comments you associate with them.

    See the Value from Last Debug Session (Great Code Editor Feature)

    How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running. DataTips are by default hidden when you are in the code editor and the debugger isn’t running. On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously set up. Hovering your mouse over a pinned DataTip will cause it to display on the screen. Below you can see what happens when I hover over the first pin in the editor – it displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them. This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects.

    Importing/Exporting Pinned DataTips

    As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions.
    Summary

    Visual Studio 2010 includes a bunch of great new debugger features – both big and small. Today’s post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions). I’ll be covering some of the “big big” new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge; I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand.

    Alternatively, if the pile of cards is not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but this time it needs more information than the mere existence of a new set of values: it needs to consider how many of them there are.

    A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.
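    As a quick aside, here is a minimal T-SQL sketch of that DISTINCT/GROUP BY equivalence. The dbo.Cards table is hypothetical, standing in for the pile of cards in the analogy:

    SELECT DISTINCT Suit FROM dbo.Cards;          -- no aggregates possible here
    SELECT Suit FROM dbo.Cards GROUP BY Suit;     -- same groups, same result...
    SELECT Suit, COUNT(*) AS CardsInSuit          -- ...but GROUP BY also allows aggregates
    FROM dbo.Cards
    GROUP BY Suit;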
    In the particular example I tested, the second plan (the Stream Aggregate) is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straightforward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply.

    But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.)

    A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.)

    An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already.
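    For reference, on the T-SQL side you can see both physical flavours for yourself by forcing them with query hints – a minimal sketch, again against the hypothetical dbo.Cards table:

    -- Stream Aggregate: non-blocking, but the optimizer will add a (blocking) Sort
    -- if the input isn't already ordered by the grouping columns
    SELECT Suit, COUNT(*) AS CardsInSuit
    FROM dbo.Cards
    GROUP BY Suit
    OPTION (ORDER GROUP);

    -- Hash Match Aggregate: blocking, but needs no order on its input
    SELECT Suit, COUNT(*) AS CardsInSuit
    FROM dbo.Cards
    GROUP BY Suit
    OPTION (HASH GROUP);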
    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine, and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover:

    SQLU part 1 – What and why of database testing
    SQLU part 2 – What and why of database refactoring
    SQLU part 3 – Tools of the trade

    With that out of the way let us sharpen our pencils and get going.

    Why test a database

    The sad state of the industry today is that there is very little emphasis on testing in general. Test driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment they are mostly viewed as a waste of time. This is because the average person (let’s not fool ourselves, we’re all average) is unable to think about lower future costs in relation to a little more current work. It’s orders of magnitude easier to know about the current costs in relation to the current amount of work. That’s why programmers convince themselves testing is a waste of time.

    However we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we’re likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we’ll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests we’d know instantly what broke where.

    Imagine we fix a reported bug. We check in the code, deploy it and the users are happy. Until we get a call 2 weeks later that a certain monthly process has stopped working. What we don’t know is that this process was developed by a long gone coworker and for some reason it relied on that same bug we’ve happily fixed. There’s no way we could’ve known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there’s this ETL job that relied on data from that monthly process. Now that we’ve fixed the process it’s giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there’s this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the “Loop of maintenance horror”. With the loop eventually comes blame. Here’s a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it.

    All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?
    What to test in a database

    Now that we’ve decided we’ll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on!

    There are 2 main ways of doing tests: Black box and White box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that the internal changes to the system haven’t caused the input/output behavior of the system to change. The most important thing to test here are the edge conditions. It’s where most programs break. Having good edge condition tests we can be more confident that the system’s changes won’t break. White box testing has full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc… White and Black box tests should be complementary to each other, as they are very much interconnected.

    Testing database routines includes testing stored procedures, views, user defined functions and anything you use to access the data with. Database routines are your input/output interface to the database system. They count as black box testing. We test them for 2 things: data and schema. When testing schema we only care about the columns and the data types they’re returning. After all, the schema is the contract to the outside systems. If it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells SQL Server to return only empty result sets. This speeds up tests because it doesn’t return any data to the client (a short sketch follows at the end of this section). After we’ve validated the schema, we have to test the returned data. There is no other way to do this but to have the expected data known before the test executes, and to compare that data to the database routine output.

    Testing Authentication and Authorization helps us validate who has access to the SQL Server box (Authentication) and who has access to certain database objects (Authorization). For desktop applications and Windows authentication this works well. But the biggest problem here are web apps. They usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad.

    Load testing assures us that our database can handle peak loads. One often overlooked tool for load testing is Microsoft’s OSTRESS tool. It’s part of RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by looking at why certain queries are slow and what to do to fix them.

    One particular problem to think about is how to begin testing existing databases. The first thing we have to do is to get to know those databases. We can’t test something when we don’t know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc… The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we’ve done that we move on to the next grouping and so on until we’ve covered the whole database. Then we move on to the next one.
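    Returning to the SET FMTONLY ON tip above, here is a minimal sketch of a schema test. The dbo.GetOrders procedure and its parameter are hypothetical, standing in for whatever routine you are testing:

    -- Ask SQL Server for metadata only: the routine is parsed, but returns empty result sets
    SET FMTONLY ON;
    EXEC dbo.GetOrders @CustomerId = 42;
    SET FMTONLY OFF;
    -- Compare the returned columns and data types against the expected contract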
    Database testing is a topic that we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.

    Read the article

  • Elegance, thy Name is jQuery

    - by SGWellens
    So, I'm browsing through some questions over on the Stack Overflow website and I found a good jQuery question just a few minutes old. Here is a link to it. It was a tough question; I knew that by answering it, I could learn new stuff and reinforce what I already knew: Reading is good, doing is better. Maybe I could help someone in the process too. I cut and pasted the HTML from the question into my Visual Studio IDE and went back to Stack Overflow to reread the question. Dang, someone had already answered it! And it was a great answer. I never even had a chance to start analyzing the issue. Now I know what a one-legged man feels like in an ass-kicking contest. Nevertheless, since the question and answer were so interesting, I decided to dissect them and learn as much as possible.

    The HTML consisted of some divs separated by h3 headings. Note the elements are laid out sequentially with no programmatic grouping:

    <h3 class="heading">Heading 1</h3>
    <div>Content</div>
    <div>More content</div>
    <div>Even more content</div>
    <h3 class="heading">Heading 2</h3>
    <div>some content</div>
    <div>some more content</div>
    <h3 class="heading">Heading 3</h3>
    <div>other content</div>
    </form></body>

    The requirement was to wrap a div around each h3 heading and the subsequent divs, grouping them into sections. Why? I don't know; I suppose if you screen-scraped some HTML from another site, you might want to reformat it before displaying it on your own. Anyways… Here is the marvelously succinct posted answer:

    $('.heading').each(function () {
        $(this).nextUntil('.heading').andSelf().wrapAll('<div class="section">');
    });

    I was familiar with all the parts except for nextUntil and andSelf. But I'll analyze the whole answer for completeness. I'll do this by rewriting the posted answer in a different style and adding a boat-load of comments:

    function Test()
    {
        // $Sections is a jQuery object and it will contain three elements
        var $Sections = $('.heading');

        // use each to iterate over each of the three elements
        $Sections.each(function () {
            // $this is a jQuery object containing the current element
            // being iterated
            var $this = $(this);

            // nextUntil gets the following sibling elements until it reaches
            // an element with the CSS class 'heading'
            // andSelf adds in the source element (this) to the collection
            $this = $this.nextUntil('.heading').andSelf();

            // wrap the elements with a div
            $this.wrapAll('<div class="section" >');
        });
    }

    The code here doesn't look nearly as concise and elegant as the original answer. However, unless you and your staff are jQuery masters, during development it really helps to work through algorithms step by step. You can step through this code in the debugger and examine the jQuery objects to make sure one step is working before proceeding on to the next. It's much easier to debug and troubleshoot when each logical coding step is a separate line of code.

    Note: You may think the original code runs much faster than this version. However, the time difference is trivial: not enough to worry about: less than 1 millisecond (tested in IE and FF).

    Note: You may want to jam everything into one line because it results in less traffic being sent to the client. That is true. However, most Internet servers now compress HTML and JavaScript by stripping out comments and white space (go to Bing or Google and view the source). This feature should be enabled on your server: let the server compress your code, you don't need to do it.
    Free Career Advice: Creating maintainable code is Job One—Maximum Priority—The Prime Directive. If you find yourself suddenly transferred to customer support, it may be that the code you are writing is not as readable as it could be and not as readable as it should be. Moving on…

    I created a CSS class to enhance the results:

    .section
    {
        background-color: yellow;
        border: 2px solid black;
        margin: 5px;
    }

    Here is the rendered output before… and after the jQuery code runs. Pretty Cool!

    But, while playing with this code, the logic of nextUntil began to bother me: What happens in the last section? What stops elements from being collected since there are no more elements with the .heading class? The answer is nothing. In this case it stopped collecting elements because it was at the end of the page. But what if there were additional HTML elements? I added an anchor tag and another div to the HTML:

    <h3 class="heading">Heading 1</h3>
    <div>Content</div>
    <div>More content</div>
    <div>Even more content</div>
    <h3 class="heading">Heading 2</h3>
    <div>some content</div>
    <div>some more content</div>
    <h3 class="heading">Heading 3</h3>
    <div>other content</div>
    <a>this is a link</a>
    <div>unrelated div</div>
    </form></body>

    The code as-is will include both the anchor and the unrelated div. This isn't what we want. My first attempt to correct this used the filter parameter of the nextUntil function:

    nextUntil('.heading', 'div')

    This will only collect div elements. But it merely skipped the anchor tag and it still collected the unrelated div. The problem is we need a way to tell the nextUntil function when to stop. CSS selectors to the rescue!

    nextUntil('.heading, a')

    This tells nextUntil to stop collecting elements when it gets to an element with a .heading class OR when it gets to an anchor tag. In this case it solved the problem. FYI: The comma operator in a CSS selector allows multiple criteria. Bingo!

    One final note, we could have broken the code down even more: We could have replaced the andSelf function here:

    $this = $this.nextUntil('.heading, a').andSelf();

    With this:

    // get all the following siblings and then add the current item
    $this = $this.nextUntil('.heading, a');
    $this = $this.add(this);

    But in this case, the andSelf function reads real nice. In my opinion. Here's a link to a jsFiddle if you want to play with it.

    I hope someone finds this useful.

    Steve Wellens
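    Worth noting: in later versions of jQuery (1.8 onwards), andSelf was deprecated and renamed addBack, so the same pattern in newer code would read like this sketch:

    $('.heading').each(function () {
        // addBack() is the modern name for the deprecated andSelf()
        $(this).nextUntil('.heading, a').addBack().wrapAll('<div class="section">');
    });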

    Read the article

  • MapReduce in DryadLINQ and PLINQ

    - by JoshReuben
    MapReduce

    See http://en.wikipedia.org/wiki/Mapreduce. The MapReduce pattern aims to handle large-scale computations across a cluster of servers, often involving massive amounts of data. "The computation takes a set of input key/value pairs, and produces a set of output key/value pairs." The developer expresses the computation as two Func delegates: Map and Reduce.

    Map – takes a single input pair and produces a set of intermediate key/value pairs. The MapReduce function groups results by key and passes them to the Reduce function.

    Reduce – accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. "The intermediate values are supplied to the user's Reduce function via an iterator."

    The canonical MapReduce example: counting word frequency in a text file.

    MapReduce using DryadLINQ

    See http://research.microsoft.com/en-us/projects/dryadlinq/ and http://connect.microsoft.com/Dryad. DryadLINQ provides a simple and straightforward way to implement MapReduce operations. The implementation has two primary components: a Pair structure, which serves as a data container, and a MapReduce method, which counts word frequency and returns the top k words.

    The Pair Structure – Pair has two properties: Word is a string that holds a word or key; Count is an int that holds the word count. The structure also overrides ToString to simplify printing the results. The following example shows the Pair implementation.

    public struct Pair
    {
        private string word;
        private int count;

        public Pair(string w, int c)
        {
            word = w;
            count = c;
        }

        public int Count { get { return count; } }

        public string Word { get { return word; } }

        public override string ToString()
        {
            return word + ":" + count.ToString();
        }
    }

    The MapReduce method gets the results; the input data could be partitioned and distributed across the cluster. It:

    1. Creates a DryadTable<LineRecord> object, inputTable, to represent the lines of input text. For partitioned data, use GetPartitionedTable<T> instead of GetTable<T> and pass the method a metadata file.

    2. Applies the SelectMany operator to inputTable to transform the collection of lines into a collection of words. The String.Split method converts the line into a collection of words. SelectMany concatenates the collections created by Split into a single IQueryable<string> collection named words, which represents all the words in the file.

    3. Performs the Map part of the operation by applying GroupBy to the words object. The GroupBy operation groups elements with the same key, which is defined by the selector delegate. This creates a higher order collection, whose elements are groups. In this case, the delegate is an identity function, so the key is the word itself and the operation creates a groups collection that consists of groups of identical words.

    4. Performs the Reduce part of the operation by applying Select to groups. This operation reduces the groups of words from Step 3 to an IQueryable<Pair> collection named counts that represents the unique words in the file and how many instances there are of each word. Each key value in groups represents a unique word, so Select creates one Pair object for each unique word. IGrouping.Count returns the number of items in the group, so each Pair object's Count member is set to the number of instances of the word.

    5. Applies OrderByDescending to counts.
    This operation sorts the input collection in descending order of frequency and creates an ordered collection named ordered.

    6. Applies Take to ordered to create an IQueryable<Pair> collection named top, which contains the k most common words in the input file, and their frequency. The test below then uses the Pair object's ToString implementation to print the top one hundred words, and their frequency.

    public static IQueryable<Pair> MapReduce(string directory, string fileName, int k)
    {
        DryadDataContext ddc = new DryadDataContext("file://" + directory);
        DryadTable<LineRecord> inputTable = ddc.GetTable<LineRecord>(fileName);
        IQueryable<string> words = inputTable.SelectMany(x => x.line.Split(' '));
        IQueryable<IGrouping<string, string>> groups = words.GroupBy(x => x);
        IQueryable<Pair> counts = groups.Select(x => new Pair(x.Key, x.Count()));
        IQueryable<Pair> ordered = counts.OrderByDescending(x => x.Count);
        IQueryable<Pair> top = ordered.Take(k);
        return top;
    }

    To test:

    IQueryable<Pair> results = MapReduce(@"c:\DryadData\input", "TestFile.txt", 100);
    foreach (Pair words in results)
        Debug.Print(words.ToString());

    Note: DryadLINQ applications can use a more compact way to represent the query:

    return inputTable
        .SelectMany(x => x.line.Split(' '))
        .GroupBy(x => x)
        .Select(x => new Pair(x.Key, x.Count()))
        .OrderByDescending(x => x.Count)
        .Take(k);

    MapReduce using PLINQ

    The pattern is relevant even for a single multi-core machine, however. We can write our own PLINQ MapReduce in a few lines. The Map function takes a single input value and returns a set of mapped values → LINQ's SelectMany operator. These are then grouped according to an intermediate key → LINQ GroupBy operator. The Reduce function takes each intermediate key and a set of values for that key, and produces any number of outputs per key → LINQ SelectMany again. We can put all of this together to implement MapReduce in PLINQ that returns a ParallelQuery<T>:

    public static ParallelQuery<TResult> MapReduce<TSource, TMapped, TKey, TResult>(
        this ParallelQuery<TSource> source,
        Func<TSource, IEnumerable<TMapped>> map,
        Func<TMapped, TKey> keySelector,
        Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce)
    {
        return source
            .SelectMany(map)
            .GroupBy(keySelector)
            .SelectMany(reduce);
    }

    The map function takes in an input document and outputs all of the words in that document. The grouping phase groups all of the identical words together, such that the reduce phase can then count the words in each group and output a word/count pair for each grouping:

    var files = Directory.EnumerateFiles(dirPath, "*.txt").AsParallel();
    var counts = files.MapReduce(
        path => File.ReadLines(path).SelectMany(line => line.Split(delimiters)),
        word => word,
        group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });
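    To consume the result, one might print the most frequent words – a minimal sketch, assuming the counts query and the dirPath and delimiters variables from the snippet above:

    // Executes the parallel query and prints the ten most frequent words
    foreach (var pair in counts.OrderByDescending(p => p.Value).Take(10))
    {
        Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
    }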

    Read the article

  • CodePlex Daily Summary for Monday, May 26, 2014

    CodePlex Daily Summary for Monday, May 26, 2014

    Popular Releases

    ClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.

    Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 and 2013 - 1.1.0.0: Issues fixed in this build: 1. Works for CRM 2013 2. Lookup view not getting blocked

    SimCityPak: SimCityPak 0.3.1.0: Main New Features: Fixed importing of Instance Names (get rid of the Dutch translations). Added advanced editor for Decal Dictionaries. Added possibility to import .PNG to generate new decals. Added advanced editor for Path display entries.

    Simple Connect To Db: SimpleConnectToDb_v1: SimpleConnectToDb_v1

    CRM 2011 / CRM 2013 Form Helper: v2014.05.25: v2014.05.25 Added PhoneFormat & PhoneFormatAreaCode. v2014.05.24 Initial Release.

    Create Word documents without MS Word: Release 3.0: Add support for Sections, Section Headers and Footers and right to left languages.

    Corporate News App for SharePoint 2013: CorporateNewsApp v1.6.2.0: Important note: This version contains a major bug fix for the generic error "Request failed. Unexpected response data from server null". This error occurs on SharePoint Online only, following an update of the Javascript API after May 2014. If you have installed this application manually in your applications company catalog, you can download the CorporateNewsApp.app file in the zip archive and update it manually. If you have installed this application directly from the SharePoint Store, it ...

    DevOS: DevOS: Plugin-system added. Including: DevOS.exe, DevOS API.dll. Files must be in the same folder.

    Tiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0. Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last".

    C64 Studio: 3.5: Add: BASIC renumber function. Add: !PET pseudo op. Add: elseif for !if, } else { pseudo op. Add: !TRACE pseudo op. Add: Watches are saved/restored with a solution. Add: Ctrl-A works now in export assembly controls. Add: Preliminary graphic import dialog (not fully functional yet). Add: range and block selection in sprite/charset editor (Shift-Click = range, Alt-Click = block). Fix: Expression evaluator could miscalculate when both division and multiplication were in an expression without parenthesis.

    SEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated Ship cube rotation to rotate ship back to original location (cubes are reoriented but ship appears no different to outsider), and to rotate Grouped items. Repair now fixes the loss of Grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the Grouping on one part of the ship). Installation of this version wi...

    Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both Xaml and JavaScript projects. See a detailed list of improvements, breaking changes and a general overview of version 2. ADDITIONAL DOWNLOADS: Smooth Streaming Client SDK for Windows 8 Applications, Smooth Streaming Client SDK for Windows 8.1 Applications, Smooth Streaming Client SDK for Windows Phone 8.1 Applications, Microsoft PlayReady Client SDK for Windows 8 Applications, Microsoft PlayReady Client SDK for Windows 8.1 Applicat...

    TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update. New items, walls, and tiles. Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Fixed inst...

    DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: DotNet.Highcharts 4.0 tested and adapted to the latest version of Highcharts 4.0.1. Added new chart type: Heatmap. Added new type PointPlacement which represents enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class; 997: Split container from JS; 1006: Series/Categories with numeric names don't render. DotNet.Highcharts.Samples: Updated s...

    ConEmu - Windows console with tabs: ConEmu 140523 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may find at: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file near to "ConEmu.exe".

    Aspose for Apache POI: Missing Features of Apache POI SL - v 1.1: Release contains the Missing Features in Apache POI SL SDK in comparison with Aspose.Slides for dealing with Microsoft PowerPoint. What's New? Following Examples: Managing Slide Transitions, Manage Smart Art, Adding Media Player, Adding Audio Frame to Slide. Feedback and Suggestions: Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.

    PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file. Each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within. Added variable expansion to all paths in the configuration file. Added documentation for each of the Toolkit internal variables that can be used. Changed Install-MSUpdates to continue if any errors are encountered when installing updates. Implemented /Force parameter on Update-GroupPolicy (ensure that any logoff message is ignored) ...

    WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06. Read more at http://wordmat.blogspot.com

    Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed and also a template data cache issue.

    New Projects

    ASP.Net MCV4 Simplified Code Samples: This project is intended to simplify the same. In this project each task is implemented with minimum lines of code to reduce complexity.

    Calvin: net???

    CodeLatino by Latinosoft: A modified version for codeShow -- Probably taking more than a month.

    freeasyBackup: A free and easy to use Backup Tool for everyone. Without any cloud restrictions.

    freeasyExplorer: A free and easy to use File Explorer for everyone.

    openPDFspeedreader: #spritz #pdfreader #speedreader

    PDF Editor to Edit PDF Files in your ASP.NET Applications: This sample application allows the users to edit PDF files online using Aspose.Pdf for .NET.

    SharePoint World Cup 2013: world cup 2014

    SSAS Long Running Query Performance Helper: This utility helps investigate long running multidimensional or mining queries in discovery, de-parameterization and re-parameterization back to source format.

    Read the article

  • What's up with LDoms: Part 5 - A few Words about Consoles

    - by Stefan Hinker
    Back again to look at a detail of LDom configuration that is often forgotten – the virtual console server.

    Remember, LDoms are SPARC systems. As such, each guest will have its own OBP running. And to connect to that OBP, the administrator will need a console connection. Since it's OBP, and not some x86 BIOS, this console will be very serial in nature ;-) It's really very much like in the good old days, where we had a terminal concentrator where all those serial cables ended up in. Just like with other components in LDoms, the virtualized solution looks very similar.

    Every LDom guest requires exactly one console connection. Envision this similar to the RS-232 port on older SPARC systems. The LDom framework provides one or more console services that provide access to these connections. This would be the virtual equivalent of a network terminal server (NTS), where all those serial cables are plugged in. In the physical world, we'd have a list somewhere that would tell us which TCP port of the NTS was connected to which server. "ldm list" does just that:

    root@sun # ldm list
    NAME      STATE     FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
    primary   active    -n-cv-  UART  16    7680M   0.4%  27d 8h 22m
    jupiter   bound     ------  5002  20    8G
    mars      active    -n----  5000  2     8G      0.5%  55d 14h 10m
    venus     active    -n----  5001  2     8G      0.5%  56d 40m
    pluto     inactive  ------        4     4G

    The column marked "CONS" tells us where to reach the console of each domain. In the case of the primary domain, this is actually a (more) physical connection – it's the console connection of the physical system, which is either reachable via the ILOM of that system, or directly via the serial console port on the chassis. All the other guests are reachable through the console service which we created during the initial setup of the system. Note that pluto does not have a port assigned. This is because pluto is not yet bound. (Binding can be viewed very much as the assembly of computer parts – CPU, memory, disks, network adapters and a serial console cable are all put together when binding the domain.) Unless we set the port number explicitly, LDoms Manager will do this on a first come, first serve basis. For just a few domains, this is fine. For larger deployments, it might be a good idea to assign these port numbers manually using the "ldm set-vcons" command. However, there is even better magic associated with virtual consoles.

    You can group several domains into one console group, reachable through one TCP port of the console service. This can be useful when several groups of administrators are to be given access to different domains, or for other grouping reasons. Here's an example:

    root@sun # ldm set-vcons group=planets service=console jupiter
    root@sun # ldm set-vcons group=planets service=console pluto
    root@sun # ldm bind jupiter
    root@sun # ldm bind pluto
    root@sun # ldm list
    NAME      STATE     FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
    primary   active    -n-cv-  UART  16    7680M   6.1%  27d 8h 24m
    jupiter   bound     ------  5002  200   8G
    mars      active    -n----  5000  2     8G      0.6%  55d 14h 12m
    pluto     bound     ------  5002  4     4G
    venus     active    -n----  5001  2     8G      0.5%  56d 42m
    root@sun # telnet localhost 5002
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    sun-vnts-planets: h, l, c{id}, n{name}, q:l
    DOMAIN ID    DOMAIN NAME    DOMAIN STATE
    2            jupiter        online
    3            pluto          online
    sun-vnts-planets: h, l, c{id}, n{name}, q:npluto
    Connecting to console "pluto" in group "planets" ....
    Press ~? for control options ..
What I did here was add the two domains pluto and jupiter to a new console group called "planets" on the service "console" running in the primary domain.  Simply using a group name will create such a group, if it doesn't already exist.  By default, each domain has its own group, using the domain name as the group name.  The group will be available on port 5002, chosen by LDoms Manager because I didn't specify it.  If I connect to that console group, I will now first be prompted to choose the domain I want to connect to from a little menu. Finally, here's an example how to assign port numbers explicitly: root@sun # ldm set-vcons port=5044 group=pluto service=console pluto root@sun # ldm bind pluto root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 16 7680M 3.8% 27d 8h 54m jupiter active -t---- 5002 200 8G 0.5% 30m mars active -n---- 5000 2 8G 0.6% 55d 14h 43m pluto bound ------ 5044 4 4G venus active -n---- 5001 2 8G 0.4% 56d 1h 13m With this, pluto would always be reachable on port 5044 in its own exclusive console group, no matter in which order other domains are bound. Now, you might be wondering why we always have to mention the console service name, "console" in all the examples here.  The simple answer is because there could be more than one such console service.  For all "normal" use, a single console service is absolutely sufficient.  But the system is flexible enough to allow more than that single one, should you need them.  In fact, you could even configure such a console service on a domain other than the primary (or control domain), which would make that domain a real console server.  I actually have a customer who does just that - they want to separate console access from the control domain functionality.  But this is definately a rather sophisticated setup. Something I don't want to go into in this post is access control.  vntsd, which is the daemon providing all these console services, is fully RBAC-aware, and you can configure authorizations for individual users to connect to console groups or individual domain's consoles.  If you can't wait until I get around to security, check out the man page of vntsd. Further reading: The Admin Guide is rather reserved on this subject.  I do recommend to check out the Reference Manual. The manpage for vntsd will discuss all the control sequences as well as the grouping and authorizations mentioned here.

    Read the article

  • Box Selection and Multi-Line Editing with VS 2010

    - by ScottGu
    This is the twenty-second in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. I’ve already covered some of the code editor improvements in the VS 2010 release.  In particular, I’ve blogged about the Code Intellisense Improvements, new Code Searching and Navigating Features, HTML, ASP.NET and JavaScript Snippet Support, and improved JavaScript Intellisense.  Today’s blog post covers a small, but nice, editor improvement with VS 2010 – the ability to use “Box Selection” when performing multi-line editing.  This can eliminate keystrokes and enables some slick editing scenarios. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Box Selection Box selection is a feature that has been in Visual Studio for a while (although not many people knew about it).  It allows you to select a rectangular region of text within the code editor by holding down the Alt key while selecting the text region with the mouse.  With VS 2008 you could then copy or delete the selected text. VS 2010 now enables several more capabilities with box selection, including:
    - Text Insertion: Typing with box selection now allows you to insert new text into every selected line
    - Paste/Replace: You can now paste the contents of one box selection into another and have the content flow correctly
    - Zero-Length Boxes: You can now make a vertical selection zero characters wide to create a multi-line insert point for new or copied text
    These capabilities can be very useful in a variety of scenarios.  Some example scenarios: changing access modifiers (private->public), adding comments to multiple lines, setting fields, or grouping multiple statements together. Great 3 Minute Box-Selection Video Demo Brittany Behrens from the Visual Studio Editor Team has an excellent 3 minute video that shows off a few cool VS 2010 multi-line code editing scenarios with box selection.  Watch it to learn a few ways you can use this new box selection capability to optimize your typing in VS 2010 even further. Hope this helps, Scott P.S. You can learn more about the VS Editor by following the Visual Studio Team Blog or by following @VSEditor on Twitter.

    Read the article

  • LLBLGen Pro v3.0 has been released!

    - by FransBouma
    After two years of hard work we released v3.0 of LLBLGen Pro today! V3.0 comes with a completely new designer which has been developed from the ground up for .NET 3.5 and higher. Below I'll briefly mention some highlights of this new release:
    - Entity Framework (v1 & v4) support
    - NHibernate support (hbm.xml mappings & FluentNHibernate mappings)
    - Linq to SQL support
    - Allows both Model first and Database first development, or a mixture of both
    - .NET 4.0 support
    - Model views
    - Grouping of project elements
    - Linq-based project search
    - Value Type (DDD) support
    - Multiple database types in a single project
    - XML based project file
    - Integrated template editor
    - Relational Model Data management
    - Flexible attribute declaration for code generation, no more buddy classes needed
    - Fine-grained project validation
    - Update / Create DDL SQL scripts
    - Fast text-DSL based Quick mode
    - Powerful text-DSL based Quick Model functionality
    - Per target framework extensible settings framework
    - much much more...
    Of course we still support our own O/R mapper framework, the LLBLGen Pro v3.0 Runtime framework, which was updated with some minor features and was upgraded to use the DbProviderFactory system. Please watch the videos of the designer (more to come very soon!) to see some aspects of the new designer in action. The full version comes with Algorithmia in source code as well. Algorithmia is an algorithm library written for .NET 3.5 which powers the heart of the designer with a fine-grained undo/redo command framework, graph classes and much more. I'd like to thank all beta-testers, our support team and others who have helped us with this massive release. :)

    Read the article

  • Programming logic to group a user's activities like Facebook. E.g. Chris is now friends with A, B and C

    - by Chris Dowdeswell
    So I am trying to develop an activity feed for my site. Basically, if I UNION a bunch of activities into a feed I would end up with something like the following:

    Chris is now friends with Mark
    Chris is now friends with Dave

    What I want though is a neater way of grouping these similar posts so the feed doesn't give information overload... E.g.:

    Chris is now friends with Mark, Dave and 4 Others

    Any ideas on how I can approach this logically? I am using Classic ASP on SQL Server. Here is the UNION statement I have so far:

    SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
           U.Fname + ' ' + U.Sname As FullName,
           'said ' + WP.Post AS Activity, WP.Ctime
    FROM Users AS U
    LEFT JOIN Logins L ON L.userID = U.UserID
    LEFT OUTER JOIN WallPosts AS WP ON WP.userID = U.userID
    WHERE WP.Ctime IS NOT NULL
    UNION
    SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
           U.Fname + ' ' + U.Sname As FullName,
           'commented ' + C.Comment AS Activity, C.Ctime
    FROM Users AS U
    LEFT JOIN Logins L ON L.userID = U.UserID
    LEFT OUTER JOIN Comments AS C ON C.UserID = U.userID
    WHERE C.Ctime IS NOT NULL
    UNION
    SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
           U.Fname + ' ' + U.Sname As FullName,
           'connected with <a href="/profile.asp?un=' + (SELECT Logins.un FROM Logins WHERE Logins.userID = Cn.ToUserID) + '">' + (SELECT Users.Fname + ' ' + Users.Sname FROM Users WHERE userID = Cn.ToUserID) + '</a>' AS Activity,
           Cn.Ctime
    FROM Users AS U
    LEFT JOIN Logins L ON L.userID = U.UserID
    LEFT OUTER JOIN Connections AS Cn ON Cn.UserID = U.userID
    WHERE Cn.Ctime IS NOT NULL
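
    One way to get the "and 4 Others" behaviour, offered here as a rough sketch rather than a tested answer, is to aggregate the connection events per user (and, say, per day) before UNIONing them into the feed, then let the ASP page word the sentence based on the count. The table and column names below come from the question; the day-based grouping window and the FOR XML PATH name concatenation (SQL Server 2005 and later) are assumptions to adapt:

    -- Sketch: one feed row per user per day for "new friend" events.
    -- FriendNames collapses all friends added that day into a comma-separated
    -- list; FriendsAdded lets the rendering code switch to "X, Y and N others".
    SELECT U.UserID,
           MAX(U.Fname + ' ' + U.Sname) AS FullName,
           CONVERT(varchar(10), Cn.Ctime, 120) AS ActivityDay,
           COUNT(*) AS FriendsAdded,
           STUFF((SELECT ', ' + F.Fname + ' ' + F.Sname
                  FROM Connections AS Cn2
                  JOIN Users AS F ON F.UserID = Cn2.ToUserID
                  WHERE Cn2.UserID = U.UserID
                    AND CONVERT(varchar(10), Cn2.Ctime, 120) = CONVERT(varchar(10), Cn.Ctime, 120)
                  FOR XML PATH('')), 1, 2, '') AS FriendNames
    FROM Connections AS Cn
    JOIN Users AS U ON U.UserID = Cn.UserID
    GROUP BY U.UserID, CONVERT(varchar(10), Cn.Ctime, 120)

    In the Classic ASP layer, a row with FriendsAdded = 1 renders as "Chris is now friends with Mark"; for larger counts you show the first name or two from FriendNames and append "and N others".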

    Read the article

  • Silverlight Cream for April 25, 2010 -- #847

    - by Dave Campbell
    In this Issue: Michael Washington, David Poll, Andrea Boschin, Kunal Chowdhury(-2-), Lee, and Chad Campbell. Shoutout: Not at all Silverlight, but Kirupa has a great article up about Elastic Collisions. From SilverlightCream.com:
    - Easily decouple your MVVM ViewModel from your Model using RX Extensions: Michael Washington continues with his Simplified MVVM and uses Rx to let him put web service methods in the model and call them from the ViewModel.
    - A “refreshing” Authentication/Authorization experience with Silverlight 4: David Poll expands on his previous post and demonstrates returning the user to the page they were on prior to login, just the way you'd like it.
    - Using a PollingDuplex service to handle long running activities: Andrea Boschin is back discussing PollingDuplex again, this time discussing getting updates from a long-running process.
    - Introduction to Silverlight - Silverlight tutorials Chapter 1: Kunal Chowdhury has an intro-to-Silverlight tutorial that looks good... if you're just getting started, here's a place to look.
    - Introduction to Silverlight Application Development - Silverlight tutorial Chapter 2: In his 2nd introductory tutorial, Kunal Chowdhury gets into code and discusses User Controls.
    - Drag and Drop grouping in Datagrid: Lee has a post up where he's taken a DataGrid and reproduced some features of the 3rd-party controls, specifically dragging column headers to an area above the grid to sort the grid.
    - Silverlight – HTTP request to [Url] was aborted. …local channel being closed while the request was still in progress: Chad Campbell addresses timeout problems you may hit when connecting to your WCF service.
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Bit by bit comparison of using Java or Python for unit testing frameworks and Selenium

    - by Anirudh
    Currently we are in the process of finalizing which language, out of Java and Python, should be used for automation using Selenium WebDriver and a suitable unit testing framework. I have made use of JUnit, TestNG and WebDriver with Java and have designed frameworks without much fuss before. I am new to Python, though I have come across Python's unit testing frameworks like unittest, pyunit, nose etc., and I have doubts whether they would be as successful as TestNG with Java. I would like to analyze point by point when used with Selenium WebDriver:
    1) I have read that because Python is an interpreted language its execution is slower. So, say I have to run 1000 test cases which take about 6 hours in Java; would Python take considerably longer for the same test cases, like 8 hours?
    2) Can a Python unit testing framework be as flexible as a Java unit testing framework like TestNG in terms of grouping tests, parallel execution, skipping tests, etc.?
    3) Python with Selenium WebDriver doesn't have as large or as experienced a community as Java with WebDriver; if I run into trouble with something, am I more likely to find an answer for Java than for Python?
    4) Somewhat related to point 3, is it safe to rely on tools, plugins, or even WebDriver's Python bindings being continuously well maintained?
    5) One major drawback I see with Python's unit testing frameworks is the lack of boilerplate code or libraries for nicely illustrative HTML reports, preferably historical reports with pie charts, bar graphs and timelines, as we have in the case of Java with Allure, TestNG's default reports, ReportNG, or JUnit reports with the help of ANT (example screenshots: Allure reports; JUnit historical reports).
    Also, is there a way to write the framework and its libraries or utilities in Java around WebDriver so that they can easily be called from, or integrated with, Python code or modules? That would actually solve the problem for us, as the client would be able to use the code we write in Java and call it from their Python modules.

    Read the article

  • FotoJeff.com | My Photography Blog

    - by Jeff Julian
    I have recently been doing more photography, and Flickr was only allowing me to do so much with displaying the images: no customization of skin, no page grouping, no post-like pages.  So I decided to host a WordPress blog to host my images.  I really wanted to try WordPress to see what features single-hosted blog products offer that our multiple-host blog system could take advantage of.  So far the product is very cool; I can see how such a large developer network would help produce such cool “apps” for WP.  The product makes it very easy to make changes to your hosted environment that would be a little scary for a multiple-blog host.  I need to compare features for their hosted solution. Anyhow, FotoJeff.com is my new photography blog home.  I have been working with the Kansas City Rescue Mission a lot lately, so most of my shots are for them. FotoJeff.com – Photography Blog of Jeff Julian My hope is to make this blog again my technology blog, Staff of Geeks our Geekswithblogs.net announcement blog, and FotoJeff.com my photography blog.  I need to start dogfooding my thoughts on blogging and keep the noise down (by making more noise with this post :D). Technorati Tags: FotoJeff.com,Photography,Blogs,Geekswithblogs

    Read the article

  • Separator in dock in OS X

    - by sagar
    I have placed too many icons in my dock. There is by default a separator in the dock between applications & trash. I want to add more separators to my dock, for grouping purposes. Say, for example: Finder, Preview, iTunes, System Preferences and Activity Monitor in a Mac OS X group; Mozilla and Safari in a browsing group; oDesk, Skype, IP Messenger, Adium and TeamViewer in a communication group. In short, I just want to add separators so I can identify each group very quickly. Is it possible? If yes - how? Thanks in advance for sharing your knowledge. sagar

    Read the article

  • Is there an effective tab manager for Google Chrome?

    - by Mahn
    I've tried a few extensions but I wasn't very satisfied. I'm basically looking for an extension that allows me to group sets of tabs in such a way that I'm able to quickly switch between the groups within the same window. That is to say, if I had 5 Wikipedia tabs and 5 Stack Exchange tabs, ideally I would create 2 groups, hide the tabs of one of them, and switch between the Wikipedia and Stack Exchange tabs back and forth as needed without leaving the window (note that I'm not specifically looking for grouping by site; that was just a simplified example). Does an extension like this exist? PS: No, I don't actually have 5 Wikipedia tabs open while browsing Stack Exchange sites :-)

    Read the article

  • Excel 2010: if( , , "") not treated the same as blank for pivot table group by date

    - by Confused
    I'm trying to group by date in an Excel 2010 pivot table. The column with dates (i.e., the one I want to group by) should hold the later of 2 other columns' dates when neither is blank, i.e., with a formula like:

    =IF(AND(A4 <> "", B4 <> ""), MAX(A4,B4), "")

    Normally, the "" in the IF() formula acts the same as an empty cell. In this case, it is preventing me from grouping by date in the pivot table. If I filter the date column by (Blanks) and then clear the contents of all those cells, the pivot table does group by date OK. So "" is not being treated the same as an empty cell.

    Read the article

  • tag based file organizer

    - by Richie
    I'm finishing professional school and over the years have acquired a pile of notes and articles that I'd like to hang onto. I'd like to add to them and create a sort of archive of articles and files that may be useful down the road. I'd also like to organize this collection of files not only by simple grouping but also with tags; I feel that will make searching through them years later much easier. Any suggestions on software that would be good for this? Just a general file manager, something that supports tags that can be attached to each file? Thanks

    Read the article

  • CodePlex Daily Summary for Sunday, May 25, 2014

    CodePlex Daily Summary for Sunday, May 25, 2014

    Popular Releases:
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.71.0: Major improvement when saving large files.
    - SimCityPak: SimCityPak 0.3.1.0: Main new features: fixed importing of instance names (get rid of the Dutch translations); added advanced editor for decal dictionaries; added possibility to import .PNG to generate new decals; added advanced editor for path display entries.
    - Tiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0. Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last".
    - SEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated ship cube rotation to rotate the ship back to its original location (cubes are reoriented but the ship appears no different to an outsider), and to rotate grouped items. Repair now fixes the loss of grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the grouping on one part of the ship). Installation of this version wi...
    - Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both Xaml and JavaScript projects. See a detailed list of improvements, breaking changes and a general overview of version 2. Additional downloads: Smooth Streaming Client SDK for Windows 8 Applications; Smooth Streaming Client SDK for Windows 8.1 Applications; Smooth Streaming Client SDK for Windows Phone 8.1 Applications; Microsoft PlayReady Client SDK for Windows 8 Applications; Microsoft PlayReady Client SDK for Windows 8.1 Applicat...
    - TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update: new items, walls, and tiles. Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Fixed inst...
    - DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: Tested and adapted to the latest version of Highcharts 4.0.1. Added new chart type: Heatmap. Added new type PointPlacement which represents enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class; 997: Split container from JS; 1006: Series/Categories with numeric names don't render. DotNet.Highcharts.Samples: Updated s...
    - 51Degrees - Device Detection and Redirection: 3.1.1.12: Version 3.1 highlights: the device detection algorithm is over 100 times faster. Regular expressions and Levenshtein distance calculations are no longer used. The device detection algorithm performance is no longer limited by the number of device combinations contained in the dataset. Two modes of operation are available: Memory – the detection data set is loaded into memory and there is no continuous connection to the source data file. Slower initialisation time but faster detection performanc...
    - VisioAutomation: Visio PowerShell Module (VisioPS) 1.2.0: Documentation is at http://sdrv.ms/11AWkp7; screencast at http://vimeo.com/61329170. Files: for easy installation, download and run the MSI file; if you want to install manually, a ZIP file is provided. ChangeLog: Invoke-* cmdlets replaced with more specific PowerShell verbs; enhanced handling of values in User-Defined Cells; Get-VisioShape works more intuitively.
    - Nino Seisei Code Generator: Nino Seisei v2.1.1: GUI improvements; possibility of running cyclic Berta instructions one inside another with the new @BSETRECURSIVITY_ON instruction.
    - PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file; each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within. Added variable expansion to all paths in the configuration file. Added documentation for each of the Toolkit internal variables that can be used. Changed Install-MSUpdates to continue if any errors are encountered when installing updates. Implemented the /Force parameter on Update-GroupPolicy (ensures that any logoff message is ignored)...
    - ULS Log Viewer: Alpha 0.2: Changeset CI&T ULS Log 22/05/2014: added a clear-filter button; added the ability to filter entries by message text; added the option to open more than one log file in the same grid for analysis; added the About screen. The Level quick-filter field now loads only the levels found in the loaded log file; the quick message display field is set to read-only; added a status bar with file name information...
    - Application Parameters for Microsoft Dynamics CRM: Application Parameters (1.2.0.1): Fixed the plugin when updating parameters without changing the parameter type.
    - WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06; read more at http://wordmat.blogspot.com
    - Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed and also a template data cache issue.
    - Wsus Package Publisher: Release v1.3.1405.17: Added Russian translation (thanks to VSharmanov). Fixed a bug that made WPP crash if the user clicks on "Connect/Reload" while the Report tab is loading. Enhanced the way WPP stores the password for the remote computers command.
    - MoreTerra (Terraria World Viewer): MoreTerra 1.12.9: Compatibility: updated to account for new format 1.2.4.1. Issues: all items have not been added; some colors for new tiles may be off. I wanted to get this out so people have a usable program.
    - LINQ to Twitter: LINQ to Twitter v3.0.3: Supports .NET 4.5x, Windows Phone 8.x, Windows 8.x, Windows Azure, Xamarin.Android, and Xamarin.iOS. New features include Status/Lookup, Mute APIs, and bug fixes. 100% Twitter API v1.1 coverage, Async, Portable Class Library (PCL).
    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.26.0: Added access to the Release Notes during 'Check for Updates...'. Debug panels: added support for generic type members; members are grouped into 'Raw View' and 'Non-Public members' categories; implemented a dedicated (array-like) view for Lists and Dictionaries. http://download-codeplex.sec.s-msft.com/Download?ProjectName=csscriptnpp&DownloadId=846498

    New Projects:
    - 2111110107: Thanh Loi
    - 2111110152: Nguyễn Doãn Tuấn
    - ASP.NET MVC4 Warehouse management system: WMSNet is an easy to use warehouse management solution for manually operated warehouses and can smoothly be customized according to your requirements.
    - Code Snippets: Code snippets to empower developers to write quality code faster while adhering to industry standards.
    - CRM 2011 / CRM 2013 Form Helper: Library of CRM 2011 / CRM 2013 web resources that can be used on forms for making input simpler; e.g. automatic title case.
    - Kinect Translation Tool: From sign language to spoken text and vice versa. Software system components: 1. Kinect SDK ver. 1.7 for the Kinect sensor; 2. Windows 7 standard APIs - the audio, speech, and media APIs in Windows 7.
    - MDriven Getting started - MVC: This is the suggested getting-started template for doing MVC with MDriven. Fork it and begin to fill it up with your model.
    - Rules Engine Validator: A simple rules validator. It's based on a rule manager component (an implementation of the Command GoF pattern), with a main method called Validate().
    - Simple Connect To Db
    - z3-str-purdue: test geekchina.com

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview: The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We will configure a TCP port that we open (and close) through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same cloud service. Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so PS and other commands can be run on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to connect to them remotely and check the connections to the SQL instances, for demo purposes only; it is not actually required.

    Step 1 – Provision VMs and configure ports. Provision VM1 (DemoVM1), then provision VM2 (DemoVM2) with PowerShell remoting enabled and connected to DemoVM1 (see the example screenshots if using the portal). After provisioning the 2 VMs, note the default port configurations.

    Step 2 – Verify/confirm the TCP port used by the Database Engine. By default, the port will be configured to be 1433 – this can be changed to a different port number if desired.
    1. RDP to each of the VMs created above – this will also ensure the VMs complete SysPrep(ing) and finish configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server engine; in this case 1433.
    4. Update from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2.

    Step 3 – Remote PowerShell to DemoVM1:

    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows firewall:

    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port
    Rule Name:      DemoVM1Port
    ----------------------------------------------------------------------
    Enabled:        Yes
    Direction:      In
    Profiles:       Domain,Private,Public
    Grouping:
    LocalIP:        Any
    RemoteIP:       Any
    Protocol:       TCP
    LocalPort:      1433
    RemotePort:     Any
    Edge traversal: No
    Action:         Allow
    Ok.

    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1.

    Step 6 – Close port 1433 in the Windows firewall:

    netsh advfirewall firewall delete rule name=DemoVM1Port
    Deleted 1 rule(s).
    Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port
    No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1. Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM2 remotely to VM1.

    Scenario 2: VMs provisioned in different cloud services. Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, as above, to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP is again kept configured in both VMs for demo purposes only; it is not actually needed.

    Step 1 – Provision new VM3 (DemoVM3). After provisioning is complete, note the default port configurations.

    Step 2 – Add a public port to VM1 to connect to its DB instance from VM3. Since VM3 and VM1 are not connected in the same cloud service, we will need to specify the full DNS address, which includes the public port, when connecting between the machines. We shall add a public port 57000 in this case that is linked to private port 1433, which will be used later to connect to the DB instance.

    Step 3 – Remote PowerShell to DemoVM1:

    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows firewall:

    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port
    Rule Name:      DemoVM1Port
    ----------------------------------------------------------------------
    Enabled:        Yes
    Direction:      In
    Profiles:       Domain,Private,Public
    Grouping:
    LocalIP:        Any
    RemoteIP:       Any
    Protocol:       TCP
    LocalPort:      1433
    RemotePort:     Any
    Edge traversal: No
    Action:         Allow
    Ok.

    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1. RDP into VM3, launch SSMS and connect to VM1's DB instance. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 – Close port 1433 in the Windows firewall:

    netsh advfirewall firewall delete rule name=DemoVM1Port
    Deleted 1 rule(s).
    Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port
    No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1. Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM3 remotely to VM1.

    Conclusion: Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections. References: SQL Server in Windows Azure Virtual Machines. Originally posted at http://blogs.msdn.com/b/sqlosteam/
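
    Once the port is open and the connection is made, a quick way to confirm you really landed on DemoVM1's instance (a hypothetical sanity check, not part of the original walkthrough) is to run a trivial query from SSMS on the connecting VM:

    -- Hypothetical check after connecting from DemoVM2 or DemoVM3:
    -- confirms which server and instance the session actually reached.
    SELECT @@SERVERNAME AS ServerName,
           SERVERPROPERTY('InstanceName') AS InstanceName,
           SERVERPROPERTY('ProductVersion') AS ProductVersion;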

    Read the article

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
    Most Scrum teams lean toward 7-15 people **, though it's not clear how to scale Scrum among 100s of people, or how the effectiveness of a given team might be compared to another team within the group; meaning that beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed, compared, etc. Any suggestions related to either of these topics, or additional related topics that might be more important to account for in planning a large-scale Scrum grouping? ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research, which oddly appears to show that bigger teams (15+ people), not smaller teams (7 people), are better. UPDATE, "Re: Scrum doesn't scale": Made huge amounts of progress on personally researching the topic, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn: Scrum Does Scale: You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project. SOURCE: (ran across the book thanks to Ladislav Mrnka's answer)

    Read the article

  • CodePlex Daily Summary for Tuesday, May 27, 2014

    CodePlex Daily Summary for Tuesday, May 27, 2014

    Popular Releases:
    - AD4 Application Designer for flow based .NET applications: AD4.AppDesigner.18.11: AD4.Iteration.18.11 (rendering of flow chart). Bugfix: AppWrapperClassContainer, MakeWrapperBuildInstances, MakeFlowClassCtor. Note: currently not all elements are shown in the flow chart area! This version is able to generate the source code of some samples. See: Sample Applications (you find the flow description in the documents folder of each sample). The gluing code of the AD4.AppDesigner was created by the previous version of the AD4.AppDesigner. You find the current app definition in t...
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.
    - SQL Server Change Data Capture Application: Release 1: This is the initial release of SQLCDCApp. Download the zip file. Unzip and run SQLCDCApp.exe to start.
    - Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 and 2013 - 1.1.0.0: Issues fixed in this build: 1. Works for CRM 2013; 2. Lookup view not getting blocked.
    - Mathos Parser: 1.0.6.1 + source code: Removed the bug with comma/dot reported and fixed by Diego.
    - Dynamics AX Build Scripts: DynamicsAXCommunity PowerShell module (0.3.0): Change log (previous release was 0.2.4). 0.3.0: AxBuild (http://msdn.microsoft.com/en-us/library/dn528954.aspx) is supported and used by default; smarter dealing with versions, e.g. listing configurations for all versions at the same time; $AxVersionPreference shouldn't normally be needed; additional properties returned by Get-AXConfig. 0.2.5: -StartupCmd and -Wait added to Start-AXClient; handles console output sent from AX (http://dev.goshoom.net/en/2012/05/console-output-ax/).
    - xFunc: xFunc 2.15.6: Fixed #77.
    - QuickMon: Version 3.12: This release is mostly just to improve the UI for the Windows client. There are a few minor fixes as well. 1. Polling frequency presets fixed (slow, normal and fast). 2. Added collector call duration to history. 3. History now displays time, state, duration and details in separate columns. 4. Added a quick-launch drop-down list to the main window (only visible when the mouse hovers over it). 5. Removed the toolbar border. 6. Changed the Windows service collector to report an error only when all services from al...
    - VK.NET - Vkontakte API for .NET: VkNet 1.0.5
    - Kartris E-commerce: Kartris v2.6002: Minor release: double-check that the Logins_GetList sproc is present, as it sometimes seems to get missed when upgrading, which can give an error when viewing the logins page. Added CSV and TXT export option; this is not Google Products compatible, but can give a good base for creating a file for some other systems such as Amazon. Fixed some minor combination and options issues to improve the interface back and front. Turned bitcoin and some other gateways off by default. Minor CSS changes. Fixed currenc...
    - SimCityPak: SimCityPak 0.3.1.0: Main new features: fixed importing of instance names (get rid of the Dutch translations); added advanced editor for decal dictionaries; added possibility to import .PNG to generate new decals; added advanced editor for path display entries.
    - Tiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0. Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last".
    - C64 Studio: 3.5: Add: BASIC renumber function; !PET pseudo op; elseif for !if, } else { pseudo op; !TRACE pseudo op; watches are saved/restored with a solution; Ctrl-A now works in export assembly controls; preliminary graphic import dialog (not fully functional yet); range and block selection in sprite/charset editor (Shift-Click = range, Alt-Click = block). Fix: expression evaluator could miscalculate when both division and multiplication were in an expression without parentheses.
    - SEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated ship cube rotation to rotate the ship back to its original location (cubes are reoriented but the ship appears no different to an outsider), and to rotate grouped items. Repair now fixes the loss of grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the grouping on one part of the ship). Installation of this version wi...
    - Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both Xaml and JavaScript projects. See a detailed list of improvements, breaking changes and a general overview of version 2. Additional downloads: Smooth Streaming Client SDK for Windows 8 Applications; Smooth Streaming Client SDK for Windows 8.1 Applications; Smooth Streaming Client SDK for Windows Phone 8.1 Applications; Microsoft PlayReady Client SDK for Windows 8 Applications; Microsoft PlayReady Client SDK for Windows 8.1 Applicat...
    - TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update: new items, walls, and tiles. Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Fixed inst...
    - DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: Tested and adapted to the latest version of Highcharts 4.0.1. Added new chart type: Heatmap. Added new type PointPlacement which represents enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class; 997: Split container from JS; 1006: Series/Categories with numeric names don't render. DotNet.Highcharts.Samples: Updated s...
    - PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file; each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within. Added variable expansion to all paths in the configuration file. Added documentation for each of the Toolkit internal variables that can be used. Changed Install-MSUpdates to continue if any errors are encountered when installing updates. Implemented the /Force parameter on Update-GroupPolicy (ensures that any logoff message is ignored)...
    - WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06; read more at http://wordmat.blogspot.com

    New Projects:
    - 2112110016: OOP exercises
    - 2112110072: 2112110072 OOP exercises
    - A more efficient algorithm for an NP-complete problem: Algorithm for finding a Hamiltonian cycle.
    - Asprise OCR for C#/VB.NET Sample Applications: Embedded with a high-performance OCR (optical character recognition) engine, the Asprise OCR SDK library for Java, VB.NET, C#.NET, VC++, VB6.0, C, C++, Delphi on
    - Disenio1Obli1: Assignment developed by Santiago Perez and Germán Otero, Universidad ORT, 2014.
    - DuAnCuoiKy: End-of-term Windows Forms project; a school student management system.
    - Enterprise Integration and BPM - A WF Integration and BPM blueprint: A CQRS based EAI (Enterprise Application Integration) and BPM (Business Process Management) blueprint using message queues and Windows Workflow Foundation.
    - GU.ERP: sa
    - LaunchPad2: A remote device timing and control system.
    - NMusicCreator: A simple program for creating music.
    - PPM
    - SharePoint 2013 Latency Health Analyzer Rule: Analyze network latency for all SQL Servers using this custom health analyzer rule.
    - SnapPea: A simple MVC content management system.
    - The Bubble Index: The Bubble Index, a Java (TM) application to measure the level of financial bubbles. Published with GNU General Public License version 2 (GPLv2).
    - Tools for Manufacturing: t4mfg is a set of programs to be used by small to medium sized companies that manufacture goods to order.
    - Unix Project Template: A simple framework for creating unix projects using C++ and make.

    Read the article

  • SQL Query for Determining SharePoint ACL Sizes

    - by Damon Armstrong
    When a SharePoint Access Control List (ACL) size exceeds 64KB for a particular URL, the contents under that URL become unsearchable due to limitations in the SharePoint search engine.  The error most often seen is "The Parameter is Incorrect", which really helps to pinpoint the problem (it's difficult to convey extreme sarcasm here; please note that it is intended).  Exceeding this limit is not unheard of – it can happen when users brute-force security into working by continually overriding inherited permissions and assigning user-level access to securable objects. Once you have this issue, determining where you need to focus to fix the problem can be difficult.  Fortunately, there is a query that you can run on a content database that can help identify the issue:

    SELECT [SiteId],
           MIN([ScopeUrl]) AS URL,
           SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
           COUNT(*) AS AclEntries
    FROM [Perms] (NOLOCK)
    GROUP BY SiteId
    ORDER BY AclSizeKB DESC

    This query results in a list of ACL sizes and entry counts on a site-by-site basis.  You can also group by URL instead of by site to see a more granular breakdown:

    SELECT [ScopeUrl] AS URL,
           SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
           COUNT(*) AS AclEntries
    FROM [Perms] (NOLOCK)
    GROUP BY ScopeUrl
    ORDER BY AclSizeKB DESC
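
    Since the practical concern is the 64KB-per-URL ceiling, a natural variation (a sketch under the same [Perms] schema shown above, not from the original post) is to have SQL Server flag only the scopes that are approaching the limit:

    -- Sketch: list only ACL scopes close to the 64KB search limit.
    -- The 48KB warning threshold is an arbitrary assumption; tune to taste.
    SELECT [ScopeUrl] AS URL,
           SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
           COUNT(*) AS AclEntries
    FROM [Perms] (NOLOCK)
    GROUP BY ScopeUrl
    HAVING SUM(DATALENGTH([Acl]))/1024 >= 48
    ORDER BY AclSizeKB DESC

    Anything this returns is a candidate for consolidating individual user permissions into groups before search breaks for that URL.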

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >