Search Results

Search found 1611 results on 65 pages for 'technique'.


  • Refactoring Part 1 : Intuitive Investments

    - by Wes McClure
    Fear: it's what turns maintaining applications into a nightmare. Technology moves on, teams move on, someone is left to operate the application, and what was green is now perceived as brown. Eventually the business will evolve and changes will need to be made. The approach to those changes often dictates the long-term viability of the application. Fear of change, lack of passion and a lack of interest in understanding the domain often lead to a paranoid reluctance to do anything that doesn't involve duct tape and baling twine. Don't get me wrong, those have a place in the short-term viability of a project, but they don't have a place in the long term. Add to that "us versus them" attitudes between the original team and those who maintain the application, internal politics and other factors, and you have a recipe for disaster. This results in code that quickly becomes unmanageable. Even the cleverest of designs will eventually become suboptimal, and debt will mount that makes changes exponentially more difficult. This is where refactoring comes in, and it's something I'm very passionate about. Refactoring is about improving the process whereby we make change; it's an exponential investment in the process of change. Without it we will incur exponential complexity that halts productivity. Investments, especially in the long term, require intuition and reflection. How can we tackle new development effectively via evolving the original design and paying off debt that has been incurred? The longer we wait to ask and answer this question, the more it will cost us. Small requests don't warrant big changes, but realizing when changes now will pay off in the long term, and especially in the short term, is valuable. I have done my fair share of maintaining applications and continuously refactoring as needed, but recently I've begun work on a project that hasn't had much debt, if any, paid down in years. This is the first in a series of blog posts to try to capture the process, which is largely driven by intuition of smaller refactorings from other projects.

    Signs that refactoring could help:

    Testability. How can decreasing test time not pay dividends? One of the first things I found was that a very important piece often takes 30+ minutes to test. I can only imagine how much time this has cost historically, but more importantly the time it might cost in the coming weeks: I estimate at least 10-20 hours per person! This is simply unacceptable for almost any situation. As it turns out, after about 6 hours of working with this part of the application I was able to cut the time down to under 30 seconds! In less than the lost time of one week, I was able to fix the problem for all future weeks! If we can't test fast then we can't change fast, nor with confidence. Code is used by end users and it's also used by developers; consider your own needs in terms of the code base. Adding logic to enable/disable features during testing can help decouple parts of an application and lead to massive improvements (a small sketch below illustrates the idea). What exactly is so wrong about test code in real code? Often, these become features for operators and sometimes end users. If you cannot run an integration test within a test runner in your IDE, it's time to refactor.

    Readability. Are variables named meaningfully via a ubiquitous language? Is the code segmented functionally or behaviorally so as to minimize the complexity of any one area?
    Are aspects properly segmented to avoid confusion (security, logging, transactions, translations, dependency management, etc.)? Is the code declarative (what) or imperative (how)? The what matters, not the how. LINQ is a great abstraction of the what, not how, of collection manipulation. The Reactive framework is a great example of the what, not how, of managing streams of data. Are constants abstracted and named, or are they just inline? Do people constantly bitch about the code/design? If the code is hard to understand, it will be hard to change with confidence. It's a large undertaking if the original designers didn't pay much attention to readability, and as such it will never be done to "completion." Make sure not to go overboard; instead, use this as you change an application, not in lieu of changes (as with testability).

    Complexity. Simplicity will never be achieved; it's highly subjective. That said, a lot of code can be significantly simplified; tidy it up as you go. Refactoring will often converge upon a simplification step after enough time, so keep an eye out for this.

    Understandability. In the process of changing code, one often gains a better understanding of it. Refactoring code is a good way to learn how it works. However, it's usually best in combination with other reasons, in effect killing two birds with one stone. Often this is done when readability is poor, in which case understandability is usually poor as well. In the large undertaking we are making with this legacy application, we will be replacing it. Therefore, understanding all of its features is important, and this refactoring technique will come in very handy.

    Unused code. How can deleting things not help? This is a freebie in refactoring; it's very easy to detect with modern tools, especially in statically typed languages. We have VCS for a reason: if in doubt, delete it out (OK, that was cheesy)! If you don't know where to start when refactoring, this is an excellent starting point!

    Duplication. Do not pray and sacrifice to the anti-duplication gods; there are excellent examples where consolidated code is a horrible idea, usually with divergent domains. That said, mediocre developers live by copy/paste. Other times features converge and aren't combined. Tools for finding similar code are great for the copy/paste case. Knowledge of the domain helps identify convergent concepts that often lead to convergent solutions and will give intuition for where to look for conceptual repetition.

    80/20 and the Boy Scouts. It's often said that 80% of the time, 20% of the application is used most. These tend to be the parts that are changed. There are also parts of the code where 80% of the time is spent changing 20% (probably for all the refactoring smells above). I focus on these areas any time I make a change and follow the philosophy of the Boy Scout in cleaning up more than I messed up. If I spend 2 hours changing an application, in the 20%, I'll always spend at least 15 minutes cleaning it or nearby areas. This gives a huge productivity edge over developers that don't. Ironically, after a short period of time the 20% shrinks enough that we don't have to spend 80% of our time there and can move on to other areas.

    Refactoring is highly subjective; never attempt to refactor to completion! Learn to be comfortable with leaving one part of the application in a better state than others. It's an evolution, not a revolution.
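    As a concrete illustration of the testability point above, here is a minimal sketch (in Java, with hypothetical class and property names that are not from the original post) of the kind of enable/disable toggle that lets a slow external dependency be bypassed during testing:

        // Hypothetical toggle: integration tests swap in an in-memory stub for a slow external system.
        interface SettlementGateway {
            void post(String batchId);
        }

        final class RemoteGateway implements SettlementGateway {
            public void post(String batchId) { /* slow call to the real system */ }
        }

        final class InMemoryGateway implements SettlementGateway {
            public void post(String batchId) { /* records the call so tests can assert on it */ }
        }

        public final class GatewayFactory {
            // The system property name is made up for illustration; in production the default (real gateway) applies.
            public static SettlementGateway create() {
                boolean useStub = Boolean.getBoolean("app.test.useStubGateway");
                return useStub ? new InMemoryGateway() : new RemoteGateway();
            }
        }

    As the post notes, toggles like this often graduate into genuine operator features rather than remaining "test code in real code."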
These are some simple areas to look into when making changes and can help get one started in the process.  I’ve often found that refactoring is a convergent process towards simplicity that sometimes spans a few hours but often can lead to massive simplifications over the timespan of weeks and months of regular development.

    Read the article

  • The Presentation Isn't Over Until It's Over

    - by Phil Factor
    The senior corporate dignitaries settled into their seats looking important in a blue-suited sort of way. The lights dimmed as I strode out in front to give my presentation. I had ten vital minutes to make my pitch. I was about to dazzle the top management of a large software company who were considering the purchase of my software product. I would present them with a dazzling synthesis of diagrams and graphs, followed by a live demonstration of my software projected from my laptop. My preparation had been meticulous: it had to be; a year's hard work was at stake, so I'd prepared it to perfection. I stood up and took them all in, with a gaze of sublime confidence. Then the laptop expired. There are several possible alternative plans of action when this happens: A. Stare at the smoking laptop vacuously, flapping one's mouth slowly up and down. B. Stand frozen like a statue, locked in indecision between fright and flight. C. Run out of the room, weeping. D. Pretend that this was all planned. E. Abandon the presentation in favour of a stilted and tedious dissertation about the software. F. Shake your fist at the sky, and curse the sense of humour of your preferred deity. I started with a few seconds on plan B, normally referred to as the 'rabbit in the headlamps of the car' technique. Suddenly, a little voice inside my head spoke. It spoke the famous inane words of Yogi Berra: 'The game isn't over until it's over.' 'Too right', I thought. What to do? I ran through the alternatives A-F inclusive in my mind but none appealed to me. I was completely unprepared for this. Longevity has since taught me more than I wanted to know about the wacky sense of humour of fate, and nowadays I would have taken two laptops. I hadn't, but decided to do the presentation anyway as planned. I started out ignoring the dead laptop, pretending instead that it was still working. The audience looked startled. They were expecting plan B to be succeeded by plan C, I suspect. They weren't used to denial on this scale. After my introductory talk, which didn't require any visuals, I came to the diagram that described the application I'd written. I'd taken ages over it and it was hot stuff. Well, it would have been, had it been projected onto the screen. It wasn't. Before I describe what happened then, I must explain that I have thespian tendencies. My triumph as Professor Higgins in My Fair Lady at the local operatic society is now long forgotten, but I remember, at the time of my finest performance, the moment that, glancing up over the vast audience of moist-eyed faces during the poignant scene between Eliza and Higgins at the end, I realised that I had a talent that one day could possibly be harnessed for commercial use. I just talked about the diagram as if it were there, throwing in some extra description. The audience nodded helpfully when I'd done enough. Emboldened, I began a sort of mime, well, more of a ballet, to represent each slide as I came to it. Heaven knows I'd done my preparation and, in my mind's eye, I could see every detail, but I had to somehow project the reality of that vision to the audience, much the same way any actor playing Macbeth must do with the ghost of Banquo. My desperation gave me a manic energy. If you've ever demonstrated a Windows application entirely by mime, gesture and florid description, you'll understand the scale of the challenge, but then I had nothing to lose.
    With a brief sentence of description here and there, and arms flailing whilst outlining the size and shape of graphs and diagrams, I used the many tricks of mime, gesture and body language learned from playing Captain Hook, or the Sheriff of Nottingham in pantomime. I set out determinedly on my desperate venture. There wasn't time to do anything but focus on the challenge of the task: the world around me narrowed down to ten faces and my presentation: ten souls who had to be hypnotized into seeing a Windows application, one that was slick, well organized and functional. I don't remember the details. Eight minutes of my life are gone completely. I was a thespian berserker. I know, however, that I followed the basic plan of building the presentation in a carefully controlled crescendo until the dazzling finale where the results were displayed on-screen. 'And here you see the results, neatly formatted and grouped carefully to enhance the significance of the figures, together with running trend-graphs!' I waved a mime to signify an animated window opening, and looked up, in my first pause, to gaze defiantly at the audience. It was a sight I'll never forget. Ten pairs of eyes were gazing in rapt attention at the imaginary window, and several pairs of eyes were glancing at the imaginary graphs and figures. I hadn't had an audience like that since my starring role in Beauty and the Beast. At that moment, I realized that my desperate ploy might work. I sat down, slightly winded, when my ten minutes were up. For the first and last time in my life, the audience of a 'PowerPoint' presentation burst into spontaneous applause. 'Any questions?' 'Yes, have you got an agent?' Yes, in case you're wondering, I got the deal. They bought the software product from me there and then. However, it was a life-changing experience for me and I have never again trusted technology as part of a presentation. Even if things can't go wrong, they'll go wrong, and they'll kill the flow of what you're presenting. If you can't do something without the techno-props, then you shouldn't do it. The greatest lesson of all is that great presentations require preparation and 'stage presence' rather than fancy graphics. They're a great supporting aid, but they should never dominate to the point that you're lost without them.

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    Im trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up with trying to implement the approach NVIDIA used in their Cascade Demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at Slide 82. At the moment I am completly stuck with following problem: When I am far away the displacement seems to work. But as more I move closer to my surface, the texture gets bent in x-axis and somehow it looks like there is a little bent in general in one direction. EDIT: I added a video: click I added some screen to illustrate the problem: Well I tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just bad named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix, from incoming normal, tangent and binormal in world space. Calculates the light and eye direction in worldspace. And finally transforms the light and eye direction into tangent space. 
FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in TraceRay method, while always using mipmap level 0. Most of the code is from NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords for getting the displaced normal from the normal map and the color from the color map. I looking forward for some ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something wrong in here?

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don't "trample" on each other's toes, resulting in data corruption. In a nutshell, here's the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and, when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there's clearly no danger of a collision, you know what I mean. The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if there weren't, you could get by without holding any lock at all. In many "well-behaved" workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the "normal" behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness. So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time it will be the case that some copies have received the update and others haven't. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they're all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all's good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies. So what's the problem with this? There are two major issues: First, IT'S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies.
    That's pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the outdated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency). Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are outdated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don't handle this correctly, and worse, don't even know it. Imagine having 100s of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was to read a data item. Wouldn't it be simpler to avoid this problem in the first place? Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and they are also aware of the "currency" of a replica. In other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many). So, back to my original point. A well-designed and well-architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high-performance system. If you've ever programmed an Oracle NoSQL Database application, you'll know the difference!
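    To make the staleness-based routing concrete, here is a minimal sketch of the decision a master-slave client driver can make. The Java below uses entirely hypothetical interfaces and method names for illustration; it is not the actual Oracle NoSQL Database API:

        import java.time.Duration;
        import java.util.List;

        // Hypothetical driver-side types, named for illustration only.
        interface Replica {
            Record get(String key);
            Duration estimatedLag();   // how far behind the master this copy is believed to be
        }
        interface Record { }

        final class ReadRouter {
            private final Replica master;
            private final List<Replica> replicas;

            ReadRouter(Replica master, List<Replica> replicas) {
                this.master = master;
                this.replicas = replicas;
            }

            // Each read is a single request to exactly one copy, unlike a peer-to-peer quorum read.
            Record read(String key, Duration maxStaleness) {
                if (maxStaleness.isZero()) {
                    return master.get(key);            // absolute consistency: go to the master
                }
                for (Replica r : replicas) {
                    if (r.estimatedLag().compareTo(maxStaleness) <= 0) {
                        return r.get(key);             // a sufficiently fresh replica will do
                    }
                }
                return master.get(key);                // no replica fresh enough; fall back to the master
            }
        }

    The point of the sketch is the message count: one request per read, routed to the cheapest copy that satisfies the application's staleness budget.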

    Read the article

  • Cancelling Route Navigation in AngularJS Controllers

    - by dwahlin
    If you're new to AngularJS check out my AngularJS in 60-ish Minutes video tutorial or download the free eBook. Also check out The AngularJS Magazine for up-to-date information on using AngularJS to build Single Page Applications (SPAs). Routing provides a nice way to associate views with controllers in AngularJS using a minimal amount of code. While a user is normally able to navigate directly to a specific route, there may be times when a user triggers a route change before they've finalized an important action such as saving data. In these types of situations you may want to cancel the route navigation and ask the user if they'd like to finish what they were doing so that their data isn't lost. In this post I'll talk about a technique that can be used to accomplish this type of routing task.

    The $locationChangeStart Event

    When route navigation occurs in an AngularJS application a few events are raised. One is named $locationChangeStart and the other is named $routeChangeStart (there are other events as well). At the current time (version 1.2) the $routeChangeStart event doesn't provide a way to cancel route navigation; however, the $locationChangeStart event can be used to cancel navigation. If you dig into the AngularJS core script you'll find the following code that shows how the $locationChangeStart event is raised as the $browser object's onUrlChange() function is invoked: $browser.onUrlChange(function (newUrl) { if ($location.absUrl() != newUrl) { if ($rootScope.$broadcast('$locationChangeStart', newUrl, $location.absUrl()).defaultPrevented) { $browser.url($location.absUrl()); return; } $rootScope.$evalAsync(function () { var oldUrl = $location.absUrl(); $location.$$parse(newUrl); afterLocationChange(oldUrl); }); if (!$rootScope.$$phase) $rootScope.$digest(); } }); The key part of the code is the call to $broadcast. This call broadcasts the $locationChangeStart event to all child scopes so that they can be notified before a location change is made. To handle the $locationChangeStart event you can use the $rootScope.$on() function. For this example I've added a call to $on() into a function that is called immediately after the controller is invoked: function init() { //initialize data here.. //Make sure they're warned if they made a change but didn't save it //Call to $on returns a "deregistration" function that can be called to //remove the listener (see routeChange() for an example of using it) onRouteChangeOff = $rootScope.$on('$locationChangeStart', routeChange); } This code listens for the $locationChangeStart event and calls routeChange() when it occurs. The value returned from calling $on is a "deregistration" function that can be called to detach from the event. In this case the deregistration function is named onRouteChangeOff (it's accessible throughout the controller). You'll see how the onRouteChangeOff function is used in just a moment.
    Cancelling Route Navigation

    The routeChange() callback triggered by the $locationChangeStart event displays a modal dialog to prompt the user. Here's the code for routeChange(): function routeChange(event, newUrl) { //Navigate to newUrl if the form isn't dirty if (!$scope.editForm.$dirty) return; var modalOptions = { closeButtonText: 'Cancel', actionButtonText: 'Ignore Changes', headerText: 'Unsaved Changes', bodyText: 'You have unsaved changes. Leave the page?' }; modalService.showModal({}, modalOptions).then(function (result) { if (result === 'ok') { onRouteChangeOff(); //Stop listening for location changes $location.path(newUrl); //Go to page they're interested in } }); //prevent navigation by default since we'll handle it //once the user selects a dialog option event.preventDefault(); return; } Looking at the parameters of routeChange() you can see that it accepts an event object and the new route that the user is trying to navigate to. The event object is used to prevent navigation since we need to prompt the user before leaving the current view. Notice the call to event.preventDefault() at the end of the function. The modal dialog is shown by calling modalService.showModal() (see my previous post for more information about the custom modalService that acts as a wrapper around Angular UI Bootstrap's $modal service). If the user selects "Ignore Changes" then their changes will be discarded and the application will navigate to the route they intended to go to originally. This is done by first detaching from the $locationChangeStart event by calling onRouteChangeOff() (recall that this is the function returned from the call to $on()) so that we don't get stuck in a never-ending cycle where the dialog continues to display when they click the "Ignore Changes" button. A call is then made to $location.path(newUrl) to handle navigating to the target view. If the user cancels the operation they'll stay on the current view.

    Conclusion

    The key to cancelling routes is understanding how to work with the $locationChangeStart event and cancelling it so that route navigation doesn't occur. I'm hoping that in the future the same type of task can be done using the $routeChangeStart event, but for now this code gets the job done. You can see this code in action in the Customer Manager application available on GitHub (specifically the customerEdit view). Learn more about the application here.

    Read the article

  • Passing a parameter so that it cannot be changed – C#

    - by nmarun
    I read this requirement of not allowing a user to change the value of a property passed as a parameter to a method. In C++, as far as I could recall (it’s been over 10 yrs, so I had to refresh memory), you can pass ‘const’ to a function parameter and this ensures that the parameter cannot be changed inside the scope of the function. There’s no such direct way of doing this in C#, but that does not mean it cannot be done!! Ok, so this ‘not-so-direct’ technique depends on the type of the parameter – a simple property or a collection. Parameter as a simple property: This is quite easy (and you might have guessed it already). Bulent Ozkir clearly explains how this can be done here. Parameter as a collection property: Obviously the above does not work if the parameter is a collection of some type. Let’s dig-in. Suppose I need to create a collection of type KeyTitle as defined below. 1: public class KeyTitle 2: { 3: public int Key { get; set; } 4: public string Title { get; set; } 5: } My class is declared as below: 1: public class Class1 2: { 3: public Class1() 4: { 5: MyKeyTitleList = new List<KeyTitle>(); 6: } 7: 8: public List<KeyTitle> MyKeyTitleList { get; set; } 9: public ReadOnlyCollection<KeyTitle> ReadonlyKeyTitleCollection 10: { 11: // .AsReadOnly creates a ReadOnlyCollection<> type 12: get { return MyKeyTitleList.AsReadOnly(); } 13: } 14: } See the .AsReadOnly() method used in the second property? As MSDN says it: “Returns a read-only IList<T> wrapper for the current collection.” Knowing this, I can implement my code as: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8:  9: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 10:  11: Console.ReadLine(); 12: } 13:  14: private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection) 15: { 16: // can only read 17: for (int i = 0; i < readOnlyCollection.Count; i++) 18: { 19: Console.WriteLine("{0} - {1}", readOnlyCollection[i].Key, readOnlyCollection[i].Title); 20: } 21: // Add() - not allowed 22: // even the indexer does not have a setter 23: } The output is as expected: The below image shows two things. In the first line, I’ve tried to access an element in my read-only collection through an indexer. It shows that the ReadOnlyCollection<> does not have a setter on the indexer. The second line tells that there’s no ‘Add()’ method for this type of collection. The capture below shows there’s no ‘Remove()’ method either, there-by eliminating all ways of modifying a collection. Mission accomplished… right? Now, even if you have a collection of different type, all you need to do is to somehow cast (used loosely) it to a List<> and then do a .AsReadOnly() to get a ReadOnlyCollection of your custom collection type. As an example, if you have an IDictionary<int, string>, you can create a List<T> of this type with a wrapper class (KeyTitle in our case). 1: public IDictionary<int, string> MyDictionary { get; set; } 2:  3: public ReadOnlyCollection<KeyTitle> ReadonlyDictionary 4: { 5: get 6: { 7: return (from item in MyDictionary 8: select new KeyTitle 9: { 10: Key = item.Key, 11: Title = item.Value, 12: }).ToList().AsReadOnly(); 13: } 14: } Cool huh? 
Just one thing you need to know about the .AsReadOnly() method is that the only way to modify your ReadOnlyCollection<> is to modify the original collection. So doing: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 9:  10: Console.WriteLine(); 11:  12: class1.MyKeyTitleList.Add(new KeyTitle { Key = 5, Title = "mno" }); 13: class1.MyKeyTitleList[2] = new KeyTitle{Key = 3, Title = "GHI"}; 14: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 15:  16: Console.ReadLine(); 17: } Gives me the output of: See that the second element’s Title is changed to upper-case and the fifth element also gets displayed even though we’re still looping through the same ReadOnlyCollection<KeyTitle>. Verdict: Now you know of a way to implement ‘Method(const param1)’ in your code!
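    The same "live read-only view" behaviour that the post demonstrates with AsReadOnly() exists elsewhere; as a point of comparison (not from the original article), here is a minimal Java sketch using Collections.unmodifiableList, which has identical semantics: the view rejects mutation, yet changes made through the original list remain visible through it.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class ReadOnlyViewDemo {
            public static void main(String[] args) {
                List<String> titles = new ArrayList<>();
                titles.add("abc");
                titles.add("def");

                // A read-only *view* over the live list, analogous to List<T>.AsReadOnly() in C#.
                List<String> readOnly = Collections.unmodifiableList(titles);

                System.out.println(readOnly.get(1));   // "def" - reading is allowed

                titles.set(1, "DEF");                  // mutate the underlying list...
                System.out.println(readOnly.get(1));   // ..."DEF" - the view sees the change

                try {
                    readOnly.add("ghi");               // ...but mutating the view throws
                } catch (UnsupportedOperationException e) {
                    System.out.println("view is read-only");
                }
            }
        }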

    Read the article

  • Delight and Excite

    - by Applications User Experience
    Mick McGee, CEO & President, EchoUser. Editor's Note: EchoUser is a User Experience design firm in San Francisco and a member of the Oracle Usability Advisory Board. Mick and his staff regularly consult on Oracle Applications UX projects. Being part of a user experience design firm, we have the luxury of working with a lot of great people across many great companies. We get to help people solve their problems. At least we used to. The basic design challenge is still the same; however, the goal is not necessarily to solve "problems" anymore; it is, "I want our products to delight and excite!" The question for us as UX professionals is how to design to those goals, and then how to assess them from a usability perspective. I'm not sure where I first heard "delight and excite" (a book? A blog post? A Facebook status? A Steve Jobs quote?), but now I hear these listed as user experience goals all the time. In particular, somewhat paradoxically, I routinely hear them in enterprise software conversations. And when asking these same enterprise companies what will make the project successful, we very often hear, "Make it like Apple." In past days it was "make it like Yahoo" (or Amazon, or Google), but now Apple is the common benchmark. Steve Jobs and Apple were not secrets, but with Jobs' passing and Apple becoming the world's most valuable company in the last year, the impact of great design and experience is suddenly very widespread. In particular, users' expectations have gone way up. Being an enterprise company is no shield to the general expectations that users now have, for all products.

    Designing a "Minimum Viable Product"

    The user experience challenge has historically been, to echo the words of Eric Ries (author of Lean Startup), to create a "minimum viable product": the proverbial "make it good enough". But, in our profession, the "minimum viable" part of that phrase has oftentimes, unfortunately, referred to the design and user experience. Technology typically dominated the focus of the biggest, most successful companies. Few have had the laser focus of Apple to also create and sell design and user experience alongside great technology. But now that Apple is the most valuable company in the world, copying their success is a common undertaking. Great design is now a premium offering that everyone wants, from the one-person startup to the largest companies, consumer and enterprise. This emerging business paradigm will have significant impact across the user experience design process and profession. One area that particularly interests me is: how are we going to evaluate these new emerging "delight and excite" experiences, which are further customized to each particular domain?

    How to Measure "Delight and Excite"

    Traditional usability measures of task completion rate, assists, time, and errors are still extremely useful in many situations; however, they are too blunt to offer much insight into emerging experiences. "Satisfaction" is usually assessed in user testing, with roughly equivalent importance to the above objective metrics. Various surveys and scales have provided ways to measure satisfying UX, with whatever questions they include. However, to meet the demands of new business goals and keep users at the center of design and development processes, we have to explore new methods to better capture custom-experience goals and emotion-driven user responses.
We have had success assessing custom experiences, including “delight and excite”, by employing a variety of user testing methods that tend to combine formative and summative techniques (formative being focused more on identifying usability issues and ways to improve design, and summative focused more on metrics). Our most successful tool has been one we’ve been using for a long time, Magnitude Estimation Technique (MET). But it’s not necessarily about MET as a measure, rather how it is created. Caption: For one client, EchoUser did two rounds of testing.  Each test was a mix of performing representative tasks and gathering qualitative impressions. Each user participated in an in-person moderated 1-on-1 session for 1 hour, using a testing set-up where they held the phone. The primary goal was to identify usability issues and recommend design improvements. MET is based on a definition of the desired experience, which users will then use to rate items of interest (usually tasks in a usability test). In other words, a custom experience definition needs to be created. This can then be used to measure satisfaction in accomplishing tasks; “delight and excite”; or anything else from strategic goals, user demands, or elsewhere. For reference, our standard MET definition in usability testing is: “User experience is your perception of how easy to use, well designed and productive an interface is to complete tasks.” Articulating the User Experience We’ve helped construct experience definitions for several clients to better match their business goals. One example is a modification of the above that was needed for a company that makes medical-related products: “User experience is your perception of how easy to use, well-designed, productive and safe an interface is for conducting tasks. ‘Safe’ is how free an environment (including devices, software, facilities, people, etc.) is from danger, risk, and injury.” Another example is from a company that is pushing hard to incorporate “delight” into their enterprise business line: “User experience is your perception of a product’s ease of use and learning, satisfaction and delight in design, and ability to accomplish objectives.” I find the last one particularly compelling in that there is little that identifies the experience as being for a highly technical enterprise application. That definition could easily be applied to any number of consumer products. We have gone further than the above, including “sexy” and “cool” where decision-makers insisted they were part of the desired experience. We also applied it to completely different experiences where the “interface” was, for example, riding public transit, the “tasks” were train rides, and we followed the participants through the train-riding journey and rated various aspects accordingly: “A good public transportation experience is a cost-effective way of reliably, conveniently, and safely getting me to my intended destination on time.” To construct these definitions, we’ve employed both bottom-up and top-down approaches, depending on circumstances. For bottom-up, user inputs help dictate the terms that best fit the desired experience (usually by way of cluster and factor analysis). Top-down depends on strategic, visionary goals expressed by upper management that we then attempt to integrate into product development (e.g., “delight and excite”). We like a combination of both approaches to push the innovation envelope, but still be mindful of current user concerns. 
Hopefully the idea of crafting your own custom experience, and a way to measure it, can provide you with some ideas how you can adapt your user experience needs to whatever company you are in. Whether product-development or service-oriented, nearly every company is ultimately providing a user experience. The Bottom Line Creating great experiences may have been popularized by Steve Jobs and Apple, but I’ll be honest, it’s a good feeling to be moving from “good enough” to “delight and excite,” despite the challenge that entails. In fact, it’s because of that challenge that we will expand what we do as UX professionals to help deliver and assess those experiences. I’m excited to see how we, Oracle, and the rest of the industry will live up to that challenge.

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes

    Explain the characteristics of memory systems. Describe the memory hierarchy. Discuss cache memory principles. Discuss issues relevant to cache design. Describe the cache organization of the Pentium.

    Computer Memory Systems

    There are key characteristics of memory: Location – internal or external. Capacity – expressed in terms of bytes. Unit of Transfer – the number of bits read out of or written into memory at a time. Access Method – sequential, direct, random or associative. From a user's perspective, the two most important characteristics of memory are capacity and performance (access time, memory cycle time, transfer rate). The trade-off for memory happens along three axes: faster access time, greater cost per bit; greater capacity, smaller cost per bit; greater capacity, slower access time. This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs: decreasing cost per bit, increasing capacity, increasing access time, and decreasing frequency of access of the memory by the processor. The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways: Disk writes are clustered; instead of many small transfers of data, we have a few large transfers of data, which improves disk performance and minimizes processor involvement. Some data designed for write-out may be referenced by a program before the next dump to disk; in that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if this is in cache memory. If it is, the data is supplied; if it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures: cache addresses, cache size, mapping function, replacement algorithm, write policy, line size, and number of caches.

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
    The disadvantage of a logical (virtual-address) cache is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size

    We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used: Direct – the simplest technique; maps each block of main memory into only one possible cache line (a short sketch below illustrates this case). Associative – permits each main memory block to be loaded into any line of the cache. Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages. For detailed explanations of each approach, read the textbook (pages 148–154).

    Replacement Algorithm

    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches: LRU (least recently used), FIFO (first in first out), LFU (least frequently used), and random selection.

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider. If no writes to that block have happened in the cache, discard it. If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to main memory. There are several approaches to achieve this, including: Write Through – all writes to the cache are done to main memory as well at the point of the change. Write Back – when a block is replaced, all dirty bits are written back to main memory. The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play: Larger blocks reduce the number of blocks that fit into a cache; because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after they are fetched. As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future. The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
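    To make the direct-mapped case concrete, a line index and a tag can be derived from a block address with a modulo and a division. The small Java sketch below is illustrative only; the sizes (64-byte blocks, 1024 lines) are assumptions, not figures from the textbook:

        // Direct mapping: each main memory block can occupy exactly one cache line.
        //   line = blockAddress mod NUM_LINES
        //   tag  = blockAddress / NUM_LINES   (stored to tell apart blocks that share a line)
        public final class DirectMappedCache {
            private static final int BLOCK_SIZE = 64;    // bytes per block (assumed)
            private static final int NUM_LINES  = 1024;  // number of cache lines (assumed)

            private final long[] tags = new long[NUM_LINES];
            private final boolean[] valid = new boolean[NUM_LINES];

            // Returns true on a hit; on a miss, models fetching the block into its single possible line.
            public boolean access(long byteAddress) {
                long block = byteAddress / BLOCK_SIZE;
                int line = (int) (block % NUM_LINES);
                long tag = block / NUM_LINES;

                if (valid[line] && tags[line] == tag) {
                    return true;                          // hit
                }
                valid[line] = true;                       // miss: replace whatever was in this line
                tags[line] = tag;
                return false;
            }
        }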
    Pentium 4 and ARM cache organizations

    The processor core consists of four major components: Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L2 instruction cache. Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future. Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers. Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources.

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly. Configuring the size of the pool in the data source is somewhere between an art and a science – this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections. The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach. When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS, but none are available yet). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time. There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations. There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use.
When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment. Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of an unused existing connection and get a new one with the correct state when needed. So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The applications should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users. If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough so that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi-Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended. In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.

    Read the article

  • How to Configure Windows Machine to Allow File Sharing with DNS Alias

    - by Michael Ferrante
    I have not seen a single article posted anywhere online that brings together all the settings one would need to do to make this work properly on Windows, so I thought I would post it here. To facilitate failover schemes, a common technique is to use DNS CNAME records (DNS Aliases) for different machine roles. Then instead of changing the Windows computername of the actual machine name, one can switch a DNS record to point to a new host. This can work on Microsoft Windows machines, but to make it work with file sharing the following configuration steps need to be taken. Outline The Problem The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) Providing browse capabilities for multiple NetBIOS names (OptionalNames) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) References 1. The Problem On Windows machines, file sharing can work via the computer name, with or without full qualification, or by the IP Address. By default, however, filesharing will not work with arbitrary DNS aliases. To enable filesharing and other Windows services to work with DNS aliases, you must make registry changes as detailed below and reboot the machine. 2. The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) This change alone will allow other machines on the network to connect to the machine using any arbitrary hostname. (However this change will not allow a machine to connect to itself via a hostname, see BackConnectionHostNames below). Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value DisableStrictNameChecking of type DWORD set to 1. Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) This change is necessary for a DNS alias to work with filesharing from a machine to find itself. This creates the Local Security Authority host names that can be referenced in an NTLM authentication request. To do this, follow these steps for all the nodes on the client computer: To the registry subkey HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0, add new Multi-String Value BackConnectionHostNames In the Value data box, type the CNAME or the DNS alias, that is used for the local shares on the computer, and then click OK. Note: Type each host name on a separate line. Providing browse capabilities for multiple NetBIOS names (OptionalNames) Allows ability to see the network alias in the network browse list. Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value OptionalNames of type Multi-String Add in a newline delimited list of names that should be registered under the NetBIOS browse entries Names should match NetBIOS conventions (i.e. not FQDN, just hostname) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) NOTE: Should not need to do this for basic functions to work, documented here for completeness. We had one situation in which the DNS alias was not working because there was an old SPN record interfering, so if other steps aren't working check if there are any stray SPN records. You must register the Kerberos service principal names (SPNs), the host name, and the fully-qualified domain name (FQDN) for all the new DNS alias (CNAME) records. 
If you do not do this, a Kerberos ticket request for a DNS alias (CNAME) record may fail and return the error code KDC_ERR_S_SPRINCIPAL_UNKNOWN. To view the Kerberos SPNs for the new DNS alias records, use the Setspn command-line tool (setspn.exe). The Setspn tool is included in Windows Server 2003 Support Tools. You can install Windows Server 2003 Support Tools from the Support\Tools folder of the Windows Server 2003 startup disk. How to use the tool to list all records for a computername: setspn -L computername To register the SPN for the DNS alias (CNAME) records, use the Setspn tool with the following syntax: setspn -A host/your_ALIAS_name computername setspn -A host/your_ALIAS_name.company.com computername 3. References All the Microsoft references work via: http://support.microsoft.com/kb/ Connecting to SMB share on a Windows 2000-based computer or a Windows Server 2003-based computer may not work with an alias name Covers the basics of making file sharing work properly with DNS alias records from other computers to the server computer. KB281308 Error message when you try to access a server locally by using its FQDN or its CNAME alias after you install Windows Server 2003 Service Pack 1: "Access denied" or "No network provider accepted the given network path" Covers how to make the DNS alias work with file sharing from the file server itself. KB926642 How to consolidate print servers by using DNS alias (CNAME) records in Windows Server 2003 and in Windows 2000 Server Covers more complex scenarios in which records in Active Directory may need to be updated for certain services to work properly and for browsing for such services to work properly, how to register the Kerberos service principal names (SPNs). KB870911 Distributed File System update to support consolidation roots in Windows Server 2003 Covers even more complex scenarios with DFS (discusses OptionalNames). KB829885
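    For readers who prefer to script the two registry changes from section 2 rather than edit them by hand, below is a minimal C# sketch. It is my own illustration, not part of the original article: the alias name "fileserver-alias" is a placeholder, the process must run elevated, and a reboot is still required afterwards.

    // Hedged sketch: applies DisableStrictNameChecking and BackConnectionHostNames
    // as described above. Run elevated; reboot afterwards for the changes to apply.
    using Microsoft.Win32;

    class DnsAliasRegistrySketch
    {
        static void Main()
        {
            // 2a. Allow other machines to reach shares on this server via the alias.
            using (RegistryKey lanman = Registry.LocalMachine.OpenSubKey(
                @"SYSTEM\CurrentControlSet\Services\lanmanserver\parameters", true))
            {
                lanman.SetValue("DisableStrictNameChecking", 1, RegistryValueKind.DWord);
            }

            // 2b. Allow this server to reach its own shares via the alias.
            using (RegistryKey msv = Registry.LocalMachine.OpenSubKey(
                @"SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0", true))
            {
                // "fileserver-alias" is a placeholder; list each CNAME as its own entry.
                msv.SetValue("BackConnectionHostNames",
                    new[] { "fileserver-alias" }, RegistryValueKind.MultiString);
            }
        }
    }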

    Read the article

  • WPF TabControl - how to preserve control state within tab items (MVVM pattern)

    - by Tim Coulter
    I am a newcomer to WPF, attempting to build a project that follows the recommendations of Josh Smith's excellent article describing The Model-View-ViewModel Design Pattern. Using Josh's sample code as a base, I have created a simple application that contains a number of "workspaces", each represented by a tab in a TabControl. In my application, a workspace is a document editor that allows a hierarchical document to be manipulated via a TreeView control. Although I have succeeded in opening multiple workspaces and viewing their document content in the bound TreeView control, I find that the TreeView "forgets" its state when switching between tabs. For example, if the TreeView in Tab1 is partially expanded, it will be shown as fully collapsed after switching to Tab2 and returning to Tab1. This behaviour appears to apply to all aspects of control state for all controls. After some experimentation, I have realized that I can preserve state within a TabItem by explicitly binding each control state property to a dedicated property on the underlying ViewModel. However, this seems like a lot of additional work, when I simply want all my controls to remember their state when switching between workspaces. I assume I am missing something simple, but I am not sure where to look for the answer. Any guidance would be much appreciated. Thanks, Tim Update: As requested, I will attempt to post some code that demonstrates this problem. However, since the data that underlies the TreeView is complex, I will post a simplified example that exhibits the same symptoms. Here is the XAML from the main window: <TabControl IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Path=Docs}"> <TabControl.ItemTemplate> <DataTemplate> <ContentPresenter Content="{Binding Path=Name}" /> </DataTemplate> </TabControl.ItemTemplate> <TabControl.ContentTemplate> <DataTemplate> <view:DocumentView /> </DataTemplate> </TabControl.ContentTemplate> </TabControl> The above XAML correctly binds to an ObservableCollection of DocumentViewModel, whereby each member is presented via a DocumentView. For the simplicity of this example, I have removed the TreeView (mentioned above) from the DocumentView and replaced it with a TabControl containing 3 fixed tabs: <TabControl> <TabItem Header="A" /> <TabItem Header="B" /> <TabItem Header="C" /> </TabControl> In this scenario, there is no binding between the DocumentView and the DocumentViewModel. When the code is run, the inner TabControl is unable to remember its selection when the outer TabControl is switched. However, if I explicitly bind the inner TabControl's SelectedIndex property ... <TabControl SelectedIndex="{Binding Path=SelectedDocumentIndex}"> <TabItem Header="A" /> <TabItem Header="B" /> <TabItem Header="C" /> </TabControl> ... to a corresponding dummy property on the DocumentViewModel ... public int SelectedDocumentIndex { get; set; } ... the inner tab is able to remember its selection. I understand that I can effectively solve my problem by applying this technique to every visual property of every control, but I am hoping there is a more elegant solution.
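    For reference, the "dedicated property on the underlying ViewModel" described above is usually written with change notification, so the view also follows updates made from code. The sketch below is my own illustration under that assumption; only the property name comes from the question, and the class shape is hypothetical.

    // Hedged sketch of the ViewModel-backed property the question binds to.
    using System.ComponentModel;

    public class DocumentViewModel : INotifyPropertyChanged
    {
        private int _selectedDocumentIndex;

        public int SelectedDocumentIndex
        {
            get { return _selectedDocumentIndex; }
            set
            {
                if (_selectedDocumentIndex == value) return;
                _selectedDocumentIndex = value;
                OnPropertyChanged("SelectedDocumentIndex");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }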

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I’ve loved MSBuild and didn’t quite understand the reasons for this change. In fact, given that I’ve been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I’m starting to become a believer. I’m also starting to see some of the benefits with Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild such as parallel tasks, better control flow constructs, and a slightly better customization story. The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I’d recommend reading Mike Fourie’s well known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don’t use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync. To do this in TFS 2010, the best post I’ve found has been Jim Lamb’s post of building a custom TFS 2010 workflow activity. Overall, this post is excellent but the primary issue I have with it is that the assembly version numbers produced are based in a date and look like this: “2010.5.15.1”. This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the “1.1 release” or “1.2 release” – which would have an assembly version number of “1.1.317.0” for example. In this post, I’ll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb’s post to suit my needs. I’ll also be combining this with the concepts of Fourie’s post – particularly with regards to the standards around how to version the assemblies. The first thing I’ll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this: 1: using System; 2: using System.Reflection; 3: [assembly: AssemblyVersion("1.1.0.0")] 4: [assembly: AssemblyFileVersion("1.1.0.0")] I’ll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, “Add – Existing Item…” then when I click the SolutionAssemblyVersionInfo.cs file, making sure I “Add As Link”: Now the Solution Explorer will show our file. We can see that it’s a “link” file because of the black arrow in the icon within all our projects. Of course you’ll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid the duplicate attributes since they now leave in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit. At this point, we’re ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor and then I want the CI process to add the third number with a unique incremental number. We’ll leave the fourth position always “0” for now – it’s held in reserve in case the day ever comes where we need to do an emergency patch to Production based on a branched version.   
Writing the Custom Workflow Activity Similar to Lamb’s post, I’m going to write two custom workflow activities. The “outer” activity (a xaml activity) will be pretty straight forward. It will check if the solution version file exists in the solution root and, if so, delegate the replacement of version to the AssemblyVersionInfo activity which is a CodeActivity highlighted in red below:   Notice that the arguments of this activity are the “solutionVersionFile” and “tfsBuildNumber” which will be passed in. The tfsBuildNumber passed in will look something like this: “CI_MyApplication.4” and we’ll need to grab the “4” (i.e., the incremental revision number) and put that in the third position. Then we’ll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had “1.1.0.0” for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want to resulting file to have “1.1.4.0”. Before we do anything, let’s put together a unit test for all this so we can know if we get it right: 1: [TestMethod] 2: public void Assembly_version_should_be_parsed_correctly_from_build_name() 3: { 4: // arrange 5: const string versionFile = "SolutionAssemblyVersionInfo.cs"; 6: WriteTestVersionFile(versionFile); 7: var activity = new VersionAssemblies(); 8: var arguments = new Dictionary<string, object> { 9: { "tfsBuildNumber", "CI_MyApplication.4"}, 10: { "solutionVersionFile", versionFile} 11: }; 12:   13: // act 14: var result = WorkflowInvoker.Invoke(activity, arguments); 15:   16: // assert 17: Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]); 18: var lines = File.ReadAllLines(versionFile); 19: Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]")); 20: Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]")); 21: } 22: 23: private void WriteTestVersionFile(string versionFile) 24: { 25: var fileContents = "using System.Reflection;\n" + 26: "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" + 27: "[assembly: AssemblyFileVersion(\"1.2.0.0\")]"; 28: File.WriteAllText(versionFile, fileContents); 29: }   At this point, the code for our AssemblyVersion activity is pretty straight forward: 1: [BuildActivity(HostEnvironmentOption.Agent)] 2: public class AssemblyVersionInfo : CodeActivity 3: { 4: [RequiredArgument] 5: public InArgument<string> FileName { get; set; } 6:   7: [RequiredArgument] 8: public InArgument<string> TfsBuildNumber { get; set; } 9:   10: public OutArgument<string> NewAssemblyFileVersion { get; set; } 11:   12: protected override void Execute(CodeActivityContext context) 13: { 14: var solutionVersionFile = this.FileName.Get(context); 15: 16: // Ensure that the file is writeable 17: var fileAttributes = File.GetAttributes(solutionVersionFile); 18: File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly); 19:   20: // Prepare assembly versions 21: var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile); 22: var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context)); 23: var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2); 24: var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber); 25: this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion); 26:   27: // Perform the actual replacement 28: var contents = this.GetFileContents(newAssemblyVersion, 
newAssemblyFileVersion); 29: File.WriteAllText(solutionVersionFile, contents); 30:   31: // Restore the file's original attributes 32: File.SetAttributes(solutionVersionFile, fileAttributes); 33: } 34:   35: #region Private Methods 36:   37: private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion) 38: { 39: var cs = new StringBuilder(); 40: cs.AppendLine("using System.Reflection;"); 41: cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion); 42: cs.AppendLine(); 43: cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion); 44: return cs.ToString(); 45: } 46:   47: private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath) 48: { 49: var lines = File.ReadAllLines(filePath); 50: var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault(); 51:   52: if (versionLine == null) 53: { 54: throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute"); 55: } 56:   57: return ExtractMajorMinor(versionLine); 58: } 59:   60: private static Tuple<string, string> ExtractMajorMinor(string versionLine) 61: { 62: var firstQuote = versionLine.IndexOf('"') + 1; 63: var secondQuote = versionLine.IndexOf('"', firstQuote); 64: var version = versionLine.Substring(firstQuote, secondQuote - firstQuote); 65: var versionParts = version.Split('.'); 66: return new Tuple<string, string>(versionParts[0], versionParts[1]); 67: } 68:   69: private string GetNewBuildNumber(string buildName) 70: { 71: return buildName.Substring(buildName.LastIndexOf(".") + 1); 72: } 73:   74: #endregion 75: }   At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTempate.xaml – we’ll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to “BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion since the newAssemblyFileVersion was produced by our activity.   Configuring CI Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we’ll change the Process tab to reflect a different build number format and choose our custom build process file:   When the build completes, we’ll see the name of our project with the unique revision number:   If we look at the detailed build log for the latest build, we’ll see the label being created with our custom task:     We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow):   Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties:   Full Traceability We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I’ve done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. 
The new features that TFS 2010 give you to make these types of customizations in your build process are quite easy once you get over the initial curve.

    Read the article

  • EF4 POCO WCF Serialization problems (no lazy loading, proxy/no proxy, circular references, etc)

    - by kdawg
    OK, I want to make sure I cover my situation and everything I've tried thoroughly. I'm pretty sure what I need/want can be done, but I haven't quite found the perfect combination for success. I'm utilizing Entity Framework 4 RTM and its POCO support. I'm looking to query for an entity (Config) that contains a many-to-many relationship with another entity (App). I turn off lazy loading and disable proxy creation for the context and explicitly load the navigation property (either through .Include() or .LoadProperty()). However, when the navigation property is loaded (that is, Apps is loaded for a given Config), the App objects that were loaded already contain references to the Configs that have been brought to memory. This creates a circular reference. Now I know the DataContractSerializer that WCF uses can handle circular references, by setting the preserveObjectReferences parameter to true. I've tried this with a couple of different attribute implementations I've found online. It is needed to prevent the "the object graph contains circular references and cannot be serialized" error. However, it doesn't prevent the serialization of the entire graph, back and forth between Config and App. If I invoke it via WcfTestClient.exe, I get a stackoverflow (ha!) exception from the client and I'm hosed. I get different results from different invocation environments (C# unit test with a local reference to the web service appears to work ok though I still can drill back and forth between Configs and Apps endlessly, but calling it from a coldfusion environment only returns the first Config in the list and errors out on the others.) My main goal is to have a serialized representation of the graph I explicitly load from EF (ie: list of Configs, each with their Apps, but no App back to Config navigation.) NOTE: I've also tried using the ProxyDataContractResolver technique and keeping the proxy creation enabled from my context. This blows up complaining about unknown types encountered. I read that the ProxyDataContractResolver didn't fully work in Beta2, but should work in RTM. For some reference, here is roughly how I'm querying the data in the service: var repo = BootStrapper.AppCtx["AppMeta.ConfigRepository"] as IRepository<Config>; repo.DisableLazyLoading(); repo.DisableProxyCreation(); //var temp2 = repo.Include(cfg => cfg.Apps).Where(cfg => cfg.Environment.Equals(environment)).ToArray(); var temp2 = repo.FindAll(cfg => cfg.Environment.Equals(environment)).ToArray(); foreach (var cfg in temp2) { repo.LoadProperty(cfg, c => c.Apps); } return temp2; I think the crux of my problem is when loading up navigation properties for POCO objects from Entity Framework 4, it prepopulates navigation properties for objects already in memory. This in turn hoses up the WCF serialization, despite every effort made to properly handle circular references. I know it's a lot of information, but it's really standing in my way of going forward with EF4/POCO in our system. I've found several articles and blogs touching upon these subjects, but for the life of me, I cannot resolve this issue. Feel free to simply ask questions and help me brainstorm this situation. PS: For the sake of being thorough, I am injecting the WCF services using the HEAD build of Spring.NET for the fix to Spring.ServiceModel.Activation.ServiceHostFactory. However I don't think this is the source of the problem.
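    For context, the preserveObjectReferences technique the question refers to is usually wired in by replacing the operation's serializer behavior. The sketch below is a hedged illustration of that idea, not the specific implementation the poster tried; attaching it to the contract (via an attribute or endpoint behavior) is omitted and assumed.

    // Hedged sketch: a DataContractSerializerOperationBehavior that turns on
    // preserveObjectReferences so circular Config <-> App graphs serialize as id/ref links.
    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel.Description;
    using System.Xml;

    public class PreserveReferencesOperationBehavior : DataContractSerializerOperationBehavior
    {
        public PreserveReferencesOperationBehavior(OperationDescription operation)
            : base(operation)
        {
        }

        public override XmlObjectSerializer CreateSerializer(
            Type type, string name, string ns, IList<Type> knownTypes)
        {
            return new DataContractSerializer(type, name, ns, knownTypes,
                int.MaxValue,  // maxItemsInObjectGraph
                false,         // ignoreExtensionDataObject
                true,          // preserveObjectReferences
                null);         // dataContractSurrogate
        }

        public override XmlObjectSerializer CreateSerializer(
            Type type, XmlDictionaryString name, XmlDictionaryString ns, IList<Type> knownTypes)
        {
            return new DataContractSerializer(type, name, ns, knownTypes,
                int.MaxValue, false, true, null);
        }
    }

    Note that this only changes how the graph is serialized; as the question points out, it does not by itself stop EF from fixing up both ends of the Config/App relationship in memory.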

    Read the article

  • WCF GZip Compression Request/Response Processing

    - by IanT8
    How do I get a WCF client to process server responses which have been GZipped or Deflated by IIS? On IIS, I've followed the instructions here on how to make IIS 6 gzip all responses (where the request contained "Accept-Encoding: gzip, deflate") emitted by .svc wcf services. On the client, I've followed the instructions here and here on how to inject this header into the web request: "Accept-Encoding: gzip, deflate". Fiddler2 shows the response is binary and not plain old Xml. The client crashes with an exception which basically says there's no Xml header, which of course is true. In my IClientMessageInspector, the app crashes before AfterReceiveReply is called. Some further notes: (1) I can't change the WCF service or client as they are supplied by a 3rd party. I can however attach behaviors and/or message inspectors via configuration if this is the right direction to take. (2) I don't want to compress/uncompress just the soap body, but the entire message. Any ideas/solutions? * SOLVED * It was not possible to write a WCF extension to achieve these goals. Instead I followed this CodeProject article which advocates a helper class: public class CompressibleHttpRequestCreator : IWebRequestCreate { public CompressibleHttpRequestCreator() { } WebRequest IWebRequestCreate.Create(Uri uri) { HttpWebRequest httpWebRequest = Activator.CreateInstance(typeof(HttpWebRequest), BindingFlags.CreateInstance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance, null, new object[] { uri, null }, null) as HttpWebRequest; if (httpWebRequest == null) { return null; } httpWebRequest.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate; return httpWebRequest; } } and also, an addition to the application configuration file: <configuration> <system.net> <webRequestModules> <remove prefix="http:"/> <add prefix="http:" type="Pajocomo.Net.CompressibleHttpRequestCreator, Pajocomo" /> </webRequestModules> </system.net> </configuration> What seems to be happening is that WCF eventually asks some factory or other deep down in system.net to provide an HttpWebRequest instance, and we provide the helper that will be asked to create the required instance. In the WCF client configuration file, a simple basicHttpBinding is all that is required, without the need for any custom extensions. When the application runs, the client Http request contains the header "Accept-Encoding: gzip, deflate", the server returns a gzipped web response, and the client transparently decompresses the http response before handing it over to WCF. When I tried to apply this technique to Web Services I found that it did NOT work. Although the helper class was executed in the same way as when used by the WCF client, the http request did not contain the "Accept-Encoding: ..." header. To make this work for Web Services, I had to edit the Web Proxy class, and add this method: protected override System.Net.WebRequest GetWebRequest(Uri uri) { System.Net.HttpWebRequest rq = (System.Net.HttpWebRequest)base.GetWebRequest(uri); rq.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate; return rq; } Note that it did not matter whether the CompressibleHttpRequestCreator and the webRequestModules block from the application config file were present or not. For web services, only overriding GetWebRequest in the Web Service Proxy worked.

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find on the web simple PowerShell scripts to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I created (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (but I suggest you use Windows PowerShell ISE), so I also wanted to fire the commands as soon as possible in parallel, without losing the result in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and using Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only one part of their execution time is subject to this lock. Thus, you still obtain a better response time in these scenarios (this is the case for provisioning a new VM). Finally, when you remove the VMs you still have the disk containing the virtual machine to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation is completed on Azure. So I wrote a script that you have to run a few minutes after VM removal to delete disks (and VHDs) no longer related to a VM. I just check that the disks were associated with the original image name used to provision the VMs (so I don't remove other disks deployed by other batches that I might want to preserve). These examples are specific to my scenario; if you need more complex configurations you have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts: ProvisionVMs: Provisions many VMs in parallel starting from the same image. It creates one service for each VM. RemoveVMs: Removes all the VMs in parallel – it also removes the service created for each VM StartVMs: Starts all the VMs in parallel StopVMs: Stops all the VMs in parallel RemoveOrphanDisks: Removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script. 
ProvisionVMs # Name of subscription $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"   # Name of storage account (where VMs will be deployed) $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"   function ProvisionVM( [string]$VmName ) {     Start-Job -ArgumentList $VmName {         param($VmName) $Location = "Copy the Location property you get from Get-AzureStorageAccount" $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on $AdminUsername = "UserName" # Write the name of the administrator account in the new VM $Password = "Password"      # Write the password of the administrator account in the new VM $Image = "Copy the ImageName property you get from Get-AzureVMImage" # You can list your own images using the following command: # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }         New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |             Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername|             New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose     } }   # Set the proper storage - you might remove this line if you have only one storage in the subscription Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount   # Select the subscription - this line is fundamental if you have access to multiple subscription # You might remove this line if you have only one subscription Select-AzureSubscription -SubscriptionName $SubscriptionName   # Every line in the following list provisions one VM using the name specified in the argument # You can change the number of lines - use a unique name for every VM - don't reuse names # already used in other VMs already deployed ProvisionVM "test10" ProvisionVM "test11" ProvisionVM "test12" ProvisionVM "test13" ProvisionVM "test14" ProvisionVM "test15" ProvisionVM "test16" ProvisionVM "test17" ProvisionVM "test18" ProvisionVM "test19" ProvisionVM "test20"   # Wait for all to complete While (Get-Job -State "Running") {     Get-Job -State "Completed" | Receive-Job     Start-Sleep 1 }   # Display output from all jobs Get-Job | Receive-Job   # Cleanup of jobs Remove-Job *   # Displays batch completed echo "Provisioning VM Completed" RemoveVMs # Name of subscription $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"   function RemoveVM( [string]$VmName ) {     Start-Job -ArgumentList $VmName {         param($VmName)         Remove-AzureService -ServiceName $VmName -Force -Verbose     } }   # Select the subscription - this line is fundamental if you have access to multiple subscription # You might remove this line if you have only one subscription Select-AzureSubscription -SubscriptionName $SubscriptionName   # Every line in the following list remove one VM using the name specified in the argument # You can change the number of lines - use a unique name for every VM - don't reuse names # already used in other VMs already deployed RemoveVM "test10" RemoveVM "test11" RemoveVM "test12" RemoveVM "test13" RemoveVM "test14" RemoveVM "test15" RemoveVM "test16" RemoveVM "test17" RemoveVM "test18" RemoveVM "test19" RemoveVM "test20"   # Wait for all to complete While (Get-Job -State "Running") {     Get-Job -State "Completed" | Receive-Job     Start-Sleep 1 }   # Display output from all jobs Get-Job | Receive-Job   # Cleanup Remove-Job *   # Displays batch completed echo "Remove VM 
Completed" StartVMs # Name of subscription $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"   function StartVM( [string]$VmName ) {     Start-Job -ArgumentList $VmName {         param($VmName)         Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose     } }   # Select the subscription - this line is fundamental if you have access to multiple subscription # You might remove this line if you have only one subscription Select-AzureSubscription -SubscriptionName $SubscriptionName   # Every line in the following list starts one VM using the name specified in the argument # You can change the number of lines - use a unique name for every VM - don't reuse names # already used in other VMs already deployed StartVM "test10" StartVM "test11" StartVM "test11" StartVM "test12" StartVM "test13" StartVM "test14" StartVM "test15" StartVM "test16" StartVM "test17" StartVM "test18" StartVM "test19" StartVM "test20"   # Wait for all to complete While (Get-Job -State "Running") {     Get-Job -State "Completed" | Receive-Job     Start-Sleep 1 }   # Display output from all jobs Get-Job | Receive-Job   # Cleanup Remove-Job *   # Displays batch completed echo "Start VM Completed"   StopVMs # Name of subscription $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"   function StopVM( [string]$VmName ) {     Start-Job -ArgumentList $VmName {         param($VmName)         Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force     } }   # Select the subscription - this line is fundamental if you have access to multiple subscription # You might remove this line if you have only one subscription Select-AzureSubscription -SubscriptionName $SubscriptionName   # Every line in the following list stops one VM using the name specified in the argument # You can change the number of lines - use a unique name for every VM - don't reuse names # already used in other VMs already deployed StopVM "test10" StopVM "test11" StopVM "test12" StopVM "test13" StopVM "test14" StopVM "test15" StopVM "test16" StopVM "test17" StopVM "test18" StopVM "test19" StopVM "test20"   # Wait for all to complete While (Get-Job -State "Running") {     Get-Job -State "Completed" | Receive-Job     Start-Sleep 1 }   # Display output from all jobs Get-Job | Receive-Job   # Cleanup Remove-Job *   # Displays batch completed echo "Stop VM Completed" RemoveOrphanDisks $Image = "Copy the ImageName property you get from Get-AzureVMImage" # You can list your own images using the following command: # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }   # Remove all orphan disks coming from the image specified in $ImageName Get-AzureDisk |     Where-Object {$_.attachedto -eq $null -and $_.SourceImageName -eq $ImageName} |     Remove-AzureDisk -DeleteVHD -Verbose  

    Read the article

  • Using a WPF ListView as a DataGrid

    - by psheriff
    Many people like to view data in a grid format of rows and columns. WPF did not come with a data grid control that automatically creates rows and columns for you based on the object you pass it. However, the WPF Toolkit can be downloaded from CodePlex.com that does contain a DataGrid control. This DataGrid gives you the ability to pass it a DataTable or a Collection class and it will automatically figure out the columns or properties and create all the columns for you and display the data.The DataGrid control also supports editing and many other features that you might not always need. This means that the DataGrid does take a little more time to render the data. If you want to just display data (see Figure 1) in a grid format, then a ListView works quite well for this task. Of course, you will need to create the columns for the ListView, but with just a little generic code, you can create the columns on the fly just like the WPF Toolkit’s DataGrid. Figure 1: A List of Data using a ListView A Simple ListView ControlThe XAML below is what you would use to create the ListView shown in Figure 1. However, the problem with using XAML is you have to pre-define the columns. You cannot re-use this ListView except for “Product” data. <ListView x:Name="lstData"          ItemsSource="{Binding}">  <ListView.View>    <GridView>      <GridViewColumn Header="Product ID"                      Width="Auto"               DisplayMemberBinding="{Binding Path=ProductId}" />      <GridViewColumn Header="Product Name"                      Width="Auto"               DisplayMemberBinding="{Binding Path=ProductName}" />      <GridViewColumn Header="Price"                      Width="Auto"               DisplayMemberBinding="{Binding Path=Price}" />    </GridView>  </ListView.View></ListView> So, instead of creating the GridViewColumn’s in XAML, let’s learn to create them in code to create any amount of columns in a ListView. Create GridViewColumn’s From Data TableTo display multiple columns in a ListView control you need to set its View property to a GridView collection object. You add GridViewColumn objects to the GridView collection and assign the GridView to the View property. Each GridViewColumn object needs to be bound to a column or property name of the object that the ListView will be bound to. An ADO.NET DataTable object contains a collection of columns, and these columns have a ColumnName property which you use to bind to the GridViewColumn objects. Listing 1 shows a sample of reading and XML file into a DataSet object. After reading the data a GridView object is created. You can then loop through the DataTable columns collection and create a GridViewColumn object for each column in the DataTable. Notice the DisplayMemberBinding property is set to a new Binding to the ColumnName in the DataTable. 
C#private void FirstSample(){  // Read the data  DataSet ds = new DataSet();  ds.ReadXml(GetCurrentDirectory() + @"\Xml\Product.xml");    // Create the GridView  GridView gv = new GridView();   // Create the GridView Columns  foreach (DataColumn item in ds.Tables[0].Columns)  {    GridViewColumn gvc = new GridViewColumn();    gvc.DisplayMemberBinding = new Binding(item.ColumnName);    gvc.Header = item.ColumnName;    gvc.Width = Double.NaN;    gv.Columns.Add(gvc);  }   // Setup the GridView Columns  lstData.View = gv;  // Display the Data  lstData.DataContext = ds.Tables[0];} VB.NETPrivate Sub FirstSample()  ' Read the data  Dim ds As New DataSet()  ds.ReadXml(GetCurrentDirectory() & "\Xml\Product.xml")   ' Create the GridView  Dim gv As New GridView()   ' Create the GridView Columns  For Each item As DataColumn In ds.Tables(0).Columns    Dim gvc As New GridViewColumn()    gvc.DisplayMemberBinding = New Binding(item.ColumnName)    gvc.Header = item.ColumnName    gvc.Width = [Double].NaN    gv.Columns.Add(gvc)  Next   ' Setup the GridView Columns  lstData.View = gv  ' Display the Data  lstData.DataContext = ds.Tables(0)End SubListing 1: Loop through the DataTable columns collection to create GridViewColumn objects A Generic Method for Creating a GridViewInstead of having to write the code shown in Listing 1 for each ListView you wish to create, you can create a generic method that given any DataTable will return a GridView column collection. Listing 2 shows how you can simplify the code in Listing 1 by setting up a class called WPFListViewCommon and create a method called CreateGridViewColumns that returns your GridView. C#private void DataTableSample(){  // Read the data  DataSet ds = new DataSet();  ds.ReadXml(GetCurrentDirectory() + @"\Xml\Product.xml");   // Setup the GridView Columns  lstData.View =      WPFListViewCommon.CreateGridViewColumns(ds.Tables[0]);  lstData.DataContext = ds.Tables[0];} VB.NETPrivate Sub DataTableSample()  ' Read the data  Dim ds As New DataSet()  ds.ReadXml(GetCurrentDirectory() & "\Xml\Product.xml")   ' Setup the GridView Columns  lstData.View = _      WPFListViewCommon.CreateGridViewColumns(ds.Tables(0))  lstData.DataContext = ds.Tables(0)End SubListing 2: Call a generic method to create GridViewColumns. The CreateGridViewColumns MethodThe CreateGridViewColumns method will take a DataTable as a parameter and create a GridView object with a GridViewColumn object in its collection for each column in your DataTable. C#public static GridView CreateGridViewColumns(DataTable dt){  // Create the GridView  GridView gv = new GridView();  gv.AllowsColumnReorder = true;   // Create the GridView Columns  foreach (DataColumn item in dt.Columns)  {    GridViewColumn gvc = new GridViewColumn();    gvc.DisplayMemberBinding = new Binding(item.ColumnName);    gvc.Header = item.ColumnName;    gvc.Width = Double.NaN;    gv.Columns.Add(gvc);  }   return gv;} VB.NETPublic Shared Function CreateGridViewColumns _  (ByVal dt As DataTable) As GridView  ' Create the GridView  Dim gv As New GridView()  gv.AllowsColumnReorder = True   ' Create the GridView Columns  For Each item As DataColumn In dt.Columns    Dim gvc As New GridViewColumn()    gvc.DisplayMemberBinding = New Binding(item.ColumnName)    gvc.Header = item.ColumnName    gvc.Width = [Double].NaN    gv.Columns.Add(gvc)  Next   Return gvEnd FunctionListing 3: The CreateGridViewColumns method takes a DataTable and creates GridViewColumn objects in a GridView. 
By separating this method out into a class you can call this method anytime you want to create a ListView with a collection of columns from a DataTable. Summary: In this blog you learned how to create a ListView that acts like a DataGrid. You are able to use a DataTable as both the source of the data and for creating the columns for the ListView. In the next blog entry you will learn how to use the same technique, but for Collection classes. NOTE: You can download the complete sample code (in both VB and C#) at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "WPF ListView as a DataGrid" from the drop-down. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".
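    The follow-up for Collection classes is not shown here, but the same idea extends naturally: instead of walking DataTable columns, walk the public properties of the item type via reflection. The sketch below is my own anticipation of such an overload (the class name and usage are assumptions, not the author's code).

    // Hedged sketch: build GridView columns from a type's public properties,
    // mirroring CreateGridViewColumns(DataTable) above, but for collection classes.
    using System;
    using System.Reflection;
    using System.Windows.Controls;
    using System.Windows.Data;

    public static class WPFListViewCommonSketch
    {
        public static GridView CreateGridViewColumns(Type itemType)
        {
            GridView gv = new GridView();
            gv.AllowsColumnReorder = true;

            foreach (PropertyInfo prop in itemType.GetProperties(BindingFlags.Public | BindingFlags.Instance))
            {
                if (!prop.CanRead) continue;   // skip write-only properties

                GridViewColumn gvc = new GridViewColumn();
                gvc.DisplayMemberBinding = new Binding(prop.Name);
                gvc.Header = prop.Name;
                gvc.Width = Double.NaN;        // auto-size, as in the DataTable version
                gv.Columns.Add(gvc);
            }
            return gv;
        }
    }

    Usage would then look something like: lstData.View = WPFListViewCommonSketch.CreateGridViewColumns(typeof(Product)); lstData.ItemsSource = productList;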

    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine.  If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine.  The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple. Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data.  Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection.  One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism.  In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library.  This should make it easier to investigate these topics in more detail. Once we’ve determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple.  Let’s take our example from the Data Decomposition discussion – a simple contrast stretching filter.  Here, we have a collection of data (pixels), and we need to run a simple operation on each element of the pixel data.  Once we know the minimum and maximum values, we most likely would have some simple code like the following: for (int row=0; row < pixelData.GetUpperBound(0); ++row) { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } } This simple routine loops through a two dimensional array of pixelData, and calls the AdjustContrast routine on each pixel. As I mentioned, when you’re decomposing a problem space, most iteration statements are potentially candidates for data decomposition.  Here, we’re using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns.  We then perform one, independent operation on each element based on those loop positions. This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change.  Since we’re using a for loop, we can easily parallelize this using the Parallel.For method in the TPL: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine.  Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.  
In this case, our for loop iteration block becomes a delegate creating via a lambda expression.  This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime. We could easily do this to our second for loop as well, but that may not be a good idea.  There is a balance to be struck when writing parallel code.  We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce.  In this case, we have an image of data – most likely hundreds of pixels in both dimensions.  By just parallelizing our first loop, each row of pixels can be run as a single task.  With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy. If we parallelize both loops, we’re potentially creating millions of independent tasks.  This introduces extra overhead with no extra gain, and will actually reduce our overall performance.  This leads to my first guideline when writing parallel code: Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy. Also note that I parallelized the outer loop.  I could have just as easily partitioned the inner loop.  However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels).  My second guideline when writing parallel code reflects this: Partition your problem in a way to place the most work possible into each task. This typically means, in practice, that you will want to parallelize the routine at the “highest” point possible in the routine, typically the outermost loop.  If you’re looking at parallelizing methods which call other methods, you’ll want to try to partition your work high up in the stack – as you get into lower level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced. Parallel.For works great for situations where we know the number of elements we’re going to process in advance.  If we’re iterating through an IList<T> or an array, this is a typical approach.  However, there are other iteration statements common in C#.  In many situations, we’ll use foreach instead of a for loop.  This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance. As an example, lets take the following situation.  Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change.  Normally, this might look something like: foreach(var customer in customers) { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } } Here, we’re doing a fair amount of work for each customer in our collection, but we don’t know how many customers exist.  
If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread safe operations, we could parallelize this using Parallel.ForEach: Parallel.ForEach(customers, customer => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } }); Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression.  This keeps our new code very similar to our original iteration statement, however, this will now execute in parallel.  The same guidelines apply with Parallel.ForEach as with Parallel.For. The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library.  These, however, are very easy to implement using Parallel.ForEach and the yield keyword. Most applications can benefit from implementing some form of Data Parallelism.  Iterating through collections and performing “work” is a very common pattern in nearly every application.  When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls.  Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
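    As a small illustration of that last point, here is a hedged sketch (my own names, not from the original post) of feeding a while-style loop to Parallel.ForEach by exposing the items through an iterator that uses yield:

    // Hedged sketch: wrap a "while there is more input" loop in an IEnumerable<string>
    // via yield, then let Parallel.ForEach partition and process the items.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading.Tasks;

    static class WhileLoopParallelSketch
    {
        static IEnumerable<string> ReadLines(TextReader reader)
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                yield return line;   // items are produced lazily, one per loop pass
            }
        }

        static void Process(TextReader reader)
        {
            // Each line must be independent of the others for this to be safe.
            Parallel.ForEach(ReadLines(reader), line =>
            {
                Console.WriteLine(line.Length);   // stand-in for the real per-item work
            });
        }
    }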

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel.  Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria.  This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately.  The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data.  Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } This seems like a good candidate for parallelization, but there is a problem here.  If we just wrap this into a call to Parallel.ForEach, we’ll introduce a critical race condition, and get the wrong answer.  Let’s look at what happens here: // Buggy code! Do not use! double min = double.MaxValue; Parallel.ForEach(collection, item => { double value = item.PerformComputation(); min = System.Math.Min(min, value); }); This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously.  Two threads may perform the check at the same time, and set the wrong value for min.  Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run.  If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively.  If element 1 happens to set the variable first, then element 2 sets the min variable, we’ll detect a min value of 2 instead of 1.  This can lead to wrong answers. Unfortunately, fixing this, with the Parallel.ForEach call we’re using, would require adding locking.  We would need to rewrite this like: // Safe, but slow double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach(collection, item => { double value = item.PerformComputation(); lock(syncObject) min = System.Math.Min(min, value); }); This will potentially add a huge amount of overhead to our calculation.  Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation.  The problem is the lock statement – any time you use lock(object), you’re almost assuring reduced performance in a parallel situation.  
This leads to two observations I’ll make:

1. When parallelizing a routine, try to avoid locks.
2. That being said: always add any and all required synchronization to avoid race conditions.

These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible.  Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

However, this isn’t the only way to design this routine to implement this algorithm.  Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PEs).  Processing Element is the standard term for a hardware element which can process and execute instructions.  This is typically a core in your processor, but many modern systems have multiple hardware execution threads per core.  The Task Parallel Library will not execute the work for each item in the collection as a separate work item.  Instead, when Parallel.ForEach executes, it will partition the collection into larger “chunks” which get processed on different threads via the ThreadPool.  This helps reduce the threading overhead and helps the overall speed.  In general, the Parallel class will only use one thread per PE in the system.

Given that there are typically fewer threads than work items, we can rethink our algorithm design.  We can parallelize our algorithm more effectively by approaching it differently.  Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform it in a given order.  We knew this to be true already – otherwise, we wouldn’t have been able to parallelize this routine in the first place.  With this in mind, we can treat each thread’s work independently, allowing each thread to serially process many elements with no locking, and then, after all the threads are complete, “merge” together the results.

This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource, TLocal>.  The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal).  The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item.  Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

To rewrite our routine using Parallel.ForEach<TSource, TLocal>, we need to provide three delegates instead of one.  The most basic version of this function is declared as:

public static ParallelLoopResult ForEach<TSource, TLocal>(
    IEnumerable<TSource> source,
    Func<TLocal> localInit,
    Func<TSource, ParallelLoopState, TLocal, TLocal> body,
    Action<TLocal> localFinally
)

The first delegate (the localInit argument) is defined as Func<TLocal>.  This delegate initializes our local state.  It should return some object we can use to track the results of a single thread’s operations.

The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal).  It should do whatever processing you wish to occur per element, then return the value of the local state after that processing is completed.

The third delegate (the localFinally argument) is defined as Action<TLocal>.  This delegate is passed our local state after it’s been processed by all of the elements this thread will handle.  This is where you can merge your final results together.  This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you’ll only have to synchronize once per thread, which is an ideal situation.

Now that I’ve explained how this works, let’s look at the code:

// Safe, and fast!
double min = double.MaxValue;
// Make a "lock" object
object syncObject = new object();
Parallel.ForEach(
    collection,
    // First, we provide a local state initialization delegate.
    () => double.MaxValue,
    // Next, we supply the body, which takes the original item, loop state,
    // and local state, and returns a new local state
    (item, loopState, localState) =>
    {
        double value = item.PerformComputation();
        return System.Math.Min(localState, value);
    },
    // Finally, we provide an Action<TLocal>, to "merge" results together
    localState =>
    {
        // This requires locking, but it's only once per used thread
        lock (syncObject)
            min = System.Math.Min(min, localState);
    }
);

Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking.  This same approach can be used with Parallel.For, although then it’s Parallel.For<TLocal>.  When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.

Also, many times you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation.  The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add (a sketch of that variation appears below).

By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization.  Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way that avoids excessive synchronization.
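To illustrate the Interlocked-based variation mentioned above, here is a minimal sketch (not from the original article) that sums the results of PerformComputation using Parallel.For<long>, merging each thread’s partial sum with Interlocked.Add instead of a lock.  The Item type and its PerformComputation method are hypothetical stand-ins for the collection used in the earlier examples.

using System.Threading;
using System.Threading.Tasks;

// Hypothetical element type, standing in for the items in the article's collection.
class Item
{
    public double PerformComputation() { return 42.0; }
}

static class InterlockedAggregationSketch
{
    public static long SumComputations(Item[] collection)
    {
        long total = 0;
        Parallel.For(
            0, collection.Length,
            // Local state: this thread's running partial sum
            () => 0L,
            // Body: process one element and fold it into the thread-local sum
            (i, loopState, localSum) =>
            {
                long value = (long)collection[i].PerformComputation();
                return localSum + value;
            },
            // Merge: add this thread's partial sum atomically - no lock required
            localSum => Interlocked.Add(ref total, localSum));
        return total;
    }
}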

    Read the article

  • December release of Microsoft All-In-One Code Framework is available now.

    - by Jialiang
    The code samples in Microsoft All-In-One Code Framework were updated on 2010-12-13. Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534 Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.) If this is the first time you have heard about Microsoft All-In-One Code Framework, please watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, read the introduction on our homepage http://1code.codeplex.com/, and see this Port25 article http://port25.technet.com/archive/2010/01/18/the-all-in-one-code-framework.aspx.

--------------

New ASP.NET Code Samples

VBASPNETAJAXWebChat and CSASPNETAJAXWebChat
Most of you have some experience chatting with friends on the web, so you may want to know how to build a web chat application. It seems quite complicated, but ASP.NET gives you the power to build a chat room easily. In this code sample, we construct our own web chat room using AJAX. The principle is relatively simple. A basic chat application needs four basic controls: one List control to show the chat room members, one List control to show the message list, one TextBox control to enter messages, and one Button to send the message. The user types a message in the textbox and presses the Send button, which sends the message to the server. The message list updates every 2 seconds to fetch the newest messages in the chat room from the server. Keep in mind that it is hard to make an AJAX web chat application behave like a Windows Forms application, because we cannot keep the connection open after a web request has ended, so many events that communicate between the client side and the server side cannot be realized. The common workaround is to make web requests every few seconds to check whether the server side has been updated. Another technique, called COMET, makes this possible, but it is different from AJAX and is not discussed in detail in this sample. For more details about COMET, see the Reference.

CSASPNETCurrentOnlineUserList and VBASPNETCurrentOnlineUserList
This sample demos a system that needs to display a list of current online users' information. As a matter of fact, the Membership.GetNumberOfUsersOnline method can get the number of online users, and there is a convenient way to check whether a user is online by using the Membership.GetUser(string userName).IsOnline property; however, many ASP.NET projects do not use membership. So in this case, the sample shows how to display a list of current online users' information without using a membership provider. It is not difficult to check whether a user is online by using session state. Many projects tend to use the "Session_End" event to mark a user as "Offline"; however, that may not be a good idea, because it cannot detect the user's status accurately. In addition, the "Session_End" event is only available in the "InProc" session mode. If you are storing session state in the State Server or SQL Server, the "Session_End" event will never fire. To handle this issue, we need to save the user's online status to a global DataTable or a database (a rough sketch of this idea appears at the end of this post).
In the sample application, we define a global DataTable to store the information of online users, use XmlHttpRequest in the pages to update and check each user's last active time at intervals, and also retrieve information on how many users are still online. The sample project can automatically delete offline users' information from the global DataTable by checking users' last active time. A step-by-step guide illustrating how to display a list of current online users' information without using a membership provider:
1. Login page. Let the user sign in and add the current user's information to the global DataTable, initializing the global DataTable used to store information about current online users.
2. Current online user list page. Use XmlHttpRequest in this page to update and check the user's last active time at intervals, and also retrieve information on how many users are still online.
3. If the user closes the page without clicking the sign-out link button, the sample project can automatically mark the user as offline and delete the offline user's information from the global DataTable by checking users' last active time.
Then the current online user list will look like this:

CSASPNETIPtoLocation
This sample demonstrates how to find the geographical location from an IP address. As we know, it is not hard to get the IP address of visitors via the Request.ServerVariables property, but it is really difficult to know where they come from. To achieve this, the sample uses a free third-party web service from http://freegeoip.appspot.com/, which returns information about the IP address we send to the server in the format of XML, JSON or CSV. It makes everything easier.

CSASPNETBackgroundWorker
Sometimes we perform an operation which needs a long time to complete. It blocks the response, and the page stays blank until the operation finishes. In this case, we want the operation to run in the background, and in the page we want to display the progress of the running operation, so the user can see that the operation is running and follow its progress.

CSASPNETInheritingFromTreeNode
In the Windows Forms TreeView, each tree node has a property called "Tag" which can be used to store a custom object. Many customers want the same tag feature in the ASP.NET TreeView. This project creates a custom TreeView control named "CustomTreeView" to achieve this goal.

CSASPNETRemoteUploadAndDownload and VBASPNETRemoteUploadAndDownload
This code sample was created in response to a request in our new code sample request function for customers. The code samples demonstrate uploading files to and downloading files from a remote HTTP or FTP server. In .NET Framework 2.0 and higher versions, there are some lightweight class libraries which support HTTP and FTP protocol transmission. By using these classes, we can achieve this programming requirement.

CSASPNETImageEditUpload and VBASPNETImageEditUpload
This demo shows how to insert, edit and update a common image of type "jpg", "png", "gif" or "bmp". We mainly use two different SqlDataSources with the same database to bind to a GridView and a FormView in order to establish the "cascading" effect. Besides this, we apply our self-made ImageHandler to encode or decode images of different types, and use the context to output the image stream.
We explicitly assign the binary streams of the images in the "FormView_ItemInserting" or "FormView_ItemUpdating" event to keep the stream in sync between what we see on the aspx page and what is really stored in the database.

WebBrowser Control, Network and other Windows General New Code Samples

CSWebBrowserSuppressError and VBWebBrowserSuppressError
The sample demonstrates how to make the WebBrowser control suppress errors, such as script errors, navigation errors and so on.

CSWebBrowserWithProxy and VBWebBrowserWithProxy
The sample demonstrates how to make the WebBrowser control use a proxy server.

CSWebDownloadProgress and VBWebDownloadProgress
The sample demonstrates how to show progress during a download. It also supplies features to Start, Pause, Resume and Cancel a download.

CppSetDesktopWallpaper, CSSetDesktopWallpaper and VBSetDesktopWallpaper
This code sample application allows you to select an image, view a preview (resized smaller to fit if necessary), select a display style among Tile, Center, Stretch, Fit (Windows 7 and later) and Fill (Windows 7 and later), and set the image as the Desktop wallpaper.

CSWindowsServiceRecoveryProperty and VBWindowsServiceRecoveryProperty
The CSWindowsServiceRecoveryProperty example demonstrates how to use ChangeServiceConfig2 to configure the service "Recovery" properties in C#. This example covers all the options you can see on the service "Recovery" tab, including setting the "Enable actions for stops with errors" option in Windows Vista and later operating systems. This example also includes how to grant the shutdown privilege to the process, so that we can configure a special option in the "Recovery" tab - "Restart Computer Options...".

New Office Development Code Samples

CSOneNoteRibbonAddIn and VBOneNoteRibbonAddIn
The code sample demonstrates a OneNote 2010 COM add-in that implements IDTExtensibility2. The add-in also supports customizing the Ribbon by implementing the IRibbonExtensibility interface. It is a skeleton OneNote add-in that developers can extend to implement more functions. The code sample was requested by a customer in our code sample request service. We expect that this could help developers in the community.

New Windows Shell Code Samples

CppShellExtPreviewHandler, CSShellExtPreviewHandler and VBShellExtPreviewHandler
In the past two months, we released the code samples of the Windows Context Menu Handler, Infotip Handler, and Thumbnail Handler. This is the fourth part of the shell extension series: Preview Handler. The code samples demonstrate the C++, C# and VB.NET implementation of a preview handler for a new file type registered with the .recipe extension. Preview handlers are called when an item is selected to show a lightweight, rich, read-only preview of the file's contents in the view's reading pane. This is done without launching the file's associated application. Windows Vista and later operating systems support preview handlers. To be a valid preview handler, several interfaces must be implemented. These include IPreviewHandler (shobjidl.h); IInitializeWithFile, IInitializeWithStream, or IInitializeWithItem (propsys.h); IObjectWithSite (ocidl.h); and IOleWindow (oleidl.h). There are also optional interfaces, such as IPreviewHandlerVisuals (shobjidl.h), that a preview handler can implement to provide extended support. The Windows API Code Pack for Microsoft .NET Framework makes the implementation of these interfaces very easy in .NET. The example preview handler provides previews for .recipe files.
The .recipe file type is simply an XML file registered as a unique file name extension. It includes the title of the recipe, its author, difficulty, preparation time, cook time, nutrition information, comments, an embedded preview image, and so on. The preview handler extracts the title, comments, and the embedded image, and displays them in a preview window.

In response to many customers' requests, we added setup projects to every shell extension sample in this release. Those setup projects allow you to deploy the shell extensions to your end users' machines.

----------

Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534 Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.) If you have any feedback for us, please email: [email protected]. We look forward to your comments.
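As a rough illustration of the approach described above for CSASPNETCurrentOnlineUserList (a sketch under assumptions, not code taken from the sample package), here is a minimal C# helper that keeps a global DataTable of online users keyed by user name with a last-active timestamp. The class name, column names and one-minute timeout are hypothetical; a real page would call Touch from the sign-in page and from the periodic XmlHttpRequest endpoint, and call PurgeExpired before rendering the online-user list.

using System;
using System.Data;

// Hypothetical helper sketching the "global DataTable + last active time" idea.
public static class OnlineUserTracker
{
    private static readonly DataTable users = CreateTable();
    private static readonly object sync = new object();
    private static readonly TimeSpan timeout = TimeSpan.FromMinutes(1);

    private static DataTable CreateTable()
    {
        DataTable table = new DataTable("OnlineUsers");
        table.Columns.Add("UserName", typeof(string));
        table.Columns.Add("LastActive", typeof(DateTime));
        table.PrimaryKey = new DataColumn[] { table.Columns["UserName"] };
        return table;
    }

    // Called when a user signs in, or from the page's periodic XmlHttpRequest.
    public static void Touch(string userName)
    {
        lock (sync)
        {
            DataRow row = users.Rows.Find(userName);
            if (row == null)
                users.Rows.Add(userName, DateTime.UtcNow);
            else
                row["LastActive"] = DateTime.UtcNow;
        }
    }

    // Removes users whose last activity is older than the timeout;
    // returns the number of users still considered online.
    public static int PurgeExpired()
    {
        lock (sync)
        {
            DateTime cutoff = DateTime.UtcNow - timeout;
            for (int i = users.Rows.Count - 1; i >= 0; i--)
            {
                if ((DateTime)users.Rows[i]["LastActive"] < cutoff)
                    users.Rows.RemoveAt(i);
            }
            return users.Rows.Count;
        }
    }
}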

    Read the article

  • CodePlex Daily Summary for Saturday, June 12, 2010

    CodePlex Daily Summary for Saturday, June 12, 2010New ProjectsAdverTool (Advertisement tool): AdverTool is an online tool which integrates the most popular advertisement networks (such as Microsoft adCenter, Google AdWords, Yahoo! Search Mar...Authentication Configuration Tool for SharePoint: Helpful tools to automatically configure SharePoint 2007 and 2010 for forms based authentication and other authentication mechanisms.Bacicworx: A C# .Net 3.5 helper library containing functionality for compression, encryption, hashes, downloading, PayPal API, text analysis and generation, a...BlogEngine.Net iPhone Theme: A port of BETouch originally created by soundbbgBT UPnP Nat Library: This Library makes it extremly simple to add NAT upnp port forwarding to your .net applications. Developed in C# using .Net 4.0CheckBox & CheckBoxList Validators: These validators fill the much needed gap in the Asp.Net Server controlsDataFactories: The DataFactories project was created to provide a standardized interface to SSAS and MSSQL data. However, as it is implemented using the Abstract ...DVD Swarm: Converts unprotected DVD video & audio streams to H.264 with AAC/Vorbis.Frio IM: Frio IM - is cross protocol instant messenger.jiuyuan: jiuyuan management systemMGM: MyGroupManager is a simple graphical interface written in PowerShell that can be deployed to Active Directory users to simplify the managed of grou...MGR2010: This the MA thesis by Witold Stanik & Michał Sereja, PJWSTK.Nauplius.ActiveDirectory: Web-based Active Directory management.Partial rendering control using JQuery: This article show a web custom control that allows partial rendering using JQueryREG - The Random Entertainment Generator: A simple tool to make your mid up when you can't figure out what you want to do!Runes of Magic - Heilerrechner: Heilerrechner für die Heiler von Runes of Magic (www.runes.ofmagic.com)Semagsoft Calculator: Basic calculator for Windows XP, Vista and Windows 7.SO League Tables: SOLT: Stack Overflow League Tables. A fun little app that lets you compare your stack overflow performance for each month, relative to other member...Stacky StackApps .Net Client Library: StackApps is a REST API for which provides access to the stackoverflow.com family of websites. Stacky is a .net client for that API. Stacky current...TwitterDotNet: TwitterDotNet is a TwitterLibrary for .NET Framework.ValiVIN: VIN (Vehicle Identification Number) Validator Validate Vin NumberWorkLogger: Simple work hour logger in WPFNew ReleasesAdverTool (Advertisement tool): Official releases: Please visit http://advertool.org to access the complete source code and downloads.Authentication Configuration Tool for SharePoint: Auth Config Tool (WSS 3.0, MOSS 2007 version): This tool automates the setup of dual authentication web applications in SharePoint that use Windows Authentication and Forms Based Authentication....BlogEngine.Net iPhone Theme: Version 0.1: Original version 0.1 from soundbbgBraintree Client Library: Braintree-2.3.0: Return AvsErrorResponseCode, AvsPostalCodeResponseCode, AvsStreetAddressResponseCode, CurrencyIsoCode, CvvResponseCode with Transaction Return Cr...BT UPnP Nat Library: Bt_Upnp Nat Library Alpha: Alpha Release of the libraryCNZK Library: Silverlight Behaviors - Deep Zoom Tag Filter: Behavior library for Silverlight 4 containing a Deep Zoom Tag Filter Behavior. Sample at the Expression Gallery http://gallery.expression.microsof...Demina: Demina Binaries version 0.2: Updated binaries. 
This release contains all of the new features, including simple animation transitions.DTLoggedExec: 1.0.0.2: -Fixed a bug that prevented loading packages from SSIS Package Store -Added support for {filename} placeholder in both Data Flow Profiling and CSV ...DVD Swarm: v0.8.10.611: Initial release, mostly stable.Exchange 2010 RBAC Editor (RBAC GUI) - updated on 6/11/2010: RBAC Editor 0.9.5.1: now supports creating and editing Role Assignment Policies; rest of the stuff is the same - still a lot of way to go :) Please use email address i...Extend SmallBasic: Teaching Extensions v.021: Compatible with SmallBasic v0.9 Lame version of TicTacToe Added - more coming later.Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.1.1 GA Released: Hi, Today we are releasing Visifire 3.1.1 GA with the following features: * Logarithmic Axis * ShowIndicator() in Chart. * HideIndica...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.4 GA Released: Hi, Today we are releasing Visifire 3.1.1 GA with the following features: Logarithmic Axis ShowIndicator() in Chart. HideIndicator() in Chart...Keep Focused - an enhanced tool for Time Management using Pomodoro Technique: Release 0.3.1 Alpha: Release 0.3.1 Alpha Technical patch. The previous release 0.3 Alpha had some errors and missing features. It was probably not build from the source...Mesopotamia Experiment: Mesopotamia 1.2.96: Bug Fixes - Fixed duplicate cells being added on creating new cells via mutations - Fixed bug where organisms without IO synapses where getting ios...NLog - Advanced .NET Logging: Nightly Build 2010.06.11.001: Changes since the last build:No changes. Unit test results:Passed 243/243 (100%) Passed 243/243 (100%) Passed 267/267 (100%) Passed 269/269 (100%)...Partial rendering control using JQuery: JQuery Web Control V 1.0: This is the first release of the code. It includes the source code and a web application to see how it worksphpxw: Phpxw2.0: 框架目录说明 ./_mod 模块存放目录 ./phpxw/ 框架核心目录 ./phpxw/common/ 框架核心函数 ./phpxw/system/ 框架核心基础类存放目录 ./phpxw/userlib/ 用户继承类存放目录 ./temp...Questionable Content Screensaver: Questionable Content Screensaver: Should be pretty self explanatory, install the appropriate version for your computer (x64 or x86). Features Include Cache comics for offline viewi...Quick Performance Monitor: Version 1.4.1: Added option to change the 'minimum' maximum value visible on the graph at run-time. Also fixed a number of other bugs.Refix - .NET dependency management: Refix v0.1.0.82 ALPHA: This has now been run against a real life project to tease out some of the issues. While this remains alpha software, which you use at your own ris...Rhyduino - Arduino and Managed Code: Beta Release (v0.8.2): ContentsSample Project - Demonstrates basic functionality and is flooded with code comments, so it's capable of being used as a learning tool. It d...Runes of Magic - Heilerrechner: Rom_Heiler_0.1: Erste Version von "RoM Heilerrechner". .Net 4.0 Framework wird vorausgesetzt. 
Das erhälst du hier: http://www.microsoft.com/downloads/details.aspx?...Semagsoft Calculator: 2.0: new theme and bug fix'sSilverlight Reporting: Release 2: Updated to correct issue in report footer xaml, and to add support for a calculated report footer.Stacky StackApps .Net Client Library: Beta Preview: This is a beta preview to go along with the StackApps beta.TwitterDotNet: TwitterDotNet Library: first versionUnOfficial AW Wrapper dot Net: Aw Wrapper 1.0.0.0 (5.0): New Functions :DValiVIN: ValiVIN first release: First Iteration. METHODS: IsValid(string vin) - Checks if a string is a valid VIN (returns true or false) GetCheckSumValue(string vin) - Returns...VCC: Latest build, v2.1.30611.0: Automatic drop of latest buildViewModelSupport: ViewModelSupport 1.0: Version 1.0 More information: http://houseofbilz.net/archives/2010/05/08/adventures-in-mvvm-my-viewmodel-base/ http://houseofbilz.net/archives/201...VolgaTransTelecomClient: v.1.0.3.0: v.1.0.3.0WCF Client Generator: Version 0.9.3.19259: Changed: - Always generate full type names for parameters and return typesWCF Client Generator: Version 0.9.3.21153: Fixed: - Service contracts namespace generation Added: - Templates assembly code base read from configurationXen: Graphics API for XNA: Xen 2.0 ALPHA: This is a very early alpha for Xen 2.0. Please note: The documentation for this alpha has not been updated yet. Xen 2.0 is not backwards compatib...ZGuideTV.NET: ZGuideTV.NET 0.93: Vendredi 11 avril 2010 (ZGuideTV.NET bêta 9 build 0.93) - English below Ajout : - Classement du contenu dans la description (affichage légende si...Most Popular ProjectsCAML GeneratorSharePoint Geographic Data VisualizerDbIdiom for ADO.NET CorestudyDTSRun Job RunnerXBStudio.asp.net.automationSilverlight load on demand with MEFCloud Business ServicesSharePoint 2010 Taxonomy Import UtilitySTS Federation Metadata EditorMost Active ProjectsRhyduino - Arduino and Managed Codepatterns & practices – Enterprise LibraryjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog ModuleCommunity Forums NNTP bridgeCassandraemonBlogEngine.NETMediaCoder.NETMicrosoft Silverlight Media FrameworkAndrew's XNA Helpers

    Read the article

  • Introducing… SharePress!

    - by Bil Simser
    For those that follow me I’ve been away from blogging and twittering for a couple of months. This is the reason. For the last few months I’ve been working with a cross-functional team putting together a new product from the people that run WordPress, the free premiere blogging platform. The result is a new product we call SharePress, a highly extensible blogging and content management platform with the usability of WordPress and the power of SharePoint combined into a single product. SharePress gives you SharePoint sites that are SEO-friendly delivered with a Web 2.0 ease of use, leveraging all of the existing abilities of SharePoint and WordPress that we know today. The Reason Back in December I was approached by the WordPress team about building a new platform that took advantage of the power of SharePoint but the ease of WordPress. I’m no stranger to WordPress and it’s 5 minute no-holds-barred install (I’ve always wanted SharePoint to do this!) and I run my personal blog on WordPress as does my better half, Princess Jenn. There’s always been a pitch by so-called Web 2.0 applications to deliver the power of SharePoint but the ease of [insert product here] over the past year or so. I checked each and every one of them out, but they fell woefully short when it came to SharePoint’s document management, versioning, and customization. They try, but it’s never been up to par in my books. On the flipside, SharePoint has always been tops in collaboration in the Enterprise but it’s painful to develop web parts, UI customization can be tricky, and there’s just no user community for something as simple as themes and designs. The Product Enter SharePress. Is it SharePoint? Is it WordPress? It’s both, and neither. Everything you like about both products are there but this is a bold new product that is positioned to bring SharePoint to the masses while maintaining the fidelity of an Enterprise 2.0 collaboration platform. SharePress delivers on all fronts including: The ability to leverage any WordPress/Joomla/Drupal/DotNetNuke themes and skins inside of SharePoint Run any WordPress/Drupal/Joomla/DotNetNuke/SharePoint plug-in/module/web part/feature works out of the box with SharePress SEO-friendly URLs and pages Permalinks for all content All the features of SharePoint Server 2010 (including InfoPath, Excel, and Access services) included in the price Small deployment footprint. You decide how much to deploy and where. Independent Database Abstraction Layer (iDal) that allows you to deploy to SQL Server 2005/2008, MySQL, and PostgreSQL Portable Rendering Engine Layer (PREL) so you host .NET or PHP on Apache or IIS (version 7 or higher). The install feature is built around WordPress and it’s famous 5-minute install (actually, it’s never taken me more than 1 minute). SharePress installs with two screens after the files are uploaded to your server (which can be done entirely using FTP): After you enter two fields of information click “Install SharePress” and you’ll be done: No mess, no fuss, no complicated dependencies, and no server access required! How simpler could this be? The Technology WordPress plug-ins and themes working with SharePoint? Of course! The answer is IronPython which has now reached a maturity level capable of doing on the fly code language conversions. SharePress is a brand new product not built on top of any previous platform but leverages all the power of each of those applications through a patent pending technique called SharePress Multi-plAtfoRm Technology (SMART). 
SMART will convert PHP code on the fly into Python (using SWIG as an intermediate processor) which is then compiled to MSIL and then delivered back as an ASP.NET MVC application (output is C# or VB.NET, but you can build your own SMART converter to output a different language). Sound complicated? It is, but it’s all behind the scenes and you don’t have to worry about a thing. This image illustrates the technology stack and process: So users can load up out of the box PHP themes and plug-ins from the WordPress/Joomla/Drupal community into the SMART converter and output MSIL that is used by the SharePress engine and rendered on the fly to the end user. Supported PHP versions are 4.xx and 5.xx with version 6 support to come when it’s released. Similarly you can take any .NET application, DotNetNuke Module, SharePoint Web Part or event handler and feed it into the converter to output the same. Everything is reverse compiled into MSIL so it becomes technology agnostic. No source code access is needed and the SMART converter can handle obfuscated .NET assemblies that were built with .NET 1.0, 1.1, 2.0, 3.5, and 4.0. With this technology you can also with the flip of a switch have the output create PHP pages for you. This allows you to run SharePress on Unix based systems running PHP and MySQL, allowing you to deliver your SharePoint like experience to your users with a $0 infrastructure footprint. Here’s SharePress with the default WordPress post imported then a stock SharePoint collaboration site was imported. The site was then applied with the default Kubrick theme from WordPress. The Features Deploy any of the freely available 100,000 WordPress/Joomla/Drupal themes instantly to your runtime SharePress environment and preview or activate them right from your browser. Built-in Web 2.0 jQuery Enabled End User and Administrator Web Interface. Never have to remote into a server again! Run any SharePoint Web Part or Event Handler directly without modification or access to source code in SharePress. Use any WordPress/Joomla/Drupal plug-in directly in SharePress, no local admin or access to server. Just upload and activate. Upload and Activate any SharePoint Solution Package to any site remotely. No rebuilding. Changes made to sites require no compiling or rebuilding and are published immediately. Password Protected Content. You can give passwords to individual posts, articles, pages, documents, forms, and list items. A powerful polymorphic Captcha system backs the security interface and vendors can easily tie into smart card readers, fingerprint readers, and retina scanners for authorization and identification. OpenID, Windows Live, and Windows Authentication are supported out of the box. Infinitely customizable and extensible. You can leverage plug-ins from the open source community to do practically anything, all configured and uploaded via the browser. Additionally the developer API (available soon) allows you to build extensions in .NET, PHP, and Python with little effort. Easy Importing. We have importers for Blogger, WordPress, Drupal, Joomla, DotNetNuke, and SharePoint so you can populate your site quickly and easily with full metadata modeling and creation. Banner Management. It’s easy to setup banners for your web site complete with impression numbers, special URLs, and more. Menu Manager. 
The Menu Manager allows you to create as many menus as you want, each one can be associated to specific audiences or roles and then be styled across multiple contexts including the same menu delivered as a fly out, rollover, drop down, and just about any navigation you can think of. Collaborative ShareBook. Our exclusive book feature allows you to setup a “book” and then authorize individuals to contribute content. Permalinks. All content in SharePress has a permanent or “perma link” associated with it so people can link to it freely without fear of broken links. Apache or IIS, Unix / Linux / BSD / Solaris / Windows / Mac OS X support. Deliver SharePress the way *you* want from the platform *you* decide. Database Independence. We know people wanted to run on any database platform so SharePress is built on top of a database abstraction layer that allows you to run on SQL Server, MySQL, PostgreSQL. Other databases can be supported by writing a supporting database script consisting of fourteen function calls. The script can be written in Perl, Python, AWK, PowerShell, Unix Shell scripts, VBA, or simple DOS batch files. The Team SharePress is the work of a lot of people in both the WordPress and SharePoint community. I worked with a lot of SharePoint MVPs to create this new product as we really wanted to deliver the most compatible and feature rich system in a product that we would be proud of. Many thanks go out to Eli Bleeker, Todd Robillard, Scot Larson, Daniel Hillier, Shane Fox, Box Peran, Amanda English, and Bill Murray for doing the heavy lifting and all of their expertise and innovative thinking to get this product out. Licensing and Pricing SharePress is still in the final stages for pricing but we’re looking at a price point somewhere between $99-$100 to make it affordable for everyone. We plan to announce final pricing sometime in the next few weeks. There are no additional charges for Enterprise versions or additional features. Everything you see is what’s available and it’s just a matter of lighting up your site with whatever feature you want to enable. The product will not be open source but source code licenses will be available to ISVs who are interested in interfacing with the API at a low level. Cost will be $25,000 USD per developer and gives you complete access to the source code to the SharePress Foundation System and the .NET 4.0 Framework source code. Conclusion We hope you enjoy the launch of SharePress as the new premium blogging and content management platform for both Intranets and the Internet. We think we’ve build the best of breed solutions here and made it easy for anyone to get started with a minimal of infrastructure but allow the scalability of SharePress to shine through in the Enterprise 2.0 world. We encourage your feedback so please leave comments as to what you’re looking for in this system as we’re always evolving it to make it a better product for everyone.

    Read the article

  • Installing a configuration profile on iPhone - programmatically

    - by Seva Alekseyev
    Hi all, I would like to ship a configuration profile with my iPhone application, and install it if needed. Mind you, we're talking about a configuration profile, not a provisioning profile. First off, such a task is possible. If you place a config profile on a Web page and click on it from Safari, it will get installed. If you e-mail a profile and click the attachment, it will install as well. "Installed" in this case means "The installation UI is invoked" - but I could not even get that far. So I was working under the theory that initiating a profile installation involves navigating to it as a URL. I added the profile to my app bundle. A) First, I tried [sharedApp openURL] with the file:// URL into my bundle. No such luck - nothing happens. B) I then added an HTML page to my bundle that has a link to the profile, and loaded it into a UIWebView. Clicking on the link does nothing. Loading an identical page from a Web server in Safari, however, works fine - the link is clickable, the profile installs. I provided a UIWebViewDelegate, answering YES to every navigation request - no difference. C) Then I tried to load the same Web page from my bundle in Safari (using [sharedApp openURL] - nothing happens. I guess, Safari cannot see files inside my app bundle. D) Uploading the page and the profile on a Web server is doable, but a pain on the organizational level, not to mention an extra source of failures (what if no 3G coverage? etc.). So my big question is: how do I install a profile programmatically? And the little questions are: what can make a link non-clickable within a UIWebView? Is it possible to load a file:// URL from my bundle in Safari? If not, is there a local location on iPhone where I can place files and Safari can find them? EDIT on B): the problem is somehow in the fact that we're linking to a profile. I renamed it from .mobileconfig to .xml ('cause it's really XML), altered the link. And the link worked in my UIWebView. Renamed it back - same stuff. It looks as if UIWebView is reluctant to do application-wide stuff - since installation of the profile closes the app. I tried telling it that it's OK - by means of UIWebViewDelegate - but that did not convince. Same behavior for mailto: URLs within UIWebView. For mailto: URLs the common technique is to translate them into [openURL] calls, but that doesn't quite work for my case, see scenario A. For itms: URLs, however, UIWebView works as expected... EDIT2: tried feeding a data URL to Safari via [openURL] - does not work, see here: http://stackoverflow.com/questions/641461/iphone-open-data-url-in-safari EDIT3: found a lot of info on how Safari does not support file:// URLs. UIWebView, however, very much does. Also, Safari on the simulator open them just fine. The latter bit is the most frustrating.

    Read the article

  • April 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the April 2013 release of the Ajax Control Toolkit. For this release, we focused on improving two controls: the AjaxFileUpload and the MaskedEdit controls. You can download the latest release from CodePlex at http://AjaxControlToolkit.CodePlex.com or, better yet, you can execute the following NuGet command within Visual Studio 2010/2012: There are three builds of the Ajax Control Toolkit: .NET 3.5, .NET 4.0, and .NET 4.5. A Better AjaxFileUpload Control We completely rewrote the AjaxFileUpload control for this release. We had two primary goals. First, we wanted to support uploading really large files. In particular, we wanted to support uploading multi-gigabyte files such as video files or application files. Second, we wanted to support showing upload progress on as many browsers as possible. The previous version of the AjaxFileUpload could show upload progress when used with Google Chrome or Mozilla Firefox but not when used with Apple Safari or Microsoft Internet Explorer. The new version of the AjaxFileUpload control shows upload progress when used with any browser. Using the AjaxFileUpload Control Let me walk-through using the AjaxFileUpload in the most basic scenario. And then, in following sections, I can explain some of its more advanced features. Here’s how you can declare the AjaxFileUpload control in a page: <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:AjaxFileUpload ID="AjaxFileUpload1" AllowedFileTypes="mp4" OnUploadComplete="AjaxFileUpload1_UploadComplete" runat="server" /> The exact appearance of the AjaxFileUpload control depends on the features that a browser supports. In the case of Google Chrome, which supports drag-and-drop upload, here’s what the AjaxFileUpload looks like: Notice that the page above includes two Ajax Control Toolkit controls: the AjaxFileUpload and the ToolkitScriptManager control. You always need to include the ToolkitScriptManager with any page which uses Ajax Control Toolkit controls. The AjaxFileUpload control declared in the page above includes an event handler for its UploadComplete event. This event handler is declared in the code-behind page like this: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to App_Data folder AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + e.FileName)); } This method saves the uploaded file to your website’s App_Data folder. I’m assuming that you have an App_Data folder in your project – if you don’t have one then you need to create one or you will get an error. There is one more thing that you must do in order to get the AjaxFileUpload control to work. The AjaxFileUpload control relies on an HTTP Handler named AjaxFileUploadHandler.axd. 
You need to declare this handler in your application’s root web.config file like this: <configuration> <system.web> <compilation debug="true" targetFramework="4.5" /> <httpRuntime targetFramework="4.5" maxRequestLength="42949672" /> <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </httpHandlers> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/> </handlers> <security> <requestFiltering> <requestLimits maxAllowedContentLength="4294967295"/> </requestFiltering> </security> </system.webServer> </configuration> Notice that the web.config file above also contains configuration settings for the maxRequestLength and maxAllowedContentLength. You need to assign large values to these configuration settings — as I did in the web.config file above — in order to accept large file uploads. Supporting Chunked File Uploads Because one of our primary goals with this release was support for large file uploads, we added support for client-side chunking. When you upload a file using a browser which fully supports the HTML5 File API — such as Google Chrome or Mozilla Firefox — then the file is uploaded in multiple chunks. You can see chunking in action by opening F12 Developer Tools in your browser and observing the Network tab: Notice that there is a crazy number of distinct post requests made (about 360 distinct requests for a 1 gigabyte file). Each post request looks like this: http://localhost:24338/AjaxFileUploadHandler.axd?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&fileId=B7CCE31C-6AB1-BB28-2940-49E0C9B81C64 &fileName=Sita_Sings_the_Blues_480p_2150kbps.mp4&chunked=true&firstChunk=false Each request posts another chunk of the file being uploaded. Notice that the request URL includes a chunked=true parameter which indicates that the browser is breaking the file being uploaded into multiple chunks. Showing Upload Progress on All Browsers The previous version of the AjaxFileUpload control could display upload progress only in the case of browsers which fully support the HTML5 File API. The new version of the AjaxFileUpload control can display upload progress in the case of all browsers. If a browser does not fully support the HTML5 File API then the browser polls the server every few seconds with an Ajax request to determine the percentage of the file that has been uploaded. This technique of displaying progress works with any browser which supports making Ajax requests. There is one catch. Be warned that this new feature only works with the .NET 4.0 and .NET 4.5 versions of the AjaxControlToolkit. To show upload progress, we are taking advantage of the new ASP.NET HttpRequest.GetBufferedInputStream() and HttpRequest.GetBufferlessInputStream() methods which are not supported by .NET 3.5. For example, here is what the Network tab looks like when you use the AjaxFileUpload with Microsoft Internet Explorer: Here’s what the requests in the Network tab look like: GET /WebForm1.aspx?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&poll=1&guid=9206FF94-76F9-B197-D1BC-EA9AD282806B HTTP/1.1 Notice that each request includes a poll=1 parameter. This parameter indicates that this is a polling request to get the size of the file buffered on the server. 
Here’s what the response body of a request looks like when about 20% of a file has been uploaded: Buffering to a Temporary File When you upload a file using the AjaxFileUpload control, the file upload is buffered to a temporary file located at Path.GetTempPath(). When you call the SaveAs() method, as we did in the sample page above, the temporary file is copied to a new file and then the temporary file is deleted. If you don’t call the SaveAs() method, then you must ensure that the temporary file gets deleted yourself. For example, if you want to save the file to a database then you will never call the SaveAs() method and you are responsible for deleting the file. The easiest way to delete the temporary file is to call the AjaxFileUploadEventArgs.DeleteTemporaryData() method in the UploadComplete handler: protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Save uploaded file to a database table e.DeleteTemporaryData(); } You also can call the static AjaxFileUpload.CleanAllTemporaryData() method to delete all temporary data and not only the temporary data related to the current file upload. For example, you might want to call this method on application start to ensure that all temporary data is removed whenever your application restarts. A Better MaskedEdit Extender This release of the Ajax Control Toolkit contains bug fixes for the top-voted issues related to the MaskedEdit control. We closed over 25 MaskedEdit issues. Here is a complete list of the issues addressed with this release: · 17302 MaskedEditExtender MaskType=Date, Mask=99/99/99 Undefined JS Error · 11758 MaskedEdit causes error in JScript when working with 2-digits year · 18810 Maskededitextender/validator Date validation issue · 23236 MaskEditValidator does not work with date input using format dd/mm/yyyy · 23042 Webkit based browsers (Safari, Chrome) and MaskedEditExtender · 26685 MaskedEditExtender@(ClearMaskOnLostFocus=false) adds a zero character when you each focused to target textbox · 16109 MaskedEditExtender: Negative amount, followed by decimal, sets value to positive · 11522 MaskEditExtender of AjaxtoolKit-1.0.10618.0 does not work properly for Hungarian Culture · 25988 MaskedEditExtender – CultureName (HU-hu) > DateSeparator · 23221 MaskedEditExtender date separator problem · 15233 Day and month swap in Dynamic user control · 15492 MaskedEditExtender with ClearMaskOnLostFocus and with MaskedEditValidator with ClientValidationFunction · 9389 MaskedEditValidator – when on no entry · 11392 MaskedEdit Number format messed up · 11819 MaskedEditExtender erases all values beyond first comma separtor · 13423 MaskedEdit(Extender/Validator) combo problem · 16111 MaskedEditValidator cannot validate date with DayMonthYear in UserDateFormat of MaskedEditExtender · 10901 MaskedEdit: The months and date fields swap values when you hit submit if UserDateFormat is set. · 15190 MaskedEditValidator can’t make use of MaskedEditExtender’s UserDateFormat property · 13898 MaskedEdit Extender with custom date type mask gives javascript error · 14692 MaskedEdit error in “yy/MM/dd” format. · 16186 MaskedEditExtender does not handle century properly in a date mask · 26456 MaskedEditBehavior. 
ConvFmtTime : function(input,loadFirst) fails if this._CultureAMPMPlaceholder == “” · 21474 Error on MaskedEditExtender working with number format · 23023 MaskedEditExtender’s ClearMaskOnLostFocus property causes problems for MaskedEditValidator when set to false · 13656 MaskedEditValidator Min/Max Date value issue Conclusion This latest release of the Ajax Control Toolkit required many hours of work by a team of talented developers. I want to thank the members of the Superexpert team for the long hours which they put into this release.
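As a small variation on the UploadComplete handler shown earlier in this post (my own sketch, not part of the toolkit samples), the handler below strips any client-supplied directory part from e.FileName and makes the stored name unique before calling SaveAs, so two simultaneous uploads of the same file name do not collide. The naming scheme is only an illustration.

protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
{
    // Use only the file name portion supplied by the client...
    string safeName = System.IO.Path.GetFileName(e.FileName);
    // ...and prefix it with a GUID so concurrent uploads cannot overwrite each other.
    string uniqueName = System.Guid.NewGuid().ToString("N") + "_" + safeName;
    AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + uniqueName));
}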

    Read the article

  • how do I access XHR responseBody from Javascript?

    - by Cheeso
    I've got a web page that uses XMLHttpRequest to download a binary resource. Because it's binary I'm trying to use xhr.responseBody to access the bytes. I've seen a few posts suggesting that it's impossible to access the bytes directly from Javascript. This sounds crazy to me. Weirdly, xhr.responseBody is accessible from VBScript, so the suggestion is that I must define a method in VBScript in the webpage, and then call that method from Javascript. See jsdap for one example. var IE_HACK = (/msie/i.test(navigator.userAgent) && !/opera/i.test(navigator.userAgent)); if (IE_HACK) document.write('<script type="text/vbscript">\n\ Function BinaryToArray(Binary)\n\ Dim i\n\ ReDim byteArray(LenB(Binary))\n\ For i = 1 To LenB(Binary)\n\ byteArray(i-1) = AscB(MidB(Binary, i, 1))\n\ Next\n\ BinaryToArray = byteArray\n\ End Function\n\ </script>'); var xml = (window.XMLHttpRequest) ? new XMLHttpRequest() // Mozilla/Safari/IE7+ : (window.ActiveXObject) ? new ActiveXObject("MSXML2.XMLHTTP") // IE6 : null; // Commodore 64? xml.open("GET", url, true); if (xml.overrideMimeType) { xml.overrideMimeType('text/plain; charset=x-user-defined'); } else { xml.setRequestHeader('Accept-Charset', 'x-user-defined'); } xml.onreadystatechange = function() { if (xml.readyState == 4) { if (!binary) { callback(xml.responseText); } else if (IE_HACK) { // call a VBScript method to copy every single byte callback(BinaryToArray(xml.responseBody).toArray()); } else { callback(getBuffer(xml.responseText)); } } }; xml.send(''); Is this really true? The best way? copying every byte? For a large binary stream that's not gonna be very efficient. There is also a possible technique using ADODB.Stream, which is a COM equivalent of a MemoryStream. See here for an example. It does not require VBScript but does require a separate COM object. if (typeof (ActiveXObject) != "undefined" && typeof (httpRequest.responseBody) != "undefined") { // Convert httpRequest.responseBody byte stream to shift_jis encoded string var stream = new ActiveXObject("ADODB.Stream"); stream.Type = 1; // adTypeBinary stream.Open (); stream.Write (httpRequest.responseBody); stream.Position = 0; stream.Type = 1; // adTypeBinary; stream.Read.... /// ???? what here } I don't think that's gonna work - ADODB.Stream is disabled on most machines these days. In The IE8 developer tools - the IE equivalent of Firebug - I can see the responseBody is an array of bytes and I can even see the bytes themselves. The data is right there. I don't understand why I can't get to it. Is it possible for me to read it with responseText? hints? (other than defining a VBScript method)

    Read the article

  • CodePlex Daily Summary for Sunday, October 06, 2013

    CodePlex Daily Summary for Sunday, October 06, 2013Popular ReleasesMedia Companion: Media Companion MC3.580b: Fixed IMDB Actor names and Actor Roles, empty <actor> entries in movie nfo, and actor scraping during initial movie scrape. Revision HistoryEvent-Based Components AppBuilder: AB3.Iteration.53: Iteration 53 (Feature): Allow drag&drop of existing component (flow, step) from component list to chart. Duplicate names are automatically recognized and solved. By the color of the draged component you can see what kind of component (flow or step) is currently draged. New: AddExistingComponentFlow, PartDragDropEventHandler, ExistingStepPreparerPulse: Pulse 0.6.7.3: Pulse is now accepting donations. To donate by Bitcoin or PayPal see https://pulse.codeplex.com/wikipage?title=Donations Lots of updates in v0.6.7.3: (Feature) New option allows you to disable wallpaper changing when a full screen application is running. This way Pulse doesn't slow down/lag your videos and games :) (Fix) Some users were getting Wallbase errors when logging in. This has been fixed. (Feature) Right click a provider and you can now make a copy of it by selecting the "Dupl...MoreTerra (Terraria World Viewer): MoreTerra 1.11.1: Release 1.11.1 =========== =Bug Fixes= =========== Added more tile blocks (Clouds, crimstone) Added items (binoculars, rope, Pirahna Gun) Added ores (Lead, Tin) Chests now work, I broke them yesterday. =============== =Known Issues= =============== I am having trouble with new background walls. So you will see a red outline for crimson then a pink inside. Same with where I think the queen bee lives.VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest NEW: Added Menu Option to delete an Forum Account NEW: Added Support for "ImageTeam.org links FIXED: Fixed Ripping of http://forum.babeunion.com ForumsSimpleExcelReportMaker: Serm 0.03: SourceCode and Sample .Net Framework 3.5 AnyCPU compile.Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.fastJSON: v2.0.22: 2.0.22 - added .net 3.5 project - now compiling to 'output' directory - added signed assembly - version numbers will stay at 2.0.0.0 for drop in compatibility - file version will reflect the build number - bug fix deserializing to dictionaries instead of dataset when type is not definedResponsive SharePoint: Bootstrap 3 for SharePoint 2013 - Alpha 0.1: Bootstrap 3 for SharePoint 2013 Alpha version 0.1 NOTE - This is an alpha version, there are bound to be issues. Please help us solve them by contributing in our Discussion. Publishing - The source for Twitter Bootstrap 3.0.0 integrated into SharePoint 2013 for a site with Publishing enabled. Non-Publishing - A master page and branding assets for Twitter Bootstrap 3.0.0 integrated into SharePoint 2013 without Publishing enabled. PageLayoutSampleContent - Sample content for included page l...C++ AMP Conformance Test Suite: C++ AMP Conformance Test Suite 1.0.0: This release contains following changes from previous release: Removed the tests that were testing Microsoft specific behavior not part of open specification. The test suite now contains two folders, containing set of test cases, named ‘Tests’ and ‘TestsWithProp'. The set of tests under these two folders are identical except one difference. 
The set of test cases under directory ‘TestsWithProp’ makes use of ‘properties’ (which the compiler being tested should handle as mentioned in the open ...ASP.NET dhtmxGantt Class: dhtmlxGantt2.vb class: This is the latest class based on work performed. For more information read the project description and get the source files from dhtmlx.comExpressiveDataGenerators: Alpha 2: Fix serveral bugs, more testsQuickTestsFramework: 1.0.0: First release with stable API.VS Tiny Extension for TortoiseGit: 0.1c: + Icons revised + Push button disappeared when IDE loads the menu instead of toolbar. + Detected twice loading and prevented. + About box deprecated. + Next version will have major improvements. NEW: Visual Studio 2013 Support!BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????PayBox payment gateway provider for NB_Store: NB_Store_Gateway_01.00.02_PayBox: Paybox DNN module installMicrosoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated) Version 7.1005September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Important UpdateThis release has been updated to fix three issues: Up...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0New ProjectsBasic4Android (B4A) Charting Framework: dhtlmxCharts, GoogleCharts etc: Basic4Android (B4A) mobile charting framework.Client Meeting Tool: This site facilitate users to create and schedule meetings for an event.FoodScan: This app focuses on implementing diet monitoring application for Malaysian overweight and obese adolescents using AR technique on Windows Phone 8.Hello Team foundation server: Try to use team foundation server and compare it with GitKDG C# Password Generator: C# password generator developed by KDG.KDG's C# Password Generator: C# password generator that uses Random to create strong passwords based on user input.Meta: Meta is the EECS 111 programming language at Northwestern University. Meta is a dynamically-typed scheme-like language built on .NET. This is its home.Monoscript: Allows using Mono and C# for scripting on Unix. Source files are automatically compiled and executed. 
Caching is employed to avoid recompiling unchanged files.mtdsharp: Developed by Chris Hyndman, Alec KC, Lu Huang and Merrill Huang for CS 196 at the University of Illinois.Planr.me: Planr is a time management website currently in the early development stageProject Hermes: This very project is currently closed door and under core development. The project description and other works would be published soon.Remindme for Windows Phone 8: Simple, open source Pocket client for Windows Phone 8Remindme for WinRT: Simple, open source Pocket client for WinRT and Windows 8.SQL Server Periodic Table with Molecules: This a SQL Server Database intended to be used by students and researchers for Chemistry and Physics projects. Tesseract: The Tesseract Project aims to easily display and rotate 4 Dimensional Objects in 3D Test Case Manager: A Windows Application which extends Microsoft Test Manager. Features: * one click search * test case export * better test case reader * extended edit modeTest Project for Assignment 1: This is a test ProjectTorah File: Torah File is an project that allow you to use Torah Bible and Mishneh for the computer by type of the programming languages that will be able to use the ToUSAePay nopCommerce Payment Plugin: A simple plugin for nopCommerce to use the USAePay SOAP API interface for processing credit cards.Veterinaria Dr Leo: Este es nuestro Proyecto del curso Calidad y Pruebas de Software 2013-2 Arevalo Ticlla, Susan. Chalán Malca, Elvis. Cruzado Asencio, Gustavo.Visual Studio Test Extensions: The Visual Studio Test Extensions provides extensions and tools for the Visual Studio MSTest engine. It allows to execute unit tests in a separate AppDomain.WSAAD7COM1052: Central repository for 7COM1052 - Web Scripting & Application Development (COM)wscc2013online: this is a project related to Web Application Development at the University of Hertfordshire

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65  | Next Page >