Search Results


  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is for obvious reasons very well used, so I thought if you are building packages in code the chances are you will be using it. The basic features of the task are quite straightforward to use: add the task and set some properties, just like any other. When you start interacting with variables though it can be a little harder to grasp, so these samples should see you through. Some of these more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I'll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block.

    This first sample just shows adding the task, setting the basic properties for a connection and of course a SQL statement.

        Package package = new Package();

        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");

        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");

        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;

        // Set required properties
        taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
        taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");

    For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too.

    Returning a single value with a Result Set

    The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier as we get compile time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value, one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.

        Package package = new Package();

        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");

        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");

        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;

        // Add variable to hold result value
        package.Variables.Add("Variable", false, "User", 0);

        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;

        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";

        // Set single row result set
        task.ResultSetType = ResultSetType.ResultSetType_SingleRow;

        // Add result set binding, map the id column to the variable
        task.ResultSetBindings.Add();
        IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
        resultBinding.ResultName = "id";
        resultBinding.DtsVariableName = "User::Variable";

    For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just a variation on this theme: set the property and map the result bindings as required.

    Parameter Mapping for SQL Statements

    This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.

        Package package = new Package();

        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");

        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");

        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;

        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;

        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";

        // Add variable to hold parameter value
        package.Variables.Add("Variable", false, "User", "sysrowsets");

        // Add input parameter binding
        task.ParameterBindings.Add();
        IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
        parameterBinding.DtsVariableName = "User::Variable";
        parameterBinding.ParameterDirection = ParameterDirections.Input;
        parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
        parameterBinding.ParameterName = "0";
        parameterBinding.ParameterSize = 255;

    For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You'll notice the data type has to be specified for the parameter (the IDTSParameterBinding.DataType property), and these type codes are connection specific too. The enumeration shown below, which I wrote several years ago, was probably put together by reverse engineering a package and the API header file, but I recently found a very handy post that covers more connection types for exactly this: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).

        /// <summary>
        /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
        /// </summary>
        private enum OleDBDataTypes
        {
            BYTE = 0x11,
            CURRENCY = 6,
            DATE = 7,
            DB_VARNUMERIC = 0x8b,
            DBDATE = 0x85,
            DBTIME = 0x86,
            DBTIMESTAMP = 0x87,
            DECIMAL = 14,
            DOUBLE = 5,
            FILETIME = 0x40,
            FLOAT = 4,
            GUID = 0x48,
            LARGE_INTEGER = 20,
            LONG = 3,
            NULL = 1,
            NUMERIC = 0x83,
            NVARCHAR = 130,
            SHORT = 2,
            SIGNEDCHAR = 0x10,
            ULARGE_INTEGER = 0x15,
            ULONG = 0x13,
            USHORT = 0x12,
            VARCHAR = 0x81,
            VARIANT_BOOL = 11
        }

    Download Sample code: ExecSqlPackage.cs (10KB)
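    To round things off, here is a minimal sketch of executing the finished package and reading the variable back, then saving it out as a .dtsx file. It assumes the package and variable from the samples above; the file path is just an illustrative placeholder.

        // using Microsoft.SqlServer.Dts.Runtime;

        // Execute the package in-process and check the outcome
        DTSExecResult result = package.Execute();
        if (result == DTSExecResult.Success)
        {
            // Read the value captured by the result set binding
            object value = package.Variables["User::Variable"].Value;
            Console.WriteLine(value);
        }

        // Persist the package to disk as a .dtsx file
        Application application = new Application();
        application.SaveToXml(@"C:\Temp\ExecSqlPackage.dtsx", package, null);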


  • How to use SharePoint modal dialog box to display Custom Page Part 3

    - by ybbest
    In the second part of the series, I showed you how to display and close a custom page in a SharePoint modal dialog using JavaScript, and how to display a message after the modal dialog is closed. In this post, I'd like to show you how to use SPLongOperation with the modal dialog box. You can download the source code here.

    1. Firstly, modify the element file as follows.

    2. In your code behind, implement a close dialog function as below (see the sketch after this post). This will close your modal dialog box once the button is clicked and display a status bar. Note that you need to use window.frameElement.commonModalDialogClose instead of SP.UI.ModalDialog.commonModalDialogClose.

    References: How to: Display a Page as a Modal Dialog Box
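    As a rough sketch, the close dialog function from step 2 can look like the following; the button handler name and the work inside the using block are placeholders rather than the actual sample code.

        // in the dialog page's code behind (requires Microsoft.SharePoint)
        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            // SPLongOperation shows the spinning "please wait" page while the work runs
            using (SPLongOperation operation = new SPLongOperation(this.Page))
            {
                operation.Begin();

                // ... the actual long-running work goes here (placeholder) ...

                // Close the dialog from inside its iframe and hand a result back
                // to the page that opened it; this is why frameElement is needed.
                operation.EndScript(
                    "window.frameElement.commonModalDialogClose(1, 'Operation completed');");
            }
        }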


  • jQuery 1.4 Opacity and IE Filters

    - by Rick Strahl
    Ran into a small problem today with my client side jQuery library after switching to jQuery 1.4. The problem was with a shadow plugin that I use to provide drop shadows for absolute elements – for Mozilla and WebKit browsers the -moz-box-shadow and -webkit-box-shadow CSS attributes are used, but for IE a manual element is created to provide the shadow that underlays the original element, along with a blur filter to provide the fuzziness in the shadow. Some of the key pieces are:

        var vis = el.is(":visible");
        if (!vis)
            el.show(); // must be visible to get .position
        var pos = el.position();

        if (typeof shEl.style.filter == "string")
            sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')');

        sh.show()
          .css({ position: "absolute",
                 width: el.outerWidth(),
                 height: el.outerHeight(),
                 opacity: opt.opacity,
                 background: opt.color,
                 left: pos.left + opt.offset,
                 top: pos.top + opt.offset });

    This has always worked in previous versions of jQuery, but with 1.4 the original filter no longer works. It appears that applying the opacity after the original filter wipes out the original filter. IOW, the opacity filter is not applied incrementally, but absolutely, which is a real bummer. Luckily the workaround is relatively easy: just switch the order in which the opacity and filter are applied. If I apply the blur after the opacity I get my correct behavior back, with both the opacity and the blur filter intact:

        sh.show()
          .css({ position: "absolute",
                 width: el.outerWidth(),
                 height: el.outerHeight(),
                 opacity: opt.opacity,
                 background: opt.color,
                 left: pos.left + opt.offset,
                 top: pos.top + opt.offset });

        if (typeof shEl.style.filter == "string")
            sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')');

    While this works, it still causes problems in other areas where opacity is implicitly set in code, such as for fade operations or, in the case of my shadow component, the style/property watcher that keeps the shadow and main object linked. Both of these may set the opacity explicitly, and that is still broken as it will effectively kill the blur filter. This seems like a really strange design decision by the jQuery team, since clearly the jQuery .css() function does the right thing for setting filters. Internally however, the opacity setting doesn't use .css(), instead hardcoding the filter, which given jQuery's usual flexibility and smart code seems really inappropriate. The following is from jQuery.js 1.4:

        var style = elem.style || elem, set = value !== undefined;

        // IE uses filters for opacity
        if ( !jQuery.support.opacity && name === "opacity" ) {
            if ( set ) {
                // IE has trouble with opacity if it does not have layout
                // Force it by setting the zoom level
                style.zoom = 1;

                // Set the alpha filter to set the opacity
                var opacity = parseInt( value, 10 ) + "" === "NaN" ? "" : "alpha(opacity=" + value * 100 + ")";
                var filter = style.filter || jQuery.curCSS( elem, "filter" ) || "";
                style.filter = ralpha.test(filter) ? filter.replace(ralpha, opacity) : opacity;
            }

            return style.filter && style.filter.indexOf("opacity=") >= 0 ?
                (parseFloat( ropacity.exec(style.filter)[1] ) / 100) + "" : "";
        }

    You can see here that the style is explicitly set in code rather than relying on $.css() to assign the value, resulting in the old filter getting wiped out. jQuery 1.3.2 looks a little different:

        // IE uses filters for opacity
        if ( !jQuery.support.opacity && name == "opacity" ) {
            if ( set ) {
                // IE has trouble with opacity if it does not have layout
                // Force it by setting the zoom level
                elem.zoom = 1;

                // Set the alpha filter to set the opacity
                elem.filter = (elem.filter || "").replace( /alpha\([^)]*\)/, "" ) +
                    (parseInt( value ) + '' == "NaN" ? "" : "alpha(opacity=" + value * 100 + ")");
            }

            return elem.filter && elem.filter.indexOf("opacity=") >= 0 ?
                (parseFloat( elem.filter.match(/opacity=([^)]*)/)[1] ) / 100) + '' : "";
        }

    Offhand I'm not sure why the latter works better, since it too is assigning the filter. However, when checking with the IE script debugger I can see that there are actually a couple of filter tags assigned when using jQuery 1.3.2, but only one when I use jQuery 1.4. Note also that the jQuery 1.3 compatibility plugin for jQuery 1.4 doesn't address this issue either.

    Resources: ww.jquery.js (shadow plug-in $.fn.shadow)

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in jQuery


  • SmtpClient and Locked File Attachments

    - by Rick Strahl
    Got a note a couple of days ago from a client using one of my generic routines that wraps SmtpClient. Apparently whenever a file has been attached to a message and emailed with SmtpClient, the file remains locked after the message has been sent. Oddly this particular issue hasn't cropped up before for me, although these routines are in use in a number of applications I've built.

    The wrapper I use was built mainly to backfit an old pre-.NET 2.0 email client I built using sockets to avoid the CDO nightmares of the .NET 1.x mail client. The current class retained the same class interface but now internally uses SmtpClient, and provides a flat property interface that makes it less verbose to send off email messages. File attachments in this interface are handled by providing a comma delimited list of files in an Attachments string property, which is then collected along with the other flat property settings and eventually passed on to SmtpClient in the form of a MailMessage structure. The gist of the code is something like this:

        /// <summary>
        /// Fully self contained mail sending method. Sends an email message by connecting
        /// and disconnecting from the email server.
        /// </summary>
        /// <returns>true or false</returns>
        public bool SendMail()
        {
            if (!this.Connect())
                return false;

            try
            {
                // Create and configure the message
                MailMessage msg = this.GetMessage();
                smtp.Send(msg);
                this.OnSendComplete(this);
            }
            catch (Exception ex)
            {
                string msg = ex.Message;
                if (ex.InnerException != null)
                    msg = ex.InnerException.Message;
                this.SetError(msg);
                this.OnSendError(this);
                return false;
            }
            finally
            {
                // close connection and clear out headers
                // SmtpClient instance nulled out
                this.Close();
            }

            return true;
        }

        /// <summary>
        /// Configures the message interface
        /// </summary>
        protected virtual MailMessage GetMessage()
        {
            MailMessage msg = new MailMessage();

            msg.Body = this.Message;
            msg.Subject = this.Subject;
            msg.From = new MailAddress(this.SenderEmail, this.SenderName);

            if (!string.IsNullOrEmpty(this.ReplyTo))
                msg.ReplyTo = new MailAddress(this.ReplyTo);

            // Send all the different recipients
            this.AssignMailAddresses(msg.To, this.Recipient);
            this.AssignMailAddresses(msg.CC, this.CC);
            this.AssignMailAddresses(msg.Bcc, this.BCC);

            if (!string.IsNullOrEmpty(this.Attachments))
            {
                string[] files = this.Attachments.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
                foreach (string file in files)
                {
                    msg.Attachments.Add(new Attachment(file));
                }
            }

            if (this.ContentType.StartsWith("text/html"))
                msg.IsBodyHtml = true;
            else
                msg.IsBodyHtml = false;

            msg.BodyEncoding = this.Encoding;

            // … additional code omitted

            return msg;
        }

    Basically this code collects all the property settings of the wrapper object and applies them to the SmtpClient and, in GetMessage(), to the individual MailMessage properties. Specifically notice that attachment filenames are converted from a comma-delimited string to filenames from which new attachments are created. The code as it's written however, will cause the problem with file attachments not being released properly. Internally .NET opens up stream handles and reads the files from disk to dump them into the email send stream. The attachments are always sent correctly but the local files are not immediately closed.
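    The symptom is easy to reproduce in isolation. Here is a minimal sketch (the file path and server name are placeholders): after Send() returns, the attachment file is still locked, and the File.Delete() call throws an IOException.

        using System.IO;
        using System.Net.Mail;

        class LockedAttachmentDemo
        {
            static void Main()
            {
                string file = @"C:\temp\report.pdf";   // placeholder path

                MailMessage msg = new MailMessage("me@example.com", "you@example.com",
                                                  "Report", "Report attached.");
                msg.Attachments.Add(new Attachment(file));

                SmtpClient smtp = new SmtpClient("localhost"); // placeholder server
                smtp.Send(msg);

                // Throws IOException: the attachment stream is still open because
                // the MailMessage (and its Attachments) were never disposed.
                File.Delete(file);
            }
        }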
    As you probably guessed, the issue is simply that some resources are not automatically disposed when sending is complete, and sure enough the following code change fixes the problem:

        // Create and configure the message
        using (MailMessage msg = this.GetMessage())
        {
            smtp.Send(msg);

            if (this.SendComplete != null)
                this.OnSendComplete(this);

            // or use an explicit msg.Dispose() here
        }

    The MailMessage object requires an explicit call to Dispose() (or a using() block as I have here) to force the attachment files to get closed. I think this is rather odd behavior for this scenario however. The code I use passes in filenames, and my expectation of an API that accepts file names is that it uses the files by opening and streaming them and then closing them when done. Why keep the streams open and require an explicit .Dispose() by the calling code, which is bound to lead to unexpected behavior just as my customer ran into? Any API level code should clean up as much as possible, and this is clearly not happening here, resulting in unexpected behavior. Apparently lots of other folks have run into this before, as I found based on a few Twitter comments on this topic.

    Odd to me too is that SmtpClient() doesn't implement IDisposable – it's only the MailMessage (and Attachments) that implement it and require it to clean up leftover resources like open file handles. This means that you couldn't even use a using() statement around the SmtpClient code to resolve this – instead you have to wrap it around the message object, which again is rather unexpected.

    Well, chalk that one up to another small unexpected behavior that wasted half an hour of my time – hopefully this post will help someone avoid this same half an hour of hunting and searching.

    Resources: Full code to SmtpClientNative (West Wind Web Toolkit Repository) | SmtpClient Documentation MSDN

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in .NET


  • Web Browser Control – Specifying the IE Version

    - by Rick Strahl
    I use the Internet Explorer Web Browser Control in a lot of my applications to display document type layout. HTML happens to be one of the most common document formats, and displaying data in this format – even in desktop applications – is often way easier than using normal desktop technologies.

    One issue the Web Browser Control has is that it's perpetually stuck in IE 7 rendering mode by default. Even though IE 8 and now 9 have significantly upgraded the IE rendering engine to be more CSS and HTML compliant, by default the Web Browser control will have none of it. IE 9 in particular – with its much improved CSS support and basic HTML 5 support – is a big improvement, and even though the IE control uses some of IE's internal rendering technology it's still stuck in the old IE 7 rendering by default. This applies whether you're using the Web Browser control in a WPF application, a WinForms app, or a FoxPro or VB classic application using the ActiveX control. Behind the scenes all these UI platforms use the COM interfaces and so you're stuck by those same rules.

    Rendering Challenged

    To see what I'm talking about, here are two screen shots rendering an HTML 5 doctype page that includes some CSS 3 functionality – rounded corners and border shadows – from an earlier post. One uses IE 9 as a standalone browser, and one uses a simple WPF form that includes the Web Browser control.

    IE 9 Browser:

    Web Browser control in a WPF form:

    The IE 9 page displays this HTML correctly – you see the rounded corners and shadow displayed. Obviously the latter rendering using the Web Browser control in a WPF application is a bit lacking. Not only are the new CSS features missing, but the page also renders in Internet Explorer's quirks mode, so all the margins, padding etc. behave differently by default, even though there's a CSS reset applied on this page. If you're building an application that intends to use the Web Browser control for a live preview of some HTML this is clearly undesirable.

    Feature Delegation via Registry Hacks

    Fortunately, starting with Internet Explorer 8 and later there's a fix for this problem via a registry setting. You can specify a registry key to indicate which rendering mode and version of IE should be used by that application. These are not global mind you – they have to be enabled for each application individually. There are two different sets of keys for 32 bit and 64 bit applications.

    32 bit:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
    Value Key: yourapplication.exe

    64 bit:
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
    Value Key: yourapplication.exe

    The value to set this key to is (taken from MSDN here), as decimal values:

    9999 (0x270F) – Internet Explorer 9. Webpages are displayed in IE9 Standards mode, regardless of the !DOCTYPE directive.
    9000 (0x2328) – Internet Explorer 9. Webpages containing standards-based !DOCTYPE directives are displayed in IE9 mode.
    8888 (0x22B8) – Webpages are displayed in IE8 Standards mode, regardless of the !DOCTYPE directive.
    8000 (0x1F40) – Webpages containing standards-based !DOCTYPE directives are displayed in IE8 mode.
    7000 (0x1B58) – Webpages containing standards-based !DOCTYPE directives are displayed in IE7 Standards mode.
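    If you prefer to create the key from code rather than by hand or from an installer, a minimal sketch looks like this; the executable name is a placeholder, writing under HKEY_LOCAL_MACHINE requires elevation, and the same value under HKEY_CURRENT_USER should also work per user (an assumption worth verifying):

        using Microsoft.Win32;

        static void EnableIe9Rendering()
        {
            const string keyPath =
                @"SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

            // Registry.CurrentUser instead of LocalMachine avoids requiring elevation
            using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
            {
                // "yourapplication.exe" is a placeholder for your EXE name
                key.SetValue("yourapplication.exe", 9000, RegistryValueKind.DWord);
            }
        }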
    The added key looks something like this in the Registry Editor:

    With this in place my Html Help Builder application, which has wwhelp.exe as its main executable, now works with HTML 5 and CSS 3 documents in the same way that Internet Explorer 9 does. Incidentally, I accidentally added an 'empty' DWORD value of 0 for my EXE name and that worked as well, giving me IE 9 rendering. Although not documented, I suspect 0 (or an invalid value) will default to the installed browser. I don't have a good way to test this, but if somebody could try this with IE 8 installed that would be great:

    What happens when setting 9000 with IE 8 installed?
    What happens when setting 0 with IE 8 installed?

    Don't forget to add Keys for Host Environments

    If you're developing your application in Visual Studio and you run the debugger, you may find that your application is still not rendering right, but if you run the actual generated EXE from Explorer or the OS command prompt it works. That's because when you run the debugger in Visual Studio it wraps your application into a debugging host container. For this reason you might want to also add another registry key for yourapp.vshost.exe on your development machine. If you're developing in Visual FoxPro, make sure you add a key for vfp9.exe to see the rendering adjustments in the Visual FoxPro development environment.

    Cleaner HTML - no more HTML mangling!

    There are a number of additional benefits to setting up rendering of the Web Browser control to the IE 9 engine (or even the IE 8 engine) beyond the obvious rendering functionality. IE 9 actually returns your HTML in something that resembles the original HTML formatting, as opposed to the IE 7 default format, which mangled the original HTML content. If you do the following in the WPF application:

        private void button2_Click(object sender, RoutedEventArgs e)
        {
            dynamic doc = this.webBrowser.Document;
            MessageBox.Show(doc.body.outerHtml);
        }

    you get different output depending on the rendering mode active. With the default IE 7 rendering you get:

        <BODY><DIV>
        <H1>Rounded Corners and Shadows - Creating Dialogs in CSS</H1>
        <DIV class=toolbarcontainer><A class=hoverbutton href="./"><IMG src="../../css/images/home.gif"> Home</A> <A class=hoverbutton href="RoundedCornersAndShadows.htm"><IMG src="../../css/images/refresh.gif"> Refresh</A> </DIV>
        <DIV class=containercontent>
        <FIELDSET><LEGEND>Plain Box</LEGEND><!-- Simple Box with rounded corners and shadow -->
        <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow">
        <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox">Simple Rounded Corner Box. </DIV></DIV></FIELDSET>
        <FIELDSET><LEGEND>Box with Header</LEGEND>
        <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow">
        <DIV class="gridheaderleft roundbox-top">Box with a Header</DIV>
        <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox-bottom">Simple Rounded Corner Box. </DIV></DIV></FIELDSET>
        <FIELDSET><LEGEND>Dialog Style Window</LEGEND>
        <DIV style="POSITION: relative; WIDTH: 450px" id=divDialog class="dialog boxshadow" jQuery16107208195684204002="2">
        <DIV style="POSITION: relative" class=dialog-header>
        <DIV class=closebox></DIV>User Sign-in
        <DIV class=closebox jQuery16107208195684204002="3"></DIV></DIV>
        <DIV class=descriptionheader>This dialog is draggable and closable</DIV>
        <DIV class=dialog-content><LABEL>Username:</LABEL> <INPUT name=txtUsername value=" ">
        <LABEL>Password</LABEL> <INPUT name=txtPassword value=" ">
        <HR>
        <INPUT id=btnLogin value=Login type=button> </DIV>
        <DIV class=dialog-statusbar>Ready</DIV></DIV></FIELDSET>
        </DIV>
        <SCRIPT type=text/javascript>
            $(document).ready(function () {
                $("#divDialog")
                    .draggable({ handle: ".dialog-header" })
                    .closable({ handle: ".dialog-header",
                        closeHandler: function () {
                            alert("Window about to be closed.");
                            return true;  // true closes - false leaves open
                        }
                    });
            });
        </SCRIPT>
        </DIV></BODY>

    Now lest you think I'm out of my mind and create complete whacky HTML rooted in the last century, here's the IE 9 rendering mode output, which looks a heck of a lot cleaner and a lot closer to my original HTML of the page I'm accessing:

        <body>
        <div>
            <h1>Rounded Corners and Shadows - Creating Dialogs in CSS</h1>
            <div class="toolbarcontainer">
                <a class="hoverbutton" href="./"> <img src="../../css/images/home.gif"> Home</a>
                <a class="hoverbutton" href="RoundedCornersAndShadows.htm"> <img src="../../css/images/refresh.gif"> Refresh</a>
            </div>
            <div class="containercontent">
            <fieldset>
                <legend>Plain Box</legend>
                <!-- Simple Box with rounded corners and shadow -->
                <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">
                    <div style="background: khaki;" class="boxcontenttext roundbox">
                        Simple Rounded Corner Box.
                    </div>
                </div>
            </fieldset>
            <fieldset>
                <legend>Box with Header</legend>
                <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">
                    <div class="gridheaderleft roundbox-top">Box with a Header</div>
                    <div style="background: khaki;" class="boxcontenttext roundbox-bottom">
                        Simple Rounded Corner Box.
                    </div>
                </div>
            </fieldset>
            <fieldset>
                <legend>Dialog Style Window</legend>
                <div style="width: 450px; position: relative;" id="divDialog" class="dialog boxshadow">
                    <div style="position: relative;" class="dialog-header">
                        <div class="closebox"></div>
                        User Sign-in
                        <div class="closebox"></div>
                    </div>
                    <div class="descriptionheader">This dialog is draggable and closable</div>
                    <div class="dialog-content">
                        <label>Username:</label>
                        <input name="txtUsername" value=" " type="text">
                        <label>Password</label>
                        <input name="txtPassword" value=" " type="text">
                        <hr/>
                        <input id="btnLogin" value="Login" type="button">
                    </div>
                    <div class="dialog-statusbar">Ready</div>
                </div>
            </fieldset>
            </div>
        <script type="text/javascript">
            $(document).ready(function () {
                $("#divDialog")
                    .draggable({ handle: ".dialog-header" })
                    .closable({ handle: ".dialog-header",
                        closeHandler: function () {
                            alert("Window about to be closed.");
                            return true;  // true closes - false leaves open
                        }
                    });
            });
        </script>
        </div>
        </body>

    IOW, in IE9 rendering mode the returned HTML is much closer (but not identical) to the original HTML from the page on the Web that we're reading from.

    As a side note: unfortunately, the browser feature emulation can't be applied against the Html Help (CHM) engine in Windows, which uses the Web Browser control (or COM interfaces anyway) to render Html Help content. I tried setting up hh.exe, which is the help viewer, to use IE 9 rendering, but a help file generated with CSS3 features will simply show in IE 7 mode. Bummer - this would have been a nice quick fix to allow help content served from CHM files to look better.

    HTML Editing leaves HTML formatting intact

    In the same vein, if you do any inline HTML editing in the control by setting content to be editable, IE 9's control does a much more reasonable job of creating usable and somewhat valid HTML. It also leaves the original content alone other than the text you are editing or adding. No longer is the HTML output stripped of excess spaces and reformatted in IE's format. So if I do:

        private void button3_Click(object sender, RoutedEventArgs e)
        {
            dynamic doc = this.webBrowser.Document;
            doc.body.contentEditable = true;
        }

    and then make some changes to the document by typing into it using IE 9 mode, the document formatting stays intact and only the affected content is modified. The created HTML is reasonably clean (although it does lack proper XHTML formatting for things like <br/> <hr/>). This is very different from IE 7 mode, which mangled the HTML as soon as the page was loaded into the control. Any editing you did stripped out all white space and lost all of your existing XHTML formatting. In IE 9 mode at least *most* of your original formatting stays intact. This is huge! In Html Help Builder I have supported HTML editing for a long time, but the HTML mangling by the Web Browser control made it very difficult to edit the HTML later. Previously IE would mangle the HTML by stripping out spaces, upper casing all tags and converting many XHTML safe tags to its HTML 3 tags.
    Now IE leaves most of my document alone while editing, and creates cleaner and more compliant markup (with the exception of self-closing elements like BR/HR). The end result is that I now have HTML editing in place that's much cleaner and actually capable of being manually edited.

    Caveats, Caveats, Caveats

    It wouldn't be Internet Explorer if there weren't some major compatibility issues involved in these browser version interactions. The biggest thing I ran into is that there are odd differences in some of the COM interfaces and what they return. I specifically ran into a problem with the document.selection.createRange() function, which with IE 7 compatibility returns an expected text range object. When running in IE 8 or IE 9 mode however, I could not retrieve a valid text range with this code, where loEdit is the WebBrowser control:

        loRange = loEdit.document.selection.CreateRange()

    The loRange object returned (here in FoxPro) had a length property of 0, but none of the other properties of the TextRange or TextRangeCollection objects were available. I figured this was due to some changed security settings, but even after elevating the Intranet Security Zone and mucking with the other browser feature flags pertaining to security I had no luck. In the end I relented and used a JavaScript function in my editor document that returns a selection range object:

        function getselectionrange() {
            var range = document.selection.createRange();
            return range;
        }

    and call that JavaScript function from my host application's code:

        *** Use a function in the document to get around HTML Editing issues
        loRange = loEdit.document.parentWindow.getselectionrange(.f.)

    and that does work correctly. This wasn't a big deal as I'm already loading a support script file into the editor page, so all I had to do was add the function to this existing script file. You can find out more about how to call script code in the Web Browser control from a host application in a previous post of mine. IE 8 and 9 also clamp down the security environment a little more than the default IE 7 control, so there may be other issues you run into. Other than the createRange() problem above I haven't seen anything else that is breaking in my code so far though, and that's encouraging at least, since it uses a lot of HTML document manipulation for the custom editor I've created (and would love to replace - any PROFESSIONAL alternatives anybody?)

    Registry Key Installation for your Application

    It's important to remember that this registry setting is made per application, so most likely this is something you want to set up with your installer. Also remember that 32 and 64 bit settings require separate settings in the registry, so if you're creating your installer you most likely will want to set both keys in the registry preemptively for your application. I use Tarma Installer for all of my application installs, and in Tarma I configure registry keys for both and set a flag to only install the latter key group in the 64 bit version. Because this setting is application specific you have to do this for every application you install unfortunately, but this also means that you can safely configure this setting in the registry, because it is after all only applied to your application.

    Another problem with install based installation is version detection. If IE 8 is installed I'd want 8000 for the value; if IE 9 is installed I want 9000. I can do this easily in code (see the sketch below), but in the installer this is much more difficult.
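    For the code side, here is a minimal sketch of picking the emulation value based on the installed IE version; note that the registry value names ("svcVersion" on newer IE builds, "Version" on older ones) are an assumption worth verifying on your target systems:

        using Microsoft.Win32;

        static int GetBrowserEmulationValue()
        {
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
                       @"SOFTWARE\Microsoft\Internet Explorer"))
            {
                if (key == null)
                    return 7000;

                // "svcVersion" is used by newer IE builds, "Version" by older ones
                string version = (key.GetValue("svcVersion") as string)
                                 ?? (key.GetValue("Version") as string);
                if (version == null)
                    return 7000;

                int major = int.Parse(version.Split('.')[0]);
                if (major >= 9) return 9000;  // IE 9 standards, doctype respected
                if (major == 8) return 8000;  // IE 8 standards, doctype respected
                return 7000;                  // fall back to the default IE 7 mode
            }
        }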
    I don't have a good solution for the installer side at the moment, but given that the app works with IE 7 mode now, IE 9 mode is just a bonus for the moment. If IE 9 is not installed and 9000 is used, the default rendering will remain in use.

    It sure would be nice if we could specify the IE rendering mode as a property, but I suspect the ActiveX container has to know before it loads what actual version to load up, and once loaded can only load a single version of IE. This would account for this annoying application level configuration…

    Summary

    The registry feature emulation has been available for quite some time, but I just found out about it today and started experimenting around with it. I'm stoked to see that this is available, as I'd pretty much given up on ever seeing any better rendering in the Web Browser control. Now at least my apps can take advantage of newer HTML features. Now if we could only get better HTML editing support somehow <snicker>… ah, can't have everything.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in .NET  FoxPro  Windows


  • Customizing the processing of ListItems for asp:RadioButtonList with "Flow" layout and "Horizontal"

    - by evovision
    Hi, recently I was asked to add the ability to pad specific elements a certain distance from each other in a RadioButtonList control. Not quite a common everyday task, I would say :)

    Ok, let's get started!

    Prerequisites: an ASP.NET page having a RadioButtonList control with the RepeatLayout="Flow" and RepeatDirection="Horizontal" properties set.

    Implementation: the underlying data was coming from another source, so the only fast way to add meta information about padding was the text value itself (yes, not a very optimal solution):

        Id = 1, Name = "This is first element"

    and for padding we agreed to use a <space/> meta tag:

        Id = 2, Name = "<space padcount="30px"/>This is second padded element"

    To handle item rendering in the RadioButtonList control I've created a custom class subclassed from it:

        public class CustomRadioButtonList : RadioButtonList
        {
            private Action<ListItem, HtmlTextWriter> _preProcess;

            protected override void RenderItem(ListItemType itemType, int repeatIndex, RepeatInfo repeatInfo, HtmlTextWriter writer)
            {
                if (_preProcess != null)
                {
                    _preProcess(this.Items[repeatIndex], writer);
                }

                base.RenderItem(itemType, repeatIndex, repeatInfo, writer);
            }

            public void SetPrePrenderItemFunction(Action<ListItem, HtmlTextWriter> func)
            {
                _preProcess = func;
            }
        }

    It is a pretty straightforward approach; the key is to override the RenderItem method. The class has a SetPrePrenderItemFunction method which is used to pass a custom processing function that takes 2 parameters: ListItem and HtmlTextWriter objects.

    Now update the existing RadioButtonList control in Default.aspx. Add this to the beginning of the page:

        <%@ Register Namespace="Sample.Controls" TagPrefix="uc1" %>

    and update the control to:

        <uc1:CustomRadioButtonList ID="customRbl" runat="server" DataValueField="Id" DataTextField="Name"
            RepeatLayout="Flow" RepeatDirection="Horizontal"></uc1:CustomRadioButtonList>

    Now, in the codebehind of the page, add the regular expression that will be used for parsing:

        private Regex _regex = new Regex(@"(?:[<]space padcount\s*?=\s*?(?:'|"")(?<padcount>\d+)(?:(?:\s+)?px)?(?:'|"")\s*?/>)(?<content>.*)?", RegexOptions.IgnoreCase | RegexOptions.Compiled);

    and finally set up the processing function in Page_Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            customRbl.DataSource = DataObjects;

            customRbl.SetPrePrenderItemFunction((listItem, writer) =>
            {
                Match match = _regex.Match(listItem.Text);
                if (match.Success)
                {
                    writer.Write(string.Format(@"<span style=""padding-left:{0}"">Extreme values: </span>", match.Groups["padcount"].Value + "px"));

                    // if you need to pad the listitem itself use the code below
                    //listItem.Attributes.CssStyle.Add("padding-left", match.Groups["padcount"].Value + "px");

                    // remove meta tag from text
                    listItem.Text = match.Groups["content"].Value;
                }
            });

            customRbl.DataBind();
        }

    That's it! :) Run the attached sample application.

    P.S.: of course several other approaches could have been used for this purpose, including events, and the processing functionality could also be embedded inside the control itself. The current solution suited slightly better due to some other reasons in the situation where it was used; in your case, consider this a kick start for your own implementation :)

    Source application: CustomRadioButtonList.zip


  • Dynamic Grouping and Columns

    - by Tim Dexter
    Some good collaboration between myself and Kan Nishida (Oracle BIP Consulting) over at bipconsulting on a question that came in yesterday to an internal mailing list:

        Is there a way to allow columns to be placed into a template dynamically? This would be similar to the Answers column selector. A customer has said Crystal can do this and I am trying to see how BI Pub can do the same. Example: a report has Regions as a dimension in a table; they want the user to select a parameter that will insert either Units or Dollars without having to create multiple templates.

    Now whether Crystal can actually do it or not is another question. Can Publisher? Yes we can!

    Kan took the first stab. His approach was to swap out columns in a table in the report. Some quick steps:

    1. Create a parameter from the BIP server UI.
    2. Declare the parameter in the RTF template. You can check this post to see how you can declare the parameter from the server: http://bipconsulting.blogspot.com/2010/02/how-to-pass-user-input-values-to-report.html
    3. Use the parameter value to condition whether a particular column needs to be displayed or not. You can use <?if@column:.....?> syntax for a column level IF condition. The if@column syntax is covered in the user documentation.

    This would allow a developer to create a report with the parameter, or multiple parameters, to allow the user to pick a column to be included in the report.

    I took a slightly different tack. With the mention of the column selector in the Answers report, I took that to mean that the user wanted to select more of a dimensional column and then have the report recalculate all its totals and subtotals based on that selected column. This is a little bit more involved and requires some smart XSL and XPATH expressions, but still very doable. The user can select a column as a parameter; that is passed to the template rather than the query. The parameter value that is actually passed is the element name that you want to regroup the data by. Inside the template we then reference that parameter value in our for-each-group loop. That's where we need the tricky XSL/XPATH code to get the regrouping to happen. At this juncture, I need to hat tip to Klaus for his article on dynamic sorting that he wrote back in 2006. I basically took his sorting code and applied it to the for-each loop.

    You can follow both of Kan's first two steps above, i.e.:

    Create a parameter from the BIP server UI - this just needs to be based on a 'list' type list of values with name/value pairs, e.g. Department/DEPARTMENT_NAME, Job/JOB_TITLE, etc. The user picks the 'friendly' value and the server passes the element name to the template.

    Declare the parameter in the RTF template - been here before lots of times, right?

        <?param@begin:group1;'"DEPARTMENT_NAME"'?>

    I have used a default value so that I can test the functionality inside the template builder (notice the single and double quotes).

    The next step is to use the template builder to build a re-grouped report layout. It does not matter if it's hard coded right now; we will add in the dynamic piece next. Once you have a functioning template that is re-grouping correctly, open up the for-each-group field and modify it to use the parameter:

        <?for-each-group:ROW;./*[name(.) = $group1]?>

    'group1' is my grouping parameter, declared above. We need the XPATH expression to find the column in the XML structure that matches the one passed by the parameter. It's essentially looking through the data tree for a match.

    We can show the actual grouping value in the report output with a similar XPATH expression:

        <?./*[name(.) = $group1]?>

    In my example, I took things a little further so that I could have a dynamic label for the parameter value. For instance, if I am using MANAGER as the parameter I want to show:

        Manager: Tim Dexter

    My XML elements are readable, e.g. DEPARTMENT_NAME. It's a simple case of replacing the underscore with a space and then 'initcapping' the result:

        <?xdoxslt:init_cap(translate($group1,'_',' '))?>

    With this in place, the user can now select a grouping column in the BIP report viewer and the layout will re-group the data and any calculations based on that column. I built a group above report, but you could equally build the group left version to truly mimic the Answers column selector. If you are interested you can get an example report, sample data and layout template here. Of course, you can combine Klaus' dynamic sorting, Kan's conditional column approach and this dynamic grouping to build a real kick ass report for users that will keep them happy for hours..


  • Microsoft, jQuery, and Templating

    - by Stephen Walther
    About two months ago, John Resig and I met at Café Algiers in Harvard Square to discuss how Microsoft can contribute to the jQuery project. Today, Scott Guthrie announced in his second-day MIX keynote that Microsoft is throwing its weight behind jQuery and making it the primary way to develop client-side Ajax applications using Microsoft technologies.

    What does this announcement mean? It means that Microsoft is shifting its resources to invest in jQuery. Developers on the ASP.NET team are now working full-time to contribute features to the core jQuery library. Furthermore, we are working with other teams at Microsoft to ensure that our technologies work great with jQuery. We are contributing to the open-source jQuery project in the exact same way that any other company or individual from the community can contribute to jQuery. We are writing proposals, submitting the proposals to the jQuery forums, and revising the proposals in response to community feedback. The jQuery team can decide to reject or accept any feature that we propose. Any feature that Microsoft contributes to jQuery will be platform neutral. In other words, Microsoft contributions will benefit PHP and Rails developers just as much as they benefit ASP.NET developers. Microsoft contributions to jQuery will improve the web for everyone.

    Contributing Support for Templates to jQuery Core

    Our first proposal concerns templating. We want to contribute support for templates to jQuery so that JavaScript developers can use jQuery to easily display a set of database records.

    You can read our templating proposal here: http://wiki.github.com/nje/jquery/jquery-templates-proposal
    You can download and play with our prototype for templating here: http://github.com/nje/jquery-tmpl

    The following code illustrates how you can use a template to display a set of products in a bulleted list:

        <script type="text/javascript">
            jQuery(function(){
                var products = [
                    { name: "Product 1", price: 12.99},
                    { name: "Product 2", price: 9.99},
                    { name: "Product 3", price: 35.59}
                ];
                $("ul").append("#template", products);
            });
        </script>

        <script id="template" type="text/html">
            <li>{%= name %} - {%= price %}</li>
        </script>

        <ul></ul>

    The template is contained in a SCRIPT element that has a TYPE="text/html" attribute. Browsers ignore the contents of a SCRIPT element when they don't understand the content type. Notice that the placeholder {%=...%} is used within the template to indicate where the name and price of a product should appear. The delimiters {%=...%} are used for expressions and the delimiters {%...%} are used for code. Finally, the products are rendered using the template with the call to $("ul").append("#template", products). The standard jQuery DOM manipulation methods have been modified to support templates. When the page above is rendered, you get the bulleted list displayed in the following figure.

    Our goal is to keep our proposal for templates as simple as possible. After support for templating has been added to jQuery, plug-in authors can take advantage of templating when building complex data-driven plug-ins such as a DataGrid plug-in.

    The Ajax Control Toolkit

    Over 100,000 developers download the Ajax Control Toolkit every month. That's a mind-boggling number of downloads. We realize that the Ajax Control Toolkit is extremely popular among ASP.NET Web Forms developers and we want to continue to invest in the Ajax Control Toolkit. If you are adding JavaScript interactivity to an ASP.NET Web Forms application, and you don't want to write JavaScript, then we recommend that you use the server controls in the Ajax Control Toolkit. Using the Ajax Control Toolkit does not require knowledge of JavaScript, and the toolkit enables you to build applications with the concepts familiar to ASP.NET Web Forms application developers. If, however, you are interested in creating client-side interactivity without server controls, then we recommend that you use jQuery. We plan to continue to release new versions of the Ajax Control Toolkit every few months. Our goal is to continue to improve the quality of the Ajax Control Toolkit and to make it easier for the community to contribute code, bug fixes, and documentation.

    The ASP.NET Ajax Library

    We are moving the ASP.NET Ajax Library into the Ajax Control Toolkit. If you currently use ASP.NET Ajax Library client templates, client data-binding, or the client script loader, then you can continue to use these features by downloading the Ajax Control Toolkit. Be aware that our focus with the Ajax Control Toolkit is server-side Ajax. For client-side Ajax, we are shifting our focus to jQuery. For example, if you have been using ASP.NET Ajax Library client templates, then we recommend that you shift to using jQuery instead.

    Conclusion

    Our plan is to focus on jQuery as the primary technology for building client-side Ajax applications moving forward. We want to adapt Microsoft technologies to work great with jQuery, and we want to contribute features to jQuery that will make the web better for everyone. We are very excited to be working with the jQuery core team.


  • When should JavaScript generate HTML?

    - by VirtuosiMedia
    I try to generate as little HTML from JavaScript as possible. Instead, I prefer to manipulate existing markup whenever I can and only generate HTML when I need to dynamically insert an element that isn't a good candidate for using Ajax. This, I believe, makes it far easier to maintain the code and quickly make changes to it because the markup is easier to read and trace. My rule of thumb is: HTML is for document structure, CSS is for presentation, JavaScript is for behavior. However, I've seen a lot of JS code that generates mounds of HTML, including entire forms and content-heavy modal dialogs. In general, which method is considered best practice? In what circumstances should JavaScript be used to generate HTML and when should it not?


  • Is the “jQuery programming style” a kind of Reactive programming?

    - by Peter Krauss
    jQuery is a JavaScript library and framework, but when we are programming with jQuery on DOM problems/solutions, we can practice a style quite different from ordinary programming... We can read about jQuery at Wikipedia:

        The set of jQuery core features — DOM element selections, traversal and manipulation —, enabled by its selector engine (...), created a new "programming style", fusing algorithms and DOM-data-structures

    This question is similar to the "subquestion-3" of this question, but not so generic. The focus here is on this new kind of "programming style"... So, the question: Is the "jQuery programming style in DOM context" a new paradigm? Or is it one more example of reactive programming (not "cell-oriented" but "DOM-node oriented"), or something else? We have no "standard taxonomy of paradigms", so, please, in your answer, also indicate your "best choice for Wikipedia paradigm". Example: if you understand that "jQuery programming DOM" is like "awk filtering data", your choice can be event-driven.


  • Using the new CSS Analyzer in JavaFX Scene Builder

    - by Jerome Cambon
    As you know, the JavaFX API provides many properties that you can set to customize your components or make them behave as you want. For instance, for a Button, you can set its font, or its max size. Using Scene Builder, these properties can be explored and modified using the Inspector. However, JavaFX also provides many other properties for fine grained customization of your components: the CSS properties. These properties are typically set from a CSS stylesheet. For instance, you can set a background image on a Button, change the Button corners, etc... Using Scene Builder, until now, you could set a CSS property using the Inspector Style and Stylesheet editors. But you had to go to the JavaFX CSS documentation to know which CSS properties can be applied to a given component.

    Fortunately, Scene Builder 1.1 recently added a very interesting new feature: the CSS Analyzer. It allows you to explore all the CSS properties available for a JavaFX component, and helps you to build your CSS rules.

    A very simple example: make a Button rounded

    Let's take a very simple example: you would like to customize your Buttons to make them rounded.

    First, enable the CSS Analyzer, using the 'View->Show CSS Analyzer' menu. Grow the main window, and the CSS Analyzer, to get more room.

    Then, drop a Button from the Library to the Content View: the CSS Analyzer now shows the Button CSS properties.

    As you can see, there is a '-fx-background-radius' CSS property that allows you to define the radius of the background (note that you can get the associated CSS documentation by clicking on the property name). You can experiment with this by setting the Button's Style property from the Inspector.

    As you can see in the CSS doc, one can set the same radius for all 4 corners with a simple number. Once the style value is applied, the Button is now rounded, as expected. Look at the CSS Analyzer: the '-fx-background-radius' property now has 2 entries: the default one, and the one we just entered from the Style property. The new value wins: it overrides the default one and becomes the actual value (to highlight this, the cell background becomes blue).

    Now, you will certainly prefer to apply this new style to all the Buttons of your FXML document, and have a CSS rule for this. To do this, save your document first, and create a CSS file in the same directory as the new document. Create an empty CSS file (e.g. test.css), and attach it to the root AnchorPane, by first selecting the AnchorPane, then using the Stylesheets editor from the Inspector.

    Add the corresponding CSS rule to your new test.css file, from your preferred editor (NetBeans for me ;-) and save it:

        .button { -fx-background-radius: 10px; }

    Now, select your Button and have a look at the CSS Analyzer. As you can see, the Button is inheriting the CSS rule (since the Button is a child of the AnchorPane), and still has its inline Style. The inline Style wins, since it has precedence over the stylesheet. The CSS Analyzer columns are displayed in precedence order. Note the small right-arrow icons, which allow you to jump to the source of the value (either test.css, or the Inspector in this case). Of course, unless you want to set a specific background radius for this particular Button, you can remove the inline Style from the Inspector.

    Changing the color of a TitledPane arrow

    In some cases, it can be useful to select the inner element you want to style directly from the Content View. Drop a TitledPane onto the Content View. Then select the CSS cursor from the CSS Analyzer (the other cursor on the left allows you to come back to 'standard' selection), which will allow you to select an inner element:

    … and select the TitledPane arrow, which will get a yellow background:

    … and the Styleable Path is updated.

    To define a new CSS rule, you can first copy the Styleable path, then paste it in your test.css file. Then, add an entry to set the -fx-background-color to red. You should have something like:

        .titled-pane:expanded .title .arrow-button .arrow { -fx-background-color: red; }

    As soon as test.css is saved, the change is taken into account in Scene Builder. You can also use the Styleable Path to discover all the inner elements of TitledPane, by clicking on the arrow icon.

    More details

    You can see the CSS Analyzer in action (and many other features) in the Java One BOF: BOF4279 - In-Depth Layout and Styling with the JavaFX Scene Builder, presented by my colleague Jean-Francois Denise. On the right-hand side, click on the Media link to go to the video (streaming) of the presentation. The Scene Builder support of CSS starts at 9:20; the CSS Analyzer presentation starts at 12:50.


  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars the LLBLGen Pro v3 designer is built on. The library contains many data-structures and algorithms, and the source code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon.

    One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and data-structures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature (a short illustration of the difference follows at the end of this excerpt).

    The following functionality is available in Algorithmia:
    - Command / Command management. This system is usable to build a fully undo/redo-aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping, so you can make a class undo/redo aware and still set properties and use its contents without using commands at all. The Commands namespace is the place to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.
    - Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented on them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class and which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available so DFS-based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code.
    - Heaps. Heaps are data-structures which always have the largest or smallest item stored in them as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap; all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap data-structures known today, especially when you want to merge different instances into one.
    - Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them.
    - Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it is already stored in.
    - PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5-specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data/settings stored in XML or another format and want to create an editable form of it without creating many editors.
    - IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo aware code can use them out of the box.
    - EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events.
    - Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key instead of one as with the default Dictionary, and which is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer-based system, and some utility extension methods for the defined data-structures.

    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!
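    To illustrate the linked-list point above with the standard .NET LinkedList<T>, here is a minimal C# sketch (independent of Algorithmia, shown only to make the O(n) cost visible):

    using System;
    using System.Collections.Generic;

    class LinkedListConcatDemo
    {
        static void Main()
        {
            var first = new LinkedList<int>(new[] { 1, 2, 3 });
            var second = new LinkedList<int>(new[] { 4, 5, 6 });

            // Each LinkedListNode<T> holds a reference to its owning list, so nodes
            // cannot simply be relinked in one step; every node of 'second' must be
            // detached and re-attached, making the concat O(n) rather than O(1).
            while (second.First != null)
            {
                LinkedListNode<int> node = second.First;
                second.RemoveFirst(); // node.List becomes null...
                first.AddLast(node);  // ...so the node can now be attached to 'first'
            }

            Console.WriteLine(string.Join(", ", first)); // 1, 2, 3, 4, 5, 6
        }
    }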

    Read the article

  • Upgrading VSIX extensions from VS2012 to VS2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/27/upgrading-vsix-extensions-from-vs2012-to-vs2013.aspx

    As consumers of your Visual Studio extensions start to move over to VS 2013, you will have to upgrade the extensions you build for Visual Studio 2012 to Visual Studio 2013 and republish them to the Visual Studio extension gallery. Otherwise, it will not be possible for your consumers to install and use your extensions on Visual Studio 2013.

    Objective
    In this blog post, I'll show you how simple it is to upgrade your Visual Studio 2012 extension to Visual Studio 2013. There aren't any reported breaking changes between the VS 2012 SDK and the VS 2013 SDK; the upgrade usually involves rebuilding the extension against the VS 2013 SDK and updating the vsix manifest file.

    Walkthrough
    1. Download the Visual Studio 2013 SDK - you will need the Visual Studio 2013 SDK in order to open up the Visual Studio extension project in Visual Studio 2013. The SDK can be downloaded from here. Install the SDK before you proceed.
    2. Once the VS 2013 SDK has been installed, open up your package project. For the purposes of this blog post, I'll open up the Avanade Extension - Software Inventory in Visual Studio 2013. You will notice that Visual Studio doesn't load the project but lets you know that the project needs to be migrated.
    3. Right click the project and choose the option 'Reload Project' from the context menu.
    4. Choosing the Reload Project option brings up an upgrade window, telling you that the upgrade is one-way only, i.e. the project will be changed to work with Visual Studio 2013 and you will not be able to open the project up in Visual Studio 2012. My recommendation would be to create a Visual Studio 2013 branch and upgrade the project in that branch only, so if you need to go back to the Visual Studio 2012 project at some point, you have a handy reference in a separate branch.
    5. Upon clicking OK, the project is updated. The following changes are made at the time of upgrade:
       - The runtime version is updated in the Resources.Designer.cs file
       - The minimum version of Visual Studio in the package project file is changed from 11.0 to 12.0
    6. Reference the VS 2013 dlls rather than the VS 2012 dlls. So reference Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.Controls.dll from C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0 and C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v4.5. If you have any other API references, change the references to point to VS 2013 instead of VS 2012.
    7. Rebuild your solution to ensure there are no breaking changes. Success!
    8. Update the VSIX manifest file (the file source.extension.vsixmanifest contains the metadata for your VSIX):
       - Update the Install Targets from 11.0 to 12.0. This enforces that the extension can be installed on the Visual Studio 2013 version of Visual Studio.
       - Update the dependency from Visual Studio MPF 11.0 to Visual Studio MPF 12.0.
    9. Rebuild the solution, then open the bin folder for the package project and look for the *.vsix file [Microsoft Visual Studio Extension]. This is basically the installer for your extension.
    - Double click the installer to launch the installation wizard. Voilà! You can see the package installation wizard open up and give you the option to install the extension for Visual Studio 2013.
    - Click Install to continue.
    - Note - if you run into the exception "23/06/2013 10:42:18 - Install Error : Microsoft.VisualStudio.ExtensionManager.InstallByMsiException: The InstalledByMSI element in extension Avanade Extensions cannot be 'true' when installing an extension through the Extensions and Updates Installer. The element can only be 'true' when an MSI lays down the extension manifest file.", ensure you have the option "This VSIX is installed by Windows Installer" unchecked in the Install Targets tab.
    10. Verify that the extension has installed correctly: open the Extension Manager and verify that the installed extension shows up in the list of installed VSIX packages.
    11. First look at the updated extension: the links have now been moved to the context menu, so to see the navigation links, you'll have to right click on the icon and select the option from the context menu.

    Note - the Avanade Extension being used in the demo has been developed by Utkarsh and Tarun. The Software Inventory Extension for Visual Studio 2012 allows you to see the list of software installed on the hosted build server right from within Visual Studio; the extension also allows you to export this list to Excel. More details on how this has been implemented can be found here.

    I hope you found this useful. In case you have any questions or feedback, feel free to reach out on the Visual Studio extensibility MSDN forums or via the Microsoft Visual Studio feedback forum. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • Did you miss the IIS7 FastCGI Update?

    In case you did, there are some interesting improvements, such as:
    - Monitoring changes to a file. The module can be configured to listen for file change notifications on a specific file, and when that file changes, the module will recycle FastCGI processes for the process pool. This feature can be used to recycle PHP processes when changes to the php.ini file occur. To enable this feature, use the monitorChangesTo setting in the <fastCgi> configuration element.
    - Real-time tuning of the MaxInstances setting....

    Read the article

  • Trabajando el redireccionamiento de usuarios/Working with user redirect methods

    - by Jason Ulloa
    Application security is not something you can leave out when building a system. Every piece of code that protects our application must be carefully chosen and written. One of the common needs we run into in asp.net when working with users is being able to redirect them to different pages depending on their role. That is exactly what we will do here: we will work with our application's Web.config and add a few small lines of code to make the system a bit more secure and, above all, to get the redirect working. So let's see how to achieve this:

    As we know, the web.config lets us manage many elements of asp.net, several of them related to security, and it also gives us the ability to customize elements to adapt them to our needs. So, building on the fact that we can customize the web.config, we will create a custom section that we will use to handle the redirect. Our first step is to go to our web.config and find the following lines:

    <configuration>     <configSections>  </sectionGroup>             </sectionGroup>         </sectionGroup>

    After them, we define a new section:

    <section name="loginRedirectByRole" type="crabit.LoginRedirectByRoleSection" allowLocation="true" allowDefinition="Everywhere" />

    - The section name corresponds to the name of our new section.
    - Type corresponds to the name of the class (which we will write shortly) that will be in charge of the redirect.

    Since we are working inside the configuration section, once our custom section is defined we must close the section with </configSections>, so our web.config should look like this:

    <configuration>     <configSections>  </sectionGroup>             </sectionGroup>         </sectionGroup> <section name="loginRedirectByRole" type="crabit.LoginRedirectByRoleSection" allowLocation="true" allowDefinition="Everywhere" /> </configSections>

    We defined our section above, but it would be completely useless without the element that gives it life. In our case that is the loginRedirectByRole element, which we define right after the closing </configSections>:

    <loginRedirectByRole>
      <roleRedirects>
        <add role="Administrador" url="~/Admin/Default.aspx" />
        <add role="User" url="~/User/Default.aspx" />
      </roleRedirects>
    </loginRedirectByRole>

    As you can see, inside our loginRedirectByRole element we have add role entries. These are what will later tell the application where the user goes after a successful login. Looking at this configuration briefly:

    - add role="Administrador" corresponds to the name of a role we have defined; there can be as many add role entries as roles defined in our application.
    - The url element indicates the route or page the user will be sent to once logged in to the application. As you can see, we use ~ to indicate a relative path.

    With this we have finished the configuration of our web.config; now let's look in depth at the code that reads these elements and uses them. For our example we create a new class named LoginRedirectByRoleSection; remember that this class is the one we reference in the TYPE attribute defined in the section of our web.config. Once the class is created, we define some properties, but before that we tell our class to inherit from ConfigurationSection, so it can read the elements from the web.config:

    Inherits ConfigurationSection

    Now our first property:

    <ConfigurationProperty("roleRedirects")> _
    Public Property RoleRedirects() As RoleRedirectCollection
        Get
            Return DirectCast(Me("roleRedirects"), RoleRedirectCollection)
        End Get
        Set(ByVal value As RoleRedirectCollection)
            Me("roleRedirects") = value
        End Set
    End Property
    End Class

    This property is in charge of retrieving all the roles we defined in the custom section of our web.config. Our second step is to create a second class (in the same LoginRedirectByRoleSection file) that we will call RoleRedirectCollection, inheriting from ConfigurationElementCollection, and define the following:

    Public Class RoleRedirectCollection
        Inherits ConfigurationElementCollection

        Default Public ReadOnly Property Item(ByVal index As Integer) As RoleRedirect
            Get
                Return DirectCast(BaseGet(index), RoleRedirect)
            End Get
        End Property

        Default Public ReadOnly Property Item(ByVal key As Object) As RoleRedirect
            Get
                Return DirectCast(BaseGet(key), RoleRedirect)
            End Get
        End Property

        Protected Overrides Function CreateNewElement() As ConfigurationElement
            Return New RoleRedirect()
        End Function

        Protected Overrides Function GetElementKey(ByVal element As ConfigurationElement) As Object
            Return DirectCast(element, RoleRedirect).Role
        End Function
    End Class

    Once again we create another class, this time called RoleRedirect, inheriting from ConfigurationElement. Our new class should look like this:

    Public Class RoleRedirect
        Inherits ConfigurationElement

        <ConfigurationProperty("role", IsRequired:=True)> _
        Public Property Role() As String
            Get
                Return DirectCast(Me("role"), String)
            End Get
            Set(ByVal value As String)
                Me("role") = value
            End Set
        End Property

        <ConfigurationProperty("url", IsRequired:=True)> _
        Public Property Url() As String
            Get
                Return DirectCast(Me("url"), String)
            End Get
            Set(ByVal value As String)
                Me("url") = value
            End Set
        End Property
    End Class

    With our main class ready, all that is left is a bit of code in our system's login page (assuming, of course, that you are using the login controls that asp.net provides by default). Here we define our last two methods:

    Protected Sub ctllogin_LoggedIn(ByVal sender As Object, ByVal e As System.EventArgs) Handles ctllogin.LoggedIn
        RedirectLogin(ctllogin.UserName)
    End Sub

    The LoggedIn event is part of the asp.net login control and fires the moment the user logs in to our application successfully. It then triggers the following procedure, which performs the redirect:

    Private Sub RedirectLogin(ByVal username As String)
        Dim roleRedirectSection As crabit.LoginRedirectByRoleSection = DirectCast(ConfigurationManager.GetSection("loginRedirectByRole"), crabit.LoginRedirectByRoleSection)
        For Each roleRedirect As crabit.RoleRedirect In roleRedirectSection.RoleRedirects
            If Roles.IsUserInRole(username, roleRedirect.Role) Then
                Response.Redirect(roleRedirect.Url)
            End If
        Next
    End Sub

    With this, our application should be able to redirect and handle roles without problems. Also remember that our example relies on the default membership database schema that asp.net provides. (For C# readers, a short equivalent of the redirect routine follows below.)
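    A minimal C# equivalent of the RedirectLogin routine, assuming the configuration classes above have been ported to C# (this sketch requires the System.Configuration, System.Web and System.Web.Security namespaces):

    private void RedirectLogin(string username)
    {
        var section = (LoginRedirectByRoleSection)ConfigurationManager.GetSection("loginRedirectByRole");
        foreach (RoleRedirect roleRedirect in section.RoleRedirects)
        {
            // send the user to the page configured for the first matching role
            if (Roles.IsUserInRole(username, roleRedirect.Role))
            {
                Response.Redirect(roleRedirect.Url);
            }
        }
    }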

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC?
    Remote IntraDoc Client (RIDC) is a Java-enabled API that leverages simple transport protocols like Socket, HTTP and JAX-WS to execute content service operations in WebCenter Content Server. By design, each operation in the Content Server executes statelessly and returns a complete result for the request. Each request object simply specifies, in a Map format (key and value pairs), what service to call and what parameter settings to apply. The response is built on the same Map format (key and value pairs). The possibilities with RIDC are endless, since you can consume any available service (even custom-made ones), and RIDC can be executed from any Java SE application that has WebCenter Content Services needs.

    WebCenter Portal and the example Accelerator RIDC adapter framework
    WebCenter Portal currently integrates and leverages WebCenter Content Services to enable the use cases available in the portal today, like Content Presenter and Doc Lib. However, the current use cases cover only a few of the scenarios that the Content Server has to offer. In addition, it is not rare for customer requirements to call for additional steps and functionality that are provided by WebCenter Content but are not part of the use cases in WebCenter Portal. The good news here is RIDC; the second piece of good news is that WebCenter Portal already leverages RIDC and has a connection management framework in place. The million dollar question is: how can I leverage this infrastructure for my custom use cases? During its customer interactions, the Oracle A-Team has produced an accelerator adapter framework that reuses and leverages the existing connections provisioned in the WebCenter Portal application (it works for WebCenter Spaces as well), together with a comprehensive design pattern that minimizes the work involved in exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal, including Spaces.

    How do I get started?
    Through a few easy steps you will be on your way:
    1. Extract the zip file RIDCCommon.zip to the WebCenter Portal Application file structure (PortalApp)
    2. Open your Portal Application in JDeveloper (PS4/PS5) and select to open the project in your application - this will add the project as a member of the application
    3. Update the Portal project dependencies to include the new RIDCCommon project
    4. Make sure that your WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form)

    By this stage you should have a similar structure in your JDeveloper application:
    - Portal Project
    - PortalWebAssets Project
    - RIDCCommon

    Since the API comes with some example operations that have already been exposed as DataControl actions, if you open the Data Controls accordion you should see the following:
    How do I implement my own operation?
    1. Create a new Java class in, for example, com.oracle.ateam.portal.ridc.operation and call it GetDocInfoOperation
    2. Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation
    3. The only method you are actually required to implement is execute(RIDCManager, IdcClient, IdcContext)
    4. The best practice for setting object references for the operation is through the constructor, for example: public GetDocInfoOperation(String dDocName). By leveraging the constructor you can easily force the implementing class to pass the right information; you can also overload the constructor with more or fewer parameters as required
    5. Implement the execute method. The work you are supposed to do here is to create a new request binder and retrieve a response binder with the information in the request binder - in this case, the dDocName for which we want the DocInfo. Then process the response binder by extracting the information you need from the response and storing it in a simple POJO Java Bean. In the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a member of GetDocInfoOperation, so we can return it from an access method
    6. Since the RIDCCommon API leverages the template pattern for its operations, you are now required to add a method that enables access to the result after the execution of the operation. In the example below we added the method public SearchDataObject getDataObject() - this method returns the pre-processed SearchDataObject from the execute method

    This is it - as you can see in the code below, you do not need more than 32 lines of very simple code:

    public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation {
        private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME";
        private String dDocName = null;
        private SearchDataObject sdo = null;

        public GetDocInfoOperation(String dDocName) {
            super();
            this.dDocName = dDocName;
        }

        public boolean execute(RIDCManager manager, IdcClient client,
                               IdcContext userContext) throws Exception {
            DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME);
            dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName);

            DataBinder responseData = getResponseBinder(dataBinder);
            processResult(responseData);
            return true;
        }

        private void processResult(DataBinder responseData) {
            DataResultSet rs = responseData.getResultSet("DOC_INFO");
            for (DataObject dobj : rs.getRows()) {
                this.sdo = new SearchDataObject(dobj);
            }
            super.setMessage(responseData.getLocal(ATTR_MESSAGE));
        }

        public SearchDataObject getDataObject() {
            return this.sdo;
        }
    }
    How do I execute my operation?
    In the previous section we described how to create an operation, so by now you should be ready to execute it:
    1. Add a method to the class com.oracle.ateam.portal.datacontrol.ContentServicesDC or a class of your own choice. Remember, the RIDCManager is a very light object and can be created where needed
    2. Create a method signature like this: public SearchDataObject getDocInfo(String dDocName) throws Exception
    3. In the method body, create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocName: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName)
    4. Execute the operation via the RIDCManager instance: rMgr.executeOperation(docInfo)
    5. Return the result by accessing it from the executed operation: return docInfo.getDataObject()

    private RIDCManager rMgr = null;
    private String lastOperationMessage = null;

    public ContentServicesDC() {
        super();
        this.rMgr = new RIDCManager();
    }
    ....
    public SearchDataObject getDocInfo(String dDocName) throws Exception {
        GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName);
        boolean boolVal = rMgr.executeOperation(docInfo);
        lastOperationMessage = docInfo.getMessage();
        return docInfo.getDataObject();
    }

    Get the binaries!
    The enclosed code is an example that can be used as a reference on how to consume and leverage similar use cases; the user has to guarantee appropriate quality and support. Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip

    RIDC API Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • Getting WCF Bindings and Behaviors from any config source

    - by cibrax
    The need to load WCF bindings or behaviors from different sources, such as files on disk or databases, is a common requirement when dealing with configuration on either the client side or the service side. The traditional way to accomplish this in WCF is to load everything from the standard configuration section (the serviceModel section) or to create all the bindings and behaviors by hand in code. However, there is a solution in the middle that becomes handy when more flexibility is needed. This solution involves getting the configuration from any place, and using that configuration to automatically configure any existing binding or behavior instance created with code.

    In order to configure a binding instance (System.ServiceModel.Channels.Binding) that you later inject into any endpoint on the client channel or the service host, you first need to get a binding configuration section from any configuration file (you can generate a temp file on the fly if you are using any other source for storing the configuration):

    private BindingsSection GetBindingsSection(string path)
    {
        System.Configuration.Configuration config =
            System.Configuration.ConfigurationManager.OpenMappedExeConfiguration(
                new System.Configuration.ExeConfigurationFileMap() { ExeConfigFilename = path },
                System.Configuration.ConfigurationUserLevel.None);
        var serviceModel = ServiceModelSectionGroup.GetSectionGroup(config);
        return serviceModel.Bindings;
    }

    The BindingsSection contains a list of all the configured bindings in the serviceModel configuration section, so you can iterate through all the configured bindings and get the one you need (you don't need a complete serviceModel section; a section with only the bindings works):

    public Binding ResolveBinding(string name)
    {
        BindingsSection section = GetBindingsSection(path);
        foreach (var bindingCollection in section.BindingCollections)
        {
            if (bindingCollection.ConfiguredBindings.Count > 0
                && bindingCollection.ConfiguredBindings[0].Name == name)
            {
                var bindingElement = bindingCollection.ConfiguredBindings[0];
                var binding = (Binding)Activator.CreateInstance(bindingCollection.BindingType);
                binding.Name = bindingElement.Name;
                bindingElement.ApplyConfiguration(binding);
                return binding;
            }
        }
        return null;
    }

    The code above does just that, and also instantiates and configures the Binding object (System.ServiceModel.Channels.Binding) you are looking for. As you can see, the binding configuration element contains a method "ApplyConfiguration" that receives the binding instance that needs to be configured. A similar thing can be done, for instance, with the "Endpoint" behaviors. You first get the BehaviorsSection, and then the behavior you want to use.
    private BehaviorsSection GetBehaviorsSection(string path)
    {
        System.Configuration.Configuration config =
            System.Configuration.ConfigurationManager.OpenMappedExeConfiguration(
                new System.Configuration.ExeConfigurationFileMap() { ExeConfigFilename = path },
                System.Configuration.ConfigurationUserLevel.None);
        var serviceModel = ServiceModelSectionGroup.GetSectionGroup(config);
        return serviceModel.Behaviors;
    }

    public List<IEndpointBehavior> ResolveEndpointBehavior(string name)
    {
        BehaviorsSection section = GetBehaviorsSection(path);
        List<IEndpointBehavior> endpointBehaviors = new List<IEndpointBehavior>();
        if (section.EndpointBehaviors.Count > 0 && section.EndpointBehaviors[0].Name == name)
        {
            var behaviorCollectionElement = section.EndpointBehaviors[0];
            foreach (BehaviorExtensionElement behaviorExtension in behaviorCollectionElement)
            {
                object extension = behaviorExtension.GetType().InvokeMember("CreateBehavior",
                    BindingFlags.InvokeMethod | BindingFlags.NonPublic | BindingFlags.Instance,
                    null, behaviorExtension, null);
                endpointBehaviors.Add((IEndpointBehavior)extension);
            }
            return endpointBehaviors;
        }
        return null;
    }

    In this case, the code for creating the behavior instance is trickier. First of all, a behavior in the configuration section actually represents a set of IEndpointBehavior instances, and the behavior element you get from the configuration does not have any public method to configure an existing behavior instance. It only contains a protected method "CreateBehavior" that you can use for that purpose. Once you get this code implemented, a client channel can be easily configured as follows:

    var binding = resolver.ResolveBinding("MyBinding");
    var behaviors = resolver.ResolveEndpointBehavior("MyBehavior");
    SampleServiceClient client = new SampleServiceClient(binding,
        new EndpointAddress(new Uri("http://localhost:13749/SampleService.svc"),
            new DnsEndpointIdentity("localhost")));
    foreach (var behavior in behaviors)
    {
        if (client.Endpoint.Behaviors.Contains(behavior.GetType()))
        {
            client.Endpoint.Behaviors.Remove(behavior.GetType());
        }
        client.Endpoint.Behaviors.Add(behavior);
    }

    The code above assumes that a configuration file (in any place) with a binding "MyBinding" and a behavior "MyBehavior" exists. That file can look like this:

    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding name="MyBinding">
            <security mode="Transport"></security>
          </binding>
        </basicHttpBinding>
      </bindings>
      <behaviors>
        <endpointBehaviors>
          <behavior name="MyBehavior">
            <clientCredentials>
              <windows/>
            </clientCredentials>
          </behavior>
        </endpointBehaviors>
      </behaviors>
    </system.serviceModel>

    The same thing can of course be done in the service host if you want to manually configure the bindings and behaviors, as in the sketch below.
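    A minimal sketch of that service-host side (SampleService and ISampleService are hypothetical names here, and resolver is the class exposing the Resolve methods above):

    var binding = resolver.ResolveBinding("MyBinding");
    var behaviors = resolver.ResolveEndpointBehavior("MyBehavior");

    // host a hypothetical SampleService and attach the resolved binding and behaviors
    var host = new ServiceHost(typeof(SampleService), new Uri("http://localhost:13749/"));
    var endpoint = host.AddServiceEndpoint(typeof(ISampleService), binding, "SampleService.svc");
    foreach (var behavior in behaviors)
    {
        endpoint.Behaviors.Add(behavior);
    }
    host.Open();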

    Read the article

  • Juniper Strategy, LLC is hiring SharePoint Developers…

    - by Mark Rackley
    Isn’t everybody these days? It seems as though there are definitely more jobs than qualified devs right now, but yes, we are looking for a few good devs to help round out our burgeoning SharePoint team. Juniper Strategy is located in the DC area; however, we will consider remote devs for the right fit. This is your chance to get in on the ground floor of a bright company that truly “gets it” when it comes to SharePoint, Project Management, and Information Assurance. We need like-minded people who “get it”, enjoy it, and who are looking for more than just a job. We have government and commercial opportunities as well as our own internal product that has a bright future of its own. Our immediate needs are for SharePoint .NET developers, but feel free to submit your resume for us to keep on file, as it looks as though we’ll need several people in the coming months. Please email us your resume and salary requirements to [email protected] Below are our official job postings. Thanks for stopping by, we look forward to hearing from you.

    Senior SharePoint .NET Developer
    The senior developer will focus on design and coding of custom, end-to-end business process solutions within the SharePoint framework, with the ability to serve as a senior developer/mentor and manage day-to-day development tasks. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. For the selected development path, prepare the technical specification and build the solution. Assist the project manager with defining the development task schedule and level of effort. Lead technical solution deployment.

    Job Requirements
    - Minimum of 7 years' experience in agile development, with at least 3 years of SharePoint-related development experience (SPS, SharePoint 2007/2010, WSS2-4).
    - Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007).
    - Experience with using multiple data sources/repositories for database CRUD activities, including relational databases, SAP, Oracle e-Business.
    - Experience with designing and deploying performance-based solutions in SharePoint for business processes that involve a very large number of records.
    - Experience designing dynamic dashboards and mashups with data from multiple sources (internal to SharePoint as well as from external sources).
    - Experience designing custom forms to facilitate user data entry, both with and without leveraging Forms Services.
    - Experience building custom web part solutions.
    - Experience with designing custom solutions for processing underlying business logic requirements including, but not limited to, SQL stored procedures, C#/ASP.NET workflows/event handlers (including timer jobs) to support multi-tiered decision trees and associated computations.
    - Ability to create complex solution packages for deployment (e.g., feature-stapled site definitions).
    - Must have impeccable communication skills, both written and verbal.
    - Seeking a "tinkerer"; proactive with a thirst for knowledge (and a sense of humor).
    - A US Citizen is required, and needs to be able to pass NAC/E-Verify. An active Secret clearance is preferred.
    - Applicants must pass a skills assessment test.
    - MCP/MCTS or comparable certification preferred.

    Salary & Travel: Negotiable

    SharePoint Project Lead
    Define the project task schedule, work breakdown structure and level of effort. Serve as principal liaison to the customer to manage deliverables and expectations. Day-to-day project and team management, including preparation and maintenance of project plans, budgets, and status reports. Prepare technical briefings and presentation decks; provide briefs to C-level stakeholders. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. The SharePoint Project Lead will be working with SharePoint architects and system owners to perform requirements/gap analysis and develop the underlying functional specifications. Once we have functional specifications as close to "final" as possible, the Project Lead will be responsible for preparation of the associated technical specification/development blueprint, along with assistance in preparing IV&V/test plan materials with support from other team members. This person will also be responsible for day-to-day management of developers, but is also expected to engage in development directly as needed.

    Job Requirements
    - Minimum 8 years of technology project management across the software development life-cycle, with a minimum of 3 years of project management relating specifically to SharePoint (SPS 2003, SharePoint 2007/2010) and/or Project Server.
    - Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007).
    - Ability to interact and collaborate effectively with team members and stakeholders of different skill sets, personalities and needs.
    - The general "development" skill set required is a fundamental understanding of MOSS 2007 Enterprise, SP1/SP2, from the top level of skinning to the core of the SharePoint object model.
    - Impeccable communication skills, both written and verbal, and a sense of humor are required.
    - The projects will require being at a client site at least 50% of the time in Washington DC (NW DC) and Maryland (near Suitland).
    - A US Citizen is required, and needs to be able to pass NAC/E-Verify. An active Secret clearance is preferred.
    - PMP certification; PgMP preferred.

    Salary & Travel: Negotiable

    Read the article

  • Gnome Terminal tabs ugly and oversized

    - by adamnfish
    Both gnome terminal and terminator (which I am using on my laptop these days) can be customised to look very pretty. By using full screen and keeping desktop clutter down to a minimum it's possible to get a good-sized area to work in, even on my little EeePC. However, there is one element that I don't seem to be able to control. Gnome's tabs are massively oversized and ugly at best. They don't fit into the theme at all which looks silly, but for me the biggest problem is the screen real estate that is wasted. On a small laptop screen in particular, it's a real problem. Is there a way to change these tabs? I realize it's possible to put them up the side of the window, but then they take up even more space! If this isn't possible with theme-ing or gnome configuration, are there any terminal programs like terminator that can handle the tabs themselves? (Ideally in a more elegant fashion!)

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
    This is the twentieth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s blog post covers some of the nice improvements coming to JavaScript intellisense with VS 2010 and the free Visual Web Developer 2010 Express. You’ll find with VS 2010 that JavaScript Intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions of Visual Studio. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Improved JavaScript Intellisense
    Providing Intellisense for a dynamic language like JavaScript is more involved than doing so with a statically typed language like VB or C#. Correctly inferring the shape and structure of variables, methods, etc. is pretty much impossible without pseudo-executing the actual code itself – since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime. VS 2010’s JavaScript code editor now has the smarts to perform this type of pseudo-code execution as you type – which is how its intellisense completion is kept accurate and complete. Below is a simple walkthrough that shows off how rich and flexible it is with the final release.

    Scenario 1: Basic Type Inference
    When you declare a variable in JavaScript you do not have to declare its type. Instead, the type of the variable is based on the value assigned to it. Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable and provide the appropriate code intellisense based on the value assigned to it. For example, notice below how VS 2010 provides statement completion for a string (because we assigned a string to the “foo” variable). If we later assign a numeric value to “foo”, the statement completion (after this assignment) automatically changes to provide intellisense for a number.

    Scenario 2: Intellisense When Manipulating Browser Objects
    It is pretty common with JavaScript to manipulate the DOM of a page, as well as work against browser objects available on the client. Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects – but didn’t provide much help with more advanced scenarios (like creating dynamic variables and methods). VS 2010’s pseudo-execution of code within the editor now allows us to provide rich intellisense for a much broader set of scenarios. For example, below we are using the browser’s window object to create a global variable named “bar”. Notice how we can now get intellisense (with correct type inference for a string) with VS 2010 when we later try and use it. When we assign the “bar” variable as a number (instead of as a string), the VS 2010 intellisense engine correctly infers its type and modifies statement completion appropriately to be that of a number instead.

    Scenario 3: Showing Off
    Because VS 2010 is pseudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it – and is still able to provide accurate type inference and intellisense. For example, below we are using a for-loop and the browser’s window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9). Notice how the editor’s intellisense engine identifies and provides statement completion for them. Because variables added via the browser’s window object are also global variables, they now show up in the global variable intellisense drop-down as well. Better yet – type inference is still fully supported. So if we assign a string to a dynamically named variable we will get type inference for a string. If we assign a number we’ll get type inference for a number. Just for fun (and to show off!) we could adjust our for-loop to assign a string for even numbered variables (bar2, bar4, bar6, etc.) and assign a number for odd numbered variables (bar1, bar3, bar5, etc.). Notice how we get statement completion for a string for the “bar2” variable, while for “bar1” we get statement completion for a number.

    This isn’t just a cool pet trick
    While the above example is a bit contrived, the approach of dynamically creating variables, methods and event handlers on the fly is pretty common with many JavaScript libraries. Many of the more popular libraries use these techniques to keep the size of script library downloads as small as possible. VS 2010’s support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code Intellisense out of the box when programming against them.

    Summary
    Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provide much richer JavaScript intellisense support. This support works with pretty much all popular JavaScript libraries. It should help provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications.

    Hope this helps,

    Scott

    P.S. You can read my previous blog post on VS 2008’s JavaScript Intellisense to learn more about our previous JavaScript intellisense (and some of the scenarios it supported). VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.

    Read the article

  • Independence Day for Software Components &ndash; Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely-coupled “independent” software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich’s “Grand Unified Theory of Software Design” talk at MountainWest RubyConf 2009. I finally watched that video this morning. I highly recommend it. In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is:

    1. The common birth of two or more at the same time
    2. That which is born or produced with another
    3. The act of growing together

    The brief Wikipedia page about Connascent Software Components says that: "Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems."

    The term was introduced to the software world in Meilir Page-Jones’ 1996 book “What Every Programmer Should Know About Object-Oriented Design”. The middle third of that book is the author’s proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as “Fundamentals of Object-Oriented Design in UML”. Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence, “is worth the price of the entire book”. (The price of the entire book, by the way, is not much – I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I’m looking forward to getting the book and learning about connascence from the original source.) Meanwhile, here’s my summary of Weirich’s summary of Page-Jones’ writings about connascence:

    The stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong, are:

    Connascence of Name
    Connascence of name is when multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don’t need to go changing them later.

    Connascence of Type
    Connascence of type is when multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it’s an issue with evil JavaScript type coercion.

    Connascence of Meaning
    Connascence of meaning is when multiple components must agree on the meaning of particular values, e.g. that “1” means normal customer and “2” means preferred customer. The solution to this is to use constants or enums instead of “magic” strings or numbers, which reduces the coupling by changing the connascence form from “meaning” to “name”.

    Connascence of Position
    Connascence of position is when multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.:

    eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification");

    The more parameters there are, the stronger the connascence of position is between the component and its callers. In the example above, it’s not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings are subject vs. body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class. This “introduce parameter object” refactoring might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters (a short sketch of this refactoring appears at the end of this excerpt). This points out two “rules” of connascence:

    - The Rule of Degree: The acceptability of connascence is related to the degree of its occurrence.
    - The Rule of Locality: Stronger forms of connascence are more acceptable if the elements involved are closely related. For example, positional arguments in private methods are less problematic than in public methods.

    Connascence of Algorithm
    Connascence of algorithm is when multiple components must agree on a particular algorithm. Be DRY – Don’t Repeat Yourself. If you have “cloned” code in multiple locations, refactor it into a common function.

    Those are the “static” forms of connascence. There are also “dynamic” forms, including…

    Connascence of Execution
    Connascence of execution is when the order of execution of multiple components is important. Consumers of your class shouldn’t have to know that they have to call an .Initialize method before it’s safe to call a .DoSomething method.

    Connascence of Timing
    Connascence of timing is when the timing of the execution of multiple components is important. I’ll have to read up on this one when I get the book, but assume it’s largely about threading.

    Connascence of Identity
    Connascence of identity is when multiple components must reference the same entity. The example Weirich gives is when you have two instances of the “Bob” Employee class: if you call the .RaiseSalary method on one and then the .Pay method on the other, does the payment use the updated salary?

    Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I’ll make corrections if necessary and share any other useful information I might learn.

    See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (That link is failing at the time I write this, so I had to go to the Google cache of the page.)
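    A minimal C# sketch of the “introduce parameter object” refactoring mentioned under Connascence of Position (the EmailMessage and Emailer names are hypothetical, not from Page-Jones or Weirich):

    using System;

    public class EmailMessage
    {
        public string From { get; set; }
        public string To { get; set; }
        public string Subject { get; set; }
        public string Body { get; set; }
    }

    public class Emailer
    {
        // One named parameter object replaces four positional strings,
        // weakening connascence of position to connascence of name/type.
        public void Send(EmailMessage message)
        {
            Console.WriteLine("To {0}: {1}", message.To, message.Subject);
        }
    }

    class Demo
    {
        static void Main()
        {
            // Each value is labeled at the call site, so argument order no longer matters.
            new Emailer().Send(new EmailMessage
            {
                From = "[email protected]",
                To = "[email protected]",
                Subject = "Order completion notification",
                Body = "Your order is complete"
            });
        }
    }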

    Read the article

  • Oracle confirms the arrival of the Java Development Kit 7, modularity said to be the main new feature of the

    Oracle confirms the arrival of JDK 7
    Its main new feature will be modularity, and Oracle reiterates its commitment to Java

    Oracle has just reiterated its commitment to Java at EclipseCon 2010, currently taking place in California. Former Sun executive Jeet Kaul - today a vice president at Oracle - and Steve Harris, also a vice president at the company, made multiple statements to that effect during the event. For Kaul, "the key element of Java's success is its platform", alluding to GlassFish, the reference application server for Java EE 6. GlassFish, they continued, should moreover soon see a...

    Read the article

  • Entity Framework Batch Update and Future Queries

    - by pwelter34
    Entity Framework Extended Library - a library that extends the functionality of Entity Framework.

    Features:
    - Batch Update and Delete
    - Future Queries
    - Audit Log

    Project Package and Source:
    - NuGet Package: PM> Install-Package EntityFramework.Extended
    - NuGet: http://nuget.org/List/Packages/EntityFramework.Extended
    - Source: http://github.com/loresoft/EntityFramework.Extended

    Batch Update and Delete
    A current limitation of the Entity Framework is that in order to update or delete an entity you have to first retrieve it into memory. In most scenarios this is just fine. There are, however, some scenarios where performance would suffer. Also, for single deletes, the object must be retrieved before it can be deleted, requiring two calls to the database. Batch update and delete eliminates the need to retrieve and load an entity before modifying it.

    Deleting:
    //delete all users where FirstName matches
    context.Users.Delete(u => u.FirstName == "firstname");

    Update:
    //update all tasks with status of 1 to status of 2
    context.Tasks.Update(
        t => t.StatusId == 1,
        t => new Task { StatusId = 2 });

    //example of using an IQueryable as the filter for the update
    var users = context.Users
        .Where(u => u.FirstName == "firstname");
    context.Users.Update(
        users,
        u => new User { FirstName = "newfirstname" });

    Future Queries
    Build up a list of queries for the data that you need, and the first time any of the results are accessed, all the data will be retrieved in one round trip to the database server. Reducing the number of trips to the database is great. Using this feature is as simple as appending .Future() to the end of your queries. To use the Future Queries, make sure to import the EntityFramework.Extensions namespace. Future queries are created with the following extension methods:
    - Future()
    - FutureFirstOrDefault()
    - FutureCount()

    Sample:
    // build up queries
    var q1 = db.Users
        .Where(t => t.EmailAddress == "[email protected]")
        .Future();
    var q2 = db.Tasks
        .Where(t => t.Summary == "Test")
        .Future();

    // this triggers the loading of all the future queries
    var users = q1.ToList();

    In the example above, two queries are built up; as soon as one of the queries is enumerated, it triggers the batch load of both queries.

    // base query
    var q = db.Tasks.Where(t => t.Priority == 2);
    // get total count
    var q1 = q.FutureCount();
    // get page
    var q2 = q.Skip(pageIndex).Take(pageSize).Future();

    // triggers execute as a batch
    int total = q1.Value;
    var tasks = q2.ToList();

    In this example, we have a common scenario where you want to page a list of tasks. In order for the GUI to set up the paging control, you need a total count. With Future, we can batch the queries together to get all the data in one database call. Future queries work by creating the appropriate IFutureQuery object that keeps the IQueryable. The IFutureQuery object is then stored in the IFutureContext.FutureQueries list. Then, when one of the IFutureQuery objects is enumerated, it calls back to IFutureContext.ExecuteFutureQueries() via the LoadAction delegate. ExecuteFutureQueries builds a batch query from all the stored IFutureQuery objects. Finally, all the IFutureQuery objects are updated with the results from the query.

    Audit Log
    The Audit Log feature will capture changes to entities any time they are submitted to the database. The Audit Log captures only the entities that are changed, and only the properties on those entities that were changed. The before and after values are recorded. AuditLogger.LastAudit is where this information is held, and there is a ToXml() method that makes it easy to turn the AuditLog into XML for easy storage (a short usage sketch follows at the end of this excerpt). The AuditLog can be customized via attributes on the entities or via a Fluent Configuration API.

    Fluent Configuration:
    // config audit when your application is starting up...
    var auditConfiguration = AuditConfiguration.Default;
    auditConfiguration.IncludeRelationships = true;
    auditConfiguration.LoadRelationships = true;
    auditConfiguration.DefaultAuditable = true;

    // customize the audit for Task entity
    auditConfiguration.IsAuditable<Task>()
        .NotAudited(t => t.TaskExtended)
        .FormatWith(t => t.Status, v => FormatStatus(v));

    // set the display member when status is a foreign key
    auditConfiguration.IsAuditable<Status>()
        .DisplayMember(t => t.Name);

    Create an Audit Log:
    var db = new TrackerContext();
    var audit = db.BeginAudit();
    // make some updates ...
    db.SaveChanges();
    var log = audit.LastLog;
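    Building on the ToXml() note above, persisting the captured audit might look like this minimal sketch (the file name is illustrative, and the return type of ToXml() is assumed here to be string-convertible):

    // after db.SaveChanges() in the sample above
    var log = audit.LastLog;
    var xml = log.ToXml(); // ToXml() per the post; exact return type assumed string-convertible
    System.IO.File.WriteAllText("audit-log.xml", xml.ToString());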

    Read the article

  • Does programming knowledge have a half-life?

    - by Gary Rowe
    In answering this question, I asserted that programming knowledge has a half-life of about 18 months. In physics, we have radioactive decay, which is the process by which a radioactive element transforms into something less energetic. The half-life is the measure of how long it takes for this process to leave only half of the material remaining. A parallel concept might be that over time our programming knowledge ceases to be the current idiom and eventually becomes irrelevant. Noting that a half-life is asymptotic (so some knowledge will always be relevant), what are your thoughts on this? Is 18 months a good estimate? Is it even the case? Does it apply to design patterns, but over a longer period? What are the inherent advantages/disadvantages of this half-life? Update: I just found this question, which covers the material fairly well: "Half of everything you know will be obsolete in 18-24 months" = ( True, or False? )
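    For concreteness, the decay law being borrowed from physics reads:

    N(t) = N_0 \left(\frac{1}{2}\right)^{t / T_{1/2}}

    Taking T_{1/2} = 18 months from the assertion above (and remembering this is only a metaphor), roughly a quarter of what is current today would still be current after three years, an eighth after four and a half years, and so on.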

    Read the article

  • Do print and bookmark links really work?

    - by Joseph Mastey
    It seems to be common on the web to provide users with some visual element on the page to either print or bookmark a page. This is all well and good (and probably doesn't hurt for the most part), but I question its effectiveness at causing the intended behavior. Is there any evidence to suggest that this causes an increase in bookmarking/printing behavior? Similarly, is there any evidence that users will use this method rather than the browser's default interface for the functions? I am really looking for user research with actual results, rather than anecdotes to answer this question. Thanks, Joseph Mastey

    Read the article

< Previous Page | 595 596 597 598 599 600 601 602 603 604 605 606  | Next Page >