Search Results

Search found 12765 results on 511 pages for 'format'.


  • CascadingDropDown jQuery Plugin for ASP.NET MVC

    - by rajbk
    CascadingDropDown is a jQuery plugin that can be used by a select list to get automatic population using AJAX. A sample ASP.NET MVC project is attached at the bottom of this post.

    Usage

    The code below shows two select lists:

    ```html
    <select id="customerID" name="customerID">
      <option value="ALFKI">Maria Anders</option>
      <option value="ANATR">Ana Trujillo</option>
      <option value="ANTON">Antonio Moreno</option>
    </select>

    <select id="orderID" name="orderID">
    </select>
    ```

    When a customer is selected in the first select list, the second list will auto-populate itself with the following code:

    ```js
    $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders');
    ```

    Internally, an AJAX post is made to '/Sales/AsyncOrders' with the post body containing customerID=[selectedCustomerID]. This executes the action AsyncOrders on the SalesController with signature AsyncOrders(string customerID). The AsyncOrders method returns JSON, which is then used to populate the select list. The expected JSON format is shown below:

    ```json
    [{ "Text": "John", "Value": "10326" },
     { "Text": "Jane", "Value": "10801" }]
    ```

    Details

    $(targetID).CascadingDropDown(sourceID, url, settings)

    - targetID: The ID of the select list that will auto-populate.
    - sourceID: The ID of the select list which, on change, causes the targetID to auto-populate.
    - url: The URL to post to.

    Options

    - promptText: Text for the first item in the select list. Default: -- Select --
    - loadingText: Optional text to display in the select list while it is being loaded. Default: Loading..
    - errorText: Optional text to display if an error occurs while populating the list. Default: Error loading data.
    - postData: Data you want posted to the url in place of the default. Example: { postData : { customerID : $('#custID'), orderID : $('#orderID') }} will cause customerID=ALFKI&orderID=2343 to be sent as the POST body. Default: a text string obtained by calling serialize on the sourceID.
    - onLoading (event): Raised before the list is populated.
    - onLoaded (event): Raised after the list is populated.

    The code below shows how to "animate" the select list after load, using custom options:

    ```js
    $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders', {
      promptText: '-- Pick an Order--',
      onLoading: function () { $(this).css("background-color", "#ff3"); },
      onLoaded: function () { $(this).animate({ backgroundColor: '#ffffff' }, 300); }
    });
    ```

    To return JSON from our action method, we use the Json ActionResult, passing in an IEnumerable<SelectListItem>:

    ```csharp
    public ActionResult AsyncOrders(string customerID)
    {
        var orders = repository.GetOrders(customerID).ToList().Select(a => new SelectListItem()
        {
            Text = a.OrderDate.HasValue ? a.OrderDate.Value.ToString("MM/dd/yyyy") : "[ No Date ]",
            Value = a.OrderID.ToString(),
        });
        return Json(orders);
    }
    ```

    Sample Project using VS 2010 RTM: NorthwindCascading.zip

    Read the article

  • Parallelism in .NET – Part 20, Using Task with Existing APIs

    - by Reed
    Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs that are based on the previous asynchronous programming model. While Task and Task<T> provide a much nicer syntax as well as extended flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don't directly support this workflow.

    There is a method in the TaskFactory class which can be used to adapt the existing APIs to the new Task class: TaskFactory.FromAsync. This method provides a way to convert from the BeginOperation/EndOperation method pair syntax common throughout the .NET Framework directly to a Task<T> containing the results of the operation in the task's Result property.

    While this method does exist, it unfortunately comes at a cost – the method overloads are far from simple to decipher, and the resulting code is not always as easily understood as newer code based directly on the Task class. For example, a single call to handle WebRequest.BeginGetResponse/EndGetResponse, one of the easiest "pairs" of methods to use, looks like the following:

    ```csharp
    var task = Task.Factory.FromAsync<WebResponse>(
        request.BeginGetResponse,
        request.EndGetResponse,
        null);
    ```

    The compiler is unfortunately unable to infer the correct type, and, as a result, the WebResponse must be explicitly mentioned in the method call. As a result, I typically recommend wrapping this into an extension method to ease use. For example, I would place the above in an extension method like:

    ```csharp
    public static class WebRequestExtensions
    {
        public static Task<WebResponse> GetResponseAsync(this WebRequest request)
        {
            return Task.Factory.FromAsync<WebResponse>(
                request.BeginGetResponse,
                request.EndGetResponse,
                null);
        }
    }
    ```

    This dramatically simplifies usage. For example, if we wanted to asynchronously check to see if this blog supported XHTML 1.0, and report that in a text box to the user, we could do:

    ```csharp
    var webRequest = WebRequest.Create("http://www.reedcopsey.com");
    webRequest.GetResponseAsync().ContinueWith(t =>
    {
        using (var sr = new StreamReader(t.Result.GetResponseStream()))
        {
            string str = sr.ReadLine();
            this.textBox1.Text = string.Format("Page at {0} supports XHTML 1.0: {1}",
                t.Result.ResponseUri,
                str.Contains("XHTML 1.0"));
        }
    }, TaskScheduler.FromCurrentSynchronizationContext());
    ```

    By using a continuation with a TaskScheduler based on the current synchronization context, we can keep this request asynchronous, check based on the first line of the response string, and report the results back on our UI directly.
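    The same wrapper pattern applies to any Begin/End pair. As an illustration that is not from the original post, here is a minimal sketch wrapping Stream.BeginRead/EndRead, using the FromAsync overload that forwards the three read arguments (the extension method name is my own):

    ```csharp
    using System.IO;
    using System.Threading.Tasks;

    public static class StreamExtensions
    {
        // Wraps the APM pair Stream.BeginRead/EndRead in a Task<int>.
        // The first three generic parameters carry BeginRead's arguments;
        // the last is the result type returned by EndRead.
        public static Task<int> ReadAsyncTask(this Stream stream,
            byte[] buffer, int offset, int count)
        {
            return Task.Factory.FromAsync<byte[], int, int, int>(
                stream.BeginRead, stream.EndRead,
                buffer, offset, count, null);
        }
    }
    ```

    As with GetResponseAsync above, the extension method hides the hard-to-decipher overload behind a signature that reads like any other Task-returning API.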

    Read the article

  • Surface Review from Canadian Guy Who Didn’t Go To Build

    - by D'Arcy Lussier
    I didn’t go to Build last week; I opted to stay home and go trick-or-treating with my daughters instead. I had many friends that did go, however, and I was able to catch up with James Chambers last night to hear about the conference and play with his Surface RT and Nokia 920 WP8 devices. I’ve been using Windows 8 for a while now, so I’m not going to comment on OS features – lots of posts out there on that already. Let me instead comment on the hardware itself.

    Size and Weight

    The size of the tablet was awesome. I’m comparing it against the Samsung Windows 8 tablet we received at Build 2011, as well as my iPad. The Surface RT was taller and slightly heavier than the iPad, but smaller and lighter than the Samsung Win 8 tablet. I still don’t prefer the default wide-screen format, but the Surface RT is much more usable than the Samsung even when holding it by the long edge.

    Build Quality

    No issues with the build quality; it seemed very solid. But... y’know, people have been going on about how the Surface RT materials are so much better than the plastic-feeling models Samsung and others put out. I didn’t really notice *that* much difference in that regard with the Surface RT. Interesting feature I didn’t expect – the Windows button on the device is touch-sensitive, not a mechanical one. I didn’t try video or anything, so I can’t comment on the media experience. The kickstand is a great feature, and the way the Surface RT connects to the combo case/keyboard Touch Cover is very slick while being incredibly simple.

    What About That Touch Cover Keyboard?

    So first, kudos to Microsoft on the Touch Cover! This thing was insanely responsive (including the trackpad) and really delivered on the thinness I was expecting. With that said – and remember this is with very limited use – I would probably go with the Type Cover instead of the Touch Cover. The difference is buttons. The Touch Cover doesn’t actually have “buttons” on the keyboard – hence why it’s a “touch” cover. You tap on a key to type it. James tells me that after a while you get used to it and you can type very fast. For me, I just prefer the tactile feeling of a button being pressed/depressed. But still – typing on the Touch Cover worked very well.

    Would I Buy One?

    So after playing with it, did I cry out in envy and rage that I wasn’t able to get one of these machines? Did I curse my decision to collect Halloween candy with my kids instead of being at Build getting hardware? Well – no. Even with the keyboard, the Surface RT is not a business laptop replacement device. While Office does come included, you can’t install any applications other than Windows Store apps. This might be limiting depending on what other applications you need to have available on your computer. Surface RT is a great personal computing device, as long as you’re not already invested in a competing ecosystem. I’ve heard people make statements that they’re going to replace all the iPads in their homes with Surface tablets. In my home, that’s not feasible – my wife and daughters have amassed quite a collection of games via iTunes. We also buy all our music via iTunes, so even with the Xbox streaming music service now available we’re still tied quite tightly to iTunes.

    So who is the Surface RT for? In my mind, if you’re looking for a solid, compact device that provides basic business functionality (read: email), or if you have someone who needs a very simple-to-use computer for email, web browsing, etc., then Surface RT is a great option.
    For me, I’m waiting on the Samsung Ativ Smart PC Pro and am curious to see what changes the Surface Pro will come with.

    Read the article

  • RegexClean Transformation

    Use the power of regular expressions to cleanse your data right there inside the Data Flow. This transformation includes a full user interface for simple configuration, as well as advanced features such as error output configuration. Two regular expressions are used: a match expression and a replace expression. The transformation is designed around named capture groups (match groups), and even supports multiple expressions. This allows rich and complex expressions to be built, all through an easy-to-reuse transformation where a bespoke Script Component was previously the only alternative.

    Some simple properties are available for each column selected:

    Behaviour

    The two behaviour modes offer similar functionality, but with a difference. Replace replaces tokens within the input, and Emit overwrites the whole string.

    Cascade

    Cascade allows you to define multiple expressions, each on a new line. The match expression will be processed into one operation per line, which are then processed in order at run-time. Multiple replace expressions can also be specified, again each on a new line. If there is no corresponding replace expression for a match expression line, then the last replace expression will be used instead. It is common to have multiple match expressions, but only a single replace expression.

    Match Expression

    The expression used to define the named capture groups. This is where you can analyse the data, and tag or name elements within it as found by the match expression.

    Replace Expression

    The replace expression determines the final output. It references the named groups from the match expression and assembles them into the final output.

    If you want to use regular expressions to validate data, then try the Regular Expression Transformation.

    Quick Start Guide

    1. Select a column. A new output column is created for each selected column; there is no option for in-place replacement of column values. One input column can be used to populate multiple output columns – just select the column again in the lower grid, using the Input Columns drop-down selector.
    2. Amend the output column name and size as required. They default to the same as the input column selected.
    3. Amend the behaviour as required; the default is Replace.
    4. Amend the cascade option as required; the default is true.
    5. Finally, enter your match and replace regular expressions.

    Quick Sample #1

    Parse an email address and extract the user and domain portions. Format as a web address, passing the user portion as a URL parameter. This uses two match groups, user and host, which correspond to the text before the @ and after it respectively. Behaviour is Emit, and cascade is false, as we only have a single match expression.

    Match Expression: ^(?<user>[^@]+)@(?<host>.+)$
    Replace Expression: http://www.${host}?user=${user}

    Results:

    Sample Input: zheng0@adventure-works.com
    Sample Output: http://www.adventure-works.com?user=zheng0

    The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the RegexClean Transformation in the list.

    Downloads

    The RegexClean Transformation is available for both SQL Server 2005 and SQL Server 2008. Please choose the version to match your SQL Server version, or you can install both versions and use them side by side if you have both SQL Server 2005 and SQL Server 2008 installed.

    RegexClean Transformation for SQL Server 2005
    RegexClean Transformation for SQL Server 2008

    Version History
    SQL Server 2005 Version 1.0.0.105 - Public Release (28 Jan 2008)
    SQL Server 2008 Version 1.0.0.105 - Public Release (28 Jan 2008)
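    To see what the component's match/replace pair does under the hood, here is a plain C# sketch of Quick Sample #1 (my own illustration, not the component's code), using the same named capture groups and ${name} substitutions supported by .NET regular expressions:

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    class RegexCleanSample
    {
        static void Main()
        {
            // Match expression: named groups capture the user and host parts.
            var match = new Regex(@"^(?<user>[^@]+)@(?<host>.+)$");

            // Replace expression: ${user} and ${host} reference the named groups.
            string output = match.Replace(
                "zheng0@adventure-works.com",
                "http://www.${host}?user=${user}");

            Console.WriteLine(output); // http://www.adventure-works.com?user=zheng0
        }
    }
    ```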

    Read the article

  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott

    This article describes an approach to the management of cross reference data for BizTalk. Some articles about the BizTalk Cross Referencing features can be found here:

    - http://home.comcast.net/~sdwoodgate/xrefseed.zip
    - http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx
    - http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx

    Options

    Current options for managing this data include:

    - Maintaining xml files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility.
    - Use of user interfaces that have been developed to manage this data: BizTalk Cross Referencing Tool, XRef XML Creation Tool.

    However, there are the following issues with the above options:

    - The BizTalk Cross Referencing Tool requires a separate database to manage.
    - The XRef XML Creation tool has no means of persisting the data settings.
    - The BizTalk Cross Referencing Tool generates integers in the common id field. I prefer to use a string (e.g. acme.country.uk). This is more readable (see naming conventions below).
    - Both UI tools continue to use BTSXRefImport.exe. This utility replaces all xref data, which can be a problem in continuous integration environments that support multiple clients or BizTalk target instances. If you upload the data for one client it would destroy the data for another client. Yet in TFS, where builds run concurrently, this would break unit tests.

    Alternative Approach

    In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client data.

    Naming Conventions

    All data keys use namespace prefixing. The pattern will be <companyName>.<dataType>. The naming convention is to use lower casing for all items. The data must follow this pattern to isolate it from other company cross-reference data. The table below shows some sample data. (Note: this data uses the 'ID' cross-reference tables; the same principles apply for the 'value' cross-referencing tables.)

    | Table.Field | Description | Sample Data |
    | --- | --- | --- |
    | xref_AppType.appType | Application types | acme.erp, acme.portal, acme.assetmanagement |
    | xref_AppInstance.appInstance | Application instances (each will have a corresponding application type) | acme.dynamics.ax, acme.dynamics.crm, acme.sharepoint, acme.maximo |
    | xref_IDXRef.idXRef | Holds the cross reference data types | acme.taxcode, acme.country |
    | xref_IDXRefData.CommonID | Holds each cross reference type value used by the canonical schemas | acme.vatcode.exmpt, acme.vatcode.std, acme.country.usa, acme.country.uk |
    | xref_IDXRefData.AppID | Holds the value for each application instance and each xref type | GBP, USD |

    SQL Scripts

    The data to be stored in the BizTalkMgmtDb xref tables will be managed by SQL scripts stored in a database project in the Visual Studio solution.

    | File(s) | Description |
    | --- | --- |
    | Build.cmd | A sqlcmd script to deploy data by running the SQL scripts below. (This can be run as part of the MSBuild process.) |
    | acme.purgexref.sql | SQL script to clear acme.* data from the xref tables. As such, this will not impact data for any other company. |
    | acme.applicationInstances.sql | SQL script to insert application type and application instance data. |
    | acme.vatcode.sql, acme.country.sql, etc. | There will be a separate SQL script to insert each cross-reference data type and the application-specific values for these types. |
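    The post drives these scripts through a sqlcmd-based Build.cmd. Purely as an illustration of the same deployment step, a build task could also run the per-company purge and insert scripts from C#; the file names follow the post's convention, while the connection string is an assumption:

    ```csharp
    using System.Data.SqlClient;
    using System.IO;

    class XrefDeploy
    {
        static void Main()
        {
            // Assumed connection string; point it at your BizTalk management database.
            const string connectionString =
                @"Server=.;Database=BizTalkMgmtDb;Integrated Security=true";

            // Purge the acme.* data first, then re-insert, as the post describes.
            string[] scripts =
            {
                "acme.purgexref.sql",
                "acme.applicationInstances.sql",
                "acme.vatcode.sql",
                "acme.country.sql"
            };

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                foreach (var script in scripts)
                {
                    // Note: unlike sqlcmd, SqlCommand does not understand GO batch
                    // separators, so each script must be a single batch.
                    new SqlCommand(File.ReadAllText(script), connection)
                        .ExecuteNonQuery();
                }
            }
        }
    }
    ```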

    Read the article

  • UPK 3.6.1 (is Coming)

    - by marc.santosusso
    In anticipation of the release of UPK 3.6.1, I'd like to briefly describe some of the features that will be available in this new version.

    Topic Editor in Tabs

    Topic Editors now open in tabs instead of separate Developer windows. This offers several improvements. First, the bubble editor can be docked and resized in the same way as other editor panes. That's right, you can resize the bubble editor! The second enhancement this change brings is improved undo and redo, which allows each action to be undone and redone in the Topic Editor.

    New Sound Editor

    The topic and web page editors include a new sound editor with all the bells and whistles necessary to record, edit, import, and export sound. Sound can be captured during topic recording – which is great for a Subject Matter Expert (SME) to narrate what they're recording – or after the topic has been recorded. Sound can also be added to web pages and played on the concept panes of modules, sections and topics.

    Turn Off Bubbles in Topics

    Authors may opt to hide bubbles either per frame or for an entire topic, which is useful when you want to draw a user's attention to the content on the screen instead of the bubble. This feature works extremely well in conjunction with the new sound capabilities. For instance, consider recording conceptual information with narration and no bubbles.

    Presentation Output

    UPK content can be published as a presentation in Microsoft PowerPoint format. Publishing for Presentation will create a presentation for each topic published. The presentation template can be customized using the same methods offered for the UPK document outputs, allowing your UPK-generated presentations to match your corporate branding.

    Autosave and Recovery

    The Developer will automatically save your work as often as you would like. This affords authors the ability to recover these automatically saved documents if their system or UPK were to close unexpectedly. The Developer defaults to saving open documents every ten minutes.

    Package Editor Enhancement

    Files in packages will now open in the associated application when double-clicked. Authors can also choose "Open with..." from the context menu (a.k.a. the right-click menu).

    See It! Window

    See It! mode may now be launched in a non-fullscreen window. This is available from the kp.html file in any Player package. This version of See It! mode offers on-screen navigation controls, including previous frame, next frame, pause, etc.

    Firefox Enhancements

    The UPK Player will now offer both Do It! mode and sound playback when viewed using the Firefox web browser.

    Player Support for Safari

    The UPK Player is now fully supported on the Safari web browser for both Mac OS and Windows platforms.

    Keep Document Checked Out

    Authors may choose to keep a document checked out when performing a check-in. This allows an author to have a new version created on the server and continue editing.

    Close Button on Individual Tabs

    A close button has been added to the tabs, making it easier to close a specific tab.

    Outline Editor Enhancements

    Authors will have the option to prevent concepts from immediately displaying in the Developer when an outline item is selected. This makes it faster to move around in the outline editor.

    Tell us which feature you're most excited to use in the comments.

    Read the article

  • Clean Code: A Handbook of Agile Software Craftsmanship – book review

    - by DigiMortal
    Writing code that is easy to read and test is not something that is easy to achieve. Unfortunately, there are still far too many programming students who write awful spaghetti after graduating. But there is one really good book that helps you raise your code to a new level – your code will also become a communication tool for you and your fellow programmers. “Clean Code: A Handbook of Agile Software Craftsmanship” by Robert C. Martin is an excellent book that helps you start writing easily readable code. Of course, you are the one who has to learn and practice, but with this book you have a very good guide that keeps you going in the right direction. You can start writing better code while you read this book, and you can do it right in your current projects – you don’t have to create a new guestbook or some other simple application to start practicing. Take the project you are working on and start making it better!

    My special thanks to Robert C. Martin

    I want to say my special thanks to Robert C. Martin for this book. There are many books that teach you different things, and usually there is a noticeable learning curve to climb before you start getting results. There are many books that show you the direction to go and then leave you alone to figure out how to achieve all the stuff you just read about. Clean Code gives you a lot more – the mental tools to use so you can make your way to clean code, confident you will soon be there. I read books as much as I have time for. Clean Code is a top-level book for developers who have to write working code. Before anything else, take Clean Code and read it. You will never regret your decision. I promise.

    Fragment of editorial review

    “Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way. What kind of work will you be doing? You’ll be reading code—lots of code. And you will be challenged to think about what’s right about that code, and what’s wrong with it. More importantly, you will be challenged to reassess your professional values and your commitment to your craft. Readers will come away from this book understanding:

    - How to tell the difference between good and bad code
    - How to write good code and how to transform bad code into good code
    - How to create good names, good functions, good objects, and good classes
    - How to format code for maximum readability
    - How to implement complete error handling without obscuring code logic
    - How to unit test and practice test-driven development

    This book is a must for any developer, software engineer, project manager, team lead, or systems analyst with an interest in producing better code.”

    Table of contents

    1. Clean code
    2. Meaningful names
    3. Functions
    4. Comments
    5. Formatting
    6. Objects and data structures
    7. Error handling
    8. Boundaries
    9. Unit tests
    10. Classes
    11. Systems
    12. Emergence
    13. Concurrency
    14. Successive refinement
    15. JUnit internals
    16. Refactoring SerialDate
    17. Smells and heuristics
    Appendix A: Concurrency II
    Appendix B: org.jfree.date.SerialDate
    Appendix C: Cross references of heuristics
    Epilogue
    Index

    Read the article

  • Displaying JSON in your Browser

    - by Rick Strahl
    Do you work with AJAX requests a lot and need to quickly check URLs for JSON results? Then you probably know that it’s a fairly big hassle to examine JSON results directly in the browser. Yes, you can use FireBug or Fiddler, which work pretty well for actual AJAX requests, but if you just fire off a URL for quick testing in the browser you usually get hit by the Save As dialog and the download manager, followed by having to open the saved document in a text editor, in FireFox.

    Enter JSONView, which allows you to simply display JSON results directly in the browser. For example, imagine I have a URL like this:

    http://localhost/westwindwebtoolkitweb/RestService.ashx?Method=ReturnObject&format=json&Name1=Rick&Name2=John&date=12/30/2010

    typed directly into the browser, and that it returns a complex JSON object. With JSONView the result displays right in the browser: no fuss, no muss. It just works. Here the result is an array of Person objects that contain additional address child objects, displayed right in the browser.

    JSONView basically adds content-type checking for application/json results, and when it finds a JSON result it takes over the rendering and formats the display in the browser. Note that it re-formats the raw JSON as well for a nicer display view, along with collapsible regions for objects. You can still use View Source to see the raw JSON string returned.

    For me this is a huge time-saver. As I work with AJAX result data using GET and REST style URLs quite a bit, quickly and easily displaying JSON is a key feature in my development day, and JSONView for all its simplicity fits that bill for me. If you’re doing AJAX development and you often review URL-based JSON results, do yourself a favor and pick up a copy of JSONView.

    Other Browsers

    JSONView works only with FireFox – what about other browsers?

    Chrome: Chrome actually displays raw JSON responses as plain text without any plug-ins. There’s no plug-in or configuration needed, it just works, although you won’t get any fancy formatting. [Updated from comments] There’s also a port of JSONView available for Chrome from here: https://chrome.google.com/webstore/detail/chklaanhfefbnpoihckbnefhakgolnmc – it looks like it works just about the same as the JSONView plug-in for FireFox. Thanks to all who pointed this out.

    Internet Explorer: Internet Explorer probably has the worst response to JSON-encoded content: it displays an error page as it apparently tries to render JSON as XML. Yeah, that seems real smart – rendering JSON as an XML document. WTF? To get at the actual JSON output, you can use View Source. To get IE to display JSON directly as text you can add a MIME type mapping in the registry:

    1. Create a new application/json key in: HKEY_CLASSES_ROOT\MIME\Database\ContentType\application/json
    2. Add a string value of CLSID with a value of {25336920-03F9-11cf-8FD0-00AA00686F13}
    3. Add a DWORD value of Encoding with a value of 80000

    I can’t take credit for this tip – found it first on Sky Sander’s Blog. Note that the CLSID can be used for just about any type of text data you want to display as plain text in IE. It’s the in-place display mechanism, and it should work for most text content. For example, it might also be useful for looking at CSS and JS files inside of the browser instead of downloading those documents as well.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, AJAX
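    If you would rather script the registry tweak than edit it by hand, a small C# sketch (my own, using the values exactly as listed above; it must run elevated to write to HKEY_CLASSES_ROOT) could look like this:

    ```csharp
    using Microsoft.Win32;

    class JsonMimeFix
    {
        static void Main()
        {
            // Creates HKEY_CLASSES_ROOT\MIME\Database\ContentType\application/json
            // (requires administrative rights).
            using (var key = Registry.ClassesRoot.CreateSubKey(
                @"MIME\Database\ContentType\application/json"))
            {
                key.SetValue("CLSID", "{25336920-03F9-11cf-8FD0-00AA00686F13}");

                // The tip lists the DWORD as 80000; registry DWORDs are
                // conventionally written in hex, i.e. 0x00080000.
                key.SetValue("Encoding", 0x00080000, RegistryValueKind.DWord);
            }
        }
    }
    ```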

    Read the article

  • Principles of Big Data By Jules J Berman, O’Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx

    A fantastic book! It must become, if it is not yet, part of the fundamentals of Big Data as a field of science. I highly recommend it to those who are into Big Data practice. I confess this book is one of my best reads this year, and for a number of reasons:

    The book is full of wisdom, intimate insight, historical facts and real-life examples of how Big Data projects get conceived, operate and, sadly, yes, sometimes die. But not only that: most importantly, the book is filled with valuable advice and an accurate, even overwhelming (in a good way) amount of references. The author does not even stop there: there are numerous technical excerpts, links and examples allowing you to quickly accomplish many daunting tasks, or making you aware of what one needs to perform as a data practitioner (excuse my use of the word practitioner; I just did not find a better substitute for it when trying to reference all who face Big Data).

    Be aware that Jules Berman’s background is in medicine; naturally, this book discusses that subject a lot, as I believe it is very dear to the author’s heart. This does not make the book any less significant, however – quite the opposite. I trust that if there is an area in science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science. Let’s make cancer history!

    On a personal note, for me as a database and BI professional it has helped me to better understand the motives behind Big Data initiatives – their underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered in it. I must tell you this made me very curious and tempted to find out more about these indispensable attributes of Big Data, so I will surely be trying to stretch my wallet to acquire several books that go into more depth on the most popular of them.

    My favorite parts of the book? Well, all of them actually, but especially chapter 9, Analysis – it is just very close to my heart. But the real reason is that it let me see what I do with data from a different angle. And then the next – “Special Considerations”; they are just two logical parts.

    The writing language of this book is very accessible for all levels. I had no technical problem reading it in ebook format on my 8” tablet or a large-screen monitor. If I were asked to say at least something negative, I have to state I had a feeling initially that the book’s first part reads like academic material, relaxing the reader as the book progresses. I admit I am impressed with Jules’ ability to use several programming languages and OSS tools – bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which, it seems, is becoming a de facto standard knowledge item for any modern human being.

    So grab a copy of this book, read it end to end, and shield yourself from making mistakes at any stage of your Big Data initiative. By the way, this book also helps you build better future Big Data projects.

    Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • Spending the summer at camp… Web Camp, that is

    - by Jon Galloway
    Microsoft is sponsoring a series of Web Camps this summer. They’re a series of free two-day events being held worldwide, and I’m really excited about taking part. The camp is targeted at a broad range of developer backgrounds and experience. Content builds from 101-level introductory material to 200-300 level coverage, but we hit some advanced bits (e.g. MVC 2 features, jQuery templating, IIS 7 features, etc.) that advanced developers may not yet have seen. We start with a lap around ASP.NET & Web Forms, then move on to building an application with ASP.NET MVC 2, jQuery, and Entity Framework 4, and finally deploy to IIS. I got to spend some time working with James before the first Web Camp refining the content, and I think he’s packed about as much goodness into the time available as is scientifically possible. The content is really code-focused – we start with File/New Project and spend the day building a real, working application.

    The second day of the Web Camp provides attendees an opportunity to get hands-on. There are two options:

    - Join a team and build an application of your choice
    - Work on a lab or tutorial

    James Senior and I kicked off the fun with the first Web Camp in Toronto a few weeks ago. It was sold out, lots of fun, and by all accounts a great way to spend two days. I’m really enthusiastic about the format. Rather than just listening to speakers and then forgetting everything in a few days, attendees actually build something of their choice. They get an opportunity to pitch projects they’re interested in, form teams, and build it – getting experience with “real world” problems, with all the help they need from experienced developers. James got help on the second-day practical part from the good folks that run Startup Weekend. Startup Weekend is a fantastic program that gathers developers together to build cool apps in a weekend, so their input on how to organize successful teams for weekend projects was invaluable. Nick Seguin joined us in Toronto, and in addition to making sure that everything flowed smoothly, he just added a lot of fun and excitement to the event, reminding us all about how much fun it is to come up with a cool idea and just build it.

    In addition to the Toronto camp, I’ll be at the Mountain View, London, Munich, and New York camps over the next month. London is sold out, but the rest still have space available, so come join us! Here’s the full list, with the ones I’ll be at bolded because - you know - it’s my blog. The whole speaker list is great, including Scott Guthrie, Scott Hanselman, James Senior, Rachel Appel, Dan Wahlin, and Christian Wenz.

    - Toronto, May 7-8 (James Senior and I were thrown out on our collective ears)
    - Moscow, May 19
    - Beijing, May 21-22
    - Shanghai, May 24-25
    - Mountain View, May 27-28 (I’m speaking with Rachel Appel)
    - Sydney, May 28-29
    - Singapore, June 04-05
    - London, June 04-05 (I’m speaking with Christian Wenz – SOLD OUT)
    - Munich, June 07-08 (I’m speaking with Christian Wenz)
    - Chicago, June 11-12
    - Redmond, WA, June 18-19
    - New York, June 25-26 (I’m speaking with Dan Wahlin)

    Come say hi!

    Read the article

  • Optional Parameters and Named Arguments in C# 4 (and a cool scenario w/ ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    This is the seventeenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers two new language features being added to C# 4.0 – optional parameters and named arguments – as well as a cool way you can take advantage of optional parameters (both in VB and C#) with ASP.NET MVC 2.

    Optional Parameters in C# 4.0

    C# 4.0 now supports using optional parameters with methods, constructors, and indexers (note: VB has supported optional parameters for a while). Parameters are optional when a default value is specified as part of a declaration. For example, the method below takes two parameters – a “category” string parameter, and a “pageIndex” integer parameter. The “pageIndex” parameter has a default value of 0, and as such is an optional parameter. When calling the method we can explicitly pass two arguments to it, or we can omit passing the second optional argument – in which case the default value of 0 will be passed. Note that VS 2010’s Intellisense indicates when a parameter is optional, as well as what its default value is, when statement completion is displayed. (The original post illustrates each of these steps with code screenshots; a reconstruction follows below.)

    Named Arguments and Optional Parameters in C# 4.0

    C# 4.0 also now supports the concept of “named arguments”. This allows you to explicitly name an argument you are passing to a method – instead of just identifying it by argument position. For example, I could explicitly identify the second argument passed to the GetProductsByCategory method by name, making its usage a little more explicit. Named arguments come in very useful when a method supports multiple optional parameters and you want to specify which arguments you are passing. For example, given a method DoSomething that takes two optional parameters, we could use named arguments to call it in several ways. Because both parameters are optional, in cases where only one (or zero) arguments are specified, the default value for any non-specified argument is passed.

    ASP.NET MVC 2 and Optional Parameters

    One nice usage scenario where we can now take advantage of the optional parameter support of VB and C# is with ASP.NET MVC 2’s input binding support for action methods on Controller classes. For example, consider a scenario where we want to map URLs like “Products/Browse/Beverages” or “Products/Browse/Desserts” to a controller action method. We could do this by writing a URL routing rule that maps the URLs to a method. We could then optionally use a “page” querystring value to indicate whether or not the results displayed by the Browse method should be paged – and if so, which page of the results should be displayed. For example: /Products/Browse/Beverages?page=2.

    With ASP.NET MVC 1 you would typically handle this scenario by adding a “page” parameter to the action method and making it a nullable int (which means it will be null if the “page” querystring value is not present). You could then write code to convert the nullable int to an int – and assign it a default value if it was not present in the querystring.

    With ASP.NET MVC 2 you can now take advantage of the optional parameter support in VB and C# to express this behavior more concisely and clearly. Simply declare the action method parameter as an optional parameter with a default value (the original post shows this for both C# and VB). If the “page” value is present in the querystring (e.g. /Products/Browse/Beverages?page=22) then it will be passed to the action method as an integer. If the “page” value is not in the querystring (e.g. /Products/Browse/Beverages) then the default value of 0 will be passed to the action method. This makes the code a little more concise and readable.

    Summary

    There are a bunch of great new language features coming to both C# and VB with VS 2010. The above two features (optional parameters and named arguments) are but two of them. I’ll blog about more in the weeks and months ahead.

    If you are looking for a good book that summarizes all the language features in C# (including C# 4.0), as well as providing a nice summary of the core .NET class libraries, you might also want to check out the newly released C# 4.0 in a Nutshell book from O’Reilly. It does a very nice job of packing a lot of content into an easy-to-search format.

    Hope this helps,

    Scott
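    Since the original post’s code appears only as screenshots, here is a reconstruction consistent with the prose (method and parameter names as described in the post; the bodies are minimal stand-ins of my own):

    ```csharp
    using System.Web.Mvc;

    public class ProductService
    {
        // "pageIndex" has a default value of 0, making it optional. Callers can
        // pass both arguments, omit the second, or use named arguments:
        //   GetProductsByCategory("beverages");
        //   GetProductsByCategory("beverages", 2);
        //   GetProductsByCategory(category: "beverages", pageIndex: 2);
        public void GetProductsByCategory(string category, int pageIndex = 0)
        {
            // ... stand-in body ...
        }

        // A method with two optional parameters; named arguments let the
        // caller choose which one to supply, e.g. DoSomething(flag: true);
        public void DoSomething(string name = "", bool flag = false)
        {
        }
    }

    public class ProductsController : Controller
    {
        // ASP.NET MVC 2 binds /Products/Browse/Beverages?page=22 so that
        // page receives 22; when the querystring value is absent, the
        // default value 0 is used instead of a nullable int.
        public ActionResult Browse(string category, int page = 0)
        {
            // ... stand-in body ...
            return View();
        }
    }
    ```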

    Read the article

  • What makes them click?

    - by Piet
    The other day (well, actually some weeks ago, while relaxing at the beach in Kos) I read ‘Neuro Web Design - What makes them click?’ by Susan Weinschenk (http://neurowebbook.com). The book is a fast and easy read (no unnecessary filler) and a good introduction on how your site’s visitors can be steered in the direction you want them to go.

    The Obvious

    The book handles some of the better-known, proven techniques, for example that ratings/testimonials from other people can help sell your product or service. Another well-known technique it talks about is inducing a sense of scarcity/urgency in the visitor. Only 2 seats left! Buy now and get 33% off! Just because these techniques are well known doesn’t mean they stop working. Luckily, 2/3rds of the book handles less obvious techniques, otherwise it wouldn’t be worth buying.

    The Not So Obvious

    A less known influencing technique is reciprocity. By that I’m not talking about swapping links with another website, but the fact that someone is more likely to do something for you after you did something for them first. The book cites some studies (I always love the facts and figures) and gives some actual examples of how to implement this in your site’s design, which is less obvious than it sounds. Want to know more? Buy the book!

    Other Interesting Sources

    For a more general introduction to the same principles, I’d suggest ‘Yes! 50 Secrets from the Science of Persuasion’. ‘Yes!…’ cites some of the same studies (it seems there’s a rather limited pool of studies covering this subject), but of course doesn’t show how to implement these techniques in your site’s design. I read ‘Yes!…’ last year, making ‘Neuro Web Design’ just a little bit less interesting.

    Always make sure you’re able to measure your changes! If you haven’t yet, check out the advanced segmentation in Google Analytics (don’t be afraid because it says ‘beta’, it works just fine) and Google Website Optimizer.

    Worth Buying?

    Can I recommend it? Sure, why not. I think it can be useful for anyone who has ever had to think about the design or content of a site. You don’t have to be a marketing guy to want a site you’re involved with to be successful. The content/filler ratio is excellent too: you don’t need to wade through dozens of pages to filter out the interesting bits (unlike ‘The Design of Sites’, which contains too much useless info, and because it’s in dead-tree format, you can’t google it). If you like it, you might also check out ‘Yes! 50 Secrets from the Science of Persuasion’.

    Tip for people living in Europe: check Amazon UK for your book-buying needs. Because of the low UK Pound exchange rate, it’s usually considerably cheaper and faster to get a book delivered to your doorstep by Amazon UK compared to ordering it at the local book store or web-shop.

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In this series the following parts have been published:

    - Part 1: Introduction
    - Part 2: Add arguments and variables
    - Part 3: Use more complex arguments
    - Part 4: Create your own activity
    - Part 5: Increase AssemblyVersion
    - Part 6: Use custom type for an argument
    - Part 7: How is the custom assembly found
    - Part 8: Send information to the build log
    - Part 9: Impersonate activities (run under other credentials)
    - Part 10: Include Version Number in the Build Number
    - Part 11: Speed up opening my build process template
    - Part 12: How to debug my custom activities
    - Part 13: Get control over the Build Output
    - Part 14: Execute a PowerShell script
    - Part 15: Fail a build based on the exit code of a console application

    Developers are “spoilt” persons who expect easy debugging experiences for every technique they work with. So they also expect it when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself.

    Remote debugging prerequisites

    The prerequisite for these steps is to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information on how to install it at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose the option to run the remote debugger on the build server from a file share.

    Debugging symbols prerequisites

    To be able to start debugging, you need to have the pdb files on the build server together with the assembly. The pdb must have been built with Full Debug Info.

    Steps

    In my setup I have a development machine and a build server. To set up the remote debugging, I performed the following steps:

    1. On your development machine, locate the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger
    2. Create a share for the Remote Debugger folder. Make sure that the share (and the folder) has the correct permissions so the user on the build server has access to the share.
    3. On the build server, go to the shared “Remote Debugger” folder.
    4. Start msvsmon.exe, which is located in the folder that represents the platform of the build server. This opens the Remote Debugging Monitor window.
    5. Go back to your development machine and open the BuildProcess solution.
    6. Start the Attach to Process command (Ctrl+Alt+P).
    7. Type in the Qualifier the name of the build server. In my case the user account that started msvsmon is a different user than the user on my development machine. In that case you have to type the qualifier in the format that is shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter>.
    8. Since the build service is running with other credentials, check the option “Show processes from all users”. Now the Attach to Process dialog shows the TFSBuildServiceHost process.
    9. Set the breakpoint in the activity you want to debug and kick off a build.

    Be aware that when you attach to the TFSBuildServiceHost you debug every single build that is run by this Windows service, so make sure you don’t debug the build server that is in production!

    You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
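    As a complement to remote debugging (this trick is not from the post), you can also force a debugger prompt from inside the activity itself, which avoids the msvsmon setup for quick one-off sessions. A minimal sketch:

    ```csharp
    using System.Activities;
    using System.Diagnostics;

    public sealed class MyCustomActivity : CodeActivity
    {
        protected override void Execute(CodeActivityContext context)
        {
            // Prompts to attach a debugger when the activity runs on the
            // build server. Remove this before using the activity in
            // production builds!
            if (!Debugger.IsAttached)
            {
                Debugger.Launch();
            }

            // ... the activity's actual logic goes here ...
        }
    }
    ```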

    Read the article

  • Christian Radio Locator iPhone app

    - by Tim Hibbard
    For the last three months or so I've been working on an iPhone (and iPad) app in my spare time. It all started when I took the kids to Minneapolis and had a hard time finding radio stations to listen to on the trip. I looked in the App Store for an app that would use my GPS to show me Christian radio stations nearby, but there wasn't one. So I decided to build my own.

    Using public information from the FCC and a few other sources, I built a database in Google Docs that contains the frequency of every Christian radio station, where each tower is located and how far the tower can reach. I also included any streaming audio information and other contact information, like Facebook or Twitter, that I could find. Google spreadsheets publish in JSON format (yes, really), and Xcode can automatically deserialize JSON into a properly formatted entity. This is one area where Xcode is far superior to C#. In just a few lines of code, I can have a list of in-memory, strongly typed objects from a web-based JSON feed. To accomplish the same thing natively in .NET would be much more work and wouldn't feel nearly as clean when it was said and done.

    The snazzy icon shown above was built by my very talented wife. She hasn't yet provided any feedback on the app's user interface, which is why it is so plain and boring.

    I used a navigation view controller and an EGO pull-to-refresh table view to construct the main window. Pulling down to refresh initiates a GPS lookup, which queries the database for radio stations in range (yes, you can pass parameters to Google spreadsheets and get a subset back in JSON). Pulling up on the table extends the range of the search and includes stations that may not be close enough to get clear audio. This feature is not that intuitive, and the next version contains an update to that functionality.

    Tapping a cell shows a detail view that displays additional information about the station. The user can tap to view the station on a map, tap to listen to an online stream (if available), or tap to see the station's Facebook or Twitter pages.

    Swiping back and forth on the table changes the information that is displayed on the right-hand side of the table cell. It scrolls through the city where the tower is located, how far the phone is from the tower, the range of the tower and, in the next version, a signal strength indicator. This was pretty easy to implement once I figured out how to assign the gesture recognizer delegate.

    Tapping and holding on a cell jumps the user to the map view screen, which is pretty cool but very hard for even a power user to discover. To tackle the issue of discoverability, the next version has a series of instructions displayed at the bottom of the screen to show the user the various shortcuts. Once the user has performed the swipes and long holds, the instructions disappear.

    I've learned a lot developing this app. Spending over a decade exclusively in .NET made the learning curve a bit steep, but once I learned the structure and syntax of Objective-C, I learned to appreciate its power and simplicity. The original post includes a few screenshots. I would really appreciate any feedback and especially iTunes reviews. Technically it is open source, and a smart googler could probably find it. I just haven't promoted it as open source.

    Cross posted from timhibbard.com

    Read the article

  • Oracle’s New Memory-Optimized x86 Servers: Getting the Most Out of Oracle Database In-Memory

    - by Josh Rosen, x86 Product Manager-Oracle
    With the launch of Oracle Database In-Memory, it is now possible to perform real-time analytics operations on your business data as it exists at that moment – in the DRAM of the server – and immediately return completely current and consistent data. The Oracle Database In-Memory option dramatically accelerates the performance of analytics queries by storing data in a highly optimized columnar in-memory format. This is a truly exciting advance in database technology.

    As Larry Ellison mentioned in his recent webcast about Oracle Database In-Memory, queries run 100 times faster simply by throwing a switch. But in order to get the most from the Oracle Database In-Memory option, the underlying server must also be memory-optimized.

    This week Oracle announced new 4-socket and 8-socket x86 servers, the Sun Server X4-4 and Sun Server X4-8, both of which have been designed specifically for Oracle Database In-Memory. These new servers use the fastest Intel® Xeon® E7 v2 processors, and each subsystem has been designed to be the best for Oracle Database, from the memory, I/O and flash technologies right down to the system firmware.

    Among these subsystems, one of the most important aspects we have optimized in the Sun Server X4-4 and Sun Server X4-8 is the memory subsystem. The new In-Memory option makes it possible to select which parts of the database should be memory-optimized. You can choose to put a single column or table in memory or, if you can, put the whole database in memory. The more, the better. With 3 TB and 6 TB total memory capacity on the Sun Server X4-4 and Sun Server X4-8, respectively, you can memory-optimize more of your database, if not all of it.

    [Image: Sun Server X4-8 CMOD with 24 DIMM slots per socket (up to 192 DIMM slots per server)]

    But memory capacity is not the only important factor in selecting the best server platform for Oracle Database In-Memory. As you put more of your database in memory, a critical performance metric known as memory bandwidth comes into play. The total memory bandwidth of the server dictates the rate at which data can be stored to and retrieved from memory. In order to achieve real-time analysis of your data using Oracle Database In-Memory, even under heavy load, the server must be able to handle extreme memory workloads. With that in mind, the Sun Server X4-8 was designed with the maximum possible memory bandwidth, providing over a terabyte per second of total memory bandwidth. Likewise, the Sun Server X4-4 also provides extreme memory bandwidth in an even more compact form factor, with over half a terabyte per second, giving customers scalability and choice depending on the size of the database.

    Beyond the memory subsystem, Oracle’s Sun Server X4-4 and Sun Server X4-8 systems provide other key technologies that enable Oracle Database to run at its best. The Sun Server X4-4 allows for up to 4.8 TB of internal, write-optimized PCIe flash while the Sun Server X4-8 allows for up to 6.4 TB of PCIe flash. This enables dramatic acceleration of data inserts and updates to Oracle Database. And with the new elastic computing capability of Oracle’s new x86 servers, server performance can be adapted to your specific Oracle Database workload to ensure that every last bit of processing power is utilized.

    Because Oracle designs and tests its x86 servers specifically for Oracle workloads, we provide the highest possible performance and reliability when running Oracle Database.
    To learn more about the Sun Server X4-4 and Sun Server X4-8, you can find more details, including data sheets and white papers, here.

    Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Converting Openfire IM datetime values in SQL Server to/from VARCHAR(15) and DATETIME data types

    - by Brian Biales
    A client is using Openfire IM for their users, and would like some custom queries to audit user conversations (which are stored by Openfire in tables in the SQL Server database). Because Openfire supports multiple database servers and multiple platforms, the designers chose to store all date/time stamps in the database as 15-character strings, which get converted to Java Date objects in their code (Openfire is written in Java). I did some digging around and, so I don't forget and in case someone else finds this useful, I will put the simple algorithms here for converting back and forth between SQL DATETIME and the Java string representation.

    The Java string representation is the number of milliseconds since 1/1/1970. SQL Server's DATETIME is actually represented as a float, the value being the number of days since 1/1/1900, the portion after the decimal point representing the hours/minutes/seconds/milliseconds as a fractional part of a day. Try this and you will see it is true:

    ```sql
    SELECT CAST(0 AS DATETIME)
    ```

    You will see it returns the date 1/1/1900. The difference in days between SQL Server's zero date of 1/1/1900 and the Java representation's zero date of 1/1/1970 is found easily using the following SQL:

    ```sql
    SELECT DATEDIFF(D, '1900-01-01', '1970-01-01')
    ```

    which returns 25567. There are 25567 days between these dates.

    So to convert from the Java string to SQL Server's DATETIME, we need to convert the number of milliseconds to a floating point representation of the number of days since 1/1/1970, then add 25567 to change this to the number of days since 1/1/1900. To convert milliseconds to days, you divide by 1000 ms/s, then by 60 seconds/minute, then by 60 minutes/hour, then by 24 hours/day – or simply divide by 1000*60*60*24, which is 86400000.

    So, to summarize: cast the string as a float, divide by 86400000 milliseconds/day, add 25567 days, and cast the resulting value to a DATETIME. Here is an example:

    ```sql
    DECLARE @tmp as VARCHAR(15)
    SET @tmp = '1268231722123'
    SELECT @tmp as JavaTime,
           CAST((CAST(@tmp AS FLOAT) / 86400000) + 25567 AS DATETIME) as SQLTime
    ```

    Converting from SQL DATETIME back to the Java time format is not quite as simple, I found, because floats of that size do not convert nicely to strings; they end up in scientific notation when using the CONVERT or CAST functions. But I found a couple of ways around that problem.

    You can convert a date to the number of seconds since 1/1/1970 very easily using the DATEDIFF function, as this value fits in an int. If you don't need to worry about the milliseconds, simply cast this integer as a string and concatenate '000' at the end, essentially multiplying the number by 1000 and making it milliseconds since 1/1/1970. If, however, you do care about the milliseconds, you will need to use DATEPART to get the milliseconds part of the date, cast this integer to a string, pad zeros on the left to make sure it is three digits, and concatenate these three digits to the seconds string above. Finally, I discovered that by casting to DECIMAL(15,0) and then to VARCHAR(15), I avoid the scientific notation issue. So here are all my examples; pick the one you like best...
    First, here is the simple approach if you don't care about the milliseconds:

    ```sql
    DECLARE @tmp as VARCHAR(15)
    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15)) + '000'
    SELECT @tmp as JavaTime, @dt as SQLTime
    ```

    If you want to keep the milliseconds:

    ```sql
    DECLARE @tmp as VARCHAR(15)
    DECLARE @dt as DATETIME
    DECLARE @ms as int
    SET @dt = '2010-03-10 14:35:22.123'
    SET @ms = DATEPART(ms, @dt)
    SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
             + RIGHT('000' + CAST(@ms AS VARCHAR(3)), 3)
    SELECT @tmp as JavaTime, @dt as SQLTime
    ```

    Or, in one fell swoop:

    ```sql
    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SELECT @dt as SQLTime,
           CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
         + RIGHT('000' + CAST(DATEPART(ms, @dt) AS VARCHAR(3)), 3) as JavaTime
    ```

    And finally, a way to simply reverse the math used when converting from Java time to SQL time. Note the parentheses – watch out for operator precedence; you want to subtract, then multiply:

    ```sql
    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SELECT @dt as SQLTime,
           CAST(CAST((CAST(@dt as FLOAT) - 25567.0) * 86400000.0 as DECIMAL(15,0)) as VARCHAR(15)) as JavaTime
    ```

    Interestingly, I found that converting to SQL DATETIME can lose some accuracy: when I converted the time above to Java time and then back to DATETIME, the number of milliseconds was 120, not 123. As I am not interested in the milliseconds, this is OK for me. But you may want to look into using DATETIME2 in SQL Server 2008 for more accuracy.
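    For completeness, client code that reads these columns can do the same conversion without SQL. A C# sketch of my own, using the same 1/1/1970 epoch arithmetic described above:

    ```csharp
    using System;

    class OpenfireTime
    {
        static readonly DateTime Epoch =
            new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

        // Openfire VARCHAR(15) -> DateTime (milliseconds since 1/1/1970).
        static DateTime FromJavaTime(string javaTime)
        {
            return Epoch.AddMilliseconds(double.Parse(javaTime));
        }

        // DateTime -> Openfire VARCHAR(15).
        static string ToJavaTime(DateTime value)
        {
            return ((long)(value - Epoch).TotalMilliseconds).ToString();
        }

        static void Main()
        {
            // Matches the SQL examples above (time zone handling is ignored
            // here, just as it is in the SQL conversions).
            Console.WriteLine(FromJavaTime("1268231722123"));
            Console.WriteLine(ToJavaTime(
                new DateTime(2010, 3, 10, 14, 35, 22, 123, DateTimeKind.Utc)));
        }
    }
    ```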

    Read the article

  • What's new in ASP.Net 4.5 and VS 2012 - part 2

    - by nikolaosk
    This is the second post in a series of posts titled "What's new in ASP.Net 4.5 and VS 2012". You can have a look at the first post in this series here, and find all my posts regarding VS 2012 here. In this post I will be looking into the various new features available in ASP.Net 4.5 and VS 2012, specifically the enhancements in the HTML editor, CSS editor, and JavaScript editor. In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine; download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. It will work fine on Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system.

1) Launch VS 2012 and create a new Web Forms application by going to File -> New Web Site -> ASP.NET Web Forms Site.

2) Choose an appropriate name for your web site.

3) First, I would like to point out the new enhancements in the CSS editor in VS 2012. In the Solution Explorer, go to the Content folder and open Site.css. When I try to change the background-color property of the html element, I get a brand new, handy color picker. Have a look at the picture below. Please note that the color picker shows all the colors that have been used in this website. You can expand the color picker by clicking on the arrows; opacity is also supported. Have a look at the picture below.

4) There are also mobile styles in Site.css. These are based on media queries; please have a look at another post of mine on CSS3 media queries. Have a look at the picture below. In this case, when the maximum width of the screen is less than 850px, a new layout is dictated by these rules. CSS snippets are also supported. Have a look at the picture below. I am writing a new CSS rule for an image element: I type the property transform, hit Tab, and I have cross-browser CSS covering all of the major vendors. Then I simply add the value rotate and it is applied to all the cross-browser options. Have a look at the picture below. I am sure you realise how productive you can become with all these CSS snippets.

5) Now let's have a look at the new HTML editor enhancements in VS 2012. You can drag and drop a GridView web server control from the Toolbox into the Site.master file. You will see a smart tag (previously available only in Design View) that you can expand to add fields and format the web server control. Have a look at the picture below.

6) We also have code snippets available. I type <video and then press Tab twice; by doing that, the rest of the HTML5 markup is completed for me. Have a look at the picture below.

7) There is new support for the input tag, including all the HTML5 types and all the new accessibility features. Have a look at the picture below.

8) Another interesting feature is the new IntelliSense capability. When I change the DocType to 4.01 and then type the <audio> and <video> HTML5 tags, IntelliSense does not recognise them and adds squiggly lines. Have a look at the picture below. All these features support ASP.NET Web Forms, ASP.NET MVC applications, and Web Pages.

9) Finally, I would like to show you the enhanced support for JavaScript in VS 2012: full IntelliSense support and code snippet support. I create a sample JavaScript file. I type if and press Tab, type while and press Tab, type for and press Tab. In all three cases, code snippet support kicks in and completes the code block.
Have a look at the picture below. We also have full IntelliSense support; have a look at the picture below. I am creating a simple function and then type some XML-like comments for the input parameters. Have a look at the picture below. Then, when I call this function, IntelliSense picks up the XML comments and shows the variables' data types. Have a look at the picture below. Hope it helps!!!

    Read the article

  • SharePoint OCR image files indexing

    Introduction

This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint; it can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different.

Background

One of the projects I was working on required storage of old documents scanned into PDF files. A separate team of people was responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path.

OCR

The first search I fired off was for Open Source OCR products. Pretty quickly, I narrowed it down to TESSERACT (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brainchild of HP, which worked on it from 1985 to 1995. Then it was moved to Open Source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores some of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggle was that the home page claims its base input format is a TIFF file; maybe my TIFFs were bad, but I was able to get it to work only with BMP files.

Image file conversion

So now that I have an OCR engine that can convert BMP files into text, how do I get text out of the image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful Open Source utility that can convert almost any file into an image. It worked out of the box, converting any TIFF file into a bitmap, but to get PDF files converted it requires Ghostscript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe).

Dealing with text PDFs

With that utility installed, I was cooking: I can convert any file (in particular PDF and TIFF) into a bitmap, and then extract the text out of the bitmap. The only consideration was to somehow treat PDFs that already contain text differently; after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So another quick search, and I have PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for Open Source! With it, I can pull text out of a PDF in an eye blink. I would get nothing for pure image PDFs, but I already have a solution for that!

Batch process

It took another 15 minutes to set up a batch script to automate the process (a hedged C# sketch of the same pipeline follows below):

  1. Check the file extension.
  2. If the file is a PDF:
     - try to extract text out of it;
     - if there is more than a certain amount of text in the file - done!
     - if there is no text, convert the first page into a bitmap and run OCR on the bitmap.
  3. For any other file type, convert the file into a bitmap and run OCR on the bitmap.

Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name plus a '.txt' extension.
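To make the batch process concrete, here is a minimal C# sketch of the same pipeline. The tool paths, the 100-byte "enough text" threshold, and the class name are my own assumptions; the attached project uses a plain OCR.BAT instead:

using System;
using System.Diagnostics;
using System.IO;

class OcrIndexer
{
    // Assumed install locations - adjust to wherever you put the tools.
    const string PdfToText = @"C:\tools\pdftotext.exe";
    const string Convert = @"C:\tools\ImageMagick\convert.exe";
    const string Tesseract = @"C:\tools\tesseract.exe";

    static void Run(string exe, string args)
    {
        using (Process p = Process.Start(exe, args))
        {
            p.WaitForExit();
        }
    }

    public static void Index(string sourceFile)
    {
        string txtFile = sourceFile + ".txt";
        string bmpFile = Path.ChangeExtension(sourceFile, ".bmp");

        if (string.Equals(Path.GetExtension(sourceFile), ".pdf", StringComparison.OrdinalIgnoreCase))
        {
            // Cheap path first: pull any embedded text out of the PDF.
            Run(PdfToText, String.Format("\"{0}\" \"{1}\"", sourceFile, txtFile));
            if (File.Exists(txtFile) && new FileInfo(txtFile).Length > 100)
                return; // enough real text found - done!

            // Image-only PDF: render just the first page to a bitmap.
            Run(Convert, String.Format("\"{0}[0]\" \"{1}\"", sourceFile, bmpFile));
        }
        else
        {
            // TIFF, JPEG, BMP...: convert straight to a bitmap.
            Run(Convert, String.Format("\"{0}\" \"{1}\"", sourceFile, bmpFile));
        }

        // Tesseract appends .txt to the output base name it is given.
        Run(Tesseract, String.Format("\"{0}\" \"{1}\"", bmpFile, sourceFile));
    }
}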

    Read the article

  • SQL SERVER – Vacation, Travel and Study – A New Concept

    - by pinaldave
    Quite often when developers go to training sessions, they either find them very boring because of the studying, or great because they treat them as a vacation. There should be a balance between study and extra activities.

Studying is Boring

Studying is very hard. Nobody likes to study; very few people are going to list "studying" as one of their favorite hobbies. Already my young daughter knows she doesn't want to study, and I don't want to either. If you read my blog regularly you know that I am always saying that we need to be students for life. However, all philosophy aside, if you are put in a room with an instructor to study for eight hours a day, you are going to feel bored, uncomfortable, and unhappy. I was a trainer myself, and I understand that all-day study sessions are no fun, even for the trainer. I always tried to be entertaining, but even eight hours of jokes and laughter is tiresome. Eight hours at a comedy club would be boring after a little while, and if we can't even enjoy fun stuff for eight hours straight, how can we expect to study for eight hours straight?

Studying for Career or Certification

Even those who have advanced degrees and went to college for years, or even decades, find studying hard. There is a difference between studying for a career and studying for a certification. At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting. You can also choose your major and what you want to spend your time studying. For certification you do not have this luxury: you have to learn and memorize specific material, and there is no option to change your major if you don't like it. Your options are to gain your certification, or fail. Many people will find that last option unacceptable.

Studying on Vacation

We have established that eight hours of uninterrupted study is boring. That is why I am so excited about what my very good friend is doing with Koenig Solutions. His goal is to run classes that are intensive but not in a traditional format, by adding in aspects of a vacation. It is true that you will study and sit with instructors for six or eight hours a day, but in the mornings and evenings you can go out and see the sights in exotic locations. He has chosen the locations for his training courses for their proximity to tourist attractions like the Himalayas, the Taj Mahal, and Goa, India's most popular resort town. Every location has access to great experiences like river rafting, safari tours, or meditation. There are five locations to choose from: Delhi, Dehradun, Shimla (close to the Himalayas), Goa Beach, and Dubai. After a day of classes and hours of sightseeing, you will be more than ready to return to campus, tired and ready to study. This is the kind of study I can do!

My friend's point is that studying and fun can still go hand in hand. How many times have we heard a professor say this? But this time it is true. There is great fun in learning in exotic locations. If you want to travel in India and are interested in also taking the opportunity to learn something, let Koenig Solutions know.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Excel Template Teaser

    - by Tim Dexter
    In lieu of official documentation, I'm in the process of putting together some posts on the new 10.1.3.4.1 Excel templates. No more HTML masquerading as Excel; far more flexibility than Excel Analyzer, and no need to write complex XSL templates to create the same output. Multi-sheet outputs with macros and embeddable XSL commands are here. Their capabilities are pretty extensive, and I have not worked on them for a few years, since I helped put them together for EBS FSG users, so I'm back on the learning curve. Let me say up front: there is no template builder; it is a completely manual process to build them. But the results can be fantastic and provide yet another 'superstar' opportunity for you.

The templates can take hierarchical XML data and walk the structure much like an RTF template. They use named cells/ranges and a hidden sheet to provide the rendering engine the hooks to drop the data in. As a taster, here's the data and output I worked with on my first effort:

<EMPLOYEES>
  <LIST_G_DEPT>
    <G_DEPT>
      <DEPARTMENT_ID>10</DEPARTMENT_ID>
      <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
      <LIST_G_EMP>
        <G_EMP>
          <EMPLOYEE_ID>200</EMPLOYEE_ID>
          <EMP_NAME>Jennifer Whalen</EMP_NAME>
          <EMAIL>JWHALEN</EMAIL>
          <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
          <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
          <SALARY>4400</SALARY>
        </G_EMP>
      </LIST_G_EMP>
      <TOTAL_EMPS>1</TOTAL_EMPS>
      <TOTAL_SALARY>4400</TOTAL_SALARY>
      <AVG_SALARY>4400</AVG_SALARY>
      <MAX_SALARY>4400</MAX_SALARY>
      <MIN_SALARY>4400</MIN_SALARY>
    </G_DEPT>
    ...
  </LIST_G_DEPT>
</EMPLOYEES>

This is structured XML coming from a data template; check out the data template progression post. I can then generate the following binary XLS file. There are a few cool things to notice in this output:

- DEPARTMENT-EMPLOYEE master-detail output, which is not easy to do in the Excel Analyzer.
- Date formatting - this uses an Excel function; remember, BIP generates XML dates in the canonical format. I have formatted the other data in the template using native Excel functionality.
- Salary total - although it is present in the data, I have calculated it in the template.
- Conditional formatting - this is handled by Excel based on the incoming data.
- Bursting department data across sheets and using the department name for the sheet name. This alone is worth the wait!

There's more, but this is surely enough to whet your appetite. These new templates are already tucked away in EBS R12 under controlled release by the GL team, and have now come to the BIEE and standalone releases in the 10.1.3.4.1+ rollup patch. For the rest of you, it's going to be a bit of a waiting game while the relevant teams uptake the latest BIP release. Look out for more soon, with some explanation of how they work and how to put them together!

    Read the article

  • Calling Web Services in classic ASP

    - by cabhilash
      The other day a colleague asked me to provide her with a solution for calling a Web service from classic ASP. (Yes, classic ASP; people are still using it :D) We could also call the web service using the SOAP Toolkit, but invoking the service through the XMLHTTP object is easier and faster. To create the service, I used a normal .NET 2.0 Web Service with a [WebMethod]:

public class WebService1 : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld(string name)
    {
        return name + " Pay my dues :) "; // a reminder to pay my consultation fee :D
    }
}

In Web.config, add the following entry under the system.web section:

<webServices>
  <protocols>
    <add name="HttpGet"/>
    <add name="HttpPost"/>
  </protocols>
</webServices>

Alternatively, you can enable these protocols for all Web services on the computer by editing the <protocols> section in Machine.config. The following example enables HTTP GET, HTTP POST, and also SOAP and HTTP POST from localhost:

<protocols>
  <add name="HttpSoap"/>
  <add name="HttpPost"/>
  <add name="HttpGet"/>
  <add name="HttpPostLocalhost"/>
  <!-- Documentation enables the documentation/test pages -->
  <add name="Documentation"/>
</protocols>

By adding these entries I am enabling HTTPGET and HTTPPOST (after .NET 1.1, HTTPGET and HTTPPOST are disabled by default because of security concerns). The .NET Framework 1.1 defines a new protocol named HttpPostLocalhost, which is enabled by default. It permits invoking Web services that use HTTP POST requests from applications on the same computer, provided the POST URL uses http://localhost, not http://hostname. This permits Web service developers to use the HTML-based test form to invoke the Web service from the same computer where the Web service resides.

Classic ASP code to call the Web service:

<%Option Explicit
Dim objRequest, objXMLDoc, objXmlNode
Dim strRet, strError, strNome
Dim strName
strName = "deepa"
Set objRequest = Server.createobject("MSXML2.XMLHTTP")

With objRequest
    .open "GET", "http://localhost:3106/WebService1.asmx/HelloWorld?name=" & strName, False
    .setRequestHeader "Content-Type", "text/xml"
    .setRequestHeader "SOAPAction", "http://localhost:3106/WebService1.asmx/HelloWorld"
    .send
End With

Set objXMLDoc = Server.createobject("MSXML2.DOMDocument")
objXmlDoc.async = false
Response.ContentType = "text/xml"
Response.Write(objRequest.ResponseText)
%>

On line 6, I create an MSXML XMLHTTP object. On line 9, using the HTTPGET protocol, I open a connection to the Web service; lines 10-11 set the headers for the request. On line 15, I create the XML document object that can hold the web service output in XML DOM form, and on line 18 I read the ResponseText. Notice that on line 9 I am passing the parameter strName to the Web service; you can pass multiple parameters just like any other query string parameters. In a similar fashion, you can invoke the Web service using HTTPPost; you only have to ensure that the request body contains all the required parameters for the web method (a hedged sketch of the POST call follows below). Happy coding!
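For comparison, here is a hedged C# sketch of the same call made over HTTP POST (the class name is mine; the URL is the sample service above). The contract is just a form-encoded name=value body, so a classic ASP version would send exactly the same body through objRequest.send after opening with "POST":

using System;
using System.IO;
using System.Net;
using System.Text;

class HelloWorldClient
{
    public static string CallHelloWorld(string name)
    {
        byte[] body = Encoding.UTF8.GetBytes("name=" + Uri.EscapeDataString(name));

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://localhost:3106/WebService1.asmx/HelloWorld");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = body.Length;

        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The body comes back as XML, e.g. <string>deepa Pay my dues :) </string>
            return reader.ReadToEnd();
        }
    }
}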

    Read the article

  • Silverlight Cream for March 23, 2010 -- #818

    - by Dave Campbell
    In this Issue: Max Paulousky, Jeremy Likness, Mark Tucker, Christian Schormann, Page Brooks, Brad Abrams(-2-), Jeff Wilcox, Unnir, Bea Stollnitz, John Papa and Adam Kinney, and Bill Reiss(-2-).

Shoutouts: Ashish Shetty posted his material from his MIX10 presentation: Stepping outside the browser with Silverlight 4. Not Silverlight, but dang useful: Karl Shifflett posted a Visual Studio 2010 XAML Editor IntelliSense Presenter Extension. Yavor Georgiev posted his MIX10 material: Two samples from today's MIX talk.

From SilverlightCream.com:

GroupBox Sketching Control for WPF applications Using Blend: Max Paulousky creates a GroupBox control for SketchFlow for WPF. He includes a link to an example of doing the same for Silverlight.

Sequential Asynchronous Workflows in Silverlight using Coroutines: Jeremy Likness' latest post began with a post on the Silverlight.net forum and Rob Eisenburg's MVVM presentation from MIX10, resulting in the use of Wintellect's PowerThreading library (downloadable), and Coroutines.

Windows Phone 7 UI Templates: Mark Tucker has been putting a lot of thought into WP7 apps and produced 5 templates for building apps, downloadable in PowerPoint format. He's also looking to discuss this concept.

Blend 4: About Path Layout, Part I: Christian Schormann has a great tutorial up about Expression Blend 4 and path layout... this is lots of great info, and it's only part 1!

Custom Splash Screen for Windows Phone: Page Brooks makes very quick work of showing how to add a splash screen to your WP7 app... very nice, Page!

Silverlight 4 + RIA Services - Ready for Business: Exposing Data from Entity Framework: Brad Abrams' next post in the series is on pulling your data from wherever it lives, and uses a DomainService to shape it for your Silverlight app.

Silverlight 4 + RIA Services - Ready for Business: Consuming Data in the Silverlight Client: Brad Abrams then discusses consuming that data in a Silverlight app. Not much code involvement at all.. great ROI :)

Building Silverlight 3 and Silverlight 4 applications on a .NET 3.5 build machine: Jeff Wilcox talks about building Silverlight 3 and Silverlight 4B both on a .NET 3.5 machine. He then adds in the Toolkit, and even WCF RIA Services.

Expression Blend 4 - XAML generation tweaks: Unnir demonstrates a few changes to Expression Blend 4 that produce more compact XAML. He's also asking for other examples you'd like to see tightened up.

How can I sort a hierarchy?: Bea Stollnitz posts plausible solutions to sorting data items at each level of a hierarchical UI, with descriptions of why they don't work, followed by the real deal... Silverlight and WPF.

Silverlight Training Course (Silverlight 4): John Papa and Adam Kinney have posted a huge body of work to get us up-to-speed on Silverlight 4 -- a WhitePaper, hands-on labs, and an 8-unit course with 25 accompanying videos... geez...

Silverlight game development on Windows Phone 7: Bill Reiss has a post up discussing game development on WP7 in general, and then discusses his SilverSprite library, with a link to it.

XNA or Silverlight for Windows Phone 7 game development?: Bill Reiss next discusses the advantages of using Silverlight or XNA for your WP7 game development, and who better to discuss both?

Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Start/Stop Window Service from ASP.NET page

    - by kaushalparik27
    Last week, I needed to complete a task, which I am going to blog about in this entry. The task: "Create a control-panel-like webpage to control (start/stop) the Windows Services that are part of my solution, installed on the computer where the main application is hosted". Here are the important points to accomplish:

[1] You need to add a System.ServiceProcess reference to your application. This namespace holds the ServiceController class used to access Windows services.

[2] You need to check the status of a Windows service before you explicitly start or stop it.

[3] By default, an IIS application runs under the ASP.NET account, which doesn't have the rights to access Windows services. So a very important part of the solution is impersonation: you need to impersonate the application, or the relevant part of the code, with user credentials that have the proper rights and permissions to access the Windows service. If you try to access a Windows service without them, you will get an "access denied" error.

The alternatives are: You can impersonate the whole application by adding an identity tag in web.config, under the system.web section, as:

  <identity impersonate="true" userName="" password=""/>

The "userName" and "password" are the credentials of a user that has the rights to access Windows services. But this would not be a wise solution; you should not impersonate the whole website just to access a Windows service (which is only a small part of the code).

The second alternative is to impersonate only the part of the code that needs to access the Windows service in order to start or stop it. I opted for this one. To be fair, I was unfamiliar with the impersonation code, so I googled it and added the code to my solution in a separate class file named "Impersonate", with the required static methods. In the Impersonate class, impersonateValidUser() is the method to impersonate a section of code, and undoImpersonation() is the method to undo the impersonation (a hedged sketch appears below). You need to provide the domain name (which is "." if you are working on your home computer), and the username and password of an appropriate user.

[4] It is very important to note that you need to store the access credentials (username and password) that you are going to use for impersonation in a secure, encrypted format. I have used MachineKey encryption to store the encrypted value inside the database.

[5] So now the real part is to start or stop a Windows service, and you are almost done, because the ServiceController class has simple Start() and Stop() methods. ServiceController has a parameterized constructor that takes the name of the service. The code to start and stop the service is sketched below. Isn't that easy? ServiceController makes it easy :) I have attached a working example with this post to start/stop the "SQLBrowser" service; you need to provide credentials for a user that has permission to access the Windows service. Hope it helps!
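Putting points [3] and [5] together, the start/stop code boils down to something like the following. This is a minimal sketch: Impersonate is the custom helper class described above (its impersonateValidUser(user, domain, password) signature is an assumption based on common sample code), and the credentials would come from your encrypted store:

using System.ServiceProcess; // reference System.ServiceProcess.dll

public static class ServiceHelper
{
    // domain is "." for the local machine
    public static void StartService(string serviceName, string domain, string user, string password)
    {
        // Impersonate is the hypothetical helper class described above.
        if (!Impersonate.impersonateValidUser(user, domain, password))
            return; // could not impersonate - bail out

        try
        {
            using (ServiceController controller = new ServiceController(serviceName))
            {
                // Check the status before explicitly starting the service.
                if (controller.Status == ServiceControllerStatus.Stopped)
                {
                    controller.Start();
                    controller.WaitForStatus(ServiceControllerStatus.Running);
                }
            }
        }
        finally
        {
            Impersonate.undoImpersonation();
        }
    }

    public static void StopService(string serviceName, string domain, string user, string password)
    {
        if (!Impersonate.impersonateValidUser(user, domain, password))
            return;

        try
        {
            using (ServiceController controller = new ServiceController(serviceName))
            {
                if (controller.CanStop && controller.Status == ServiceControllerStatus.Running)
                {
                    controller.Stop();
                    controller.WaitForStatus(ServiceControllerStatus.Stopped);
                }
            }
        }
        finally
        {
            Impersonate.undoImpersonation();
        }
    }
}

Called, for example, as ServiceHelper.StartService("SQLBrowser", ".", "someUser", "somePassword").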

    Read the article

  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn't yet cover the SQL Server 2014 features that would require major changes to SQL Compare to support. In this post I'm going to talk about the SQL Server 2014 features we've already begun supporting, and which ones we're working on for the next release of SQL Compare (v11). From SQL Compare's perspective, the new memory-optimized table functionality (some might know it as 'Hekaton') has been the most important change. It can't be described as its own object type; the new functionality is split across two existing object types (three if you count indexes), as it also comes with native stored procedures and inline indexes. Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle: inline specification of indexes. These are essential for memory-optimized tables because it's not possible to alter a memory-optimized table's structure, so indexes can't be added after the fact without dropping the table. Books Online shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx. SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it's necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release, with full support later in the year and a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, and these are being adapted to handle this new kind of table. Because it's one of the largest new database engine features, there's an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported, and the new syntax found in the T-SQL reference pages. We are treating SQL Compare's support of natively compiled stored procedures as a separate unit of work, which will be available in a subsequent beta and also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them; you can still also access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they're essentially a limited-syntax stored procedure with a few extra options in the create statement, embodied in the updated CREATE PROCEDURE documentation and its detailed limitations. They should be easier to handle than memory-optimized tables, simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications, which means we'll need to temporarily disable these during the specific deployment operations that involve them. Don't worry, we'll supply a warning if this is the case so that you can check that your auditing arrangements can handle the situation. There are also a handful of other improvements in SQL Server 2014 that affect SQL Compare and SQL Data Compare and are not connected to memory-optimized tables.
The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them; for more detail, take a look at the new syntax reference. There's also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, and the 90 compatibility level is being retired. We're planning to finish up these small clean-up features last, and to be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what's new in SQL Server 2014, Books Online's What's New section is a good place to start (although almost all the changes in this version are in the Database Engine).

    Read the article

  • SharePoint 2010 Hosting :: SharePoint 2010 Custom Web Template

    - by mbridge
    SharePoint 2010 offers some changes and additions to the SharePoint 2007 approach. Site definitions and publishing providers remain largely the same, but site templates created from the SharePoint UI or SharePoint Designer are now saved to a .WSP file, the same solution deployment packaging file format used for deploying custom SharePoint solutions. Site templates saved to a .WSP solution file can be imported into Visual Studio for additional customization.

Introducing the WebTemplate Feature Element

The WebTemplate element, introduced in SharePoint 2010, allows site templates to be defined and deployed as a Feature as part of a solution package. A WebTemplate element feature can be used to deploy site templates in either a Farm or Sandbox solution, without modification. If deployed as a Farm feature and solution, site templates will appear in the site collection provisioning page in Central Administration and can be used to provision new site collections, or within a site collection to create sub-sites. If deployed as a Site feature and Sandbox solution, site templates will appear within the site collection to support creating a root site or sub-sites.

Creating a new WebTemplate Feature in Visual Studio 2010

In addition to supporting the ability to save site templates created from the SharePoint UI and import them into Visual Studio for customization, Visual Studio can also be used to create new site templates from scratch. In the following sample we will walk through how to create a new WebTemplate solution based on a customized version of the out-of-box Blank Site.

1. Create a new Empty SharePoint Project in Visual Studio 2010.

2. Add a new Empty Element to the project. We like to create folders for each type of element in our solution, so in our sample we have created a Web Templates folder and then added the BLANKENT element. NOTE: The Elements folder MUST share the same name as the WebTemplate Name property.

3. Open the empty Elements.xml and add the <WebTemplate /> element block (a hedged sketch follows at the end of this article).

4. Copy the default.aspx and ONET.XML files from the STS site definition location at 14\TEMPLATE\SiteTemplates\STS. We will customize the ONET.XML in the next section. Open the properties for each file and set the Deployment Type to ElementFile. This ensures the files are deployed with the Element when included in a Feature.

5. By default, a new feature is added to the solution automatically when a new element is added. Rename and edit the feature as appropriate. Select Farm for the scope to deploy the WebTemplate to the entire farm, or Site for a sandboxed solution.

Customize the ONET.XML

At this point, you have a working WebTemplate solution that will deploy a site identical to the out-of-box Blank Site; however, the ONET.XML supporting the STS site definition contains three configurations (essentially three separate site templates) and can be simplified before customizing. In the following sample, we have trimmed the ONET.XML to the essentials for a single site template, and added references to the <SiteFeatures /> and <WebFeatures /> elements to include the SharePoint Standard and Enterprise features. We have left the top-level navigation bar and the default page module intact, but removed all other extraneous markup.
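As an illustration of step 3, a minimal Elements.xml for this sample might look like the following sketch. The Title, Description, and DisplayCategory values are placeholders of my own; Name must match the element folder (BLANKENT), and BaseTemplateName/BaseTemplateID/BaseConfigurationID point at the Blank Site configuration of the STS site definition:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- Name must match the Elements folder name exactly -->
  <WebTemplate
      Name="BLANKENT"
      Title="Blank Enterprise Site"
      Description="A customized Blank Site with the Standard and Enterprise features enabled."
      BaseTemplateName="STS"
      BaseTemplateID="1"
      BaseConfigurationID="1"
      DisplayCategory="Custom" />
</Elements>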

    Read the article

< Previous Page | 443 444 445 446 447 448 449 450 451 452 453 454  | Next Page >