Search Results

Search found 65997 results on 2640 pages for 'custom post type'.


  • Microsoft, jQuery, and Templating

    - by Stephen Walther
    About two months ago, John Resig and I met at Café Algiers in Harvard square to discuss how Microsoft can contribute to the jQuery project. Today, Scott Guthrie announced in his second-day MIX keynote that Microsoft is throwing its weight behind jQuery and making it the primary way to develop client-side Ajax applications using Microsoft technologies.

    What does this announcement mean? It means that Microsoft is shifting its resources to invest in jQuery. Developers on the ASP.NET team are now working full-time to contribute features to the core jQuery library. Furthermore, we are working with other teams at Microsoft to ensure that our technologies work great with jQuery.

    We are contributing to the open-source jQuery project in the exact same way that any other company or individual from the community can contribute to jQuery. We are writing proposals, submitting the proposals to the jQuery forums, and revising the proposals in response to community feedback. The jQuery team can decide to reject or accept any feature that we propose. Any feature that Microsoft contributes to jQuery will be platform neutral. In other words, Microsoft contributions will benefit PHP and RAILS developers just as much as they benefit ASP.NET developers. Microsoft contributions to jQuery will improve the web for everyone.

    Contributing Support for Templates to jQuery Core

    Our first proposal concerns templating. We want to contribute support for templates to jQuery so that JavaScript developers can use jQuery to easily display a set of database records. You can read our templating proposal here: http://wiki.github.com/nje/jquery/jquery-templates-proposal You can download and play with our prototype for templating here: http://github.com/nje/jquery-tmpl

    The following code illustrates how you can use a template to display a set of products in a bulleted list:

        <script type="text/javascript">
            jQuery(function(){
                var products = [
                    { name: "Product 1", price: 12.99},
                    { name: "Product 2", price: 9.99},
                    { name: "Product 3", price: 35.59}
                ];
                $("ul").append("#template", products);
            });
        </script>

        <script id="template" type="text/html">
            <li>{%= name %} - {%= price %}</li>
        </script>

        <ul></ul>

    The template is contained in a SCRIPT element that has a TYPE="text/html" attribute. Browsers ignore the contents of a SCRIPT element when they don't understand the content type. Notice that the placeholder {%=...%} is used within the template to indicate where the name and price of a product should appear. The delimiters {%=...%} are used for expressions and the delimiters {%...%} are used for code. Finally, the products are rendered using the template with the call to $("ul").append("#template", products). The standard jQuery DOM manipulation methods have been modified to support templates. When the page above is rendered, you get the bulleted list displayed in the following figure.

    Our goal is to keep our proposal for templates as simple as possible. After support for templating has been added to jQuery, plug-in authors can take advantage of templating when building complex data-driven plug-ins such as a DataGrid plug-in.

    The Ajax Control Toolkit

    Over 100,000 developers download the Ajax Control Toolkit every month. That's a mind-boggling number of downloads. We realize that the Ajax Control Toolkit is extremely popular among ASP.NET Web Forms developers and we want to continue to invest in the Ajax Control Toolkit.
If you are adding JavaScript interactivity to an ASP.NET Web Forms application, and you don’t want to write JavaScript, then we recommend that you use the server controls in the Ajax Control Toolkit. Using the Ajax Control Toolkit does not require knowledge of JavaScript and the toolkit enables you to build applications with the concepts familiar to ASP.NET Web Forms applications developers. If, however, you are interested in creating client-side interactivity without server controls then we recommend that you use jQuery. We plan to continue to release new versions of the Ajax Control Toolkit every few months. Our goal is to continue to improve the quality of the Ajax Control Toolkit and to make it easier for the community to contribute code, bug fixes, and documentation. The ASP.NET Ajax Library We are moving the ASP.NET Ajax Library into the Ajax Control Toolkit. If you currently use ASP.NET Ajax Library client templates, client data-binding, or the client script loader then you can continue to use these features by downloading the Ajax Control Toolkit. Be aware that our focus with the Ajax Control Toolkit is server-side Ajax.  For client-side Ajax, we are shifting our focus to jQuery. For example, if you have been using ASP.NET Ajax Library client templates then we recommend that you shift to using jQuery instead. Conclusion Our plan is to focus on jQuery as the primary technology for building client-side Ajax applications moving forward. We want to adapt Microsoft technologies to work great with jQuery and we want to contribute features to jQuery that will make the web better for everyone. We are very excited to be working with the jQuery core team.

    Read the article

  • The SQL Server Reporting Services SDK for PHP Debuts

    - by The Official Microsoft IIS Site
    Microsoft has just released the SQL Server Reporting Services SDK for PHP, which enables PHP developers to easily create reports and integrate them in their web applications. The SDK offers a simple Application Programming Interface to interoperate with SQL Server Reporting Services, Microsoft's Reporting and Business Intelligence solution. Developers will be able to use the SDK to perform common operations like listing reports in PHP applications, providing custom report parameters from a PHP...(read more)

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I'm finding myself getting lost in the files in some of my larger Web projects. There's so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything, which rapidly autocompletes to any file, type or member. Awesome except when I forget – or when I'm not quite sure of the name of what I'm looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – "where did I put that thing?" It's those little hesitations that tend to get in the way of workflow frequently.

    To make things worse most NuGet packages for client side frameworks and scripts dump stuff into folders that I generally don't use. I've never been a fan of the 'Content' folder in MVC which is just an empty layer that doesn't serve much of a purpose. It's usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It's ugly and also inefficient as it adds additional bytes to every resource link you embed into a page.

    Alternatives

    I've been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I've found useful and maybe you find some of these useful as well or at least get some ideas about what can be changed to provide better project flow.

    The first thing I've been doing is adding a root Code folder and putting all server code into that. I'm a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that's easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, more with client side code, so I want the server code separated from that.

    The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days of building SPA style applications, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here's an example of what this looks like in a relatively small project:

    The goal is to keep things that are related together, so I don't end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that's open, it's typically the only server code I work with regularly.
    Business logic lives in another project altogether, so other than the controller and maybe ViewModels there's not a lot of code being accessed in the Code folder. So throwing that off the root and isolating it seems like an easy win.

    Nesting Page specific content

    In a lot of my existing applications that are pure server side MVC applications, perhaps with some JavaScript associated with them, I tend to have page level javascript and css files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have .css and .js files with the same name as the view in the same folder. This looks something like this:

    In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It's easy to fix by changing the path from * to *.cshtml or *.vbhtml so that only view retrieval is blocked:

        <system.webServer>
          <handlers>
            <remove name="BlockViewHandler"/>
            <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
          </handlers>
        </system.webServer>

    With this in place, from inside of your Views you can then reference those same resources like this:

        <link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" />

    and

        <script src="~/Views/Admin/QuizPrognosisItems.js"></script>

    which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there's no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a shortcut menu option. Using this you can select each of the 'child' files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view.

    I was even toying with the idea of throwing the controller.cs files into the Views folder, but that's maybe going a little too far :-) It would work however as Visual Studio doesn't publish .cs files and the compiler doesn't care where the files live. There are lots of options and if you think that would make life easier it's another option to help group related things together.

    Are there any downsides to this? Possibly – if you're using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder (a rough sketch of such a bundle setup is shown below). But – again that's a one time configuration step that's easily handled and much less intrusive than constantly having to search for files in your project.

    Client Side Folders

    The particular project shown in the screenshots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There's a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I've shown above really focuses on the server side aspect where there are Razor views with related script and css resources.
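    As an aside on the bundling caveat above: if you use System.Web.Optimization, one way to pick up view-local scripts and stylesheets is IncludeDirectory with a recursive search. This is only a rough sketch of the idea rather than anything from the original post, and the bundle names are placeholders:

        using System.Web.Optimization;

        // Hypothetical BundleConfig entries: gather the page-level .js and .css files
        // that live next to their views (searching subfolders recursively).
        public static class BundleConfig
        {
            public static void RegisterBundles(BundleCollection bundles)
            {
                bundles.Add(new ScriptBundle("~/bundles/viewscripts")
                    .IncludeDirectory("~/Views", "*.js", true));    // true = search subdirectories

                bundles.Add(new StyleBundle("~/bundles/viewstyles")
                    .IncludeDirectory("~/Views", "*.css", true));
            }
        }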
    For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make up the SPA application end up below the App folder. Here's what that looks like for me – this is an AngularJS project:

    In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single page application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – AngularJS controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files.

    This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you'll probably be better off with separate folders inside the js folder. Following Angular conventions you'd have controllers/directives/services etc. folders. Please note that I'm not saying any of these ways are right or wrong – this is just what has worked for me and why!

    Skipping Project Navigation altogether with Resharper

    I've talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything – you can quickly navigate with a shortcut key and autocomplete search. Here's what Resharper's jump to anything looks like:

    Resharper's Goto Anything box lets you type and quickly search over files, classes and members of the entire solution, which is a very fast and powerful way to find what you're looking for in your project, bypassing the solution explorer altogether. As long as you remember to use it (which I sometimes don't) and you know what you're looking for, it's by far the quickest way to find things in a project. It's a shame that this sort of simple search interface isn't part of the native Visual Studio IDE.

    Work how you like to work

    Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don't get in the way of how you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we're dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced, and project organization definitely is something that can get in the way if it doesn't fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As with so many things in ASP.NET, as things evolve and tend to get more complex I've found that I end up fighting some of the conventions. The good news is that you don't have to follow the conventions and you have the freedom to do just about anything that works for you.
    Even though what I've shown here diverges from conventions, I don't think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn't a good reason to break them or the changes don't provide improved workflow then it's not worth it. Break the rules, but only if there's a quantifiable benefit. You may not agree with how I've chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment.

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET MVC

    Read the article

  • Unity: Is there a way to edit a Skin file?

    - by Roberto
    My project has multiple skins and sometimes we have to deal with skins with many custom styles. Editing them in the editor is difficult, for instance, I cannot delete one style that is not the last one without deleting the ones after it. Would there be a way to edit a file that represents this skin? Could I edit a skin file if I use Text in the Asset Serialization Mode (Unity Pro)? If not, is there something in the Unity Store to help me better edit skins?
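    For what it's worth: with Asset Serialization Mode set to Force Text, the .guiskin asset should be stored as plain YAML, so it can be opened and edited in a text editor (hand-editing serialized assets is at your own risk). Another option is a small editor script that prunes a style by name; the following is only a rough sketch, and the menu path and style name are placeholders:

        using System.Linq;
        using UnityEditor;
        using UnityEngine;

        // Hypothetical helper: removes a named style from the GUISkin currently
        // selected in the Project window, then saves the modified asset.
        public static class SkinStyleRemover
        {
            [MenuItem("Tools/Remove Style From Selected Skin")]
            static void RemoveStyle()
            {
                var skin = Selection.activeObject as GUISkin;
                if (skin == null) return;

                // Replace "StyleToDelete" with the name of the style to remove.
                skin.customStyles = skin.customStyles
                    .Where(s => s.name != "StyleToDelete")
                    .ToArray();

                EditorUtility.SetDirty(skin);   // mark the asset as changed
                AssetDatabase.SaveAssets();     // persist the change to disk
            }
        }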

    Read the article

  • SSIS – Delete all files except for the most recent one

    - by jorg
    Quite often one or more sources for a data warehouse consist of flat files. Most of the times these files are delivered as a zip file with a date in the file name, for example FinanceDataExport_20100528.zip

    Currently I work at a project that does a full load into the data warehouse every night. A zip file with some flat files in it is dropped in a directory on a daily basis. Sometimes there are multiple zip files in the directory, this can happen because the ETL failed or somebody puts a new zip file in the directory manually. Because the ETL isn't incremental only the most recent file needs to be loaded. To implement this I used the simple code below; it checks which file is the most recent and deletes all other files.

    Note: In a previous blog post I wrote about unzipping zip files within SSIS, you might also find this useful: SSIS – Unpack a ZIP file with the Script Task

        Public Sub Main()

            'Use this piece of code to loop through a set of files in a directory
            'and delete all files except for the most recent one based on a date in the filename.
            'File name example:
            'DataExport_20100413.zip

            Dim rootDirectory As New DirectoryInfo(Dts.Variables("DirectoryFromSsisVariable").Value.ToString)
            Dim mostRecentFile As String = ""
            Dim currentFileDate As Integer
            Dim mostRecentFileDate As Integer = 0

            'Check which file is the most recent
            For Each fi As FileInfo In rootDirectory.GetFiles("*.zip")
                'Get date from current filename (based on a file that ends with: YYYYMMDD.zip)
                currentFileDate = CInt(Left(Right(fi.Name, 12), 8))
                If currentFileDate > mostRecentFileDate Then
                    mostRecentFileDate = currentFileDate
                    mostRecentFile = fi.Name
                End If
            Next

            'Delete all files except the most recent one
            For Each fi As FileInfo In rootDirectory.GetFiles("*.zip")
                If fi.Name <> mostRecentFile Then
                    File.Delete(rootDirectory.ToString + "\" + fi.Name)
                End If
            Next

            Dts.TaskResult = ScriptResults.Success

        End Sub
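    If your Script Task uses C# instead of VB (or you just prefer LINQ), the same keep-the-newest-and-delete-the-rest logic can be written more compactly. This is just a sketch of the idea from the post, assuming the same YYYYMMDD.zip naming convention:

        using System.IO;
        using System.Linq;

        // Sketch: keep the newest *.zip (based on the trailing YYYYMMDD stamp), delete the rest.
        public static void DeleteAllButMostRecent(string directory)
        {
            var files = new DirectoryInfo(directory)
                .GetFiles("*.zip")
                .OrderByDescending(f => int.Parse(f.Name.Substring(f.Name.Length - 12, 8)))
                .ToList();

            foreach (var file in files.Skip(1))   // files[0] is the most recent one
                file.Delete();
        }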

    Read the article

  • Translate report data export from RUEI into HTML for import into OpenOffice Calc Spreadsheets

    - by [email protected]
    A common question of users is how to import the data from the automated data export of Real User Experience Insight (RUEI) into tools for archiving, dashboarding or combination with other sets of data. XML is well-suited for such a translation via the companion Extensible Stylesheet Language Transformations (XSLT). Basically XSLT utilizes XSL, a template describing what to read from your input XML data file and where to place it in the target document. The target document can be anything you like, i.e. XHTML, CSV, or even an OpenOffice spreadsheet, as long as it is a plain text format.

    XML 2 OpenOffice.org Spreadsheet

    For the XSLT to work as an OpenOffice.org Calc import filter, here is how to add an XML Import Filter to OpenOffice Calc:

    Start OpenOffice.org Calc and select Tools > XML Filter Settings > New... Fill in the details as follows:
        Filter name: RUEI Import filter
        Application: OpenOffice.org Calc (.ods)
        Name of file type: Oracle Real User Experience Insight
        File extension: xml
    Switch to the transformation tab and enter/select the following, leaving the rest untouched (please see the end of this blog post for a download of the referenced file):
        XSLT for import: ruei_report_data_import_filter.xsl
    Select RUEI Import filter from the list and Test XSLT. Click on Browse to select the transform file: export.php.xml. OpenOffice.org Calc will transform and load the XML file you retrieved from RUEI in a human-readable format.
    You can now select File > Open... and change the file type to open your RUEI exports directly in OpenOffice.org Calc, just like any other native spreadsheet format:
        Files of type: Oracle Real User Experience Insight (*.xml)
        File name: export.php.xml

    XML 2 XHTML

    Most XML-powered browsers provide inherent XSL Transformation capabilities; you only have to reference the XSLT stylesheet in the head of your XML file. Then open the file in your favourite web browser, Firefox, Opera, Safari or Internet Explorer alike.

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!-- inserted line below -->
        <?xml-stylesheet type="text/xsl" href="ruei_report_data_export_2_xhtml.xsl"?>
        <!-- inserted line above -->
        <report>

    You can find a patched example export from RUEI plus the above referenced XSL stylesheets here:
        export.php.xml - Example report data export from RUEI
        ruei_report_data_export_2_xhtml.xsl - RUEI to XHTML XSL Transformation Stylesheet
        ruei_report_data_import_filter.xsl - OpenOffice.org XML import filter for RUEI report export data

    If you would like to do things like this on the command line you can use either Xalan or xsltproc. The basic command syntax for xsltproc is very simple:

        xsltproc -o output.file stylesheet.xslt inputfile.xml

    You can use this with the above two stylesheets to translate RUEI report data exports into XHTML and/or OpenOffice.org Calc ODS format. Or you could write your own XSLT to transform into comma separated value lists.

    Please let me know what you think or do with this information in the comments below. Kind regards, Stefan Thieme

    References used:
        OpenOffice XML Filter - Create XSLT filters for import and export - http://user.services.openoffice.org/en/forum/viewtopic.php?f=45&t=3490
        SUN OpenOffice.org XML File Format 1.0 - http://xml.openoffice.org/xml_specification.pdf
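    If you are scripting on Windows without xsltproc available, the same transformation can also be driven from .NET. This is only an illustrative sketch (not part of the original post), reusing the stylesheet and export file names mentioned above:

        using System.Xml.Xsl;

        // Sketch: apply the RUEI-to-XHTML stylesheet to an export file.
        class TransformRueiExport
        {
            static void Main()
            {
                var xslt = new XslCompiledTransform();
                xslt.Load("ruei_report_data_export_2_xhtml.xsl");      // compile the stylesheet
                xslt.Transform("export.php.xml", "export.php.xhtml");  // write the transformed output
            }
        }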

    Read the article

  • Copyrights concerning code snippets and larger amounts of code

    - by JustcallmeDrago
    I am designing a public code repository. Users will be allowed to post and edit whatever amount of code they want, from code snippets to entire multi-file projects. I have a few major legal concerns about this:

    Not getting sued/shut down - I feel the site would be a much easier target than tracking down an individual user to sue. I have looked around a bit and see that links to legal info in the footer of each page are common. What specific things should I do--and what does a site such as YouTube (which I see copyrighted material on all the time) do--for protection?

    Citing sources and editing sourced code - If a user wants to post code that isn't theirs, what concerns/safeguards should I have? Will a link suffice, and what do I need further to allow the code to be edited (to improve it for example)? What can happen if a user posts copyrighted code without citing it?

    Large chunks of code - What legal differences should I look out for as the amount grows?

    Not having a mess of licenses for the site - I would like to have a single license (like RosettaCode) that keeps things simple for interaction on the site. I want the code to be postable and editable. I have looked into StackOverflow's CreativeCommons license a little and it says that "If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one." And on RosettaCode: "All software found on Rosetta Code should be considered potentially hazardous. Use at your own risk. Be aware that all code on Rosetta Code is under the GNU Free Documentation License, as are any edits made by contributors. See Rosetta Code:Copyrights for details." What other licenses are like this?

    Commercializing the site - In what ways can I and can't I make money off of a site that contains code like this? All code will be publicly visible. Initial thoughts are having ads or making money by charging for advanced features.

    Read the article

  • An end to the static &hellip;

    - by Dave Oliver
    Last October I learnt my company wanted to put together a new blog/social networking policy. I decided that out of respect for my employer I wouldn't blog until this was sorted out. This was perhaps an easy decision to make as I was separating from my ex-wife at the time and frankly needed the time to concentrate on other things. So now that the company has a brand new policy and I'm back into the dating game, I thought I would blow off the cobwebs and get back to what I enjoy doing.

    First and foremost, SQL Server 2008 R2 is almost here and to mark that fact I will be in London on Thursday at the Microsoft UK Tech-Days event. The subjects I most want to see are …

    Power Pivot – this is such an exciting technology! I've been a fan of Qlikview for years so it will be good to see how it compares.
    SQL Azure – Cloud Computing is big right now, so it will be interesting to see what the RTM product can do. I have a few ideas for its use and it will be interesting to see if SQL Azure is the right product … more on this in the next few weeks.
    Master Data Services – This is one of those technologies that Microsoft hasn't been making much noise about … and frankly should have, because it is a game changer. Hmmm, cue a future "What is … ?" post.
    StreamInsight – An exciting events technology; again, another "What is … ?" post is around the corner on that.

    So, you thought that SQL Server 2008 R2 was just a release to make sure the years between SQL Server 2008 and SQL Server 2010 weren't so long? I am however disappointed that Clustering across Subnets didn't make it, and I'm not sure if Control Points made it, but all will be revealed later on this week. Till then I will have to wait!

    Technorati Tags: Microsoft, Techdays, SQL Server 2008 R2

    Read the article

  • Internet Protocol Suite: Transmission Control Protocol (TCP) vs. User Datagram Protocol (UDP)

    How do we communicate over the Internet? How is data transferred from one machine to another? These types of activities can currently only be done by using one of two Internet protocols. The Internet Protocol suite includes the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols are used to send data between two network endpoints, however they have very distinct ways of transporting data from one endpoint to another.

    If transmission reliability is the primary concern when trying to transfer data between two network endpoints then TCP is the proper choice. When a device attempts to send data to another endpoint using TCP it creates a direct connection between both devices until the transmission has completed. The direct connection between both devices helps ensure the reliability of the transmission. Because both devices have to continuously poll the connection until the transmission has completed, more resources are needed to perform the transmission. An example of this type of direct communication can be seen when a teacher tells the students to do their homework. The teacher is talking directly to the students in order to communicate that the homework needs to be done. Students can then ask questions about the assignment to ensure that they have received the proper instructions for the assignment.

    UDP is a less resource intensive approach to sending data between two network endpoints. When a device uses UDP to send data across a network, the data is broken up and repackaged with the destination address. The sending device then releases the data packages to the network, but cannot ensure when or if the receiving device will actually get the data. The sending device depends on other devices on the network to forward the data packages to the destination device in order to complete the transmission. As you can tell this type of transmission is less resource intensive because no connection polling is needed, but it should not be used for transmitting data with reliability requirements. This is due to the fact that the sending device cannot ensure that the transmission is received. An example of this type of communication can be seen when a teacher tells a student that they would like to speak with their parents. The teacher is relying on the student to complete the transmission to the parents, and the teacher has no guarantee that the student will actually inform the parents about the request.

    Both TCP and UDP are invaluable when attempting to send data across a network, but depending on the situation one protocol may be better than the other. Before deciding on which protocol to use, an evaluation of transmission speed, reliability, latency, and overhead must be completed in order to determine the best protocol for the situation.
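    To make the contrast concrete, here is a small sketch using the .NET socket wrappers; this is not from the original article, the host names and ports are placeholders, and error handling is omitted:

        using System.Net.Sockets;
        using System.Text;

        class TcpVsUdpDemo
        {
            static void Main()
            {
                byte[] payload = Encoding.UTF8.GetBytes("hello");

                // TCP: a connection is established first, then the bytes are streamed.
                // The constructor blocks until the remote endpoint accepts the connection.
                using (var tcp = new TcpClient("example.com", 9000))         // placeholder host/port
                {
                    tcp.GetStream().Write(payload, 0, payload.Length);
                }

                // UDP: no connection is established; the datagram is handed to the
                // network and may or may not arrive at the receiver.
                using (var udp = new UdpClient())
                {
                    udp.Send(payload, payload.Length, "example.com", 9001);  // placeholder host/port
                }
            }
        }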

    Read the article

  • A dacpac limitation – Deploy dacpac wizard does not understand SqlCmd variables

    - by jamiet
    Since the release of SQL Server 2012 I have become a big fan of using dacpacs for deploying SQL Server databases (for reasons that I will explain some other day) and I chose to use a dacpac to distribute my recently announced utility sp_ssiscatalog (read: Introducing sp_ssiscatalog (v1.0.0.0)). Unfortunately if you read that blog post you may have taken note of the following: Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard however in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. I think it is worth calling out this limitation separately in this blog post because its a limitation that all dacpac users need to be aware of. If you try and deploy the dacpac containing sp_ssiscatalog using the wizard in SSMS then this is what you will see: TITLE: Microsoft SQL Server Management Studio ------------------------------ Could not deploy package. (Microsoft.SqlServer.Dac) ------------------------------ ADDITIONAL INFORMATION: Missing values for the following SqlCmd variables:SSISDB. (Microsoft.Data.Tools.Schema.Sql) ------------------------------ BUTTONS: OK ------------------------------ The message is quite correct. The SSDT DB project that I used to build this dacpac *does* have a SqlCmd variable in it called SSISDB: Quite simply, the Dac Deployment wizard in SSMS is not capable of deploying such dacpacs. Your only option for deploying such dacpacs is to use the command-line tool sqlpackage.exe. Generally I use sqlpackage.exe anyway (which is why it has taken me months to encounter the aforementioned problem) and have found it preferable to using a GUI-based wizard. Your mileage may vary. @Jamiet
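    For reference, a publish command along these lines should work; the dacpac, server, and database names below are placeholders, and the /v: argument supplies the SqlCmd variable value that the SSMS wizard cannot:

        SqlPackage.exe /Action:Publish /SourceFile:"sp_ssiscatalog.dacpac" /TargetServerName:"(local)" /TargetDatabaseName:"DbaTools" /v:SSISDB=SSISDB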

    Read the article

  • Announcing the latest update to Oracle VM Server for x86 2.2 Release

    - by Honglin Su
    More and more customers have discovered that Oracle delivers more value with Oracle VM compared with other server virtualization solutions, and we've seen the momentum that customers succeed with Oracle VM and leading partners support Oracle VM. Recently, Oracle VM server for x86 with Windows PV Drivers passed Microsoft SVVP requirements for Windows servers, which provides customers more confidence to deploy Microsoft Windows guest OS onto Oracle VM server for x86. Today I'm pleased to announce the Oracle VM server for x86 2.2.2 release. See the new features introduced in the 2.2.2 release. Expand the guest OS support to include Oracle Linux 6.x and Oracle Solaris 11 Express OS. For a complete list of guest OS support, please refer to the Oracle VM server x86 release note. The VMPInfo system information and cluster troubleshooting utility is provided with this release. Additional information on VMPInfo is also available in the following My Oracle Support Notes: 1263293.1 Post-installation check list for new Oracle VM Server 1290587.1 Performing Site Reviews and Cluster Troubleshooting with VMPInfo A new storage repository option to provide NFS mount options when creating a storage repository. For more information on this new parameter, see Oracle VM Server User Guide "Adding a Storage Repository". Updated OCFS2 cluster file system, libdhcp, device drivers. See details and additional enhancements at Oracle VM Server User Guide: New Features in Release 2.2.2. The Oracle VM Server for x86 ISO image is available for download at Oracle's E-Delivery site. If you've subscribed to Oracle's Unbreakable Linux Network (ULN), you can simply run up2date command to update the server. Please refer to Oracle VM Upgrade Guide: Upgrading Oracle VM Server. There's no change to Oracle VM Manager, which remains at 2.2.0 with the patch 2.2-16. If you have any questions about Oracle VM Serer for x86, you may post your questions at OTN discussion forum; or purchase support for Oracle Unbreakable Linux and Oracle VM. For Oracle's x86 systems, Oracle VM support as well Oracle Linux and Oracle Solaris support are included in the Oracle Premier Support for Systems. For more information about Oracle's virtualization, visit oracle.com/virtualization.

    Read the article

  • Daily tech links for .net and related technologies - Mar 26-28, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Mar 26-28, 2010 Web Development Creating Rich View Components in ASP.NET MVC - manzurrashid Diagnosing ASP.NET MVC Problems - Brad Wilson Templated Helpers & Custom Model Binders in ASP.NET MVC 2 - gshackles The jQuery Templating Plugin and Why You Should Be Excited! - Chris Love Web Deployment Made Awesome: If You're Using XCopy, You're Doing It Wrong - Scott Hansleman Dynamic User Specific CSS Selection at Run Time - Misfit Geek Sending email...(read more)

    Read the article

  • Fixing NativeHR 0x80070002 Error When Retracting or Deploying SharePoint Apps from Visual Studio

    - by Damon Armstrong
    Sometimes when App deployment fails from Visual Studio you will get the following error when trying to retract or redeploy the app:

        <nativehr>0x80070002</nativehr><nativestack></nativestack>

    There seems to be some issue with the information in the content database. To fix the problem I just deleted all of the information from the App tables in the content database. We only have one app in testing at a time so this worked fine for me. Doing this in a production environment or if you have multiple apps installed is not recommended, so if you are in either of those scenarios you will probably have to dig into those tables to find the offending entries. You may also have to remove some of the App principal entries from the App Service Application database as well (I'll try to update this post if we find that to be true).

    I'm going to post the SQL to do the full deletion, but be careful when running this: DO NOT RUN THIS IN A PRODUCTION ENVIRONMENT:

        DELETE FROM [dbo].[AppDatabaseMetadata]
        DELETE FROM [dbo].[AppInstallationProperty]
        DELETE FROM [dbo].[AppInstallations]
        DELETE FROM [dbo].[AppJobs]
        DELETE FROM [dbo].[AppLifecycleErrors]
        DELETE FROM [dbo].[AppPackages]
        DELETE FROM [dbo].[AppPrincipalPerms]
        DELETE FROM [dbo].[AppPrincipals]
        DELETE FROM [dbo].[AppResources]
        DELETE FROM [dbo].[AppRuntimeIcons]
        DELETE FROM [dbo].[AppRuntimeMetadata]
        DELETE FROM [dbo].[AppRuntimeSubstitutionDictionary]
        DELETE FROM [dbo].[AppSourceInfo]
        DELETE FROM [dbo].[AppSubscriptionCosts]
        DELETE FROM [dbo].[AppTaskDependencies]

    Read the article

  • DevConnections jQuery Session Slides and Samples posted

    - by Rick Strahl
    I’ve posted all of my slides and samples from the DevConnections VS 2010 Launch event last week in Vegas. All three sessions are contained in a single zip file which contains all slide decks and samples in one place: www.west-wind.com/files/conferences/jquery.zip There were 3 separate sessions: Using jQuery with ASP.NET Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors to select document elements, manipulate these elements with jQuery's wrapped set methods in a browser independent way, how to hook up and handle events easily and generally apply concepts of unobtrusive JavaScript principles to client scripting. The session also covers AJAX interaction between jQuery and the .NET server side code using several different approaches including sending HTML and JSON data and how to avoid user interface duplication by using client side templating. This session relies heavily on live examples and walk-throughs. jQuery Extensibility and Integration with ASP.NET Server Controls One of the great strengths of the jQuery Javascript framework is its simple, yet powerful extensibility model that has resulted in an explosion of plug-ins available for jQuery. You need it - chances are there's a plug-in for it! In this session we'll look at a few plug-ins to demonstrate the power of the jQuery plug-in model before diving in and creating our own custom jQuery plug-ins. We'll look at how to create a plug-in from scratch as well as discussing when it makes sense to do so. Once you have a plug-in it can also be useful to integrate it more seamlessly with ASP.NET by creating server controls that coordinate both server side and jQuery client side behavior. I'll demonstrate a host of custom components that utilize a combination of client side jQuery functionality and server side ASP.NET server controls that provide smooth integration in the user interface development process. This topic focuses on component development both for pure client side plug-ins and mixed mode controls. jQuery Tips and Tricks This session was kind of a last minute substitution for an ASP.NET AJAX talk. Nothing too radical here :-), but I focused on things that have been most productive for myself. Look at the slide deck for individual points and some of the specific samples.   It was interesting to see that unlike in previous conferences this time around all the session were fairly packed – interest in jQuery is definitely getting more pronounced especially with microsoft’s recent announcement of focusing on jQuery integration rather than continuing on the path of ASP.NET AJAX – which is a welcome change. Most of the samples also use the West Wind Web & Ajax Toolkit and the support tools contained within it – a snapshot version of the toolkit is included in the samples download. Specicifically a number of the samples use functionality in the ww.jquery.js support file which contains a fairly large set of plug-ins and helper functionality – most of these pieces while contained in the single file are self-contained and can be lifted out of this file (several people asked). Hopefully you'll find something useful in these slides and samples.© Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  jQuery  

    Read the article

  • How do I make good guy attacks only hit bad guys and vice versa?

    - by tieTYT
    My game has many different types of good guys and many different types of bad guys. They will all be firing projectiles at each other but I don't want any accidental collateral damage to occur for either alignment. So bad guys should not be able to hit/damage other bad guys and good guys should not be able to hit/damage other good guys. The way I'm thinking of solving this is by making it so that the Unit instance (this is javascript, btw) has an alignment property that can be either good or bad. And I'll only let a collision happen if the Attack class's boolean didAttackCollideWithTarget(target) returns: attack.source.alignment != target.alignment and collisionDetected(attack.source, target). This is pseudo-code, of course. But I'm asking this question because I get the sense that there might be a much more elegant way to design this besides adding yet another property to my Unit class.

    Read the article

  • Unable to uninstall maas completely

    - by user210844
    I'm not able to uninstall MAAS sudo apt-get purge maas ; sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done Package 'maas' is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up maas-region-controller (1.2+bzr1373+dfsg-0ubuntu1) ... Considering dependency proxy for proxy_http: Module proxy already enabled Module proxy_http already enabled Module expires already enabled Module wsgi already enabled sed: -e expression #1, char 91: unterminated `s' command dpkg: error processing maas-region-controller (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of maas-dns: maas-dns depends on maas-region-controller (= 1.2+bzr1373+dfsg-0ubuntu1); however: Package maas-region-controller is not configured yet. dpkg: error processing maas-dns (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: maas-region-controller maas-dns E: Sub-process /usr/bin/dpkg returned an error code (1) Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up maas-region-controller (1.2+bzr1373+dfsg-0ubuntu1) ... Considering dependency proxy for proxy_http: Module proxy already enabled Module proxy_http already enabled Module expires already enabled Module wsgi already enabled sed: -e expression #1, char 91: unterminated `s' command dpkg: error processing maas-region-controller (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of maas-dns: maas-dns depends on maas-region-controller (= 1.2+bzr1373+dfsg-0ubuntu1); however: Package maas-region-controller is not configured yet. dpkg: error processing maas-dns (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: maas-region-controller maas-dns E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimizes for server performance and help ensure that the Extended Events doesn’t block the server if to drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events?Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine you session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low.You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time check the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.) 
    Partitioning: moving your events to a sub-division

    Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…

    You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

        None: 3 buffers (fixed)
        Node: 3 * number_of_nodes
        CPU:  2.5 * number_of_cpus

    Here are some examples of what this means for different Node/CPU counts:

        Configuration       None        Node         CPU
        2 CPUs, 1 Node      3 buffers   3 buffers    5 buffers
        6 CPUs, 2 Nodes     3 buffers   6 buffers    15 buffers
        40 CPUs, 5 Nodes    3 buffers   15 buffers   100 buffers

    Aside: Buffer size on multi-processor computers
    As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you'll either see events being dropped or you'll get an error when you create your session because you're below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

    The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:

        Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
        Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
        Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It's always worth considering the end to end picture – if you're writing events to a file you can be impacted by I/O, network; all the usual stuff.

    Assuming you've ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it's possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that's where the art comes in. This is largely a matter of experimentation. On the bright side you probably won't need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
    You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the "reasonably expected impact" category.

    Hey buddy – I think you forgot something

    OK, there are two options I didn't cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn't cover.) I'm going to leave causality for another post since it's not really related to session behavior, it's more about event analysis.

    - Mike

    Read the article

  • Demystifying Silverlight Dependency Properties

    - by dwahlin
    I have the opportunity to teach a lot of people about Silverlight (amongst other technologies) and one of the topics that definitely confuses people initially is the concept of dependency properties. I confess that when I first heard about them my initial thought was “Why do we need a specialized type of property?” While you can certainly use standard CLR properties in Silverlight applications, Silverlight relies heavily on dependency properties for just about everything it does behind the scenes. In fact, dependency properties are an essential part of the data binding, template, style and animation functionality available in Silverlight. They simply back standard CLR properties. In this post I wanted to put together a (hopefully) simple explanation of dependency properties and why you should care about them if you’re currently working with Silverlight or looking to move to it.   What are Dependency Properties? XAML provides a great way to define layout controls, user input controls, shapes, colors and data binding expressions in a declarative manner. There’s a lot that goes on behind the scenes in order to make XAML work and an important part of that magic is the use of dependency properties. If you want to bind data to a property, style it, animate it or transform it in XAML then the property involved has to be a dependency property to work properly. If you’ve ever positioned a control in a Canvas using Canvas.Left or placed a control in a specific Grid row using Grid.Row then you’ve used an attached property which is a specialized type of dependency property. Dependency properties play a key role in XAML and the overall Silverlight framework. Any property that you bind, style, template, animate or transform must be a dependency property in Silverlight applications. You can programmatically bind values to controls and work with standard CLR properties, but if you want to use the built-in binding expressions available in XAML (one of my favorite features) or the Binding class available through code then dependency properties are a necessity. Dependency properties aren’t needed in every situation, but if you want to customize your application very much you’ll eventually end up needing them. For example, if you create a custom user control and want to expose a property that consumers can use to change the background color, you have to define it as a dependency property if you want bindings, styles and other features to be available for use. Now that the overall purpose of dependency properties has been discussed let’s take a look at how you can create them. Creating Dependency Properties When .NET first came out you had to write backing fields for each property that you defined as shown next: Brush _ScheduleBackground; public Brush ScheduleBackground { get { return _ScheduleBackground; } set { _ScheduleBackground = value; } } Although .NET 2.0 added auto-implemented properties (for example: public Brush ScheduleBackground { get; set; }) where the compiler would automatically generate the backing field used by get and set blocks, the concept is still the same as shown in the above code; a property acts as a wrapper around a field. Silverlight dependency properties replace the _ScheduleBackground field shown in the previous code and act as the backing store for a standard CLR property. 
The following code shows an example of defining a dependency property named ScheduleBackgroundProperty: public static readonly DependencyProperty ScheduleBackgroundProperty = DependencyProperty.Register("ScheduleBackground", typeof(Brush), typeof(Scheduler), null);   Looking through the code the first thing that may stand out is that the definition for ScheduleBackgroundProperty is marked as static and readonly and that the property appears to be of type DependencyProperty. This is a standard pattern that you’ll use when working with dependency properties. You’ll also notice that the property explicitly adds the word “Property” to the name which is another standard you’ll see followed. In addition to defining the property, the code also makes a call to the static DependencyProperty.Register method and passes the name of the property to register (ScheduleBackground in this case) as a string. The type of the property, the type of the class that owns the property and a null value (more on the null value later) are also passed. In this example a class named Scheduler acts as the owner. The code handles registering the property as a dependency property with the call to Register(), but there’s a little more work that has to be done to allow a value to be assigned to and retrieved from the dependency property. The following code shows the complete code that you’ll typically use when creating a dependency property. You can find code snippets that greatly simplify the process of creating dependency properties out on the web. The MVVM Light download available from http://mvvmlight.codeplex.com comes with built-in dependency properties snippets as well. public static readonly DependencyProperty ScheduleBackgroundProperty = DependencyProperty.Register("ScheduleBackground", typeof(Brush), typeof(Scheduler), null); public Brush ScheduleBackground { get { return (Brush)GetValue(ScheduleBackgroundProperty); } set { SetValue(ScheduleBackgroundProperty, value); } } The standard CLR property code shown above should look familiar since it simply wraps the dependency property. However, you’ll notice that the get and set blocks call GetValue and SetValue methods respectively to perform the appropriate operation on the dependency property. GetValue and SetValue are members of the DependencyObject class which is another key component of the Silverlight framework. Silverlight controls and classes (TextBox, UserControl, CompositeTransform, DataGrid, etc.) ultimately derive from DependencyObject in their inheritance hierarchy so that they can support dependency properties. Dependency properties defined in Silverlight controls and other classes tend to follow the pattern of registering the property by calling Register() and then wrapping the dependency property in a standard CLR property (as shown above). They have a standard property that wraps a registered dependency property and allows a value to be assigned and retrieved. If you need to expose a new property on a custom control that supports data binding expressions in XAML then you’ll follow this same pattern. Dependency properties are extremely useful once you understand why they’re needed and how they’re defined. Detecting Changes and Setting Defaults When working with dependency properties there will be times when you want to assign a default value or detect when a property changes so that you can keep the user interface in-sync with the property value. 
Silverlight’s DependencyProperty.Register() method provides a fourth parameter that accepts a PropertyMetadata object instance. PropertyMetadata can be used to hook a callback method to a dependency property. The callback method is called when the property value changes. PropertyMetadata can also be used to assign a default value to the dependency property. By assigning a value of null for the final parameter passed to Register() you’re telling the property that you don’t care about any changes and don’t have a default value to apply. Here are the different constructor overloads available on the PropertyMetadata class: PropertyMetadata(Object) – used to assign a default value to a dependency property; PropertyMetadata(PropertyChangedCallback) – used to assign a property changed callback method; PropertyMetadata(Object, PropertyChangedCallback) – used to assign a default property value and a property changed callback.   There are many situations where you need to know when a dependency property changes or where you want to apply a default. Performing either task is easily accomplished by creating a new instance of the PropertyMetadata class and passing the appropriate values to its constructor. The following code shows an enhanced version of the initial dependency property code shown earlier that demonstrates these concepts: public Brush ScheduleBackground { get { return (Brush)GetValue(ScheduleBackgroundProperty); } set { SetValue(ScheduleBackgroundProperty, value); } } public static readonly DependencyProperty ScheduleBackgroundProperty = DependencyProperty.Register("ScheduleBackground", typeof(Brush), typeof(Scheduler), new PropertyMetadata(new SolidColorBrush(Colors.LightGray), ScheduleBackgroundChanged)); private static void ScheduleBackgroundChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { var scheduler = d as Scheduler; scheduler.Background = e.NewValue as Brush; } The code wires ScheduleBackgroundProperty to a property change callback method named ScheduleBackgroundChanged. What’s interesting is that this callback method is static (as is the dependency property) so it gets passed the instance of the object that owns the property that has changed (otherwise we wouldn’t be able to get to the object instance). In this example the dependency object is cast to a Scheduler object and its Background property is assigned to the new value of the dependency property. The code also handles assigning a default value of LightGray to the dependency property by creating a new instance of a SolidColorBrush. To Sum Up In this post you’ve seen the role of dependency properties and how they can be defined in code. They play a big role in XAML and the overall Silverlight framework. You can think of dependency properties as being replacements for fields that you’d normally use with standard CLR properties. In addition to a discussion on how dependency properties are created, you also saw how to use the PropertyMetadata class to define default dependency property values and hook a dependency property to a callback method. The most important thing to understand with dependency properties (especially if you’re new to Silverlight) is that they’re needed if you want a property to support data binding, animations, transformations and styles properly. Any time you create a property on a custom control or user control that has these types of requirements you’ll want to pick a dependency property over a standard CLR property with a backing field. 
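Pulling those pieces together, a hypothetical XAML usage of the property might look like this (the local namespace prefix and the HighlightBrush view-model property are illustrative, not from the post): <local:Scheduler ScheduleBackground="{Binding HighlightBrush}" /> Because ScheduleBackground is backed by a dependency property, the binding expression – and any style or animation targeting it – works just as it would for a built-in Silverlight property.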
There’s more that can be covered with dependency properties including a related property called an attached property….more to come.
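In the meantime, here is a minimal sketch of what registering an attached property looks like – the IsHighlighted name and the Scheduler owner type below are purely illustrative, not something from the post:

public static readonly DependencyProperty IsHighlightedProperty =
    DependencyProperty.RegisterAttached("IsHighlighted", typeof(bool), typeof(Scheduler), new PropertyMetadata(false));

// Attached properties expose static Get/Set methods instead of an instance CLR wrapper,
// which is what lets any DependencyObject carry the value (the same mechanism behind Canvas.Left and Grid.Row).
public static bool GetIsHighlighted(DependencyObject obj)
{
    return (bool)obj.GetValue(IsHighlightedProperty);
}

public static void SetIsHighlighted(DependencyObject obj, bool value)
{
    obj.SetValue(IsHighlightedProperty, value);
}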

    Read the article

  • Bulletin Board System with tagging, email notification

    - by user678220
I am looking for a nice BBS (Bulletin Board System), discussion board, or in-company communication platform. About 30 people are joining our project, and we would like to share ideas with each other on such a platform: we can post questions and concerns related to the project and respond to each other. Here is the list of functionality I want: Tagged threads, e.g. Announcement, Finance, Legal, Idea; one thread can have multiple tags. Members can turn email notification on or off for new comments, and can set this per tag – e.g. one member receives email related to "Announcement" but not "Finance". The thread owner can change a thread's tags at any time. Threads can have several types of post: a thread can be a "vote" thread, where everyone can vote their opinion, or an "action plan" thread, where "who" will do "what" is recorded in the thread; by viewing all "action plan" threads, all action plans needed in the company become visible.

    Read the article

  • SQL Server 2008 R2 Installation and the Phantom of SQL Server 2005 Express

    - by Davide Mauri
Today I happily started to install SQL Server 2008 R2 on my development machine, which has this software installed: Windows Server 2008 R2 Standard, SQL Server 2008 SP1 CU5, Visual Studio 2008 SP1, BOL October 2009, AdventureWorks2008 Databases SR4, Visual Studio 2010 RTM. So, all the basic standard stuff. SQL Server 2008 R2 installation went smoothly till somewhere in the middle, where the rule engine checks that software prerequisites are satisfied before starting to copy files. Here I had this @][@@[?!?! error: “The SQL Server 2005 Express Tools are installed. To continue, remove the SQL Server 2005 Express Tools.” Funnily enough, I don’t have and have never had SQL Server 2005 Express on my machine. Armed with patience I analyzed the install log here C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\yyyymmdd_hhmmss\Detail.txt and found that the rule “Sql2005SsmsExpressFacet” is the one in charge of this check; it looks for the existence of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x86) or HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x64). In my registry I found that key existing, due to the installation of the uber-cool Red Gate SQL Search. I removed the registry key and here it is! SQL Server 2008 R2 is installing while I’m writing this post. A note to Microsoft: can you please add more detailed information to the setup when such an error happens? Just saying “you have SQL Server 2005 Express installed” is not enough. Please show us what the rule looks for and why it has failed directly in the Detailed Report, so that we don’t have to spend time looking for the needle in the logs. Thanks! :) PS I did a side-by-side installation with the existing SQL Server 2008 instance.
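If you hit the same rule failure, one quick way to confirm and clean up the offending key from an elevated command prompt is something along these lines (x64 path shown – drop the Wow6432Node part on x86, and only delete the key if you know what put it there): reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM" to check whether it exists, then reg delete "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM" /f to remove it.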

    Read the article

  • Web Platform Installer bundles for Visual Studio 2010 SP1 - and how you can build your own WebPI bundles

    - by Jon Galloway
Visual Studio SP1 is now available via the Web Platform Installer, which means you've got three options: Download the 1.5 GB ISO image Run the 750KB Web Installer (which figures out what you need to download) Install via Web PI Note: I covered some tips for installing VS2010 SP1 last week - including some that apply to all of these, such as removing options you don't use prior to installing the service pack to decrease the installation time and download size. Two Visual Studio 2010 SP1 Web PI packages There are actually two WebPI packages for VS2010 SP1. There's the standard Visual Studio 2010 SP1 package [Web PI link], which includes (quoting ScottGu's post): VS2010 SP1 ASP.NET MVC 3 (runtime + tools support) IIS 7.5 Express SQL Server Compact Edition 4.0 (runtime + tools support) Web Deployment 2.0 The notes on that package sum it up pretty well: Looking for the latest everything? Look no further. This will get you Visual Studio 2010 Service Pack 1 and the RTM releases of ASP.NET MVC 3, IIS 7.5 Express, SQL Server Compact 4.0 with tooling, and Web Deploy 2.0. It's the value meal of Microsoft products. Tell your friends! Note: This bundle includes the Visual Studio 2010 SP1 web installer, which will dynamically determine the appropriate service pack components to download and install. This is typically in the range of 200-500 MB and will take 30-60 minutes to install, depending on your machine configuration. There is also a Visual Studio 2010 SP1 Core package [Web PI link], which includes only the SP without any of the other goodies (MVC3, IIS Express, etc.). If you're doing any web development, I'd highly recommend the main pack since the other installs are small and simple, but if you're working in another space, you might want the core package. Installing via the Web Platform Installer I generally like to go with the Web PI when possible since it simplifies most software installations due to things like: Smart dependency management - installing apps or tools which have software dependencies will automatically figure out which dependencies you don't have and add them to the list (which you can review before install) Simultaneous download and install - if your install includes more than one package, it will automatically pull the dependencies first and begin installing them while downloading the others Lists the latest downloads - no need to search around, as they're all listed based on a live feed Includes open source applications - a lot of popular open source applications are included as well as Microsoft software and tools No worries about reinstallation - WebPI installations detect what you've got installed, so for instance if you've got MVC 3 installed you don't need to worry about the VS2010 SP1 package install messing anything up In addition to the links I included above, you can install the WebPI from http://www.microsoft.com/web/downloads/platform.aspx, and if you have Web PI installed you can just tap the Windows key and type "Web Platform" to bring it up in the Start search list. You'll see Visual Studio SP1 listed in the spotlight list as shown below. That's the standard package, which includes MVC 3 / IIS 7.5 Express / SQL Compact / Web Deploy. If you just want the core install, you can use the search box in the upper right corner, typing in "Visual Studio SP1" as shown. Core Install: Use Web PI or the Visual Studio Web Installer? I think the big advantage of using Web PI to install VS 2010 SP1 is that it includes the other new bits. 
If you're going to install the SP1 core, I don't think there's as much advantage to using Web PI, as the Web PI Core install just downloads the Visual Studio Web Installer anyways. I think Web PI makes it a little easier to find the download, but not a lot. The Visual Studio Web Installer checks dependencies, so there's no big advantage there. If you do happen to hit any problems installing Visual Studio SP1 via Web PI, I'd recommend running the Visual Studio Web Installer, then running the Web PI VS 2010 SP1 package to get all the other goodies. I talked to one person who hit some random snag, recommended that, and it worked out. Custom Web Platform Installer bundles You can create links that will launch the Web Platform Installer with a custom list of tools. You can see an example of this by clicking through on the install button at http://asp.net/downloads (cancelling the installation dialog). You'll see this in the address bar: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=MVC3;ASPNET;NETFramework4;SQLExpress;VWD Notice that the appid querystring parameter includes a semicolon delimited list, and you can make your own custom Web PI links with your own desired app list. I can think of a lot of cases where that would be handy: linking to a recommended software configuration from a software project or product, setting up a recommended / documented / supported install list for a software development team or IT shop, etc. For instance, here's a link that installs just VS2010 SP1 Core and the SQL CE tools: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=VS2010SP1Core;SQLCETools Note: If you've already got all or some of the products installed, the display will reflect that. On my dev box which has the full SP1 package, here's what the above link gives me: Here's another example - on a fresh box I created a link to install MVC 3 and the Web Farm Framework (http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=MVC3;WebFarmFramework) and got the following items added to the cart: But where do I get the App ID's? Aha, that's the trick. You can link to a list of cool packages, but you need to know the App ID's to link to them. To figure that out, I turned on tracing in Web Platform Installer  (also handy if you're ever having trouble with a WebPI install) and from the trace logs saw that the list of packages is pulled from an XML file: DownloadManager Information: 0 : Loading product xml from: https://go.microsoft.com/?linkid=9763242 DownloadManager Verbose: 0 : Connecting to https://go.microsoft.com/?linkid=9763242 with (partial) headers: Referer: wpi://2.1.0.0/Microsoft Windows NT 6.1.7601 Service Pack 1 If-Modified-Since: Wed, 09 Mar 2011 14:15:27 GMT User-Agent:Platform-Installer/3.0.3.0(Microsoft Windows NT 6.1.7601 Service Pack 1) DownloadManager Information: 0 : https://go.microsoft.com/?linkid=9763242 responded with 302 DownloadManager Information: 0 : Response headers: HTTP/1.1 302 Found Cache-Control: private Content-Length: 175 Content-Type: text/html; charset=utf-8 Expires: Wed, 09 Mar 2011 22:52:28 GMT Location: https://www.microsoft.com/web/webpi/3.0/webproductlist.xml Server: Microsoft-IIS/7.5 X-AspNet-Version: 2.0.50727 X-Powered-By: ASP.NET Date: Wed, 09 Mar 2011 22:53:27 GMT Browsing to https://www.microsoft.com/web/webpi/3.0/webproductlist.xml shows the full list. You can search through that in your browser / text editor if you'd like, open it in Excel as an XML table, etc. 
Here's a list of the App ID's as of today: SMO SMO32 PHP52ForIISExpress PHP53ForIISExpress StaticContent DefaultDocument DirectoryBrowse HTTPErrors HTTPRedirection ASPNET NETExtensibility ASP CGI ISAPIExtensions ISAPIFilters ServerSideIncludes HTTPLogging LoggingTools RequestMonitor Tracing CustomLogging ODBCLogging BasicAuthentication WindowsAuthentication DigestAuthentication ClientCertificateMappingAuthentication IISClientCertificateMappingAuthentication URLAuthorization RequestFiltering IPSecurity StaticContentCompression DynamicContentCompression IISManagementConsole IISManagementScriptsAndTools ManagementService MetabaseAndIIS6Compatibility WASProcessModel WASNetFxEnvironment WASConfigurationAPI IIS6WPICompatibility IIS6ScriptingTools IIS6ManagementConsole LegacyFTPServer FTPServer WebDAV LegacyFTPManagementConsole FTPExtensibility AdminPack AdvancedLogging WebFarmFrameworkNonLoc ExternalCacheNonLoc WebFarmFramework WebFarmFrameworkv2 WebFarmFrameworkv2_beta ExternalCache ECacheUpdate ARRv1 ARRv2Beta1 ARRv2Beta2 ARRv2RC ARRv2NonLoc ARRv2 ARRv2Update MVC MVCBeta MVCRC1 MVCRC2 DBManager DbManagerUpdate DynamicIPRestrictions DynamicIPRestrictionsUpdate DynamicIPRestrictionsLegacy DynamicIPRestrictionsBeta2 FTPOOB IISPowershellSnapin RemoteManager SEOToolkit VS2008RTM MySQL SQLDriverPHP52IIS SQLDriverPHP53IIS SQLDriverPHP52IISExpress SQLDriverPHP53IISExpress SQLExpress SQLManagementStudio SQLExpressAdv SQLExpressTools UrlRewrite UrlRewrite2 UrlRewrite2NonLoc UrlRewrite2RC UrlRewrite2Beta UrlRewrite10 UrlScan MVC3Installer MVC3 MVC3LocInstaller MVC3Loc MVC2 VWD VWD2010SP1Pack NETFramework4 WebMatrix WebMatrix_v1Refresh IISExpress IISExpress_v1 IIS7 AspWebPagesVS AspWebPagesVS_1_0 Plan9 Plan9Loc WebMatrix_WHP SQLCE SQLCETools SQLCEVSTools SQLCEVSTools_4_0 SQLCEVSToolsInstaller_4_0 SQLCEVSToolsInstallerNew_4_0 SQLCEVSToolsInstallerRepair_EN_4_0 SQLCEVSToolsInstallerRepair_JA_4_0 SQLCEVSToolsInstallerRepair_FR_4_0 SQLCEVSToolsInstallerRepair_DE_4_0 SQLCEVSToolsInstallerRepair_ES_4_0 SQLCEVSToolsInstallerRepair_IT_4_0 SQLCEVSToolsInstallerRepair_RU_4_0 SQLCEVSToolsInstallerRepair_KO_4_0 SQLCEVSToolsInstallerRepair_ZH_CN_4_0 SQLCEVSToolsInstallerRepair_ZH_TW_4_0 VWD2008 WebDAVOOB WDeploy WDeploy_v2 WDeployNoSMO WDeploy11 WinCache52 WinCache53 NETFramework35 WindowsImagingComponent VC9Redist NETFramework20SP2 WindowsInstaller31 PowerShell PowerShellMsu PowerShell2 WindowsInstaller45 FastCGIUpdate FastCGIBackport FastCGIIIS6 IIS51 IIS60 SQLNativeClient SQLNativeClient2008 SQLNativeClient2005 SQLCLRTypes SQLCLRTypes32 SMO_10_1 MySQLConnector PHP52 PHP53 PHPManager VSVWD2010Feature VWD2010WebFeature_0 VWD2010WebFeature_1 VWD2010WebFeature_2 VS2010SP1Prerequisite RIAServicesToolkitMay2010 Silverlight4Toolkit Silverlight4Tools VSLS SSMAMySQL WebsitePanel VS2010SP1Core VS2010SP1Installer VS2010SP1Pack MissingVWDOrVSVWD2010Feature VB2010Beta2Express VCS2010Beta2Express VC2010Beta2Express RIAServicesToolkitApr2010 VS2010Beta1 VS2010RC VS2010Beta2 VS2010Beta2Express VS2k8RTM VSCPP2k8RTM VSVB2k8RTM VSCS2k8RTM VSVWDFeature LegacyWinCache SQLExpress2005 SSMS2005
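So, to put the list to use, a custom link built from a couple of the IDs above (purely as an illustration) would look like: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=UrlRewrite2;WDeploy_v2 – the same pattern as the earlier examples, just with different semicolon-delimited appid values.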

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32 ; Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; -- ================ -- Singleton lookup -- ================ ; -- Single value equality seek in a unique index -- Scan count = 0 when STATISTICS IO is ON -- Check the XML SHOWPLAN SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32 ; -- =========== -- Range Scans -- =========== ; -- Query 1 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col <= 5 ORDER BY E.key_col ASC ; -- Query 2 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col > 95 ORDER BY E.key_col DESC ; -- Query 3 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15 ORDER BY E.key_col ASC ; -- Query 4 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 20 AND E.key_col < 25 ORDER BY E.key_col DESC ; -- Final query (singleton + range = 2 range scans) SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 10 OR E.key_col BETWEEN 15 AND 20 ORDER BY E.key_col ASC ; -- === TIDY UP === DROP TABLE dbo.Example; © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Silverlight Cream for April 25, 2010 -- #847

    - by Dave Campbell
In this Issue: Michael Washington, David Poll, Andrea Boschin, Kunal Chowdhury(-2-), Lee, and Chad Campbell. Shoutout: Not at all Silverlight, but Kirupa has a great article up about Elastic Collisions From SilverlightCream.com: Easily decouple your MVVM ViewModel from your Model using RX Extensions Michael Washington continues with his Simplified MVVM and uses Rx to allow him to put web service methods in the model and call them from the ViewModel. A “refreshing” Authentication/Authorization experience with Silverlight 4 David Poll expands on his previous post and demonstrates returning the user to the page prior to the login, just the way you'd like it. Using a PollingDuplex service to handle long running activities Andrea Boschin is back discussing PollingDuplex again, this time discussing getting updates from a long-running process. Introduction to Silverlight - Silverlight tutorials Chapter 1 Kunal Chowdhury has an intro to Silverlight Tutorial that looks good... if you're just getting started, here's a place to look. Introduction to Silverlight Application Development - Silverlight tutorial Chapter 2 In his 2nd introductory tutorial, Kunal Chowdhury gets into code and discusses User Controls. Drag and Drop grouping in Datagrid Lee has a post up where he's taken a DataGrid and produced some of the features of the 3rd-party controls, specifically dragging column headers to a place above the grid to sort the grid. Silverlight – HTTP request to [Url] was aborted. …local channel being closed while the request was still in progress Chad Campbell is addressing timeout problems you may hit when connecting to your WCF service. Stay in the 'Light!

    Read the article

  • repair broken packages-"dpkg: error: conflicting actions -f (--field) and -r (--remove)"

    - by yinon
    Ubuntu 12.04 LTS. if more information will be needed, tell me and'll give. the main problem is: tzach@tzach-pc:~$ sudo apt-get install docky [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done docky is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). tzach@tzach-pc:~$ and also: tzach@tzach-pc:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. **The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using ******* so we tryied the guide here in messege #9: http://ubuntuforums.org/showthread.php?t=947124 we run all the first 4 commands and the last one-"sudo apt-get autoremove" gave us: tzach@tzach-pc:~$ sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: **ca-certificates-java** : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless **openjdk-7-jre-lib** : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using -f. so we run the last command twice: sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java and sudo dpkg --remove -force --force-remove-reinstreq openjdk-7-jre-lib but both of them gives: tzach@tzach-pc:~$ sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java [sudo] password for tzach: dpkg: error: conflicting actions -f (--field) and -r (--remove) Type dpkg --help for help about installing and deinstalling packages [*]; Use `dselect' or `aptitude' for user-friendly package management; Type dpkg -Dhelp for a list of dpkg debug flag values; Type dpkg --force-help for a list of forcing options; Type dpkg-deb --help for help about manipulating *.deb files; Options marked [*] produce a lot of output - pipe it through `less' or `more' ! EDIT FOR green7-output of "sudo apt-get -f install": tzach@tzach-pc:~$ sudo apt-get -f install [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java Suggested packages: default-jre equivs sun-java6-fonts ttf-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts The following packages will be REMOVED: ttf-mscorefonts-installer The following NEW packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java 0 upgraded, 5 newly installed, 1 to remove and 355 not upgraded. 
5 not fully installed or removed. Need to get 0 B/29.6 MB of archives. After this operation, 88.5 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: warning: there's no installed package matching ttf-mscorefonts-installer:amd64 Setting up tzdata (2012e-0ubuntu0.12.04) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing tzdata (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: tzdata E: Sub-process /usr/bin/dpkg returned an error code (1) EDIT2 FOR green7: tzach@tzach-pc:~$ sudo apt-get remove --purge tzdata [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless libc6 : Depends: tzdata but it is not going to be installed libc6:i386 : Depends: tzdata:i386 libical0 : Depends: tzdata but it is not going to be installed openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed python-dateutil : Depends: tzdata but it is not going to be installed ubuntu-minimal : Depends: tzdata but it is not going to be installed util-linux : Depends: tzdata (>= 2006c-2) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). EDIT3 FOR green7: tzach@tzach-pc:~$ sudo apt-get install openjdk-7-jre-headless [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: openjdk-7-jre-headless : Depends: tzdata-java but it is not going to be installed Depends: java-common (>= 0.28) but it is not going to be installed Recommends: icedtea-7-jre-cacao (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed Recommends: icedtea-7-jre-jamvm (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). some things in the text also supposed to be bolded. but not critic (: Thanks for the editing! Thanks a lot for your assistance.
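Worth noting: the "conflicting actions -f (--field) and -r (--remove)" message is dpkg complaining about the command itself – the single-dash -force is read as the short option -f (--field) plus extra letters, and -f conflicts with the --remove action. Assuming the forum guide meant the long option only, the intended command would presumably be: sudo dpkg --remove --force-remove-reinstreq ca-certificates-java (and likewise for openjdk-7-jre-lib).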

    Read the article

  • Trabajando el redireccionamiento de usuarios/Working with user redirect methods

    - by Jason Ulloa
Protecting an application is something you cannot leave out when building a system. Every piece of code that protects our application must be carefully chosen and written. One of the common needs we run into in ASP.NET when working with users is being able to redirect them to different elements or pages depending on their role. That is exactly what we will do here: we will work with our application's Web.config and add a few small lines of code to give the system a bit more security and, above all, to achieve the redirection. So let's see how to accomplish it: As we know, the web.config lets us manage many elements within ASP.NET, many of them related to security, and it also gives us the possibility of customizing elements to adapt it to our needs. So, building on the fact that we can customize the web.config, we will create a custom section, which is what we will use to handle the redirection. Our first step is to go to our web.config and look for the following lines: <configuration>     <configSections>  </sectionGroup>             </sectionGroup>         </sectionGroup> Right after them we define a new section  <section name="loginRedirectByRole" type="crabit.LoginRedirectByRoleSection" allowLocation="true" allowDefinition="Everywhere" /> The section name attribute corresponds to the name of our new section, and type corresponds to the name of the class (which we will create shortly) that will be in charge of the redirect. Since we are working inside the configuration section, once our custom section is defined we must close that section  </configSections> So our web.config should look like this: <configuration>     <configSections>  </sectionGroup>             </sectionGroup>         </sectionGroup> <section name="loginRedirectByRole" type="crabit.LoginRedirectByRoleSection" allowLocation="true" allowDefinition="Everywhere" /> </configSections> We defined our section above, but it would be completely useless without the element that brings it to life. In our case that is the loginRedirectByRole element, which we define after the last </configSections> we just closed: <loginRedirectByRole>     <roleRedirects>       <add role="Administrador" url="~/Admin/Default.aspx" />       <add role="User" url="~/User/Default.aspx" />     </roleRedirects>   </loginRedirectByRole> As you can see, inside our loginRedirectByRole element we have the add role element. This element is what will later tell the application where the user goes after a successful login. So, let's look at this configuration a bit: add role="Administrador" corresponds to the name of the role we have defined; there can be as many add role elements as roles defined in our application. The url attribute indicates the path or page the user will be redirected to once logged in to the application. Note that we are using the ~ to indicate that it is a relative path. 
With this we have finished configuring our web.config; now let's dig into the code that reads these elements and uses them. For our example we will create a new class named LoginRedirectByRoleSection; remember that this is the class we referenced in the type attribute defined in the section of our web.config. Once the class is created we will define some properties, but before that we tell our class to inherit from ConfigurationSection, so that it can read the elements from the web.config.  Inherits ConfigurationSection Now our first property   <ConfigurationProperty("roleRedirects")> _         Public Property RoleRedirects() As RoleRedirectCollection             Get                 Return DirectCast(Me("roleRedirects"), RoleRedirectCollection)             End Get             Set(ByVal value As RoleRedirectCollection)                 Me("roleRedirects") = value             End Set         End Property     End Class This property is in charge of retrieving all the roles we defined in the custom section of our web.config. Our second step is to create a second class (in the same file as LoginRedirectByRoleSection); we will call it RoleRedirectCollection, inherit it from ConfigurationElementCollection and define the following Public Class RoleRedirectCollection         Inherits ConfigurationElementCollection         Default Public ReadOnly Property Item(ByVal index As Integer) As RoleRedirect             Get                 Return DirectCast(BaseGet(index), RoleRedirect)             End Get         End Property         Default Public ReadOnly Property Item(ByVal key As Object) As RoleRedirect             Get                 Return DirectCast(BaseGet(key), RoleRedirect)             End Get         End Property         Protected Overrides Function CreateNewElement() As ConfigurationElement             Return New RoleRedirect()         End Function         Protected Overrides Function GetElementKey(ByVal element As ConfigurationElement) As Object             Return DirectCast(element, RoleRedirect).Role         End Function     End Class Next we create another class, this time called RoleRedirect, and in this case we inherit from ConfigurationElement. Our new class should look like this: Public Class RoleRedirect         Inherits ConfigurationElement         <ConfigurationProperty("role", IsRequired:=True)> _         Public Property Role() As String             Get                 Return DirectCast(Me("role"), String)             End Get             Set(ByVal value As String)                 Me("role") = value             End Set         End Property         <ConfigurationProperty("url", IsRequired:=True)> _         Public Property Url() As String             Get                 Return DirectCast(Me("url"), String)             End Get             Set(ByVal value As String)                 Me("url") = value             End Set         End Property     End Class Once our parent class is ready, the only thing left is a bit of code in our system's login page (of course, I assume you are using the login controls that ASP.NET provides by default). 
Here we define our last two methods  Protected Sub ctllogin_LoggedIn(ByVal sender As Object, ByVal e As System.EventArgs) Handles ctllogin.LoggedIn         RedirectLogin(ctllogin.UserName)     End Sub The LoggedIn event is part of the ASP.NET login control and fires the moment the user successfully logs in to our application. This event triggers the following procedure to perform the redirect.     Private Sub RedirectLogin(ByVal username As String)         Dim roleRedirectSection As crabit.LoginRedirectByRoleSection = DirectCast(ConfigurationManager.GetSection("loginRedirectByRole"), crabit.LoginRedirectByRoleSection)         For Each roleRedirect As crabit.RoleRedirect In roleRedirectSection.RoleRedirects             If Roles.IsUserInRole(username, roleRedirect.Role) Then                 Response.Redirect(roleRedirect.Url)             End If         Next     End Sub   With this, our application should be able to redirect without problems and handle the roles.  Also, remember that our example relies on the default membership database schema that ASP.NET provides.

    Read the article
