Search Results

Search found 30815 results on 1233 pages for 'build xml'.

Page 530/1233 | < Previous Page | 526 527 528 529 530 531 532 533 534 535 536 537  | Next Page >

  • Need a Better Cleanup Tool [Humorous Image]

    - by Asian Angel
    Obviously some cleanup tools work better than others…and sometimes common sense cleanup is the best tool of all! Note: Notice the timeline in the image… "Got a sales guys laptop back… Note the times." [via Fail Desk]

    Read the article

  • Web Site Performance and Assembly Versioning

    - by capgpilk
    I originally wanted to write this as one post, but there is quite a large amount of information which can be broken down into different areas, so I am going to publish it in three posts:

    Minification and Concatenation of JavaScript and CSS Files - this post
    Versioning Combined Files Using Subversion - published shortly
    Versioning Combined Files Using Mercurial - published shortly

    Website Performance
    There are many ways to improve web site performance; two areas are reducing the amount of data that is served up from the web server and reducing the number of files that are requested. Here I will outline the process of minifying and concatenating your JavaScript and CSS files automatically at build time of your Visual Studio web site/application. To edit the project file in Visual Studio, you need to first unload it by right-clicking the project in Solution Explorer. I prefer to do this in a third-party tool such as Notepad++ and save it there, forcing VS to reload it each time I make a change, as the whole process in Visual Studio can be a bit tedious. Once you have the project file open, you will notice that it is an MSBuild project file. I am going to use a fantastic utility from Microsoft called Ajax Minifier. This tool minifies both JavaScript and CSS.

    1. Import the tasks for AjaxMin, choosing the location you installed to. I keep all third-party utilities in a Tools directory within my solution structure and source control. This way I know I can get the entire solution from source control without worrying about what other tools I need to get the project to build locally.

        <Import Project="..\Tools\MicrosoftAjaxMinifier\AjaxMin.tasks" />

    2. Now create ItemGroups for all your JS and CSS files like this, separating out your non-minified files and minified files. This can go in the AfterBuild container.

        <Target Name="AfterBuild">

          <!-- Javascript files that need minimizing -->
          <ItemGroup>
            <JSMin Include="Scripts\jqModal.js" />
            <JSMin Include="Scripts\jquery.jcarousel.js" />
            <JSMin Include="Scripts\shadowbox.js" />
          </ItemGroup>
          <!-- CSS files that need minimizing -->
          <ItemGroup>
            <CSSMin Include="Content\Site.css" />
            <CSSMin Include="Content\themes\base\jquery-ui.css" />
            <CSSMin Include="Content\shadowbox.css" />
          </ItemGroup>

          <!-- Javascript files to combine -->
          <ItemGroup>
            <JSCat Include="Scripts\jqModal.min.js" />
            <JSCat Include="Scripts\jquery.jcarousel.min.js" />
            <JSCat Include="Scripts\shadowbox.min.js" />
          </ItemGroup>
          <!-- CSS files to combine -->
          <ItemGroup>
            <CSSCat Include="Content\Site.min.css" />
            <CSSCat Include="Content\themes\base\jquery-ui.min.css" />
            <CSSCat Include="Content\shadowbox.min.css" />
          </ItemGroup>

    3. Call AjaxMin to do the crunching.

          <Message Text="Minimizing JS and CSS Files..." Importance="High" />
          <AjaxMin JsSourceFiles="@(JSMin)" JsSourceExtensionPattern="\.js$"
                   JsTargetExtension=".min.js" JsEvalTreatment="MakeImmediateSafe"
                   CssSourceFiles="@(CSSMin)" CssSourceExtensionPattern="\.css$"
                   CssTargetExtension=".min.css" />

    This will create the *.min.css and *.min.js files in the same directory as the original files.

    4. Now concatenate the minified files into one file for JavaScript and another for CSS. Here we write out the files with a default file name. In later posts I will cover versioning these files the same as your project assembly, again to help performance.

          <Message Text="Concat JS Files..." Importance="High" />
          <ReadLinesFromFile File="%(JSCat.Identity)">
            <Output TaskParameter="Lines" ItemName="JSLinesSite" />
          </ReadLinesFromFile>
          <WriteLinesToFile File="Scripts\site-script.combined.min.js" Lines="@(JSLinesSite)"
                            Overwrite="true" />
          <Message Text="Concat CSS Files..." Importance="High" />
          <ReadLinesFromFile File="%(CSSCat.Identity)">
            <Output TaskParameter="Lines" ItemName="CSSLinesSite" />
          </ReadLinesFromFile>
          <WriteLinesToFile File="Content\site-style.combined.min.css" Lines="@(CSSLinesSite)"
                            Overwrite="true" />

    5. Save the project file; if you have Visual Studio open, it will ask you to reload the project. You can now run a build and these minified and combined files will be created automatically.

    6. Finally, reference these minified, combined files in your web page. In the next two posts I will cover versioning these files to match your assembly.

    Read the article

  • Developing add-ins for multiple versions of Office

    - by Pranav
    Do you want to develop an add-in targeting multiple versions of Office? And you have basic questions like "Is it possible?" and "How do I do it?"? Then you have come to the right place. A few months back, I got a requirement to develop add-ins for Outlook 2003 and Outlook 2007. The functionality for both versions is the same. A doubt struck me: when the functionality is the same, why would I develop two add-ins separately? Why don't I make a single build for both versions of Office? So I started searching for techniques to develop add-ins which work in both (2003 and 2007) and read many articles written by VSTO experts in their blogs, the official VSTO blog, MSDN, forums and what not. Misha says: Theoretically, you can develop an add-in for multiple versions of Microsoft Office by catering to the lowest common denominator. This means if you use an Excel 2003 add-in template in Visual Studio 2008, you would be able to develop and debug this with Excel 2007. However if you try this, you may meet these error messages: "You cannot debug or run this project, because the required version of the Microsoft Office application is not installed.", followed by "Unable to start debugging." You can, however, develop an Office 2003 add-in on a system where Office 2007 is installed. The following procedure demonstrates how to update your Visual Studio debugging options to use Microsoft Outlook 2007 to debug an add-in targeting Microsoft Outlook 2003:

    1. On the Project menu, click on ProjectName Properties.
    2. Click on the Debug tab.
    3. In the Start Action pane, click the Start external program radio button.
    4. Click the file browser button and navigate to %ProgramFiles%\Microsoft Office\Office12.
    5. Choose Outlook.exe and click Open.
    6. Press F5 to debug your add-in.

    For more details, go through this article on Misha Shneerson's blog. There are also some tips and tricks to be followed, and things one needs to take care of while developing add-ins targeting multiple versions of Office, in Andrew's blog. Have a look at that too; you might find it interesting and useful: http://blogs.msdn.com/andreww/archive/2007/06/15/can-you-build-one-add-in-for-multiple-versions-of-office.aspx Here is an MSDN article on Running Solutions in Different Versions of Microsoft Office: http://msdn.microsoft.com/en-us/library/bb772080.aspx Hope this helps!
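
    A hedged illustration of the "lowest common denominator" idea mentioned above: besides the debugger settings, a single add-in build sometimes needs a runtime guard so that newer-host-only features are simply skipped on the older version. The sketch below is not from the article; the class and the two feature methods are hypothetical, and only the Outlook object model's Application.Version property is assumed.

        // Minimal sketch: branch on the Outlook host version at runtime so one
        // add-in build can serve both Outlook 2003 (11.x) and Outlook 2007 (12.x).
        using System;
        using Outlook = Microsoft.Office.Interop.Outlook;

        public class VersionAwareAddIn
        {
            private readonly Outlook.Application application;

            public VersionAwareAddIn(Outlook.Application application)
            {
                this.application = application;
            }

            public void Initialize()
            {
                // Application.Version returns a string such as "11.0.8169.0" or "12.0.6211.1000".
                int majorVersion = int.Parse(application.Version.Split('.')[0]);

                if (majorVersion >= 12)
                {
                    EnableOutlook2007OnlyFeatures();            // hypothetical 2007-only extras
                }
                else
                {
                    EnableLowestCommonDenominatorFeatures();    // stick to the 2003 object model
                }
            }

            private void EnableOutlook2007OnlyFeatures() { /* hypothetical */ }
            private void EnableLowestCommonDenominatorFeatures() { /* hypothetical */ }
        }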

    Read the article

  • Inserting data to database from Android

    - by Angel
    I have to build an application where the requirement is that my clients will send data from their Android device and I have to save that data to a database. I have done the part of coding that inserts data from Android emulator to my XAMPP database on localhost, now I have to implement the real thing. How can I connect the devices where my application will be installed to the XAMPP database I have created so that the data they send can be inserted into it?

    Read the article

  • Using ISO Image with a Local Repository for updating Exadata Compute nodes

    - by Rene Kundersma
    For systems that cannot connect directly to Oracle ULN to build a local repository, an ISO image file is made available by Oracle. This ISO image can be mounted and used as a local repository. The ISO image contains a file system that holds only the latest (x86_64) ULN channel and cannot be used to update the database servers to any release other than 11.2.3.1.1. The ISO and instructions can be found here. Rene Kundersma

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

    Read the article

  • Cumulative Update packages for SQL Server 2008 are available now: CU7 for SQL2008 SP2 and CU2 for SQL2008 SP3

    - by ssqa.net
    Another instalment of the Cumulative Update package for SQL Server 2008 SP3 is available now: CU2, with build number 10.00.5768.00. As usual, this CU2 for SQL2008 SP3 contains hotfixes for issues that were fixed after the release of SQL Server 2008 Service Pack 3 (SP3). KB article 2633143 lists the article numbers with more information on the fixes, for example: VSTS bug number 794387, KB article number 2522893 (http://support.microsoft.com/kb/2522893/), FIX: A backup operation...(read more)

    Read the article

  • How should I pitch moving to an agile/iterative development cycle with mandated 3-week deployments?

    - by Wayne M
    I'm part of a small team of four, and I'm the unofficial team lead (I'm lead in all but title, basically). We've largely been a "cowboy" environment, with no architecture or structure and everyone doing their own thing. Previously, our production deployments would be every few months without being on a set schedule, as things were added/removed to the task list of each developer. Recently, our CIO (semi-technical but not really a programmer) decided we will do deployments every three weeks; because of this I instantly thought that adopting an iterative development process (not necessarily full-blown Agile/XP, which would be a huge thing to convince everyone else to do) would go a long way towards helping manage expectations properly so there isn't this far-fetched idea that any new feature will be done in three weeks. IMO the biggest hurdle is that we don't have ANY kind of development approach in place right now (among other things like no CI or automated tests whatsoever). We don't even use Waterfall, we use "Tell Developer X to do a task, expect him to do everything and get it done". Are there any pointers that would help me start to ease us towards an iterative approach and A) Get the other developers on board with it and B) Get management to understand how iterative works? So far my idea involves trying to set up a CI server and get our build process automated (it takes about 10-20 minutes right now to simply build the application to put it on our development server), since pushing tests and/or TDD will be met with a LOT of resistance at this point, and constantly force us to break larger projects into smaller chunks that could be done iteratively in a three-week cycle; my only concern is that, unless I'm misunderstanding, an agile/iterative process may or may not release the software (depending on the project scope you might have "working" software after three weeks, but there isn't enough of it that works to let users make use of it), while I think the expectation here from management is that there will always be something "ready to go" in three weeks, and that disconnect could cause problems. On that note, is there any literature or references that explains the agile/iterative approach from a business standpoint? Everything I've seen only focuses on the developers, how to do it, but nothing seems to describe it from the perspective of actually getting the buy-in from the businesspeople.

    Read the article

  • Clone an Azure VM using Powershell

    - by jamiet
    In a few months time I will, in association with Technitrain, be running a training course entitled Introduction to SQL Server Data Tools. I am currently working on putting together some hands-on lab material for the course delegates and have decided that, in order to save time in asking people to install software during the course, I am simply going to prepare a virtual machine (VM) containing all the software and lab material for each delegate to use. Given that I am an MSDN subscriber it makes sense to use Windows Azure to host those VMs, as it will be close to, if not completely, free to do so. What I don't want to do however is separately build a VM for each delegate; I would much rather build one VM and clone it for each delegate. I've spent a bit of time figuring out how to do this using PowerShell and in this blog post I am sharing a script that will:

    1. Prompt for some information (Azure credentials, Azure subscription name, VM name, username & password, etc…)
    2. Create a VM on Azure using that information
    3. Prompt you to sysprep the VM and image it (this part can't be done with PowerShell so has to be done manually; a link to instructions is provided in the script output)
    4. Create three new VMs based on the image
    5. Remove those three VMs

    Simply download the script and execute it within PowerShell. Assuming you have an Azure account, it should take about 20 minutes to execute (spinning up VMs and shutting them down isn't instantaneous). If you experience any issues please do let me know. There are additional notes below. Hope this is useful! @Jamiet

    Notes:
    Obviously there isn't a lot of point in creating some new VMs and then instantly deleting them. However, this demo script does provide everything you need should you want to do any of these operations in isolation.
    The names of the three VMs that get created will be suffixed with 001, 002, 003, but you can edit the script to call them whatever you like.
    The script doesn't totally clean up after itself. If you specify a service name & storage account name that don't already exist then it will create them; however, it won't remove them when everything is complete. The created image file will also not be deleted. Removing these items can be done by visiting http://manage.windowsazure.com.
    When creating the image, ensure you use the correct name (the script output tells you what name to use).
    Here are some screenshots taken from running the script. When the third and final VM gets removed you are asked to confirm via this dialog: select 'Yes'.

    Read the article

  • How closely can a game resemble another game without legal problems

    - by Fuu
    The majority of games build on successes of other games and many are downright clones. So where is the limit of emulating before legal issues come into play? Is it down to literary or graphic work like characters and storyline that cause legal problems, or can someone actually claim to own gameplay mechanics? There are so many similar clone games out there that the rules are probably very slack or nonexistent, but I'd like to hear the views of more experienced developers / designers.

    Read the article

  • Video documentary on the open source culture ?

    - by explorest
    Hello, I'm looking for some videos on these subjects: A movie/documentary detailing the origin, history, and current state of open source culture. A movie/documentary on how open source software actually gets developed: what the technical workflows are, how people create projects, recruit contributors, build a community, assign roles, track issues, assimilate newcomers, etc. Could someone suggest a title?

    Read the article

  • Google I/O 2010 - Integrate apps w/ Google Apps Marketplace

    Google I/O 2010 - Integrating your app with the Google Apps Marketplace: Navigation, SSO, Data APIs and manifests. Enterprise 201. Ryan Boyd, Steve Bazyl. In this fast-paced, demo-focused session, you'll learn how to build, integrate, and sell a web app on the Google Apps Marketplace. We'll go end-to-end in 40 minutes with time left for Q&A. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • How to use ULS in SharePoint 2010 for Custom Code Exception Logging?

    - by venkatx5
    What is ULS in SharePoint 2010? ULS stands for Unified Logging Service, which captures and writes exceptions/logs to a log file (a plain text file with a .log extension). SharePoint logs each and every exception with ULS. SharePoint administrators should know ULS; it is very useful when anything goes wrong. But ask a SharePoint 2007 administrator to check the log file and most of them will kill you, because reading and understanding the log file is not easy. Imagine opening a plain text file of 20 MB in Notepad and going through it line by line. So Microsoft developed a tool, "ULS Viewer", to view those log files in an easily readable format. The tool also helps to filter events based on exception priority. You can read this blog to learn about ULS Viewer in detail.

    Where to get ULS Viewer? ULS Viewer is developed by Microsoft and is available to download for free. URL: http://code.msdn.microsoft.com/ULSViewer/Release/ProjectReleases.aspx?ReleaseId=3308 Note: even though this tool was developed by Microsoft, it is not supported by Microsoft. That means you can't get support for this tool from Microsoft, and you use it at your own risk. By the way, what's the risk in viewing log files?!

    How to use ULS in SharePoint 2010 custom code? ULS can be extended for use in user solutions to log exceptions. In detail, a developer can use ULS to log his own application errors and exceptions to the SharePoint log files, so now it is all in a single place (that's why it's called "Unified Logging"). In this article I am going to use Waldek's code (reference link); his article covers the core, and I am writing a container for it (basically, how to implement the code in detail). Let's see the steps.

    1. Open Visual Studio 2010 -> File -> New Project -> Visual C# -> Windows -> Class Library -> Name: ULSLogger (make sure you have selected .NET Framework 3.5).
    2. In the Solution Explorer panel, rename Class1.cs to LoggingService.cs.
    3. Right-click on References -> Add Reference -> under the .NET tab select "Microsoft.SharePoint".
    4. Right-click on the project -> Properties. Select the "Signing" tab -> check "Sign the assembly". In the drop-down below, select <New> and enter "ULSLogger"; uncheck the "Protect my key with a Password" option.
    5. Now copy the code below (or just refer to it).

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Administration;
        using System.Runtime.InteropServices;

        namespace ULSLogger
        {
            public class LoggingService : SPDiagnosticsServiceBase
            {
                public static string vsDiagnosticAreaName = "Venkats SharePoint Logging Service";
                public static string CategoryName = "vsProject";
                public static uint uintEventID = 700; // Event ID

                private static LoggingService _Current;
                public static LoggingService Current
                {
                    get
                    {
                        if (_Current == null)
                        {
                            _Current = new LoggingService();
                        }
                        return _Current;
                    }
                }

                private LoggingService()
                    : base("Venkats SharePoint Logging Service", SPFarm.Local)
                {
                }

                protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
                {
                    List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>
                    {
                        new SPDiagnosticsArea(vsDiagnosticAreaName, new List<SPDiagnosticsCategory>
                        {
                            new SPDiagnosticsCategory(CategoryName, TraceSeverity.Medium, EventSeverity.Error)
                        })
                    };
                    return areas;
                }

                public static string LogErrorInULS(string errorMessage)
                {
                    string strExecutionResult = "Message Not Logged in ULS. ";
                    try
                    {
                        SPDiagnosticsCategory category = LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];
                        LoggingService.Current.WriteTrace(uintEventID, category, TraceSeverity.Unexpected, errorMessage);
                        strExecutionResult = "Message Logged";
                    }
                    catch (Exception ex)
                    {
                        strExecutionResult += ex.Message;
                    }
                    return strExecutionResult;
                }

                public static string LogErrorInULS(string errorMessage, TraceSeverity tsSeverity)
                {
                    string strExecutionResult = "Message Not Logged in ULS. ";
                    try
                    {
                        SPDiagnosticsCategory category = LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];
                        LoggingService.Current.WriteTrace(uintEventID, category, tsSeverity, errorMessage);
                        strExecutionResult = "Message Logged";
                    }
                    catch (Exception ex)
                    {
                        strExecutionResult += ex.Message;
                    }
                    return strExecutionResult;
                }
            }
        }

    Just build the solution and it is ready to use. This ULS logger can be used in SharePoint web parts or in a console application. Let's see how to use it in a console application. SharePoint Server 2010 must be installed on the same server, or the application must be hosted in a SharePoint Server 2010 environment, and the console application must be set to the "x64" platform target.

    1. Create a new console application (Visual Studio -> File -> New Project -> C# -> Windows -> Console Application).
    2. Right-click on References -> Add Reference -> under the .NET tab select "Microsoft.SharePoint".
    3. Open Program.cs and add "using Microsoft.SharePoint.Administration;".
    4. Right-click on References -> Add Reference -> under the "Browse" tab select the "ULSLogger.dll" we created first (path: ULSLogger\ULSLogger\bin\Debug\).
    5. Right-click on the project -> Properties -> select the "Build" tab -> under the "Platform Target" option select "x64".
    6. Open Program.cs and paste the code below.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint.Administration;
        using ULSLogger;

        namespace ULSLoggerClient
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("ULS Logging Started.");
                    string strResult = LoggingService.LogErrorInULS("My Application is Working Fine.");
                    Console.WriteLine("ULS Logging Info. Result : " + strResult);
                    strResult = LoggingService.LogErrorInULS("My Application got an Exception.", TraceSeverity.High);
                    Console.WriteLine("ULS Logging Warning Result : " + strResult);
                    Console.WriteLine("ULS Logging Completed.");
                    Console.ReadLine();
                }
            }
        }

    Just build the solution and execute it; it will log the messages to the ULS log file. Make sure you are running under a farm administrator user ID. You can play with the message and TraceSeverity as required. Now open ULS Viewer -> File -> Open From -> ULS -> select the first option to open the default ULS log. This is ULS real time and will show all log entries in a readable table format. Right-click on a row and select "Filter By This Item". Select "Event ID" and enter the value "700" that we used in the application. Click OK and you will now see the exceptions/logs written by our application. If you want to see high-priority messages only, click the icons other than the red cross icon on the toolbar; the tooltip will tell you what each icon is used for.

    Read the article

  • Join the Authors of SSIS Design Patterns at the PASS Summit 2012!

    - by andyleonard
    My fellow authors and I will be presenting a day-long pre-conference session titled SSIS Design Patterns at the PASS Summit 2012 in Seattle on Monday 5 Nov 2012! Register to learn patterns for: package execution, package logging, loading flat file sources, loading XML sources, loading the cloud, dynamic package generation, SSIS frameworks, data warehouse ETL, and data flow performance. Presenting this session: Matt Masson, Tim Mitchell, Jessica Moss, Michelle Ufford, Andy Leonard. I hope to see you in Seattle!...(read more)

    Read the article

  • SCOM, 90 Days In, II. Noise.

    - by merrillaldrich
    Once you get past the basic architecture of a SCOM implementation, and build the servers, and so on, the first real problem is … well, noise. Suddenly (depending on how you deploy) the system will reach out, like marching army ants or some very clever cybernetic spider, and find, and then proceed to yell at you about, every single problem on every server you didn’t know you had. That, of course, is the point. Still, a tool like this is not useful if it doesn’t surface the real problems from the...(read more)

    Read the article

  • How do programmer-seeking employers see Bioinformatics degree?

    - by Max
    I love programming, but I also love biology. Basically, Bioinformatics sounds fun to me. However, there is a good chance that I won't get a Bioinformatics job and will be forced to build my career around regular programming. Therefore, a question: does it matter (much) to an employer who is looking for a regular programmer but finds a Bioinformatics diploma? Or is it the same in the long run as a regular Informatics diploma?

    Read the article

  • Questions re: Eclipse Jobs API

    - by BenCole
    Similar to http://stackoverflow.com/questions/8738160/eclipse-jobs-api-for-a-stand-alone-swing-project This question mentions the Jobs API from the Eclipse IDE: ...The disadvantage of the pre-3.0 approach was that the user had to wait until an operation completed before the UI became responsive again. The UI still provided the user the ability to cancel the currently running operation but no other work could be done until the operation completed. Some operations were performed in the background (resource decoration and JDT file indexing are two such examples) but these operations were restricted in the sense that they could not modify the workspace. If a background operation did try to modify the workspace, the UI thread would be blocked if the user explicitly performed an operation that modified the workspace and, even worse, the user would not be able to cancel the operation. A further complication with concurrency was that the interaction between the independent locking mechanisms of different plug-ins often resulted in deadlock situations. Because of the independent nature of the locks, there was no way for Eclipse to recover from the deadlock, which forced users to kill the application... ...The functionality provided by the workspace locking mechanism can be broken down into the following three aspects: Resource locking to ensure multiple operations did not concurrently modify the same resource Resource change batching to ensure UI stability during an operation Identification of an appropriate time to perform incremental building With the introduction of the Jobs API, these areas have been divided into separate mechanisms and a few additional facilities have been added. The following list summarizes the facilities added. Job class: support for performing operations or other work in the background. ISchedulingRule interface: support for determining which jobs can run concurrently. WorkspaceJob and two IWorkspace#run() methods: support for batching of delta change notifications. Background auto-build: running of incremental build at a time when no other running operations are affecting resources. ILock interface: support for deadlock detection and recovery. Job properties for configuring user feedback for jobs run in the background. The rest of this article provides examples of how to use the above-mentioned facilities... In regards to above API, is this an implementation of a particular design pattern? Which one?

    Read the article

  • Why do local HTML files take longer to load in IE compared to Firefox?

    - by jaslr
    I am creating a local HTML file that links to 2 external CSS files and 3 external JS files. When I refresh this in Internet Explorer 9, the page takes over a minute to load, compared to loading instantly in Firefox (latest stable build). When I remove the external CSS and JS references, IE9 loads the page instantly. Can anyone explain why IE9 takes so long to load local HTML files with references to external CSS and JS files?

    Read the article

  • Creative, busy Devoxx week

    - by JavaCecilia
    I got back from my first visit to the developer conference Devoxx in Antwerp. I can't describe the vibes of the conference; it was a developer amusement park: hackergartens, fact sessions, comic relief provided by the Java Posse, James Bond and endless hallway discussions. All in all, I had a lot of fun. My main mission was to talk about Oracle's main focus for OpenJDK, which besides development and bug fixing is making sure the infrastructure is working out for the full community. My focus was not to hang out at the night club the Noxx, but that came included in the package :)

    The London Java Community leaders Ben Evans and Martijn Verburg are leading discussions in the community to lay out the necessary requirements for the infrastructure for build and test in the open. They called a first meeting at JavaOne, gathering 25 people, including people from RedHat, IBM and Oracle. The second meeting at Devoxx included 14 participants and had representatives from Oracle and IBM. I hope we really can find a way to collaborate on this, making sure we deliver an efficient infrastructure that all engineers can use to contribute to OpenJDK.

    My home in all of this was the BOF rooms and the sessions there, meeting the JUG leaders, talking about OpenJDK infrastructure and celebrating the Duchess Duke Award together with the others. The restaurants in the area were slower than I've ever seen, so I missed out on Trisha Gee's brilliant replay of the workshop "The Problem with Women in IT - an Agile Approach", where she masterfully led the audience (a packed room, 50-50 gender distribution) in solving the problem of including more diversity in the developer community. A tough and sometimes sensitive topic where she manages to keep the discussion objective with a focus on improving the matter from a business perspective.

    Mattias Karlsson, who is organizing the Java developer conference Jfokus in Stockholm, was there talking to Andres Almires about planning a Hackergarten with a possible inclusion of an OpenJDK bugathon. That would be really cool, especially as the Oracle Stockholm Java development office is just across the water from the Jfokus venue; some of the local JVM engineers will likely attend and assist, even though the bug-smashing theme will more likely be starter-level build warnings in Swing or langtools than fixing JVM bugs.

    I was really happy that I managed to catch a seat for the Java Posse live podcast "the Third Presidential Debate": a lot of nerd humor, a lot of beer, a lot of fun :) The new member Chet had a perfect deadpan delivery and now I just have to listen to more of the podcasts! Can't get the most perfect joke out of my head, talking about beer: "As my father always said: Better a bottle in front of me than a frontal lobotomy" - hilarious :)

    I attended the sessions delivered by my Stockholm office colleagues Marcus Lagergren (on dynamic languages on the JVM, JavaScript in particular) and Joel Borggrén-Franck (annotations) and was happy to see the packed rooms and all the questions raised at the end. There's loads of stuff to write about the event, but I just have to pace myself for now. It was a fantastic event; captain Stephan Janssen and crew should be really proud to provide this forum to the developer community!

    Read the article

  • SQL Server 2008 R2 Service Pack 2 CTP is available

    - by AaronBertrand
    You can download the Service Pack 2 CTP from the following URL: http://www.microsoft.com/en-us/download/details.aspx?id=29848 The build # is 10.50.3720. This service pack contains all of the fixes from Service Pack 1 & Cumulative Updates 1 through 5, and a couple of other minor fixes (a couple of SSRS bugs and a bug about an ALTER TABLE batch not being cached correctly). It does not include fixes from Service Pack 1 Cumulative Update #6, which I mentioned recently . You should *NOT* install this...(read more)

    Read the article

  • MVC Portable Area Modules *Without* MasterPages

    - by Steve Michelotti
    Portable Areas from MvcContrib provide a great way to build modular and composite applications on top of MVC. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. I’ve blogged about Portable Areas in the past including this post here which talks about embedding resources and you can read more of an intro to Portable Areas here. As great as Portable Areas are, the question that seems to come up the most is: what about MasterPages? MasterPages seems to be the one thing that doesn’t work elegantly with portable areas because you specify the MasterPage in the @Page directive and it won’t use the same mechanism of the view engine so you can’t just embed them as resources. This means that you end up referencing a MasterPage that exists in the host application but not in your portable area. If you name the ContentPlaceHolderId’s correctly, it will work – but it all seems a little fragile. Ultimately, what I want is to be able to build a portable area as a module which has no knowledge of the host application. I want to be able to invoke the module by a full route on the user’s browser and it gets invoked and “automatically appears” inside the application’s visual chrome just like a MasterPage. So how could we accomplish this with portable areas? With this question in mind, I looked around at what other people are doing to address similar problems. Specifically, I immediately looked at how the Orchard team is handling this and I found it very compelling. Basically Orchard has its own custom layout/theme framework (utilizing a custom view engine) that allows you to build your module without any regard to the host. You simply decorate your controller with the [Themed] attribute and it will render with the outer chrome around it: 1: [Themed] 2: public class HomeController : Controller Here is the slide from the Orchard talk at this year MIX conference which shows how it conceptually works:   It’s pretty cool stuff.  So I figure, it must not be too difficult to incorporate this into the portable areas view engine as an optional piece of functionality. In fact, I’ll even simplify it a little – rather than have 1) Document.aspx, 2) Layout.ascx, and 3) <view>.ascx (as shown in the picture above); I’ll just have the outer page be “Chrome.aspx” and then the specific view in question. The Chrome.aspx not only takes the place of the MasterPage, but now since we’re no longer constrained by the MasterPage infrastructure, we have the choice of the Chrome.aspx living in the host or inside the portable areas as another embedded resource! Disclaimer: credit where credit is due – much of the code from this post is me re-purposing the Orchard code to suit my needs. To avoid confusion with Orchard, I’m going to refer to my implementation (which will be based on theirs) as a Chrome rather than a Theme. 
The first step I’ll take is to create a ChromedAttribute which adds a flag to the current HttpContext to indicate that the controller designated Chromed like this: 1: [Chromed] 2: public class HomeController : Controller The attribute itself is an MVC ActionFilter attribute: 1: public class ChromedAttribute : ActionFilterAttribute 2: { 3: public override void OnActionExecuting(ActionExecutingContext filterContext) 4: { 5: var chromedAttribute = GetChromedAttribute(filterContext.ActionDescriptor); 6: if (chromedAttribute != null) 7: { 8: filterContext.HttpContext.Items[typeof(ChromedAttribute)] = null; 9: } 10: } 11:   12: public static bool IsApplied(RequestContext context) 13: { 14: return context.HttpContext.Items.Contains(typeof(ChromedAttribute)); 15: } 16:   17: private static ChromedAttribute GetChromedAttribute(ActionDescriptor descriptor) 18: { 19: return descriptor.GetCustomAttributes(typeof(ChromedAttribute), true) 20: .Concat(descriptor.ControllerDescriptor.GetCustomAttributes(typeof(ChromedAttribute), true)) 21: .OfType<ChromedAttribute>() 22: .FirstOrDefault(); 23: } 24: } With that in place, we only have to override the FindView() method of the custom view engine with these 6 lines of code: 1: public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache) 2: { 3: if (ChromedAttribute.IsApplied(controllerContext.RequestContext)) 4: { 5: var bodyView = ViewEngines.Engines.FindPartialView(controllerContext, viewName); 6: var documentView = ViewEngines.Engines.FindPartialView(controllerContext, "Chrome"); 7: var chromeView = new ChromeView(bodyView, documentView); 8: return new ViewEngineResult(chromeView, this); 9: } 10:   11: // Just execute normally without applying Chromed View Engine 12: return base.FindView(controllerContext, viewName, masterName, useCache); 13: } If the view engine finds the [Chromed] attribute, it will invoke it’s own process – otherwise, it’ll just defer to the normal web forms view engine (with masterpages). The ChromeView’s primary job is to independently set the BodyContent on the view context so that it can be rendered at the appropriate place: 1: public class ChromeView : IView 2: { 3: private ViewEngineResult bodyView; 4: private ViewEngineResult documentView; 5:   6: public ChromeView(ViewEngineResult bodyView, ViewEngineResult documentView) 7: { 8: this.bodyView = bodyView; 9: this.documentView = documentView; 10: } 11:   12: public void Render(ViewContext viewContext, System.IO.TextWriter writer) 13: { 14: ChromeViewContext chromeViewContext = ChromeViewContext.From(viewContext); 15:   16: // First render the Body view to the BodyContent 17: using (var bodyViewWriter = new StringWriter()) 18: { 19: var bodyViewContext = new ViewContext(viewContext, bodyView.View, viewContext.ViewData, viewContext.TempData, bodyViewWriter); 20: this.bodyView.View.Render(bodyViewContext, bodyViewWriter); 21: chromeViewContext.BodyContent = bodyViewWriter.ToString(); 22: } 23: // Now render the Document view 24: this.documentView.View.Render(viewContext, writer); 25: } 26: } The ChromeViewContext (code excluded here) mainly just has a string property for the “BodyContent” – but it also makes sure to put itself in the HttpContext so it’s available. 
Finally, we created a little extension method so the module’s view can be rendered in the appropriate place: 1: public static void RenderBody(this HtmlHelper htmlHelper) 2: { 3: ChromeViewContext chromeViewContext = ChromeViewContext.From(htmlHelper.ViewContext); 4: htmlHelper.ViewContext.Writer.Write(chromeViewContext.BodyContent); 5: } At this point, the other thing left is to decide how we want to implement the Chrome.aspx page. One approach is the copy/paste the HTML from the typical Site.Master and change the main content placeholder to use the HTML helper above – this way, there are no MasterPages anywhere. Alternatively, we could even have Chrome.aspx utilize the MasterPage if we wanted (e.g., in the case where some pages are Chromed and some pages want to use traditional MasterPage): 1: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %> 2: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 3: <% Html.RenderBody(); %> 4: </asp:Content> At this point, it’s all academic. I can create a controller like this: 1: [Chromed] 2: public class WidgetController : Controller 3: { 4: public ActionResult Index() 5: { 6: return View(); 7: } 8: } Then I’ll just create Index.ascx (a partial view) and put in the text “Inside my widget”. Now when I run the app, I can request the full route (notice the controller name of “widget” in the address bar below) and the HTML from my Index.ascx will just appear where it is supposed to.   This means no more warnings for missing MasterPages and no more need for your module to have knowledge of the host’s MasterPage placeholders. You have the option of using the Chrome.aspx in the host or providing your own while embedding it as an embedded resource itself. I’m curious to know what people think of this approach. The code above was done with my own local copy of MvcContrib so it’s not currently something you can download. At this point, these are just my initial thoughts – just incorporating some ideas for Orchard into non-Orchard apps to enable building modular/composite apps more easily. Additionally, on the flip side, I still believe that Portable Areas have potential as the module packaging story for Orchard itself.   What do you think?

    Read the article

  • Generating Report for NUnit

    - by thangchung
    All source code for this post can be found on my GitHub. Some time ago, I was asked how to generate reports of NUnit test results. In fact, I had never done this; in my little programming world, I only cared about the test results, red-green-refactor, and that was it. The question was quite unexpected. I knew that I could use NCover to generate reports, but NCover's reports are too simple; they do not give details on the number of test cases, test methods, and so on. So I started looking into creating a useful report for NUnit. I was lucky to find an open source project here. Its authors call it NUnit2Report, but one disadvantage is that it only runs on .NET 1.0, which is really too old compared to the current version 4.0. I tried to download and run the preview, but I could not get it to run, so I had to open its source code, and found that it uses XSLT to convert the NUnit results output from XML to HTML. Nothing really special, because I also knew that NUnit creates an XML output file after a run; the author simply uses this file to convert to HTML using XSLT. So I decided to convert it to .NET 4.0, because then I would not have to code it from scratch. The conversion work took me some time, but I was lucky that I finally got what I wanted. Thanks, Gilles, for this OSS; I will send a mail to thank him for his efforts in putting it out as open source. Now I will show how to do it. I used NAnt for the automated build and NUnit for running the test cases, and I use the Selenium testing framework. After writing three test cases using Selenium, I ran NUnit and got the following results: 1 failure and 2 successes. The bin directory of this project will then contain the NUnit output file as shown below. Next I create a build file, and a bat file for easy running (PowerShell can be used here also). Double-click the bat file to create a report like this. Finally, open the index.html file in the folder to view the report. As you can see, the test cases are divided up very clearly, which meets the requirements. This is really good. Once again I really thank Gilles for NUnit2Report. You can contact him via the mail address [email protected] or the website http://nunit2report.sourceforge.net. It really is useful to those committed to QA. Hopefully this post will help anyone interested in generating reports for NUnit.
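
    The mechanism NUnit2Report relies on, transforming NUnit's XML result file with an XSLT stylesheet into HTML, can be reproduced in a few lines of C#. The sketch below is only an illustration of that idea, not code from NUnit2Report; the file names are assumptions (NUnit's default output is typically TestResult.xml, and the stylesheet would be one shipped with the report tool).

        // Minimal sketch of the XML-to-HTML report step described above:
        // apply an XSLT stylesheet to NUnit's XML output. File names are assumptions.
        using System.Xml.Xsl;

        class ReportGenerator
        {
            static void Main()
            {
                var transform = new XslCompiledTransform();
                transform.Load("NUnit-Frame.xsl");                   // stylesheet from the report tool
                transform.Transform("TestResult.xml", "index.html"); // NUnit result XML -> HTML report
            }
        }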

    Read the article

  • Microsoft Access 2000 How To's Series

    Listen Software Solutions and author David Nishimoto present a new series designed to help Microsoft Access developers discover the secrets of Access programming and empower the developer with the critical knowledge needed to build enterprise-quality applications.

    Read the article

  • How to increase backlinks of blog or websites [duplicate]

    - by Adarsh Sojitra
    This question already has an answer here: How do I build backlinks? 5 answers
    I know that this question is very easy and also silly, but I don't know how to make backlinks to my blog. I have tried commenting on various blogs and websites, but in Alexa there is only 1 backlink, which is my own blog. Does anyone know how to make quality backlinks for a blog or website? I also want to know whether increasing backlinks improves the SEO of my blog. Thanks in advance.

    Read the article

  • Separating a "wad of stuff" utility project into individual components with "optional" dependencies

    - by romkyns
    Over the years of using C#/.NET for a bunch of in-house projects, we've had one library grow organically into one huge wad of stuff. It's called "Util", and I'm sure many of you have seen one of these beasts in your careers. Many parts of this library are very much standalone, and could be split up into separate projects (which we'd like to open-source). But there is one major problem that needs to be solved before these can be released as separate libraries. Basically, there are lots and lots of cases of what I might call "optional dependencies" between these libraries. To explain this better, consider some of the modules that are good candidates to become stand-alone libraries. CommandLineParser is for parsing command lines. XmlClassify is for serializing classes to XML. PostBuildCheck performs checks on the compiled assembly and reports a compilation error if they fail. ConsoleColoredString is a library for colored string literals. Lingo is for translating user interfaces. Each of those libraries can be used completely stand-alone, but if they are used together then there are useful extra features to be had. For example, both CommandLineParser and XmlClassify expose post-build checking functionality, which requires PostBuildCheck. Similarly, the CommandLineParser allows option documentation to be provided using the colored string literals, requiring ConsoleColoredString, and it supports translatable documentation via Lingo. So the key distinction is that these are optional features. One can use a command line parser with plain, uncolored strings, without translating the documentation or performing any post-build checks. Or one could make the documentation translatable but still uncolored. Or both colored and translatable. Etc. Looking through this "Util" library, I see that almost all potentially separable libraries have such optional features that tie them to other libraries. If I were to actually require those libraries as dependencies then this wad of stuff isn't really untangled at all: you'd still basically require all the libraries if you want to use just one. Are there any established approaches to managing such optional dependencies in .NET?
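
    To make the "optional dependency" shape concrete, the coupling usually shows up as a public member or overload in one library whose type lives in another. The fragment below is purely illustrative; every name in it is an invented stand-in rather than the actual Util code. It shows why naively splitting the projects would leave CommandLineParser with a hard compile-time reference to the colored-string library even for callers who only ever want plain, uncolored documentation.

        // Hypothetical illustration of an "optional dependency": an optional feature
        // of the command-line parser whose public surface uses a type from the
        // colored-string library, turning the option into a hard assembly reference.
        // All names here are invented stand-ins, not the actual Util code.
        using System;

        namespace ConsoleColoredStringLib
        {
            // Stand-in for the colored-string library's main type.
            public class ConsoleColoredString
            {
                public string Text { get; set; }
                public ConsoleColor Color { get; set; }
            }
        }

        namespace CommandLineParserLib
        {
            using ConsoleColoredStringLib;

            public class OptionDocumentation
            {
                // Plain usage needs only a string...
                public string Text { get; set; }

                // ...but the colored-documentation feature drags the other assembly in,
                // even for callers who never use colored output.
                public ConsoleColoredString ColoredText { get; set; }
            }
        }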

    Read the article
