Search Results

Search found 8335 results on 334 pages for 'msbuild extension pack'.


  • Oracle Enterprise Pack for Eclipse 12.1.1 update on OTN

    - by gstachni
    Oracle Enterprise Pack for Eclipse (OEPE) 12.1.1.0.1 was released to OTN last week with support for new standards and features, including support for Eclipse Indigo SR2 (3.7.2), updated server plugins for GlassFish 3.1.2, and SSL configuration support for WebLogic Server deployment and debugging. The SSL configuration option can be found when configuring the domain for a new WebLogic Server connection. For Eclipse early adopters, an OEPE 12c update based on Eclipse Juno M6 will be available soon.

    Read the article

  • Platinum SEO Plugin Vs All-In-One SEO Pack

    The two biggest SEO optimization tools for WordPress are probably Platinum SEO and All-in-One SEO Pack. This article explains the relationship between the two and gives a brief rundown of installing the Platinum SEO plugin on your blog. It then provides a feature-for-feature comparison of the two.

    Read the article

  • SSMS Tools Pack 2.0

    If you work with SSMS, you’ll know how frustrating it can be when tasks you perform every day aren’t part of the core features. Mladen Prajdić certainly did, which is why he developed his free SSMS Tools Pack. The pack is now on its second version, and Grant Fritchey explains the functionality of this great free plugin.

    Read the article

  • Steps to Apply a Service Pack or Patch to Mirrored SQL Server Databases

    Planning on patching my SQL Servers to the latest service pack, but not sure how to complete this for an environment that is using database mirroring? In this tip, I will outline the environment and then walk through the process of patching mirrored servers.

    Read the article

  • ASP.NET and Visual Studio 2010 Service Pack 1

    Want to have a say in what goes into the ASP.NET bits of Service Pack 1 for VS 2010? Well, spend a few minutes filling out the online survey posted by the ASP.NET team: http://www.surveymonkey.com/s/MLCDPN7 If your most urgent fix doesn't make it into...(read more)

    Read the article

  • Rename file in XP: select only the file name, but show the file extension

    - by RasmusWriedtLarsen
    So if I have a file called test.txt and I want to rename it, there are two options (depending on the option to show extensions for known file types): 1) ON: it selects everything (test.txt), meaning I have to manually select "test" and replace it with the new filename (which is irritating). 2) OFF: only "test" is editable (and visible). The problem is that I frequently need to change the file extension of a file, but if the option is turned on, it's a pain to change just the file name. I know that Win7 does something smart: it selects only the file name when you press Rename (F2), but still lets you edit the file extension. Is there a way to accomplish this in XP?

    Read the article

  • How to perform regular expression based replacements on files with MSBuild

    - by Daniel Cazzulino
    And without a custom DLL with a task, too. The example at the bottom of the MSDN page on MSBuild Inline Tasks already provides pretty much all you need for that, with a TokenReplace task that receives a file path, a token and a replacement, and uses string.Replace with them. Similar in spirit but far more useful in its implementation is the RegexTransform in NuGet’s Build.tasks. It’s much better not only because it supports full regular expressions, but also because it receives items, which makes it very amenable to batching (applying the transforms to multiple items). You can read about how to use it for updating assemblies with a version number, for example. I recently had a need to also supply RegexOptions to the task, so I extended the metadata and a little bit of the inline task so that it can parse the optional flags. When using the task, I can then pass the flags as item metadata as follows:...Read full article
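
    Stripped of the MSBuild plumbing, the core of such a regex transform is just Regex.Replace plus parsing the optional flags out of a string. The following is a minimal standalone C# sketch of that idea, not the actual NuGet Build.tasks code; the method and parameter names are assumptions chosen purely for illustration.

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        static class RegexTransformSketch
        {
            // Applies a regex replacement to a file in place. The optional flags arrive
            // as a comma-separated string (e.g. "IgnoreCase, Multiline"), which is how
            // item metadata would typically surface them.
            public static void TransformFile(string path, string pattern, string replacement, string options)
            {
                // Default to RegexOptions.None when no flags metadata was supplied.
                var flags = string.IsNullOrEmpty(options)
                    ? RegexOptions.None
                    : (RegexOptions)Enum.Parse(typeof(RegexOptions), options, ignoreCase: true);

                var text = File.ReadAllText(path);
                File.WriteAllText(path, Regex.Replace(text, pattern, replacement, flags));
            }
        }

    In the inline-task version, the path, pattern, replacement and flags would each come from metadata on the items being transformed, which is what makes batching over multiple items work.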

    Read the article

  • Articles on TFS Build Server / MSBuild

    - by MartinWatts
    I have decided to write some articles on using a TFS Build Server. During the past few years I have had the responsibility and challenge of keeping one running, and I found out that on some subjects there is very little to be found on the internet. So hopefully my experiences can help others. That is, before the VS 2010 build server makes everything we have learnt about MSBuild so far redundant. ;) The first article is about selectively getting the sources you need to get the build done. You can find the article here.

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in! So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: ability to specify logging level at runtime. For example, let's say I declare my ILog instance like so:     using log4net;     public class LoggingTest     {         private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));         ...     }     If you don't know log4net, the details aren't important, just to show that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:     _log.Debug("This is the lowest level of logging and just for debugging output.");     _log.Info("This is an informational message.  Usual normal operation events.");     _log.Warn("This is a warning, something suspect but not necessarily wrong.");     _log.Error("This is an error, some sort of processing problem has happened.");     _log.Fatal("Fatals usually indicate the program is dying hideously."); And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:     _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");     Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most party the ILog interface satisfies 99% of my needs.  
In fact, for most application logging yes you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing it's signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern. So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task: Be able to determine if a runtime-specified logging level is enabled. Be able to log generically at a runtime-specified logging level. Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.    Having the ability to also determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameter at DEBUG level and if DEBUG is not on, you don't want to spend the time serializing those parameters. Now, mine may not be the most elegant solution, but it performs really well since the enum I provide all uses contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and plus it provides way too many options compared to ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.     namespace Shared.Logging.Extensions     {         // enum to specify available logging levels.         public enum LoggingLevel         {             Debug,             Informational,             Warning,             Error,             Fatal         }     } Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating for blogging brevity:     namespace Shared.Logging.Extensions     {         // the extension methods to add functionality to the ILog interface         public static class LogExtensions         {             // Determines if logging is enabled at a given level.             
public static bool IsLogEnabled(this ILog logger, LoggingLevel level)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         return logger.IsDebugEnabled;                     case LoggingLevel.Informational:                         return logger.IsInfoEnabled;                     case LoggingLevel.Warning:                         return logger.IsWarnEnabled;                     case LoggingLevel.Error:                         return logger.IsErrorEnabled;                     case LoggingLevel.Fatal:                         return logger.IsFatalEnabled;                 }                                 return false;             }             // Logs a simple message - uses same signature except adds LoggingLevel             public static void Log(this ILog logger, LoggingLevel level, object message)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message);                         break;                     case LoggingLevel.Informational:                         logger.Info(message);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message);                         break;                     case LoggingLevel.Error:                         logger.Error(message);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message);                         break;                 }             }             // Logs a message and exception to the log at specified level.             public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message, exception);                         break;                     case LoggingLevel.Informational:                         logger.Info(message, exception);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message, exception);                         break;                     case LoggingLevel.Error:                         logger.Error(message, exception);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message, exception);                         break;                 }             }             // Logs a formatted message to the log at the specified level.              
public static void LogFormat(this ILog logger, LoggingLevel level, string format,                                          params object[] args)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.DebugFormat(format, args);                         break;                     case LoggingLevel.Informational:                         logger.InfoFormat(format, args);                         break;                     case LoggingLevel.Warning:                         logger.WarnFormat(format, args);                         break;                     case LoggingLevel.Error:                         logger.ErrorFormat(format, args);                         break;                     case LoggingLevel.Fatal:                         logger.FatalFormat(format, args);                         break;                 }             }         }     } So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, i can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:     // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)             {                 _log.WarnFormat("Statement {0} took too long to execute.", statement);             }             ...         }     }     Now consider this alternate call where the logging level could be perhaps a property of the class          // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // allow logging level to be specified by user of class instead         public LoggingLevel ThresholdLogLevel { get; set; }                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))             {                 _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",                     statement);             }             ...         }     } Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.

    Read the article

  • DAC pack up all your troubles

    - by Tony Davis
    Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal, but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"): a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on. It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DACPAC for deployment and management. DBAs get an "application level view" of how their instances are being used and the ability to collectively, rather than individually, manage the objects. The DBA needing to manage a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all the databases they manage. The reason that DAC packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases. The first problem is that DAC packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR objects, and perhaps most critically security (permissions, certificates, etc.), are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea. More worrying still is the process for altering a database with a DAC pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple, eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well as sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping. As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC packs. Like it or not, they're both coming. Cheers, Tony.

    Read the article

  • Problem with Gallio and TeamCity and the new Visual Studio 2010 release

    - by Bernard Larouche
    I am running TeamCity on a virtual machine. I installed the new Visual Studio 2010 release yesterday and converted my VS 2008 projects. I also installed .NET Framework 4 on my virtual machine. Before yesterday all my projects were building successfully on the CI server, but since I installed VS 2010 I get the following error message: error MSB5014: File format version is not recognized. MSBuild can only read solution files between versions 7.0 and 9.0, inclusive. I did change my config on TeamCity to take into account the new .NET 4 framework: Build Runner: MSBuild; Build File Path: CFT.msbuild; MSBuild version: Microsoft.NET Framework 4.0; MSBuild ToolsVersion: 4.0; Run Platform: x86. I think it has something to do with the fact that MSBuild must now refer to the .NET 4 framework, but it seems that it keeps referring to 2.0.

    Read the article

  • Firefox extension is freezing Firefox until request is completed

    - by Michael
    For some reason the function freezes, along with Firefox, until it has fully retrieved the stream from the requested site. Is there any mechanism to prevent the freezing, so it works as expected?
    In XUL:
        <statusbarpanel id="eee_label" tooltip="eee_tooltip" onclick="eee.retrieve_rate(event);"/>
    JavaScript:
        retrieve_rate: function(e) {
            var ajax = null;
            ajax = new XMLHttpRequest();
            ajax.open('GET', 'http://site.com', false);
            ajax.onload = function() {
                if (ajax.status == 200) {
                    var regexp = /blabla/g;
                    var match = regexp.exec(ajax.responseText);
                    while (match != null) {
                        window.dump('Currency: ' + match[1] + ', Rate: ' + match[2] + ', Change: ' + match[3] + "\n");
                        if (match[1] == "USD") rate_USD = sprintf("%s:%s", match[1], match[2]);
                        if (match[1] == "EUR") rate_EUR = sprintf("%s:%s", match[1], match[2]);
                        if (match[1] == "RUB") rate_RUB = sprintf("%s/%s", match[1], match[2]);
                        match = regexp.exec(ajax.responseText);
                    }
                    var rate = document.getElementById('eee_label');
                    rate.label = rate_USD + " " + rate_EUR + " " + rate_RUB;
                } else {
                }
            };
            ajax.send();
    I tried putting a window.dump() right after ajax.send(), and it too only appeared in the console after the request had completed.

    Read the article

  • How to pack a Chrome extension on Mac OS X from the command line?

    - by Parimal Das
    Hi, I am trying to automate my Chrome extension build process on OS X 10.5. I am unable to find a command for OS X similar to this one for Windows: chrome.exe --pack-extension=c:\myext --pack-extension-key=c:\myext.pem Is this even possible on OS X? There is no mention of it in the documentation. Please guide me. Thanks in advance. -Parimal Das

    Read the article

  • Python: unable to inherit from a C extension.

    - by celil
    I am trying to add a few extra methods to a matrix type from the pysparse library. Apart from that, I want the new class to behave exactly like the original, so I chose to implement the changes using inheritance. However, when I try
        from pysparse import spmatrix
        class ll_mat(spmatrix.ll_mat):
            pass
    this results in the following error:
        TypeError: Error when calling the metaclass bases cannot create 'builtin_function_or_method' instances
    What is causing this error? Is there a way to use delegation so that my new class behaves exactly the same way as the original?

    Read the article

  • Shared classes are built under VS2008 but not under MSBuild.

    - by Vasiliy Borovyak
    We share our classes between a Silverlight 3.0 client and the server, as described here. Everything works fine, but only under Visual Studio 2008. Using msbuild with the following command-line parameters:
        C:\Windows\Microsoft.NET\Framework\v3.5\msbuild.exe FoobarApplication.sln /t:Rebuild /p:Configuration=Release /p:Platform="Any CPU"
    we get the following error:
        Class1.cs(28,54): error CS0234: The type or namespace name 'WcfService' does not exist in the namespace 'Company.FoobarApplication' (are you missing an assembly reference?)
        Service References\geoServiceReference1\Reference.cs(24,81): error CS0234: The type or namespace name 'WcfService' does not exist in the namespace 'Company.FoobarApplication' (are you missing an assembly reference?)
        Done Building Project "C:\work\bov-tmp\FoobarApplication\SilverlightClassLibrary3\SilverlightClassLibrary3.csproj" (Rebuild target(s)) -- FAILED.
        Done Building Project "C:\work\bov-tmp\FoobarApplication\FoobarApplication.sln" (Rebuild target(s)) -- FAILED.
    I found exactly the same question here. There are four workarounds there; I tried the first three and they did not work. The fourth workaround is not an acceptable solution. Any thoughts on how to build the solution?

    Read the article

  • Writing an extension for Safari 5

    - by Caylem
    As of Monday, 7th June 2010, Safari (v5) supports extensions. Some already exist, such as the Gmail Checker and the upcoming Coda Notes by Panic. So my question: where would one begin if one intends to develop an extension for Safari 5? Thanks in advance for any feedback!

    Read the article

  • Google Chrome Extension

    - by Jamie
    Is there a way to replace content inside the DOM of a page using replace() in JavaScript? In the source code I want to replace
        <div class="topbar">Bookmark Us</div>
    with
        <div class="topbar"><span class="larger-font">Bookmark Us</span></div>
    The Google Chrome extension should do the above on any matched website, i.e. any page that matches: http://www.domain.com/support.php Thanks.

    Read the article

  • Get URL and save it | Chrome Extension

    - by Jamie
    Basically, when you click the icon, my window should open and show the URL of the current tab. Next to it I want it to say "Save"; clicking that will save the URL to localStorage and display it below in the saved links area. Like this:

    Read the article

  • How to bundle extension methods requiring configuration in a library

    - by Greg
    Hi, I would like to develop a library that I can re-use to add various methods involved in navigating/searching through a graph (nodes/relationships, or if you like vertices/edges). The generic requirements would be:
        - There are existing classes in the main project that already implement the equivalent of the graph class (which contains the lists of nodes/relationships), the node class, and the relationship class (which links nodes together). The main project likely already has persistence mechanisms for this information (e.g. these classes might be built using Entity Framework for persistence).
        - Methods would need to be added to each of these three classes: (a) graph class - methods like "search all nodes"; (b) node class - methods such as "find all children to depth i"; (c) relationship class - methods like "return relationship type", "get parent node", "get child node".
        - I assume there would be a need to inform the library of the class names used for the graph/node/relationship classes (as different projects might use different names). To some extent it would need to work like a generics collection (where you pass the classes to the collection so it knows what they are).
        - There would also need to be a way to inform the library of which node property to use for equality checks (e.g. if it were a graph of webpages, the equality field to use might be the URI path).
    I'm assuming that using abstract base classes wouldn't really work, as this would tie usage down to the same persistence approach and the same class names. Whereas really, for a project that has "graph-like" characteristics, I want to be able to add graph searching/walking methods to it.
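
    One common way to get this kind of reuse without dictating base classes or a persistence model is for the library to define small interfaces and write its extension methods against those; the consuming project implements the interfaces on its existing (e.g. Entity Framework) classes, whatever they are called. A rough C# sketch of that shape, where every interface, member, and method name is an assumption chosen purely for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Implemented by the project's existing node class; EqualityKey is the
        // configurable property used for equality checks (e.g. a page URI).
        public interface INode
        {
            object EqualityKey { get; }
        }

        // Implemented by the project's existing relationship class.
        public interface IRelationship<TNode> where TNode : INode
        {
            string RelationshipType { get; }
            TNode Parent { get; }
            TNode Child { get; }
        }

        // Implemented by the project's existing graph class.
        public interface IGraph<TNode, TRel>
            where TNode : INode
            where TRel : IRelationship<TNode>
        {
            IEnumerable<TNode> Nodes { get; }
            IEnumerable<TRel> Relationships { get; }
        }

        public static class GraphExtensions
        {
            // "Find all children to depth i", written against the interfaces so it
            // works regardless of the concrete class names in the consuming project.
            public static IEnumerable<TNode> FindChildren<TNode, TRel>(
                this IGraph<TNode, TRel> graph, TNode start, int depth)
                where TNode : INode
                where TRel : IRelationship<TNode>
            {
                var currentLevel = new List<TNode> { start };
                for (var i = 0; i < depth; i++)
                {
                    // Children of the current level are the Child ends of relationships
                    // whose Parent matches a current-level node by equality key.
                    currentLevel = graph.Relationships
                        .Where(r => currentLevel.Any(n => Equals(n.EqualityKey, r.Parent.EqualityKey)))
                        .Select(r => r.Child)
                        .ToList();
                    foreach (var child in currentLevel)
                        yield return child;
                }
            }
        }

    With this shape, the consumer only adds a using directive for the extension namespace and implements three small interfaces; the library never needs to know the real class names or how they are persisted.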

    Read the article

  • Chrome extension: sendMessage doesn't work

    - by user3334776
    I've already read the documentation from Google on 'message passing' a few times, have probably looked at over 10 other questions with the same problem, and have already tried quite a few variations of most of their "solutions" and of what I have below... This is black magic, right? Either way, here it goes.
    Manifest file:
        {
          "manifest_version" : 2,
          "name" : "Message Test",
          "version" : "1.0",
          "browser_action": { "default_popup": "popup.html" },
          "background": { "scripts": ["background.js"] },
          "content_scripts": [
            { "matches" : ["<all_urls>"], "js": ["message-test.js"] }
          ]
        }
    I'm aware extensions aren't supposed to use inline JS, but I'm leaving this in so the original question can be left as it was, since I still can't get the message to send from the background page. When I switched from the popup to the background, I removed the appropriate lines from the manifest.json.
    popup.html file:
        <html>
        <head>
        <script>
        chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
          chrome.tabs.sendMessage(tabs[0].id, {greeting: "hello", theMessage: "Why isn\'t this working?"}, function(response) {
            console.log(response.farewell);
          });
        });
        </script>
        </head>
        <body>
        </body>
        </html>
    OR background.js file:
        chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
          chrome.tabs.sendMessage(tabs[0].id, {greeting: "hello", theMessage: "Why isn\'t this working?"}, function(response) {
            console.log(response.farewell);
          });
        });
    message-test.js file:
        var Mymessage;
        chrome.runtime.onMessage.addListener(function(message, sender, sendResponse) {
          if (message.greeting == "hello") {
            Mymessage = message.theMessage;
            alert(Mymessage);
          }
          else {
            sendResponse({});
          }
        });
    No alert(Mymessage) goes off. I'm also trying to execute this after pressing a button from a popup and having a window at a specified URL, but that's a later issue. The other files can be found here, except with the background.js content wrapped in an addEventListener("click", ...): http://pastebin.com/KhqxLx5y AND http://pastebin.com/JaGcp6tj

    Read the article
