Search Results

Search found 14969 results on 599 pages for 'tfs 2008'.

  • Basic Team Foundation Server 2010 Question - System Resource Usage?

    - by user127954
    Guys/gals, I have a really basic Team Foundation Server 2010 question. For those of you who have played around with TFS 2010: is it a lot more lightweight than TFS 2008? I remember installing all the pieces needed for TFS 2008 on one machine at work. It was a pain to install (I know 2010 is supposed to be much better). We wanted to play around with it a little to see if it met our needs, and it brought that machine to a screeching halt. I need a source control repository for home, and I thought: why not install TFS 2010, so I can get familiar with it and maybe in the future make a better sell to my organization and FINALLY get them to move off of SourceSafe? My concern is that I only have one server at home (granted, I already have SQL Server installed), and I don't want to buy a machine just for this purpose. I'd also like to get more familiar with CI too. Anyway, if TFS is going to be too heavy I'll just use Subversion, but I'd like to use TFS if possible. Any help would be appreciated. Thanks, Ncage

    Read the article

  • WCF cross-domain policy security error

    - by George2
    Hello everyone, I am using VSTS 2008 + C# + WCF + .NET 3.5 + Silverlight 3.0. I host a Silverlight control in an HTML page and debug it from VSTS 2008 (press F5, and it runs in the VSTS 2008 built-in ASP.NET development web server), then call a WCF service hosted on another machine running IIS 7.0 on Vista. The WCF service is very simple; it just returns a constant string to the client. When invoking the WCF service from Silverlight, I get the following error message:

        An error occurred while trying to make a request to URI 'https://LabTest/Test.svc'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details.

    Here is the clientaccesspolicy.xml file; is anything wrong with it?

        <?xml version="1.0" encoding="utf-8" ?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from http-request-headers="*">
                <domain uri="*"/>
              </allow-from>
              <grant-to>
                <resource path="/" include-subpaths="true"/>
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>

    Thanks in advance, George
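    One commonly cited cause for this error, offered as an assumption here rather than a confirmed diagnosis: the policy file must be served from the root of the service's domain, and when the service is reached over HTTPS, a <domain uri="*"/> entry grants access only to callers hosted on HTTPS, so a Silverlight application running from the HTTP development web server would also need <domain uri="http://*"/> listed under allow-from. A quick way to confirm the file is actually reachable at the root (curl is used here for brevity; any HTTP client works, and -k merely skips certificate checks for a lab server):

        # Silverlight fetches the policy from the root of the service domain;
        # verify it is served from there rather than from a subfolder
        curl -k https://LabTest/clientaccesspolicy.xml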

    Read the article

  • Subversion freaking out on me!

    - by Malfist
    I have two copies of a site: one is the production copy, and the other is the development copy. I recently added everything in production to a Subversion repository hosted on our Linux backup server. I created a tag of the current version, and I was done. I then copied the development copy over the top of the production copy (on my local machine, where I have everything checked out). Only 10-20 files changed; however, when I use TortoiseSVN to do a commit, it says every file has changed. The generated diff shows Subversion removing everything and replacing it with the new version (which is exactly the same). What is going on? How do I fix it? An example diff:

        Index: C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html
        ===================================================================
        --- C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (revision 5)
        +++ C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (working copy)
        @@ -1,4 +1,4 @@
        -<html>
        -<body bgcolor="#FFFFFF">
        -</body>
        +<html>
        +<body bgcolor="#FFFFFF">
        +</body>
         </html>
        \ No newline at end of file
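    When identical-looking lines show up as removed and re-added, a common culprit (an assumption here; the diff alone doesn't prove it) is a line-ending difference, for example the development copy saved with LF endings and the production copy with CRLF. That hypothesis is easy to test from the command line, using options understood by Subversion's built-in diff:

        # re-run the diff while ignoring end-of-line differences; if this
        # reports no changes, the mass "modifications" are line endings only
        svn diff -x --ignore-eol-style

        # optionally normalize text files going forward (shown for .html files)
        svn propset svn:eol-style native *.html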

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be executed; the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right; it's really easy to get wrong and never know it, because things build anyway; and even if you get it right, it won't stay that way, because dependencies between objects change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage, making it hard to connect cause and effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel, because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies, and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of the approach:

    - To analyze an object, you have to build it first: a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above.

    The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years, until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach: a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that contained only the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process.
    - Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple: it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        __iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
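    A minimal sketch of that two-script prototype flow, with hypothetical script names (mkstub.pl and chkstub.pl stand in for the actual scripts, which the post does not show; elfdump and the cc driver are real Solaris tools):

        # 1. generate C glue (empty functions, correctly sized data) from
        #    the mapfile, build the stub shared object, discard the glue
        mkstub.pl libc.mapfile > glue.c
        cc -G -Kpic -hlibc.so.1 -o stubs/libc.so.1 glue.c
        rm glue.c

        # 2. after the real libc is built, verify that both objects present
        #    the same linking interface by comparing elfdump symbol output
        chkstub.pl lib/libc.so.1 stubs/libc.so.1    # wraps: elfdump -s <object>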
    Ultimately, though, the result was unsatisfactory as a basis for a real product. There were many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that a stub wasn't a real object. Being able to identify a stub object means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general-purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to the new syntax, and put checks in place to ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language
        that will be applicable to ON, and which will require all sharable
        object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
              -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show that it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter: in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss; sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles, trying to get a handle on how things work and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

    ...and so, I backed away, put it down for a few months, and did other work...

    ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high-level view of how Solaris is built:

    - An initially empty directory known as the proto, referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top-level setup rule creates the proto area and performs the operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area.
    - All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option (a short sketch of such command lines appears at the end of this post). This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about six weeks, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub-object-related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and built stub objects for the entire workspace. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

    My focus turned towards the endgame and integration. This was a huge workspace, and it needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall-clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No; a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall-clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
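    To recap the mechanics in one place, here is a hedged sketch of the two kinds of link steps described above. The -z stub command line follows the one quoted earlier in this post; the STUBROOT paths, the mapfile name, and the consumer library are illustrative placeholders rather than actual ON makefile contents:

        # build the stub: the same command line used for the real object,
        # plus -z stub, delivered into the stub proto (placeholder path)
        ld -64 -z stub -o $STUBROOT/lib/libc.so.1 -G -hlibc.so.1 \
              -ztext -zdefs -Bdirect -M mapfile-vers

        # consumers link against the stub proto instead of the real proto;
        # since each stub carries the full linking interface, no ordering
        # rules between libraries are needed (libfoo here is hypothetical)
        ld -G -o libfoo.so.1 -hlibfoo.so.1 foo.o -L$STUBROOT/lib -lc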

    Read the article

  • CodePlex Daily Summary for Wednesday, October 23, 2013

    Popular Releases

    - ImapX 2: ImapX 2.0.0.10: The long-awaited ImapX 2.0.0.10 release. The library has been rewritten from scratch, with massive optimization and refactoring done and several new features introduced. Added support for Mono, for Windows Phone 7.1 and 8.0, and for .NET 2.0 and 3.0. Simplified connecting to a server by reducing the number of parameters and adding automatic port selection. Changed authentication handling to be universal, allowing custom providers to be built. Added advanced server featur...
    - ABCat: ABCat v.2.0.1a: [Release notes in Russian; the text is garbled in this listing. The notes reference a required Microsoft SQL Server Compact download (http://www.microsoft.com/en-us/download/details.aspx?id=17876), choosing the x64 or x86 version to match your system, and the Entity Framework.]
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.8 Rel3: v2.3.8 Rel3 updates the version number in ManagerMenuDefault.xml; simply update the version setting in the Back Office to 02.03.08 if you have already installed Rel2. v2.3.8 is now DNN6 and DNN7 compatible. NOTE: NB_Store v2.3.8 is NOT compatible with DNN5. SOURCE CODE: https://github.com/leedavi/NB_Store (source code has been moved to GitHub, due to issues with CodePlex SVN and the inability to move easily to Git on CodePlex).
    - patterns & practices: Data Access Guidance: Data Access Guidance 2013: This is the 2013 release of Data Access Guidance. The documentation for this RI is also available on MSDN: Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence: http://msdn.microsoft.com/en-us/library/dn271399.aspx
    - LINQ to Twitter: LINQ to Twitter v2.1.10: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    - Media Companion: Media Companion MC3.584b: IMDB changes fixed. Fixes: mc_com.exe updated to use the new profile entries; movie rename of file and folder fixed when "use foldername" is selected; Alt Edit Movie now checks whether the trailer URL changed and confirms it is valid; fixed IMDB poster scraping; fixed outline and plot scraping, including removal of hyperlinks; movie poster refactoring, with attempts to catch GDI+ errors. See the revision history.
    - TerrariViewer: TerrariViewer v7.2 [Terraria Inventory Editor]: Added a "Check for Update" button. Hopefully fixed the Windows XP issue. You can now backspace in item stack fields.
    - Virtual Wifi Hotspot for Windows 7 & 8: Virtual Router Plus 2.6.0: Virtual Router Plus 2.6.0.
    - Fast YouTube Downloader: Fast YouTube Downloader 2.3.0: Fast YouTube Downloader.
    - Magick.NET: Magick.NET 6.8.7.101: Magick.NET linked with ImageMagick 6.8.7.1. Breaking changes: renamed the matrix classes (MatrixColor is now ColorMatrix, and MatrixConvolve is now ConvolveMatrix); renamed the Depth method that takes a Channels parameter to BitDepth, and changed the other Depth method into a property.
    - VidCoder: 1.5.9 Beta: Added Rip DVD and Rip Blu-ray AutoPlay actions for Windows: you can now have VidCoder start up and scan a disc when you insert it (go to Start -> AutoPlay to set it up). Added an error message for Windows XP users rather than letting the app crash. Removed the "quality" preset from the list for QSV, as it currently doesn't offer much improvement. Changed the installer to ignore the version number when copying files over, which should reduce the chance of a bug from forgetting to increment a version number. Fixed ...
    - MSBuild Extension Pack: October 2013: See the release blog post. The MSBuild Extension Pack October 2013 release provides a collection of over 480 MSBuild tasks. A high-level summary of what the tasks currently cover includes the following. System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.49: Changes: NEW: added support for "ImageTeam.org" links; NEW: added support for "ImgNext.com" links; NEW: added support for "HostUrImage.com" links; NEW: added support for "3XVintage.com" links.
    - myCollections: Version 2.8.7.0: New in this version: added public rating; added collection number; added ordering by collection number; improved XBMC integration; Play on a music item now launches the default player; settings are now saved in the database; tooltips now display sort information. Fixed an issue with stars in card view, a bug with PDF export, a bug with technical information, and the HotMovies provider. Improved performance on save. Bug fixing.
    - MoreTerra (Terraria World Viewer): MoreTerra 1.11.3.1: Release 1.11.3.1. New features: added markers for Copper Cache, Silver Cache, and the Enchanted Sword. Bug fixes: "Use Official Colors" no longer tries to change the "Draw Wires" option instead; world reading was breaking for people with a stock 1.2 Terraria version; changed world-name reading so the program does not crash if you load MoreTerra while Terraria is saving the world. Feature removal: ...
    - patterns & practices - Windows Azure Guidance: Cloud Design Patterns: First drop of the Cloud Design Patterns project. It contains 14 patterns with 6 pieces of related guidance.
    - Player Framework by Microsoft: Player Framework for Windows and WP (v1.3): Includes all changes in v1.3 beta 1 and v1.3 beta 2. Support for Windows 8.1 RTM and VS2013 RTM. XAML: new AutoLoadPluginTypes property to help control which stock plugins are loaded by default (requires AutoLoadPlugins = true); support for SystemMediaTransportControls on Windows 8.1. JS: support for visual markers in the timeline; support for a markers collection and a markerreached event; new ChaptersPlugin to automatically populate the timeline with chapter tracks; audio an...
    - Json.NET: Json.NET 5.0 Release 8: Fix: fixed not writing string quotes when QuoteName is false.
    - PowerShell Community Extensions: 3.1 Production: PowerShell Community Extensions 3.1 release notes, Oct 17, 2013. This version of PSCX supports Windows PowerShell 3.0 and 4.0. See the ReleaseNotes.txt download above for more information.
    - Python Tools for Visual Studio: 2.0: We're pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, edit/IntelliSense/debug/profile, cloud, IPython, and cross-platform and cross-language debugging support. QUICK VIDEO OVERVIEW: For a quick overview of the general IDE experience, please watch this v...

    New Projects

    - Assignment 1 - WSCC2013: Assignment 1 for Web Scripting and Content Creation: initial web 2.0 site idea; Subversion; a web application which accepts two numbers and adds them a...
    - Chappie: Chappie Team main project.
    - Enable Meeting Workspaces in SharePoint 2013: This project provides some site templates to make meeting workspaces available in SharePoint 2013.
    - Entity Framework 6 Contrib: Entity Framework Contrib is dedicated to contributing new features to Entity Framework 6 that cannot be included in the EF master branch.
    - ExifToFileName - Rename video audio files using exif tags o files property: The software automatically renames all video or image files in a directory to the format yyyy-mm-dd hh.mm.ss.ext.
    - Extensible Console Application Shell: Extensible console shell application for creating plug-in based functionality, using MEF to easily incorporate new functionality into the shell console app.
    - GameBook Engine: A gamebook engine.
    - Generic Data Access Table: A simple class structure that simulates a database table through reflection for extremely quick data management; ideal for basic testing and simple apps.
    - Google Maps MVC 4: Google Maps MVC 4.
    - Grauers Hide Ribbon Items: A SharePoint project to hide ribbon items in libraries.
    - Handy PRISM 4 Extensions: A collection of handy PRISM 4 extensions which come in handy in day-to-day usage.
    - HashTag .Net Developer's Library: A collection of helpful .NET routines covering serialization, logging, reflection, hosting, cryptography, extensions, validation, and more.
    - How to display CallerID during VoIP Calls on Website: You can follow your calls easily using the Ozeki Phone System XE JavaScript API. It is simple to receive notifications on the calls made through your own website.
    - Imagenator: Small image converter app.
    - jean1023changebranch: 11
    - Kormic Cash Registers: An open source cash register application for Windows. See the development plan or overview for more information.
    - Line-Graph Drawing Component for WPF: This component allows you to draw lines between UI elements in a WPF application. Those lines and the UI elements are organized as graphs to configure that...
    - Orchard Html Injection: Orchard module for injecting HTML code into layouts and zones.
    - OS simulation: A project aimed at creating a simulation of an operating system for studying purposes.
    - Producttransfer: A demonstration of export.
    - Project_veilingsite: HTML5 ASP site.
    - Projeto Integrador: Senac project.
    - SapphireForum: Telerik Academy project. A forum-like application built using ASP.NET Web Forms.
    - Team Hambergite - Conquistador: The project is a simple game: you take a question and pick an answer.
    - TFS Storage Explorer: TFS Storage Explorer allows browsing the Team Foundation Server (non-version-control) storage for TFS 2013 or above.
    - Ticket Management based on TFS: TicketM is an extension to TFS 2012 for support ticket management. It leverages the Work Item Tracking Service of TFS 2012.
    - vwpPMCase, vwpPMCommunicate, vwpPMConfiguration, vwpPMCosts, vwpPMHR, vwpPMPurchase, vwpPMQuality, vwpPMRisk, vwpPMScope, vwpPMThesis, vwpPmTime, vwpPMWhole: a series of vwpPM* projects, each listed with only its name as the description.
    - Winner - Scommessa vincente: Winner lets you bet on different odds for matches in major sporting events, calculate the winnings from an entered amount, and save the bet.
    - ymproject: cocos & tools & editor & applications.

    Read the article

  • CodePlex Daily Summary for Friday, August 24, 2012

    Popular Releases

    - Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide. Quality-bar details: the documentation has been reviewed by the Visual Studio ALM Rangers, has been through an independent technical review, and has been reviewed by the quality and recording team; all critical bugs have been resolved. Known issues/bugs: spelling, grammar, and content revisions are in progress; a hotfix will be published.
    - Community TFS Build Extensions: August 2012: The August 2012 release contains VS2010 activities (targeting .NET 4.0), VS2012 activities (targeting .NET 4.5), Community TFS Build Manager VS2010, and Community TFS Build Manager VS2012. Both Community TFS Build Managers can also be found in the Visual Studio Gallery, where updates will first become available. Please note that we only intend to fix major bugs in the 2010 version and will concentrate our efforts on the 2012 version of the TFS Build Manager. At a high level, the following I...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.62: Fix for issue #18525: escaped characters in CSS identifiers get double-escaped if the character immediately after the backslash is not normally allowed in an identifier. Fixed a symbol problem with the NuGet package; 4.62 should have NuGet symbols available again.
    - Game of Life 3D: GameOfLife3D Version 0.5.2: Supports Windows 8.
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.65: As some of you may know, we were planning to release version 2.70 much later (at the end of September), but today we have to release this intermediate version (2.65). It fixes a critical issue caused by a third-party assembly when running nopCommerce on a server with .NET 4.5 installed. No major features have been introduced with this release, as our development efforts were focused on further enhancements and fixing bugs. To see the full list of fixes and changes, please visit the release notes p...
    - MyRouter (Virtual WiFi Router): MyRouter 1.2.9: Fix: some missing changes for the window-subclassing crash. Fix: bug when running MyRouter for the first time. Fix: log file. Fix: improved application performance. Fix: resolved some exceptions.
    - Smart Thread Pool: SmartThreadPool 2.2.2: Release changes: added the ability to set a name on threads; fixed WorkItemsQueue.Dequeue (replaced while(!Monitor.TryEnter(this)); with lock(this) { ... }); fixed SmartThreadPool.Pipe; added an IsBackground option for threads; added ApartmentState for threads; fixed thread creation when queuing many work items at the same time.
    - ZXing.Net: ZXing.Net 0.8.0.0: Synced with rev. 2393 of the Java version; improved API; direct support for multiple barcode decoding; wrapper for barcode generation; many other improvements and fixes; encoder and decoder command-line clients; demo client for Emgu CV; dev documentation started.
    - ScintillaNET: ScintillaNET 2.5.1: This release has been built from the 2.5 branch. Issues closed: 32524, 32550, 32552, 25148, 32449, 32551, 32711.
    - MFCMAPI: August 2012 Release: Build 15.0.0.1035. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables; if you want to debug them, get the symbol files and the source. The 64-bit builds will only work on a machine with Outlook 2010 64-bit installed; all other machines should use the 32-bit builds, regardless of the operating system. Facebook badge.
    - Document.Editor: 2013.2: What's new for Document.Editor 2013.2: new "save as HTML document" option; improved translate support; minor bug fixes, improvements, and speed-ups.
    - Pulse: Pulse Beta 5: What's new in this release? To start with, we now have Wallbase.cc authentication, so you can access favorites or NSFW content. This version requires .NET 4.0; you probably already have it, but if you don't, it's a free and easy download from Microsoft. Pulse can be set to start on Windows startup now, too. The wallpaper setter now has settings, so you can change the background color of the desktop and the picture position (tile/center/fill/etc.). I've switched to Windows Forms instead of WPF...
    - Metro Paint: Metro Paint: Download it now, and don't forget to give feedback at maitreyavyas@live.com or at my Facebook page, fb.com/maitreyavyas. Hope you enjoy it.
    - Obelisk - WP7 & Windows 8 MVVM Persistence Library: Obelisk 2.2 Release: This release is built against code shared between WP7 and Windows 8. The setup project only contains the source for WP7, because I can't create an MSI in Windows 8 yet, so for Windows 8, use the source.
    - MiniTwitter: 1.80: [Release notes in Japanese; the text is garbled in this listing. They mention a requirement on .NET Framework 4.5 and changes involving "&" handling, ReTweet, "In reply to", and URL handling.]
    - Droid Explorer: Droid Explorer 0.8.8.6 Beta: Device images are now pulled from the DroidExplorer cloud service; refined some issues with the usage statistics; added a method to get the first available value from a list of property names; DroidExplorer.Configuration no longer depends on DroidExplorer.Core.UI (it is actually the other way around now); fixed the bootstrapper to only try to delete the SDK if it is a "local" SDK, not an existing one; the "local" SDK is no longer supported, you must now select an existing SDK; checks for the SDK if it was ins...
    - Path Copy Copy: 11.0.1: Bugfix release that corrects issue 11365. If you use Path Copy Copy in a network environment and use the UNC path commands, it is recommended that you upgrade to this version.
    - ExtAspNet: ExtAspNet v3.1.9.1: [Release notes in Chinese; the text is garbled in this listing. The 2012-08-18 notes for v3.1.9 mention, among other things, a JavaScript fix for BoundField tooltips in other/addtab.aspx (Dennis_Liu), changes to Window's GetShowReference, HTML/JavaScript rendering cleanups, and a new grid/griddynamiccolumns2.aspx example.]
    - Task Card Creator 2010: TaskCardCreator2010 4.0.2.0: What's new: UI/UX improved using a contextual ribbon tab for reports; finished the "new 4.0 UI"; report template help improved; new project branch to support TFS 2012: http://taskcardcreator2012.codeplex.com. From 4.0.1.0: user interface made more modern; smarter algorithm used for report generation; quality setting added; terms harmonized; miscellaneous optimizations; fixed a critical issue introduced in 4.0.0.0.
    - SABnzbd for LCDSmartie: v 0.9.1: Included the right version of Newtonsoft.Json.dll in the download. No other changes.

    New Projects

    - Anagramme: A network game based on anagrams; a technical demonstration using WPF, WCF, WF, MEF, and the MVVM pattern.
    - ApplicationModel Framework: Ultra-light WPF, MEF, and MVVM enabled framework.
    - atfcard: atfcard.
    - Caribbean Cinemas: Create an application for Windows Phone 7.5 or higher in which users can see which movies are currently playing in theaters.
    - CiberSeguros: A basic CRUD application using an insurance company as the business logic.
    - CLF 3.0: The SharePoint CLF 3.0 toolkit will allow departments and agencies to publish web sites that conform to the new Treasury Board of Canada Secretariat.
    - CodeContrib: C# blog engine using ASP.NET MVC 4.
    - DataGridView UserControl with Paging: A Windows Forms user control in C#: a DataGridView with extended paging functionality.
    - diploma: [description garbled in this listing]
    - JVM via COBOL: Sample programs in the isCOBOL dialect of object-aware and object-oriented COBOL, which focus on integrating functionality from the Java APIs into COBOL.
    - MyEFamily: Client/server software to move the family off the social network and into the family network.
    - MyMVC3: My MVC.
    - OMR.Lib.Database - Lightweight WinRT InMemory Database: Lightweight in-memory database with a dependent persistent source.
    - OpenNETXC an unofficial port of the OpenXCPlatform Project to WinRT: An unofficial WinRT port of the OpenXC platform; see http://openxcplatform.com/
    - PowerExtension: PowerExtension is an open source extension for Small Basic, a programming language. It adds file, networking, speech, and more!
    - Project Webernet: Published 8/23/2012.
    - ProjectManagementGenius: [description in Chinese; garbled in this listing]
    - Proligence PowerShell VFS: The PowerShell VFS project is an implementation of a virtual file system for PowerShell providers. Using this library you can easily implement advanced PowerShe...
    - Proxer.Me-Wrapper: A wrapper for the website Proxer.Me for watching its streams.
    - Sharepoint Custom Recurrence Field: A recurrence field for SharePoint 2010 similar to the timer recurrence field in Central Admin.
    - SharePoint Document Converter: The SharePoint Document Converter solution gives a start on how to leverage the Word Automation Service to convert documents to formats that Word can support. This project converts documents of type "docx" or "doc" to any file type that Word supports, such as PDF, XPS, DOCX, DOCM, DOC, DOTX, DOTM, DOT, XML, RTF, and MHT. This solution helps you learn the following things about SharePoint: how document conversion happens using the Word Automation Service; SharePoint ribbon customization (H...
    - Small Ticket System: Light CRM/ticket system built with LightSwitch. Basic features: account/contact/contract management; ticket system with work history, notes, and tasks.
    - SQL Server Keep Alive Service: What is this? A Windows service that tests whether your SQL Server is up and running and writes the status to the Windows event log.
    - TeamBoard: A team build server display.
    - Test Foreign Vocabulary: A tool to help learn foreign vocabulary. You import an Excel file containing the vocabulary list, and afterwards you test yourself with the tool.
    - testdd08232012git01: sd
    - testtfs08232012tfs01: sd
    - TFS Kurs 2012: This is a training project for use in Mesaninen 2012.
    - thenewcat: ffffffffffffffvvvv
    - TriviaGame: A trivia game using WCF and WPF technologies.
    - Ultra Urban: Ultra Urban is a 3D SimCity-like game framework written in C# and XNA. It will provide open interfaces for further city simulation.
    - University Scheduler: University Scheduler.
    - Visual Studio Solution Export Import Addin: Usually we need to share Visual Studio projects: we take the source code, create a zip file, and share its location with others. If the project is not clean, the shared file is larger than necessary. This manual process can be automated inside Visual Studio with an add-in, so I have created one for Visual Studio 2010 that automates all of the above manual tasks. Source code is provided as-is, so you can extend it to do the same for other versions of Visual Studio. Cheers!...
    - Whisper.Web.Providers: Custom web providers by Whisper, including CustomMembershipProvider, CustomRoleProvider, and SQL Server versions (SqlMembershipProvider, SqlRoleProvider).
    - Windows Azure Storage Metrics Client Library: A library of .NET classes useful for the client (consumer) side of Windows Azure Storage Metrics and Windows Azure Diagnostics.
    - WPF Prism Starter Kit: The goal of this project is to deliver a partitioned project skeleton for using Prism with WPF.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

    - Storage: Import/Export hard disk drives to your storage accounts
    - HDInsight: general availability of our Hadoop service in the cloud
    - Virtual Machines: new VM gallery, ACL support for VIPs
    - Web Sites: WebSocket and remote debugging support
    - Notification Hubs: segmented customer push notification support with tag expressions
    - TFS & Git: continuous delivery support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: support for partitioned queues and topics
    - Billing: new Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives, we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost-effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption, which enables you to securely encrypt data on the hard drives before you send them, and not have to worry about the data being compromised even if a disk is lost or stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export Your First Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal, as well as programmatically using our Service Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab, make sure to sign up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" command at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

    We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big-data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or our PowerShell and cross-platform command-line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight: the combination allows you to easily ingest, process, and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
    - Category tree view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery, or now quickly filter them using the category tree view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree view you can quickly filter to see the official Oracle-supplied images.
    - MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to exclude types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing; you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images, and options on how to cost-effectively run them, directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.

Virtual Machines: VM Gallery Enhancements

Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

- Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
- Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery, or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.
- MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing; you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
- Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
- Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.

Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance.

This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

Here are the default behaviors for ACLs in Windows Azure:

- By default (i.e. no rules specified), all traffic is permitted.
- When using only Permit rules, all other traffic is denied.
- When using only Deny rules, all other traffic is permitted.
- When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

Web Sites: Web Sockets Support

With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it.

You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.
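
To give a feel for what this enables, here is a minimal SignalR 2.x hub sketch. The hub class, method, and client callback names are illustrative assumptions; SignalR picks the WebSocket transport automatically when the site has it enabled and falls back to older transports otherwise.

    using Microsoft.AspNet.SignalR;
    using Microsoft.Owin;
    using Owin;

    [assembly: OwinStartup(typeof(Startup))]

    // OWIN startup class: wires the /signalr endpoint into the site.
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR();
        }
    }

    // Clients call Send; the server pushes the message to every connected
    // client, over WebSockets when the transport is available.
    public class ChatHub : Hub
    {
        public void Send(string name, string message)
        {
            // broadcastMessage is a client-side callback name (dynamic dispatch).
            Clients.All.broadcastMessage(name, message);
        }
    }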

Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites.

With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it.

When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure. The debugger will then stop the web site's execution when it hits any break points that you have set within your web application's project inside Visual Studio. For example, below I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio.

Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab.

When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting: the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs, a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

New support for using tag expressions to enable advanced customer segmentation

With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

- Offers based on multiple preferences, e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message, e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan
- Avoid presenting subsets of a segment with irrelevant content, e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

    var notification = new WindowsNotification(messagePayload);
    hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: To "all my group except me" - group:id && !user:id
- Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: Send notifications at specific times. E.g. tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
- Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.
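
For a tag expression like the one above to match anything, devices must first be registered with the corresponding tags. Here is a rough sketch of the registration side, assuming the CreateWindowsNativeRegistrationAsync helper from the Notification Hubs server SDK; the connection string and hub name are placeholders.

    using System.Threading.Tasks;
    using Microsoft.ServiceBus.Notifications;

    class DeviceRegistrationSketch
    {
        // Registers a Windows Store app's channel URI with a location tag and
        // an interest tag, so an expression like
        // "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" can match it.
        static async Task RegisterAsync(string channelUri)
        {
            NotificationHubClient hub = NotificationHubClient
                .CreateClientFromConnectionString("<connection-string>", "<hub-name>");

            await hub.CreateWindowsNativeRegistrationAsync(
                channelUri, new[] { "Loc:Boston", "Follows:RedSox" });
        }
    }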

TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass on it, the app is automatically deployed to Windows Azure with zero manual intervention or work required. The below steps demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services. I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git, and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.

I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don't have to buy or rent my own build server: Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy.

To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will launch a dialog in which I gave the web site a name and then made sure the "Publish from source control" checkbox was selected. When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services". Once we do this we'll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie").

When we click the "Authorize Now" button we'll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier.

Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today's Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this, and we have updated the Windows Azure Management Portal to make it really easy to configure.

Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it. Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything).

Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services, including New Relic. Select "New Relic" within the dialog, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version", which does not cost anything and can be used forever.

Once we've signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu. This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it; make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal).

Once we install the NuGet package we are all set to go. We'll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed-up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account.

We can now see insights into how our app is performing, without having to have written a single line of monitoring code. The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
- Error information including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information, including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don't need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

If you haven't tried New Relic out yet with Windows Azure I recommend you do so; I think you'll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today's release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores also provide higher availability.

You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.
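
Partitioning can also be enabled programmatically when the entity is created. Here is a minimal sketch using the Service Bus client library; the connection string and queue name are placeholders.

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class CreatePartitionedQueue
    {
        static void Main()
        {
            // Connection string is a placeholder.
            var manager = NamespaceManager.CreateFromConnectionString("<connection-string>");

            var description = new QueueDescription("orders")
            {
                // Spreads the queue across multiple message brokers and stores.
                EnablePartitioning = true
            };

            if (!manager.QueueExists(description.Path))
            {
                manager.CreateQueue(description);
            }
        }
    }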

Billing: New Billing Alert Service

Today's Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert first sign-up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available. The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the "add alert" button I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month.

The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.

Summary

Today's Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don't already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Sharepoint 2010, 404 error after installation

    - by Tommy Jakobsen
    Running Windows Server 2008 R2 Standard, SQL Server 2008 Enterprise, and Team Foundation Server 2010, I installed SharePoint Server 2010 (single server). It installed correctly, and the wizard configured it without errors. When accessing the SharePoint server through http://localhost/ I get a 404 error. I also get a 404 when trying to access the admin interface on port 42620. SharePoint, TFS and Reporting Services are the only applications on my IIS, and they are NOT sharing the same port, so that can't be the error. Do you have any ideas what the problem can be? Is there some way that I can debug this?

    Read the article

  • Reporting Services Returning HTTP 401 Unauthorized

    - by Chris Arnold
    I have just ported an existing ASP.NET application to a new web server (Windows Server 2008 R2 and SQL Server 2008). It is successfully running on 4 other servers of varying O/S (which I also set up). My ASP.NET app calls into the Reporting Services Web Service (ReportExecution2005.asmx) to generate a report and save it as a pdf to the file system. I consistently receive "System.Net.WebException - The request failed with HTTP status 401: Unauthorized." In UTTER desperation I have performed the following... Granted all Users complete access to SSRS via the Reports web page. Granted all Users 'Full control' to %ProgramFiles%\Microsoft SQL Server\MSRS10.MSSQLSERVER. I am not a network / server specialist but I'm the only one that can deal with this and it's driving me batty. Help!

    Read the article

  • Stable reverse port forwarding in SSH and stale sessions

    - by Vi
    Using a VPS to forward ports behind NAT: for((;;)) { ssh -R 2222:127.0.0.1:22 [email protected]; sleep 10; } When the connection breaks somehow, it reconnects, but then: Warning: remote port forwarding failed for listen port 2222 Linux vi-server.no-ip.org 2.6.18-92.1.13.el5.028stab059.3 #1 SMP Wed Oct 15 13:33:44 MSD 2008 i686 I type: vi@vi-server:~$ killall sshd Connection to vi-server.org closed by remote host. Connection to vi-server.org closed. Linux vi-server.no-ip.org 2.6.18-92.1.13.el5.028stab059.3 #1 SMP Wed Oct 15 13:33:44 MSD 2008 i686 vi@vi-server:~$ Now it's OK. What is the simplest way to make this automatic?

    Read the article

  • .Net Framework corrupted

    - by Samsudeen B
    Hi, we are facing a problem of .NET Framework corruption for one of our clients, with the following environment: OS: Windows 2008 Server SP2; Framework: .NET Framework 3.5 SP1. Application details: Database: SQL Server 2008; Server: WCF hosted web service; Client: WPF based UI. Problem: the config files inside "..\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG" are suddenly deleted, and I am not able to work with my application, not able to repair .NET, or run SQL Server. The only option is to restore an earlier image of that machine. Any help is much appreciated. sam

    Read the article

  • IIS 7.0: Requiring Client Certificates causes error 500 and "page cannot be displayed"

    - by user48443
    I have two Windows 2008 x86 servers running IIS 7.0, one site on each server; both sites are SSL-enabled, using DoD-issued certificates. Both sites are accessible via https over port 443, but fail the moment Client Certificates are set to Require or Accept. IIS log records error 500.0.64 but nothing else. I have several Windows 2008 IIS 7 x64 servers that require client certificates and they are working as expected; it's just the two x86 servers that are being problematic.

    Read the article

  • How to sysprep SQL Server Express?

    - by Jim
    We plan to deploy a Hyper-V VHD with Windows Server 2008 R2 and SQL Server 2012 Express installed to multiple hosts. From my understanding, the correct way to do this is to install SQL Server in preparation mode, sysprep Windows, then complete the SQL Server installation when the VHD is deployed. I mostly followed the process in this blog post: http://sethusrinivasan.com/category/sysprep/ However, after the VHD is deployed, I'm unable to complete the SQL Server installation. It keeps saying "Upgrade matrix is incorrect". It seems that it's trying to upgrade itself to Enterprise edition (I was asked for a product key during install, but I skipped it). Could anyone share their experience in deploying VHDs with SQL Server (we're fine with either SQL Server 2008 R2 or 2012)? I think the source of my issue is that I can't select "Express Edition" when entering the product key at the completion stage, so the installation is trying to do an upgrade to Enterprise Edition. I have no idea why the drop down list is empty.

    Read the article

  • Splwow64 with TS Easy Print

    - by Tim Brigham
    I have an application (Sage MIP Fund Accounting) which exports data to Excel. In this process it uses an internal print driver. Since we upgraded from 2008 to 2008 R2 this export process causes system hangs. This has been isolated down to the splwow64 executable hanging while the Excel document is building. If I kill the splwow64 executable things function properly (I just can't print it once completed). This only occurs while using printer redirection via the Remote Desktop Easy Print function; if I pull the printer redirection things work exactly as expected. I've spent the last couple hours looking at hotfixes or driver upgrades since this appears to be a problem specifically with how the Remote Desktop Easy Print printer is functioning. Is anyone aware of a hotfix which would be applicable in this situation? I don't want to grab every hotfix for redirected printing and start throwing them out there.

    Read the article

  • Exchange 2010 to Exchange 2010 Public Folder Replication

    - by Archit Baweja
    We have 2 Exchange servers in our org, MX1 and MX2. I'm trying to replicate all MX1 public folders to MX2. I've set up replication for all the top-level folders to include the MX2 server. However no public folders are being replicated. The event log does not show any errors. I've set the diagnostic level for all public folder diagnostics to Highest using get-eventloglevel "MSExchangeIS\9001 Public\*" | set-eventloglevel -Level Expert However, besides a 3092 event ID (type: 0x2) generated on MX1 (the source server), there are no events being generated that would notify me of any issues. Some technical details: MX1 is Windows 2008 Standard, MX2 is Windows 2008 Enterprise (eval mode right now).

    Read the article

  • How can you know what is w3wp.exe doing? (or how to diagnose a performance problem)

    - by Daniel Magliola
    I'm having a performance problem in a site we've made, and I'm not exactly sure how to start diagnosing it. The short description is: We have a very small site (http://hearablog.com) with very little traffic, on a crappy dedicated server; CPU is always very high, sometimes it stays at 100% for minutes, and w3wp.exe is taking most of it. A typical scenario is w3wp.exe takes 60%, and SQL Server takes about 30%. Our DB is pretty small too. Long description and more details: The site is hosted on a very crappy server by Cari.Net. From the beginning we had the feeling that the server didn't quite behave correctly, like some things would take just too long, so this could be a configuration problem from the get go. It may also be that we are getting a virtual server while we're supposed to have a dedicated one, although we have no evidence that'd indicate this, except for the fact that the server tends to be quite slow. The server is Windows 2008 Standard 64-bit, with SQL 2008 Express. Hardware is a Celeron 2.80 GHz, 1 GB RAM. The website is developed in ASP.NET MVC, using Entity Framework for data access. Now, this is pretty crappy hardware, but I've had other servers with these guys, with equivalent (or worse) HW, and performance is much better than this one. That said, the other servers have W2003 and SQL2005, and I'm using ASP.Net "WebForms" 2.0, no MVC, no LINQ, no EF; so I'm not sure whether going to 2008 / the other stuff means a big performance penalty is expected. I'm serving MP3 files (5-20 MB) regularly, which is a slightly unusual load; maybe that is causing some kind of problem? Would that cause w3wp to use a lot of CPU? Disk usage seems very low. Memory is usually around 90%, but disk usage seems to indicate it's not paging much. I get tons of e-mails every day about SQL timeouts, for queries taking over 30 seconds, although all our queries are pretty straightforward (or should be, but EF may be screwing it up). Now, what confuses me very much is that CPU usage of w3wp is just so high. It shouldn't be doing much really... So my questions are... Is there any way of finding out "what" it is doing? Maybe even profile it? Any performance counters I should be looking at? Is this to be expected given this hardware/software configuration? Could this be caused by some kind of configuration failure, and where would you start looking? Thank you VERY much. Daniel Magliola

    Read the article

  • DHCP Client Can't Find DHCP Server

    - by leeman24
    I currently have 3 machines:

    - CentOS (router): eth1 - 18.0.168.1, eth2 - 145.165.34.1
    - Windows Server 2008 (server): 18.0.168.2, DHCP scope 145.165.34.10 - 145.165.34.20
    - Windows 7 (client): supposed to use DHCP

    I can't get my Windows 7 client to get an address from the Windows Server 2008 DHCP server. Every network interface can ping each other (e.g. 18.0.168.2 can ping 18.0.168.1 and 145.165.34.1, and the other way around). My Linux machine acting as the router has the default iptables rules, other than this command, which may or may not be right: iptables -I INPUT -p udp -d 18.0.168.2 --dport 67:68 -j ACCEPT I have also tried it after I flushed the iptables rules. I was looking at the dhcrelay command, but it seems CentOS doesn't have it, and I am not even sure how to use it.

    Read the article

  • SharePoint 2010 Server Configuration Error -> "Cannot connect to database master"

    - by Chrish Riis
    I receive the following error when I try to configure SharePoint 2010 Server: "Cannot connect to the database master at SQL server at [computer.domain]. The database might not exist, or the current user does not have permission to connect to it." I run the following setup: Windows Server 2008 R2 Standard with SP1 and all the updates; SQL Server 2008 R2 with SP1; SharePoint Server 2010 with SP1. Everything is installed on the same server (it's a test server). I have tried the following:

    - Rebooting the server
    - Checking the install account's DB rights (dbcreator, securityadmin; I even let it have sysadmin)
    - Opening up the firewall on ports 1433 and 1434
    - Uninstalling both SQL and SP, then reinstalling them both
    - Enabling all client protocols in SQL Server Configuration
    - Making sure I used the correct account for installing SharePoint (local admin)

    Useful links: TCP/IP settings - http://blog.vanmeeuwen-online.nl/2010/10/cannot-connect-to-database-master-at.html and http://ybbest.wordpress.com/2011/04/22/cannot-connect-to-database-master-at-sql-server-at-sql2008r2/ Wrong slash - http://yakimadev.com/2010/11/cannot-connect-to-database-master-at-sql-server-at-serverdbname-error-during-sharepoint-2010-products-configuration-wizard-and-installation/ Port error - http://www.knowsharepoint.com/2011/08/error-connecting-to-database-server.html

    Read the article

  • HP DL580 G5 Hyper-V networking problem

    - by mr-perfect
    Hi, I have installed the Hyper-V role on a DL580 G5 cluster. The host operating system is Windows Server 2008 x64, patched to the maximum. Everything has gone fine, but when I install a guest operating system (a Windows Server 2008 x64), I can't reach the network from it. It sends many packets but doesn't receive any, so I can't ping the external network either. I have installed the latest drivers on the server and uninstalled the network configuration utility from the physical server, but no luck. I added a legacy adapter to the guest bound to the same physical adapter on the host machine, but it didn't help. Any ideas welcome...

    Read the article

  • Does the Windows "Sources" folder need copied to C: like the "i386" folder did?

    - by James Watt
    On all flavors of Windows prior to Windows Vista, the Windows install CD contained a folder called i386. After installing Windows, this folder is supposed to be copied to the C: drive. Once the folder has been copied, if the user is ever installing a program or Windows updates that require the Windows install CD, it will retrieve the files from the hard drive INSTEAD of prompting for the Windows CD. On new versions of Windows, including Windows Vista, Windows 7, Server 2008 and Server 2008 R2, the i386 folder has been renamed to "sources". Should this folder be copied to the hard drive? Or do the new versions of Windows work differently (i.e., by installing all features on the hard drive to eliminate the need to ever prompt the user to insert their disc)? It does not hurt to copy the sources folder, so I have been doing it. But if I could eliminate wasted time, it would make installations faster, which helps my customers' bottom line.

    Read the article

  • How to Remove a VM From Hyper-V Without Deleting the Configuration File?

    - by Steven Murawski
    I'm in the process of moving a number of virtual machines that are homed on shared storage (a file share, though shared cluster disk would work as well) to a new VM host with access to the same shared storage. The new host is a different build version (moving from Windows Server 2012 Beta to Windows Server 2012 RC - though this same process could be used with migrations of Windows Server 2008/2008 R2 to Windows Server 2012 as well), so I cannot migrate the machine with inbox tooling. I need to remove the VM from management of the source Hyper-V host in order to import the VM to the new Hyper-V host. I want to retain the configuration file, so I can import the VM as it stands and not need to reconfigure it. The VHD files are rather large and they are staying on the same file share, so I'd rather not duplicate them during the move process.

    Read the article

  • Seamless Remote Windows on Linux Client

    - by prestomation
    I really like the RemoteApp features of Windows Server 2008. Are there any solutions for similar functionality for Linux clients and a Windows server? We have users that are running Linux on the desktop and rdesktop to a Windows server, mostly for Outlook. It would be nice if Outlook could behave more seamlessly. From what I can tell rdesktop doesn't do that yet; it only implements RDP 5. I've also seen "SeamlessRDP" from Cendio, which is an extension to rdesktop, but I can't get it to work. I think it doesn't work in Vista, but that is just a guess. Any ideas? And thanks in advance. EDIT: I emailed Cendio and they confirmed SeamlessRDP doesn't work with Server 2008 for unknown reasons. It appears there is no way to do this with a 2k8 server. It's either Windows clients with RemoteApp or using a pre-Vista OS with SeamlessRDP.

    Read the article

  • How to Grant IIS 7.5 access to a certificate in certificate store?

    - by thames
    In Windows 2003 it was simple to do, and one could use winhttpcertcfg.exe (download) to give the "NETWORK SERVICE" account access to a certificate. I'm now using Windows Server 2008 R2 with IIS 7.5, and I am unable to find where and how to set access permissions to a certificate in the certificate store. This post showed how to do it in Vista, and noted that the winhttpcertcfg features were added into the Certificates MMC; however, it doesn't seem to work with imported certificates, or doesn't work anymore on Server 2008 R2. So does anyone have any idea on how to give IIS 7.5 the correct permissions to read a certificate from the certificate store? And also which account from IIS 7.5 needs the permission.

    Read the article

< Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >