Search Results

Search found 14482 results on 580 pages for 'ssrs 2008'.


  • NOT A DUPLICATE! VS2010 - How to automatically stop compile on first compile error

    - by Ben Robbins
    {rant}First I'd like to say that this IS NOT A DUPLICATE. I've asked this question previously but it got closed as a duplicate when it isn't. This question is SPECIFIC to VS 2010 and the answers to the so-called duplicate work in VS 2008 but not in VS 2010 (at least not for me or anyone I know). So before you go closing something as a duplicate how about you read the question carefully and try the answer for yourself and see if it actually works. Apologies for the rant but there is no obvious way to contact the SO police that closed the issue or get it reopened. {/rant} At work we have a C# solution with over 80 projects. In VS 2008 we use a macro to stop the compile as soon as a project in the solution fails to build (see this question for several options for VS 2005 & VS 2008: http://stackoverflow.com/questions/134796/how-to-automatically-stop-visual-c-build-at-first-compile-error). Is it possible to do the same in VS 2010? What we have found is that in VS 2010 the macros don't work (at least I couldn't get them to work) as it appears that the environment events don't fire in VS 2010. The default behaviour is to continue as far as possible and display a list of errors in the error window. I'm happy for it to stop either as soon as an error is encountered (file-level) or as soon as a project fails to build (project-level). Answers for VS 2010 only please. If the macros do work then a detailed explanation of how to configure them for VS 2010 would be appreciated. Thanks.

    Read the article

  • Microsoft products such as Visual Studio 2010 do not require entering a serial number

    - by MainMa
    Hi, I am a member of WebsiteSpark and was a member of DreamSpark. Both programs let members download software and provide serial keys to use. Some software, like Windows Server, has an ISO file to download and a serial number displayed on the website which I must enter during installation. Other software does not have any serial key. For example, when I downloaded Visual Studio 2010, there was just a link to an ISO file. During installation, there was no serial number field (whereas Visual Studio 2008 had this field at the beginning of the installation process). The same is true of SQL Server 2008 and Microsoft Expression Studio 3. Even when I downloaded the public trial RTM version of Windows Seven Enterprise, there was no serial number to enter. I don't think that products as expensive as SQL Server 2008 Enterprise are delivered without serials and online validation, so I suppose the serial is embedded into the product itself, either in the installation binaries or in a separate config file, so it is already in the ISO I download and I do not have to enter it. So my question is: how is this done technically? Is each 2 GB ISO generated on demand on the server to embed a serial each time the ISO is requested? I suppose that if it is done that way, it has a huge impact on server performance (no caching, no streaming...), so what techniques might be used behind the scenes? I want to implement the same feature in a product I intend to ship (to simplify installation by avoiding asking the user to enter a serial number), but I really don't see how to do it with low impact on server performance.

    Read the article

  • WCF cross-domain policy security error

    - by George2
    Hello everyone, I am using VSTS 2008 + C# + WCF + .Net 3.5 + Silverlight 3.0. I host a Silverlight control in an HTML page and debug it from VSTS 2008 (press F5, then run in the VSTS 2008 built-in ASP.Net development web server), and then call a WCF service hosted on another machine running IIS 7.0 + Vista. The WCF service is very simple; it just returns a constant string to the client. When invoking the WCF service from Silverlight, I got the following error message: An error occurred while trying to make a request to URI 'https://LabTest/Test.svc'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details. Here is the clientaccesspolicy.xml file; is anything wrong with it? <?xml version="1.0" encoding="utf-8" ?> <access-policy> <cross-domain-access> <policy> <allow-from http-request-headers="*"> <domain uri="*"> </domain> </allow-from> <grant-to> <resource path="/" include-subpaths="true"></resource> </grant-to> </policy> </cross-domain-access> </access-policy> thanks in advance, George
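    A quick check that sometimes narrows this down: Silverlight requests clientaccesspolicy.xml from the root of the target domain, over the same scheme as the service call, before it will issue the SOAP request. The hedged console sketch below simply fetches that file over HTTPS to confirm it is actually being served; the host name is taken from the error message above, and the certificate-validation override is for lab testing only.

        using System;
        using System.Net;

        // Hedged diagnostic sketch: verifies that clientaccesspolicy.xml is served from the
        // root of the HTTPS host the Silverlight client calls. The host name comes from the
        // question; everything else is a plain .NET 3.5 console app.
        class PolicyProbe
        {
            static void Main()
            {
                // Lab-only: accept the server certificate even if it is untrusted, so a
                // certificate problem does not mask a missing policy file.
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, cert, chain, errors) => true;

                using (var client = new WebClient())
                {
                    try
                    {
                        string policy = client.DownloadString("https://LabTest/clientaccesspolicy.xml");
                        Console.WriteLine("Policy file found:");
                        Console.WriteLine(policy);
                    }
                    catch (WebException ex)
                    {
                        // A 404 here usually means the file is not at the site root of the service host.
                        Console.WriteLine("Could not retrieve policy file: " + ex.Message);
                    }
                }
            }
        }

    If the file is reachable, attention shifts to the policy contents and to cross-scheme rules (an http-hosted Silverlight app calling an https service is treated more strictly than a same-scheme call).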

    Read the article

  • Subversion freaking out on me!

    - by Malfist
    I have two copies of a site: one is the production copy, and the other is the development copy. I recently added everything in the production copy to a Subversion repository hosted on our Linux backup server. I created a tag of the current version and I was done. I then copied the development copy over the top of the production copy (on my local machine where I have everything checked out). There are only 10-20 files changed; however, when I use TortoiseSVN to do a commit, it says every file has changed. The diff file generated shows Subversion removing everything and replacing it with the new version (which is exactly the same). What is going on? How do I fix it? An example diff: Index: C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html =================================================================== --- C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (revision 5) +++ C:/Users/jhollon/Documents/Visual Studio 2008/Projects/saloon/trunk/components/index.html (working copy) @@ -1,4 +1,4 @@ -<html> -<body bgcolor="#FFFFFF"> -</body> +<html> +<body bgcolor="#FFFFFF"> +</body> </html> \ No newline at end of file
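    When a diff removes every line and adds it back byte-for-byte identical, one common cause is that only the line endings changed (for example, the development copy was saved with LF and the production copy with CRLF). A hedged C# sketch for testing that hypothesis on a suspect file pair; the paths are placeholders.

        using System;
        using System.IO;

        // Hedged sketch: reports whether two files differ only in line endings.
        // The paths below are placeholders; point them at a "changed" working file
        // and a copy of its last committed version.
        class EolCheck
        {
            static string Normalize(string path)
            {
                // Collapse CRLF and lone CR to LF so only real content differences remain.
                return File.ReadAllText(path).Replace("\r\n", "\n").Replace("\r", "\n");
            }

            static void Main()
            {
                string workingCopy = @"C:\temp\index.html.working"; // placeholder
                string committed   = @"C:\temp\index.html.base";    // placeholder

                Console.WriteLine(Normalize(workingCopy) == Normalize(committed)
                    ? "Files differ only in line endings (or not at all)."
                    : "Files have real content differences.");
            }
        }

    If line endings are the culprit, setting the svn:eol-style property (for example to native) on the affected text files stops Subversion and TortoiseSVN from reporting line-ending churn as content changes.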

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level. 
Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax. 
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big, a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc., SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014. The Customer DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation which serendipitously coincided with the height of ski season.) The App: DevCon Security Reporting: Optimized & Ad Hoc Queries DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching. SSRS, SSAS, & MDX Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements. Ad Hoc Queries Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner). The Details Classic vs. columnstore before-&-after metrics are impressive:
    Scenario        Conventional Structures             Columnstore      Δ
    SSRS via SSAS   10 - 12 seconds                     1 second         >10x
    Ad Hoc          5 - 7 minutes (300 - 420 seconds)   1 - 2 seconds    >100x
    Here are two charts characterizing this data graphically.  The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes.  As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics.  Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.  
The Wins Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators.  As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude. Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired. PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways." Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the second in a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, and for which time to results for ad hoc queries dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
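    To make the nonclustered-columnstore-on-a-conventional-DW-table idea concrete, the hedged sketch below issues the SQL Server 2012 DDL from C#; the server, database, table, and column names are hypothetical stand-ins, not DevCon's actual schema.

        using System;
        using System.Data.SqlClient;

        // Hedged sketch: adds a nonclustered columnstore index to a wide DW fact table.
        // Connection string, table, and column names are hypothetical placeholders.
        class CreateColumnstore
        {
            static void Main()
            {
                const string connectionString =
                    "Data Source=MyDwServer;Initial Catalog=MyDw;Integrated Security=SSPI";

                // SQL Server 2012 syntax. In 2012 the base table becomes read-only while a
                // nonclustered columnstore index exists, which is why nightly loads via
                // partition switching (as in this case study) pair well with it; clustered
                // columnstore in SQL Server 2014 lifts that restriction.
                const string ddl = @"
                    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSecurityEvent
                    ON dbo.FactSecurityEvent (EventDateKey, CustomerKey, SensorKey, EventCount);";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(ddl, connection))
                {
                    command.CommandTimeout = 0; // building the index on a large table can take a while
                    connection.Open();
                    command.ExecuteNonQuery();
                    Console.WriteLine("Columnstore index created.");
                }
            }
        }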

    Read the article

  • Sharepoint 2010, 404 error after installation

    - by Tommy Jakobsen
    Running Windows Server 2008 R2 Standard, SQL Server 2008 Enterprise, and Team Foundation Server 2010, I installed SharePoint Server 2010 (single server). It installed correctly, and the wizard configured it without errors. When accessing the SharePoint server through http://localhost/ I get a 404 error. I also get a 404 when trying to access the admin interface on port 42620. SharePoint, TFS and Reporting Services are the only applications on my IIS, and they are NOT sharing the same port, so that can't be the error. Do you have any ideas what the problem could be? Is there some way that I can debug this?

    Read the article

  • Stable reverse port forwarding in SSH and stale sessions

    - by Vi
    I am using a VPS to forward ports behind NAT: for((;;)) { ssh -R 2222:127.0.0.1:22 [email protected]; sleep 10; } When the connection breaks somehow, the loop reconnects, but the stale session on the server still holds the listen port: Warning: remote port forwarding failed for listen port 2222 Linux vi-server.no-ip.org 2.6.18-92.1.13.el5.028stab059.3 #1 SMP Wed Oct 15 13:33:44 MSD 2008 i686 I type: vi@vi-server:~$ killall sshd Connection to vi-server.org closed by remote host. Connection to vi-server.org closed. Linux vi-server.no-ip.org 2.6.18-92.1.13.el5.028stab059.3 #1 SMP Wed Oct 15 13:33:44 MSD 2008 i686 vi@vi-server:~$ Now it's OK. What is the simplest way to make this happen automatically?

    Read the article

  • .Net Framework corrupted

    - by Samsudeen B
    Hi, We are facing a problem of .Net framework corruption for one of our clients with the following environment: OS : Windows 2008 Server SP2; Framework : .NET Framework 3.5 SP1; Application Details Database : SQL Server 2008; Server : WCF hosted webservice; Client : WPF based UI; Problem : The config files inside "..\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG" are suddenly deleted, and my application no longer works. We are not able to repair .NET or run SQL Server. The only option is to restore an earlier image of that machine. Any help is much appreciated. sam

    Read the article

  • IIS 7.0: Requiring Client Certificates causes error 500 and "page cannot be displayed"

    - by user48443
    I have two Windows 2008 x86 servers running IIS 7.0, one site on each server; both sites are SSL-enabled, using DoD-issued certificates. Both sites are accessible via https over port 443, but fail the moment Client Certificates are set to Require or Accept. The IIS log records error 500.0.64 but nothing else. I have several Windows 2008 IIS 7 x64 servers that require client certificates, and they are working as expected; it's just the two x86 servers that are being problematic.

    Read the article

  • How to sysprep SQL Server Express?

    - by Jim
    We plan to deploy a Hyper-V VHD with Windows Server 2008 R2 and SQL Server 2012 Express installed to multiple hosts. From my understanding, the correct way to do this is to install SQL Server in preparation mode, sysprep Windows, then complete the SQL Server installation when the VHD is deployed. I mostly followed the process in this blog post: http://sethusrinivasan.com/category/sysprep/ However, after the VHD is deployed, I'm unable to complete the SQL Server installation. It keeps saying "Upgrade matrix is incorrect". It seems that it's trying to upgrade itself to Enterprise edition (I was asked for a product key during install, but I skipped it). Could anyone share their experience in deploying VHDs with SQL Server (we're fine with either SQL Server 2008 R2 or 2012)? I think the source of my issue is that I can't select "Express Edition" when entering the product key at the completion stage, so the installation is trying to do an upgrade to Enterprise Edition. I have no idea why the drop-down list is empty.

    Read the article

  • Splwow64 with TS Easy Print

    - by Tim Brigham
    I have an application (Sage MIP Fund Accounting) which exports data to Excel. In this process it uses an internal print driver. Since we upgraded from 2008 to 2008 R2, this export process causes system hangs. This has been isolated down to the splwow64 executable hanging while the Excel document is building. If I kill the splwow64 executable, things function properly (I just can't print the document once it is completed). This only occurs while using printer redirection via the Remote Desktop Easy Print function; if I remove the printer redirection, things work exactly as expected. I've spent the last couple of hours looking at hotfixes and driver upgrades, since this appears to be a problem specifically with how the Remote Desktop Easy Print printer is functioning. Is anyone aware of a hotfix which would be applicable in this situation? I don't want to grab every hotfix for redirected printing and start throwing them out there.

    Read the article

  • Exchange 2010 to Exchange 2010 Public Folder Replication

    - by Archit Baweja
    We have two Exchange servers in our org, MX1 and MX2. I'm trying to replicate all MX1 public folders to MX2. I've set up replication for all the top-level folders to include the MX2 server. However, no public folders are being replicated. The event log does not show any errors. I've set the diagnostic level for all public folder diagnostics to Highest using get-eventloglevel "MSExchangeIS\9001 Public\*" | set-eventloglevel -Level Expert However, besides a 3092 event ID (type: 0x2) generated on MX1 (the source server), there are no events being generated that would notify me of any issues. Some technical details: MX1 is Windows 2008 Standard, MX2 is Windows 2008 Enterprise (eval mode right now).

    Read the article

  • How to install a new TFS checkin policy on a TFS 2010 server?

    - by rhart
    Hi, We've recently upgraded our TFS server from TFS 2008 to TFS 2010. We've been researching a couple of new add-on checkin policies we want to install. The only problem is that all the documentation I can find on adding new policies appears to be specific to TFS 2008 or earlier. Those steps involve adding registry keys which do not exist on our 2010 TFS server. Does anybody know where the process for installing new checkin policies for a TFS 2010 server, so they can be applied to Team Projects, is documented? Thanks!
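    One detail that may explain the missing registry keys: in both TFS 2008 and TFS 2010, checkin policies are registered on each client machine that runs Visual Studio / Team Explorer, not on the TFS application tier, so the keys would not normally exist on the server itself. The hedged sketch below registers a policy assembly on a Visual Studio 2010 client; the registry path, assembly name, and DLL path are assumptions and placeholders (on 64-bit clients the key sits under Wow6432Node), so verify them before relying on this.

        using Microsoft.Win32;

        // Hedged sketch: registers a custom checkin policy assembly with Visual Studio 2010
        // Team Explorer on a client machine. Key path, assembly name, and DLL path are
        // assumptions/placeholders; confirm against your environment.
        class RegisterCheckinPolicy
        {
            static void Main()
            {
                const string keyPath =
                    @"SOFTWARE\Microsoft\VisualStudio\10.0\TeamFoundation\SourceControl\Checkin Policies";

                using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
                {
                    // Value name: the policy assembly name (no extension); value: full path to the DLL.
                    key.SetValue("MyCompany.CheckinPolicies",
                                 @"C:\Program Files\MyCompany\MyCompany.CheckinPolicies.dll");
                }
            }
        }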

    Read the article

  • How can you know what is w3wp.exe doing? (or how to diagnose a performance problem)

    - by Daniel Magliola
    I'm having a performance problem in a site we've made, and I'm not exactly sure how to start diagnosing it. The short description is: We have a very small site (http://hearablog.com) with very little traffic, in a crappy dedicated server, CPU is always very high, sometimes it stays at 100% for minutes, and w3wp.exe is taking most of it. A typical scenario is w3wp.exe takes 60%, and SQL Server takes about 30%. Our DB is pretty small too. Long description and more details: The site is hosted in a very crappy server by Cari.Net. From the beginning we had the feeling that the server didn't quite behave correctly, like some things would take just too long, so this could be a configuration problem from the get-go. It may also be that we are getting a virtual server while we're supposed to have a dedicated one, although we have no evidence that'd indicate this, except for the fact that the server tends to be quite slow. The server is Windows 2008 Standard 64-bit, with SQL 2008 Express. Hardware is a Celeron 2.80 GHz, 1 GB RAM. The website is developed in ASP.Net MVC, using Entity Framework for data access. Now, this is pretty crappy hardware, but I've had other servers with these guys, with equivalent (or worse) HW, and performance is much better than this one. That said, the other servers have W2003 and SQL 2005, and I'm using ASP.Net "WebForms" 2.0, no MVC, no LINQ, no EF; so I'm not sure whether going to 2008 / the other stuff means a big performance penalty is expected. I'm serving MP3 files (5-20 MB) regularly, which is a slightly unusual load; maybe that is causing some kind of problem? Would that cause w3wp to use a lot of CPU? Disk usage seems very low. Memory is usually around 90%, but disk usage seems to indicate it's not paging much. I get tons of e-mails every day about SQL timeouts, for queries taking over 30 seconds, although all our queries are pretty straightforward (or should be, but EF may be screwing it up). This is what Resource Monitor looks like in one of these "sprints" of 100% CPU, in case there's anything useful there. And a snapshot of some performance counters: Now, what confuses me very much is that the CPU usage of w3wp is just so high. It shouldn't be doing much, really... So my questions are... Is there any way of finding out "what" it is doing? Maybe even profile it? Any performance counters I should be looking at? Is this to be expected given this hardware/software configuration? If this could be caused by some kind of configuration failure, where would you start looking? Thank you VERY much. Daniel Magliola
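    Since the app uses Entity Framework on .NET 3.5, one cheap first step is to capture the exact SQL EF generates for a suspect query and run it by hand in Management Studio, which separates "SQL Server is slow" from "w3wp is burning CPU materializing results". Below is a hedged sketch using ObjectQuery.ToTraceString(); the context and entity names are hypothetical stand-ins for the real model.

        using System;
        using System.Data.Objects;
        using System.Linq;

        // Hedged sketch: dumps the SQL that Entity Framework 1 (.NET 3.5 SP1) generates for
        // a LINQ to Entities query. "HearablogEntities" and "Episodes" are hypothetical
        // names standing in for the real object context and entity set.
        class EfTraceExample
        {
            static void Main()
            {
                using (var context = new HearablogEntities())
                {
                    IQueryable<Episode> query = context.Episodes
                        .Where(e => e.PublishedOn >= DateTime.UtcNow.AddDays(-30))
                        .OrderByDescending(e => e.PublishedOn);

                    // The runtime type of a LINQ to Entities query is ObjectQuery<T>,
                    // which exposes the generated SQL via ToTraceString().
                    var objectQuery = (ObjectQuery)query;
                    Console.WriteLine(objectQuery.ToTraceString());
                }
            }
        }

    Running the emitted SQL in Management Studio with the actual execution plan shows whether the 30-second timeouts come from the database or from the web process; if the SQL is fast there, the next suspects are EF object materialization and how the large MP3 responses are served by the worker process.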

    Read the article

  • DHCP Client Can't Find DHCP Server

    - by leeman24
    I currently have 3 machines: CentOS (router): eth1 - 18.0.168.1, eth2 - 145.165.34.1; Windows Server 2008 (server): 18.0.168.2, DHCP scope 145.165.34.10 - 145.165.34.20; Windows 7 (client): supposed to use DHCP. I can't get my Windows 7 client to get an address from the Windows Server 2008 DHCP server. Every network interface can ping every other (e.g. 18.0.168.2 can ping 18.0.168.1 & 145.165.34.1 and the other way around). My Linux machine acting as the router has default iptables rules, other than this command, which may or may not be right: iptables -I INPUT -p udp -d 18.0.168.2 --dport 67:68 -j ACCEPT I have also tried it after I flushed the iptables rules. I was looking at the dhcrelay command, but it seems CentOS doesn't have it and I am not even sure how to use it.

    Read the article

  • SharePoint 2010 Server Configuration Error -> "Cannot connect to database master"

    - by Chrish Riis
    I receive the following error when I try to configure SharePoint 2010 Server: "Cannot connect to the database master at SQL server at [computer.domain]. The database might not exist, or the current user does not have permission to connect to it." I run the following setup: Windows Server 2008 R2 Standard with SP1 and all the updates SQL Server 2008 R2 with SP1 SharePoint Server 2010 with SP1 Everything is installed on the same server (it's a test server) I have tried the following: Rebooting the server Checking the install account's DB rights (dbcreator, securityadmin - I even let it have sysadmin) Opened up the firewall on ports 1433 and 1434 Uninstalled both SQL and SP, then reinstalled them both Enabled all client protocols in SQL Server Configuration Manager Made sure I used the correct account for installing SharePoint (local admin) Useful links: TCP/IP settings – http://blog.vanmeeuwen-online.nl/2010/10/cannot-connect-to-database-master-at.html http://ybbest.wordpress.com/2011/04/22/cannot-connect-to-database-master-at-sql-server-at-sql2008r2/ Wrong slash - http://yakimadev.com/2010/11/cannot-connect-to-database-master-at-sql-server-at-serverdbname-error-during-sharepoint-2010-products-configuration-wizard-and-installation/ Port error - http://www.knowsharepoint.com/2011/08/error-connecting-to-database-server.html
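    One way to take SharePoint out of the picture is to test connectivity to master directly from the SharePoint box, running as the same account the configuration wizard uses. Below is a minimal hedged sketch; the data source is a placeholder for the [computer.domain] value from the error.

        using System;
        using System.Data.SqlClient;

        // Hedged sketch: checks basic connectivity and login mapping to the master database
        // from the SharePoint server. Replace "computer.domain" with the real SQL instance
        // name exactly as entered in the configuration wizard.
        class MasterConnectivityTest
        {
            static void Main()
            {
                const string connectionString =
                    "Data Source=computer.domain;Initial Catalog=master;" +
                    "Integrated Security=SSPI;Connect Timeout=15";

                try
                {
                    using (var connection = new SqlConnection(connectionString))
                    using (var command = new SqlCommand("SELECT @@SERVERNAME, SUSER_SNAME();", connection))
                    {
                        connection.Open();
                        using (var reader = command.ExecuteReader())
                        {
                            reader.Read();
                            Console.WriteLine("Connected to {0} as {1}",
                                reader.GetString(0), reader.GetString(1));
                        }
                    }
                }
                catch (SqlException ex)
                {
                    // Login failures, name-resolution problems, and blocked protocols surface here.
                    Console.WriteLine("Connection failed: " + ex.Message);
                }
            }
        }

    If this fails only when run as the setup account, the problem is SQL logins or permissions rather than SharePoint; if it fails for every account, look again at protocols, the firewall, and the exact instance name (including any stray slash) typed into the wizard.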

    Read the article

  • HP DL580 G5 Hyper-V networking problem

    - by mr-perfect
    Hi, I have installed the Hyper-V role on a DL580 G5 cluster. The host operating system is Server 2008 x64, patched to the maximum. Everything has gone fine, but when I install a guest operating system (a Windows Server 2008 x64), I can't reach the network from it. It sends many packets but doesn't receive any, so I can't ping the external network either. I have installed the latest drivers on the server and uninstalled the network configuration utility from the physical server, but no luck. I added a legacy adapter to the guest, bound to the same physical adapter on the host machine, but it didn't help. Any ideas welcome...

    Read the article

  • Does the Windows "Sources" folder need to be copied to C: like the "i386" folder did?

    - by James Watt
    On all flavors of Windows prior to Windows Vista, the Windows install CD contained a folder called i386. After installing Windows, this folder is supposed to be copied to the C: drive. Once the folder has been copied, if the user ever installs a program or Windows updates that require the Windows install CD, Windows will retrieve the files from the hard drive INSTEAD of prompting for the Windows CD. On newer versions of Windows, including Windows Vista, Windows 7, Server 2008 and Server 2008 R2, the i386 folder has been renamed to "sources". Should this folder be copied to the hard drive? Or do the new versions of Windows work differently (i.e. by installing all features on the hard drive to eliminate the need to ever prompt the user to insert their disc)? It does not hurt to copy the sources folder, so I have been doing it. But if I could eliminate the wasted time, it would make installations faster, which helps my customers' bottom line.

    Read the article

  • How to Remove a VM From Hyper-V Without Deleting the Configuration File?

    - by Steven Murawski
    I'm in the process of moving a number of virtual machines that are homed on shared storage (a file share, though shared cluster disk would work as well) to a new VM host with access to the same shared storage. The new host is a different build version (moving from Windows Server 2012 Beta to Windows Server 2012 RC - though this same process could be used with migrations of Windows Server 2008/2008 R2 to Windows Server 2012 as well), so I cannot migrate the machine with inbox tooling. I need to remove the VM from management of the source Hyper-V host in order to import the VM to the new Hyper-V host. I want to retain the configuration file, so I can import the VM as it stands and not need to reconfigure it. The VHD files are rather large and they are staying on the same file share, so I'd rather not duplicate them during the move process.

    Read the article

  • Seamless Remote Windows on Linux Client

    - by prestomation
    I really like the RemoteApp features of Windows Server 2008. Are there any solutions offering similar functionality for Linux clients and a Windows server? We have users that are running Linux on the desktop and rdesktop to a Windows server, mostly for Outlook. It would be nice if Outlook could behave more seamlessly. From what I can tell, rdesktop doesn't do that yet; it only implements RDP 5. I've also seen "SeamlessRDP" from Cendio, which is an extension to rdesktop, but I can't get it to work. I think it doesn't work with Vista, but that is just a guess. Any ideas? Thanks in advance. EDIT: I emailed Cendio and they confirmed SeamlessRDP doesn't work with Server 2008 for unknown reasons. It appears there is no way to do this with a 2k8 server. It's either Windows clients with RemoteApp or a pre-Vista OS with SeamlessRDP.

    Read the article

  • How to Grant IIS 7.5 access to a certificate in certificate store?

    - by thames
    In Windows 2003 it was simple to do, and one could use winhttpcertcfg.exe (download) to give the "NETWORK SERVICE" account access to a certificate. I'm now using Windows Server 2008 R2 with IIS 7.5, and I am unable to find where and how to set access permissions on a certificate in the certificate store. This post showed how to do it in Vista, and that the winhttpcertcfg features were added into the Certificates MMC; however, it doesn't seem to work with imported certificates, or doesn't work anymore on Server 2008 R2. So does anyone have any idea how to give IIS 7.5 the correct permissions to read a certificate from the certificate store? And also, which account from IIS 7.5 needs the permission?
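    For a certificate in the Local Machine store, "giving IIS access" boils down to granting the worker-process identity read access to the private key file. The hedged sketch below assumes an RSA/CSP key (CNG keys are stored elsewhere and need a different approach); the thumbprint is a placeholder, and the account can be the application pool identity (for example IIS AppPool\DefaultAppPool) or NETWORK SERVICE, depending on how the pool runs.

        using System;
        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;

        // Hedged sketch: grants read access on a machine certificate's private key file.
        // Thumbprint and account are placeholders; assumes the key is an RSA/CSP key
        // (CNG keys live elsewhere and need a different approach).
        class GrantKeyAccess
        {
            static void Main()
            {
                const string thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567"; // placeholder
                const string account = @"IIS AppPool\DefaultAppPool";                  // or "NETWORK SERVICE"

                var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
                store.Open(OpenFlags.ReadOnly);
                X509Certificate2 cert = store.Certificates
                    .Find(X509FindType.FindByThumbprint, thumbprint, false)[0];
                store.Close();

                // Locate the key container file under ProgramData\Microsoft\Crypto\RSA\MachineKeys.
                var rsa = (RSACryptoServiceProvider)cert.PrivateKey;
                string keyFileName = rsa.CspKeyContainerInfo.UniqueKeyContainerName;
                string keyFilePath = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                    @"Microsoft\Crypto\RSA\MachineKeys", keyFileName);

                FileSecurity acl = File.GetAccessControl(keyFilePath);
                acl.AddAccessRule(new FileSystemAccessRule(account, FileSystemRights.Read, AccessControlType.Allow));
                File.SetAccessControl(keyFilePath, acl);

                Console.WriteLine("Granted read on {0} to {1}", keyFilePath, account);
            }
        }

    The same permission can also be granted interactively from the Certificates MMC via All Tasks > Manage Private Keys on the certificate, which covers most of the scenarios winhttpcertcfg used to handle.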

    Read the article

  • World Wide Publishing Service very slow to restart on IIS? Why?

    - by StacMan
    Every now and then, we look to restart our IIS server by restarting the "WWW Publishing Service". On most systems this would usually only take a minute or two, but here it can often take up to 10 minutes to stop the service and restart it. Does anyone know a way to work out what is taking so much time to stop the service? After reading up around the net, I've learned this may be due to locked resources used by users and/or lower-level IIS cached items being recycled. But I can't seem to work out how to validate whether this is true or not. Also, if anyone knows how to fix or speed this up, that would be excellent. We have a large codebase which contains over 280 aspx pages across the site. Our main domain contains about 100 aspx pages, whilst the subdomains contain 15 or 20 each. Some specs: code is written in C# and runs on .NET Framework 2.0; server is Windows Web Server 2008 with IIS 7; DB running SQL Server 2008 Standard.
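    A first step is simply to measure where the time goes. The hedged sketch below (requires a reference to System.ServiceProcess.dll and an elevated prompt) times the stop and start of the W3SVC service separately, so the slow phase can be correlated with what the System and Application event logs record during that window.

        using System;
        using System.Diagnostics;
        using System.ServiceProcess;

        // Hedged sketch: times how long the World Wide Web Publishing Service (W3SVC)
        // takes to stop and to start again. Run elevated; add a reference to
        // System.ServiceProcess.dll.
        class TimeW3SvcRestart
        {
            static void Main()
            {
                using (var service = new ServiceController("W3SVC"))
                {
                    var stopWatch = Stopwatch.StartNew();
                    service.Stop();
                    service.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(15));
                    Console.WriteLine("Stop took {0:F1} seconds", stopWatch.Elapsed.TotalSeconds);

                    var startWatch = Stopwatch.StartNew();
                    service.Start();
                    service.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(15));
                    Console.WriteLine("Start took {0:F1} seconds", startWatch.Elapsed.TotalSeconds);
                }
            }
        }

    If the stop phase is the slow part, the usual suspects are worker processes waiting out the application pool shutdown time limit while requests and locked resources drain; watching when the w3wp.exe processes actually exit in Task Manager during the measured window usually makes that visible.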

    Read the article
