Search Results

Search found 50994 results on 2040 pages for 'simple solution'.

  • Caching: the Good, the Bad and the Hype

    One of the more important aspects of the scalability of an ASP.NET site is caching. To cache effectively, one must understand the relative permanence and importance of the data that is presented to the user, and work out which of the four major aspects of caching should be used. There is always a compromise, but in most cases it is an easy compromise to make, considering its effect on a heavily-loaded production system.

    Read the article

  • Designing Efficient SQL: A Visual Approach

    Sometimes, it is a great idea to push away the keyboard when tackling the problems of an ill-performing, complex query, and take up pencil and paper instead. By drawing a diagram to show all of the tables involved, the joins, the volume of data involved, and the indexes, you'll see more easily the relative efficiency of the possible paths that your query could take through the tables.

    Read the article

  • Reliable Storage Systems for SQL Server

    By validating the IO path before commissioning the production database system, and performing ongoing validation through page checksums and DBCC checks, you can hopefully avoid data corruption altogether, or at least nip it in the bud. If corruption occurs, then you have to make the right decisions fast to deal with it. Rod Colledge explains how a pessimistic mindset can be an advantage.

    Read the article

  • SSIS Basics: Using the Merge Join Transformation

    SSIS is able to take sorted data from more than one OLE DB data source and merge it into one table, which can then be sent to an OLE DB destination. This 'Merge Join' transformation works in a similar way to a SQL join, by specifying a 'join key' relationship, and can save a great deal of processing on the destination. Annette Allen, as usual, gives clear guidance on how to do it.

    Read the article

  • Auditing DDL Changes in SQL Server databases

    Even where Source Control isn't being used by developers, it is still possible to automate the process of tracking the changes being made to a database and putting those changes into Source Control, in order to track what changed and when. You can even get an email alert when it happens. With suitable scripting, you can do this even if you don't have direct access to the live database. Grant shows how easy this is with SQL Compare.

    Read the article

  • .NET Developer Basics – Recursive Algorithms

    Recursion can be a powerful programming technique when used wisely. Some data structures, such as trees, lend themselves far more easily to manipulation by recursive techniques. As it is also a classic Computer Science topic, it is often used in technical interviews to probe a candidate's grounding in basic programming techniques. Whatever the reason, it is well worth brushing up one's understanding with Damon's introduction to Recursion.
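
    As a taste of why trees and recursion fit together so naturally, here is a minimal sketch (not taken from Damon's article; the Node type and TreeSum function are illustrative only): each call handles one node and delegates the two subtrees to smaller instances of the same problem.

        #include <memory>

        // Hypothetical binary tree node, for illustration.
        struct Node {
            int value;
            std::unique_ptr<Node> left, right;
        };

        // Recursively sum every value in the tree. The null node is the
        // base case that stops the recursion; each subtree is a smaller
        // instance of the same problem.
        int TreeSum(const Node* node) {
            if (node == nullptr) return 0;            // base case
            return node->value
                 + TreeSum(node->left.get())          // recurse into left subtree
                 + TreeSum(node->right.get());        // recurse into right subtree
        }

    The iterative equivalent needs an explicit stack to remember where it has been; the recursive version gets that bookkeeping for free from the call stack.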

    Read the article

  • Offsite Backup

    - by Grant Fritchey
    There was a recent weather event in the United States that seriously impacted our power grid and our physical well-being. Lots of businesses found that they couldn't get to their building, or that their building was gone. Many of them got to do a full test of their disaster recovery processes. A big part of DR is having the ability to get yourself back online in a different location. Now, most of us are not going to be paying for multiple sites, but we need the ability to move to one if needed. The best thing you can do to start setting this up is to have an off-site backup. Want an easy way to automate that? I mean, yeah, you can go to tape or to a portable drive (much more likely these days) and then carry that home, but we've all got access to offsite storage these days: SkyDrive, DropBox, S3, etc. How about just backing up to there? I agree. Great idea. That's why Red Gate is setting up some methods around it. Want to take part in the early access program? Go here and try it out.

    Read the article

  • Custom Extension not showing up within experimental instance

    - by user1333524
    I have a VSIX extension that I created which shows up in Visual Studio 2010 and works as designed. However, I am attempting to build some Visual Studio automation which relies on this extension. Although it is present in the project where I am building my automation package, when I run the package project the Visual Studio experimental instance loads with no problem, but when I look in the Extension Manager I do not see my registered extension (even though it shows up in the Visual Studio project where I am building it). The extension is a custom shell for LightSwitch, which may be a clue as to why it is not showing up in my experimental instance, though I'm not sure, as I see other extensions I built and registered for LightSwitch showing up. Of course, my real issue is that when I attempt to load a solution that has a dependency on my LightSwitch extension, one which otherwise loads fine outside of the Experimental Instance, the load fails because Visual Studio can't locate my custom extension within the experimental instance.

    Read the article

  • Will this make the object thread-safe?

    - by sharptooth
    I have a native Visual C++ COM object and I need to make it completely thread-safe to be able to legally mark it as "free-threaded" in the system registry. Specifically, I need to make sure that no more than one thread ever accesses any member variable of the object simultaneously. The catch is I'm almost sure that no sane consumer of my COM object will ever try to use the object from more than one thread simultaneously. So I want the solution to be as simple as possible, as long as it meets the requirement above. Here's what I came up with. I add a mutex or critical section as a member variable of the object. Every COM-exposed method will acquire the mutex/section at the beginning and release it before returning control. I understand that this solution doesn't provide fine-grained access and might slow execution down, but since I suppose simultaneous access will not really occur, I don't care about that. Will this solution suffice? Is there a simpler solution?
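
    For what it's worth, a minimal sketch of that pattern in C++ might look like the following (the class and member names are hypothetical, and a RAII guard is used so that early returns still release the critical section):

        #include <windows.h>

        // RAII guard: enters the critical section on construction and
        // leaves it on destruction, so every exit path unlocks.
        class Lock {
            CRITICAL_SECTION& cs_;
        public:
            explicit Lock(CRITICAL_SECTION& cs) : cs_(cs) { EnterCriticalSection(&cs_); }
            ~Lock() { LeaveCriticalSection(&cs_); }
        };

        class MyComObject /* : public ISomeInterface */ {
            CRITICAL_SECTION cs_;
            long counter_;   // example member state to protect
        public:
            MyComObject() : counter_(0) { InitializeCriticalSection(&cs_); }
            ~MyComObject() { DeleteCriticalSection(&cs_); }

            // Every COM-exposed method takes the lock before touching members.
            HRESULT Increment() {
                Lock guard(cs_);
                ++counter_;
                return S_OK;
            }
        };

    Since the lock is only contended if a caller really does use the object from two threads at once, the uncontended acquire/release cost is small, which matches the "simple over fine-grained" trade-off described above.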

    Read the article

  • Lift session valid ajax callback from a static javascript

    - by ChrisJamesC
    I am currently implementing a graph visualisation tool using Lift on the server side and d3 (a JavaScript visualisation framework) for all the visualisation. The problem I have is that, in the script, I want to get session-dependent data from the server. So basically, my objective is to write Lift-valid ajax callbacks in a static JS script.

    What I have tried so far (if you feel that the best solution is one of these, feel free to post a detailed answer telling me how to use it exactly and how it completely solves my problem):

    - REST interface. Usually what one would do to get data from a JavaScript function in Lift is to create a REST interface. However, this interface will not be linked to any session. This is the solution I got from my previous question: Get json data in d3 from lift snippet.
    - Give the function as an argument of the script. Another solution would be to give the ajax callback as an argument of the main script called to generate my graph. However, I expect to have a lot of callbacks and I don't want to have to mess with the arguments of my script.
    - Write the ajax callback in another script using Lift and call it from the main script. This solution, which is similar to a hidden text input, is probably the most likely to work. However, it is not elegant, and it would mean that I would have to load a lot of scripts on page load, which is not really convenient.
    - Write the whole script in Lift and then serve it to the client. This solution can be elegant; however, my script is very long and I would really prefer that it remain static.

    What I want, on the client side: while reviewing the source code of my webpage, I found that the callback for an ajaxSelect is:

    <select onchange="liftAjax.lift_ajaxHandler('F966066257023LYKF4=' + encodeURIComponent(this.value), null, null, null)" name="F96606625703QXTSWU" id="node_delete" class="input">

    Moreover, there is a variable containing the state of the page at the end of the webpage:

    var lift_page = "F96606625700QRXLDO";

    So, I am wondering if it is possible to make my ajax call pass as valid using this liftAjax.lift_ajaxHandler function, but I don't know the exact syntax to use.

    On the server side: since I "forged" a request on the client side, I would now like to receive the request on the server side and dispatch it to the correct function. This is where the LiftRules.dispatch object seems the best solution: when it is called, all the session management has been done (the request is authenticated and linked to a session), but I don't know how to write the correct piece of code in the append function.

    Remark: in Lift, all names of variables are changed to a random string in order to increase security. I would like to have the same behavior in my application, even if that will probably mean that I will have to "give" the JavaScript these values. However, an array of 15 string values is still a better trade-off than 15 functions as arguments of a JavaScript function.

    Read the article

  • How to mimic polymorphism in classes with template methods (c++)?

    - by davide
    In the problem I am facing I need something which works more or less like a polymorphic class, but which would allow for virtual template methods. The point is, I would like to create an array of subproblems, each one being solved by a different technique implemented in a different class but exposing the same interface, and then pass a set of parameters (which are functions/functors; this is where templates come in) to all the subproblems and get back a solution. If the parameters were, e.g., ints, this would be something like:

        struct subproblem {
            ...
            virtual void solve(double* solution, double parameter) = 0;
        };

        struct subproblem0 : public subproblem {
            ...
            virtual void solve(double* solution, double parameter) { ... }
        };

        struct subproblem1 : public subproblem {
            ...
            virtual void solve(double* solution, double parameter) { ... }
        };

        int main() {
            subproblem* problem[2];
            problem[0] = new subproblem0();
            problem[1] = new subproblem1();
            double argument0(0), argument1(1), sol0[2], sol1[2];
            for (unsigned int i(0); i < 2; ++i) {
                problem[i]->solve(&(sol0[i]), argument0);
                problem[i]->solve(&(sol1[i]), argument1);
            }
            return 0;
        }

    But the problem is, I need the arguments to be something like Arg<T1,T2> argument0(f1,f2), and thus the solve method to be something of the likes of template<typename T1, typename T2> void solve(double* solution, Arg<T1,T2> parameter), which obviously can't be declared virtual (and so can't be called through a pointer to the base class)... Now I'm pretty stuck and don't know how to proceed...
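
    One common workaround, sketched below, is type erasure: fix the virtual signature to a type-erased callable such as std::function, which any Arg<T1,T2> that is invocable with the right signature converts to. This is a sketch only; it assumes the erased call signature is double(double), so it would need adapting to whatever interface Arg<T1,T2> really exposes.

        #include <functional>

        struct subproblem {
            virtual ~subproblem() {}
            // Not a template, so it can be virtual and called via subproblem*.
            virtual void solve(double* solution, std::function<double(double)> parameter) = 0;
        };

        struct subproblem0 : public subproblem {
            void solve(double* solution, std::function<double(double)> parameter) {
                *solution = parameter(0.0);        // placeholder technique
            }
        };

        struct subproblem1 : public subproblem {
            void solve(double* solution, std::function<double(double)> parameter) {
                *solution = 2.0 * parameter(1.0);  // placeholder technique
            }
        };

    The cost is one extra indirection each time the erased functor is called; the benefit is that the per-subproblem techniques stay behind a single non-template virtual interface.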

    Read the article

  • On Handling Dates in SQL

    The calendar is inherently complex by the very nature of the astronomy that underlies the year, and the conflicting historical conventions. The handling of dates in TSQL is even more complex because, when SQL Server was Sybase, it was forced by the lack of prevailing standards in SQL to create its own ways of processing and formatting dates and times. Joe Celko looks forward to a future when it is possible to write standard SQL date-processing code with SQL Server.

    Read the article

  • Point covering problem

    - by Sean
    I recently had this problem on a test: given a set of points m (all on the x-axis) and a set n of lines with endpoints [l, r] (again on the x-axis), find the minimum subset of n such that all points are covered by a line. Prove that your solution always finds the minimum subset. The algorithm I wrote for it was something to the effect of the following (say lines are stored as arrays with the left endpoint in position 0 and the right in position 1):

        algorithm coverPoints(set[] m, set[][] n):
            chosenLines = []
            while m is not empty:
                minX = min(m)
                bestLine = n[0]
                for i = 1 to length of n:
                    if n[i][0] <= minX and n[i][1] > bestLine[1] then
                        bestLine = n[i]
                add bestLine to chosenLines
                for i = 0 to length of m:
                    if m[i] <= bestLine[1] then
                        delete m[i] from m
            return chosenLines

    I'm just not sure if this always finds the minimum solution. It's a simple greedy algorithm, so my gut tells me it won't, but one of my friends who is much better than me at this says that for this problem a greedy algorithm like this always finds the minimal solution. For proving that mine always finds the minimal solution, I did a very hand-wavy proof by contradiction where I made an assumption that probably isn't true at all; I forget exactly what I did. If this isn't a minimal solution, is there a way to do it in less than something like O(n!) time? Thanks
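
    Your friend is right: for interval point-covering, the greedy choice (for the leftmost uncovered point, take the covering line that extends furthest right) is optimal, because any line an optimal cover uses for that point ends no further right, so it can be exchanged for the greedy line without uncovering anything. Here is a minimal C++ sketch of the same algorithm (illustrative code, with sorting added so it runs in O(m log m + n log n) rather than the quadratic scan above):

        #include <algorithm>
        #include <utility>
        #include <vector>

        using Interval = std::pair<double, double>;   // [left, right]

        std::vector<Interval> coverPoints(std::vector<double> pts,
                                          std::vector<Interval> lines) {
            std::sort(pts.begin(), pts.end());
            std::sort(lines.begin(), lines.end());    // by left endpoint
            std::vector<Interval> chosen;
            std::size_t i = 0, j = 0;
            while (i < pts.size()) {
                double bestRight = pts[i] - 1.0;      // sentinel below the point
                Interval best{};
                // consider every line starting at or before the uncovered point
                while (j < lines.size() && lines[j].first <= pts[i]) {
                    if (lines[j].second > bestRight) {
                        bestRight = lines[j].second;
                        best = lines[j];
                    }
                    ++j;
                }
                if (bestRight < pts[i]) break;        // no line covers pts[i]: no cover exists
                chosen.push_back(best);
                while (i < pts.size() && pts[i] <= bestRight) ++i;  // skip covered points
            }
            return chosen;
        }

    Note that j never moves backwards: any line passed over while covering an earlier point has a right endpoint no greater than the one chosen, so it can never cover a point the chosen line missed.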

    Read the article

  • Designing Databases for Rapid Resilience

    As the volume of data increases, DBAs need to plan more actively for rapid restores in the event of failure. For this, the intelligent use of filegroups is important, particularly when the Enterprise Edition of SQL Server offers the hope of online restores. How, though, should you arrange your data on the different filegroups? What happens if the primary filegroup gets corrupted? Why backup and restore indexes?

    Read the article

  • No Significant Fragmentation? Look Closer…

    If you are relying on 'best-practice' percentage-based thresholds when creating an index maintenance plan that checks the fragmentation in your SQL Server's pages, you may miss occasional 'edge' conditions on larger tables that will cause severe degradation in performance. It is worth being aware of the patterns of data access in particular tables when judging the best threshold figure to use.

    Read the article

  • How to invert alternate bits of a number

    - by Cupidvogel
    The problem is how to invert alternate bits of a number, starting from the LSB. Currently what I am doing is first running

        count = -1
        while n:
            n >>= 1
            count += 1

    to find the position of the leftmost set bit, and then running a loop to invert every alternate bit:

        i = 0
        while i <= count:
            num ^= 1 << i
            i += 2

    Is there a quick hack solution instead of this rather boring loop solution? Of course, the solution can't make any assumption about the size of the integer.
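
    Here is a sketch of the usual mask-based hack, written in C++ for a fixed 64-bit width (the question's constraint of arbitrary-size integers is deliberately not handled; in Python you would build the 0x55...5 mask to match the bit length of n): smear the most significant set bit downwards to get an all-ones mask up to the MSB, then XOR with the alternating pattern restricted to that range.

        #include <cstdint>

        uint64_t invertAlternateBits(uint64_t n) {
            if (n == 0) return 0;
            // Smear the MSB down: ones = all ones up to and including the MSB of n.
            uint64_t ones = n;
            ones |= ones >> 1;  ones |= ones >> 2;  ones |= ones >> 4;
            ones |= ones >> 8;  ones |= ones >> 16; ones |= ones >> 32;
            // Flip bits 0, 2, 4, ... that lie at or below the MSB.
            return n ^ (0x5555555555555555ULL & ones);
        }

    For example, n = 0b101101 gives ones = 0b111111, so the result is 0b101101 ^ 0b010101 = 0b111000, the same as the loop version flipping bits 0, 2 and 4.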

    Read the article

  • Optionally Running SPSecurity.RunWithElevatedPrivileges with Delegates

    - by Damon Armstrong
    I was writing some SharePoint code today where I needed to give people the option of running some code with elevated permissions. When you run code in an elevated fashion it normally looks like this:

        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            //Code to run
        });

    It wasn't a lot of code, so I was initially inclined to do something horrible like this:

        public void SomeMethod(bool runElevated)
        {
            if(runElevated)
            {
                SPSecurity.RunWithElevatedPrivileges(() =>
                {
                    //Code to run
                });
            }
            else
            {
                //Copy of code to run
            }
        }

    Easy enough, but I did not want to draw the ire of my coworkers for employing the CTRL+C CTRL+V design pattern. Extracting the code into a whole new method would have been overkill because it was a pretty brief piece of code. But then I thought: hey, wait, I'm basically just running a delegate, so why not define the delegate once and run it either in an elevated context or standalone? That resulted in this version, which I think is much cleaner because the code is only defined once and it doesn't require a bunch of extra lines to define a method:

        public void SomeMethod(bool runElevated)
        {
            var code = new SPSecurity.CodeToRunElevated(() =>
            {
                //Code to run
            });
            if(runElevated)
            {
                SPSecurity.RunWithElevatedPrivileges(code);
            }
            else
            {
                code();
            }
        }

    Read the article

  • My book is released – Async in C# 5

    - by Alex Davies
    I'm pleased to announce that my book "Async in C# 5" has been published by O'Reilly! http://oreil.ly/QQBjO3 If you want to know about how to use async, and whether it's important for your code, I thoroughly recommend reading it. It's the best book about the subject I've ever written. In fact, it's probably the best book I've written, full stop. I may have only written one book. It also has a very fetching parrot on the cover, which would make a very good addition to your bookshelf.

    Read the article

  • Collecting the Information in the Default Trace

    The default trace is still the best way of getting important information to provide a security audit of SQL Server, since it records such information as logins, changes to users and roles, changes in object permissions, error events and changes to both database settings and schemas. The only trouble is that the information is volatile. Feodor shows how to squirrel the information away to provide reports, check for unauthorised changes and provide forensic evidence.

    Read the article

  • Window Functions in SQL Server

    When SQL Server introduced Window Functions in SQL Server 2005, it was done in a rather tentative way, with only a handful of functions being introduced. This was frustrating, as they remove the last excuse for cursor-based operations by providing aggregations over a partition of the result set, and imposing an ordered sequence over a partition. Now, with SQL Server 2012, we are soon to enjoy a full range of Window Functions. They are going to make for some much simpler SQL queries.

    Read the article

  • How can I copy files in the middle of a build in Team System?

    - by Dana
    I have two solutions that I want to include in a build. Solution two requires the DLLs from solution one to build successfully. Solution two has a Binaries folder into which the DLLs from solution one need to be copied before solution two is built. I've been trying an AfterBuild target, hoping that it would copy the items after the first SolutionToBuild, but it doesn't fire then. I'm guessing that it probably fires after both solutions have compiled, but that's not what I want.

        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Main/Framework.sln">
          <Targets>AfterCompileFramework</Targets>
          <Properties></Properties>
        </SolutionToBuild>
        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../../Dashboard/Main/Dashboard.sln">
          <Targets></Targets>
          <Properties></Properties>
        </SolutionToBuild>

        <ItemGroup>
          <FrameworkBinaries Include="$(DropLocation)\$(BuildNumber)\Release\Framework.*.dll"/>
        </ItemGroup>
        <Message Text="FrameworkBinaries: @(FrameworkBinaries)" Importance="high"/>
        <Copy SourceFiles="@(FrameworkBinaries)" DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"/>

    Read the article

  • SharePoint Scenario Framework

    - by Damon
    I've worked with SharePoint for some time now, and I like to think that I know all there is to know about it. Deep down I know that's not true, but it's a fun delusion. However, I found out yesterday that there is a mechanism in SharePoint called the Scenario Framework that has been around for a while but that I had no idea ever existed. It is used to maintain state between multi-page web forms and helps manage the navigation between those pages. If building out multi-page forms in SharePoint is in your plans, you can find more information about the Scenario Framework in Waldek Mastykarz's blog entry on the subject.

    Read the article

  • Windows Azure from a Data Perspective

    Before creating a data application in Windows Azure, it is important to make choices based on the type of data you have, as well as the security and the business requirements. There are a wide range of options, because Windows Azure has intrinsic data storage, completely separate from SQL Azure, that is highly available and replicated. Your data requirements are likely to dictate the type of data storage options you choose.

    Read the article
