Search Results

Search found 3797 results on 152 pages for 'talk'.

Page 41/152 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Caching: the Good, the Bad and the Hype

    One of the more important aspects of the scalability of an ASP.NET site is caching. To cache effectively, one must understand the relative permanence and importance of the data that is presented to the user, and work out which of the four major aspects of caching should be used. There is always a compromise, but in most cases it is an easy compromise to make, considering its effects in a heavily-loaded production system.
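
    As a purely illustrative sketch of one of those caching options, output caching in an ASP.NET MVC controller might look like the following; the controller, action and data-access call are invented for the example:

        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            // Cache the rendered output of this action for 60 seconds per product id.
            // Requests arriving within that window are served from the output cache
            // instead of re-running the action and hitting the database again.
            [OutputCache(Duration = 60, VaryByParam = "id")]
            public ActionResult Details(int id)
            {
                var product = LoadProductFromDatabase(id); // hypothetical data access
                return View(product);
            }

            private object LoadProductFromDatabase(int id)
            {
                // Placeholder for a real repository or ORM call.
                return new { Id = id, Name = "Example product" };
            }
        }

    The compromise the article describes is visible even in this sketch: a 60-second duration means users may see data up to a minute stale, which is usually acceptable for a product page but not for, say, a stock ticker.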

    Read the article

  • On Handling Dates in SQL

    The calendar is inherently complex by the very nature of the astronomy that underlies the year, and the conflicting historical conventions. The handling of dates in T-SQL is even more complex because, when SQL Server was Sybase, it was forced by the lack of prevailing standards in SQL to create its own ways of processing and formatting dates and times. Joe Celko looks forward to a future when it is possible to write standard SQL date-processing code with SQL Server.
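
    One way to sidestep much of the formatting ambiguity from client code, regardless of server settings, is to pass dates as typed parameters rather than as formatted strings. A hedged C# sketch follows; the connection string and the dbo.Orders table are invented for the example:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class DateParameterDemo
        {
            static void Main()
            {
                // Passing the date as a typed parameter avoids the regional
                // ambiguity of string literals such as '01/02/2012' entirely.
                var cutoff = new DateTime(2012, 2, 1);

                using (var conn = new SqlConnection("Server=.;Database=Sales;Integrated Security=true"))
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= @cutoff", conn))
                {
                    cmd.Parameters.Add("@cutoff", SqlDbType.DateTime2).Value = cutoff;
                    conn.Open();
                    Console.WriteLine("Orders since {0:yyyy-MM-dd}: {1}",
                                      cutoff, (int)cmd.ExecuteScalar());
                }
            }
        }

    Typed parameters also remove any dependence on SET DATEFORMAT or the server's regional settings, since no string parsing happens on the server at all.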

    Read the article

  • Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0!

    mod_pagespeed is an open-source Apache module that automatically optimizes web pages and resources on them: images, CSS, JavaScript, and much more. In this episode, we'll catch up with Joshua Marantz, the tech lead of the project at Google and talk about the history of mod_pagespeed, its fast growing adoption (130K+ sites!), technical architecture and how it works under the hood. Finally, we'll talk about the upcoming 1.0 release milestone for the project. If you're curious about mod_pagespeed, then this is definitely the show you won't want to miss! (Video from GoogleDevelopers, running time 01:05:06.)

    Read the article

  • Auditing DDL Changes in SQL Server databases

    Even where Source Control isn't being used by developers, it is still possible to automate the process of tracking the changes being made to a database and put those into Source Control, in order to track what changed and when. You can even get an email alert when it happens. With suitable scripting, you can even do it if you don't have direct access to the live database. Grant shows how easy this is with SQL Compare.

    Read the article

  • Offsite Backup

    - by Grant Fritchey
    There was a recent weather event in the United States that seriously impacted our power grid and our physical well-being. Lots of businesses found that they couldn’t get to their building, or that their building was gone. Many of them got to do a full test of their disaster recovery processes. A big part of DR is having the ability to get yourself back online in a different location. Now, most of us are not going to be paying for multiple sites, but we need the ability to move to one if needed. The best thing you can do to start setting this up is to have an off-site backup. Want an easy way to automate that? I mean, yeah, you can go to tape or to a portable drive (much more likely these days) and then carry that home, but we’ve all got access to offsite storage these days: SkyDrive, DropBox, S3, and so on. How about just backing up to there? I agree. Great idea. That’s why Red Gate is setting up some methods around it. Want to take part in the early access program? Go here and try it out.

    Read the article

  • Designing Databases for Rapid Resilience

    As the volume of data increases, DBAs need to plan more actively for rapid restores in the event of failure. For this, the intelligent use of filegroups is important, particularly when the Enterprise Edition of SQL Server offers the hope of online restores. How, though, should you arrange your data on the different filegroups? What happens if the primary filegroup gets corrupted? Why back up and restore indexes?

    Read the article

  • My book is released – Async in C# 5

    - by Alex Davies
    I’m pleased to announce that my book “Async in C# 5” has been published by O’Reilly! http://oreil.ly/QQBjO3 If you want to know how to use async, and whether it’s important for your code, I thoroughly recommend reading it. It’s the best book about the subject I’ve ever written. In fact it’s probably the best book I’ve written, full stop. I may have only written one book. It also has a very fetching parrot on the cover, which would make a very good addition to your bookshelf.
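
    As a flavour of the subject matter (not an excerpt from the book), a minimal async/await sketch in C# 5 might look like this; the URL is just a placeholder:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class Program
        {
            // 'await' releases the calling thread while the download is in flight,
            // then resumes the method when the result is available.
            static async Task<int> GetPageLengthAsync(string url)
            {
                using (var client = new HttpClient())
                {
                    string body = await client.GetStringAsync(url);
                    return body.Length;
                }
            }

            static void Main()
            {
                // Main cannot be async in C# 5, so this console demo blocks on the result.
                int length = GetPageLengthAsync("http://example.com").Result;
                Console.WriteLine("Downloaded {0} characters", length);
            }
        }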

    Read the article

  • Optionally Running SPSecurity.RunWithElevatedPrivileges with Delegates

    - by Damon Armstrong
    I was writing some SharePoint code today where I needed to give people the option of running some code with elevated permissions.  When you run code in an elevated fashion it normally looks like this:

        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            //Code to run
        });

    It wasn’t a lot of code so I was initially inclined to do something horrible like this:

        public void SomeMethod(bool runElevated)
        {
            if (runElevated)
            {
                SPSecurity.RunWithElevatedPrivileges(() =>
                {
                    //Code to run
                });
            }
            else
            {
                //Copy of code to run
            }
        }

    Easy enough, but I did not want to draw the ire of my coworkers for employing the CTRL+C CTRL+V design pattern.  Extracting the code into a whole new method would have been overkill because it was a pretty brief piece of code.  But then I thought, hey, wait, I’m basically just running a delegate, so why not define the delegate once and run it either in an elevated context or stand-alone?  That resulted in this version, which I think is much cleaner because the code is only defined once and it didn’t require a bunch of extra lines of code to define a method:

        public void SomeMethod(bool runElevated)
        {
            var code = new SPSecurity.CodeToRunElevated(() =>
            {
                //Code to run
            });

            if (runElevated)
            {
                SPSecurity.RunWithElevatedPrivileges(code);
            }
            else
            {
                code();
            }
        }

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their roots in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs.

    All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. Take the classic problem of retrieving a value from an array by index: since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value at index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system would prevent us from doing anything unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked.

    In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard, and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advance quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine".

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.

    [Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.
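
    To make the array-bounds example above concrete, here is a small illustrative C# snippet showing the runtime-checked status quo the editorial describes:

        using System;

        class BoundsDemo
        {
            static void Main()
            {
                int[] values = { 10, 20 };        // an array of only two elements

                // The compiler is perfectly happy with the access below: the type
                // system knows 'values' is an int[], but not its length...
                try
                {
                    Console.WriteLine(values[4]);  // ...so the error only surfaces at run time
                }
                catch (IndexOutOfRangeException)
                {
                    // Safety is restored by exception handling, exactly as described;
                    // a dependently typed language could instead have rejected the
                    // access at compile time because the length is part of the type.
                    Console.WriteLine("Index 4 is outside the array");
                }
            }
        }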

    Read the article

  • Window Functions in SQL Server

    When SQL Server introduced Window Functions in SQL Server 2005, it was done in a rather tentative way, with only a handful of functions being introduced. This was frustrating, as they remove the last excuse for cursor-based operations by providing aggregations over a partition of the result set, and imposing an ordered sequence over a partition. Now, with SQL Server 2012, we are soon to enjoy a full range of Window Functions. They are going to make for some much simpler SQL queries.
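
    As a hedged illustration of what that buys you, the query below computes a per-customer running total in a single pass, with no cursor. The connection string and the dbo.Orders table are invented for the example, and the ROWS clause relies on the SQL Server 2012 windowing extensions the article mentions:

        using System;
        using System.Data.SqlClient;

        class WindowFunctionDemo
        {
            static void Main()
            {
                // SUM(...) OVER (PARTITION BY ... ORDER BY ...) aggregates over a
                // partition of the result set while preserving the individual rows.
                const string query = @"
                    SELECT  CustomerID,
                            OrderDate,
                            Amount,
                            SUM(Amount) OVER (PARTITION BY CustomerID
                                              ORDER BY OrderDate
                                              ROWS UNBOUNDED PRECEDING) AS RunningTotal
                    FROM    dbo.Orders;";

                using (var conn = new SqlConnection("Server=.;Database=Sales;Integrated Security=true"))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0} {1:d} {2,10} {3,10}",
                                reader["CustomerID"], reader["OrderDate"],
                                reader["Amount"], reader["RunningTotal"]);
                        }
                    }
                }
            }
        }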

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, "multi-core mania", on the conundrum of ever-increasing numbers of processor cores without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!n!) possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs, but still has performance issues if the threads need to operate on the same large piece of data.

    There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
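
    A minimal C# sketch of the shared-memory hazard described above; the counts are arbitrary, and the point is only that the result depends on how the two threads happen to interleave:

        using System;
        using System.Threading;

        class RaceDemo
        {
            static int counter = 0;

            static void Main()
            {
                // Two threads each increment a shared counter 100,000 times.
                // 'counter++' is a read-modify-write, so the interleaving of the
                // threads decides the outcome: often less than 200,000, and
                // different from run to run - exactly the irreproducible,
                // non-deterministic bugs the editorial warns about.
                var t1 = new Thread(Increment);
                var t2 = new Thread(Increment);
                t1.Start(); t2.Start();
                t1.Join();  t2.Join();
                Console.WriteLine("Expected 200000, got {0}", counter);
            }

            static void Increment()
            {
                for (int i = 0; i < 100000; i++)
                {
                    counter++; // replace with Interlocked.Increment(ref counter) to fix
                }
            }
        }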

    Read the article

  • I will be at NNUG Kristiansand tonight

    - by Sahil Malik
    SharePoint 2010 Training: more information Greetings! I will be speaking at NNUG Kristiansand tonight (sorry for the very short notice). So I was thinking, what should I demo tonight? Hmm!!! Instead of using any slides or such material, what I thought would be a tonne of fun would be to develop a service bus based application running in Azure. This will be a good opportunity to code and talk, and to run into some snafus and show off some VS2012 improvements/annoyances. At the end of an approximately 1-hour talk, I hope to have an application ready that you can run yourself by the end of the evening. hehe :) .. now that’s cool isn’t it? :) .. Time permitting, I will even tie SP2013 + ADFSv2 + Claims into it, just to be cool. Hope to see you there! Here is the registration link - http://www.nnug.no/Avdelinger/Kristiansand/Moter2/NNUG-Kristiansand---Oktober/ Read full article ....

    Read the article

  • Collecting the Information in the Default Trace

    The default trace is still the best way of getting important information to provide a security audit of SQL Server, since it records such information as logins, changes to users and roles, changes in object permissions, error events and changes to both database settings and schemas. The only trouble is that the information is volatile. Feodor shows how to squirrel the information away to provide reports, check for unauthorised changes and provide forensic evidence.

    Read the article

  • Google Games Chat #7

    The Google Games Chat (official motto: "Now with 30% less swearing") is back! And we're ready to talk about all things Halloween related. Like zombies! And vampires! And things in games that scare us, like corrupt save game files. But we probably won't get Todd to talk about Amnesia: TDD, because he's too scared to play it. What a chicken. (Video from GoogleDevelopers.)

    Read the article

  • Windows Azure from a Data Perspective

    Before creating a data application in Windows Azure, it is important to make choices based on the type of data you have, as well as the security and the business requirements. There are a wide range of options, because Windows Azure has intrinsic data storage, completely separate from SQL Azure, that is highly available and replicated. Your data requirements are likely to dictate the type of data storage options you choose.

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message and, at worst, attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant, but is having a performance issue, more than likely in the middle of the night when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next-best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into performance monitor and have it tell you what's going on with your application at the time.

    For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks. I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will. Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM.

    The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library and then select Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file: /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt

    Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the % Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.
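
    For anyone who prefers to wire the threshold check up in code rather than through a Data Collector Set, a rough C# sketch of the same idea follows. The instance name and the path to StackOMatic.exe are hypothetical placeholders, and the Process-category instance name must match the actual executable name of the monitored service:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class CpuWatcher
        {
            static void Main()
            {
                // Watch the monitored service's CPU usage and run the stack-dumping
                // tool whenever it climbs above 50%.
                var cpu = new PerformanceCounter("Process", "% Processor Time",
                                                 "RedGate.Response.Engine.Alerting.Base.Service");
                cpu.NextValue();                 // the first sample is always 0; prime the counter
                while (true)
                {
                    Thread.Sleep(5000);
                    // The Process counter reports the sum across cores, so normalize it.
                    float usage = cpu.NextValue() / Environment.ProcessorCount;
                    if (usage > 50f)
                    {
                        Process.Start("StackOMatic.exe",
                            "/PN:RedGate.Response.Engine.Alerting.Base.Service " +
                            "/OUT:c:\\users\\administrator\\MonitorLog.txt");
                    }
                }
            }
        }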

    Read the article

  • How to stop Office 2010 changing " and ' to smart quotes

    - by fatherjack
    I have recently upgraded to Office 2010 at work and there are a few things that are a real problem for me. As a T-SQL developer and SQL Server DBA, I copy and paste code to and from various applications, and if Word gets involved it can have disastrous consequences. There is an option that appears to be defaulted to "on" that changes a straight quote to what Word describes as a smart quote - see the image below. Note - the single quote suffers from the same effect. Now, getting to the point that...(read more)

    Read the article

  • Preventing Problems in SQL Server

    It is never a good idea to let your users be the ones to tell you of database server outages. It is far better to be able to spot potential problems by being alerted for the most relevant conditions on your servers, at the best threshold. This will take time and patience, but the reward will be an alerting system that allows you to deal more effectively with issues before they involve system downtime.

    Read the article

  • Game-over! Gaining Physical access to a computer

    Security requires defense in depth. The cleverest intrusion detection system, combined with the best antivirus, won’t help you if a malicious person can gain physical access to your PC or server. A routine job, helping a family member remove a malware infection, brings it home to Wesley just how easy it is to get a command prompt with SYSTEM access on any PC, and inspires him to give a warning about the consequences.

    Read the article

  • SharePoint Scenario Framework

    - by Damon
    I've worked with SharePoint for some time now, and I like to think that I know all there is to know about it.  Deep down I know that's not true, but it's a fun delusion.  However, I found out yesterday that there is a mechanism in SharePoint called the Scenario Framework that has been around for a while, but that I had no idea ever existed.  It is used to maintain state between the pages of multi-page web forms and helps manage the navigation between those pages.  If building out multi-page forms in SharePoint is in your plans, you can find more information about the Scenario Framework in Waldek Mastykarz's blog entry on the subject.

    Read the article

  • No Significant Fragmentation? Look Closer…

    If you are relying on 'best-practice' percentage-based fragmentation thresholds when creating an index maintenance plan for a SQL Server that checks the fragmentation in your pages, you may miss occasional 'edge' conditions on larger tables that will cause severe degradation in performance. It is worth being aware of the patterns of data access in particular tables when judging the best threshold figure to use.

    Read the article

  • What job is most similar to being a DBA?

    - by fatherjack
    As long as I have worked with computers, and that's a length of time that may be easier on the eye when converted to dog-years, computers have been compared with cars. I guess the car was the most complicated thing in our lives until the PC arrived. We had plenty of time to get used to the car and how it worked, or not, and how it gradually became more complex. We can compare backups to spare tyres, CPU cores to pistons ("the more you blow, the faster you go"), and so on. I am not aware...(read more)

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing, with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, with all the growth in virtual machines, hosted machines, and hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy.

    With all this momentum to having external services manage more and more of the infrastructure that’s not business unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That’s what we’re working on here at Red Gate. Right now it’s a private test, but we’re growing it and developing it and it’ll be going to a public beta, probably (hopefully) this year. I’m running it on my machines right now.

    The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It’s actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that’s a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything.

    Maybe it’s just me, but I’m really excited by this. I think we’re getting to a place where we can really help the small and medium-sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins can finally set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you’re ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • Content Weighting and Sociology

    - by Chris Massey
    I’ve had loads of fantastic feedback on the concept and early curation wireframes I posted on the labs, and it’s led to some further thoughts on the topic of voting. More specifically, thoughts about the kinds of behaviour and values a platform encourages in its users via the set of available actions. StackOverflow is a very good example of this kind of sociology in action, not only via the set of available actions, but through the reputation system it uses to both reward and control its users. In our case (specifically, in the case of the curation model I’ve been talking about thus far), the main considerations are how the quality of content is judged, and how to make sure each piece of curated content gets a fair hearing. Based on the feedback and conversations I’ve had with many of you over the last few days, a few considerations came to light about how we might need to weight and display our curations, and I’ve written about that more extensively over on the labs themselves – have a read and let me know what you think.

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >