Search Results

Search found 5279 results on 212 pages for 'execution counter'.

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into that warehouse. The data I was importing from the business database was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update, or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse whose MERGE statement took the rows from the working table and updated the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

        CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran
            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;
            delete from Integration.WorkPolicy
            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were being inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. What seemed strange to me, though, was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.

    I was beginning to suspect that my problem was because the work table was being stored as a heap. Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even a simple SELECT COUNT(*) FROM Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research the allocation and deallocation of pages for tables stored as a heap, but I haven't, and my original assumption that a heap table with no rows would only need to read one page to answer any query was definitely proven wrong. Some sort of physical defragmentation of the table might have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I might not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
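    A minimal T-SQL sketch of that fix is below; the index name and key column are illustrative choices only, not necessarily what the project ended up with:

        -- Empty the heap first so the index build has nothing to sort
        TRUNCATE TABLE Integration.MergePolicy;

        -- Any clustered index removes the heap; clustering on the MERGE join key is a reasonable default
        CREATE CLUSTERED INDEX CIX_MergePolicy ON Integration.MergePolicy (PolicyId);

        -- STATISTICS IO is what exposed the problem, so it is also the quickest way to confirm the fix
        SET STATISTICS IO ON;
        SELECT COUNT(*) FROM Integration.MergePolicy;

    After the rebuild, the empty table should report a single-page read instead of hundreds of thousands of logical reads.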

  • Lucid hangs at booting after kernel upgrade

    - by Thomas Deutsch
    This weekend, one of our servers running Lucid installed some upgrades: libgcrypt11 1.4.4-5ubuntu2.1, linux-firmware 1.34.14, linux-image-2.6.32-41-generic 2.6.32-41.91, linux-libc-dev 2.6.32-41.91. Afterwards it rebooted, since this was a kernel upgrade. Now it hangs at booting, after /scripts/init-bottom. init-bottom itself should not be the problem; the last line I can see is "done", so the problem has to come shortly after that. http://manpages.ubuntu.com/manpages/hardy/man8/initramfs-tools.8.html tells me that the next step is that procfs and sysfs are moved to the real rootfs and execution is turned over to the init binary, which should now be found in the mounted rootfs. But I don't know how and where. The problem exists with older kernels too, and this fix doesn't help: http://www.tummy.com/journals/entries/jafo_20111003_160440 Does anyone have an idea?

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts). If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance. There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong. Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan. Today, I am going to look at a case where poor cardinality estimation is Microsoft's fault, and not yours.

    SQL Server 2005

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): there is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified. The cardinality estimate of 45 rows at the Index Seek is exactly correct. The table is not very large and there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right. Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row. The plan shows that the Key Lookup is expected to be executed 45 times. The estimate for the Inner Join output is also correct - 45 rows from the seek joining to one row each time gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates. Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here. All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: the optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup. This is a good optimization - it makes sense to filter rows out as early as possible. Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date. Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup - clearly not right! This misinformation also confuses SQL Sentry Plan Explorer, which shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total).
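    If you want to see the skewed estimate for yourself without opening a graphical plan, one option (a sketch added here, not part of the original article) is to ask SQL Server for per-operator row counts and compare the EstimateRows and Rows columns for the Key Lookup operator:

        -- Returns an extra result set with one row per plan operator,
        -- including actual Rows and EstimateRows columns
        SET STATISTICS PROFILE ON;

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

        SET STATISTICS PROFILE OFF;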
    Workarounds

    One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem, of course):

        CREATE INDEX nc1 ON Production.TransactionHistory (ProductID) INCLUDE (TransactionDate);

    With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected. We could also force the use of the ultimate covering index (the clustered one):

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th WITH (INDEX(1))
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    Summary

    Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal. Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong. It's not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world), but it easily can be much worse, and there's not much you can do about it. Any decisions made by the optimizer after such a lookup could be based on very wrong information - which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]

  • What are the 'must know' GDB commands?

    - by Chris Smith
    I'm starting to get the hang of GDB, but everything still feels much slower than when debugging in Eclipse or Visual Studio. Are there any GDB commands you find particularly useful/productive? My life became dramatically better when I discovered: list - display source code near the current instruction. But that is still pretty basic. (And unnecessary when running GDB from Emacs.) Is there any way to do things like set up a watch window? (Print and update the result of an expression every time execution stops.)

  • Managed code and the Shell – Do?

    Back in 2006 I wrote a blog post titled: Managed code and the Shell – Don't!. Please visit that post to see why that advice was given. The crux of the issue has been addressed in the latest CLR via In-Process Side-by-Side Execution. In addition to the MSDN documentation I just linked, there is also an MSDN article on the topic: In-Process Side-by-Side. Now, even though the major technical impediment seems to be removed, I don't know if Microsoft is now officially supporting managed extensions to the shell. Either way, I noticed a CodePlex project that is marching ahead to enable exactly that: Managed Mini Shell Extension Framework. Not much activity there, but maybe it will grow once .NET 4 is released... Comments about this post are welcome at the original blog.

  • Funny statements, quotes, phrases, errors found on technical Books [closed]

    - by Felipe Fiali
    I found some funny or redundant statements in technical books I've read that I'd like to share. And I mean good, serious, technical books. OK, so starting it all:

    The .NET Framework doesn't support teleportation - from the MCTS 70-536 Training Kit book, .NET Framework 2.0 Application Development Foundation: "Teleportation in science fiction is a good example of serialization (though teleportation is not currently supported by the .NET Framework)."

    C# 3 is sexy - from Jon Skeet's C# in Depth, Second Edition: "You may be itching to get on to the sexy stuff from C# 3 by this point, and I don’t blame you."

    Instantiating a class - from Introduction to Development II in Microsoft Dynamics AX 2009: "To instantiate a class, is to create a new instance of it."

    Continue or break - from Introduction to Development II in Microsoft Dynamics AX 2009: "Continue and break commands are used within all three loops to tell the execution to break or continue."

    These are just a few. I'll post some more later. Share some that you might have found too.

  • gtk glade need help

    - by shiv garg
    I am using Glade to make a user interface, and I have successfully generated the .glade file. Now I have to include this file in my C code. I am using the following code:

        #include <stdlib.h>
        #include <gtk/gtk.h>

        int main (int argc, char *argv[])
        {
            GtkBuilder *builder;
            GtkWidget *window, *button;

            gtk_init (&argc, &argv);

            builder = gtk_builder_new ();
            gtk_builder_add_from_file (builder, "shiv.glade", NULL);

            window = GTK_WIDGET (gtk_builder_get_object (builder, "window1"));
            button = GTK_WIDGET (gtk_builder_get_object (builder, "button1"));

            g_object_unref (G_OBJECT (builder));

            gtk_widget_show (button);
            gtk_widget_show (window);

            gtk_main ();
            return 0;
        }

    My UI is a simple window having a button, without any callback function. I am getting the following error on execution:

        GTK-CRITICAL **: IA__gtk_widget_show assertion 'GTK_IS_WIDGET(widget)' failed

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site, and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me: My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and I focus on SQL Server data warehouse design, Analysis Services and enterprise reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me - on to a real problem: SSIS connection timeouts versus command timeouts.

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new partitions to, and dropping old partitions from, an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60-month window. My client uses Tivoli for running all their production jobs, and not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number of further failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table, and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds.

    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully. The lesson learned for me was two-fold. First, always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly. Second, SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding. Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair, though, in the 5+ years that I have been working with SSIS, I can only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

  • Real world pitfalls of introducing F# into a large codebase and engineering team

    - by nganju
    I'm CTO of a software firm with a large existing codebase (all C#) and a sizable engineering team. I can see how certain parts of the code would be far easier to write in F#, resulting in faster development time, fewer bugs, easier parallel implementations, etc. - basically overall productivity gains for my team. However, I can also see several productivity pitfalls of introducing F#, namely:

    1) Everyone has to learn F#, and it's not as trivial as switching from, say, Java to C#. Team members that have not learned F# will be unable to work on F# parts of the codebase.

    2) The pool of hireable F# programmers, as of now (Dec 2010), is non-existent. Search various software engineer resume databases for "F#"; way less than 1% of resumes contain the keyword.

    3) Community support as of now (Dec 2010) is less available. You can google almost any problem in C# and find someone that has already dealt with it; not so with F#. Third-party tool support (NUnit, ReSharper, etc.) is also sketchy.

    I realize that this is a bit Catch-22, i.e. if people like me don't use F# then the community and tools will never materialize, etc. But I've got a company to run, and I can be cutting edge but not bleeding edge. Any other pitfalls I'm not considering? Or does anyone care to rebut the pitfalls I've mentioned? I think this is an important discussion and would love to hear your counter-arguments in this public forum, which may do a lot to increase F# adoption by industry. Thanks.

  • Can't set permissions for files on an NTFS partition

    - by ashishsony
    I remember that I was able to run a Linux executable that was placed on an NTFS partition before I installed the 10.10 RC. But if I try to run it now, I can't, because it doesn't have execute permission. The bad part is that I can't change the permissions either: I'm running chmod +x on it, but its permissions don't change at all. So this seems to be a bug? Any help? When I put it on an ext4 partition, I can set the permission, but I want to do this as I did before, right from its default NTFS location.

  • Sun's access manager taken over by former employees of the company: OpenSSO becomes OpenAM thanks to...

    Sun's access manager taken over by former employees of the company: OpenSSO becomes OpenAM under the leadership of Simon Phipps, now an employee of ForgeRock. Among the family of Sun technologies whose fate has been in question since the acquisition by Oracle, here is OpenSSO. OpenSSO is an open-source access manager for web services, built on a single sign-on mechanism that provides "essential identity services to simplify, in a transparent way, the execution of single sign-on". Under Oracle's stewardship, this technology seemed to be headed for a dead end; the software giant already had its own solutions even before the acquisition...

  • What platform to use for browser based turn based strategy game

    - by sunwukung
    I want to write a browser-based strategy game that can be played by two players in separate locations. The game itself is predominantly turn-based. To that end, I want to determine the correct platform on which to build this game. To prevent gamers "gaming" the system, the business logic needs to reside on the server. I could arguably use AJAX for a large part of the game's functionality, but at two key points in the game loop, the opposing player can "counter" the current player's move. In addition, when it's time for the players to swap, AJAX polling is likely to fall short, so it's starting to look like WebSockets is going to be a requirement to pull this off smoothly. So, the remaining question is regarding the back end. I'd kinda like to build this in Python/Flask, but this is primarily out of wanting to tackle a project with that language, not necessarily because it's the appropriate tool for the job. The next most likely candidate has got to be NodeJS, given its (apparently) tighter integration with the WebSockets protocol. My question, then, is regarding the best platform on which to pursue this objective.

  • Unit testing time-bound code

    - by maasg
    I'm currently working on an application that does a lot of time-bound operations. That is, based on long now = System.currentTimeMillis(); and combined with a scheduler, it will calculate periods of time that parametrize the execution of some operations. For example:

        public void execute(...) {
            // executed by a scheduler every x minutes
            final int now = (int) TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis());
            final int alignedTime = now - now % getFrequency();
            final int startTime = alignedTime - 2 * getFrequency();
            final int endTimeSecs = alignedTime - getFrequency();
            uploadData(target, startTime, endTimeSecs);
        }

    Most parts of the application are unit-tested independently of time (in this case, uploadData has a natural unit test), but I was wondering about best practices for testing the time-bound parts that rely on System.currentTimeMillis().

  • Parallel Computing in .Net 4.0

    - by kaleidoscope
    Technorati Tags: Ram, Parallel Computing in .Net 4.0

    Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
    - To be run using multiple CPUs
    - A problem is broken into discrete parts that can be solved concurrently
    - Each part is further broken down into a series of instructions
    - Instructions from each part execute simultaneously on different CPUs

    Parallel Extensions in .NET 4.0 provides a set of libraries and tools to achieve the above objectives. It supports two paradigms of parallel computing:

    Data Parallelism - This refers to dividing the data across multiple processors for parallel execution. For example, if we are processing an array of 1000 elements, we can distribute the data between two processors, say 500 elements each. This is supported by Parallel LINQ (PLINQ) in .NET 4.0.

    Task Parallelism - This breaks the program down into multiple tasks which can be parallelized and are executed on different processors. This is supported by the Task Parallel Library (TPL) in .NET 4.0.

    A high-level view is shown in the diagram in the original post.

  • Do you know when to send a done email in Scrum?

    - by Martin Hinshelwood
    At SSW we have always sent done emails to the owner/requestor to let them know that it is done. Others who are dependent on that task are CC'ed so they know they can proceed. But how does that fit into Scrum?

    Update 14th April 2010: Rule added to Rules to better Scrum with TFS.

    If you are working on a task: When you complete a Task that is part of a User Story, you need to send a done email to the Owner of that Story. You only need to add the Task #, Summary and link to the item in WIWA. Remember that all your tasks should be under 4 hours, so spending lots of time on a done email for a Task would be counterproductive. Add more information if required; for example, you may have completed the task a different way than previously discussed. Make sure that every User Story has an Owner as per the rules.

    If you are the owner of a story: When you complete a story, you should send a comprehensive done email as per the rules. Make sure you add a list of all of the Tasks that were completed as part of the story and the Done criteria that you met. If your done criteria say: Built successfully, 30% code coverage, all tests passed - then add an illustration to show this.

    Figure: Show that you have met your Done criteria where possible.

    This is all designed to help your Scrum Team members (Product Owner, ScrumMaster and Team) validate the quality of the work that has been completed. Remember that you are not DONE until your team says you are done.

    Technorati Tags: SSW, Scrum, SSW Rules

  • Brightness problem after upgrading to Ubuntu 13.10

    - by Daniel Yunus
    I upgraded to 13.10 recently. The brightness control was working in 13.04 after I added this to rc.local, but after upgrading to 13.10 the same code is not working on my ASUS Slimbook X401U:

        #!/bin/sh -e
        #
        # rc.local
        #
        # This script is executed at the end of each multiuser runlevel.
        # Make sure that the script will "exit 0" on success or any other
        # value on error.
        #
        # In order to enable or disable this script just change the execution
        # bits.
        #
        # By default this script does nothing.

        echo 0 > /sys/class/backlight/acpi_video0/brightness
        exit 0

  • Ubuntu took away permissions from my Data partition

    - by RobinJ
    The pangolin has struck again. The bug of the day for today is Ubuntu taking away my permissions on my Data partition (NTFS). One moment everything worked fine; the next moment I couldn't chmod anything anymore. chown throws no errors or warnings at all, but nothing has changed either. chmod keeps saying Operation not permitted. I've been messing around with /etc/fstab as suggested by other answers on AskUbuntu, but none of them seem to have the desired effect. This is my current line:

        UUID=25D7D681409A96B7 /media/Data ntfs defaults,umask=000,gid=46,permissions,users,auto,exec 0 0

    For reference, this is the original one (right after the problem started occurring):

        UUID=25D7D681409A96B7 /media/Data ntfs defaults,umask=007,gid=46 0 0

    What do I need to do so I am the owner of my own hard drive again? I want to be able to just use chmod and chown (without sudo) without being told that some mysterious alien has taken over control of my Data partition. I can still read and write, but execute permissions seem to be the problem.

  • SQL SERVER – Weekly Series – Memory Lane – #004

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2006

    Auto Generate Script to Delete Deprecated Fields in Current Database
    Early in my career, every time I had to drop a column I had a hard time doing it, because I was scared that the column might be needed somewhere in the code. Due to this fear I never dropped any column; I just renamed it. If the column I had renamed was needed afterwards, it was very easy to rename it back again. However, it is not recommended to keep the deleted column renamed in the database. At regular intervals I used to drop the columns which were prefixed with a specific word. This script is 6 years old but still works. Give it a look; I am open to improvements.

    2007

    Shrinking Truncate Log File - Log Full - Part 2
    Shrinking a database or mdf file is indeed a bad thing and it creates lots of problems. However, once in a while there is a legitimate requirement to shrink the log file - a very rare one. On those rare occasions, shrinking or truncating the log file may be the only solution. However, one should make sure to take a backup before and after the truncate or shrink, as in case of a disaster they can be very useful. Remember that truncating the log file will break the log chain, which can create a major issue during a restore. Anyway, use this feature with caution.

    2008

    Simple Use of Cursor to Print All Stored Procedures of Database Including Schema
    This is a very interesting requirement I used to face in my early career days: I needed to print all the stored procedures of my database. Interestingly enough, I had written a cursor to do so. Today when I look back at this stored procedure, I believe there is probably a much cleaner way to do the same task; however, I still use this SP quite often when I have to document all the stored procedures of my database.

    Interesting Observation about Order of Resultset without ORDER BY
    In industry, many developers avoid using the ORDER BY clause to display results in a particular order, thinking that the index enforces the order. In this interesting example, I demonstrate that without ORDER BY, the same table and a similar query can return different results. The query optimizer always returns results using whichever method is optimized for performance. The lesson: there is no order unless ORDER BY is used.

    2009

    Size of Index Table - A Puzzle to Find Index Size for Each Index on Table
    I asked this puzzle earlier, where I asked how to find the index size for each index on a table. The puzzle was very well received and lots of interesting answers came in. To answer this question I have written the following blog posts. I suggest you try to solve this problem this weekend and see if you can come up with a better solution. If not, well, here are the solutions: Solution 1 | Solution 2 | Solution 3

    Understanding Table Hints with Examples
    Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. The SQL Server query optimizer is a very smart tool and it usually makes a better selection of execution plan. Suggesting hints to the query optimizer should be attempted only when absolutely necessary, and only by experienced developers who know exactly what they are doing (or in development, as a way to experiment and learn).

    Interesting Observation - TOP 100 PERCENT and ORDER BY
    I have seen developers and DBAs using TOP very casually when they have to use the ORDER BY clause. Theoretically, there is no need for ORDER BY in a view at all. All the ordering should be done outside the view, and the view should just have the SELECT statement in it. It was quite common to save that extra typing by including the ordering inside the view. In several instances developers want a complete resultset, and for that they include TOP 100 PERCENT along with ORDER BY, assuming that this will simulate a SELECT statement with ORDER BY.

    2010

    SQLPASS Nov 8-11, 2010 - Seattle - An Alternative Look at Experience
    In 2010 I attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11, 2010 in Seattle. I have only one expression for the event - Best Summit Ever. Instead of writing about my usual routine or the event, I wrote about the interesting things I did and how I felt about them! When I go back and read it, I feel that this was the best event I attended in 2010.

    Change Database Access to Single User Mode Using SSMS
    The image says it all.

    2011

    SQL Server 2012 introduced new analytic functions. These functions were long awaited and I am glad that they are now here. Before, when any of these functions was needed, people used to write long T-SQL code to simulate them. But now there is no need to do so. Having native functions available also helps performance as well as readability. The functions (each covered in a SQLAuthority post and documented on MSDN) are: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
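    As a quick illustration of a couple of these analytic functions (the dbo.DailySales table and its columns are invented purely for the example, they are not from the original posts):

        -- Compare each day's sales amount with the previous and next day,
        -- and rank the amount within the whole set
        SELECT  SalesDate,
                Amount,
                LAG(Amount)  OVER (ORDER BY SalesDate) AS PrevDayAmount,
                LEAD(Amount) OVER (ORDER BY SalesDate) AS NextDayAmount,
                PERCENT_RANK() OVER (ORDER BY Amount)  AS AmountPercentRank
        FROM dbo.DailySales;

    Before SQL Server 2012, the LAG/LEAD columns would typically have required a self-join or a correlated subquery on the previous and next SalesDate.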

  • Introducing the Documentation Workflows

    - by Owen Allen
    The how-to documents provide end-to-end examples of specific features, such as creating a new zone or discovering a new system. We are enhancing the individual how-tos with documents called Workflows. These workflows are each built around procedural flowcharts that cover larger and more complex tasks. A workflow indicates which how-tos or other workflows you should follow to complete a more complex process, and gives you a flow for planning the execution of that process. Over the coming days I'll highlight each of these workflows and talk about the tasks that each one guides you through.

  • Revert permission of /usr back to root

    - by Rodrigo Sasaki
    I was doing some alterations, and I messed one of them up. I changed the ownership of almost everything inside the /usr folder to my own user. It didn't change everything, because it failed in the middle of the execution; I still have /sbin, /share and /src assigned to root. The command I ran was this (executed while inside /usr):

        sudo chown -R myuser:myuser .

    Is there any way for me to revert this? If I run:

        sudo chown -R root:root .

    I get this error:

        sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set

  • Multiple vulnerabilities in Thunderbird

    - by RitwikGhoshal
    CVE, description, and CVSSv2 base score (component: Thunderbird; product and resolution: Solaris 10 - SPARC: 145200-12, X86: 145201-12):

    CVE-2012-1948 - Denial of service (DoS) vulnerability - 9.3
    CVE-2012-1950 - Address spoofing vulnerability - 6.4
    CVE-2012-1951 - Resource Management Errors vulnerability - 10.0
    CVE-2012-1952 - Resource Management Errors vulnerability - 9.3
    CVE-2012-1953 - Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability - 9.3
    CVE-2012-1954 - Resource Management Errors vulnerability - 10.0
    CVE-2012-1955 - Address spoofing vulnerability - 6.8
    CVE-2012-1957 - Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability - 4.3
    CVE-2012-1958 - Resource Management Errors vulnerability - 9.3
    CVE-2012-1959 - Permissions, Privileges, and Access Controls vulnerability - 5.0
    CVE-2012-1961 - Improper Input Validation vulnerability - 4.3
    CVE-2012-1962 - Resource Management Errors vulnerability - 10.0
    CVE-2012-1963 - Permissions, Privileges, and Access Controls vulnerability - 4.3
    CVE-2012-1964 - Clickjacking vulnerability - 4.0
    CVE-2012-1965 - Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability - 4.3
    CVE-2012-1966 - Permissions, Privileges, and Access Controls vulnerability - 4.3
    CVE-2012-1967 - Arbitrary code execution vulnerability - 10.0
    CVE-2012-1970 - Denial of service (DoS) vulnerability - 10.0
    CVE-2012-1973 - Resource Management Errors vulnerability - 10.0
    CVE-2012-3966 - Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability - 10.0

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

  • 12.04 LTS boot hangs at "SP5100 TCO timer: mmio address 0xfec000f0 already in use", didn't yesterday

    - by DarkIron112
    Dual-booting Windows 7 and Ubuntu 12.04 LTS. I went to reboot from Windows to Ubuntu, and found a few interesting things. My POST screen is covered in blocks of epileptic colors until I hit GRUB, and this continues when I try to boot into Ubuntu. These color blocks don't appear when I use my on-board VGA, so I'll just attribute that to the card. GRUB dimensions are swapped (card vs onboard, probably), but when interfacing with onboard VGA the GRUB timeout counter works, and when using my card it does not (see "[!!!]" below for more information). Booting into Ubuntu directly causes the error: SP5100 TCO timer: mmio address 0xfec000f0 already in use. Booting into recovery mode, meanwhile, and then "resuming normal boot" gets me to the desktop without native 1440x900 resolution, and the graphics drivers can't tell what monitor they are looking at (I assume this is because it's not a full graphical boot and, as it says, some drivers won't run?). [!!!] When I reboot after going into recovery mode, the countdown timer works ONCE, puts me back into the default Ubuntu boot, and then does not work again until after another recovery-mode boot. Windows 7 can boot perfectly with no issues whatsoever from epilepsy color blocks or driver detection. This makes me wonder /why/ the POST screen can't handle my video card anymore. Amidst all the diagnostics, I opened my case and re-seated the video card securely, ensuring it wasn't a loose connection, but this did nothing to help me. Hardware: I am running an NVidia GeForce GTX 8800 video card in a PCI slot. I have 4.8GiB memory and an AMD Athlon II quad-core 640 processor, on an MSI K9N6GM series mobo. Onboard video is an NVidia GeForce MCP61(V/S/P) card. Note: I did not have any of these problems yesterday. I have been using Ubuntu intensively for a week, though it's been working flawlessly for months. I've recently been using it to mod my Android phone; perhaps I messed something up in the file system?

  • Day 5 of Oracle OpenWorld 2012 October 4th

    - by Maria Colgan
    It's the last day of Oracle OpenWorld and we have saved the very best for last, so hopefully you are still awake and functioning at this stage! Today we present An Insider’s View of How the Optimizer Works (Session CON8457) at Moscone South, room 104. This session explains how the latest version of the optimizer works and the best ways you can influence its decisions to ensure you get optimal execution every time. We really hope you have enjoyed the conference so far and will stop by our session this afternoon before you head off home! +Maria Colgan

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality; they lie mostly in the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and there were accidental misentries on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that the data troubles may wake up some managers at the customer, and that some processes on the customer's side may no longer work because those processes have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development: data migration is to data what refactoring is to code. I think that data migration is essential for creating software that evolves. Without it, we would have to create painful software that works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts that belong to this topic?
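    As a concrete (and entirely invented) illustration of what I mean by a guarded data migration - checked counts, a transaction, and an explicit decision point before anything becomes permanent:

        BEGIN TRAN;

        -- Example clean-up: normalise a legacy status code left behind by a predecessor system
        UPDATE dbo.Policy
        SET    StatusCode = 'CANCELLED'
        WHERE  StatusCode = 'CANCELED_OLD';

        -- Verify the blast radius before making the change permanent
        SELECT @@ROWCOUNT AS RowsChanged;

        -- COMMIT only if RowsChanged matches what the analysis predicted; otherwise roll back
        -- COMMIT TRAN;
        ROLLBACK TRAN;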

  • Cool projects at Codeplex

    - by Tiago Salgado
    There’s a significant number of useful projects at Codeplex who were meeting with some of our needs. As such, here are two that already have been useful to me: Droid Explorer As the name implies, allows us to explore an Android device and has features such as: Copy local files to device Reboot device Reboot device in to recovery mode Open files for viewing / execution locally with the default file type executable Package Manager (Install & Uninstall) Take a Screen Shot (landscape or portrait) etc Virtual Router – Wifi Hot Spot (Windows 7 / 2008 R2) Allows you to create an Access Point, sharing your local Internet connection, with other wireless devices. Its no doubt an application to be always installed, by the simple way to create a resource like this one.
