Search Results

Search found 3761 results on 151 pages for 'revision history'.

  • SSMS Tools Pack 2.7 is released. New website, improved licensing and features.

    - by Mladen Prajdic
    New website
    Nice, isn't it? Cleaner, simpler, better looking, and more modern. If you have any suggestions for further improvements I'd be glad to hear them.

    Simpler licensing
    With SSMS Tools Pack 2.7 the licensing is finally where it should be. It is now based on the activate/deactivate model. This way you can move a license from machine to machine with a simple deactivation on one machine and reactivation on another. Much better, no? Because of very good feedback I have added an option for 6 machines and lowered the 4-machine option to 3 machines. This should make it much simpler for you to choose the right option for yourself.

    Improved features
    Version 2.5.3 was already extremely stable and 2.7 continues that tradition. Because of that I could fully focus on features, which is why 3.0 will rock even more than 2.7! ;) In version 2.7 I have addressed quite a few improvements you had been requesting for a while now.

    SQL History
    This is probably the biggest time saver out there, so it's only fair that it gets a few important updates. If you have an existing .sql file opened, the Window Content History now saves your code to that existing file and also makes a backup in the SQL History log default location. Search is still done through the SQL History log, but the Tab Sessions Restore opens your existing .sql file. This way you don't have to remember to save your existing files yourself anymore. A bug where you couldn't search properly if you copied the log files to a new location was fixed. Unfortunately this removed the option to filter a search by the time component; the smallest search interval is now one day. The SSMS Tools Pack now also remembers the visibility of the Current Window History window when you exit SSMS.

    SQL Snippets
    You can now set the position of the cursor in your snippets by placing {C} somewhere in the snippet. It's a small improvement but can be a huge time saver, since you don't have to move through the snippet to the desired location anymore.

    Run script on multiple databases
    Database choices can now be saved with a name and loaded again next time. You can also choose to run the script in a new window for each chosen database.

    Search through grid results
    You can now go to the previous/next search result with the Prev/Next control inside the search window. This is extremely useful if you have a large resultset; it saves you the scrolling.

    CRUD generator
    Four new variables have been added:

    • |CurrentDate| writes the current date in format yyyy-MM-dd to your script
    • |CurrentTime| writes the current time in 24h format HH:mm:ss to your script
    • |CurrentWinUser| writes the currently logged-on Windows user to your script
    • |CurrentSqlUser| writes the currently logged-on SQL login to your script

    This was actually quite a requested feature, so if you have any other ideas for extra variables, do let me know.

    That's about it. I hope you're going to enjoy this version as much as the previous ones. Have fun!

  • Tools for Maintaining Branches in SVN

    - by Chris Conway
    My team uses SVN for source control. Recently, I've been working on a branch with occasional merges from the trunk, and it's been a fairly annoying experience (cf. Joel Spolsky's "Subversion Story #1"), so I've been looking at alternative ways to manage branches and merging. Given that a centralized SVN repository is non-negotiable, what I'd like is a set of tools that satisfy the following conditions:

    1. Complete revision history should be stored in SVN for both trunk and branches.
    2. Merging in either direction (and potentially criss-crossing) should be relatively painless.
    3. Merging history should be stored in SVN to the greatest extent possible.

    I've looked at both git-svn and bzr-svn and neither seems to be up to the job—basically, given the revision history they can export from the SVN repository, they can't seem to do any better a job handling merges than SVN can. For example, after cloning the repository with git, the revision history for my branch shows the original branch off of trunk, but git doesn't "see" any of the interim SVN merges as "native" merges—the revision history is one long line. As a result, any attempts to merge from trunk in git yield just as many conflicts as an SVN merge would. (Besides, the git-svn documentation explicitly warns against using git to merge between branches.) Is there a way to adjust my workflow to make git satisfy the above requirements? Maybe I just need tips or tricks (or a separate merging tool?) to help SVN be better at merging into branches?
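    For reference, a minimal git-svn round trip looks roughly like the sketch below (the URL and branch names are hypothetical, and exact remote ref names vary by git version; as discussed above, this mirrors revisions but does not recover SVN merges as native git merges):

        # clone the repository, assuming the standard trunk/branches/tags layout
        git svn clone -s http://svn.example.com/repo myrepo
        cd myrepo

        # work on a local branch tracking an SVN branch
        git checkout -b feature-work origin/my-feature-branch

        # pull in new SVN revisions (rebases local commits on top)
        git svn rebase

        # push local commits back to SVN as individual revisions
        git svn dcommit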

  • Populate a list from xml using python

    - by Sam
    I have an xml file in the following format:

        <food> <desert> cake </desert> </food>
        <history> currently in my belly </history>

    I want to create two lists populated, as strings, with the text of the food and history elements ("cake" and "currently in my belly"). Is there an easy way to do it in python?
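    One easy way is the standard library's ElementTree. A minimal sketch (note the fragment above has two top-level elements, which is not a well-formed document, so it is wrapped in a synthetic root before parsing):

        import xml.etree.ElementTree as ET

        fragment = """
        <food> <desert> cake </desert> </food>
        <history> currently in my belly </history>
        """

        root = ET.fromstring("<root>" + fragment + "</root>")

        # collect the stripped text of each matching element into a list
        food = [e.text.strip() for e in root.iter("desert")]
        history = [e.text.strip() for e in root.iter("history")]

        print(food)     # ['cake']
        print(history)  # ['currently in my belly']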

  • Javascript Back Button - Stop the initial load of back button from working

    - by Evan
    Hi, I'm using a javascript back button link and forward button link to control the user's history inside a modal/lightbox window. The challenge: when the modal window is launched and the "back" and "forward" buttons are present for the user to click, if the javascript back button is clicked right after the window opens, it actually closes the modal window, because the javascript history takes the user back to the page PRIOR to the opening of the modal window. So, in essence, I'm trying to disable the "back" button from working on the initial load of the modal/lightbox:

        <a href="javascript:history.go(-1)">Back Button</a>
        <a href="javascript:history.go(1)">Forward Button</a>

    Is this possible?
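    A minimal sketch of one way around this (the function and element names are hypothetical, and it assumes the lightbox loads its pages in an iframe so this script on the host page survives navigation): count how many history entries have been created inside the modal, and only honour "back" while that count is positive.

        <script>
          var depth = 0, maxDepth = 0;

          // Call this for every navigation performed inside the modal.
          function modalNavigate(url) {
            depth++;
            maxDepth = depth; // navigating anew discards any forward history
            document.getElementById('modalFrame').src = url;
          }

          function modalBack() {
            // refuse to step back past the modal's entry point
            if (depth > 0) { depth--; history.go(-1); }
          }

          function modalForward() {
            if (depth < maxDepth) { depth++; history.go(1); }
          }
        </script>
        <a href="javascript:modalBack()">Back Button</a>
        <a href="javascript:modalForward()">Forward Button</a>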

  • CSharp: Testing a Generic Class

    - by Jonas Gorauskas
    More than a question, per se, this is an attempt to compare notes with other people. I wrote a generic History class that emulates the functionality of a browser's history. I am trying to wrap my head around how far to go when writing unit tests for it. I am using NUnit. Please share your testing approaches below. The full code for the History class is here (http://pastebin.com/ZGKK2V84).
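    As a point of comparison, a minimal NUnit sketch of boundary tests for such a class (the History<T> members used here, Push, Back and CanGoBack, are hypothetical and not necessarily the pastebin version's API):

        [TestFixture]
        public class HistoryTests
        {
            [Test]
            public void NewHistory_CannotGoBack()
            {
                var history = new History<string>();
                Assert.IsFalse(history.CanGoBack);
            }

            [Test]
            public void Back_AfterTwoPushes_ReturnsFirstItem()
            {
                var history = new History<string>();
                history.Push("first");
                history.Push("second");
                Assert.AreEqual("first", history.Back());
            }
        }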


  • Javascript back button for iframe parent window

    - by DisgruntledGoat
    I have some pages with iframes in them. I want to add a link/button inside the iframe to make the browser go back one page in history. But I want the PARENT to go back, not the iframe itself. I originally had this, which makes the iframe page go back (if it exists):

        <a href="javascript:history.back()">&laquo; Go back</a>

    I've tried window.parent.history.back() and window.parent.document.history.back(), but neither one works. There are no cross-domain issues accessing the iframe from the parent and vice versa.
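    One hedged workaround: browsers keep a single joint session history for a page and its frames, which is why parent.history.back() tends to behave exactly like history.back() here. Navigating the parent to an explicit URL is more deterministic (backUrl below is a hypothetical global the host page would set):

        <!-- inside the iframe (same origin); backUrl is set by the host page -->
        <a href="#" onclick="parent.location.assign(parent.backUrl); return false;">&laquo; Go back</a>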

  • Mercurial: What is the benefit of fixing errors in earlier versions

    - by Ken Earley
    According to the guide, under the heading "Fixing errors in earlier revisions", it states this: "When you find a bug in some earlier revision you have two options: either you can fix it in the current code, or you can go back in history and fix the code exactly where you did it, which creates a cleaner history." How does going back in history make it cleaner? It still makes a new changeset at tip. Does it have something to do with what is recorded as its parent? Is there a way to view the log so that the newly inserted changeset appears in that order? This lesson is under the main heading "Lone developer with nonlinear history". Is this good practice when working on a team?
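    For what it's worth, the parent link is exactly what changes: committing after updating to the old revision creates a new head whose parent is that revision, so the graph records where the fix belongs. A sketch with hypothetical revision numbers:

        hg update -r 2          # go back to the revision that introduced the bug
        # ... fix the code ...
        hg commit -m "fix bug at its origin"   # new head whose parent is revision 2
        hg merge                # merge the fix with the previous tip
        hg commit -m "merge bugfix"
        hg log -G               # graph view (the graphlog extension on older releases)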

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself? I only have one file that seems to be out of sync right now, but I really need to know how this happened to prevent it from happening in the future. Scenario:

    1) Created a Mercurial repository on the server using an existing project directory. The directory contained the file 'mypage.aspx'.
    2) On my workstation, I cloned the central repository.
    3) I made an edit to mypage.aspx.
    4) hg commit, then hg push from my workstation to the central server.
    5) Now if I look at mypage.aspx in the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows that the current version on the server's disk is the original version, not the edited version!

    I have not experimented with branching at all yet, so I'm sure this is not a branching problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository in a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit. So, to recap:

    Central repository = original file, but shows the change in revision history (bad)
    Local repository 'A' = updated file, shows the change in revision history (good)
    Local repository 'B' = original file, but shows the change in revision history (bad)

    Help please! Thanks, David
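    Two things worth checking (hedged guesses, but they fit the symptoms): a push updates the server's repository history, never its working directory, so 'Diff to local' on the server compares against stale on-disk files until someone runs update there; and a fresh clone showing the old file suggests the repository may have two heads, with the tip not containing the edit.

        hg -R /path/to/central/repo heads    # more than one head? the edit may live on the other one
        hg -R /path/to/central/repo update   # refresh the server's on-disk files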

  • Data in column not changed

    - by shanks
    I have SQL Server 2005. When I run the query below, data from the RealTimeLog table is transferred to History; but when new data arrives in RealTimeLog, the old rows are not updated, i.e. OutTime is not refreshed with the new data from RealTimeLog.

        insert into History (UserID, UserName, LogDate, [InTime], [OutTime])
        SELECT UserID, UserName, [LogDate],
               CONVERT(nvarchar, MIN(CONVERT(datetime, [LogTime], 108)), 108),
               CONVERT(nvarchar, MAX(CONVERT(datetime, [LogTime], 108)), 108)
        From RealTimeLog
        where not Exists (select * from History H
                          Where H.UserID = RealTimeLog.UserID
                            AND H.UserName = RealTimeLog.UserName
                            AND H.LogDate = RealTimeLog.LogDate)
        GROUP BY UserID, UserName, [LogDate]
        ORDER BY UserID, [LogDate]

    For example, given the row '1 Shanks 02/05/2010 9:00 10:00', if a new MAX time of 11:00 appears in RealTimeLog, it is not applied to the History table and the output remains as above.
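    A hedged sketch of one way to refresh existing rows (schema assumed from the query above): the NOT EXISTS guard skips users already present in History entirely, so a separate UPDATE is needed after the INSERT:

        UPDATE H
        SET    H.[OutTime] = R.MaxTime
        FROM   History AS H
        JOIN  (SELECT UserID, UserName, [LogDate],
                      CONVERT(nvarchar, MAX(CONVERT(datetime, [LogTime], 108)), 108) AS MaxTime
               FROM RealTimeLog
               GROUP BY UserID, UserName, [LogDate]) AS R
          ON  R.UserID    = H.UserID
          AND R.UserName  = H.UserName
          AND R.[LogDate] = H.LogDate
        WHERE H.[OutTime] <> R.MaxTime;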

  • privacy, c++, firefox... big bug!!!

    - by Delirium tremens
    How to reproduce:

    1. Open Firefox.
    2. Visit a good TGP.
    3. Click History.
    4. Click Show All History.
    5. Select the name of the good TGP. You already know Delete This Page, but there is another, super secret feature: click Forget All About This Page --- if you had cookies, cache, active logins etc. that came from the good TGP, they're correctly deleted, because it's a different feature from Delete This Page.
    6. Now visit TWO good TGPs.
    7. Click History.
    8. Click Show All History.
    9. Select the names of the TWO good TGPs --- where is Forget All About These Pages???

    That is the bug... It used to be all-or-nothing, but now... now??? Oh, now there's a bug and it's still all-or-nothing.

  • Does Ivy's url resolver support transitive retrieval?

    - by Sean
    For some reason I can't seem to resolve the dependencies of my dependencies when using a url resolver to specify a repository's location. However, when using the ibiblio resolver, I am able to retrieve them. For example:

        <!-- Ivy File -->
        <ivy-module version="1.0">
          <info organisation="org.apache" module="chained-resolvers"/>
          <dependencies>
            <dependency org="commons-lang" name="commons-lang" rev="2.0" conf="default"/>
            <dependency org="checkstyle" name="checkstyle" rev="5.0"/>
          </dependencies>
        </ivy-module>

        <!-- ivysettings file -->
        <ivysettings>
          <settings defaultResolver="chained"/>
          <resolvers>
            <chain name="chained">
              <url name="custom-repo">
                <ivy pattern="http://my.internal.domain.name/ivy/[organisation]/[module]/[revision]/ivy-[revision].xml"/>
                <artifact pattern="http://my.internal.domain.name/ivy/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
              </url>
              <url name="ibiblio-mirror" m2compatible="true">
                <artifact pattern="http://mirrors.ibiblio.org/pub/mirrors/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" />
              </url>
              <ibiblio name="ibiblio" m2compatible="true"/>
            </chain>
          </resolvers>
        </ivysettings>

        <!-- checkstyle ivy.xml file generated from pom via ivy:install task -->
        <?xml version="1.0" encoding="UTF-8"?>
        <ivy-module version="1.0" xmlns:m="http://ant.apache.org/ivy/maven">
          <info organisation="checkstyle" module="checkstyle" revision="5.0" status="release" publication="20090509202448" namespace="maven2">
            <license name="GNU Lesser General Public License" url="http://www.gnu.org/licenses/lgpl.txt"/>
            <description homepage="http://checkstyle.sourceforge.net/">
              Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard
            </description>
          </info>
          <configurations>
            <conf name="default" visibility="public" description="runtime dependencies and master artifact can be used with this conf" extends="runtime,master"/>
            <conf name="master" visibility="public" description="contains only the artifact published by this module itself, with no transitive dependencies"/>
            <conf name="compile" visibility="public" description="this is the default scope, used if none is specified. Compile dependencies are available in all classpaths."/>
            <conf name="provided" visibility="public" description="this is much like compile, but indicates you expect the JDK or a container to provide it. It is only available on the compilation classpath, and is not transitive."/>
            <conf name="runtime" visibility="public" description="this scope indicates that the dependency is not required for compilation, but is for execution. It is in the runtime and test classpaths, but not the compile classpath." extends="compile"/>
            <conf name="test" visibility="private" description="this scope indicates that the dependency is not required for normal use of the application, and is only available for the test compilation and execution phases." extends="runtime"/>
            <conf name="system" visibility="public" description="this scope is similar to provided except that you have to provide the JAR which contains it explicitly. The artifact is always available and is not looked up in a repository."/>
            <conf name="sources" visibility="public" description="this configuration contains the source artifact of this module, if any."/>
            <conf name="javadoc" visibility="public" description="this configuration contains the javadoc artifact of this module, if any."/>
            <conf name="optional" visibility="public" description="contains all optional dependencies"/>
          </configurations>
          <publications>
            <artifact name="checkstyle" type="jar" ext="jar" conf="master"/>
          </publications>
          <dependencies>
            <dependency org="antlr" name="antlr" rev="2.7.6" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
            <dependency org="apache" name="commons-beanutils-core" rev="1.7.0" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
            <dependency org="apache" name="commons-cli" rev="1.0" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
            <dependency org="apache" name="commons-logging" rev="1.0.3" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
            <dependency org="com.google.collections" name="google-collections" rev="0.9" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
          </dependencies>
        </ivy-module>

    Using the "ibiblio" resolver I have no problem resolving my project's two dependencies (commons-lang 2.0 and checkstyle 5.0) and checkstyle's dependencies. However, when attempting to exclusively use the "custom-repo" or "ibiblio-mirror" resolvers, I am able to resolve my project's two explicitly defined dependencies, but not checkstyle's dependencies. Is this possible? Any help would be greatly appreciated.
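    One thing to try (an assumption based on how Ivy resolvers work, not verified against this exact mirror): a url resolver only retrieves what its patterns describe, and the "ibiblio-mirror" resolver above has an artifact pattern but no ivy pattern, so Ivy has no module descriptors (POMs) to read transitive dependencies from. Pointing the ivy pattern at the POM might look like:

        <url name="ibiblio-mirror" m2compatible="true">
          <!-- let Ivy find the module descriptor (the Maven POM) for transitive deps -->
          <ivy pattern="http://mirrors.ibiblio.org/pub/mirrors/maven2/[organisation]/[module]/[revision]/[module]-[revision].pom"/>
          <artifact pattern="http://mirrors.ibiblio.org/pub/mirrors/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
        </url>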

  • variables in batch files

    - by richzilla
    Hi all, I'm trying to set up a batch file to automatically deploy a php app to a web server. Basically what I want is an entirely automated process where I would just give it a revision number from the repository, and it would then export the files, upload via ftp and then update deployment info at the repo host (codebase). However, I'm starting from scratch here, and I want to know: how would I set up a batch file to accept a variable when it is run? For example, typing myfile.bat /revision 42 would enable me to deploy revision 42 to my server. If anyone can point me in the right direction I'd be very appreciative.

  • variables in batch scripts [closed]

    - by richzilla
    I'm trying to set up a batch file to automatically deploy a php app to a web server. Basically, what I want is an entirely automated process: I would just give it a revision number from the repository and it would then export the files, upload via ftp and then update deployment info at the repo host (codebase). However, I'm starting from scratch here. How would I set up a batch file to accept a variable when it was run? For example, the command myfile.bat /revision 42 should deploy revision 42 to my server. If anyone can point me in the right direction I'd appreciate it.
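    A minimal sketch (the repository URL is hypothetical): inside a batch file, %1, %2, ... hold the arguments it was called with, so "myfile.bat /revision 42" puts "/revision" in %1 and "42" in %2:

        @echo off
        rem accept either "myfile.bat 42" or "myfile.bat /revision 42"
        if /i "%~1"=="/revision" (set "REV=%~2") else (set "REV=%~1")

        if "%REV%"=="" (
            echo Usage: %~nx0 /revision ^<number^>
            exit /b 1
        )

        echo Deploying revision %REV%...
        svn export -r %REV% http://svn.example.com/repo/trunk export-dir
        rem ...ftp upload and the codebase deployment notification would follow here...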

  • Text File Cannot read by Batch File

    - by Typowarrior
    I have a problem where the TXT file that a batch file creates can't be read; the result turns into "ECHO is off." Here is the first piece of code that needs to be run:

        echo.
        wmic /output:huhu.txt Path CIM_DataFile WHERE Name='C:\\Users\\uJaNbaTus\\Desktop\\HyperTerminal.exe' Get Version
        echo.

    Then I create another .bat file with this code and run it:

        echo.
        setlocal ENABLEDELAYEDEXPANSION
        set revision=
        for /f "delims=" %%a in (huhu.txt) do (
            set line=%%a
            if "x!line:~0,8!"=="xVersion " (
                set revision=!line:~8!
            )
        )
        echo !revision!
        echo.
        endlocal

    When I run this .bat file the result shows "ECHO is off." By the way, if I create another file using Notepad and point the script at it instead of (huhu.txt), I don't get any error and the output comes from the txt file.
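    A likely cause, worth verifying: wmic writes its /output file as UTF-16, while "for /f" expects ANSI text, so no line ever matches and revision stays empty; "echo !revision!" with an empty variable then prints "ECHO is off." A file saved from Notepad is ANSI, which is why that one works. A common workaround is to re-encode the file with "type" before parsing it:

        type huhu.txt > huhu-ansi.txt
        rem then point the for /f loop at huhu-ansi.txt instead of huhu.txt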

  • Cannot dump svn repository

    - by vinga
    I have a problem with my svn repo. I cannot use it; I cannot even dump it. svnadmin verify repo returns:

        Can't set position pointer in file 'svn/db/revs/0/0'

    When I try to dump the repo (no matter what revision range), the console output shows:

        * Dumped revision 0.
        svnadmin: Final line in revision file missing space

    I've googled that this may be connected with a wrong version of the apr apache2 library, but I have other repositories which work fine, so I think this isn't the case. Is there any way to save at least some files from my repo? Can an svn repo get corrupted so easily (probably after a power cut, though I'm not sure)?
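    If the corruption is limited to later revisions, one hedged salvage route is to dump only the range that still reads and load it into a fresh repository (LAST_GOOD below is a placeholder for the last revision that dumps cleanly):

        svnadmin dump /path/to/repo -r 0:LAST_GOOD --incremental > partial.dump
        svnadmin create /path/to/new-repo
        svnadmin load /path/to/new-repo < partial.dump

    Given that the error points at revision file 0, this may recover little; exporting individual files from a revision that still reads (svn export -r N) is another avenue to try.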

  • USB flash module giving errors

    - by vshenoy
    Hi, I have a SATA USB flash module which was earlier running a 2.4 Linux kernel (2.4.36.6) and on which I am now trying to install Ubuntu Server 10.04.1 LTS. I have two such USB flash modules. On one of them the installation process itself gives these errors:

        sd 4:0:0:0 [sda] Device not ready
        sd 4:0:0:0 [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        sd 4:0:0:0 [sda] Sense Key : Not Ready [current]
        sd 4:0:0:0 [sda] Add. Sense: Medium not present
        sd 4:0:0:0 [sda] CDB: Write(10): 2a 00 00 05 48 02 00 00 04 00
        end_request: I/O error, dev sda, sector 46114
        usb 1-1: reset high speed USB device using ehci_hcd and address 2
        Buffer I/O error on device sda1, logical block 172033
        lost page write due to I/O error on sda1
        Buffer I/O error on device sda1, logical block 172034
        lost page write due to I/O error on sda1

    On the other the installation is successful, but after a day or two of running, the machine hangs because the kernel spews these messages:

        Remounting filesystem read-only
        EXT2-fs error (device sda1): read_block_bitmap: Cannot read block [bitmap - block_group = 105, block_bitmap = 860161]
        EXT2-fs error (device sda1): ext2_get_inode: unable to read inode block - inode=13083, block=24683
        ext2_free_inode: bit already cleared for inode 83966

    and the machine needs to be hard rebooted. On both systems SCSI emulation with the usb_storage driver is being used to detect the module. Here is the output of /proc/scsi/scsi on 2.4:

        # cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: TS Model: UFM Rev: 1100
          Type: Direct-Access ANSI SCSI revision: 02

    and on 2.6:

        # cat /proc/scsi/scsi
        Attached devices:
        Host: scsi6 Channel: 00 Id: 00 Lun: 00
          Vendor: TS Model: UFM Rev: 1100
          Type: Direct-Access ANSI SCSI revision: 00

    i.e. only 'ANSI SCSI revision:' is shown as different, although I am not sure whether this can cause any problem. I'd really appreciate it if someone could point out how to debug this issue, or any mailing list where I can ask further questions about it.

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:

        SELECT p.Name
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON p.ProductID = th.ProductID
        GROUP BY p.ProductID, p.Name
        HAVING COUNT_BIG(*) < 10;

    That query correctly returns 23 rows (execution plan and data sample shown below):

    The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten.

    This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering:

        SELECT p.Name
        FROM
        (
            SELECT th.ProductID, cnt = COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            GROUP BY th.ProductID
        ) AS q1
        JOIN Production.Product AS p
            ON p.ProductID = q1.ProductID
        WHERE q1.cnt < 10;

    Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above.

    The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive.

    Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist:

        -- Scalar aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709;

        -- Vector aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709
        GROUP BY th.ProductID;

    The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away.

    In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) BETWEEN 1 AND 9
        );

    The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY th.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        );

    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated).

    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):

    WHERE Clause

        SELECT p.Name
        FROM Production.Product AS p
        WHERE
        (
            SELECT COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
        ) < 10;

    APPLY

        SELECT p.Name
        FROM Production.Product AS p
        CROSS APPLY
        (
            SELECT NULL
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        ) AS ca (dummy);

    FROM Clause

        SELECT q1.Name
        FROM
        (
            SELECT p.Name, cnt =
            (
                SELECT COUNT_BIG(*)
                FROM Production.TransactionHistory AS th
                WHERE th.ProductID = p.ProductID
                GROUP BY ()
            )
            FROM Production.Product AS p
        ) AS q1
        WHERE q1.cnt < 10;

    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :)

        SELECT q.Name
        FROM
        (
            SELECT p.Name, cnt =
            (
                SELECT SUM(1)
                FROM Production.TransactionHistory AS th
                WHERE th.ProductID = p.ProductID
            )
            FROM Production.Product AS p
        ) AS q
        WHERE q.cnt < 10;

    The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both.

    © 2012 Paul White
    Twitter: @SQL_Kiwi
    email: [email protected]

  • Vampires – Folklore, Fantasy, and Fact

    - by Akemi Iwaya
    Halloween is practically here, so what better time is there than now to look into the history of vampires? Michael Molina has put together a great presentation looking at the folklore and types of vampires throughout history, sorting facts from fiction, and more in the TED-Ed channel’s latest video. Vampires: Folklore, fantasy and fact – Michael Molina [YouTube]     

  • COLUMNS_UPDATED() for audit triggers

    - by Piotr Rodak
    In SQL Server 2005, triggers are pretty much the only option if you want to audit changes to a table. There are many ways you can decide to store the change information. You may decide to store every changed row as a whole, either in a history table or as xml in an audit table. The former case requires having a history table with exactly the same schema as the audited table; the latter makes data retrieval and management of the table a bit tricky. Both approaches also suffer from the tendency to consume…
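    For context, a minimal sketch of the kind of trigger the article discusses (the table, columns and audit schema here are hypothetical): COLUMNS_UPDATED() returns a bitmask with one bit per column in ordinal order, so the test below fires only when the second column of the table appeared in the SET clause:

        CREATE TRIGGER trg_Accounts_Audit ON dbo.Accounts
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- bit 2 = second column of dbo.Accounts (say, Balance)
            IF (COLUMNS_UPDATED() & 2) = 2
                INSERT dbo.AccountsAudit (AccountID, ChangedAt, OldBalance, NewBalance)
                SELECT d.AccountID, GETDATE(), d.Balance, i.Balance
                FROM deleted AS d
                JOIN inserted AS i ON i.AccountID = d.AccountID;
        END;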

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary

    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR, and to automate many of the paper-based processes as web-based workflows for their 1.6 million customers. LADWP implemented a self-service portal using Oracle WebCenter Portal & Oracle WebCenter Content, with Oracle SOA Suite for the integration of their complex back-end systems infrastructure. The new portal has received extremely positive feedback, not only from the customers and users of the portal, but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution.

    Company Overview

    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.

    Business Challenges

    The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, as well as to be the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many of the paper-based processes as web-based workflows for customers. This includes automation of:

    • Self Service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
    • Financial Assistance Programs
    • Customer Rebate Programs
    • Turn Off/Turn On/Transfer of Services
    • Outage Reporting
    • eNotification (SMS, email)

    Solution Deployed

    LADWP implemented a self-service portal using Oracle WebCenter Portal & Oracle WebCenter Content. Using Oracle SOA Suite they integrated various back-end systems, including:

    • Oracle Siebel CRM
    • IBM mainframe-based CIS
    • FILENET for document management
    • EBP Electronic Bill Payment System
    • HP Imprint System for BillXML data
    • Other systems, including outage reporting systems, SMS service, etc.

    The new portal’s features include:

    • Complete graphical redesign based on best practices in UI design for high usability
    • Customer self-service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
    • Financial Assistance Programs (CRM, WebCenter)
    • Customer Rebate Programs (CRM, WebCenter)
    • Turn On/Off/Transfer of services (commercial & residential)
    • Outage Reporting
    • eNotification (SMS, email)
    • Multilingual (English & Spanish) – using WebCenter multi-language support
    • Section 508 (ADA) compliant
    • Search – using WebCenter SES (Secured Enterprise Search)
    • Distributed authorship in WebCenter Content
    • Mobile access (any mobile browser)

    Business Results

    The new portal has received extremely positive feedback, not only from customers and users of the portal, but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution.

    Additional Information

    • LADWP OpenWorld presentation
    • Oracle WebCenter Portal
    • Oracle WebCenter Content
    • Oracle SOA Suite


  • Linux: A Platform for the Cloud

    Linux.com: "The goal of this article is to review the history and architecture of Linux as well as its present day developments to understand how Linux has become today's leading platform for cloud computing. We will start with a little history on Unix system development and then move to the Linux system itself."

  • Contiguous Time Periods

    It is always better, and more efficient, to maintain referential integrity by using constraints rather than triggers. Sometimes it is not at all obvious how to do this, and the history table, and other temporal data tables, presented problems for checking data that were difficult to solve with constraints. Suddenly, Alex Kuznetsov came up with a good solution, and so now history tables can benefit from more effective integrity checking. Joe explains...
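    For flavour, a sketch of the general constraint-only idea (an illustrative schema, not necessarily the article's exact solution): store each period's previous end alongside it, force it to equal the period's start with a CHECK, and force it to reference a real period end with a composite FOREIGN KEY; gaps and overlaps then cannot be inserted at all.

        CREATE TABLE dbo.Periods
        (
            ItemID             int      NOT NULL,
            StartedAt          datetime NOT NULL,
            FinishedAt         datetime NOT NULL,
            PreviousFinishedAt datetime NULL,  -- NULL only for an item's first period
            CONSTRAINT PK_Periods PRIMARY KEY (ItemID, StartedAt),
            CONSTRAINT UQ_Periods_End UNIQUE (ItemID, FinishedAt),
            CONSTRAINT CK_Periods_Order CHECK (StartedAt < FinishedAt),
            -- each period must begin exactly where the previous one finished
            CONSTRAINT CK_Periods_Contiguous CHECK (PreviousFinishedAt = StartedAt),
            CONSTRAINT FK_Periods_Previous FOREIGN KEY (ItemID, PreviousFinishedAt)
                REFERENCES dbo.Periods (ItemID, FinishedAt)
        );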

  • SSMS Tools Pack 3.0 is out. Full SSMS 2014 support and improved features.

    - by Mladen Prajdic
    With version 3.0, SSMS 2014 is fully supported. Since this is a new major version you'll eventually need a new license; please check the EULA to see when. As a thank you for your patience with this release, everyone who bought the SSMS Tools Pack after April 1st, the release date of SQL Server 2014, will receive a free upgrade. You won't have to do anything for this to take effect.

    First thing you'll notice is that the UI has been completely changed. It's more in line with SSMS and looks less web-like. The core has also been updated and rewritten in some places to be better suited for future features. Major improvements for this release are:

    Window Connection Coloring
    Something a lot of people have asked me over the last 2 years is if there's a way to color the tab of the window itself. I'm very glad to say that now there is. In SSMS 2012 and higher the actual query window tab is also colored at the top border with the same color as the already existing strip, making it much easier to see which server your query window is connected to, even when a window is not focused. To make it even better, you can now also specify the desired color based on the database name, not just the server name. This makes it useful for production environments, where you need to be careful which database you run your queries in.

    Format SQL
    The Format SQL core was rewritten so it'll be easier to improve in future versions. A new improvement is the ability to terminate SQL statements with semicolons. This is available only in SSMS 2012 and up.

    Execution Plan Analyzer
    A big request was to implement the Problems and Solutions tooltip as a window that you can copy the text from. This is now available: you can move the window around and copy text from it. It's a small improvement, but better stuff will come.

    SQL History
    Current Window History has been improved with faster search, and it now also shows the color of the server/database the query was run against. This is very helpful if you change your connection in the same query window, making it clear which server/database you ran the query on. An option to force-save the history has been added: a menu item that flushes the execution and tab-content history save buffers to disk.

    SQL Snippets
    Added an option to generate a snippet from selected SQL text on the right-click menu.

    Run script on multiple databases
    Configurable database groups that you can save and reuse have been added. You can create groups of preselected databases to choose from for each server. This makes repetitive tasks much easier.

    New small team licensing option
    A lot of requests came in for a 1 computer, unlimited VMs option, so now it's here. Hope it serves you well.
