Search Results

Search found 28643 results on 1146 pages for 'go'.


  • When is it time to buy a new hard drive, and what considerations go into buying a new hard drive?

    - by user1125620
    I've had my current hard drive for about 4-5 years now, and I've never had a problem with it before, but now it's making whirring noises. It's done this before and, last time, the noise did go away the next day, but I have accumulated quite a bit of information on the drive that I wouldn't want to lose. HD Tune Pro and Belarc Advisor both said the drive was healthy, and I wouldn't want to get a new one unless it was absolutely necessary or was going to show drastic performance improvements. My only knock against the drive would be that Visual Studio takes longer to load than I'd like it to. HD Tune Pro says the average read speed is 54.3MB/s. I'm not sure if that's good or bad, but it seems about average compared to similar drives on http://www.hdtune.com/testresults.html. Model #: WDC WD5000AAJS-22YFA0. So, should hard drives be replaced after a certain amount of time? Has mine reached that point? Would a new hard drive be any faster?

    Read the article

  • What happened to the ability to go back in an already-loaded YouTube video?

    - by Chelonian
    For many years, my experience with YouTube had been, consistently and on any computer/browser, that once a section of a video (or the whole video) was loaded, if you wanted to see an earlier part of the video again, you could simply click on the loading bar at that area and you would instantly go to that part. More recently (6 months?) this has changed. Now, clicking on an earlier part of the video forces a reload of the whole video (that is, the grey bar goes back to 0) and you have to hope your loading will be fast enough to make waiting to watch it again even worth it. It is a huge regression in usability for me. Is there any way to watch YouTube videos in the "old way"? I use Firefox 17.0.1 at this point and have an updated (I think) Flash, but I'd be willing to try other browsers or versions if it made a difference.

    Read the article

  • Problems Enforcing Referential Integrity on SQL Server Tables

    - by SidC
    Hello All, I have a SQL Server 2005 database comprised of Customer, Quote, and QuoteDetail tables. I want/need to enforce referential integrity such that when an insert is made on QuoteDetail, the Quote and Customer tables are also affected. I have tried my best to set up primary/foreign keys on my tables but need some help. Here are the scripts for my tables as they stand now (please don't laugh):

    Customers:

    USE [Diel_inventory]
    GO
    /****** Object: Table [dbo].[Customers] Script Date: 05/08/2010 03:39:04 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[Customers](
    [pkCustID] [int] IDENTITY(1,1) NOT NULL,
    [CompanyName] [nvarchar](50) NULL,
    [Address] [nvarchar](50) NULL,
    [City] [nvarchar](50) NULL,
    [State] [nvarchar](2) NULL,
    [ZipCode] [nvarchar](5) NULL,
    [OfficePhone] [nvarchar](12) NULL,
    [OfficeFAX] [nvarchar](12) NULL,
    [Email] [nvarchar](50) NULL,
    [PrimaryContactName] [nvarchar](50) NULL,
    CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([pkCustID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    Quotes:

    USE [Diel_inventory]
    GO
    /****** Object: Table [dbo].[Quotes] Script Date: 05/08/2010 03:30:46 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[Quotes](
    [pkQuoteID] [int] IDENTITY(1,1) NOT NULL,
    [fkCustomerID] [int] NOT NULL,
    [QuoteDate] [timestamp] NOT NULL,
    [NeedbyDate] [datetime] NULL,
    [QuoteAmt] [decimal](6, 2) NOT NULL,
    [QuoteApproved] [bit] NOT NULL,
    [fkOrderID] [int] NOT NULL,
    CONSTRAINT [PK_Bids] PRIMARY KEY CLUSTERED ([pkQuoteID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[Quotes] WITH CHECK ADD CONSTRAINT [fkCustomerID] FOREIGN KEY([fkCustomerID]) REFERENCES [dbo].[Customers] ([pkCustID])
    GO
    ALTER TABLE [dbo].[Quotes] CHECK CONSTRAINT [fkCustomerID]

    QuoteDetail:

    USE [Diel_inventory]
    GO
    /****** Object: Table [dbo].[QuoteDetail] Script Date: 05/08/2010 03:31:58 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[QuoteDetail](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [fkQuoteID] [int] NOT NULL,
    [fkCustomerID] [int] NOT NULL,
    [fkPartID] [int] NULL,
    [PartNumber1] [float] NOT NULL, [Qty1] [int] NOT NULL,
    [PartNumber2] [float] NULL, [Qty2] [int] NULL,
    [PartNumber3] [float] NULL, [Qty3] [int] NULL,
    [PartNumber4] [float] NULL, [Qty4] [int] NULL,
    [PartNumber5] [float] NULL, [Qty5] [int] NULL,
    [PartNumber6] [float] NULL, [Qty6] [int] NULL,
    [PartNumber7] [float] NULL, [Qty7] [int] NULL,
    [PartNumber8] [float] NULL, [Qty8] [int] NULL,
    [PartNumber9] [float] NULL, [Qty9] [int] NULL,
    [PartNumber10] [float] NULL, [Qty10] [int] NULL,
    [PartNumber11] [float] NULL, [Qty11] [int] NULL,
    [PartNumber12] [float] NULL, [Qty12] [int] NULL,
    [PartNumber13] [float] NULL, [Qty13] [int] NULL,
    [PartNumber14] [float] NULL, [Qty14] [int] NULL,
    [PartNumber15] [float] NULL, [Qty15] [int] NULL,
    [PartNumber16] [float] NULL, [Qty16] [int] NULL,
    [PartNumber17] [float] NULL, [Qty17] [int] NULL,
    [PartNumber18] [float] NULL, [Qty18] [int] NULL,
    [PartNumber19] [float] NULL, [Qty19] [int] NULL,
    [PartNumber20] [float] NULL, [Qty20] [int] NULL,
    CONSTRAINT [PK_QuoteDetail] PRIMARY KEY CLUSTERED ([ID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_Customers] FOREIGN KEY ([fkCustomerID]) REFERENCES [dbo].[Customers] ([pkCustID])
    GO
    ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_Customers]
    GO
    ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_PartList] FOREIGN KEY ([fkPartID]) REFERENCES [dbo].[PartList] ([RecID])
    GO
    ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_PartList]
    GO
    ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_Quotes] FOREIGN KEY([fkQuoteID]) REFERENCES [dbo].[Quotes] ([pkQuoteID])
    GO
    ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_Quotes]

    Your advice/guidance on how to set these up so that the customer ID in Customers is the same as in Quotes (referential integrity), and so that CustomerID is inserted on Quotes and Customers when an insert is made to QuoteDetail, would be much appreciated. Thanks, Sid
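    One way to think about the asker's goal, with a hedged sketch: foreign keys can only verify that a child row points at an existing parent; they cannot make an insert into QuoteDetail create rows in Quotes or Customers (that part needs application code or a trigger). A minimal sketch of the constraint chain, assuming a normalized detail table (the _Sketch names are hypothetical and not part of the asker's schema):

    -- Sketch only: one part per detail row replaces PartNumber1..20/Qty1..20,
    -- and fkCustomerID is dropped because it is reachable through Quotes.
    CREATE TABLE [dbo].[QuoteDetail_Sketch](
    [ID] INT IDENTITY(1,1) NOT NULL CONSTRAINT [PK_QuoteDetail_Sketch] PRIMARY KEY,
    [fkQuoteID] INT NOT NULL,
    [fkPartID] INT NOT NULL,
    [Qty] INT NOT NULL,
    CONSTRAINT [FK_QuoteDetail_Sketch_Quotes] FOREIGN KEY ([fkQuoteID])
    REFERENCES [dbo].[Quotes] ([pkQuoteID])
    ON DELETE CASCADE -- deleting a quote removes its detail rows
    );
    GO
    -- The customer for any detail row is then derived by a join, not duplicated:
    -- SELECT q.fkCustomerID FROM dbo.QuoteDetail_Sketch d
    -- JOIN dbo.Quotes q ON q.pkQuoteID = d.fkQuoteID;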

    Read the article

  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping - and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons - e.g. using proprietary extensions to PHP, Apache and in the database layer as well) - but I want to control costs and don't want to go the fully private server route until we have determined the market size etc. So the only real alternatives, AFAIK, are a virtual server and the cloud. At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is indistinguishable from what a hosting provider provides (at least from a functional point of view) - I would like to seek some clarification on exactly what the difference between the two is. Back to my original question, my requirements are: IT infrastructure that can scale with growth; the ability to have control of the machine (e.g. to install our internally developed libraries etc.); and backup software that is flexible and comprehensive enough (yet simple to use) that a (secured) backup strategy can be implemented. On this issue, I have always wondered where the actual backed-up data is stored (since the physical machines are remote, and one can't get access to any actual tapes etc. that the data is backed up onto). I would also like some advice and recommendations in this area. Regarding data size, I am expecting the dataset to be increasing by a few megabytes of data every day (originally, say, 10 MB; in about a year's time, possibly 50 MB). As an aside, I have decided to deploy on a Debian server (most of my additional libraries etc. were compiled and built on a Debian machine). Mindful of all of the above, I would like some advice (and reasons) as to which route to take. I would also like some advice on which backup software to use, from people who have walked a similar path.

    Read the article

  • Apache2 server running a Ruby on Rails application has a GoDaddy cert that works in Chrome/Firefox and IE 9 but not IE 8

    - by ryan
    I have a Rails application up on a Linode Ubuntu 11 server, running Apache2. I have a cert purchased from GoDaddy (where we also bought our domain), and the cert is installed on my server. Part of my virtual host file:

    ServerName my_site.com
    ServerAlias www.my_site.com
    SSLEngine On
    SSLCertificateFile /path/my_site.com.crt
    SSLCertificateKeyFile /path/my_site.com.key
    SSLCertificateChainFile /path/gd_bundle.crt

    The cert works fine in Chrome, Firefox and IE 9+, but in IE 8 and below I get this error: "There is a problem with this website's security certificate. The security certificate presented by this website was issued for a different website's address." I'm hosting multiple Rails apps on this same server (4 right now, plus some old PHP sites that don't need SSL). I have tried googling every possible combination of the error/situation that I could think of, but at this point I'm shooting in the dark. The closest I could come up with is that some versions of IE don't support SNI. But that doesn't seem to apply here, because I am getting the warning on Windows 7 machines running IE 8, and the SNI limitation only seemed to apply to IE 8 if the operating system was Windows XP. So why is this cert being accepted by all browsers but giving me a warning in IE 8? Edit: Doing a little more digging, I figured out some more. It turns out this is affecting IE 9 as well. The problem seems to be that IE is not traversing the SSL chain to get to the right cert. Firefox and Chrome, when I go to view the certificate, show the correct one, but IE is showing one of our other sites' certificates. REAL QUESTION HERE: That being the case, why is IE not getting the right certificate when others are, and how do I fix it?

    Read the article

  • I go to www.facebook.com, but a completely different site appears.

    - by Rosarch
    I am going to www.facebook.com, but the site that appears is totally different. This occurs on Chrome 6+, IE9, and FF 3+. What could be happening? Is this a security risk? Facebook was working just fine, then all of a sudden this happened. Update: The same problem occurs on my netbook. Update 2: When I go to http://69.63.189.11/, it works fine. So... DNS problem? How do I fix it? Update 3: I checked the hosts file:

    # Copyright (c) 1993-2009 Microsoft Corp.
    #
    # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
    #
    # This file contains the mappings of IP addresses to host names. Each
    # entry should be kept on an individual line. The IP address should
    # be placed in the first column followed by the corresponding host name.
    # The IP address and the host name should be separated by at least one
    # space.
    #
    # Additionally, comments (such as these) may be inserted on individual
    # lines or following the machine name denoted by a '#' symbol.
    #
    # For example:
    #
    #      102.54.94.97     rhino.acme.com          # source server
    #       38.25.63.10     x.acme.com              # x client host
    #
    # localhost name resolution is handled within DNS itself.
    #      127.0.0.1        localhost
    #      ::1              localhost

    It looks like it hasn't been altered.

    Read the article

  • How can the route between two private IPs go via public IPs?

    - by Gilles
    I'm trying to understand what this output from traceroute means. I changed the IP addresses for privacy but retained the public/private IP range distinction.

    traceroute.db -e -n 10.1.1.9
    traceroute to (10.1.1.9), 30 hops max, 60 byte packets
     1  10.0.0.1   0.596 ms   0.588 ms   0.577 ms
     2  10.0.0.2   1.032 ms   1.029 ms   1.084 ms
     3  10.0.0.3   3.360 ms   3.355 ms   3.338 ms
     4  23.0.0.4   3.974 ms   4.592 ms   4.584 ms
     5  23.0.0.5  13.442 ms  13.445 ms  13.434 ms
     6  45.0.0.6  13.195 ms  12.924 ms  12.913 ms
     7  67.0.0.7  52.088 ms  51.683 ms  52.040 ms
     8  10.1.1.8  46.878 ms  44.575 ms  44.815 ms
     9  10.1.1.9  45.932 ms  45.603 ms  45.593 ms

    The first 10.0.* range is inside my organisation. The last 10.1.* range is another site of my organisation. The intermediate addresses belong to various ISPs. I expect that there is some kind of VPN between the two sites, but I don't know much about our network topology. What I don't understand is how the route can go from a private address through public addresses back into private addresses. Searching led me to Public IPs on MPLS Traceroute, which gives a possible explanation: MPLS. Is MPLS the only possible or most likely explanation? Otherwise what does this tell me about our network infrastructure? Bonus question for my edification: in this scenario, who is generating the ICMP TTL exceeded packets and, if relevant, mangling their source and destination addresses?

    Read the article

  • Where did my backup files go? Can they be recovered?

    - by Ken
    I just purchased a Western Digital Essential SE 1TB external hard drive from Best Buy, at their recommendation. I then exchanged it for a Toshiba Canvio (I think that was the name). I have a Toshiba Qosmio X505-Q898. The Canvio locked up my computer, rewrote some kind of OS file, and erased all the restore points as well as the system image backup (according to Best Buy), just by plugging it in for the first time. I never even got to the install part or anything -- plugged it in and it fried my computer. They spent about an hour and a half on my computer, got it back to a somewhat working condition and gave me access to my files. So now they say I have to back everything up, use my recovery disk and rewrite my OS. Enter the Essential. I brought it home last night, plugged it in and installed everything. It works perfectly, no problems. I backed up everything on it, and I unplugged and replugged it twice to make sure that everything was on it. The Essential told me it had both the HDD and SSD backed up. So I reinstalled my OS. I plugged the Essential in and everything loads right up. I went to retrieve my files, and the Western Digital has nothing on it. It shows all my music, pics, etc. as still being on my computer and needing to be backed up -- but there are no files on my computer now. Where is this information coming from, and where did my files go? It's about 810GB worth of files I've amassed over several years. Is there any way to recover data from this? I plan to contact Western Digital and Best Buy; I just thought I would check here too. Any advice will be appreciated, as a lot of these files are invaluable to me.

    Read the article

  • Is there a way to rewrite a URL to go to a file stored in the filefield of the node?

    - by kidbrax
    I have a custom content-type called 'document'. On this content-type, I have a File field where a user can upload a document. Assume the path to a document node is /somesite/document/tester. I would like to be able to link to /somesite/document/tester/file and have it automatically go to the file that is uploaded to the file field of the node. I have tried the url_alter module and am able to get the correct URL of the document, but when it tries to go there, it says "not found". It seems that my redirection is still trying to be rewritten with pathauto or something. Ultimately, we want to have a consistent URL for these documents so that a user can upload a replacement document and we can still use the same URLs. Any ideas?

    Read the article

  • How would you go to "design" a cart within a Zend Framework project?

    - by ÉricP
    Hi, I know ZF well, and a little bit of Magento, but I'm new to e-commerce, and I'm sure there are best practices to follow when designing a cart model. How would you go about designing a cart? I thought of two models, Model_Cart and Model_Cart_Item, used in conjunction with Zend_Session to store the cart in session. What is your feedback? How would you go about doing that? What should I know about writing a cart system? Note that I need a simple system; I don't even need to work with quantities.

    Read the article

  • Problem with iOS 4.2: when the user presses a list view item to go to a UIWebView page, the navigation button disappears on the second visit

    - by seahorse
    My app is a navigation-based application. The main menu contains the list view items. If I click one of them, it goes to the next view, which in this case takes me to a UIWebView-embedded web site. Everything looks great: I can view the content of the web page, and the navigation control's back button takes me to the main menu if I press it. However, I'm having an issue when I try to go back to the main menu after visiting that subview a second time. It loads the content of the UIWebView web page, but the navigation button is gone and won't let me go back to the main menu. This problem only appears on the latest iOS 4.2 version; it works great on 3.1 to 4.1. I would appreciate any hints or inputs. Note that this only seems to happen for subviews using UIWebView-embedded web content; I don't have any issue with other subviews.

    Read the article

  • Is it possible for PHP to generate a fresh page on every JavaScript history.go(-1)?

    - by Ho
    Hello, I have a PHP page (a.php) which is already sending these headers:

    <?php
    header('Cache-Control: no-cache, no-store, max-age=0, must-revalidate');
    header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
    header('Pragma: no-cache');
    ?>

    On the PHP page (a.php) there is a link to another page (b.html), and b.html has this JavaScript code:

    <script type="text/javascript">
    history.go(-1);
    </script>

    It seems to me that when the browser is "going back" to a.php, the content isn't fresh at all. Would you please advise me whether generating a completely fresh page on history.go(-1) is possible? Thank you.

    Read the article

  • Flex Builder: I want to go from a TitleWindow to a Panel.

    - by dejaninic
    Hi. I'm building a user login page and I want to go from a TitleWindow to a Panel. I'm using the following function, but it always takes me back to parentApplication. My question is: how can I go to my panel and not to the Application page? I know that I'm using parentApplication, but what should I use instead? Here is a part of my code:

    private function handleLogin(event:ResultEvent):void {
        Alert.show("You have succesfully logged in.", "Information", Alert.OK, null, null, null, Alert.OK);
        parentApplication.accountPage.mainService.getAccount();
        PopUpManager.removePopUp(this);
    }

    Read the article

  • Where should I go with hosting my site: VPS, GAE, another option?

    - by Jonathan Hayward
    My website, http://JonathansCorner.com/, began life before 1994 as www.imsa.edu/~jhayward/ and has been through various iterations and improvements to content, HTML, and the like, but remains a literature site that is, from a web administrator's perspective, fairly simple and primitive: a fair amount of static HTML and supporting files, a little bit of CGI and URI rewriting, .htaccess files providing Expires: headers and the like. An associated site demos various CGI scripts that fall under the category of "and other creations"; the site as a whole has the purpose of sharing my creative works, and so far a fairly rudimentary use of Apache functionality, supported by Unix tools to, for instance, update the RSS feed and the "starting point" link on the home page, has served that purpose fairly well. I looked around here on web hosting, and found the note on web host recommendations as a good note for "What are some of people's favorite web hosts overall," but I wanted to ask a more focused question of "What are the best web hosts for criteria XYZ:" I am looking at a VPS so I will have root, be able to install stuff and edit Apache's config files etc., running Gentoo or other Linux, BSD, or the like. I would like a system that is secure enough that the host's vulnerabilities are mostly the ones that come along with what I am trying to do: that is, I won't be trying to administer and secure an ancient Linux like some have complained about at 1and1. I would like good uptime/reliability and competent support staff: if the level 1 help desk is going to tell me to go to "My Computer" on a Linux box, I'd like to be able to get past them. Ideally I would like a site hosted within some place that will have low latency for U.S. visitors in particular. I would like a hosting solution that will be with a stable business, one that will probably be around, and one unlikely to vanish without warning. With those things specified, I would be interested in knowing what the less expensive options are. (I expect that some of the things I've specified will knock out all of the cheapest options, but I'm still interested in price.) With all that stated, I'd like to back up a bit and look at whether I am asking the right question. I am concerned that the above is a very good way of asking, "How can I keep my site in line with the wave of the past?" I am wondering if it might be specifically wiser to look to adapt my site to newer technologies instead of trying to keep it on older technologies. For instance, while I would hardly portray my site as a way to show off the full power of Google App Engine, the main site at least should be a straightforward port if I were to do that. And beyond Google App Engine, my knowledge of cloud solutions is basic. If it is a better and more future-proof solution to port my site to another kind of solution, I would be interested in knowing where those future-proof solutions lie. So I would be interested in wisdom. If the question I asked in detail is still a good question to be asking, what would people suggest? Or if I should seriously consider porting my site to a newer basic option, what should I try there? Any thoughts would be appreciated.

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked a puzzle about why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once. In the question I demonstrated that statistics which should have been updated after lots of inserts in the table were, in fact, not updated. (Read the details: SQL SERVER – When are Statistics Updated – What triggers Statistics to Update) In this example I created the following situation:

    Create a table
    Insert 1,000 records
    Check the statistics
    Insert 10 times more records – 10,000 rows
    Check the statistics – they will NOT be updated
    Auto Update Statistics and Auto Create Statistics for the database are TRUE

    I requested two things in the example: 1) Why is this happening? 2) How to fix this issue? I have many answers – here is how I fixed it, which resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list all of them. Solution: create a nonclustered index on column City. Here is the working example for the same. Let us understand this script; there is an added explanation at the end.

    -- Execution Plans Difference
    -- Estimated Execution Plan Vs Actual Execution Plan
    -- Create Sample Database
    CREATE DATABASE SampleDB
    GO
    USE SampleDB
    GO
    -- Create Table
    CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City);
    GO
    -- Insert One Thousand Records -- INSERT 1
    INSERT INTO ExecTable (ID, FirstName, LastName, City)
    SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
    'Bob',
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
         ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Select Statement
    SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Replace your Statistics over here
    DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
    GO
    --------------------------------------------------------------
    -- Round 2
    -- Insert One Thousand Records -- INSERT 2
    INSERT INTO ExecTable (ID, FirstName, LastName, City)
    SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
    'Bob',
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
         ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Select Statement
    SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Replace your Statistics over here
    DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
    GO
    -- Clean up Database
    DROP TABLE ExecTable
    GO

    When I created the nonclustered index on the column City, it also created statistics on the same column with the same name as the index. When we populate the data in the column, the index is updated – causing the execution plan to be invalidated – which leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap or on a column where the statistics are auto created. If you explicitly update the index, you can often see that the statistics are updated as well. You can see this is for sure happening if you follow the suggestion of John Sansom.

    John Sansom's suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled; instead the existing query plan is reused. It is the "next" compiled query against the column statistics that will see that they are out of date and will then in turn instantiate the action of updating statistics. You can see this in action by forcing the second statement to recompile.

    SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York' OPTION(RECOMPILE)
    GO

    Kevin Cross also has another suggestion: I agree with John. It is reusing the execution plan. Aside from OPTION(RECOMPILE), clearing the execution plan cache before the subsequent tests will also work, i.e., run this before round 2:

    -- Clear execution plan cache before next test
    DBCC FREEPROCCACHE WITH NO_INFOMSGS;

    Nice puzzle! Kevin

    As this was a puzzle, John and Kevin both got the correct answer; there was no condition for the answer to be part of best practices. I know John and he is one of the finest DBAs around – his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache with DBCC FREEPROCCACHE or recompiling each query every time is for sure not good advice on a production server. It is a correct answer, but not a best practice. By the way, if you have a better solution or suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my primary key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics
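    For readers who want to check on their own tables whether a statistics object has gone stale, here is one way to look; this is a small sketch added for convenience, not part of the original script, and sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 / SQL Server 2012 or later:

    -- When was each statistics object on ExecTable last updated, and how many
    -- rows have changed since? (modification_counter is the key column here.)
    SELECT s.name AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.ExecTable');
    GO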

    Read the article

  • SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor

    - by pinaldave
    Let us jump right away into question and answer mode.

    What is Resource Governor? Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server.

    Why is Resource Governor required? If there are different workloads running on SQL Server, and each of the workloads needs different resources, or when workloads are competing for resources with each other and affecting the performance of the whole server, Resource Governor becomes a very important tool.

    What would be a real-world example of the need for Resource Governor? Here are two simple scenarios where Resource Governor can be very useful.

    Scenario 1: A server is running an OLTP workload and various resource-intensive reports at the same time. The ideal situation is to have two servers which are data-synced with each other: one server runs the OLTP transactions and the second server runs all the resource-intensive reports. However, not everybody has the luxury of setting up this kind of environment. When reports and OLTP transactions are running on the same server, limiting the resources of the reporting workload ensures that OLTP's critical transactions are not throttled.

    Scenario 2: There are two DBAs in one organization. DBA A runs critical queries for the business and DBA B is doing maintenance of the database. At any point in time DBA A's work should not be affected, but at the same time DBA B should be allowed to work as well. The ideal situation is that when DBA B starts working he gets some resources, but he can't get more than the defined resources.

    Does SQL Server have any default Resource Governor components? Yes, SQL Server has two Resource Governor components created by default: 1) Internal – this is used exclusively by the database engine, and users have no control over it; 2) Default – this is used by all the workloads which are not assigned to any other group.

    What are the major components of Resource Governor? Resource Pools, Workload Groups, and Classification. In simple words, here is the process of Resource Governor: create a resource pool; create a workload group; create a classification function based on the criteria specified; enable Resource Governor with the classification function. (The original post illustrates this process with a diagram.)

    Is it possible to configure Resource Governor with T-SQL? Yes; here is the code for it, with explanation in between.

    Step 0: Here we are assuming that there are separate login accounts for the reporting workload and the OLTP workload.

    /*-----------------------------------------------
    Step 0: (Optional and for Demo Purpose)
    Create Two User Logins: 1) ReportUser, 2) PrimaryUser
    Use ReportUser login for Reports workload
    Use PrimaryUser login for OLTP workload
    -----------------------------------------------*/

    Step 1: Create resource pools. We are creating two resource pools: 1) Report Server and 2) Primary OLTP Server. We are giving only a few resources to the Report Server pool because, as described in Scenario 1, the OLTP server is mission critical and the report server is not.

    -----------------------------------------------
    -- Step 1: Create Resource Pool
    -----------------------------------------------
    -- Creating Resource Pool for Report Server
    CREATE RESOURCE POOL ReportServerPool
    WITH (
    MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30,
    MIN_MEMORY_PERCENT=0, MAX_MEMORY_PERCENT=30)
    GO
    -- Creating Resource Pool for OLTP Primary Server
    CREATE RESOURCE POOL PrimaryServerPool
    WITH (
    MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100,
    MIN_MEMORY_PERCENT=50, MAX_MEMORY_PERCENT=100)
    GO

    Step 2: Create workload groups. We are creating two workload groups, each mapping to one of the resource pools we have just created.

    -----------------------------------------------
    -- Step 2: Create Workload Group
    -----------------------------------------------
    -- Creating Workload Group for Report Server
    CREATE WORKLOAD GROUP ReportServerGroup
    USING ReportServerPool;
    GO
    -- Creating Workload Group for OLTP Primary Server
    CREATE WORKLOAD GROUP PrimaryServerGroup
    USING PrimaryServerPool;
    GO

    Step 3: Create a user-defined function which routes the workload to the appropriate workload group. In this example we are checking SUSER_NAME() and making the workload group selection; we could use other functions such as HOST_NAME(), APP_NAME(), IS_MEMBER() etc.

    -----------------------------------------------
    -- Step 3: Create UDF to Route Workload Group
    -----------------------------------------------
    CREATE FUNCTION dbo.UDFClassifier()
    RETURNS SYSNAME
    WITH SCHEMABINDING
    AS
    BEGIN
    DECLARE @WorkloadGroup AS SYSNAME
    -- Route each login created in Step 0 to its workload group
    IF (SUSER_NAME() = 'ReportUser')
    SET @WorkloadGroup = 'ReportServerGroup'
    ELSE IF (SUSER_NAME() = 'PrimaryUser')
    SET @WorkloadGroup = 'PrimaryServerGroup'
    ELSE
    SET @WorkloadGroup = 'default'
    RETURN @WorkloadGroup
    END
    GO

    Step 4: In this final step we enable Resource Governor with the classifier function created in Step 3.

    -----------------------------------------------
    -- Step 4: Enable Resource Governor
    -- with UDFClassifier
    -----------------------------------------------
    ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION=dbo.UDFClassifier);
    GO
    ALTER RESOURCE GOVERNOR RECONFIGURE
    GO

    Step 5: If you are following this demo and want to clean up your example, you should run the following script. Running it will disable your Resource Governor as well as delete all the objects created so far.

    -----------------------------------------------
    -- Step 5: Clean Up
    -- Run only if you want to clean up everything
    -----------------------------------------------
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL)
    GO
    ALTER RESOURCE GOVERNOR DISABLE
    GO
    DROP FUNCTION dbo.UDFClassifier
    GO
    DROP WORKLOAD GROUP ReportServerGroup
    GO
    DROP WORKLOAD GROUP PrimaryServerGroup
    GO
    DROP RESOURCE POOL ReportServerPool
    GO
    DROP RESOURCE POOL PrimaryServerPool
    GO
    ALTER RESOURCE GOVERNOR RECONFIGURE
    GO

    I hope this introductory example sheds enough light on the subject of Resource Governor. In future posts we will take this same example and learn a few more details. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Resource Governor
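    Once Step 4 has run, the configuration can be verified from the Resource Governor DMVs. This is a small verification sketch added here for convenience (not part of the original script); both DMVs exist in SQL Server 2008 and later:

    -- Confirm the pools are active and see their effective settings
    SELECT pool_id, name, min_cpu_percent, max_cpu_percent,
           min_memory_percent, max_memory_percent
    FROM sys.dm_resource_governor_resource_pools;
    GO
    -- Confirm each workload group is mapped to the expected pool
    SELECT group_id, name, pool_id, total_request_count
    FROM sys.dm_resource_governor_workload_groups;
    GO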

    Read the article

  • Should I go for Arrays or Objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not so much disturbing my workflow. At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error"=>"not found","data"=>$dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) if necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors. My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips? Edit: Kenaniah's answer suggests I should go the other way; this would mean I'd cast everything to arrays. And for all the Ajax/JSON stuff and also for CouchDB I would use $myarray = json_decode($json_data, $assoc = false). Even more work to change all the CouchDB and Ajax functions, but in the end I have better code.

    Read the article

  • How would I structure the loop to go through inputs?

    - by dmanexe
    I am attempting to make a loop that will go through an array structured like this:

    $input[n][checked]
    $input[n][input]

    The 2nd input acts as a price multiplier, but doesn't have to exist (the field can be blank). I don't think a foreach loop is right, because I don't think it handles the inputs from the form in the correct dimensional array order to keep the information together. I have inputs on a form that look like this:

    <input type="checkbox" name="measure[<?php echo $item->id; ?>][checked]" value="<?php echo $item->id; ?>">
    <input class="item_mult" type="text" name="measure[<?php echo $item->id; ?>][input]" />

    I am attempting to make the loop go through and act as a multiplier on the input relative to its sibling field (i.e. $input[1][input] would be an integer that I want to retrieve later, grouped with $input[1][checked]).

    <?
    $field = $this->input->post('measure', true);
    $totals = array();
    foreach ($field as $value):
        if ($value['input'] == TRUE) {
            $query = $this->db->get_where('items', array('id' => $value['input']))->row();
            $totals[] = $query->price;
    ?>
    <p><?=$query->name?> - <?=money_format('%(#10n', $query->price)?></p>
    <?php
        } else { }
    endforeach;
    ?>

    And finally, the last code to array_sum and print the grand total:

    <? $grand_total = array_sum($totals); ?>
    <p><?=money_format('%(#10n', $grand_total)?></p>

    Eventually, I will need to store these records in a database, so I am sending complete item IDs through, etc.

    Read the article

  • How do you go about finding out whether an idea you've had has already been patented?

    - by Iain Fraser
    I have an idea for image copy-protection that I'm in the process of coding up and plan on selling to one of my clients who sells images online. If successful, I think there would be a lot of people in a similar situation to my client who would be interested in the code also. I think this is a fairly unique idea that could be packaged into a saleable product - but if I did do this, I wouldn't want some big corporation descending on me with their lawyers after all my hard work. So before I put too much work into this, I'd really like to know how I'd go about finding out if this idea has been patented already, whether I'd get in trouble if I sold my product, and whether it would be worthwhile patenting the idea myself. Although I find the idea of software patenting abhorrent, it would be more to protect myself from the usual suspects than to stop fellow developers from using the idea (if it is in fact a worthwhile one). I live in Australia, so an idea of who to go and see, and a ballpark figure of how much money I'd be looking at having to pay, would be fantastic (in orders of magnitude: 100s, 1000s, 10s of thousands of dollars, etc). Cheers, Iain

    Read the article

  • Should I go for Arrays or Objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not so much disturbing my workflow. At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error"=>"not found","data"=>$dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) if necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors. My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips? Edit: Kenaniah's answer suggests I should go the other way; this would mean I'd cast everything to arrays. And for all the Ajax/JSON stuff and also for CouchDB I would use $myarray = json_decode($json_data, $assoc = true); // EDIT: changed to true, which is what I really meant. Even more work to change all the CouchDB and Ajax functions, but in the end I have better code.

    Read the article

  • SQL SERVER – Disable Clustered Index and Data Insert

    - by pinaldave
    Earlier today I received the following email. "Dear Pinal, [Removed unrelated content] We looked at your script and found that in your script for disabling indexes, you have only included the non-clustered indexes during the bulk insert and missed disabling all the clustered indexes. Our DBA [name removed] has changed your script a bit and included all the clustered indexes. Since then our application is not working. When DBA [name removed] tried to enable the clustered indexes again, he is facing an incorrect syntax error. We are in a deep problem [word replaced] [Removed identity of organization and few unrelated stuff]"

    I have replied to my client and helped them fix the problem. What really came to my attention is the concept of disabling a clustered index. Let us try to learn a lesson from this experience.

    In this case, there was no need to disable the clustered indexes at all. I had done the necessary work when I was called in on the tuning project: I had removed unused indexes, created a few optimal indexes, and written a script to disable a few selected high-cost indexes when bulk insert (and similar) operations are performed. There was another script which rebuilt all the indexes as well. The solution worked till they included the clustered indexes in the disabling script.

    A clustered index is in fact the original table (or heap) physically ordered (among other things – not the scope of this article) according to one or more keys (columns). When a clustered index is disabled, the data rows of the disabled clustered index cannot be accessed. This means no insert is possible. When a nonclustered index is disabled, all the data related to the index is physically deleted, but the definition of the index is kept in the system. For the same reason, even reorganization of the index is not possible till the clustered index (which was disabled) is rebuilt.

    Now let us come to the second part of the question, regarding receiving the error when the clustered index is 'enabled'. This is a very common question I receive on the blog. (The following statement is written keeping the syntax of T-SQL in mind.) Clustered indexes can be disabled but cannot be enabled; they have to be rebuilt. It is intuitive to think that something which we have 'disabled' can be 'enabled', but the syntax for this is 'rebuild'. This issue has been explained here: SQL SERVER – How to Enable Index – How to Disable Index – Incorrect syntax near 'ENABLE'. Let us go over this example where inserting data is not possible when the clustered index is disabled.

    USE AdventureWorks
    GO
    -- Create Table
    CREATE TABLE [dbo].[TableName](
    [ID] [int] NOT NULL,
    [FirstCol] [varchar](50) NULL,
    CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
    )
    GO
    -- Create Nonclustered Index
    CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName]
    ON [dbo].[TableName] ([FirstCol] ASC)
    GO
    -- Populate Table
    INSERT INTO [dbo].[TableName]
    SELECT 1, 'First'
    UNION ALL
    SELECT 2, 'Second'
    UNION ALL
    SELECT 3, 'Third'
    GO
    -- Disable Nonclustered Index
    ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Insert Data should work fine
    INSERT INTO [dbo].[TableName]
    SELECT 4, 'Fourth'
    UNION ALL
    SELECT 5, 'Fifth'
    GO
    -- Disable Clustered Index
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Insert Data will fail
    INSERT INTO [dbo].[TableName]
    SELECT 6, 'Sixth'
    UNION ALL
    SELECT 7, 'Seventh'
    GO
    /*
    Error: Msg 8655, Level 16, State 1, Line 1
    The query processor is unable to produce a plan because the index 'PK_TableName' on table or view 'TableName' is disabled.
    */
    -- Reorganizing Index will also throw an error
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] REORGANIZE
    GO
    /*
    Error: Msg 1973, Level 16, State 1, Line 1
    Cannot perform the specified operation on disabled index 'PK_TableName' on table 'dbo.TableName'.
    */
    -- Rebuilding should work fine
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
    GO
    -- Insert Data should work fine
    INSERT INTO [dbo].[TableName]
    SELECT 6, 'Sixth'
    UNION ALL
    SELECT 7, 'Seventh'
    GO
    -- Clean Up
    DROP TABLE [dbo].[TableName]
    GO

    I hope this example is clear enough. There are a few additional posts I had written years ago; I am listing them here: SQL SERVER – Enable and Disable Non Clustered Indexes Using T-SQL; SQL SERVER – Enabling Clustered and Non-Clustered Indexes – Interesting Fact. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
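    Before rebuilding after a bulk operation, it can help to confirm exactly which indexes are currently disabled. Here is a small sketch added for that purpose (not part of the original post); sys.indexes and its is_disabled column are standard catalog views:

    -- List every disabled index in the current database, with its table
    SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
           OBJECT_NAME(i.object_id) AS table_name,
           i.name AS index_name,
           i.type_desc
    FROM sys.indexes AS i
    WHERE i.is_disabled = 1;
    GO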

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for a performance analysis of paging, so here is a quick analysis of it. The real challenge of paging is all the unnecessary IO reads from the database. Network traffic is one of the reasons why paging has become a very expensive operation: I have seen many legacy applications where the complete resultset is brought back to the application and paging is done there. As you read earlier, SQL Server 2011 offers a better alternative to this age-old solution. This article is divided into two parts:

    Test 1: Performance comparison of two different pages using the SQL Server 2011 method. In this test, we will analyze the performance of two different pages, where one is at the beginning of the table and the other one is at its end.

    Test 2: Performance comparison of two different pages using a CTE (the earlier solution from SQL Server 2005/2008) and the new method of SQL Server 2011. We will explore this in the next article.

    This article will tackle Test 1 first.

    Test 1: Retrieving a page from two different locations of the table. Run the following T-SQL script and compare the performance.

    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO

    You will notice that when we read the page from the beginning of the table, the database pages read are much fewer than when the page is read from the end of the table. This is very interesting: as the OFFSET changes, PAGE IO increases or decreases. In the normal case of a search engine, people usually read from the first few pages, which means that IO will increase as we go further into the higher parts of navigation. I am really impressed because, using the new method of SQL Server 2011, PAGE IO will be much lower when the first few pages are searched in the navigation.

    Test 2: Retrieving a page from two different locations of the table and comparing to earlier versions. In this test, we will compare the queries of Test 1 with the earlier solution via Common Table Expression (CTE) which we utilized in SQL Server 2005 and SQL Server 2008.

    Test 2 A: Page early in the table

    -- Test with pages early in table
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    ;WITH CTE_SalesOrderDetail AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNumber
    FROM Sales.SalesOrderDetail PC)
    SELECT *
    FROM CTE_SalesOrderDetail
    WHERE RowNumber >= @PageNumber*@RowsPerPage+1
    AND RowNumber <= (@PageNumber+1)*@RowsPerPage
    ORDER BY SalesOrderDetailID
    GO
    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO

    Test 2 B: Page later in the table

    -- Test with pages later in table
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    ;WITH CTE_SalesOrderDetail AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNumber
    FROM Sales.SalesOrderDetail PC)
    SELECT *
    FROM CTE_SalesOrderDetail
    WHERE RowNumber >= @PageNumber*@RowsPerPage+1
    AND RowNumber <= (@PageNumber+1)*@RowsPerPage
    ORDER BY SalesOrderDetailID
    GO
    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO

    From the resultset, it is very clear that in the earlier solution the pages read are always much higher than with the new technique introduced in SQL Server 2011, even though we don't retrieve all the data to the screen. If you look carefully at both comparisons, the PAGE IO is much lower with the new technique introduced in SQL Server 2011, both when we read the page from the beginning of the table and when we read it from the end. I consider this a big improvement, as paging is one of the most-used features in a large part of applications. The solution introduced in SQL Server 2011 is very elegant because it also improves the performance of the query and, at large, the database. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
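    To make the pattern reusable, the OFFSET/FETCH form drops naturally into a parameterized procedure. A minimal sketch (the procedure name is hypothetical, and FETCH NEXT uses the page-size parameter rather than the literal 10 from the test script):

    CREATE PROCEDURE dbo.GetSalesOrderDetailPage
        @PageNumber INT,   -- zero-based page index
        @RowsPerPage INT
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber * @RowsPerPage ROWS
        FETCH NEXT @RowsPerPage ROWS ONLY;
    END
    GO
    -- Example: the fifth page of ten rows
    EXEC dbo.GetSalesOrderDetailPage @PageNumber = 5, @RowsPerPage = 10;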

    Read the article

  • SQL SERVER – Rename Columnname or Tablename – SQL in Sixty Seconds #032 – Video

    - by pinaldave
    We all make mistakes at some point in time and we all change our opinions. There are quite a lot of people in the world who have changed their name after they have grown up. Some corrected their parents' mistake and some created a new one. Well, databases are not protected from such incidents. There are many reasons why developers may want to change the name of a column or table after it was initially created. The goal of this video is not to dwell on the reasons but to learn how we can rename a column or a table. Earlier I wrote an article on this subject over here: SQL SERVER – How to Rename a Column Name or Table Name. I have revised the same article here and created this video. There is one very important point to remember: by changing a column name or table name, one creates the possibility of errors in the applications where the columns and tables are used. When any column or table name is changed, the developer should go through every place in the code base, ad-hoc queries, stored procedures, views and any other place where there is a possibility of their usage, and change them to the new name. If this is not followed religiously, there is quite a good chance that the application will stop working due to the name change. One has to remember that changing a column name does not change the name of the indexes, constraints etc., and they will continue to reference the old name. Though this will not stop the show, it will create visual discomfort as well as confusion in many cases. Here is my question back to you – have you ever changed a column name or table name in a production database (after the project went live)? If yes, what was the scenario and the need for doing it? After all, it is just a name. Let me know what you think of this video. Here is the updated script.

    USE tempdb
    GO
    CREATE TABLE TestTable (ID INT, OldName VARCHAR(20))
    GO
    INSERT INTO TestTable VALUES (1, 'First')
    GO
    -- Check the Tabledata
    SELECT * FROM TestTable
    GO
    -- Rename the ColumnName
    sp_RENAME 'TestTable.OldName', 'NewName', 'Column'
    GO
    -- Check the Tabledata
    SELECT * FROM TestTable
    GO
    -- Rename the TableName
    sp_RENAME 'TestTable', 'NewTable'
    GO
    -- Check the Tabledata - Error
    SELECT * FROM TestTable
    GO
    -- Check the Tabledata - New
    SELECT * FROM NewTable
    GO
    -- Cleanup
    DROP TABLE NewTable
    GO

    Related Tips in SQL in Sixty Seconds: SQL SERVER – How to Rename a Column Name or Table Name. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
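    Before running sp_RENAME, it is worth hunting down the code that still references the old name, since the rename itself will not update it. A small sketch added here (not part of the video's script); sys.sql_expression_dependencies is available from SQL Server 2008 onward:

    -- Find stored procedures, views, functions etc. that reference the table
    SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
           OBJECT_NAME(d.referencing_id) AS referencing_object,
           d.referenced_entity_name
    FROM sys.sql_expression_dependencies AS d
    WHERE d.referenced_entity_name = 'TestTable';
    GO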

    Read the article
