Search Results

Search found 89127 results on 3566 pages for 'backup server'.


  • SQL Server Reporting Services - Fast TimeDataRetrieval - Long TimeProcessing

    - by user197529
    An application that I support has recently begun taking a very long time to execute reports in SQL Server Reporting Services. The reports are not terribly complex: each runs between 5 and 8 stored procedures that return anywhere from a handful to 8,000 records in total, and the reports are generally 2 to 100 pages. One can argue (and I have) the benefit of a 100-page report, but the client is footing the bill. At any rate, the problem is that even a report returning 500 records (11 pages) takes 5 minutes to reach the browser. In the execution log, TimeDataRetrieval is 60 seconds but TimeProcessing is 235 seconds. It seems bizarre that my queries run so quickly yet Reporting Services takes so long to process the data. Any suggestions are greatly appreciated. Kind regards, Bernie
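
    Not an answer as such, but one way to see where the time goes is to read the execution log directly from the report server catalog. A hedged sketch, assuming a default install where the catalog database is named ReportServer (TimeRendering is the third bucket alongside the two mentioned above):

        -- Recent executions with the retrieval / processing / rendering split (times in ms).
        SELECT TOP 20
               c.Name,
               e.TimeStart,
               e.TimeDataRetrieval,   -- running the queries
               e.TimeProcessing,      -- grouping, sorting, aggregating
               e.TimeRendering,       -- producing the output format
               e.[RowCount]
        FROM   ReportServer.dbo.ExecutionLog AS e
        JOIN   ReportServer.dbo.Catalog      AS c ON c.ItemID = e.ReportID
        ORDER BY e.TimeStart DESC;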

    Read the article

  • TFSBuild.Proj and Manual SQL Server Work Help?

    - by ScSub
    Using the VS 2008 GDR update, I have created a database project, a SQL Server deployment package, and a database unit test. Via some wizards, all of this ended up in my tfsbuild.proj file, so a database is created near the end of the automated build process. I now see that I lack some control over the whole process. What I would like to do is deploy the DB manually, run 3 custom scripts against it, and then manually start the DB unit test. I have other, non-DB unit tests that already run. I do not want to use VSMDI or ordered unit tests because they get messy in our multi-developer environment. Help!

    Read the article

  • SQL Server: SELECT rows with MAX(Column A), MAX(Column B), DISTINCT by related columns

    - by z531
    Scenario:

        Table A
        MasterID  Added Date  Added By   Updated By         Updated Date
        --------  ----------  ---------  -----------------  ------------
        1         1/1/2010    'Fred'     null               null
        2         1/2/2010    'Barney'   'Mr. Slate'        1/7/2010
        3         1/3/2010    'Noname'   null               null

        Table B
        MasterID  Added Date  Added By   Updated By         Updated Date
        --------  ----------  ---------  -----------------  ------------
        1         1/3/2010    'Wilma'    'The Great Kazoo'  1/5/2010
        2         1/4/2010    'Betty'    'Dino'             1/4/2010

        Table C
        MasterID  Added Date  Added By   Updated By         Updated Date
        --------  ----------  ---------  -----------------  ------------
        1         1/5/2010    'Pebbles'  null               null
        2         1/6/2010    'BamBam'   null               null

        Table D
        MasterID  Added Date  Added By   Updated By         Updated Date
        --------  ----------  ---------  -----------------  ------------
        1         1/2/2010    'Noname'   null               null
        3         1/4/2010    'Wilma'    null               null

    I need to return the max Added Date with its corresponding user, and the max Updated Date with its corresponding user, for each distinct MasterID when tables A, B, C and D are UNION'ed, i.e.:

        1, 1/5/2010, 'Pebbles', 'The Great Kazoo', 1/5/2010
        2, 1/6/2010, 'BamBam', 'Mr. Slate', 1/7/2010
        3, 1/4/2010, 'Wilma', null, null

    I know how to do this with one date/user per row, but with two it is beyond me. The DBMS is SQL Server 2005; a T-SQL solution is preferred. Thanks in advance, Dave
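
    A hedged sketch of one way to shape this with SQL Server 2005's ROW_NUMBER(); the table and column names below are assumptions based on the sample data (the originals contain spaces, so they would need bracketing):

        ;WITH AllRows AS (
            SELECT MasterID, AddedDate, AddedBy, UpdatedBy, UpdatedDate FROM TableA
            UNION ALL
            SELECT MasterID, AddedDate, AddedBy, UpdatedBy, UpdatedDate FROM TableB
            UNION ALL
            SELECT MasterID, AddedDate, AddedBy, UpdatedBy, UpdatedDate FROM TableC
            UNION ALL
            SELECT MasterID, AddedDate, AddedBy, UpdatedBy, UpdatedDate FROM TableD
        ),
        LastAdded AS (      -- newest Added Date per MasterID, with its user
            SELECT MasterID, AddedDate, AddedBy,
                   ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY AddedDate DESC) AS rn
            FROM   AllRows
        ),
        LastUpdated AS (    -- newest Updated Date per MasterID, with its user
            SELECT MasterID, UpdatedDate, UpdatedBy,
                   ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY UpdatedDate DESC) AS rn
            FROM   AllRows
            WHERE  UpdatedDate IS NOT NULL
        )
        SELECT a.MasterID, a.AddedDate, a.AddedBy, u.UpdatedBy, u.UpdatedDate
        FROM   LastAdded a
        LEFT JOIN LastUpdated u ON u.MasterID = a.MasterID AND u.rn = 1
        WHERE  a.rn = 1
        ORDER BY a.MasterID;

    The LEFT JOIN keeps rows like MasterID 3, which has never been updated, with NULLs in the updated columns.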

    Read the article

  • Table fragmentation in SQL Server

    - by Joseph
    Hi, does anybody have an idea about table fragmentation in SQL Server (not index fragmentation)? We have a main table that does not store data permanently; rows come in and go out continuously. There is no index on it, because only INSERT and DELETE statements run against it, and they run frequently. Recently we have faced huge delays in responses from this table: a SELECT takes 2 to 5 minutes to return results even though there are very few rows in it. In the end we dropped and recreated the table, and now it works fine. Any comments on how this happens would be appreciated. Joseph
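
    If this is SQL Server 2005 or later, one hedged thing to check is the heap itself: with heavy insert/delete churn a heap can accumulate nearly empty pages (and any forwarded records) that a scan still has to read. A sketch, with placeholder database and table names:

        -- Inspect the heap (index_id = 0) for wasted space and forwarded records.
        SELECT index_type_desc,
               page_count,
               avg_page_space_used_in_percent,
               forwarded_record_count
        FROM   sys.dm_db_index_physical_stats(
                   DB_ID('MyDatabase'), OBJECT_ID('dbo.MainTable'),
                   0, NULL, 'DETAILED');

    A large page_count with a low avg_page_space_used_in_percent would be consistent with a tiny table that is slow to scan.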

    Read the article

  • What to consider if using triggers on tables in a sql-server merge replication

    - by Ice
    Hi, for several years I have been running a SQL Server 2000 merge replication across three locations. Triggers do a lot of work in this database, and I have had no trouble. Now, migrating the database to a brand-new SQL Server 2008, I am having issues with the triggers: they fire even while the merge agent is doing its work. Does anybody have experience with this kind of thing on SQL Server 2008? Can anybody confirm this change in behaviour compared to SQL Server 2000? Peace, Ice
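
    One thing that may be worth checking (a sketch, not a confirmed diagnosis): triggers can be created with the NOT FOR REPLICATION clause so that they do not fire when the change is applied by a replication agent, for example:

        -- Hypothetical trigger; NOT FOR REPLICATION is the relevant part.
        CREATE TRIGGER dbo.trg_Orders_Audit
        ON dbo.Orders
        AFTER INSERT, UPDATE
        NOT FOR REPLICATION
        AS
        BEGIN
            SET NOCOUNT ON;
            -- ... trigger body ...
        END;

    Whether the scripted-out SQL Server 2008 versions of the triggers kept that clause during the migration is worth verifying.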

    Read the article

  • Accessing a SQL Server table is slow - very little data inside

    - by Joseph
    Dear all, I have a temp table; data keeps coming in and going out. These days, even when there are very few records, a SELECT takes a long time. I can't put an index on it because it is a temporary table. The only fix I have found is to drop and recreate the table, after which it works fine. Any idea why this happens? Is it some kind of fragmentation? If there were an index we could check its fragmentation, but with no index, what should we check? We are using SQL Server 2008 64-bit. Thanks, Joseph
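
    Since this is SQL Server 2008, one hedged alternative to dropping and recreating the table is rebuilding the heap in place, which releases the empty and sparsely filled pages that constant inserts and deletes can leave behind (the table name is a placeholder):

        -- Rebuild the heap without recreating it (SQL Server 2008 and later).
        ALTER TABLE dbo.MyStagingTable REBUILD;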

    Read the article

  • Compound IDENTITY column in SQL SERVER 2008

    - by Asaf R
    An Orders table has a CustomerId column and an OrderId column. For certain reasons it's important that OrderId is no longer than 2 bytes. There will be several million orders in total, which makes 2 bytes not enough. A customer will have no more than several thousand orders, making 2 bytes enough. The obvious solution is to have (CustomerId, OrderId) be unique rather than (OrderId) itself. The problem is generating the next OrderId for a customer, preferably without creating a separate table for each customer (even if it contains only an IDENTITY column), in order to keep the upper layers of the solution simple. Q: how would you generate the next OrderId so that (CustomerId, OrderId) is unique but OrderId itself is allowed to have repetitions? Does SQL Server 2008 have built-in functionality for doing this? For lack of a better term I'm calling it a compound IDENTITY column.
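
    As far as I know there is no built-in compound IDENTITY in SQL Server 2008, so the usual workaround is to compute the next per-customer value inside the insert itself, with locking hints so concurrent inserts for the same customer do not collide. A sketch only; the table and column names follow the question, everything else is an assumption:

        -- Assumes a primary key or unique constraint on (CustomerId, OrderId),
        -- and that this runs inside the same transaction as the rest of the order insert.
        INSERT INTO dbo.Orders (CustomerId, OrderId /*, other columns */)
        SELECT @CustomerId,
               COALESCE(MAX(OrderId), 0) + 1
        FROM   dbo.Orders WITH (UPDLOCK, HOLDLOCK)
        WHERE  CustomerId = @CustomerId;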

    Read the article

  • wowza vs Flash Media Server (FMS / FMIS) - ease of integration with ASP.Net

    - by alchemical
    We're creating a web site offering one-to-many video chat and are trying to decide which of these streaming servers to go with. We're looking at roughly 256 kbps live streams and hoping to achieve at least 1000 simultaneous streams on one 8-core server. Wowza is cheaper (1k vs 5k for FMS) and appears to be used successfully by many sites (StreamLive, Justin.TV, etc.). However, some people have said it may be more difficult to work with, i.e. fine-tuning, less documentation, integration with ASP.NET code, etc. I'm wondering if anyone with real-world experience with either of these servers can advise on how easy or difficult they are to use and integrate for a site like this, and whether there is any performance difference (lag, etc.).

    Read the article

  • SQL Server Query solution cum Suggestion Required

    - by Nirmal
    Hello all... I have the following scenario in my SQL Server 2005 database. The zipcodes table has the following fields and values (just a sample):

        zipcode  latitude  longitude
        -------  --------  ---------
        65201    123.456   456.789
        65203    126.546   444.444

    and the place table has the following fields and values:

        id  name  zip    latitude  longitude
        --  ----  -----  --------  ---------
        1   abc   65201  NULL      NULL
        2   def   65202  NULL      NULL
        3   ghi   65203  NULL      NULL
        4   jkl   65204  NULL      NULL

    Now, my requirement is to compare the zip codes in the place table and fill in the latitude and longitude fields from the zipcode table where they are available. Some zip codes have no entry in the zipcodes table, and those should remain NULL. The major issue is that I have more than 5,000,000 records in my DB, so the query needs to cope with that. I have tried some solutions but unfortunately am not getting the proper output. Any help would be appreciated...
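
    A hedged sketch of the update; the table and column names are taken from the question, and rows whose zip has no match are simply not touched, so their latitude/longitude stay NULL:

        UPDATE p
        SET    p.latitude  = z.latitude,
               p.longitude = z.longitude
        FROM   dbo.place    AS p
        JOIN   dbo.zipcodes AS z
               ON z.zipcode = p.zip;

    With several million rows, indexes on place.zip and zipcodes.zipcode are what keep this from crawling, and the update can be run in batches if the transaction log becomes a concern.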

    Read the article

  • searching for address matches using sql server 2008 full text search

    - by Sridhar
    Hi, I am not sure how to search for address matches using SQL Server 2008 full-text search. This is what I tried, but it doesn't return any records.

        TableA
        ------
        Address1
        Address2
        City
        State
        Zip

    All of the above columns are full-text indexed. Let's say the user enters "123 Apple street FL 33647" and I have a record in the table with Address1 = "123", Address2 = "Apple street", City = "Tampa", State = "FL" and Zip = "33647". I would like the query to return it. Can you please let me know how I would do this?

    Query tried:

        SELECT *
        FROM   TableA
        WHERE  CONTAINS((Address1, Address2, City, State, zip),
                        N'FORMSOF(THESAURUS, 123AppleStreetFL33647)');

    If I put spaces in the search word, it gives a syntax error. Thanks, Sridhar.
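
    A hedged sketch of two ways to phrase this; the concatenated token above will never match because the index stores individual words. How AND behaves across a column list, and whether every term should be required, is worth testing against your data:

        -- Option 1: natural-language search across the indexed columns.
        SELECT *
        FROM   TableA
        WHERE  FREETEXT((Address1, Address2, City, State, Zip),
                        N'123 Apple street FL 33647');

        -- Option 2: explicit terms, each quoted; in practice the condition string
        -- would be built from the user's input and passed as a parameter.
        SELECT *
        FROM   TableA
        WHERE  CONTAINS((Address1, Address2, City, State, Zip),
                        N'"123" OR "Apple street" OR "FL" OR "33647"');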

    Read the article

  • sql server 2000 and for xml explicit

    - by Marcin
    Hi everyone, I've got a problem with using FOR XML EXPLICIT in SQL Server 2000 (so I can't use the new path() stuff from SQL 2005/2008). Essentially I have two tables, and the XML structure I want is:

        <xml>
          <table_1 field1="foo" field2="foobar2" field3="foobar3">
            <a_row_from_table_2 field1="goo" field2="goobar2" field3="goobar3" />
            <a_row_from_table_2 field1="hoo" field2="hoobar2" field3="hoobar3" />
          </table_1>
        </xml>

    That is, table_1 has a one-to-many relationship with table_2, and I want to make a hierarchy of it. So far I can't seem to get it; the closest I've managed is all the records from table1, with all the records from table2 appended to the very last element of table1. Any help with setting up this kind of relationship would be greatly appreciated. -Marcin
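
    For what it's worth, a hedged sketch of the usual Tag/Parent pattern for this on SQL Server 2000. The join column (table_1_id) and the exact field lists are assumptions, and the parent key is repeated in the child SELECT purely so the ORDER BY interleaves each table_2 row under its own table_1 row:

        SELECT 1            AS Tag,
               NULL         AS Parent,
               t1.id        AS [table_1!1!id],
               t1.field1    AS [table_1!1!field1],
               NULL         AS [a_row_from_table_2!2!field1],
               NULL         AS [a_row_from_table_2!2!field2]
        FROM   table_1 t1

        UNION ALL

        SELECT 2,
               1,
               t1.id,       -- carried only for ordering; not emitted for Tag = 2
               NULL,
               t2.field1,
               t2.field2
        FROM   table_1 t1
        JOIN   table_2 t2 ON t2.table_1_id = t1.id

        ORDER BY [table_1!1!id], Tag
        FOR XML EXPLICIT

    Without that ordering, all the child rows sort to the end and attach to the last parent element, which matches the symptom described.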

    Read the article

  • Autoincrementing hierarchical IDs on SQL Server

    - by Ville Koskinen
    Consider this table on SQL Server:

        wordID  aliasID  value
        ======  =======  ======
        0       0        'cat'
        1       0        'dog'
        2       0        'argh'
        2       1        'ugh'

    wordID is the id of a word which possibly has aliases. aliasID identifies a specific alias of a word, so above, 'argh' and 'ugh' are aliases of each other. I'd like to be able to insert new words which do not have any aliases without having to query the table for a free wordID value first. Inserting the value 'hey' without specifying a wordID or an aliasID, a row looking like this would be created:

        wordID  aliasID  value
        ======  =======  ======
        3       0        'hey'

    Is this possible, and how?
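
    In case it helps, a hedged sketch of folding the lookup into the INSERT itself, so no separate query for a free wordID is needed (the table name and the locking hints are assumptions; the range lock keeps two concurrent inserts from picking the same wordID):

        INSERT INTO dbo.Words (wordID, aliasID, value)
        SELECT COALESCE(MAX(wordID), -1) + 1,  -- next free wordID (0 on an empty table)
               0,                              -- no aliases yet
               @value
        FROM   dbo.Words WITH (UPDLOCK, HOLDLOCK);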

    Read the article

  • Sharepoint and SQL server Endpoint

    - by Usman Ahmad
    I created a linked server on MS SQL Server 2005 pointing to my Active Directory. Nothing too fancy: a simple linked server with a read-only admin account assigned to the CONNECTAS property. Then I created some stored procedures to retrieve data from the linked server - again nothing too fancy, just a few simple LDAP queries. Then I created a SQL endpoint to expose these stored procedures as web services. Using Visual Studio 2008, everything is fine and dandy: it finds the endpoint, gets the list of available methods and runs smoothly. Cool. But SharePoint's Add Web Service somehow doesn't work! It finds the WSDL file no problem, and retrieves the list of functions and parameters and all, but simply doesn't run them! Any advice, or where should I start checking?

    Read the article

  • Entity Framework and Sql Server view question

    - by Sergio Romero
    Hi to all, for several reasons that I don't have the liberty to talk about, we are defining a view on our SQL Server 2005 database like so:

        CREATE VIEW [dbo].[MeterProvingStatisticsPoint]
        AS
        SELECT CAST(0 AS BIGINT)  AS 'RowNumber',
               CAST(0 AS BIGINT)  AS 'ProverTicketId',
               CAST(0 AS INT)     AS 'ReportNumber',
               GETDATE()          AS 'CompletedDateTime',
               CAST(1.1 AS float) AS 'MeterFactor',
               CAST(1.1 AS float) AS 'Density',
               CAST(1.1 AS float) AS 'FlowRate',
               CAST(1.1 AS float) AS 'Average',
               CAST(1.1 AS float) AS 'StandardDeviation',
               CAST(1.1 AS float) AS 'MeanPlus2XStandardDeviation',
               CAST(1.1 AS float) AS 'MeanMinus2XStandardDeviation'
        WHERE 0 = 1

    The idea is that the Entity Framework will create an entity based on this query, which it does, but it generates it with a warning that states:

        "warning 6002: The table/view 'Keystone_Local.dbo.MeterProvingStatisticsPoint' does not have a primary key defined. The key has been inferred and the definition was created as a read-only table/view."

    And it decides that the CompletedDateTime field will be the entity's primary key. We are using EdmGen to generate the model. Is there a way to keep the Entity Framework from treating any field of this view as a primary key? Thanks for the help.

    Read the article

  • PHP + SQL Server or VB.NET + MySQL

    - by Muhammad Mussnoon
    Can someone suggest which of the two (odd?) combinations mentioned in the title is less odd or, in other words, less trouble to work with and maintain? If it helps, the system is going to have two front ends: one web application and one desktop application. The desktop application will be coded in VB.NET and the web application in PHP. There's really no reason the desktop application couldn't be replaced by a web application as well, except that one of the programmers seems to really want to code it in VB... However, none of us has experience working with either of these pairs (you could easily call us n00bs), so we are a bit apprehensive to start. P.S. Hosting will be obtained from a provider and will not be on the client's own server.

    Read the article

  • SQL Server Bulk Insert failing when called from .NET SqlCommand

    - by Nick Wright
    I have a stored procedure which does a bulk insert on a SQL Server 2005 database. When I call this stored procedure from SQL (passing in the names of a local format file and data file), it works fine every time. However, when the same stored procedure is called from C# .NET 3.5 code using SqlCommand.ExecuteNonQuery, it only works intermittently. When it fails, a SqlException is raised stating "Cannot bulk load. Invalid column number in the format file 'c:\bulkinsert\MyFile.fmt'". I don't think this error message is accurate. Has anyone experienced similar problems calling bulk insert from code? Thanks.
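
    For reference, a hedged sketch of the kind of statement such a procedure presumably wraps. One detail that often matters when the caller changes: both paths are opened by the SQL Server service itself, on the server and under its service account, not by the .NET client process:

        BULK INSERT dbo.TargetTable
        FROM 'c:\bulkinsert\MyFile.dat'                 -- path as seen from the SQL Server machine
        WITH (FORMATFILE = 'c:\bulkinsert\MyFile.fmt');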

    Read the article

  • Microsoft SQL Server 2005 - using the modulo operator

    - by cc0
    So I have a silly problem. I have not used SQL Server much before, or any SQL for that matter. I basically have a minor mathematical problem that I need solved, and I thought modulo would be good for it. I have a number of dates in the database, but I need them to be rounded off to the closest [dynamic integer] (which could be anything from 0 to 5,000,000) that will be passed in as a parameter each time this query is called. So I thought I'd use modulo to find the remainder and then subtract that remainder from the date. If there is a better way, or a built-in function, please let me know! What would the syntax be for that? I've tried a lot of things, but I keep getting error messages saying that integers/floats/decimals can't be used with the modulo operator. I tried casting to all kinds of numeric data types. Any help would be appreciated.
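
    A hedged sketch of the date version of that idea: convert the date to a whole number of seconds from a fixed anchor, knock off the remainder with %, and convert back. The % operator needs integer operands, which is probably where the error messages came from; the interval below is in seconds and is just an example value:

        DECLARE @interval int, @d datetime, @secs int;

        SET @interval = 3600;                                -- bucket size in seconds (example)
        SET @d        = '20050615 14:37:52';
        SET @secs     = DATEDIFF(second, '20000101', @d);    -- seconds since an arbitrary anchor

        SELECT DATEADD(second, @secs - (@secs % @interval), '20000101') AS RoundedDown;

    Rounding to the nearest bucket rather than down is the same idea with @interval / 2 added to @secs before taking the remainder.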

    Read the article

  • Teradata equivalent of persisted computed column (in SQL Server)

    - by Cade Roux
    We have a few tables with persisted computed columns in SQL Server. Is there an equivalent of this in Teradata? And if so, what is the syntax, and are there any limitations? The particular computed columns I am looking at conform some account numbers by removing leading zeros; an index is also created on the conformed account number:

        ACCT_NUM_std AS ISNULL(
            CONVERT(varchar(39),
                SUBSTRING(LTRIM(RTRIM([ACCT_NUM])),
                          PATINDEX('%[^0]%', LTRIM(RTRIM([ACCT_NUM])) + '.'),
                          LEN(LTRIM(RTRIM([ACCT_NUM])))
                )
            ), '') PERSISTED

    With the Teradata TRIM function, the trimming part would be a little simpler:

        ACCT_NUM_std AS COALESCE(
            CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM)) AS VARCHAR(39)), '')

    I guess I could just make this a normal column and put the code to standardize the account numbers in every process that inserts into the table; we used the computed column to keep the standardization code in one place.

    Read the article

  • Inserting Large volume of data in SQL Server 2005

    - by Manjoor
    We have an application (written in C#) that stores live stock-market prices in the database (SQL Server 2005). It inserts about 1 million records in a single day. Now we are adding more market segments, so the number of records will double (2 million/day). Currently the average insertion rate is about 50 records per second, the maximum is 450 and the minimum is 0. To check certain conditions I have used Service Broker (an asynchronous trigger) on my price table. It is running fine at the moment (about 35% CPU utilization). Now I am planning to create an in-memory dataset of the current stock prices so we can do some simple calculations. I would like to hear members' different views on this. Please share your way of dealing with such a situation.

    Read the article

  • Availability Best Practices on Oracle VM Server for SPARC

    - by jsavit
    This is the first of a series of blog posts on configuring Oracle VM Server for SPARC (also called Logical Domains) for availability. This series will show how to plan for availability, improve serviceability, avoid single points of failure, and provide resiliency against hardware and software failures. Availability is a broad topic that has filled entire books, so these posts will focus on aspects specifically related to Oracle VM Server for SPARC. The goal is to improve Reliability, Availability and Serviceability (RAS); an article defining RAS can be found here.

    Oracle VM Server for SPARC Principles for Availability

    Let's state some guiding principles for availability that apply to Oracle VM Server for SPARC:

    - Avoid Single Points Of Failure (SPOFs). Systems should be configured so a component failure does not result in a loss of application service. The general method to avoid SPOFs is to provide redundancy so service can continue without interruption if a component fails. For a critical application there may be multiple levels of redundancy so multiple failures can be tolerated. Oracle VM Server for SPARC makes it possible to configure systems that avoid SPOFs.
    - Configure for availability at a level of resource and effort consistent with business needs. Effort and resource should be consistent with business requirements. Production has different availability requirements than test/development, so it's worth expending resources to provide higher availability. Even within the category of production there may be different levels of criticality, outage tolerances, recovery and repair time requirements. Keep in mind that a simple design may be more understandable and effective than a complex design that attempts to "do everything".
    - Design for availability at the appropriate tier or level of the platform stack. Availability can be provided in the application, in the database, or in the virtualization, hardware and network layers they depend on - or using a combination of all of them. It may not be necessary to engineer resilient virtualization for stateless web applications where availability is provided by a network load balancer, or for enterprise applications like Oracle Real Application Clusters (RAC) and WebLogic that provide their own resiliency.
    - It's (often) the same architecture whether virtual or not: For example, providing resiliency against a lost device path or failing disk media is done for the same reasons and may use the same design whether in a domain or not.
    - It's (often) the same technique whether using domains or not: Many configuration steps are the same. For example, configuring IPMP or creating a redundant ZFS pool is pretty much the same within the guest whether you're in a guest domain or not. There are configuration steps and choices for provisioning the guest with the virtual network and disk devices, which we will discuss.
    - Sometimes it is different using domains: There are new resources to configure. Most notable is the use of alternate service domains, which provides resiliency in case of a domain failure, and also permits improved serviceability via "rolling upgrades". This is an important differentiator between Oracle VM Server for SPARC and traditional virtual machine environments where all virtual I/O is provided by a monolithic infrastructure that itself is a SPOF. Alternate service domains are widely used to provide resiliency in production logical domains environments.
    - Some things are done via logical domains commands, and some are done in the guest: For example, with Oracle VM Server for SPARC we provide multiple network connections to the guest, and then configure network resiliency in the guest via IP Multi Pathing (IPMP) - essentially the same as for non-virtual systems. On the other hand, we configure virtual disk availability in the virtualization layer, and the guest sees an already-resilient disk without being aware of the details. These blogs will discuss configuration details like this.
    - Live migration is not "high availability" in the sense of "continuous availability": If the server is down, then you don't live migrate from it! (A cluster or VM restart elsewhere would be used.) However, live migration can be part of the RAS (Reliability, Availability, Serviceability) picture by improving serviceability - you can move running domains off of a box before planned service or maintenance. The blog "Best Practices - Live Migration on Oracle VM Server for SPARC" discusses this.

    Topics

    Here are some of the topics that will be covered:

    - Network availability using IP Multipathing and aggregates
    - Disk path availability using virtual disks defined with multipath groups ("mpgroup")
    - Disk media resiliency, configuring for redundant disks that can tolerate media loss
    - Multiple service domains - this is probably the most significant item and the one most specific to Oracle VM Server for SPARC. It is very widely deployed in production environments as the means to provide network and disk availability, but it can be confusing. Subsequent articles will describe why and how to configure multiple service domains. Note, for the sake of precision: an I/O domain is any domain that has a physical I/O resource (such as a PCIe bus root complex). A service domain is a domain providing virtual device services to other domains; it is almost always an I/O domain too (so it can have something to serve).

    Resources

    Here are some important links; we'll be drawing on their content in the next several articles:

    - Oracle VM Server for SPARC Documentation
    - Maximizing Application Reliability and Availability with SPARC T5 Servers, whitepaper by Gary Combs
    - Maximizing Application Reliability and Availability with the SPARC M5-32 Server, whitepaper by Gary Combs

    Summary

    Oracle VM Server for SPARC offers features that can be used to provide highly-available environments. This and the following blog entries will describe how to plan and deploy them.

    Read the article

  • SQL Server: position based on marks

    - by Rhys
    I am using SQL Server 2008. I have a Student table with the following fields: StudentId, StudentName, Marks. I want a result set with an extra column named "Position", something like "SELECT StudentId, StudentName, Marks, ... AS Position FROM Student ...", so that depending on the marks a student scored, I can rank them as 1st, 2nd or 20th position. If students have the same marks, they should get the same position. Thanks, Rhys
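
    A hedged sketch using the ranking functions available in SQL Server 2008; RANK() leaves gaps after ties (1, 1, 3), while DENSE_RANK() would give 1, 1, 2, so pick whichever matches the grading rules:

        SELECT StudentId,
               StudentName,
               Marks,
               RANK() OVER (ORDER BY Marks DESC) AS Position
        FROM   Student
        ORDER BY Position;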

    Read the article

  • including php file from another server with php

    - by ermac2014
    Hi, I have 2 PHP files, each on a different server. Let's say the first file is called includeThis.php and the second main.php. The first file is located at (http://)www.sample.com/includeThis.php and the second at (http://)www.mysite.com/main.php. So now what I want is to include the first file into my second file. The contents of the first file are:

        <?php
        $foo = "this is data from file one";
        ?>

    and the second file is:

        <?php
        include "http://www.sample.com/includeThis.php";
        echo $foo;
        ?>

    Is there any way I can do this? Thanks in advance.

    Read the article

  • SQL Server - Query Short-Circuiting?

    - by Sam Schutte
    Do T-SQL queries in SQL Server support short-circuiting? For instance, I have a situation where I have two databases and I'm comparing data between two tables to match and copy some information across. In one table the ID field always has leading zeros (such as '000000001234'), and in the other table the ID field may or may not have leading zeros (it might be '000000001234' or '1234'). So my query to match the two is something like:

        select * from table1 where table1.ID LIKE '%1234'

    To speed things up, I'm thinking of adding an OR before the LIKE that just says table1.ID = table2.ID, to handle the case where both IDs have the padded zeros and are equal. Will doing so speed up the query by matching items on the '=' and not evaluating the LIKE for every single row (i.e. will it short-circuit and skip the LIKE)?
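
    For what it's worth, SQL Server does not guarantee left-to-right or short-circuit evaluation of predicates; the optimizer is free to reorder them. A hedged sketch that sidesteps the question by normalizing the leading zeros on both sides instead (table and column names follow the question; the '.' guard handles an all-zeros ID):

        SELECT t1.*
        FROM   table1 t1
        JOIN   table2 t2
          ON   SUBSTRING(t1.ID, PATINDEX('%[^0]%', t1.ID + '.'), LEN(t1.ID))
             = SUBSTRING(t2.ID, PATINDEX('%[^0]%', t2.ID + '.'), LEN(t2.ID));

    Note that expressions on both sides of the join prevent straightforward index seeks, so on large tables a persisted computed column holding the normalized ID is worth considering.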

    Read the article

  • Problem in SQL Server 2005 using ASP.Net

    - by megala
    I created an ASP.NET project using a SQL Server database as the back end. It shows the following error; how do I solve it?

    Code:

        Imports System.Data.SqlClient

        Partial Class Default2
            Inherits System.Web.UI.Page

            Dim myConnection As SqlConnection
            Dim myCommand As SqlCommand
            Dim ra As Integer

            Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
                myConnection = New SqlConnection("Data Source=JANANI-FF079747\SQLEXPRESS;Initial Catalog=new;Persist Security Info=True;User ID=sa;Password=janani")
                'server=localhost;uid=sa;pwd=;database=pubs")
                myConnection.Open()
                myCommand = New SqlCommand("Insert into table3 values 'janani','jan'")
                ra = myCommand.ExecuteNonQuery()   ' <-- the error is raised here
                MsgBox("New Row Inserted" & ra)
                myConnection.Close()
            End Sub
        End Class

    Error message:

        ExecuteNonQuery: Connection property has not been initialized.

    Read the article

  • GetOleDbSchemaTable Foreign Keys on Sql Server 2005

    - by haxelit
    I'm trying to get the foreign keys for a table in my SQL Server 2005 database. I'm using the GetOleDbSchemaTable function right now:

        DataTable schemaTable = connection.GetOleDbSchemaTable(
            OleDbSchemaGuid.Foreign_Keys,
            new object[] { null, null, null, "TABLE" });

    This pulls back the right foreign keys; the only problem is that the UpdateRule and DeleteRule are set to "No Action". If I browse to the same table in SSMS, I can see that my DeleteRule is "Set NULL". Does the GetOleDbSchemaTable function not return the proper foreign key rules? Has anyone else run into this problem?
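
    As a cross-check, the rules can also be read straight from the SQL Server 2005 catalog views rather than the OLE DB schema rowset; a sketch:

        SELECT fk.name                              AS ForeignKeyName,
               OBJECT_NAME(fk.parent_object_id)     AS ReferencingTable,
               OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable,
               fk.update_referential_action_desc    AS UpdateRule,   -- NO_ACTION, CASCADE, SET_NULL, SET_DEFAULT
               fk.delete_referential_action_desc    AS DeleteRule
        FROM   sys.foreign_keys AS fk
        ORDER BY ReferencingTable, ForeignKeyName;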

    Read the article
