Search Results

Search found 4908 results on 197 pages for 'ssas 2005'.

Page 76 of 197

  • How to convert DateTime to a number with a precision greater than days in T-SQL?

    - by Jader Dias
    Both queries below translate to the same number: SELECT CONVERT(bigint,CONVERT(datetime,'2009-06-15 15:00:00')) SELECT CAST(CONVERT(datetime,'2009-06-15 23:01:00') as bigint) Result: 39978 39978. The generated number differs only if the days are different. Is there any way to convert the DateTime to a more precise number, as we do in .NET with the .Ticks property? I need at least minute precision.
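
    A minimal T-SQL sketch (not from the original question) of two common ways to get sub-day precision out of a datetime:

        -- CAST to float: the integer part is days since 1900-01-01 (no rounding),
        -- the fractional part is the time of day.
        SELECT CAST(CONVERT(datetime, '2009-06-15 15:00:00') AS float);

        -- DATEDIFF in minutes from the 1900-01-01 base date: a whole number
        -- with minute precision.
        SELECT DATEDIFF(minute, '1900-01-01', CONVERT(datetime, '2009-06-15 15:00:00'));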

    Read the article

  • Sql Server copying table information between databases

    - by Andrew
    Hi, I have a script that I am using to copy data from a table in one database to a table in another database on the same Sql Server instance. The script works great when I am connected to the Sql Server instance as myself, as I have dbo access to both databases. The problem is that this won't be the case on the client's Sql Server. They have separate logins for each database (Sql Authentication logins). Does anyone know if there is a way to run a script under these circumstances? The script would be doing something like: use sourceDB Insert targetDB.dbo.tblTest (id, test_name) Select id, test_name from dbo.tblTest Thanks
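
    A hedged sketch of one way this can work (the login name is hypothetical; database and table names follow the question): if the client's SQL login can be mapped to a user in both databases, the cross-database INSERT...SELECT runs as-is.

        -- Run once as an administrator: give the same SQL login a user in each database.
        USE sourceDB;
        CREATE USER copy_user FOR LOGIN client_login;   -- client_login is hypothetical
        GRANT SELECT ON dbo.tblTest TO copy_user;

        USE targetDB;
        CREATE USER copy_user FOR LOGIN client_login;
        GRANT INSERT ON dbo.tblTest TO copy_user;

        -- The copy script itself, run as client_login:
        USE sourceDB;
        INSERT targetDB.dbo.tblTest (id, test_name)
        SELECT id, test_name FROM dbo.tblTest;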

    Read the article

  • can't insert xml dml expression as a string

    - by 81967
    Here is the code that illustrates the problem. I create a table with an xml column, declare a variable, initialize it, and insert the value into the xml column: create table CustomerInfo (XmlConfigInfo xml) declare @StrTemp nvarchar(2000) set @StrTemp = '<Test></Test>' insert into [CustomerInfo](XmlConfigInfo) values (@StrTemp) Then comes the question. If I write this... update [CustomerInfo] set XmlConfigInfo.modify('insert <Info></Info> into (//Test)[1]') -- works fine! But when I try this: set @StrTemp = 'insert <Info></Info> into (//Test)[1]' update [CustomerInfo] set XmlConfigInfo.modify(@StrTemp) -- it doesn't work and throws the error: The argument 1 of the xml data type method "modify" must be a string literal. Is there a way around this? I tried the above, but it is not working.
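
    Two workarounds commonly suggested for this restriction, as a hedged sketch (variable names follow the question; the dynamic-SQL variant assumes the DML text is trusted):

        -- (a) If only the inserted value varies, keep the DML expression a literal
        --     and pull the variable in through sql:variable():
        DECLARE @NodeText nvarchar(100);
        SET @NodeText = N'some value';
        UPDATE [CustomerInfo]
        SET XmlConfigInfo.modify('insert <Info>{sql:variable("@NodeText")}</Info> into (//Test)[1]');

        -- (b) If the whole DML expression must come from a variable, wrap the UPDATE
        --     in dynamic SQL so that modify() still receives a string literal:
        DECLARE @StrTemp nvarchar(2000), @Sql nvarchar(4000);
        SET @StrTemp = N'insert <Info></Info> into (//Test)[1]';
        SET @Sql = N'UPDATE [CustomerInfo] SET XmlConfigInfo.modify('''
                 + REPLACE(@StrTemp, N'''', N'''''') + N''')';
        EXEC sp_executesql @Sql;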

    Read the article

  • DataGridView: how to make scrollbar in sync with current cell selection?

    - by David.Chu.ca
    I have a Windows application with a DataGridView for data presentation. Every 2 minutes, the grid is refreshed with new data. In order to keep the scroll bar in sync with the new data added, I have to reset its ScrollBars: dbv.Rows.Clear(); // clear rows ScrollBars sc = dbv.ScrollBars; dbv.ScrollBars = ScrollBars.None; // continue to populate rows such as dbv.Rows.Add(obj); dbv.ScrollBars = sc; // restore the scroll bar setting With the above code, the scroll bar reappears fine after the data refresh. The problem is that the application needs to set a certain cell as selected after the refresh: dbv.CurrentCell = dbv[0, selectedRowIndex]; With the above code, the cell is selected; however, the scroll bar's position does not reflect the row position of the selected cell. When I try to move the scroll bar after the refresh, the grid jumps to the first row. It seems that the scroll bar position is reset to 0, and setting the grid's CurrentCell does not cause the scroll bar to reposition to the correct place. There is no property or method to get or set the scroll bar's value in DataGridView, as far as I know. I also tried to set the selected row to the top: dbv.CurrentCell = dbv[0, selectedRowIndex]; dbv.FirstDisplayedScrollingRowIndex = selectedRowIndex; The row is set to the top, but the scroll bar's position is still out of sync. Is there any way to keep the scroll bar's position in sync with the selected row that is set in code?

    Read the article

  • SSRS2005 timeout error

    - by jaspernygaard
    Hi, I've been running around in circles for the last 2 days, trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results, to be exact). The error boils down to a timeout when requesting a certain report in SSRS2005, when a certain parameter is used. The deployment scenario is: Machine #1 running Reporting Services (SQL2005, W2K3, IIS6); Machine #2 running the data warehouse database (SQL2005, W2K3), which is the data source for #1. Both machines are running on the same VM cluster and LAN. The report requests a fairly simple SP - let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When using param $b, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4s). I've profiled the data warehouse database on #2, and when param $b is used, the query from the reporting service to the database never reaches #2. The error message that I get upon timeout, when using param $b and invoking the report directly from the SSRS web interface, is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user." The ExecutionLog for SSRS doesn't give me much information beyond the error message rsProcessingAborted. I'm running out of ideas on how to nail this problem, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!
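
    One culprit that is often checked in "fast in Management Studio, times out from SSRS" cases is parameter sniffing: the plan cached for the $a-style call performs badly for $b. A hedged sketch of the usual mitigation, with the procedure, parameter and table names invented for illustration:

        ALTER PROCEDURE dbo.rptSales          -- placeholder name
            @a int = NULL,
            @b int = NULL
        AS
        BEGIN
            -- Copy the parameters into locals so a plan sniffed for one parameter
            -- is not reused blindly for the other.
            DECLARE @la int, @lb int;
            SET @la = @a;
            SET @lb = @b;

            SELECT s.SaleDate, s.Amount
            FROM dbo.FactSales AS s           -- placeholder table
            WHERE (@la IS NULL OR s.RegionId  = @la)
              AND (@lb IS NULL OR s.ProductId = @lb)
            OPTION (RECOMPILE);               -- or compile a fresh plan per execution
        END

    Another difference worth ruling out is the connection's SET options (ANSI_NULLS, ARITHABORT, etc.), since SSRS and Management Studio can end up using different cached plans because of them.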

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different web hosts, and I'm looking for a way to periodically update one server using the other. Here's what I'm looking for: As automated as possible - ideally, without any involvement on my part once it's set up. Pushes a number of databases, in their entirety (including any schema changes), from one server to the other. Freely allows changes on the source server without breaking my process - for this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source and then recreate the publication and subscription. One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did. Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody. UPDATE: I ended up designing a custom solution to get this done using BAT files, 7Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
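
    On the "can these backups be done synchronously" point: BACKUP DATABASE is synchronous, so the .bak file is complete when the statement returns and is then safe to compress and transfer. A hedged sketch (database name and path are placeholders) of the kind of job step or osql script involved:

        BACKUP DATABASE MyAppDb
        TO DISK = N'D:\Backups\MyAppDb.bak'
        WITH INIT,        -- overwrite the previous file
             CHECKSUM;    -- validate page checksums while writing

        -- Optional sanity check before the FTP step:
        RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyAppDb.bak';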

    Read the article

  • FTS: Searching across multiple fields 'intelligently'

    - by Wild Thing
    Hi, I have an SP using FTS (Full-Text Search). I want to search across multiple fields, 'intelligently' ranking results based on the weights I assign. Consider a search on a view fetching data from the tables Book, Author and Genre. Now, I want the searcher to be able to do: "Ludlum Fiction", "Robert Ludlum Bourne", "Bourne Ludlum", etc. Unfortunately, the only way I have been able to do that so far is this: http://pastebin.com/fdce11ff This is pretty bad, because I am manually breaking up the search string. I know I am doing this completely the wrong way, but can't figure out the right way to search across multiple fields in FTS. Can somebody help, please?
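
    A hedged sketch of the usual alternative to splitting the string by hand (the view and column names are invented): FREETEXTTABLE and CONTAINSTABLE accept a list of full-text indexed columns and return a RANK for the whole row.

        DECLARE @search nvarchar(200);
        SET @search = N'Robert Ludlum Bourne';

        SELECT TOP 50 v.Title, v.AuthorName, v.GenreName, ft.[RANK]
        FROM FREETEXTTABLE(dbo.vwBookSearch, (Title, AuthorName, GenreName), @search) AS ft
        JOIN dbo.vwBookSearch AS v
            ON v.BookId = ft.[KEY]
        ORDER BY ft.[RANK] DESC;

        -- Per-term weights are possible with CONTAINSTABLE and ISABOUT, e.g.
        --   CONTAINSTABLE(dbo.vwBookSearch, (Title, AuthorName),
        --                 'ISABOUT (Ludlum WEIGHT(0.8), Bourne WEIGHT(0.4))')
        -- Weighting whole columns differently usually means one CONTAINSTABLE per
        -- column, combined and summed over the [KEY] value.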

    Read the article

  • C# DateTime Class and Datetime in database

    - by Spyros
    Hello. I have the following problem: I have an object with some DateTime properties, and a table in the database where I store all those objects. In SQL Server I want to store the DateTime properties in columns of the datetime data type, but the datetime format in SQL Server is different from the DateTime class in C#, and I get a SQL exception saying "DateTime cannot be parsed". I know how to solve this by formatting the value as yyyy-MM-dd, but is this the proper and best way to do it?
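
    The usual recommendation is to pass the value as a typed parameter instead of formatting it into the SQL text, so no string format is involved at all; in C# that means adding a SqlParameter of type DateTime to the command. A hedged T-SQL illustration of the same idea via sp_executesql (the table name is made up):

        DECLARE @when datetime;
        SET @when = GETDATE();

        EXEC sp_executesql
            N'INSERT dbo.Orders (CreatedAt) VALUES (@CreatedAt);',  -- hypothetical table
            N'@CreatedAt datetime',
            @CreatedAt = @when;   -- the value travels as a datetime, not as text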

    Read the article

  • SQL Server and Table-Valued User-Defined Function optimizations

    - by John Leidegren
    If I have a UDF that returns a table with thousands of rows, but I just want a particular row from that rowset, will SQL Server be able to handle this efficiently? SELECT * FROM dbo.MyTableUDF() WHERE ID = 1 To what extent is the query optimizer capable of reasoning about this type of query? How are table-valued UDFs different from traditional views if they take no parameters? Any gotchas I should know about?
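
    A hedged sketch of the distinction that usually matters here (table and column names are invented): an inline table-valued function - a single RETURN SELECT with no BEGIN/END - is expanded into the calling query like a view, so the WHERE ID = 1 predicate can be pushed down; a multi-statement TVF is materialized in full first and filtered afterwards.

        CREATE FUNCTION dbo.MyTableUDF ()
        RETURNS TABLE
        AS
        RETURN
        (
            -- Inline form: no intermediate table variable, the optimizer sees
            -- the underlying table directly.
            SELECT b.ID, b.Name, b.Amount
            FROM dbo.SomeBaseTable AS b
        );
        GO

        -- The filter below can be applied as a seek on dbo.SomeBaseTable,
        -- exactly as it would be against a view.
        SELECT * FROM dbo.MyTableUDF() WHERE ID = 1;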

    Read the article

  • Upgrade from .NET 2.0 to .NET 3.5 problems

    - by Bashir Magomedov
    I’m trying to upgrade our solution from VS2005 / .NET 2.0 to VS2008 / .NET 3.5. I converted the solution using the VS2008 conversion wizard. All the projects (about 50) remained targeted at .NET Framework 2.0; moreover, if I change the target framework manually for one of the projects, all referenced DLLs (i.e. System, System.Core, System.Data, etc.) still point to Framework 2.0. The only way I have found to completely change the target framework is to remove these references and add them again using the proper framework version. Doing it manually is not the best choice, I think: 50 projects ~ 10 references each ~ 0.5 minutes per reference comes to roughly four hours of work. Am I missing something? Are there any other ways of converting a full solution from .NET 2.0 to .NET 3.5? Thank you.

    Read the article

  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my transaction log the other day and it was something crazy like 15GB. I ran the following code: USE mydb GO BACKUP LOG mydb WITH TRUNCATE_ONLY GO DBCC SHRINKFILE(mydb_log,8) GO which worked fine and shrank it down to 8MB... but the DB in question is a log shipping publisher, and the log is already back up to some 500MB and growing quickly. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan task and hooking it onto my log backup task? If that's the best way then fine... but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan: full backups every night; transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day though). Or maybe I just run it once a week, after I run a full backup task? What do you all think?
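
    For what it's worth, a log backup marks the log space for reuse rather than shrinking the file, so more frequent log backups - on a log-shipped database, that means running the log shipping backup job more often - are usually what keep the log small. A hedged sketch of an occasional shrink step (the 512 MB target is an arbitrary placeholder):

        -- Occasional physical shrink, e.g. a weekly job step after the full backup.
        -- On a log-shipped database, leave the log *backups* to the log shipping
        -- backup job (extra manual log backups would break its chain).
        USE mydb;
        DBCC SHRINKFILE (mydb_log, 512);   -- target size in MB; pick a realistic floor

        -- To see how much of the log is actually in use before shrinking:
        DBCC SQLPERF (LOGSPACE);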

    Read the article

  • How to select the value of the xsi:type attribute in SQL Server?

    - by kralizek
    Consider this XML document: DECLARE @X XML (DOCUMENT search.SearchParameters) = '<parameters xmlns="http://www.educations.com/Search/Parameters.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <parameter xsi:type="category" categoryID="38" /> </parameters>'; I'd like to access the value of the attribute "type". According to this blog post, the xsi:type attribute is special and can't be accessed by the usual keywords/functions. How can I do it? PS: I tried WITH XMLNAMESPACES ( 'http://www.educations.com/Search/Parameters.xsd' as p, 'http://www.w3.org/2001/XMLSchema-instance' as xsi ) SELECT @X.value('(/p:parameters/p:parameter/@xsi:type)[1]','nvarchar(max)') but it didn't work.
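
    One workaround often suggested for typed XML is to cast the instance to untyped xml first, after which xsi:type behaves like an ordinary namespaced attribute. A hedged sketch building on the query from the question:

        DECLARE @Untyped xml;
        SET @Untyped = CONVERT(xml, @X);   -- drop the schema collection typing

        WITH XMLNAMESPACES (
            'http://www.educations.com/Search/Parameters.xsd' AS p,
            'http://www.w3.org/2001/XMLSchema-instance' AS xsi
        )
        SELECT @Untyped.value('(/p:parameters/p:parameter/@xsi:type)[1]', 'nvarchar(max)');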

    Read the article

  • the name 'controlname' does not exist in the current context

    - by zohair
    Hi, I have a web application that I'm working on (ASP.NET 2.0 with C#) [using VS2005]. Everything was working fine, and all of a sudden I get the error: Error 1 The name 'Label1' does not exist in the current context and 43 others of the same sort, one for each control I used in the code-behind of the page. This is only happening for one page, and it's as if the code-behind isn't recognizing the controls. Another interesting thing is that IntelliSense isn't picking up any of the controls either. I have tried cleaning the solution, deleting the obj file, excluding the files from the project and re-adding them, closing VS and restarting it, and even restarting my computer, but none of these have worked. Please help. Thank you

    Read the article

  • Can I make a matrix row group span its child groups in SSRS?

    - by AaronSieb
    I have a matrix whose rows are grouped into two groups: a class, and a time for that class. The class cell is going to end up being several lines long, and I'd like the rows for each time slot of the class to line up next to the class description, like this:

        -----------------------------------------
        **Class**          | 7:00am  | [row data]
        Description of     |----------------------
        the class, this    | 12:00pm | [row data]
        is several lines   |----------------------
        long.              | 1:00pm  | [row data]
        -----------------------------------------

    But what I'm getting is this:

        -----------------------------------------
        **Class**          | 7:00am  | [row data]
        Description of     |         |
        the class, this    |         |
        is several lines   |         |
        long.              |         |
        -----------------------------------------
                           | 12:00pm | [row data]
                           |         |
                           |         |
                           |         |
                           |         |
        -----------------------------------------
                           | 1:00pm  | [row data]
                           |         |
                           |         |
                           |         |
                           |         |
        -----------------------------------------

    Is there any way to make SSRS collapse the matrix?

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
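
    For comparison, a hedged sketch of the row-versioning route (the database name is a placeholder): READ_COMMITTED_SNAPSHOT changes the database's default read-committed behaviour to read the last committed version instead of taking shared locks, which removes this reader/writer deadlock without NOLOCK's missed-row and double-read anomalies; writes still lock exactly as before.

        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;   -- opt-in SNAPSHOT level
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;    -- needs near-exclusive access

        -- The SELECT from the question then runs unchanged, without shared locks:
        DECLARE @p0 bigint;
        SET @p0 = 1;

        SELECT TOP 1 Date, Value
        FROM Data
        WHERE Id = @p0
        ORDER BY Date DESC;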

    Read the article

  • In MS SQL Server, is there a way to "atomically" increment a column being used as a counter?

    - by Dan P
    Assuming a Read Committed Snapshot transaction isolation setting, is the following statement "atomic" in the sense that you won't ever "lose" a concurrent increment? update mytable set counter = counter + 1 I would assume that in the general case, where this update statement is part of a larger transaction, that it wouldn't be. For example, I think this scenario is possible: update the counter within transaction #1 do some other stuff in transaction #1 update the counter with transaction #2 commit transaction #2 commit transaction #1 In this situation, wouldn't the counter end up only being incremented by 1? Does it make a difference if that is the only statement in a transaction? How does a site like stackoverflow handle this for its question view counter? Or is the possibility of "losing" some increments just considered acceptable?
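
    On the interleaving scenario: transaction #1's UPDATE holds an exclusive lock on the row until #1 commits, so transaction #2's UPDATE simply waits and the increment is not lost. What can go wrong is reading the counter in a separate statement afterwards. A hedged sketch (using the table and column names from the question) of incrementing and reading the new value in one atomic statement:

        -- OUTPUT returns the value this particular increment produced.
        UPDATE mytable
        SET counter = counter + 1
        OUTPUT inserted.counter;

        -- Equivalent form that captures the new value into a variable:
        DECLARE @newValue int;
        UPDATE mytable
        SET @newValue = counter = counter + 1;
        SELECT @newValue;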

    Read the article

  • Sorting the data returned by a database

    - by Rishabh Ohri
    Hi all, in our project we have a requirement that when a set of records is returned by the database, the records should be sorted by the TITLE field. The records have to be sorted alphabetically, but if the title of a record contains a number, it should come after the records whose titles consist only of letters. Details: we are using SQL Server and C#. The data from the database comes into an entity class which forwards the data to other layers. What would be a possible and effective solution for this requirement?
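
    A hedged sketch of how the ordering could be pushed to the database (the table and column names are made up): a CASE expression in the ORDER BY sends titles containing any digit to the back, then sorts alphabetically within each group.

        SELECT r.Title
        FROM dbo.Records AS r
        ORDER BY
            CASE WHEN r.Title LIKE '%[0-9]%' THEN 1 ELSE 0 END,  -- digit-bearing titles last
            r.Title;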

    Read the article

  • SQL Server Profiler Implementation Using C#/VB.net

    - by Asim Sajjad
    I want to implement SQL Server Profiler-like functionality in a C#/VB.NET application. Does anyone have a good example of this? I have searched on Google but didn't find a good working example. I don't have the SQL Server Profiler tool on my system, and I also don't have SQL Server installed locally (it is on a different system). How can I create a profiler of my own?
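
    Profiler is a front end over SQL Trace, so one option is to have the C#/VB.NET application start a server-side trace over a normal connection and read the file afterwards. A hedged sketch (the file path is a placeholder; event and column ids can be cross-checked against sys.trace_events and sys.trace_columns):

        DECLARE @TraceID int, @maxsize bigint, @on bit;
        SET @maxsize = 50;   -- MB per trace file
        SET @on = 1;

        -- Create the trace; the server appends .trc to the file name.
        EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\apptrace', @maxsize, NULL;

        -- Capture SQL:BatchCompleted (event 12) with a few columns.
        EXEC sp_trace_setevent @TraceID, 12, 1,  @on;   -- TextData
        EXEC sp_trace_setevent @TraceID, 12, 12, @on;   -- SPID
        EXEC sp_trace_setevent @TraceID, 12, 13, @on;   -- Duration

        EXEC sp_trace_setstatus @TraceID, 1;            -- start the trace

        -- Later, from the application, read what was captured:
        SELECT TextData, SPID, Duration
        FROM sys.fn_trace_gettable(N'C:\Traces\apptrace.trc', DEFAULT);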

    Read the article

  • Sql Server Backup and move backup file: How to cope with file permissions?

    - by Stefan Steinegger
    With our product we have a simple backup tool for the SQL Server database. This tool should just make a full backup and restore to and from any folder. Of course, the user (usually an administrator) needs permission to write to the target folder. To avoid the problem of not being able to perform a backup to a network drive, I write the backup to a temp file in the SQL Server backup directory, then move it to the target folder. This requires permission to delete the temporary file from the SQL Server's backup folder. Restore works the same way in the other direction. This seemed to work fine until someone tested it on Vista, where the user does not have write access to the backup folder by default. There are many ways to solve this, but none of them seemed really nice. One solution would be to find another folder for the temporary file; both the SQL Server service account and the administrator performing the backup need read and write permissions there. Is there such a directory? Any other ideas? Thanks a lot.
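
    One detail that may help: BACKUP ... TO DISK is written by the SQL Server service account, not by the Windows user running the tool, so if the target share or folder can be granted write access for the service account, the staging file and the move step disappear. A hedged sketch with placeholder names:

        -- Back up straight to the destination; only the SQL Server service account
        -- needs write permission on \\fileserver\SqlBackups.
        BACKUP DATABASE MyProductDb
        TO DISK = N'\\fileserver\SqlBackups\MyProductDb.bak'
        WITH INIT;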

    Read the article

  • Where does IE store the ASP.NET_SessionId cookie?

    - by scherand
    I am a bit baffled here; using IE7, ASP.NET 2.0 and Cassini (the VS built-in web server; although the same thing seems to be true for "real" applications deployed in IIS), I am looking for the session id cookie. My test page shows a session id (by printing out Session.SessionId) and Response.Cookies.Keys contains ASP.NET_SessionId. So far so good. But I cannot find the cookie in IE's cookie store! Nor does "remove all cookies" reset the session (as it does in FF)... So where - I am tempted to write that four letter word - does IE store that bloody cookie? Or am I missing something? By the way, there is no hidden field with a session id either, as far as I can see. If I check in FF there is a cookie called ASP.NET_SessionId, as I would expect. And as mentioned above, deleting that cookie does start a new session, as I would expect. Can anybody imagine what is happening here?

    Read the article
