Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling understanding what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things) but I am still unsure if I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above mentioned MSDN article states: The pages in the data chain and the rows in them are ordered on the value of the clustered index key. And Using Clustered Indexes for example states: For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted. Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios depending on how "full" the first data page is. A) If the page has enough free space to accommodate the record it is placed into the existing data page and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record a new data page would be created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-Tree? This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index it can read data pages sequentially (following the links) but these pages are not (necessarily) block wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)
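
    A quick way to see this behaviour in practice is to check the physical fragmentation of the clustered index after inserting "low" keys: new pages are linked into the correct logical position in the page chain, but they are allocated wherever free space exists, which shows up as logical fragmentation. This is only an illustrative sketch; the table name is a placeholder, not taken from the question.

        -- 'dbo.MyClusteredTable' is a hypothetical table name; substitute your own.
        -- avg_fragmentation_in_percent rises when the logical page chain no longer
        -- matches the physical allocation order on disk.
        SELECT index_id,
               index_type_desc,
               avg_fragmentation_in_percent,
               avg_page_space_used_in_percent,
               page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(),                              -- current database
                 OBJECT_ID('dbo.MyClusteredTable'),
                 NULL, NULL, 'DETAILED');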

    Read the article

  • sp_addlinkedserver on sql server 2005 giving problem

    - by Jit
    I am trying to create a linked server to a remote database (both servers are SQL Server 2005). I am able to connect to that remote server from my SQL Server Management Studio. I used the following syntax to create it: EXEC sp_addlinkedserver @server = N'LINKSQL2005', @srvproduct = N'', @provider = N'SQLNCLI', @provstr = N'SERVER=IP Address of remote server ;User ID=XXXXXX;Password=***' I have provided the IP address, user name and password in the above syntax. The linked server is getting created, but when I try to execute a query on it I get the error below. Query used: select * from LINKSQL2005.<DBName>.dbo.<TableName> OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Communication link failure". Msg 10054, Level 16, State 1, Line 0 TCP Provider: An existing connection was forcibly closed by the remote host. Msg 18456, Level 14, State 1, Line 0 Login failed for user 'sa'. OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Invalid connection string attribute". Please help me figure out where I am making a mistake.
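
    One commonly suggested alternative (not presented here as the confirmed fix for this exact error) is to pass the server address in @datasrc instead of embedding credentials in the provider string, and to map the login separately with sp_addlinkedsrvlogin. A hedged sketch with placeholder names:

        EXEC sp_addlinkedserver
             @server     = N'LINKSQL2005',
             @srvproduct = N'',
             @provider   = N'SQLNCLI',
             @datasrc    = N'192.168.0.10';        -- remote server name/IP (placeholder)

        EXEC sp_addlinkedsrvlogin
             @rmtsrvname  = N'LINKSQL2005',
             @useself     = 'false',
             @locallogin  = NULL,                  -- applies to all local logins
             @rmtuser     = N'XXXXXX',             -- remote SQL login (placeholder)
             @rmtpassword = N'***';

        SELECT TOP 10 * FROM LINKSQL2005.SomeDatabase.dbo.SomeTable;  -- placeholder names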

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background My group has 4 SQL Server Databases: Production UAT Test Dev I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production. The Problem The entire process is awkward for a few reasons. Each person must manually track their changes. If I update, add, remove any objects I need to track them so that my promotion request contains everything I've done. In theory, if I miss something testing or UAT should catch it, but this isn't certain and it's a waste of the tester's time, anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc. The Question People have been doing this kind of work for decades, so I imagine there have got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure was different, use that diff to generate a change script, use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.
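
    As a rough illustration of the diff idea, you can compare schemas between two databases with plain metadata queries before reaching for a dedicated comparison tool. The database names below are placeholders, and this only catches column-level differences:

        -- Columns that exist in Dev but not (or not with the same definition) in Test.
        SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
        FROM   DevDB.INFORMATION_SCHEMA.COLUMNS
        EXCEPT
        SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
        FROM   TestDB.INFORMATION_SCHEMA.COLUMNS;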

    Read the article

  • General SQL Server query performance

    - by Kiril
    Hey guys, this might be stupid, but databases are not my thing :) Imagine the following scenario. A user can create a post and other users can reply to his post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected with each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1). What I am trying to do is limit the number of replies to, let's say, 20 per thread. I'm wondering which of the approaches below is faster: 1. I add a new integer column (e.g. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20 I lock the thread. 2. After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread. For further information: I am using a SQL Server database and a LINQ to SQL entity model. I'd be glad if you could tell me your opinions on the two approaches or share another, faster approach. Best Regards, Kiril
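
    For approach 1, the counter update can double as the lock check if it is done atomically. A sketch with assumed column names (Posts.PostID, Posts.ReplyCount), not taken from the question:

        -- Hypothetical schema: the thread's initial post carries the counter.
        UPDATE Posts
        SET    ReplyCount = ReplyCount + 1
        WHERE  PostID = @ThreadRootId
          AND  ReplyCount < 20;

        IF @@ROWCOUNT = 0
        BEGIN
            -- Counter already at 20: treat the thread as locked and reject the reply.
            RAISERROR('Thread is locked.', 16, 1);
        END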

    Read the article

  • "Order By" in LINQ-to-SQL Causes performance issues

    - by panamack
    I've set out to write a method in my C# application which can return an ordered subset of names from a table containing about 2000 names starting at the 100th name and returning the next 20 names. I'm doing this so I can populate a WPF DataGrid in my UI and do some custom paging. I've been using LINQ to SQL but hit a snag with this long executing query so I'm examining the SQL the LINQ query is using (Query B below). Query A runs well: SELECT TOP (20) [t0].[subject_id] AS [Subject_id], [t0].[session_id] AS [Session_id], [t0].[name] AS [Name] FROM [Subjects] AS [t0] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM ( SELECT TOP (100) [t1].[subject_id] FROM [Subjects] AS [t1] WHERE [t1].[session_id] = 1 ORDER BY [t1].[name] ) AS [t2] WHERE [t0].[subject_id] = [t2].[subject_id] ))) AND ([t0].[session_id] = 1) Query B takes 40 seconds: SELECT TOP (20) [t0].[subject_id] AS [Subject_id], [t0].[session_id] AS [Session_id], [t0].[name] AS [Name] FROM [Subjects] AS [t0] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM ( SELECT TOP (100) [t1].[subject_id] FROM [Subjects] AS [t1] WHERE [t1].[session_id] = 1 ORDER BY [t1].[name] ) AS [t2] WHERE [t0].[subject_id] = [t2].[subject_id] ))) AND ([t0].[session_id] = 1) ORDER BY [t0].[name] When I add the ORDER BY [t0].[name] to the outer query it slows down the query. How can I improve the second query? This was my LINQ stuff Nick int sessionId = 1; int start = 100; int count = 20; // Query subjects with the shoot's session id var subjects = cldb.Subjects.Where<Subject>(s => s.Session_id == sessionId); // Filter as per params var orderedSubjects = subjects .OrderBy<Subject, string>( s => s.Col_zero ); var filteredSubjects = orderedSubjects .Skip<Subject>(start) .Take<Subject>(count);
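
    One server-side alternative (for SQL Server 2005/2008, before OFFSET/FETCH existed) is to page with ROW_NUMBER() instead of the nested NOT EXISTS pattern LINQ generates; a sketch assuming the same Subjects table and columns shown above:

        WITH Ordered AS
        (
            SELECT subject_id, session_id, name,
                   ROW_NUMBER() OVER (ORDER BY name) AS rn
            FROM   Subjects
            WHERE  session_id = 1
        )
        SELECT subject_id, session_id, name
        FROM   Ordered
        WHERE  rn BETWEEN 101 AND 120   -- skip 100, take 20
        ORDER BY rn;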

    Read the article

  • Group / User based security. Table / SQL question

    - by Brett
    Hi, I'm setting up a group / user based security system. I have 4 tables as follows: user groups group_user_mappings acl where acl is the mapping between an item_id and either a group or a user. The way I've done the acl table, I have 3 columns of note (actually 4th one as an auto-id, but that is irrelevant) col 1 item_id (item to access) col 3 user_id (user that is allowed to access) col 3 group_id (group that is allowed to access) So for example item1, peter, , item2, , group1 item3, jane, , so either the acl will give access to a user or a group. Any one line in the ACL table with either have an item - user mapping, or an item group. If I want to have a query that returns all objects a user has access to, I think I need to have a SQL query with a UNION, because I need 2 separate queries that join like.. item - acl - group - user AND item - acl - user This I guess will work OK. Is this how its normally done? Am I doing this the right way? Seems a little messy. I was thinking I could get around it by creating a single user group for each person, so I only ever deal with groups in my SQL, but this seems a little messy as well..
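
    A sketch of the UNION approach using the table and column names described above (assumed, since the exact DDL isn't shown):

        -- Items the user can access directly...
        SELECT a.item_id
        FROM   acl AS a
        WHERE  a.user_id = @user_id

        UNION

        -- ...plus items granted to any group the user belongs to.
        SELECT a.item_id
        FROM   acl AS a
        JOIN   group_user_mappings AS gum
               ON gum.group_id = a.group_id
        WHERE  gum.user_id = @user_id;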

    Read the article

  • SQL Server: preventing dirty reads in a stored procedure

    - by pcampbell
    Consider a SQL Server database and its two stored procs:

    1. A proc that performs 3 important things in a transaction: create a customer, call a sproc to perform another insert, and conditionally insert a third record with the new identity.

        BEGIN TRAN
            INSERT INTO Customer(CustName) VALUES (@CustomerName)
            SELECT @NewID = SCOPE_IDENTITY()
            EXEC CreateNewCustomerAccount @NewID, @CustomerPhoneNumber
            IF @InvoiceTotal > 100000
                INSERT INTO PreferredCust(InvoiceTotal, CustID) VALUES (@InvoiceTotal, @NewID)
        COMMIT TRAN

    2. A stored proc which polls the Customer table for new entries that don't have a related PreferredCust entry. The client app performs the polling by calling this stored proc every 500ms.

    A problem has arisen where the polling stored procedure has found an entry in the Customer table and returned it as part of its results. The problem is that it picked up that record, I am assuming, as part of a dirty read. The record ended up having an entry in PreferredCust later, and ended up creating a problem downstream.

    Question: How can you explicitly prevent dirty reads by that second stored proc? The environment is SQL Server 2005 with the default out-of-the-box configuration. No other locking hints are given in either of these stored procedures.
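
    Under the default READ COMMITTED isolation level a reader should not see uncommitted rows, so one hedged sketch is simply to make the polling proc's isolation level explicit (rather than relying on session defaults or hints elsewhere) and perform the existence check in a single statement. Table and column names are assumed from the description above:

        -- Polling proc sketch: only committed Customer rows are visible here.
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

        SELECT c.CustID, c.CustName
        FROM   Customer AS c
        LEFT JOIN PreferredCust AS p
               ON p.CustID = c.CustID
        WHERE  p.CustID IS NULL;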

    Read the article

  • Update or Insert Row depending on whether row is present in Microsoft SQL Server 2005

    - by Srikanth
    Hi, I am passing a XML document as a input to a stored procedure in Microsoft SQL Server 2005. This is the sample XML being passed as input <Strategy StrategyID="0" TOStrategyID="8" ShutdownQtySell="1" ShutdownQtyBuy="1"> <ParameterRange ParameterSetID="6" ParameterRangeID="1" ParameterRangeFrom="0" ParameterRangeTo="20" ParameterAutoTakeOut="False"> </ParameterRange> <ParameterRange ParameterSetID="6" ParameterRangeID="4" ParameterRangeFrom="21" ParameterRangeTo="40" ParameterAutoTakeOut="False"> </ParameterRange> <ParameterRange ParameterSetID="6" ParameterRangeID="5" ParameterRangeFrom="41" ParameterRangeTo="60" ParameterAutoTakeOut="False"> </ParameterRange> <ParameterRange ParameterSetID="6" ParameterRangeID="6" ParameterRangeFrom="61" ParameterRangeTo="80" ParameterAutoTakeOut="False"> </ParameterRange> <ParameterRange ParameterSetID="6" ParameterRangeID="7" ParameterRangeFrom="81" ParameterRangeTo="100" ParameterAutoTakeOut="False"> </ParameterRange> </Strategy> I am able to retrieve the data using OpenXML functionality in SQL server I am using this to get the data corresponding to ParameterRange rows SELECT ParameterRangeID as iRangeID, ParameterSetID as iSetID, ParameterRangeFrom as fRangeFrom, ParameterRangeTo as fRangeTo, ParameterAutoTakeOut as bTakeoutEnabled FROM OPENXML(@idoc, '/Strategy/ParameterRange', 1) WITH (ParameterSetID int,ParameterRangeID int,ParameterRangeFrom float,ParameterRangeTo float,ParameterAutoTakeOut bit) Now, I need to insert/update these rows into a table TempRanges which has (iRangeID,iSetID) as the primary key. If there is a row with the primary key, I want to update it the latest values and If there is no row with that primary key, I need to insert into the table. How can I accomplish this inside the Stored Procedure ? Thanks, Sri
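
    Since SQL Server 2005 has no MERGE statement, one common pattern is a set-based update followed by an insert of the rows that are still missing, driven by the same OPENXML rowset. A sketch assuming the TempRanges columns match the names used above (iRangeID, iSetID, fRangeFrom, fRangeTo, bTakeoutEnabled):

        -- Stage the XML rows once (same OPENXML projection as above).
        SELECT ParameterRangeID AS iRangeID,
               ParameterSetID   AS iSetID,
               ParameterRangeFrom AS fRangeFrom,
               ParameterRangeTo   AS fRangeTo,
               ParameterAutoTakeOut AS bTakeoutEnabled
        INTO   #Incoming
        FROM   OPENXML(@idoc, '/Strategy/ParameterRange', 1)
               WITH (ParameterSetID int, ParameterRangeID int,
                     ParameterRangeFrom float, ParameterRangeTo float,
                     ParameterAutoTakeOut bit);

        -- Update rows whose primary key already exists.
        UPDATE t
        SET    t.fRangeFrom = i.fRangeFrom,
               t.fRangeTo = i.fRangeTo,
               t.bTakeoutEnabled = i.bTakeoutEnabled
        FROM   TempRanges AS t
        JOIN   #Incoming  AS i ON i.iRangeID = t.iRangeID AND i.iSetID = t.iSetID;

        -- Insert the rows that are not there yet.
        INSERT TempRanges (iRangeID, iSetID, fRangeFrom, fRangeTo, bTakeoutEnabled)
        SELECT i.iRangeID, i.iSetID, i.fRangeFrom, i.fRangeTo, i.bTakeoutEnabled
        FROM   #Incoming AS i
        WHERE  NOT EXISTS (SELECT 1 FROM TempRanges AS t
                           WHERE t.iRangeID = i.iRangeID AND t.iSetID = i.iSetID);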

    Read the article

  • Database table relationships: Always also relate to specified value (Linq to SQL in .NET Framework)

    - by sinni800
    I really can not describe my question better in the title. If anyone has suggestions: Please tell! I use the Linq to SQL framework in .NET. I ran into something which could be easily solved if the framework supported this, it would be a lot of extra coding otherwise: I have a n to n relation with a helper table in between. Those tables are: Items, places and the connection table which relates items to places and the other way. One item can be found in many places, so can one place have many items. Now of course there will be many items which will be in ALL places. Now there is a problem: Places can always be added. So I need a place-ID which encompasses ALL places, always. Like maybe a place-id "0". If the helper table has a row with the place-id of zero, this should be visible in all places. In SQL this would be a simple "Where [...] or place-id = 0", but how do I do this in Linq relations? Also, for a little side question: How could I manage "all but this place" kind of exclusions?
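
    In raw SQL the "global place" convention described above is just an extra OR in the join filter. A sketch with assumed table and column names (Items, ItemPlaces), since the actual schema isn't shown; for the "all but this place" case the equality flips to ip.PlaceId <> @PlaceId:

        -- Items mapped to the given place, plus items mapped to the "all places" row (id 0).
        SELECT DISTINCT i.*
        FROM   Items AS i
        JOIN   ItemPlaces AS ip ON ip.ItemId = i.ItemId
        WHERE  ip.PlaceId = @PlaceId
           OR  ip.PlaceId = 0;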

    Read the article

  • 'LINQ query plan' horribly inefficient but 'Query Analyser query plan' is perfect for same SQL!

    - by Simon_Weaver
    I have a LINQ to SQL query that generates the following SQL: exec sp_executesql N'SELECT COUNT(*) AS [value] FROM [dbo].[SessionVisit] AS [t0] WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND ([t0].[SessionDate] > @p1)',N'@p0 int,@p1 datetime', @p0=1,@p1='2010-02-15 01:24:00' (This is the actual SQL taken from SQL Profiler on SQL Server 2008.) The query plan generated when I run this SQL from within Query Analyser is perfect. It uses an index containing VisitedStore, Bot, SessionDate. The query returns instantly. However, when I run this from C# (with LINQ) a different query plan is used that is so inefficient it doesn't even return in 60 seconds. This query plan is trying to do a key lookup on the clustered primary key which contains a couple million rows. It has no chance of returning. What I just can't understand, though, is that the EXACT same SQL is being run - either from within LINQ or from within Query Analyser - yet the query plan is different. I've run the two queries many, many times and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems. Is there anything I can change in LINQ to SQL to affect the query plan or try to debug this further? I'm very, very confused!
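
    One way to test whether this is a parameter-sniffing / cached-plan problem (a common cause when the same statement text behaves differently between sessions) is to run the captured statement with a recompile hint and compare the plans. This is a diagnostic sketch, not a guaranteed fix:

        EXEC sp_executesql
             N'SELECT COUNT(*) AS [value]
               FROM [dbo].[SessionVisit] AS [t0]
               WHERE ([t0].[VisitedStore] = @p0)
                 AND (NOT ([t0].[Bot] = 1))
                 AND ([t0].[SessionDate] > @p1)
               OPTION (RECOMPILE)',
             N'@p0 int, @p1 datetime',
             @p0 = 1, @p1 = '2010-02-15 01:24:00';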

    Read the article

  • Generated sql from LINQ to SQL

    - by Muhammad Kashif Nadeem
    The following code: ProductPricesDataContext db = new ProductPricesDataContext(); var products = from p in db.Products where p.ProductFields.Count > 3 select new { ProductIDD = p.ProductId, ProductName = p.ProductName.Contains("hotel"), NumbeOfProd = p.ProductFields.Count, totalFields = p.ProductFields.Sum(o => o.FieldId + o.FieldId) }; generated the following SQL: SELECT [t0].[ProductId] AS [ProductIDD], (CASE WHEN [t0].[ProductName] LIKE '%hotel%' THEN 1 WHEN NOT ([t0].[ProductName] LIKE '%hotel%') THEN 0 ELSE NULL END) AS [ProductName], ( SELECT COUNT(*) FROM [dbo].[ProductField] AS [t2] WHERE [t2].[ProductId] = [t0].[ProductId] ) AS [NumbeOfProd], ( SELECT SUM([t3].[FieldId] + [t3].[FieldId]) FROM [dbo].[ProductField] AS [t3] WHERE [t3].[ProductId] = [t0].[ProductId]) AS [totalFields] FROM [dbo].[Product] AS [t0] WHERE (( SELECT COUNT(*) FROM [dbo].[ProductField] AS [t1] WHERE [t1].[ProductId] = [t0].[ProductId] )) > 3 Why is there a CASE statement for ProductName? Because of it, instead of ProductName I am just getting 0 in my result set. It should generate SQL like the following (where ProductName like '%hotel%'): SELECT [t0].[ProductId] AS [ProductIDD], [ProductName], ( SELECT COUNT(*) FROM [dbo].[ProductField] AS [t2] WHERE [t2].[ProductId] = [t0].[ProductId] ) AS [NumbeOfProd], ( SELECT SUM([t3].[FieldId] + [t3].[FieldId]) FROM [dbo].[ProductField] AS [t3] WHERE [t3].[ProductId] = [t0].[ProductId]) AS [totalFields] FROM [dbo].[Product] AS [t0] WHERE (( SELECT COUNT(*) FROM [dbo].[ProductField] AS [t1] WHERE [t1].[ProductId] = [t0].[ProductId] )) > 3 AND t0.ProductName like '%hotel%' Thanks.

    Read the article

  • SQL: select rows with the same order as IN clause

    - by Andrea3000
    I know that this question has been asked several times and I've read all the answer but none of them seem to completely solve my problem. I'm switching from a mySQL database to a MS Access database with MS SQL. In both of the case I use a php script to connect to the database and perform SQL queries. I need to find a suitable replacement for a query I used to perform on mySQL. I want to: perform a first query and order records alphabetically based on one of the columns construct a list of IDs which reflects the previous alphabetical order perform a second query with the IN clause applied with the IDs' list and ordered by this list. In mySQL I used to perform the last query this way: SELECT name FROM users WHERE id IN ($name_ids) ORDER BY FIND_IN_SET(id,'$name_ids') Since FIND_IN_SET is available only in mySQL and CHARINDEX and PATINDEX are not available from my php script, how can I achieve this? I know that I could write something like: SELECT name FROM users WHERE id IN ($name_ids) ORDER BY CASE id WHEN ... THEN 1 WHEN ... THEN 2 WHEN ... THEN 3 WHEN ... THEN 4 END but you have to consider that: IDs' list has variable length and elements because it depends on the first query that list can easily contains thousands of elements Have you got any hint on this? Is there a way to programmatically construct the ORDER BY CASE ... WHEN ... statement? Is there a better approach since my list of IDs can be big?
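
    One portable approach is to load the ID list into a temporary table (or table variable) that carries an explicit position column, then join and order by that position instead of building a giant CASE. A T-SQL sketch follows (the same idea, a position column you join on, carries over to Access SQL); the ID values are placeholders inserted by the application in the order it wants back:

        CREATE TABLE #OrderedIds (position int NOT NULL, id int NOT NULL);

        INSERT INTO #OrderedIds (position, id) VALUES (1, 42);
        INSERT INTO #OrderedIds (position, id) VALUES (2, 7);
        INSERT INTO #OrderedIds (position, id) VALUES (3, 19);
        -- ...one row per ID from the first query, in the order it returned them.

        SELECT u.name
        FROM   users AS u
        JOIN   #OrderedIds AS o ON o.id = u.id
        ORDER BY o.position;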

    Read the article

  • Interesting LinqToSql behaviour

    - by Ben Robinson
    We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for these records employeeid is null. We are using LinqToSql to access the database, and the query to pull out all non-employee-related wav file records is as follows: var results = from Wavs in db.WaveFiles where Wavs.employeeid == null; Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server I discovered the reason no records are returned is because LinqToSql is turning it into SQL that looks very much like: SELECT Field1, Field2 //etc FROM WaveFiles WHERE 1=0 Obviously this returns no rows. However, if I go into the DBML designer, remove the association and save, all of a sudden the exact same LINQ query turns into: SELECT Field1, Field2 //etc FROM WaveFiles WHERE EmployeeID IS NULL I.e. if there is an association then LinqToSql assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a WHERE clause that will return no records. Does anyone know if there is a way to keep the association in LinqToSql but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LinqToSql and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.

    Read the article

  • Complex SQL design, help/advice needed

    - by eugeneK
    Hi, I have a few questions for the SQL gurus in here... Briefly, this is an ads management system where a user can define campaigns for different countries, categories and languages. I have a few questions in mind, so help with what you can. Generally I'm using ASP.NET and I want to cache the whole result set for a certain user once he asks for statistics for the first time; this way I will avoid large round-trips to the server. Any help is welcome. Click here for a diagram with all the details you need for my questions. 1. The main issue of this application is to show the user how many clicks/impressions there were and how much money he spent on a campaign. What is the easiest way to get this information for him? I will also include filtering by date, date ranges and a few other params in this statistics table. 2. The other issue is what happens when a user tries to edit a campaign. The old campaign will die; this means if the user set $0.01 as campaignPPU (pay-per-unit) and the next day updates it to $0.05, everything will be reset to $0.05. 3. If you could re-design some parts of the table design so it would be more flexible and easier to modify, how would you do it? Thanks... sorry for such a large job, but it may interest some SQL guys in here.
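
    Without the diagram the exact schema is guesswork, but the usual answer to question 1 is a single aggregate over the click/impression fact table, grouped by campaign and (optionally) by day. Every table and column name below is an assumption, shown only to illustrate the shape of the query:

        SELECT c.CampaignId,
               CAST(e.EventDate AS date) AS EventDay,
               SUM(CASE WHEN e.EventType = 'click'      THEN 1 ELSE 0 END) AS Clicks,
               SUM(CASE WHEN e.EventType = 'impression' THEN 1 ELSE 0 END) AS Impressions,
               SUM(CASE WHEN e.EventType = 'click'      THEN c.CampaignPPU ELSE 0 END) AS Spent
        FROM   Campaigns AS c
        JOIN   CampaignEvents AS e ON e.CampaignId = c.CampaignId
        WHERE  c.UserId = @UserId
          AND  e.EventDate >= @From AND e.EventDate < @To
        GROUP BY c.CampaignId, CAST(e.EventDate AS date);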

    Read the article

  • PHP -- automatic SQL injection protection?

    - by ashgromnies
    I took over maintenance of a PHP app recently and I'm not super familiar with PHP but some of the things I've been seeing on the site are making me nervous that it could be vulnerable to a SQL injection attack. For example, see how this code for logging into the administrative section works: $password = md5(HASH_SALT . $_POST['loginPass']); $query = "SELECT * FROM `administrators` WHERE `active`='1' AND `email`='{$_POST['loginEmail']}' AND `password`='{$password}'"; $userInfo = db_fetch_array(db_query($query)); if($userInfo['id']) { $_SESSION['adminLoggedIn'] = true; // user is logged in, other junk happens here, not important The creators of the site made a special db_query method and db_fetch_array method, shown here: function db_query($qstring,$print=0) { return @mysql(DB_NAME,$qstring); } function db_fetch_array($qhandle) { return @mysql_fetch_array($qhandle); } Now, this makes me think I should be able to do some sort of SQL injection attack with an email address like: ' OR 'x'='x' LIMIT 1; and some random password. When I use that on the command line, I get an administrative user back, but when I try it in the application, I get an invalid username/password error, like I should. Could there be some sort of global PHP configuration they have enabled to block these attacks? Where would that be configured? Here is the PHP --version information: # php --version PHP 5.2.12 (cli) (built: Feb 28 2010 15:59:21) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies with the ionCube PHP Loader v3.3.14, Copyright (c) 2002-2010, by ionCube Ltd., and with Zend Optimizer v3.3.9, Copyright (c) 1998-2009, by Zend Technologies

    Read the article

  • Linq to SQL - Failing to update

    - by Isaac
    Hello, I'm having some troubles with updating linq to sql entities. For some reason, I can update every single field of my item entity besides name. Here are two simple tests I wrote: [TestMethod] public void TestUpdateName( ) { using ( var context = new SimoneDataContext( ) ) { Item item = context.Items.First( ); if ( item != null ) { item.Name = "My New Name"; context.SubmitChanges( ); } } } [TestMethod] public void TestUpdateMPN( ) { using ( var context = new SimoneDataContext( ) ) { Item item = context.Items.First( ); if ( item != null ) { item.MPN = "My New MPN"; context.SubmitChanges( ); } } } Unfortunately, TestUpdateName() fails with the following error: System.Data.SqlClient.SqlException: Incorrect syntax near the keyword 'WHERE'.. And here's the outputted SQL: UPDATE [dbo].[Items] SET WHERE ([Id] = @p0) AND ([CategoryId] = @p1) AND ([MPN] = @p2) AND ([Height] = @p3) AND ([Width] = @p4) AND ([Weight] = @p5) AND ([Length] = @p6) AND ([AdministrativeCost] = @p7) -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1] -- @p1: Input Int (Size = 0; Prec = 0; Scale = 0) [1] -- @p2: Input VarChar (Size = 10; Prec = 0; Scale = 0) [My New MPN] -- @p3: Input Decimal (Size = 0; Prec = 5; Scale = 3) [30.000] -- @p4: Input Decimal (Size = 0; Prec = 5; Scale = 3) [10.000] -- @p5: Input Decimal (Size = 0; Prec = 5; Scale = 3) [40.000] -- @p6: Input Decimal (Size = 0; Prec = 5; Scale = 3) [30.000] -- @p7: Input Money (Size = 0; Prec = 19; Scale = 4) [350.0000] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.4926 As you can see, no update is being generated (SET is empty ...) I have no clue why is this happening. And already in advance ... YES, the table Item has a PK (Id). Thank you in advance!

    Read the article

  • ORDER BY in a Sql Server 2008 view

    - by eidylon
    Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally don't order, because different people may use it for different things, and want it differently ordered. This view however is used for a VERY SPECIFIC use-case which demands a certain order. (It is team standings for a soccer league.) The database is Sql Server 2008 Express, v.10.0.1763.0 on a Windows Server 2003 R2 box. The view is defined as such: CREATE VIEW season.CurrentStandingsOrdered AS SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING FROM season.CurrentStandings ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS, GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING It returns: GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME, WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST, DIFFERENTIAL, POINTS, FORFEITS, RANKING Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS as the order by clause specifies. However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly as specified by the ORDER BY clause.
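
    Because SQL Server 2005 and later are free to ignore a TOP 100 PERCENT ... ORDER BY inside a view definition, the usual workaround is to treat the view as unordered and put the ORDER BY on the query that selects from it, repeating the ordering the use case needs:

        SELECT *
        FROM   season.CurrentStandingsOrdered
        ORDER BY GENDER, TEAMYEAR, CODE,
                 POINTS DESC, FORFEITS, GOALS_AGAINST,
                 GOALS_FOR DESC, DIFFERENTIAL, RANKING;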

    Read the article

  • Remote connection to SQL Server Express fails

    - by worlds-apart89
    I have two computers that share the same Internet IP address. Using one of the computers, I can remotely connect to a SQL Server database on the other. Here is my connection string: SqlConnection connection = new SqlConnection(@"Data Source=192.168.1.101\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;"); 192.168.1.101 is the server, SQLEXPRESSNI is the SQL Server instance name, and FirstDB is the name of the database. Now, I have another computer with a different Internet IP address. I want to connect to the server above using the third computer that does not belong to my local area network. I dont have access to that third computer at the moment, so I want to use (if possible) the client computer in LAN again. SqlConnection connection = new SqlConnection(@"Data Source=SharedInternetIP\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;"); Does not work Note that I am a beginner, so I am not quite sure what I am doing even though I know what I want to do. By passing the Internet IP to the SqlConnection object rather than the local IP address, how can I successfully connect to the server computer, using the client computer in the same network? Also note that my ultimate goal is to connect to the server with an external client, but I don't have access to that computer right now. I'd appreciate any help.

    Read the article

  • What's the SQL function for timestamp manipulation?

    - by Eric Leroy
    I'm new to SQL and the time functions are different from MySQL's, so I'm having a terrible time finding a good site reference with USEFUL timestamp queries. I'm not able to locate the correct way of doing this query in SQL:

        Id       Timestamp
        -----------------------------------
        1145744  2012-10-10 18:15:11.500
        1145743  2012-10-10 18:15:11.313
        1145742  2012-10-10 18:15:11.313
        1145741  2012-10-10 18:15:11.253
        1145740  2012-10-10 18:15:11.190
        1145739  2012-10-10 18:15:11.190
        1145738  2012-10-10 18:15:11.127
        1145737  2012-10-10 18:15:11.067
        1145736  2012-10-10 18:15:11.063
        1145735  2012-10-10 18:15:10.940
        1145734  2012-10-10 18:15:10.817

        SELECT * from table WHERE Timestamp ... RANGE

    I need the range of 2 timestamps so I can select rows by the following parameters: second range, minute range, hour range, day range, week range, month range, year range. Is there one function to put in 2 timestamps and get the range, or is it a mix of functions I need? Any good site references would be greatly appreciated. The MSDN site isn't helping me isolate the proper way of doing this. I've been searching for about an hour trying to get the last day, from 1:30 PM to 1:30 PM today.
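
    There is no single "range" function; the usual building blocks are DATEADD and DATEDIFF with a half-open comparison on the column. A sketch for the "last day from 1:30 PM to 1:30 PM" case, using the column names from the excerpt above and a placeholder table name:

        DECLARE @end datetime, @start datetime;
        SET @end   = '2012-10-11T13:30:00';
        SET @start = DATEADD(day, -1, @end);   -- 24 hours earlier

        SELECT Id, [Timestamp]
        FROM   MyTable                          -- placeholder table name
        WHERE  [Timestamp] >= @start
          AND  [Timestamp] <  @end;

        -- Swap the DATEADD unit for the other ranges:
        --   DATEADD(second, -1, @end), DATEADD(minute, -1, @end), DATEADD(hour, -1, @end),
        --   DATEADD(week, -1, @end), DATEADD(month, -1, @end), DATEADD(year, -1, @end)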

    Read the article

  • .Net SQL Parameter for String List Problem

    - by JK
    I am writing a C# program in which I send a query to SQL Server to be processed and a dataset is returned. I am using parameters to pass information to the query before it is sent to SQL Server. This works fine except in the situation below. The query looks like this: reportQuery = " Select * From tableName Where Account_Number in (@AccountNum); and Account_Date = @AccountDate "; The AccountDate parameter works fine but not the AccountNum parameter. I need the final query to execute like this: Select * From tableName Where Account_Number in ('AX3456','YZYL123','ZZZ123'); and Account_Date = '1-Jan-2010' The problem is that I have these account numbers (actually text) in a C# string list. To feed it to the parameter, I have been declaring the parameter as a string. I turn the list into one string and feed it to the parameter. I think the problem is that I am feeding the parameter this: "'AX3456','YZYL123','ZZZ123'" when it wants this 'AX3456','YZYL123','ZZZ123' How do I get the string list into the query using a parameter and have it execute as shown above? This is how I am declaring and assigning the parameter: SqlParameter AccountNumsParam = new SqlParameter(); AccountNumsParam.ParameterName = "@AccountNums"; AccountNumsParam.SqlDbType = SqlDbType.NVarChar; AccountNumsParam.Value = AccountNumsString; FYI, AccountNumString == "'AX3456','YZYL123','ZZZ123'"
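
    A single parameter cannot expand into a comma-separated list of literals for IN; its whole value is treated as one string. Two common workarounds are (a) building one parameter per account number, or (b) passing the delimited string and splitting it on the server (on SQL Server 2008 a table-valued parameter is another option). A sketch of (b), where dbo.SplitString is a hypothetical helper function that returns one value per row; it is not built in:

        -- @AccountNums would be passed as: AX3456,YZYL123,ZZZ123  (no quotes needed)
        SELECT *
        FROM   tableName
        WHERE  Account_Number IN (SELECT value FROM dbo.SplitString(@AccountNums, ','))
          AND  Account_Date = @AccountDate;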

    Read the article

  • Error while closing SQL Connection

    - by Wickedman84
    I have a problem with closing the SQLconnection in my application. My application is in VB.net. I have a reference in my application to a class with code to open and close the database connection and to execute all sql scripts. The error occurs when i close my application. In the formClosing event of my main form I call a function that closes all the connections. But just before I close the connections I perform an SQLquery to delete a row from a table with the function below. Public Function DeleteFunction(ByVal mySQLQuery As String, ByVal cmd As SqlCommand) As Boolean Try cmd.Connection = myConnection cmd.CommandText = mySQLQuery cmd.ExecuteNonQuery() Return True Catch ex As Exception WriteErrorMessage("DeleteFunction", ex, Logpath, "SQL Error: " & mySQLQuery) Return False End Try End Function In my application I check the result of the boolean. If it returns True, then i call the function to close the database connection. The returned boolean is True and the requested row is deleted in my database. This means i can close my connection which I do with the function below. Public Sub DatabaseConnClose() myCommand.CommandText = "" myConnection.Close() myCommand = Nothing myConnection = Nothing End Sub After executing this code I receive an error in my logfile from the DeleteFunction. It says: "Connection property has not been initialized." It seems very strange to receive an error from a function that was completely executed, or am i wrong to think that? Can anyone tell me why I receive this error and how I can solve the problem?

    Read the article

  • Windows Azure and Server App Fabric &ndash; kinsmen or distant relatives?

    - by kaleidoscope
    If you are into Windows Azure then it would be rather demeaning to ask if you are aware of Windows Azure App Fabric. Just in case you are not - Windows Azure App Fabric provides a secure connectivity service by means of which developers can build distributed applications as well as services that work across network and organizational boundaries in the cloud. But some of you may have heard of another similar term floating around forums and blog posts - Windows Server App Fabric. The momentary déjà vu that you might have felt upon encountering it is not unheard of in Cloud Computing circles - http://social.msdn.microsoft.com/Forums/en/netservices/thread/5ad4bf92-6afb-4ede-b4a8-6c2bcf8f2f3f http://forums.virtualizationtimes.com/session-state-management-using-windows-server-app-fabric Many have fallen prey to this ambiguous nomenclature, but it's not without a purpose. First announced at PDC 2009, Windows Server AppFabric is a set of application services focused on improving the speed, scale, and management of Web, Composite, and Enterprise applications. Initially codenamed Dublin, the app fabric (oops... Windows Server App Fabric) provides add-ons like monitoring, tracking and persistence to your hosted Workflows and Services without the developer having to worry about these functionalities. Along with this, it also provides distributed in-memory caching features from Velocity. In short, it is a healthy equivalent of Windows Azure App Fabric minus the cloud part. So why bring this up while talking about Windows Azure? Well, apart from their similar last names, these powers are soon to be combined if Microsoft's roadmap is to be believed - "Together, Windows Server AppFabric and Windows Azure platform AppFabric provide a comprehensive set of services that help developers rapidly develop new applications spanning Windows Azure and Windows Server, and which also interoperate with other industry platforms such as Java, Ruby, and PHP." One of the most powerful features of Windows Server App Fabric is its distributed caching mechanism which, if appropriately leveraged with the Windows Azure App Fabric, could very well mean a revolution in session management techniques for the Azure platform. Well Microsoft, we do have our fingers crossed..... Read on... http://blogs.technet.com/windowsserver/archive/2010/03/01/windows-server-appfabric-beta-2-available.aspx

    Read the article

  • Database operations through SQL: Database Restore ...

    - by zc0000
    If you use SQL to perform a database restore, write the SQL carefully: any trivial mistake may prevent a successful execution. Cases are listed here based on simple experiments. With the operation: MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name' If the logical file name is not set correctly, the following error is obtained: Logical file 'FILE_NAME' is not part of database 'DATABASE_NAME'. Use RESTORE FILELISTONLY to list the logical file names. RESTORE DATABASE is terminating abnormally. To be continued...
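
    For reference, a complete restore that relocates both files looks roughly like this; all paths and names are placeholders. RESTORE FILELISTONLY is the quickest way to discover the correct logical names before writing the MOVE clauses:

        RESTORE FILELISTONLY
        FROM DISK = N'C:\Backups\MyDatabase.bak';   -- shows the logical file names

        RESTORE DATABASE MyDatabase
        FROM DISK = N'C:\Backups\MyDatabase.bak'
        WITH MOVE N'MyDatabase_Data' TO N'D:\Data\MyDatabase.mdf',
             MOVE N'MyDatabase_Log'  TO N'E:\Logs\MyDatabase_log.ldf',
             REPLACE;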

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new "401 – Internals" tag that identifies posts where I pull back the curtains a bit and let you peek into what's going on inside the extended events engine.

    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

        max memory / # of buffers = buffer size

    If it was that simple I wouldn't be writing this post.

    I'll take "boundary" for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs.

    Note: This test was run on a 2 core machine using per_cpu partitioning which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)

        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015

    This is just a segment of the results that shows one of the "jumps" between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 – 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of none, you always get three buffers, regardless of machine configuration, so I generated a "range table" for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                  NULL                   NULL                 NULL
        192                    383                  3                      130867               392601
        384                    575                  3                      196403               589209
        576                    767                  3                      261939               785817
        768                    959                  3                      327475               982425
        960                    1151                 3                      393011               1179033
        1152                   1343                 3                      458547               1375641
        1344                   1535                 3                      524083               1572249
        1536                   1727                 3                      589619               1768857
        1728                   1919                 3                      655155               1965465
        1920                   2111                 3                      720691               2162073
        2112                   2303                 3                      786227               2358681
        2304                   2495                 3                      851763               2555289
        2496                   2687                 3                      917299               2751897
        2688                   2879                 3                      982835               2948505
        2880                   3071                 3                      1048371              3145113
        3072                   3263                 3                      1113907              3341721
        3264                   3455                 3                      1179443              3538329
        3456                   3647                 3                      1244979              3734937
        3648                   3839                 3                      1310515              3931545
        3840                   4031                 3                      1376051              4128153
        4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 “steps” within this range and max_memory values below 192 KB fall below the 64K per buffer limit so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn’t likely to cause a problem for most machines.

    Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size you’ll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you’re getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE.

    The “buffer steps” for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode.
        DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate

        WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER

                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session

                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session

                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                    FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH

            SET @buf_size = @buf_size + 1
        END

        DROP EVENT SESSION buffer_size_test ON SERVER

        SELECT MIN(input_memory_kb) start_memory_range_kb,
               MAX(input_memory_kb) end_memory_range_kb,
               total_regular_buffers, regular_buffer_size, total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals.

    - Mike

    Read the article
