Search Results

  • Newly configured MSSQL2008, TIME_WAIT but no ESTABLISHED?

    - by 3molo
    Windows 2008 R2 Standard, no local firewall on it. Newly set up because an old SQL 2000 box had two disks die (or could it be the RAID controller?) at the same time. Luckily, I had fresh backups. The databases have been restored, and SP2 for SQL 2008 applied. I can see various hosts trying to establish a session, but the (customer) sites do not work and I don't see the expected ESTABLISHED sessions. A Wireshark capture reveals a full three-way handshake. Since it's customer machines connecting, I cannot log on to them and restart application pools. What on earth could be causing this?

        No. Time     Source    Destination Protocol Info
        1   0.000000 1.2.5.127 1.2.6.133   TCP      desktop-dna > ms-sql-s [SYN] Seq=0 Win=65535 Len=0 MSS=1380 SACK_PERM=1
        Frame 1: 62 bytes on wire (496 bits), 62 bytes captured (496 bits)
        Ethernet II, Src: Cisco_31:5e:09 (00:26:0b:31:5e:09), Dst: Vmware_b7:00:05 (00:50:56:b7:00:05)
        Internet Protocol, Src: 1.2.5.127 (1.2.5.127), Dst: 1.2.6.133 (1.2.6.133)
        Transmission Control Protocol, Src Port: desktop-dna (2763), Dst Port: ms-sql-s (1433), Seq: 0, Len: 0

        No. Time     Source    Destination Protocol Info
        2   0.000123 1.2.6.133 1.2.5.127   TCP      ms-sql-s > desktop-dna [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 SACK_PERM=1
        Frame 2: 62 bytes on wire (496 bits), 62 bytes captured (496 bits)
        Ethernet II, Src: Vmware_b7:00:05 (00:50:56:b7:00:05), Dst: Cisco_31:5e:09 (00:26:0b:31:5e:09)
        Internet Protocol, Src: 1.2.6.133 (1.2.6.133), Dst: 1.2.5.127 (1.2.5.127)
        Transmission Control Protocol, Src Port: ms-sql-s (1433), Dst Port: desktop-dna (2763), Seq: 0, Ack: 1, Len: 0

        No. Time     Source    Destination Protocol Info
        3   0.000884 1.2.5.127 1.2.6.133   TCP      desktop-dna > ms-sql-s [ACK] Seq=1 Ack=1 Win=65535 Len=0

    And netstat:

        TCP 1.2.6.133:1433 1.2.2.98:26895  TIME_WAIT 0
        TCP 1.2.6.133:1433 1.2.2.98:26912  TIME_WAIT 0
        TCP 1.2.6.133:1433 1.2.2.98:26918  TIME_WAIT 0
        TCP 1.2.6.133:1433 1.2.2.98:26931  TIME_WAIT 0
        TCP 1.2.6.133:1433 1.2.5.127:2736  TIME_WAIT 0
        TCP 1.2.6.133:1433 1.2.5.127:2737  TIME_WAIT 0
        TCP 1.2.6.133:1433 1.2.5.127:2738  TIME_WAIT 0
        TCP 1.2.6.133:1433 1.2.5.127:2739  TIME_WAIT 0
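
    One way to narrow this down (a sketch, not from the original question): if the three-way handshake completes but a client login still fails, the problem sits above TCP - at the TDS login or database level - rather than in the network. A minimal C# probe run from any reachable machine would show which it is; the connection string below is hypothetical and must be adapted:

        using System;
        using System.Data.SqlClient;

        class ConnectionProbe
        {
            static void Main()
            {
                // Hypothetical connection string -- substitute the real server, database and credentials.
                var cs = "Server=1.2.6.133,1433;Database=master;User Id=probe;Password=probe;Connect Timeout=5";
                try
                {
                    using (var cn = new SqlConnection(cs))
                    {
                        // Open() performs the TCP connect plus the TDS login handshake.
                        cn.Open();
                        Console.WriteLine("Login succeeded: {0}", cn.ServerVersion);
                    }
                }
                catch (SqlException ex)
                {
                    // A SqlException here, despite a clean handshake in Wireshark,
                    // points at login/auth/database state rather than firewalling.
                    Console.WriteLine("TDS-level failure: {0} (Number {1})", ex.Message, ex.Number);
                }
            }
        }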

  • ADO.NET Fill method not throwing an error when running a stored procedure that does not exist.

    - by Mike
    I am using a combination of the Enterprise Library and the original Fill method of ADO.NET. This is because I need to open and close the command connection myself, as I am capturing the InfoMessage event. Here is my code so far:

        // Set up command
        SqlDatabase db = new SqlDatabase(ConfigurationManager.ConnectionStrings[ConnectionName].ConnectionString);
        SqlCommand command = db.GetStoredProcCommand(StoredProcName) as SqlCommand;
        command.Connection = db.CreateConnection() as SqlConnection;

        // Set up events for logging
        command.StatementCompleted += new StatementCompletedEventHandler(command_StatementCompleted);
        command.Connection.FireInfoMessageEventOnUserErrors = true;
        command.Connection.InfoMessage += new SqlInfoMessageEventHandler(Connection_InfoMessage);

        // Add parameters
        foreach (Parameter parameter in Parameters)
        {
            db.AddInParameter(command, parameter.Name,
                (System.Data.DbType)Enum.Parse(typeof(System.Data.DbType), parameter.Type),
                parameter.Value);
        }

        // Use the old-style Fill to keep the connection open throughout the population
        // and manage the StatementCompleted and InfoMessage events
        SqlDataAdapter da = new SqlDataAdapter(command);
        DataSet ds = new DataSet();

        // Open connection
        command.Connection.Open();

        // Populate
        da.Fill(ds);

        // Dispose of the adapter
        if (da != null)
        {
            da.Dispose();
        }

        // If you do not explicitly close the connection here, it will leak!
        if (command.Connection.State == ConnectionState.Open)
        {
            command.Connection.Close();
        }

    Now if I pass in StoredProcName = "ThisProcDoesNotExists" and run this piece of code, neither the command creation nor da.Fill throws an error. Why is this? The only way I can tell it did not run is that it returns a DataSet with 0 tables in it. When investigating the error, it is not apparent that the procedure does not exist.

    EDIT: Upon further investigation, command.Connection.FireInfoMessageEventOnUserErrors = true; is causing the error to be suppressed into the InfoMessage event. From BOL:

        When you set FireInfoMessageEventOnUserErrors to true, errors that were previously treated as exceptions are now handled as InfoMessage events. All events fire immediately and are handled by the event handler. If FireInfoMessageEventOnUserErrors is set to false, then InfoMessage events are handled at the end of the procedure.

    What I want is each PRINT statement from SQL to create a new log record. Setting this property to false combines it all into one big string. So if I leave the property set to true, the question now is: can I discern a print message from an error?

    ANOTHER EDIT: So now I have the flag set to true and I am checking the error number in the handler:

        void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
        {
            // These are not really errors unless Number > 0;
            // if Number == 0 it is a print message
            foreach (SqlError sql in e.Errors)
            {
                if (sql.Number == 0)
                {
                    Logger.WriteInfo("Sql Message", sql.Message);
                }
                else
                {
                    // Whatever this was, it was an error
                    throw new DataException(String.Format("Message={0},Line={1},Number={2},State={3}",
                        sql.Message, sql.LineNumber, sql.Number, sql.State));
                }
            }
        }

    The issue now is that when I throw the error, it does not bubble up to the statement that made the call, or even to the error handler above that. It just bombs out on that line. The populate looks like:

        // Populate
        try
        {
            da.Fill(ds);
        }
        catch (Exception e)
        {
            throw new Exception(e.Message, e);
        }

    Now even though I see the calling code and methods still in the call stack, this exception does not seem to bubble up?
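
    One workaround (a sketch, not from the original post): rather than throwing from inside the InfoMessage handler - where the exception never surfaces on the caller's stack during Fill - collect the real errors in the handler and throw after Fill returns. Console.WriteLine stands in for the post's Logger helper here:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static class DeferredInfoMessageHandling
        {
            static readonly List<SqlError> DeferredErrors = new List<SqlError>();

            // Wire this up as the InfoMessage handler instead of throwing inside it.
            public static void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
            {
                foreach (SqlError err in e.Errors)
                {
                    if (err.Number == 0)
                        Console.WriteLine("Sql Message: " + err.Message); // PRINT output; swap in the post's Logger
                    else
                        DeferredErrors.Add(err); // real error: remember it, don't throw here
                }
            }

            // Call this right after da.Fill(ds) so the exception surfaces on the caller's stack.
            public static void ThrowIfErrors()
            {
                if (DeferredErrors.Count == 0) return;
                SqlError first = DeferredErrors[0];
                throw new DataException(string.Format(
                    "Message={0},Line={1},Number={2},State={3}",
                    first.Message, first.LineNumber, first.Number, first.State));
            }
        }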

  • Dynamic connection for LINQ to SQL DataContext

    - by Steve Clements
    If for some reason you need to specify a specific connection string for a DataContext, you can of course pass the connection string when you initialise your DataContext object. A common scenario could be a dev/test/stage/live connection string, but in my case it's for either a live or an archive database.

    I however want the connection string to be handled by the DataContext. There are probably lots of different reasons someone would want to do this, but here are mine:

    - I want the same connection string for all instances of DataContext, but I don't know what it is yet!
    - I prefer the clean code and ease of not using a constructor parameter.
    - The refactoring of using a constructor parameter could be a nightmare.

    So my approach is to create a new partial class for the DataContext and handle the empty constructor in there. First, from within the LINQ to SQL designer, I changed the connection property to None. This will remove the empty constructor code from the auto-generated designer.cs file. Right-click on the .dbml file, click View Code, and a file and class are created for you! You'll see the new class in Solution Explorer and the file will open. We are going to be playing with constructors, so you need to add the inheritance from System.Data.Linq.DataContext:

        public partial class DataClasses1DataContext : System.Data.Linq.DataContext
        {
        }

    Add the empty constructor; I have also added a property that will get my connection string. You will have whatever logic you need to decide and fetch the connection string you require. In my case I will be hitting a database, but I have omitted that code:

        public partial class DataClasses1DataContext : System.Data.Linq.DataContext
        {
            // Connection string keys - stored in web.config
            static string LiveConnectionStringKey = "LiveConnectionString";
            static string ArchiveConnectionStringKey = "ArchiveConnectionString";

            protected static string ConnectionString
            {
                get
                {
                    if (DoIWantToUseTheLiveConnection)
                    {
                        return global::System.Configuration.ConfigurationManager.ConnectionStrings[LiveConnectionStringKey].ConnectionString;
                    }
                    else
                    {
                        return global::System.Configuration.ConfigurationManager.ConnectionStrings[ArchiveConnectionStringKey].ConnectionString;
                    }
                }
            }

            public DataClasses1DataContext() :
                base(ConnectionString, mappingSource)
            {
                OnCreated();
            }
        }

    Now when I new up my DataContext, I can just leave the constructor empty and my partial class will decide which connection I need to use. Nice, clean code that can be easily refactored and tested.
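
    For illustration, calling code then stays oblivious to which database it hits - a minimal sketch (the Customers table is hypothetical, not part of the post's model):

        // The empty constructor resolves the live/archive connection string internally.
        using (var db = new DataClasses1DataContext())
        {
            var customers = from c in db.Customers   // hypothetical table, for illustration only
                            select c;
        }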

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an explain plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator.

    That behavior means that if you choose a degree of parallelism of 4 to run such a query with, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true; the above is why you see 2 times the processes of the DOP.

    In a PWJ plan the consumers are not present. Instead of producing rows and handing them to a different set of processes, a PWJ uses only a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX process count and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot...

    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general. You can see her talk about explaining the explain plan and other optimizer stuff:

    - At ODTUG in Washington DC, June 27 - July 1
    - On the Optimizer blog
    - At OpenWorld in San Francisco, September 19 - 23

    Happy joining, and hope to see you all at ODTUG and OOW...

  • Data Modeling Resources

    - by Dejan Sarka
    You can find many different data modeling resources. It is impossible to list all of them. I selected only the most valuable ones for me and, of course, the ones I contributed to.

    Books

    - Chris J. Date: An Introduction to Database Systems - IMO a "must" to understand the relational model correctly.
    - Terry Halpin, Tony Morgan: Information Modeling and Relational Databases - meet the object-role modeling leaders.
    - Chris J. Date, Nikos Lorentzos and Hugh Darwen: Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL - all the theory needed to manage temporal data.
    - Louis Davidson, Jessica M. Moss: Pro SQL Server 2012 Relational Database Design and Implementation - the best SQL Server focused data modeling book I know, by two of my friends.
    - Dejan Sarka, et al.: MCITP Self-Paced Training Kit (Exam 70-441): Designing Database Solutions by Using Microsoft® SQL Server™ 2005 - SQL Server 2005 data modeling training kit. Most of the text is still valid for SQL Server 2008, 2008 R2, 2012 and 2014.
    - Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, Steve Kass: Inside Microsoft SQL Server 2008 T-SQL Querying - Steve wrote a chapter with mathematical background, and I added a chapter with a theoretical introduction to the relational model.
    - Itzik Ben-Gan, Dejan Sarka, Roger Wolter, Greg Low, Ed Katibah, Isaac Kunen: Inside Microsoft SQL Server 2008 T-SQL Programming - I added three chapters with a theoretical introduction and practical solutions for user-defined data types, dynamic schema and temporal data.
    - Dejan Sarka, Matija Lah, Grega Jerkic: Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012 - my first two chapters are about data warehouse design and implementation.

    Courses

    - Data Modeling Essentials - a 3-day course I wrote for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well.
    - Logical and Physical Modeling for Analytical Applications - an online course I wrote for Pluralsight.
    - Working with Temporal data in SQL Server - my latest Pluralsight course, where besides theory and implementation I introduce many original ways to optimize temporal queries.

    Forthcoming presentations

    - SQL Bits 12, July 17th - 19th, Telford, UK - I have a full-day pre-conference seminar, Advanced Data Modeling Topics, there.

  • How to prevent ‘Select *’: The elegant way

    - by Dave Ballantyne
    I’ve been doing a lot of work with the “Microsoft SQL Server 2012 Transact-SQL Language Service” recently; see my post here and article here for more details on its use. An obvious use is to interrogate SQL scripts to enforce our coding standards. In the SQL world a no-brainer is SELECT *. All apologies must now be given to Jorge Segarra and his post “How To Prevent SELECT * The Evil Way”, as this is a blatant rip-off IMO; the only true way to check for this particular evilness is to parse the SQL as if we were SQL Server itself. The parser mentioned above is, pretty much, the best tool for doing this. So without further ado, let’s have a look at a PowerShell script that does exactly that:

        cls
        #Load the assembly
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null
        $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions
        $ParseOptions.BatchSeparator = 'GO'

        #Create the object
        $Parser = new-object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions)
        $SqlArr = Get-Content "C:\scripts\myscript.sql"
        $Sql = ""
        foreach($Line in $SqlArr){
            $Sql+=$Line
            $Sql+="`r`n"
        }
        $Parser.SetSource($Sql,0)
        $Token=[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET
        $IsEndOfBatch = $false
        $IsMatched = $false
        $IsExecAutoParamHelp = $false
        $Batch = ""
        $BatchStart =0
        $Start=0
        $End=0
        $State=0
        $SelectColumns=@();
        $InSelect = $false
        $InWith = $false;
        while(($Token = $Parser.GetNext([ref]$State ,[ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp )) -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF)
        {
            $Str = $Sql.Substring($Start,($End-$Start)+1)
            try{
                ($TokenPrs =[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null
                #Write-Host $TokenPrs
                if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){
                    $InSelect =$true
                    $SelectColumns+=""
                }
                if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_FROM){
                    $InSelect =$false
                    #Write-Host $SelectColumns -BackgroundColor Red
                    foreach($Col in $SelectColumns){
                        if($Col.EndsWith("*")){
                            Write-Host "select * is not allowed"
                            exit
                        }
                    }
                    $SelectColumns =@()
                }
            }catch{
                #$Error
                $TokenPrs = $null
            }
            if($InSelect -and $TokenPrs -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){
                if($Str -eq ","){
                    $SelectColumns+=""
                }else{
                    $SelectColumns[$SelectColumns.Length-1]+=$Str
                }
            }
        }

    OK, I’m not going to pretend that it’s the prettiest of PowerShell scripts, but if our parsed script file “C:\Scripts\MyScript.SQL” contains SELECT * then “select * is not allowed” will be written to the host. So, where can this go wrong? It can’t, or at least shouldn’t, go wrong, but it is lacking in functionality. IMO, SELECT * should be allowed in CTEs, views and inline table-valued functions at least, and as it stands they will be reported upon. Anyway, it is a start and is more reliable than other methods.

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables on separate databases. One of them has the information on all the SMS that passed through the company's servers, while the other one has the information on the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user, for whatever reason that may be happening.

    The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds.

    I have, so far, two alternatives to do this:

    1. (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both of my temporary tables and start comparing one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up using a dblink.

    2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and check if timeX is on that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1).

    My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is a better option?
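
    For illustration, a minimal sketch of alternative 2 - written in C# rather than C++ purely for brevity. The file names, the "number time" line format and the epoch-seconds timestamps are all assumptions, and it includes the 1-2 second tolerance mentioned above:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class SmsReconciler
        {
            static void Main()
            {
                // phone number -> list of billed send times (seconds since epoch)
                var billed = new Dictionary<string, List<long>>();
                foreach (var line in File.ReadLines("billing_sample.txt"))   // hypothetical dump file
                {
                    var parts = line.Split(' ');
                    List<long> times;
                    if (!billed.TryGetValue(parts[0], out times))
                        billed[parts[0]] = times = new List<long>();
                    times.Add(long.Parse(parts[1]));
                }

                // Walk the traffic dump; an SMS counts as charged if some billed time
                // for the same number is within 2 seconds of its send time.
                foreach (var line in File.ReadLines("traffic_sample.txt"))   // hypothetical dump file
                {
                    var parts = line.Split(' ');
                    long sentAt = long.Parse(parts[1]);
                    List<long> times;
                    bool charged = billed.TryGetValue(parts[0], out times)
                                   && times.Any(t => Math.Abs(t - sentAt) <= 2);
                    if (!charged)
                        Console.WriteLine("Uncharged: {0} at {1}", parts[0], sentAt); // candidate for the "uncharged" table
                }
            }
        }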

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
                [Period] ASC,
                [PlacementID] ASC,
                [CreativeID] ASC,
                [PublisherID] ASC,
                [RequestedZoneID] ASC,
                [AboveFold] ASC,
                [CountryCode] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    And now some assumptions and additional information:

    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard (e.g. US) and there is a country table with an integer ID already. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size (size figure not reproduced here).

    Design Goals: This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor: There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.

  • Looking for recommendations for a server-side newsletter program

    - by Sparky672
    Hello. I'm currently using a server-side SQL-based mailing list program called Php-List on multiple sites and it works fairly well. But installation and setup are quite cumbersome and quirky, and the interface is not well organized... neither is the code, with pieces all over the place in random fashion. Customizing the look & feel and full site integration are both tedious and painful. Upgrading the version is made more complex since multiple edits need to be manually transferred each time. Also, probably due to a poor English translation, descriptions and instructions within certain areas of the user interface are contradictory and unclear. You just have to play with it and remember what you did last time it worked.

    It's supposed to be so my customers can send out their own newsletters... after supplying a written tutorial, about half of them seem to stumble through it okay and the other half just hire me to do it for them. So it is not quite easy enough for most average people to use. I'm looking for something that's as easy for them as using a blog or discussion forum. It also must be easier to set up and integrate into a site than Php-List. I have no problem getting dirty and writing CSS or HTML by hand, nor do I have any problem editing the program code.

    Perhaps what I'm looking for is a solution that is more organized, has a better GUI, and is template- or "skin"-based. Then, if I spend many hours customizing a skin, I can simply update the program and re-use my custom skin without having to reproduce the tedious setup over and over. (I currently maintain a list of about 25 things I must manually edit or add to multiple files in multiple directories each time I install or upgrade Php-List.)

    A great example of what I'm looking for is very much like WordPress or phpBB. They're both easy to install and customize, yet powerful and packed full of features. They're also VERY well organized, making customization less painful. So, enough yammering for now... does anyone know of something, besides Php-List, with many of the same features as Php-List: maintaining a mailing list with a server-side database, custom sign-up pages, automatic opt-in/opt-out, allowing custom HTML newsletter templates, etc.? Thank-you!

  • ASP.Net application can no longer write to DB after having run out of disk space

    - by remi.despres-smyth
    I'm a software developer troubleshooting a sticky problem on a client's production server, and I've got a bit of a problem. They have a virtual server running Windows Server 2008, SQL Server 2008 R1 and IIS7. It was provisioned with two partitions: one that has the OS (~15 GB), and the other has IIS' web sites (another ~15 GB).

    My application running on this server had been working perfectly well, up until about an hour ago, when it started throwing System.IO.IOException: "There is not enough space on disk". As soon as my client notified me, I cleared up some space on C:\, emptied the recycle bin, and restarted SQL Server and IIS. The web server came back up and the application was running, but it no longer saves information to the database. No error message is coming up; the application can get information out of the DB, but it can no longer save data back to it. I rebooted the server, to no effect.

    I spoke with a sysadmin at the hosting company, and he says SQL Server appears to have come up fine and the database is not in read-only mode. I confirmed that, as I can add records to tables from SQL Server Management Studio. I looked at the event log immediately after trying to save an edited record in the app, and no new events appear in there that I can tell.

    I'm assuming this is related to having run out of space, as it was all working fine prior to that, but I'm at a bit of a loss as to what exactly needs a kick in the pants to get going again. Can anyone help me out? What the heck is going on here?

  • SQL Developer cannot establish connection to Oracle DB with listener running

    - by lostinthebits
    I am working from home and connected to my work's VPN. I have tried to connect to the work DB with SQL Developer (the latest version and the previous version) in the following environments:

    - Mac OS X 10.8.5, with SQL Developer installed and launched directly on the iMac
    - SQL Developer installed and launched in a VM on the same computer (guest Ubuntu 12.04 LTS)
    - SQL Developer installed and launched in a VM on the same computer (guest Windows 7.0 Professional)

    I get: Status: Failure - Test Failed: IO Error - The Network Adapter could not establish the connection.

    I have read DBA forums and googled, and the most common suggestion is that the Oracle listener is not up and running. I can conclusively say this is not the case, because I have the option of using remote desktop and accessing the Oracle DB in question from my work computer. If the listener were down, according to my DBA, no one would be able to connect. My sysadmin and DBA are stumped, so I assume it is something unique to my home system.

    The reason I do not want to continue with the remote desktop workaround is that remote desktop has an annoying (often infuriating) lag.

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

    - 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    - 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - the management app)
    - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we have Windows Firewall disabled already).

    The above is the initial plan we have for 3000-4000 (Windows) client workstations. Now my questions :-)

    a) If we had these users distributed between two sites with an AD DC/GC in each site, how would I restrict the SEPM and desktop management solutions to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?
    c) What hardware would you recommend as a server spec for the SQL servers - 16 GB RAM, dual Xeon?
    d) What hardware would you recommend as a server spec for the management servers - 16 GB RAM each, with dual Xeon and SAS disks?
    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management servers)?
    f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and one spare DAS (Dell MD3000).

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections. Many thanks! Rihatum

  • Failed to generate a user instance of SQL Server

    - by Goondocks
    I'm using Windows 7 Beta and trying to install a web application locally. This web site uses Microsoft SQL Server 2005 Express (SQLEXPRESS) and an MDF file in the web site's ~/App_Data folder. I was instructed to configure IIS7 to use the Classic .NET AppPool for this web application. Each time the web site loads, I receive the following error:

        There was an error trying to connect to the Database Server: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed.

    The Internet is packed with articles written on this subject. The prevailing wisdom seems to be:

    1. Configure the SQL Express service to use the Local System account.
    2. Delete the following directory: C:\Users\username\AppData\Microsoft\Microsoft SQL Server Data\SQLEXPRESS

    Neither of these fixes has made any impact. I have tinkered with permissions and settings for hours to no avail. Can anyone suggest a fix, or help me understand how to get more detailed information about the problem?

  • Fixing a typo in machine name

    - by justSteve
    When I installed Windows I had a typo in the machine name, which I corrected from the system's 'Computer Name/Domain Changes' dialog - the workstation is a member of a workgroup, not a domain. From everything I can see, the renamed machine name is correct.

    Shift gears: I'm importing SQL logins from my remote server to this, my development workstation, and have used the script presented here - a script that generates a CREATE statement for each login found. While I was preparing to run this script's output (from the remote box), I needed to change the domain name from the remote machine's to my local machine's name - so I ran the same script locally (in order to see what SQL thinks my domain name is). SQL has the original machine name - the one with the typo. The scripts toss errors if I try to create logins with that identifier:

        CREATE LOGIN [Setve\Admin] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

    But it works correctly if I use the updated machine name:

        CREATE LOGIN [Steve\Admin] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

    So the problem is: do I have a problem I need to solve? Somewhere, deep in the guts of SQL Server, it has a record of a machine name that does not exist. Should I find and fix that discrepancy? thx

  • How to start MSSQL Server with corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, when trying to create a new database, MSSQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run".

    As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100 GB of free space). We tried restarting the machine MSSQL Server is running on; when the server came back online, we received the same error.

    We have tried deleting tempdb.mdf and restoring the model db from the templates folder, but neither of these solved the issue. We are unable to connect to the database, even in single-user mode. Many of the online solutions have us running SQL commands against the server, but we are unable to connect (even in single-user mode) to run commands against it.

    Specific error messages:

    - Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927)
    - The SQL Server (MSSQLSERVER) service is starting. The SQL Server (MSSQLSERVER) service could not be started. A service specific error occurred: 1814.

    We need the server up and running again ASAP.

  • Server periodically freezing - Help Stabilizing

    - by JonDog
    We run an ASP.NET / SQL Server data collection website with a handful of clients dumping data in and running reports. We moved to a new server (specs below) and have had issues with it freezing and needing a reboot a dozen times over the past six months. The hosting company has mentioned possible causes (listed below) but can't give a definite answer on what is going wrong. They have offered to reconfigure however I like. We have benefited from having a much faster system and really don't want to get rid of the SSDs unless they are the issue. Two possible setup changes that I've talked with them about are also listed below.

    Any suggestions on what may be causing the freezing, as well as suggestions on a new setup, would be great. My main questions are: do SSDs generally have problems running the OS and SQL Server on the same RAID array? And are the new SSDs still unrefined enough that they shouldn't be running in a production environment? Thanks.

    Current:
    - Xeon Quad Core E3-1270 3.40 GHz
    - 16 GB DDR3-1333 ECC SDRAM
    - First hard drive: 120 GB Intel SSD
    - Second hard drive: 120 GB Intel SSD
    - Third hard drive: 120 GB Intel SSD
    - Fourth hard drive: 120 GB Intel SSD
    - SAS 4-port RAID card
    - Windows 2012 Standard Edition - 64-bit
    - MSSQL 2008 Web Edition

    Possible causes:
    - Running SQL Server and the OS on the same RAID array
    - OS software issues
    - Using SSDs
    - CPU underpowered
    - Not enough RAM

    Option 1:
    - 2 x Xeon Quad Core E5-2603 1.80 GHz
    - 16 GB DDR3-1333 ECC SDRAM
    - 1 x 240 GB Intel SSD - OS
    - 3 x 1 TB SATA HDD (7200 RPM) - SQL Server
    - SATA 4-port RAID card
    - Windows 2012 Standard Edition - 64-bit

    Option 2:
    - Dell PowerEdge E3-1270v2 3.5 GHz, 4 cores
    - 16 GB DDR3-1600 UDIMM
    - 4 x 128 GB Samsung 840 Pro SSD
    - Add-in H200 (SAS/SATA controller), 4 hard drives - RAID 10
    - Windows 2012 Standard Edition - 64-bit

  • The EntitySet name xxx could not be found.

    - by adamjellyit
    I created a simple table in SQL Server: an integer key field, 4 strings and a Timestamp. This table is called Event, which pluralizes to Events (the checkbox was ticked). I ran the Entity Model builder in VS2010, adding this table only.

        EntityModelXXX x = new EntityModelXXX();
        // create an object 'e' here
        x.AddToEvents(e);

    This produces an error: The EntitySet name 'EntityModelXXX.Events' could not be found. It doesn't seem to find the set - why?
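
    One way to investigate (a sketch, not from the original question): dump the entity set names the model actually contains and compare them against the name in the error. All of the API calls below are standard EF 4 metadata calls:

        using System;
        using System.Data.Metadata.Edm;

        class EntitySetDump
        {
            static void Main()
            {
                using (var ctx = new EntityModelXXX())
                {
                    // Ask the conceptual model which sets the container really exposes.
                    var container = ctx.MetadataWorkspace.GetEntityContainer(
                        ctx.DefaultContainerName, DataSpace.CSpace);
                    foreach (var set in container.BaseEntitySets)
                        Console.WriteLine(set.Name); // e.g. "Events" -- or whatever the model actually holds
                }
            }
        }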

  • Compile error with Nested Classes

    - by ProfK
    I have metadata classes nested within entity classes, as I have always done, but suddenly, when I deploy to my target web site and try to view a page, I get e.g. the following compile error:

        CS0102: The type 'PvmmsModel.ActivationResource' already contains a definition for 'ActivationResourceMetadata'

    My code for this type looks like the following. There is only one definition of ActivationResourceMetadata:

        namespace PvmmsModel
        {
            [DisplayName("Activation Resources")]
            [DisplayColumn("Name")]
            [MetadataType(typeof(ActivationResourceMetadata))]
            public partial class ActivationResource
            {
                public class ActivationResourceMetadata
                {
                    [ScaffoldColumn(false)]
                    public object ResourceId { get; set; }

                    [DisplayName("Cell Phone")]
                    public object CellPhone { get; set; }

                    [DisplayName("Shifts Worked or Planned")]
                    public object ActivationShifts { get; set; }
                }
            }
        }

    This is on an ASP.NET Web Site project.

  • WPF DataGrid duplicates new row when new item is attached to the source collection.

    - by Shimmy
        <Page>
            <Page.Resources>
                <data:Quote x:Key="Quote"/>
            </Page.Resources>
            <tk:DataGrid DataContext="{Binding Quote}" ItemsSource="{Binding Rooms}">
            </tk:DataGrid>
        </Page>

    Code:

        Private Sub InitializingNewItem _
            (sender As DataGrid, _
             ByVal e As InitializingNewItemEventArgs) _
            Handles dgRooms.InitializingNewItem

            Dim room = DirectCast(e.NewItem, Room) 'Room is a subclass of EntityObject
            Dim state = room.EntityState 'Detached
            Dim quote = Resources("Quote")
            state = quote.EntityState 'Unchanged

            'either one of these lines causes the new row to be duplicated:
            quote.Rooms.Add(room)
            room.Quote = quote

            'I tried: sender.Items.Refresh
            'I also tried to remove the detached entity from the DataGrid and create a
            'new item, but they throw exceptions saying that Items is untouchable.
        End Sub

  • WCF Data Services implementation strategies.

    - by Nix
    Microsoft has done a savvy job of not outlining the actual place of Data Services in the wonderful world of SOA / web dev. So my question is simple: are WCF Data Services designed to be used via clients? Or has anyone ever heard of someone using them on the server side?

    A simple scenario - a general layered architecture using BO business objects (parentheses indicate what is being passed between layers):

        (XML) WCF Service -> (BO) Business Logic -> (BO) DAO -> Entity Framework

    Or, using Data Services, it would be (where DS BO are modeled business entities to be used in the data service):

        (XML) WCF Service -> (BO) Business Logic -> (BO) WCF Data Service -> (DS BO) Server

    I can't see a use for the latter, unless there are going to be a lot of cases where people access your data via your data service layer rather than the service layer. Thoughts, anyone? I have not seen any mention of using DS from within a service layer...

  • L2E many to many query

    - by 5YrsLaterDBA
    I have four tables:

        Users                  PrivilegeGroups          rdPrivileges      LinkPrivilege
        -----------            ----------------         ---------------   ---------------
        userId(pk)             privilegeGroupId(pk)     privilegeId(pk)   privilegeId(pk, fk)
        privilegeGroupId(fk)   name                     code              privilegeGroupId(pk, fk)

    L2E will not create a LinkPrivilege entity for me, so we only have Users, PrivilegeGroups and rdPrivileges entities. PrivilegeGroups and rdPrivileges are in a many-to-many relationship. What I need to do is retrieve all code values from the rdPrivileges table based on a passed-in userId. How can I do it?

    EDIT: working code:

        var acc = from u in db.Users
                  from pg in db.PrivilegeGroups
                  from p in pg.rdPrivileges
                  where u.UserId == userId
                        && u.PrivilegeGroups.PrivilegeGroupId == pg.PrivilegeGroupId
                  select p.Code;
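
    For what it's worth, a slightly tighter sketch of the same query, navigating directly through the user's group instead of cross-joining - this assumes the same navigation property names as the working code above (u.PrivilegeGroups is the singular reference from Users to PrivilegeGroups):

        // Navigate Users -> PrivilegeGroups -> rdPrivileges and project the codes.
        var codes = from u in db.Users
                    where u.UserId == userId
                    from p in u.PrivilegeGroups.rdPrivileges
                    select p.Code;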

  • How to avoid "The name 'ConfigurationManager' does not exist in the current context" error?

    - by 5YrsLaterDBA
    I am using VS2008. I have a project that connects to a database; the connection string is read from App.config via ConfigurationManager. We are using L2E. Now I have added a helper project, AndeDataViewer, to have a simple UI for displaying data from the database for testing/verification purposes. I don't want to create another set of Entity Data Models in the helper project, so I just added all the related files as links in the new helper project. When I compile, I get the following error:

        Error 15 The name 'ConfigurationManager' does not exist in the current context C:\workspace\SystemSoftware\SystemSoftware\src\systeminfo\RuntimeInfo.cs 24 40 AndeDataViewer

    I think I may need to add a link to another project-settings/config-related file from the main project to the helper project? There is no App.config file in the new helper project, but it looks like I cannot add that file as a link to the helper project. Any ideas?
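
    For what it's worth, this particular compile error usually just means the new project is missing an assembly reference: ConfigurationManager lives in System.Configuration.dll, which projects do not reference by default (even though much of the System.Configuration namespace ships in System.dll). A minimal sketch, with a hypothetical key name:

        // Requires: Project -> Add Reference -> .NET -> System.Configuration
        using System.Configuration;

        static class ConnectionStrings
        {
            public static string Get(string name)
            {
                // Linked source files compile into the referencing project, so at runtime
                // this reads the App.config of AndeDataViewer, not the original project's -
                // the helper project needs its own App.config with the same entries.
                return ConfigurationManager.ConnectionStrings[name].ConnectionString;
            }
        }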

  • EntityFramework EntityState and databinding along with INotifyPropertyChanged

    - by OffApps Cory
    Hello, all! I have a WPF view that displays a Shipment entity. It contains a TextBlock with an asterisk that should alert the user that the record is changed but unsaved. I originally hoped to bind the visibility of this (with a converter) to the Shipment.EntityState property:

        If value = EntityState.Modified Then
            Return Visibility.Visible
        Else
            Return Visibility.Collapsed
        End If

    The property gets updated just fine, but the view is ignorant of the change. What I need to know is: how can I get the UI to receive notification of the property change? If this cannot be done, is there a good way of writing my own IsDirty property that handles editing retractions (i.e. if I change the value of a property, then change it back to its original, it does not get counted as an edit and the state remains Unchanged)?

    Any help, as always, will be greatly appreciated. Cory
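
    A sketch of one possible approach (not from the original post; TrackDirtyState and ValuesDiffer are made-up helper names, and the partial class is assumed to sit in the entity's namespace): since EntityState raises no change notification, expose a notifying IsDirty on the entity and recompute it whenever any property changes, comparing original to current values so edits typed back to the original count as "clean" again:

        using System.ComponentModel;
        using System.Data;
        using System.Data.Objects;

        public partial class Shipment // EF 4 EntityObject entities already implement INotifyPropertyChanged
        {
            private bool _isDirty;

            public bool IsDirty
            {
                get { return _isDirty; }
                private set
                {
                    if (_isDirty == value) return;
                    _isDirty = value;
                    OnPropertyChanged("IsDirty"); // StructuralObject helper; raises PropertyChanged for the binding
                }
            }

            // Call once after the entity is loaded/attached, passing its owning context.
            public void TrackDirtyState(ObjectContext context)
            {
                PropertyChanged += delegate(object sender, PropertyChangedEventArgs e)
                {
                    if (e.PropertyName == "IsDirty") return; // avoid recursion
                    ObjectStateEntry entry = context.ObjectStateManager.GetObjectStateEntry(this);
                    IsDirty = entry.State == EntityState.Modified && ValuesDiffer(entry);
                };
            }

            private static bool ValuesDiffer(ObjectStateEntry entry)
            {
                // Field-by-field comparison handles the "edited back to original" case
                for (int i = 0; i < entry.OriginalValues.FieldCount; i++)
                {
                    if (!object.Equals(entry.OriginalValues[i], entry.CurrentValues.GetValue(i)))
                        return true;
                }
                return false;
            }
        }

    The asterisk's Visibility can then bind to IsDirty through the stock BooleanToVisibilityConverter instead of a hand-rolled EntityState converter.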

  • EF 4.0 - Many to Many relationship - problem with deletes

    - by chugh97
    My entity model is as follows: Person, Store, and a PersonStores many-to-many child table storing (PersonId, StoreId).

    When I get a person as in the code below and try to delete all the StoreLocations, it deletes them from PersonStores as intended, but also deletes them from the Store table, which is undesirable. Also, if I have another person with the same StoreId, it fails with "The DELETE statement conflicted with the REFERENCE constraint \"FK_PersonStores_StoreLocations\". The conflict occurred in database \"EFMapping2\", table \"dbo.PersonStores\", column 'StoreId'.\r\nThe statement has been terminated", as it was trying to delete the StoreId, but that StoreId was used for another PersonId, hence the exception.

        Person p = null;
        using (ClassLibrary1.Entities context = new ClassLibrary1.Entities())
        {
            p = context.People.Where(x => x.PersonId == 11).FirstOrDefault();
            List<StoreLocation> locations = p.StoreLocations.ToList();
            foreach (var item in locations)
            {
                context.Attach(item);
                context.DeleteObject(item);
                context.SaveChanges();
            }
        }
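
    A sketch of the usual fix (general EF 4 behavior, not an answer from the post): for a pure many-to-many, delete the relationship rather than the entity. Removing a store from the navigation collection deletes only the PersonStores join row, leaving StoreLocations untouched and avoiding the FK conflict:

        // Mirrors the post's setup; same context and entity names as above.
        using (var context = new ClassLibrary1.Entities())
        {
            var p = context.People.Where(x => x.PersonId == 11).FirstOrDefault();
            foreach (var store in p.StoreLocations.ToList())
            {
                p.StoreLocations.Remove(store); // marks only the relationship for deletion
            }
            context.SaveChanges(); // issues DELETEs against PersonStores, not StoreLocations
        }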
