Search Results

Search found 32122 results on 1285 pages for 'ms sql'.


  • SQL Clustering on Hyper-V - is a cluster within a cluster a benefit?

    - by Chris W
    This is a re-hash of a question I asked a while back - after a consultant came in firing ideas into other teams in the department, the whole issue has been raised again, hence I'm looking for more detailed answers.

    We're intending to set up a multi-instance SQL cluster across a number of physical blades, which will run a variety of different systems across each SQL instance. In general use there will be one virtual SQL instance running on each VM host, and in general operation each VM host will run on a dedicated underlying blade. The set-up should give us lots of flexibility for maintenance of any individual VM or underlying blade, with all the SQL instances able to fail over as required.

    My original plan had been to do the following:

    1. Install 2008 R2 on each blade
    2. Add Hyper-V to each blade
    3. Install a 2008 R2 VM on each blade
    4. Within the VMs - create a failover cluster and then install SQL Server clustering.

    The consultant has suggested that we instead do the following:

    1. Install 2008 R2 on each blade
    2. Add Hyper-V to each blade
    3. Install a 2008 R2 VM on each blade
    4. Create a cluster on the HOST machines which will host all the VMs.
    5. Within the VMs - create a failover cluster and then install SQL Server clustering.

    The big difference is the addition of step 4, whereby we cluster the host machines (and therefore the guest VMs) as well. The argument is that it improves maintenance further, since we have no ties at all between the SQL cluster and the physical hardware. In theory we can live migrate the guest VMs around the hosts without affecting the SQL cluster at all, so for routine maintenance of physical blades we can move the SQL cluster around without interruption and without needing to fail over.

    It sounds like a nice idea, but I've not come across anything on the internet where people say they've done this and it works OK. Can I actually do live migrations of the guests without the SQL cluster hosted within them getting upset? Does anyone have any experience of this set-up, good or bad? Are there some pros and cons that I've not considered?

    I appreciate that mirroring is also a valuable option to consider - in this case we're favouring clustering since it covers the whole of each instance and we have a good number of databases. Some DBs are for lumbering 3rd-party systems that may not even work kindly with mirroring (and my understanding of clustering is that failovers are completely transparent to the clients). Thanks.

    Read the article

  • Configure SQL Server Reporting Services to send email

    - by Edoode
    Hi, I'm configuring SQL Server 2005 Reporting Services to send emails outside the domain. I have followed the steps outlined by MS, but have a question: how can I supply a domain user to connect to the Exchange server in the same domain? I've tried <SMTPAccountName>DOMAIN\User</SMTPAccountName> in rsreportserver.config.

    Read the article

  • Why can't I connect to remote Microsoft SQL Server through SSH tunnel?

    - by Alexander
    I have a D-Link DIR-615 C1 router with DD-WRT at home. I set up the SSH server on the router and log on with an SSH2-RSA passphrase-protected key. That router is the gateway between the local network and the internet. One of the computers on that network has Microsoft SQL Server 2008 installed, with the TCP/IP protocol enabled on port 1433. I've set up port forwarding on the router, so remote connections are possible and are, in fact, working (some developers log on remotely without problems).

    I am part of another network that has internet access through a proxy server, which only has ports 80 and 443 open. I can't connect to that MSSQL server directly because port 1433 is closed on this network. So I connected (using PuTTY) through port 443 to my router's SSH server and set up two tunnels. One is for RDP (3389), and it's working. The other is for port 1433, to connect to the SQL Server. I can't connect through the SSH tunnel to the MS SQL Server, neither through telnet nor through GUI clients. Am I missing something?

    Additional details: on connect, I get this error from SQL Server Management Studio:

        TITLE: Connect to Server
        Cannot connect to localhost:14330.
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 3)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=3&LinkId=20476
        BUTTONS: OK

    The tunnel is configured like this: L14330 192.168.0.103:1433. 192.168.0.103 is the permanent address of the SQL Server on the LAN. I also successfully forwarded TCP traffic on port 3389 to that IP, so tunneling to that IP address is working. When connecting without the tunnel, through Microsoft SQL Server Management Studio, using the same method, the connection establishes. Too bad my proxy doesn't allow port 1433 traffic, or I wouldn't have this headache.

    Read the article

  • Changing MS Project to a 20-hour or 30-hour week

    - by Eric
    I'm working on a project in MS Project and the default is a 40-hour week. I'm putting each individual task in based on a number of hours, not days. I'd like to have the whole thing set up and computing at 40-hour weeks, and then change it to 20 hours and have the project recompute. How do I do this? I think it has something to do with changing the "project calendar" but I can't quite figure it out.

    Read the article

  • Restore DB - Error RESTORE HEADERONLY is terminating abnormally

    - by Jordon Willis
    I have taken a backup of a SQL Server 2008 DB on the server and downloaded it to my local environment. I am trying to restore that database and it keeps giving me the following error:

        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        ------------------------------
        ADDITIONAL INFORMATION:
        The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family.
        RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4053&EvtSrc=MSSQLServer&EvtID=3241&LinkId=20476

    I have tried to create a temp DB on the server and restore the same backup file there, and that works. I have also tried downloading the file from the server to my local PC a number of times using different transfer options in FileZilla (Auto, Binary), but it's not working. After that I tried to execute the following command on the server:

        BACKUP DATABASE go4sharepoint_1384_8481
        TO DISK=' C:\HostingSpaces\dbname_jun14_2010_new.bak' with FORMAT

    It is giving me the following error:

        Msg 3201, Level 16, State 1, Line 1
        Cannot open backup device 'c:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Backup\ C:\HostingSpaces\dbname_jun14_2010_new.bak'. Operating system error 123(The filename, directory name, or volume label syntax is incorrect.).
        Msg 3013, Level 16, State 1, Line 1
        BACKUP DATABASE is terminating abnormally.

    After researching I found the following two useful links:

    http://support.microsoft.com/kb/290787
    http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/4d5836f6-be65-47a1-ad5d-c81caaf1044f

    But I am still not able to restore the database correctly. Any help would be much appreciated. Thanks.
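
    A detail worth noting in the BACKUP command quoted above: there is a space between the opening quote and C:\, which is presumably why SQL Server treated the path as relative and prepended its default backup folder (visible in the OS error 123 message). A minimal T-SQL sketch for verifying a backup file and taking a fresh backup, using the file names from the question (adjust paths to your environment):

        -- Inspect the backup file; if these fail, the file is damaged or not a native SQL Server backup
        RESTORE HEADERONLY   FROM DISK = 'C:\go4sharepoint_1384_8481.bak';
        RESTORE FILELISTONLY FROM DISK = 'C:\go4sharepoint_1384_8481.bak';

        -- Take a fresh backup on the server; note there is no space after the opening quote
        BACKUP DATABASE go4sharepoint_1384_8481
        TO DISK = 'C:\HostingSpaces\dbname_jun14_2010_new.bak'
        WITH FORMAT, INIT;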

    Read the article

  • Troubleshooting SQL Azure Connectivity

    - by kaleidoscope
    How to resolve some of the common connectivity error messages that you would see while connecting to SQL Azure:

    - A transport-level error has occurred when receiving results from the server. (Provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
    - System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.
    - An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.
    - Error: Microsoft SQL Native Client: Unable to complete login process due to delay in opening server connection.
    - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

    Some troubleshooting tips:

    a) Verify Azure Firewall settings and service availability
       Reference: SQL Azure Firewall - http://msdn.microsoft.com/en-us/library/ee621782.aspx
    b) Verify that you can reach our Virtual IP
       Reference: Telnet Troubleshooting Guide - http://technet.microsoft.com/en-us/library/cc753360(WS.10).aspx
       Reference: How to Use TRACERT to Troubleshoot TCP/IP Problems in Windows - http://support.microsoft.com/kb/314868
    c) Windows Firewall on the local machine
       Reference: Frequently Asked Questions - http://msdn.microsoft.com/en-us/library/bb736261(VS.85).aspx
       Reference: Windows Firewall with Advanced Security Getting Started Guide - http://technet.microsoft.com/en-us/library/cc748991(WS.10).aspx
    d) Other firewall products
       Reference: http://www.whatismyip.com/
    e) Generate a network trace using the Microsoft Network Monitor tool
       Reference: How to capture network traffic with Network Monitor - http://support.microsoft.com/kb/148942
    f) SQL Azure Denial of Service (DOS) Guard
       SQL Azure utilizes techniques to prevent denial of service attacks. If your connection is getting reset by our service due to a potential DOS attack, you would be able to see a three-way handshake established and then a RESET in your network trace.
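
    For step (a), the firewall rules can also be inspected from a query window once any connection to the server's master database succeeds. A minimal sketch, assuming the sp_set_firewall_rule procedure available in later SQL Azure releases (the rule name and IP range below are placeholders):

        -- Run against the master database of the SQL Azure server
        SELECT name, start_ip_address, end_ip_address
        FROM sys.firewall_rules;

        -- Allow a single client IP (placeholder values)
        EXEC sp_set_firewall_rule
            @name = N'DevWorkstation',
            @start_ip_address = '203.0.113.10',
            @end_ip_address   = '203.0.113.10';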

    Read the article

  • sp_send_dbmail - MS SQL 2008

    - by Nev_Rahd
    sp_send_dbmail works fine on one server, as below:

        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'xxxx Mail Profile',
            @recipients = 'xxx.com',
            @body = 'xxxxxxxx',
            @subject = 'xxx - Please do not reply to this email';

    But on another server, which has the same mail profile set up, it throws an error saying it expects values for the parameters @copy_recipients and @blind_copy_recipients (though these are optional). Where do I need to check? Any help please. Thanks
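
    One way to narrow this down is to supply the optional recipient parameters explicitly and then check Database Mail's own logs on the failing server. A minimal sketch (profile name, addresses and text are placeholders):

        EXEC msdb.dbo.sp_send_dbmail
            @profile_name          = 'xxxx Mail Profile',
            @recipients            = 'someone@example.com',
            @copy_recipients       = NULL,   -- optional; passing NULL makes the intent explicit
            @blind_copy_recipients = NULL,
            @subject               = 'Test - please do not reply',
            @body                  = 'Test message';

        -- Check what Database Mail actually did with the request
        SELECT TOP (20) *
        FROM msdb.dbo.sysmail_event_log
        ORDER BY log_date DESC;

        SELECT TOP (20) mailitem_id, sent_status, send_request_date
        FROM msdb.dbo.sysmail_allitems
        ORDER BY send_request_date DESC;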

    Read the article

  • How to Transform a user's search string into an MS SQL Full-Text Search Phrase

    - by Atomiton
    I've searched for answers to this and I can't seem to find an answer to what should be somewhat simple. This is related to another question I asked, but it's different.

    What's the best way to take a user's search phrase and throw it into a CONTAINSTABLE(table, column, @phrase, topN) call? Say, for example, the user inputs:

        Books by "Dr. Seuss"

    What's the best way to turn that into something that will return results in my CONTAINSTABLE call? I was previously parsing the search phrase and writing something like

        ISABOUT("Books" WEIGHT(1.0), "by" WEIGHT(0.9), "Dr. Seuss" WEIGHT(0.8))

    as my @phrase, but ISABOUT seems to be returning odd results... especially when one-word searches are entered. Any ideas?
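
    A common alternative to ISABOUT is to quote each term or phrase and join them with AND (or OR for looser matching), which handles embedded phrases like "Dr. Seuss" naturally. A minimal sketch against a hypothetical Books table with a full-text indexed Title column (stop words such as "by" are usually dropped before building the condition):

        DECLARE @phrase NVARCHAR(4000);
        -- Built in application code from the user input: Books by "Dr. Seuss"
        SET @phrase = N'"Books" AND "Dr. Seuss"';

        SELECT b.Title, ft.[RANK]
        FROM CONTAINSTABLE(dbo.Books, Title, @phrase, 50) AS ft
        INNER JOIN dbo.Books AS b ON b.BookID = ft.[KEY]
        ORDER BY ft.[RANK] DESC;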

    Read the article

  • Guidelines for using Merge task in SSIS

    - by thursdaysgeek
    I have a table with three fields, one an identity field, and I need to add some new records from a source that has the other two fields. I'm using SSIS, and I think I should use the Merge tool, because one of the sources is not in the local database. But I'm confused by the Merge tool and the proper process.

    I have my one source (an Oracle table), and I get two fields, well_id and well_name, with a sort after, sorting by well_id. I have the destination table (SQL Server), and I'm also using that as a source. It has three fields: well_key (identity field), well_id, and well_name, and I then have a sort task, sorting on well_id. Both of those are input to my merge task. I was going to output to a temporary table, and then somehow get the new records back into the SQL Server table.

        Oracle Well       SQL Well
            |                 |
            V                 V
        Sort Source       Sort Well
            |                 |
            ------> Merge <----
                      |
                      V
              Temp well table

    I suspect this isn't the best way to use this tool, however. What are the proper steps for a merge like this? One of my reasons for questioning this method is that my merge has an error, telling me that "Merge Input 2" must be sorted, but its source is a sort task, so it IS sorted.

    Example data:

        SQL Well (before merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d

        Oracle Well
        well_id  well_name
        123      well k
        292      well c
        311      well y
        344      well t
        439      well d
        532      well j

        SQL Well (after merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d
        6         311      well y
        7         532      well j

    Would it be better to load my Oracle Well to a temporary local file, and then just use a SQL insert statement on it?
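
    On the closing question - landing the Oracle rows in a staging table and finishing with set-based T-SQL is a common pattern and avoids the Merge transformation entirely. A minimal sketch, assuming the destination is a table named dbo.Well and the data flow loads a staging table named dbo.StagingOracleWell (both names are made up; the identity column well_key fills itself in):

        -- Insert only the wells that are not already in the destination
        INSERT INTO dbo.Well (well_id, well_name)
        SELECT s.well_id, s.well_name
        FROM dbo.StagingOracleWell AS s
        WHERE NOT EXISTS (
            SELECT 1
            FROM dbo.Well AS w
            WHERE w.well_id = s.well_id
        );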

    Read the article

  • cascading combo box causing empty fields in next record

    - by glinch
    Hi there,

    I'm having problems with a cascading combo box. Everything works fine with the combo boxes and the values get populated correctly.

        Private Sub cmbAdjComp_AfterUpdate()
            Me.cboAdjOff.RowSource = "SELECT AdjusterCompanyOffice.ID, AdjusterCompanyOffice.Address1, AdjusterCompanyOffice.Address2, AdjusterCompanyOffice.Address3, AdjusterCompanyOffice.Address4, AdjusterCompanyOffice.Address5 FROM" & _
                " AdjusterCompanyOffice WHERE AdjusterCompanyOffice.AdjCompID = " & Me.cmbAdjComp.Column(1) & _
                " ORDER BY AdjusterCompanyOffice.Address1"
            Me.cboAdjOff = Me.cboAdjOff.ItemData(0)
        End Sub

    The secondary combo box has a row source query:

        SELECT AdjusterCompanyOffice.ID, AdjusterCompanyOffice.Address1, AdjusterCompanyOffice.Address2, AdjusterCompanyOffice.Address3, AdjusterCompanyOffice.Address4, AdjusterCompanyOffice.Address5
        FROM AdjusterCompanyOffice
        ORDER BY AdjusterCompanyOffice.Address1;

    Both combo boxes have the same control source. Everything works fine and dandy moving between records, and the boxes show the correct fields for each record. When I use the first combo box and then select the appropriate option in the second combo box, everything works great on the specific record. However, when I move to the next record, the values in the second combo box are all empty. If I close the form and reopen it, and avoid using the cascading combo boxes, all the values are correct when I move between records. Somehow using the cascading combo boxes creates a conflict with the row source of the secondary combo box.

    Hope that is clear! Have been rummaging around for an answer but can't find anything. Any help would be greatly appreciated.

    Thanks
    Noel

    Read the article

  • Is it possible to make a non-nullable column nullable when used in a view? (sql server)

    - by Matt
    Hi,

    To start off, I have two tables, PersonNames and PersonNameVariations. When a name is searched, it finds the closest name to one of the ones available in PersonNames and records it in the PersonNameVariations table if it's not already in there. I am using a stored proc to search PersonNames for a passed-in PersonNameVariation and return the information on both the PersonName found and the PersonNameVariation that was compared to it.

    Since I am using the Entity Framework, I needed to return a complex type in the Import Function, but for some reason it says my current framework doesn't support it. My last option was to use an entity to return from my stored proc instead. The result that I need back is the information on both the PersonName that was found and the PersonNameVariation that was recorded. Since I cannot return both entities, I created a view, PersonSearchVariationInfo, and added it into my Entity Framework model in order to use it as the entity to return.

    The problem is that the search will not always return a PersonName match. It needs to be able to return only the PersonNameVariation data in some cases, meaning that all the fields in PersonSearchVariationInfo pertaining to PersonName need to be nullable. How can I take my view and make some of the fields nullable? When I do it directly in the Entity Framework I get a mapping error:

        Error 4 Error 3031: Problem in mapping fragments starting at line 1202: Non-nullable column myproject_vw_PersonSearchVariationInfo.DateAdded in table myproject_vw_PersonSearchVariationInfo is mapped to a nullable entity property. C:\Users\Administrator\Documents\Visual Studio 2010\Projects\MyProject\MyProject.Domain\EntityFramework\MyProjectDBEntities.edmx 1203 15 MyProject.Domain

    Anyone have any ideas?

    Thanks,
    Matt
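
    One workaround, sketched below, is to build the view with a LEFT JOIN from PersonNameVariations to PersonNames, so that the PersonName columns are reported as nullable in SQL Server's view metadata and the Entity Framework mapping no longer conflicts. This is only a sketch - the column names here are guesses based on the description, not the real schema:

        CREATE VIEW dbo.vw_PersonSearchVariationInfo
        AS
        SELECT
            v.PersonNameVariationID,
            v.Variation,
            v.DateAdded,
            n.PersonNameID,   -- NULL when no PersonName match was recorded
            n.FullName        -- NULL when no PersonName match was recorded
        FROM dbo.PersonNameVariations AS v
        LEFT JOIN dbo.PersonNames AS n
            ON n.PersonNameID = v.PersonNameID;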

    Read the article

  • Why is my mssql query failing?

    - by Eric Reynolds
        connect();
        $arr = mssql_fetch_assoc(mssql_query("SELECT Applications.ProductName, Applications.ProductVersion, Applications.ProductSize, Applications.Description, Applications.ProductKey, Applications.ProductKeyID, Applications.AutomatedInstaller, Applications.AutomatedInstallerName, Applications.ISO, Applications.ISOName, Applications.Internet, Applications.InternetURL, Applications.DatePublished, Applications.LicenseID, Applications.InstallationGuide, Vendors.VendorName FROM Applications INNER JOIN Vendors ON Applications.VendorID = Vendors.VendorID WHERE ApplicationID = ".$ApplicationID));

        $query1 = mssql_query("SELECT Issues.AppID, Issues.KnownIssues FROM Issues WHERE Issues.AppID=".$ApplicationID);
        $issues = mssql_fetch_assoc($query1);

        $query2 = mssql_query("SELECT ApplicationInfo.AppID, ApplicationInfo.Support_Status, ApplicationInfo.UD_Training, ApplicationInfo.AtomicTraining, ApplicationInfo.VendorURL FROM software.software_dbo.ApplicationInfo WHERE ApplicationInfo.AppID = ".$ApplicationID);
        $row = mssql_fetch_assoc($query2);

        function connect(){
            $connect = mssql_connect(DBSERVER, DBO, DBPW) or die("Unable to connect to server");
            $selected = mssql_select_db(DBNAME, $connect) or die("Unable to connect to database");
            return $connect;
        }

    Above is the code. The first query/fetch_assoc works perfectly fine, however the next 2 queries fail and I cannot figure out why. Here is the error output from PHP:

        Warning: mssql_query() [function.mssql-query]: message: Invalid object name 'Issues'. (severity 16) in /srv/www/htdocs/agreement.php on line 47
        Warning: mssql_query() [function.mssql-query]: General SQL Server error: Check messages from the SQL Server (severity 16) in /srv/www/htdocs/agreement.php on line 47
        Warning: mssql_query() [function.mssql-query]: Query failed in /srv/www/htdocs/agreement.php on line 47
        Warning: mssql_fetch_assoc(): supplied argument is not a valid MS SQL-result resource in /srv/www/htdocs/agreement.php on line 48
        Warning: mssql_query() [function.mssql-query]: message: Invalid object name 'software.software_dbo.ApplicationInfo'. (severity 16) in /srv/www/htdocs/agreement.php on line 51
        Warning: mssql_query() [function.mssql-query]: General SQL Server error: Check messages from the SQL Server (severity 16) in /srv/www/htdocs/agreement.php on line 51
        Warning: mssql_query() [function.mssql-query]: Query failed in /srv/www/htdocs/agreement.php on line 51
        Warning: mssql_fetch_assoc(): supplied argument is not a valid MS SQL-result resource in /srv/www/htdocs/agreement.php on line 52

    The error clearly centers around the fact that the query is not executing. In my database I have a table called Issues and a table called ApplicationInfo, so I am unsure why it is telling me that they are invalid objects. Any help would be appreciated.

    Thanks,
    Eric R.
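
    The "Invalid object name" errors mean SQL Server cannot resolve those names in the database the connection is using, so it is worth confirming from Management Studio which database and schema the tables actually live in. A small T-SQL sketch for that check - the software.dbo qualification at the end is only an assumption, adjust it to whatever the first query returns:

        -- Find the tables and the schema they belong to
        SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
        FROM software.INFORMATION_SCHEMA.TABLES
        WHERE TABLE_NAME IN ('Issues', 'ApplicationInfo');

        -- Then qualify the objects accordingly, for example:
        SELECT AppID, Support_Status, UD_Training, AtomicTraining, VendorURL
        FROM software.dbo.ApplicationInfo
        WHERE AppID = 42;  -- placeholder ID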

    Read the article

  • Delphi ADO SQL Syntax Error

    - by pr0wl
    Hello. I am getting a syntax error when processing the following lines of code, specifically on the AQ_Query.Open:

        procedure THauptfenster.Button1Click(Sender: TObject);
        var
          option: TZahlerArray;
        begin
          option := werZahlte;
          AQ_Query.Close;
          AQ_Query.SQL.Clear;
          AQ_Query.SQL.Add('USE wgwgwg;');
          AQ_Query.SQL.Add('INSERT INTO abrechnung ');
          AQ_Query.SQL.Add('(`datum`, `titel`, `betrag`, `waldemar`, `jonas`, `ali`, `ben`)');
          AQ_Query.SQL.Add(' VALUES ');
          AQ_Query.SQL.Add('(:datum, :essen, :betrag, :waldemar, :jonas, :ali, :ben);');
          AQ_Query.Parameters.ParamByName('datum').Value := DateToStr(mcDatum.Date);
          AQ_Query.Parameters.ParamByName('essen').Value := ledTitel.Text;
          AQ_Query.Parameters.ParamByName('betrag').Value := ledPreis.Text;
          AQ_Query.Parameters.ParamByName('waldemar').Value := option[0];
          AQ_Query.Parameters.ParamByName('jonas').Value := option[1];
          AQ_Query.Parameters.ParamByName('ali').Value := option[2];
          AQ_Query.Parameters.ParamByName('ben').Value := option[3];
          AQ_Query.Open;
        end;

    The error: (screenshot not reproduced here). I am using MySQL and Delphi 2010.

    Read the article

  • MS SQL database with clustered GUID PKs - switch the clustered index or switch to sequential (comb) GUIDs

    - by Eyvind
    We have a database in which all the PKs are GUIDs, and most of the PKs are also the clustered index for the table. We know that this is bad (due to the random nature of GUIDs). So it seems there are basically two options here (short of throwing out GUIDs as PKs altogether, which we cannot do, at least not at this time). We could change the GUID generation algorithm to e.g. the one that NHibernate uses, as detailed in this post, or we could, for the tables that are under the heaviest use, change to a different clustered index, e.g. an IDENTITY column, and keep the "random" GUIDs as PKs.

    Is it possible to give any general recommendations in such a scenario? The application in question has 500+ tables, the largest one presently at about 1.5 million rows, a few tables around 500,000 rows, and the rest significantly lower (most of them well below 10K). Furthermore, the application is already installed at several customer sites, so we have to take any possible negative effects for existing customers into consideration. Thanks!
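
    For reference, both options look roughly like the sketch below in T-SQL - either defaulting new keys to NEWSEQUENTIALID() (a server-side cousin of the client-side comb approach mentioned above), or keeping the random GUID as a nonclustered primary key and clustering on an added IDENTITY column. Table and column names are illustrative only:

        -- Option 1: sequential GUIDs (NEWSEQUENTIALID is only allowed as a column DEFAULT)
        CREATE TABLE dbo.Orders_SequentialGuid
        (
            OrderID UNIQUEIDENTIFIER NOT NULL
                CONSTRAINT DF_Orders_SequentialGuid DEFAULT NEWSEQUENTIALID()
                CONSTRAINT PK_Orders_SequentialGuid PRIMARY KEY CLUSTERED,
            OrderDate DATETIME NOT NULL
        );

        -- Option 2: keep the random GUID as a nonclustered PK, cluster on an IDENTITY column
        CREATE TABLE dbo.Orders_IdentityCluster
        (
            OrderKey  INT IDENTITY(1,1) NOT NULL,
            OrderID   UNIQUEIDENTIFIER NOT NULL
                CONSTRAINT PK_Orders_IdentityCluster PRIMARY KEY NONCLUSTERED,
            OrderDate DATETIME NOT NULL
        );
        CREATE UNIQUE CLUSTERED INDEX CIX_Orders_IdentityCluster
            ON dbo.Orders_IdentityCluster (OrderKey);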

    Read the article

  • MS SQL SELECT stored procedure according to ComboBox.SelectedValue

    - by Jay
    Hello,

    In order to fill a DataGridView according to the SelectedValue of a ComboBox, I've tried creating a stored procedure. However, as I'm not 100% sure what I'm doing, it either returns everything within the table or nothing at all, depending on the WHERE statement at the end of my stored procedure.

    This is what's in my class:

        Public Function GetAankoopDetails(ByRef DisplayMember As String, ByRef ValueMember As String) As DataTable
            DisplayMember = "AankoopDetailsID"
            ValueMember = "AankoopDetailsID"
            If DS.Tables.Count > 0 Then
                DS.Tables.Remove(DT)
            End If
            DT = DAC.ExecuteDataTable(My.Resources.S_AankoopDetails, _Result, _
                DAC.Parameter(Const_AankoopID, AankoopID), _
                DAC.Parameter("@ReturnValue", 0))
            DS.Tables.Add(DT)
            Return DT
        End Function

        Public Function GetAankoopDetails() As DataTable
            If DS.Tables.Count > 0 Then
                DS.Tables.Remove(DT)
            End If
            DT = DAC.ExecuteDataTable(My.Resources.S_AankoopDetails, _Result, _
                DAC.Parameter(Const_AankoopID, AankoopID), _
                DAC.Parameter("@ReturnValue", 0))
            DS.Tables.Add(DT)
            Return DT
        End Function

    This is the function in the code behind the form that I've written in order to fill the DataGridView:

        Private Sub GridAankoopDetails_Fill()
            Try
                Me.Cursor = Cursors.WaitCursor
                dgvAankoopDetails.DataSource = Nothing
                _clsAankoopDetails.AankoopDetailsID = cboKeuze.SelectedValue
                dgvAankoopDetails.DataSource = _clsAankoopDetails.GetAankoopDetails
            Catch ex As Exception
                MessageBox.Show("An error occurred while trying to fill the data grid: " & ex.Message, "Oops!", MessageBoxButtons.OK)
            Finally
                Me.Cursor = Cursors.Default
            End Try
        End Sub

    And finally, this is my stored procedure (do note that I'm not sure what I'm doing here):

        USE [Budget]
        GO
        /****** Object: StoredProcedure [dbo].[S_AankoopDetails] Script Date: 04/12/2010 03:10:52 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[S_AankoopDetails]
        (
            @AankoopID int,
            @ReturnValue int output
        )
        AS
        declare @Value int
        set @Value = @@rowcount
        if @Value = 0
        begin
            SELECT dbo.tblAankoopDetails.AankoopDetailsID, dbo.tblAankoopDetails.AankoopID, dbo.tblAankoopDetails.ArtikelID, dbo.tblAankoopDetails.Aantal, dbo.tblAankoopDetails.Prijs, dbo.tblAankoopDetails.Korting, dbo.tblAankoopDetails.SoortKorting, dbo.tblAankoopDetails.UitgavenDeelGroepID
            FROM dbo.tblAankoopDetails
                INNER JOIN dbo.tblAankoop ON dbo.tblAankoopDetails.AankoopID = dbo.tblAankoop.AankoopID
                INNER JOIN dbo.tblArtikel ON dbo.tblAankoopDetails.ArtikelID = dbo.tblArtikel.ArtikelID
                INNER JOIN dbo.tblUitgavenDeelGroep ON dbo.tblAankoopDetails.UitgavenDeelGroepID = dbo.tblUitgavenDeelGroep.UitgavenDeelGroepID
            WHERE dbo.tblAankoopDetails.Deleted = 0 and dbo.tblAankoopDetails.AankoopID = @AankoopID
            ORDER BY AankoopID

            if @@rowcount > 0
            begin
                set @ReturnValue = 999
            end
            else
            begin
                set @ReturnValue = 997
            end
        end
        if @Value > 0
        begin
            -- This means I am trying to enter a user that already exists. (998)
            set @ReturnValue = 998
        end

    Does anyone know what I need to do to resolve this?

    Kind regards,
    Jay
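
    As a point of comparison, a stripped-down version of the procedure that only performs the filtered SELECT is sketched below; as far as I can tell, @@ROWCOUNT at the very top of a procedure reflects whatever statement ran before the call, so branching on it there is unlikely to behave predictably. Treat this as a starting point rather than a drop-in replacement:

        ALTER PROCEDURE [dbo].[S_AankoopDetails]
        (
            @AankoopID   int,
            @ReturnValue int OUTPUT
        )
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT d.AankoopDetailsID, d.AankoopID, d.ArtikelID, d.Aantal,
                   d.Prijs, d.Korting, d.SoortKorting, d.UitgavenDeelGroepID
            FROM dbo.tblAankoopDetails AS d
                INNER JOIN dbo.tblAankoop AS a ON d.AankoopID = a.AankoopID
                INNER JOIN dbo.tblArtikel AS ar ON d.ArtikelID = ar.ArtikelID
                INNER JOIN dbo.tblUitgavenDeelGroep AS u ON d.UitgavenDeelGroepID = u.UitgavenDeelGroepID
            WHERE d.Deleted = 0
              AND d.AankoopID = @AankoopID
            ORDER BY d.AankoopID;

            -- Same codes as the original: 999 = rows found, 997 = nothing for this AankoopID
            SET @ReturnValue = CASE WHEN @@ROWCOUNT > 0 THEN 999 ELSE 997 END;
        END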

    Read the article

  • xVelocity engines compared: VertiPaq vs ColumnStore #ssas #vertipaq #xvelocity #sql #tabular

    - by Marco Russo (SQLBI)
    During the last months Alberto and I worked on several projects using Analysis Services Tabular, and we had to face real-world issues such as complex queries, large data volumes, frequent data updates and so on. Sometimes we faced the challenge of comparing Tabular performance with SQL Server. It seemed a non-sense, because even if the same core xVelocity technology is implemented in both products (SQL Server 2012 uses ColumnStore indexes, whereas Analysis Services 2012 uses VertiPaq), we initially assumed that the better optimization for the in-memory engine used by Analysis Services would have always been better than SQL Server. However, we discovered several important things:

    - Processing time might be different, and having data on SQL Server could make ColumnStore way faster for processing.
    - Partitioning in SQL Server might be much more effective for query performance than Analysis Services.
    - A single query can scale easily on more processors on SQL Server, whereas in Analysis Services the formula engine is single-threaded and could be a bottleneck for certain queries.
    - In case of a large workload with many concurrent users, the storage engine cache in Analysis Services could be a big advantage over SQL Server, especially for scalability.

    As you can see, these considerations are not always obvious and you might be tempted to make other assumptions based on this information. Well, don't do that. Before anything else, read the whitepaper VertiPaq vs ColumnStore Comparison written by Alberto Ferrari. Then, measure your workload. Finally, make some conclusions. But don't make too many assumptions. You might be wrong, as we did at the beginning of this journey.
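
    For anyone reproducing the SQL Server side of the comparison, a nonclustered columnstore index in SQL Server 2012 is created as sketched below (the table and columns are made up; note that in the 2012 release the base table becomes read-only while the index exists):

        -- Hypothetical fact table
        CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
        ON dbo.FactSales (DateKey, ProductKey, StoreKey, Quantity, Amount);

        -- Typical star-join aggregation that benefits from the columnstore
        SELECT d.CalendarYear, SUM(f.Amount) AS TotalAmount
        FROM dbo.FactSales AS f
        INNER JOIN dbo.DimDate AS d ON d.DateKey = f.DateKey
        GROUP BY d.CalendarYear;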

    Read the article

  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft's pricing for SQL Server in the cloud, SQL Azure, has been announced:

    - $9.99 per month for 0 - 1 GB
    - $99.99 per month for up to 10 GB

    There's currently a 10 GB maximum size cap for SQL Azure. For larger data storage needs, you'll need to break the databases into smaller sizes.

    Scaling SQL Azure Applications: If you think you're going to need 100 GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 x $9.99 = $99.99 anyway) and just make really sure none of the individual databases exceeds 10 GB.

    Beep Beep, Back That Database Up: The bandwidth costs for SQL Azure are $0.15 per GB of outbound bandwidth. Assuming that you don't compress the data before you pull it out of the cloud, that means daily backups of a 1 GB database will add another $4.50 per month, and a 10 GB database will add another $45/month. Daily backups will cost about half of what your monthly service charges cost.

    It's not completely clear from the press release, but if Microsoft follows Amazon's pricing model, bandwidth between the Microsoft cloud services will not incur a cost. That would mean it might make sense to spin up a Windows Azure computing application for $0.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure storage for backup. That would eliminate the data in/out costs and minimize the Azure storage costs ($0.15/GB). Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed. Of course, there's no native backup support in SQL Azure, and it's not clear whether Windows Azure will include tools like SQL Server Integration Services.

    More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/

    Anish, S

    Read the article

  • How to save an Excel file in MS SQL Server 2005

    - by Anish Mohan
    I have a table with two columns: 1. import type, 2. Excel import template. The second column - "Excel import template" - should store the whole Excel file. How would I save an Excel file in the database? Can I use a binary datatype column, convert the Excel file to bytes and save that? Thanks in advance!
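
    Yes - a varbinary(max) column is the usual choice in SQL Server 2005. A minimal sketch of such a table, plus loading a file that sits on the SQL Server machine via OPENROWSET ... SINGLE_BLOB (table, path and file names are placeholders; from application code you would instead pass the byte array as a parameter):

        CREATE TABLE dbo.ImportTemplate
        (
            ImportType          NVARCHAR(100)  NOT NULL,
            ExcelImportTemplate VARBINARY(MAX) NOT NULL
        );

        -- Load a template file that is accessible from the SQL Server machine
        INSERT INTO dbo.ImportTemplate (ImportType, ExcelImportTemplate)
        SELECT N'Customers', BulkColumn
        FROM OPENROWSET(BULK N'C:\Templates\CustomerImport.xls', SINGLE_BLOB) AS t;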

    Read the article

  • How to factor out data layer in nopCommerce and replace MS SQL with RavenDB?

    - by Kaveh Shahbazian
    I am new to nopCommerce and ecommerce in general, but I am involved in an ecommerce project. From my past experiences with RavenDB (which were mostly absolutely pleasant) and based on the needs of the business (fast changes with awkward business workflows), it seemed an appealing option to have RavenDB handle everything related to the database.

    I do not fully understand the design and architecture of nopCommerce, so I have not reached a conclusion on how to factor out the data parts, since it seems the services layer does not actually abstract data-layer concepts away - for example, the EF working model is brought into other layers. I have found another project which used NuDB as its database, as a nopCommerce fork. But it did not help, because NuDB still has the feel of an RDBMS and is not as different as RavenDB.

    Now, first, how can I learn about the internals of nopCommerce (other than investigating the code)? Its workflows? Its conventions? Second, has anyone tried something similar before with a NoSQL database (say MongoDB or RavenDB)? Is it possible to achieve this in a 1 (~2) month time frame?

    Thanks in advance;

    Read the article

  • Can a T-SQL function return a user-defined table type?

    - by abatishchev
    I have my own type:

        CREATE TYPE MyType AS TABLE
        (
            foo INT
        )

    and a function receiving it as a parameter:

        CREATE FUNCTION Test
        (
            @in MyType READONLY
        )
        RETURNS @return MyType
        AS
        ...

    Can it return MyType, or only a TABLE repeating MyType's structure, like this?

        CREATE FUNCTION Test
        (
            @in MyType READONLY
        )
        RETURNS @return TABLE (foo INT)
        AS
        ...
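
    For what it's worth, as far as I know only the second form (repeating the column list) compiles on SQL Server 2008 - a table type can be used for a parameter but not as a function's return type. Once the function exists, the type is still used on the calling side, as in this small usage sketch:

        -- Assumes dbo.Test has been created using the second form from the question
        DECLARE @input MyType;

        INSERT INTO @input (foo)
        VALUES (1), (2), (3);

        SELECT foo
        FROM dbo.Test(@input);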

    Read the article
