Search Results

Search found 38535 results on 1542 pages for 'sql memory'.


  • SQL*Plus – Running SQL Scripts from a Shell Script (SQL*Plus Tips-3)

    - by Yuichi.Hayashi
    A SQL script is commonly run in batch with a command such as $ sqlplus @batch1.sql, but this requires keeping the SQL in a separate script file. When the SQL is short, it can be more convenient to embed it directly in the shell script itself and feed it to SQL*Plus through a here-document:

        #!/bin/sh
        sqlplus -s /nolog <<EOF
        conn scott/tiger
        select sysdate from dual;
        exit
        EOF

    The label "EOF" marking the end of the here-document is arbitrary; any token, such as "EndOfSQL", works as long as the opening and closing labels match. A further advantage of this style is that shell variables can be substituted into the SQL before it reaches SQL*Plus, which is not possible with a standalone script file. For example, the table to SELECT from can be chosen by a shell variable:

        #!/bin/sh
        table_name=dual
        sqlplus -s /nolog <<EndOfSQL
        conn scott/tiger
        select sysdate from $table_name;
        exit
        EndOfSQL

    (Written by Hiroyuki Nakaie)

    Read the article

  • Can't remotely connect through SQL Server Management Studio

    - by FAtBalloon
    I have set up a SQL Server 2008 Express instance on a dedicated Windows 2008 server hosted by 1and1.com. I cannot connect remotely to the server through Management Studio. I have taken the steps below and am beyond any further ideas. I have researched the site and cannot figure anything else out, so please forgive me if I missed something obvious, but I'm going crazy. Here's the lowdown.
    - The SQL Server instance is running and works perfectly when working locally.
    - In SQL Server Management Studio, I have checked the box "Allow Remote Connections to this Server".
    - I have removed any external hardware firewall settings from the 1and1 admin panel.
    - Windows Firewall on the server has been disabled, but just for kicks I added an inbound rule that allows all connections on port 1433.
    - In the SQL Server network configuration, TCP/IP is enabled. I made sure the "IP1" entry with the server's IP address had a 0 for dynamic port, but then I deleted that and added 1433 in the regular TCP Port field. I also set the "IPALL" TCP Port to 1433.
    - SQL Server Browser is also running, and I also tried adding an alias in the SQL Native Client configuration. I restarted SQL Server after setting these values.
    - Doing a "netstat -ano" on the server machine returns: TCP 0.0.0.0:1433 LISTENING and UDP 0.0.0.0:1434 LISTENING.
    - A port scan from my local computer says the port is FILTERED instead of LISTENING.
    - I also tried to connect from Management Studio on my local machine and it throws a connection error. With both SQL Server and Windows Authentication enabled in the database security, I tried the following server names: ipaddress\SQLEXPRESS,1433 / ipaddress\SQLEXPRESS / ipaddress / ipaddress,1433 / tcp:ipaddress\SQLEXPRESS / tcp:ipaddress\SQLEXPRESS,1433
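
    One additional check worth trying (a sketch, not from the original post): ask the instance itself which endpoints it bound at startup, by searching the current error log. xp_readerrorlog is undocumented but widely used for this. If it reports 1433 while the remote scan still shows FILTERED, the block is upstream of SQL Server (a host firewall or provider-level filtering).

        -- Run locally on the server: search the current SQL Server error log
        -- (log 0, type 1 = SQL Server log) for the startup lines that name
        -- each endpoint the instance is listening on.
        EXEC master.dbo.xp_readerrorlog 0, 1, N'Server is listening on';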

    Read the article

  • Can't connect to Sql Server 2008 named instance

    - by eidylon
    I just installed SQL 2008 Express on a new server running Windows Server 2008. I know SQL is working properly, because I can connect to the DB fine locally, on the server. I cannot connect to it from a client machine though, neither by IP address nor by machine name (iporname\instance). I know I have the correct IP address, because I am RDCing into the server to perform all this configuration and setup, and if I ping the server name, it resolves to the correct IP address as well. On the server, I have set up an inbound firewall exception allowing all traffic on any port on any protocol to sqlservr.exe. In SSMS, under Server > Properties > Connections, "Allow remote connections to this server" is enabled. In SQL Server Configuration Manager, TCP/IP is enabled in both the Protocols for <instance> and the Client Protocols sections. I looked in the Windows logs, but don't see anything about connections being denied or dropped. As far as I can see, I have everything set right, but cannot connect from a client machine. The client CAN connect to other SQL 2008 Express servers okay, so I know the client configuration is correct. Any ideas where else I can look for info on what/where/how this connection is being dropped? Greatly appreciated! The error being returned by the client is:
    TITLE: Connect to Server
    Cannot connect to [MY.IP.ADD.RSS]\[MYINSTNAME].
    ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    Read the article

  • attach / detach mssql 2008 sql server manager [SOLVED]

    - by Tillebeck
    An external consultant wrote a guide on how to copy a database. Step two was to detach the database using SQL Server Manager. After the detach, the database was not visible in SQL Server Manager... Not much to do but write a mail to the service provider asking to have the database attached again. The service provider's answer: "Not possible to attach again since the SQL Server security has been violated". Rolling back to the last backup is not the option I want to use. Can anyone give feedback on whether it seems logical and reasonable to assume that a database detached from SQL Server 2008, accessed through SQL Server Manager, cannot be reattached? It was done by right-clicking the database and choosing detach.
    -- update --
    Based on the comments below, I update the question with the server setup. There are two dedicated servers:
    srv1: web server with remote desktop and a SQL Server Manager
    srv2: SQL server that can be accessed through the SQL Server Manager on the web server
    -- update 2 --
    After a restart of the server, the DBA could suddenly attach the database. I guess that after the restart it was a simple task. So all of your answers were right! It seems that I can only mark one answer as correct, so I marked the first one, but all are correct answers. Thanks a lot. Without the link to this thread we might have had to suffer while watching our database being restored from a backup :-) Thanks a lot. BR. Anders

    Read the article

  • What is the maximum memory a process (MySQL) can consume on a 32-bit OS?

    - by mmattax
    I have MySQL running on a 32-bit RHEL box. The server itself has 4GB total memory with 2GB allocated to MySQL. I would like to know the max amount of memory I can put in the box and how much of that I can allocate to MySQL. I have heard both 2GB and 4GB as the per-process-limit on a 32-bit OS... Ultimately I'd like to know if I can increase the memory for MySQL without upgrading to a 64-bit OS.

    Read the article

  • How do I play nicely with MS SQL / SQL Server 2008

    - by acidzombie24
    Big problem. I have nearly given up. I am trying to port my prototype to use MS SQL so it will work on a server once I get it (the server will be SQL Server 2008, shared; I don't know any more info). So I tried to connect to SQL Server via the Visual Studio IDE and had no luck. I enabled TCP and named pipes and restarted the service (and computer), with still no luck. Then I remembered about .mdf files, so I made one. After the obstacle of not being able to figure out the required connection string, I found that Visual Studio has it in its properties, and I successfully connected with that. Then I had a problem with nested transactions. After not being able to figure out how to check for them, I wondered if I could configure the server to allow them somehow. I always thought all MS SQL editions were the same except for limitations, but SQL Server seems to support nested transactions, so there's no point trying to work around the problem with .mdf files; I won't need them, and I really just used them to port the base of my SQL code and to check that the syntax is correct. I tried installing SQL Server Management Studio, since people mentioned it several times (as a solution or at least a help). When installing it on Windows 7, it says it may not be compatible. After running it, it launched SQL Server Installation Center (64-bit), which doesn't seem to be the same thing, as I don't see a way to modify any of my server (networking) configurations or edit user permissions, etc. I am clueless about what to do next. Does anyone have any ideas? I'm posting here because I think my problem is more about configuration and SQL Server than programming.

    Read the article

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks, when virtualizing a server, is to determine the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs; in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs. In order to achieve a better consolidation (a.k.a. VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls 'Dynamic Memory'. This means that the start-up RAM assigned to guest virtual machines can be allowed to vary according to demand, changing dynamically while the VM is running, based on the workload of the applications running inside. If demand outstrips supply, then memory can be rationed according to the 'memory weight' assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware's Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer. This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the 'pool' when not under peak load. Other applications however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload; they can grab hot-add memory whilst running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments. Whereas VMware's Memory Overcommit is well-proven in a number of different configurations, Hyper-V's 'Dynamic Memory' is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.
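
    For SQL Server itself, the usual guard-rail is to bound what the engine will take in the first place. A minimal sketch (the 4096 MB figure is illustrative, not a recommendation; size it to the VM's expected steady-state load):

        -- Cap the buffer pool so SQL Server cannot keep every hot-added megabyte.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 4096;
        RECONFIGURE;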

    Read the article

  • SQL Server PowerShell Provider follows the Version of PowerShell on the Host and other errata

    - by BuckWoody
    There may be some misunderstanding about how the PowerShell provider for SQL Server works. I've written an article or two explaining that you can use PowerShell with SQL Server without having the SQL Server 2008 (or higher) provider around. After all, PowerShell just uses .NET, and SQL Server "Server Management Objects" (SMO) listen to that interface as well. In SQL Server 2008 and higher we created a "MiniShell" for PowerShell that gives you the ability to treat a SQL Server instance as a drive (called a "provider", or path, or drive) and a few commands (called cmdlets). Using these two simple constructs you can move around SQL Server quickly and work with the objects it holds. I read the other day where someone stated that we had "re-compiled" PowerShell, so that you would have version 1.0 from SQL Server and 2.0 on your new server. Not so! Drop to a SQLPS prompt and a PowerShell prompt and type this in each:
    $PSVersionTable
    They should return the same value. You can think of a MiniShell as simply a compiled "profile" that gives you those providers and cmdlets automatically; that's all. In fact, you can load the SMO libraries yourself without the SQL Server 2008 provider anywhere in sight. I do this all the time, since the MiniShell also has other restrictions. Also remember that if you run a PowerShell script as a SQL Agent job step type (in 2008 and higher), you're running under the context of the account that starts Agent. I think most folks know this, but it's good to keep in mind. There's a re-written section of Books Online that goes over working with this very nicely. It also covers the questions "How do I connect to another server using the SQL Server PowerShell provider" (hint: it's just CD) and "How do I load all the SMO stuff if I don't want to use the provider", and more. Be sure to check out the note at the bottom that explains the firewall exceptions you'll need to enable to CD to that remote server. Here's that link: http://msdn.microsoft.com/en-us/library/cc281947.aspx

    Read the article

  • Microsoft guarantees the performance of SQL Server

    - by simonsabin
    I have recently been informed that Microsoft will be guaranteeing the performance of SQL Server. Yes, that's right: Microsoft will guarantee that you will get better performance out of SQL Server than any other competitor's system. However, on the flip side, they are also saying that end users will have to guarantee the performance of SQL Server if they want to use the next release, targeted for 2011 or 2012. It appears that a recent recruit, Mark Smith from Newcastle, England, will be heading a new team that will be making sure you are running SQL Server on adequate hardware and are developing your applications according to best practices. The Performance Enforcement Team (SQLPET) will be a global group headed by Mark that will oversee two other groups: the existing Customer Advisory Team (SQLCAT) and another new team, the Design and Operation Group (SQLDOG). Mark informed me that the team was originally thought up during Yukon and was going to be an independent body that went round to customers making sure they didn't suffer performance problems. However, it was felt that they needed to wait a few releases until SQL Server was really there. The original Yukon Independent Performance Enhancement Team (YIPET) has now become the SQL Performance Enforcement Team (SQLPET). When challenged about the change from enhancement to enforcement, Mark was unwilling to comment. An anonymous source suggested that "..Microsoft is sick of the bad press SQL Server gets for performance when the performance problems are normally down to people developing applications badly and using inadequate hardware...". It's true that it is very easy to install and run SQL Server, unlike other RDBMS systems, and the flip side is that it's also easy to get into performance problems due to under-specified hardware and bad design. It's not yet confirmed whether this enforcement will apply to all SKUs or just the high-end ones. I would personally welcome some level of architectural and hardware advice service that clients could turn to, in order to justify getting the appropriate hardware at the start of a project and not a year in, when it's often too late.

    Read the article

  • Python memory leaks

    - by Fragsworth
    I have a long-running script which, if left to run long enough, will consume all the memory on my system. Without going into details about the script, I have two questions:
    1. Are there any "best practices" to follow which will help prevent leaks from occurring?
    2. What techniques are there to debug memory leaks in Python?

    Read the article

  • Cascading Deletes in SQL Server 2008 not working

    - by Vaccano
    I have the following table setup.

        Bag
        +-> BagID (Guid)
        +-> BagNumber (Int)

        BagCommentRelation
        +-> BagID (Int)
        +-> CommentID (Guid)

        BagComment
        +-> CommentID (Guid)
        +-> Text (varchar(200))

    BagCommentRelation has foreign keys to Bag and BagComment. So, I turned on cascading deletes for both those foreign keys, but when I delete a bag, it does not delete the comment row. Do I need to break out a trigger for this? Or am I missing something? (I am using SQL Server 2008.)

    Note: Posting the requested SQL. This is the definition of the BagCommentRelation table. (I had the type of the BagID wrong: I thought it was a Guid but it is an int.)

        CREATE TABLE [dbo].[Bag_CommentRelation](
            [Id] [int] IDENTITY(1,1) NOT NULL,
            [BagId] [int] NOT NULL,
            [Sequence] [int] NOT NULL,
            [CommentId] [int] NOT NULL,
            CONSTRAINT [PK_Bag_CommentRelation] PRIMARY KEY CLUSTERED
            (
                [BagId] ASC,
                [Sequence] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK ADD CONSTRAINT [FK_Bag_CommentRelation_Bag]
            FOREIGN KEY([BagId]) REFERENCES [dbo].[Bag] ([Id]) ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Bag]
        GO
        ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK ADD CONSTRAINT [FK_Bag_CommentRelation_Comment]
            FOREIGN KEY([CommentId]) REFERENCES [dbo].[Comment] ([CommentId]) ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Comment]
        GO

    The row in this table deletes but the row in the comment table does not.
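
    This is expected behavior: ON DELETE CASCADE only flows from the referenced (parent) table down to the referencing (child) table. Deleting a Bag cascades into Bag_CommentRelation, but Comment is itself a parent of the relation table, so nothing cascades up into it. A sketch of the trigger workaround, assuming the schema above (the trigger name is hypothetical; AFTER triggers also fire for rows removed by a cascade):

        CREATE TRIGGER trg_BagCommentRelation_Delete
        ON dbo.Bag_CommentRelation
        AFTER DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Remove comments that were referenced only by the deleted rows.
            DELETE c
            FROM dbo.Comment AS c
            JOIN deleted AS d ON d.CommentId = c.CommentId
            WHERE NOT EXISTS (SELECT 1 FROM dbo.Bag_CommentRelation AS r
                              WHERE r.CommentId = c.CommentId);
        END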

    Read the article

  • Is this overkill? Using MDX queries and cubes instead of SQL stored procedures

    - by Jason Holland
    I am new to Microsoft's SQL Server Analysis Services Cubes and MDX queries. Where I work we have a daily sales table in SQL Server 2005 that already contains an aggregate of sale information per store per day. At this time it contains only 164,000+ rows. We have a sales cube dedicated to this table that about 15 reports are based off of. Now, I should also note that we generate reports based on our own fiscal year criteria: a 13 period year (1 month equals 28 days etc.). Is this overkill? At what point is it justified to begin using SSAS Cubes/MDX over plain old SQL Server stored procedures? Since I have always been just using plain old SQL am I tragically late to the MDX party?

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell. This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview: The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We will configure a TCP port that we open (and close) through the Windows Firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same Cloud Service. Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that PowerShell and other commands can be run in them directly and remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default so we can remotely connect to them and check the connections to the SQL instances; this is for demo purposes only and is not actually required.

    Step 1 – Provision VMs and Configure Ports. Provision VM1, named DemoVM1, then provision VM2 (DemoVM2) with PowerShell remoting enabled and connected to DemoVM1 (the original post illustrates both steps, and the resulting default port configurations, with portal screenshots).

    Step 2 – Verify / Confirm the TCP port used by the Database Engine. By default the port will be 1433; this can be changed to a different port number if desired.
    1. RDP to each of the VMs created above; this also ensures the VMs complete sysprep and finish configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server engine, in this case 1433.
    4. Update from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4, and 5 for the second VM, DemoVM2.

    Step 3 – Remote PowerShell to DemoVM1:
        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows Firewall:
        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Output: Ok.
        netsh advfirewall firewall show rule name=DemoVM1Port
        Rule Name:      DemoVM1Port
        Enabled:        Yes
        Direction:      In
        Profiles:       Domain,Private,Public
        Grouping:
        LocalIP:        Any
        RemoteIP:       Any
        Protocol:       TCP
        LocalPort:      1433
        RemotePort:     Any
        Edge traversal: No
        Action:         Allow
        Ok.
    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1.

    Step 6 – Close port 1433 in the Windows Firewall:
        netsh advfirewall firewall delete rule name=DemoVM1Port
    Output: Deleted 1 rule(s). Ok.
        netsh advfirewall firewall show rule name=DemoVM1Port
        No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1. Because port 1433 has been closed (in step 6) in the Windows Firewall on the VM1 machine, we can no longer connect from VM2 remotely to VM1.

    Scenario 2: VMs provisioned in different Cloud Services. Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, as in Scenario 1, so they can be re-configured remotely to allow incoming SQL connections from a remote VM or on-premises machine(s). Note: as before, RDP is kept configured in both VMs for demo purposes only and is not actually needed.

    Step 1 – Provision new VM3. Provision VM3, named DemoVM3 (the original post shows the portal screenshots and the resulting default port configuration).

    Step 2 – Add a public port to VM1 to connect to from VM3's DB instance. Since VM3 and VM1 are not connected in the same cloud service, the full DNS address, which includes the public port, must be specified when connecting between the machines. We add a public port, 57000 in this case, linked to private port 1433, which will be used later to connect to the DB instance.

    Step 3 – Remote PowerShell to DemoVM1:
        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows Firewall:
        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Output: Ok. (The rule listing from "netsh advfirewall firewall show rule name=DemoVM1Port" is identical to Scenario 1, Step 4.)

    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1. RDP into VM3, launch SSMS and connect to VM1's DB instance. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 – Close port 1433 in the Windows Firewall:
        netsh advfirewall firewall delete rule name=DemoVM1Port
    Output: Deleted 1 rule(s). Ok.
        netsh advfirewall firewall show rule name=DemoVM1Port
        No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1. Because port 1433 has been closed (in step 6) in the Windows Firewall on the VM1 machine, we can no longer connect from VM3 remotely to VM1.
    Conclusion: Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL Server management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure a Virtual Machine's firewall to allow (or disallow) SQL Server connections.

    References: SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • MCSE and MCSA make a return to the world of certification... but not as you know it.

    - by Testas
    Quick announcement: Microsoft Learning today announced the certification tracks for the upcoming SQL Server 2012 exams.

    You begin by achieving the MCSA – Microsoft Certified Solutions Associate (not to be confused with the old Microsoft Certified Systems Administrator). If you are starting out, this includes taking the following three exams:
    Exam 70-461: Querying Microsoft SQL Server 2012
    Exam 70-462: Administering Microsoft SQL Server 2012 Databases
    Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012

    If you already have an MCTS in SQL Server 2008, you can take the following path:
    A pass in a SQL Server 2008 (MCTS) Microsoft Certified Technology Specialist exam
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2

    Once you have achieved your MCSA status, you can start working towards your MCSE – Microsoft Certified Solutions Expert certification. You have a choice: the MCSE: SQL Server 2012 Data Platform, the MCSE: SQL Server 2012 Business Intelligence, or both.

    MCSE: SQL Server 2012 Data Platform involves:
    Obtaining your SQL Server 2012 MCSA
    Exam 70-464: Developing Microsoft SQL Server 2012 Databases
    Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012
    There is also an upgrade path:
    A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Database Administrator or Database Developer certification
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Exam 70-459: Transitioning your MCITP on SQL Server 2008 Database Administrator or Database Developer to MCSE: Data Platform

    MCSE: SQL Server 2012 Business Intelligence involves:
    Obtaining your SQL Server 2012 MCSA
    Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012
    Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012
    The upgrade path involves:
    A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Business Intelligence certification
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Exam 70-460: Transitioning your MCITP on SQL Server 2008 Business Intelligence Developer to MCSE: Business Intelligence

    As a result, if you want to achieve the MCSE in either Data Platform or Business Intelligence and you are starting from scratch, there will be five exams to take. If you can upgrade your certification because you already have an MCITP, then it will be three exams.

    Full details and questions can be found at http://www.microsoft.com/learning/en/us/certification/cert-sql-server.aspx

    Thanks
    Chris

    Read the article

  • What is the suggested approach to Syncing/Backing up/Restoring from SQL Server 2008 to SQL Server 2005

    - by Eoin Campbell
    I only have SQL Server 2008 (Dev Edition) on my development machine. I only have SQL Server 2005 available with my hosting company (and I don't have direct connection access to this database). I'm just wondering what the best approach is for:
    1. Getting the initial DB structure and data into production.
    2. Keeping any structural changes/data changes in sync in future.
    As far as I can see:
    - Replication: not an option, because I can't connect to the production DB.
    - Restoring a backup: not an option, because as far as I can see you cannot export a DB from 2008 that is restorable in 2005 (even with the 2008 DB set to 2005 compatibility mode), and it wouldn't make sense to restore production over the top of my dev version anyway.
    - Dump all the scripts from my 2008 database, revert my dev machine from 2008 to 2005, and recreate the database from the scripts; then use backup and restore to get the initial DB into production, and run scripts through the web panel from that point onwards.
    - Dump all the scripts from my 2008 database and generate the entire 2005 DB from scripts in production; then run scripts through the web panel from that point onwards.
    With the last two options, I'd probably need to script all the data inserts as well, using some tool (which I presume exists on the web). Are there any other possible solutions that I'm not considering?

    Read the article

  • Combining SQL Rows

    - by lumberjack4
    I've got a SQL Compact database that contains a table of IP packet headers. The table looks like this:

        Table: PacketHeaders
        ID  SrcAddress  SrcPort  DestAddress  DestPort  Bytes
        1   10.0.25.1   255      10.0.25.50   500       64
        2   10.0.25.50  500      10.0.25.1    255       80
        3   10.0.25.50  500      10.0.25.1    255       16
        4   75.48.0.25  387      74.26.9.40   198       72
        5   74.26.9.40  198      75.48.0.25   387       64
        6   10.0.25.1   255      10.0.25.50   500       48

    I need to perform a query to show 'conversations' going on across a local network. Packets going from A to B are part of the same conversation as packets going from B to A. Basically, what I need is something that looks like this:

        SrcAddress  SrcPort  DestAddress  DestPort  TotalBytes  BytesA->B  BytesB->A
        10.0.25.1   255      10.0.25.50   500       208         112        96
        75.48.0.25  387      74.26.9.40   198       136         72         64

    As you can see, I need the query (or series of queries) to recognize that A->B is the same as B->A and break up the byte counts accordingly. I'm not a SQL guru by any means, but any help on this would be greatly appreciated.
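
    A sketch of one way to do this in plain SQL (assuming the PacketHeaders table above; written against full SQL Server syntax, so it may need adjusting for SQL Compact): normalize each row so the lexically smaller address always appears as endpoint A, then group on the normalized pair. String comparison of dotted quads is not numeric ordering, but any consistent ordering is enough for pairing:

        SELECT AddrA, PortA, AddrB, PortB,
               SUM(Bytes) AS TotalBytes,
               SUM(CASE WHEN Dir = 1 THEN Bytes ELSE 0 END) AS BytesAtoB,
               SUM(CASE WHEN Dir = 2 THEN Bytes ELSE 0 END) AS BytesBtoA
        FROM (
            -- Normalize: the lexically smaller address becomes endpoint A;
            -- Dir records the packet's original direction (1 = A->B, 2 = B->A).
            SELECT
                CASE WHEN SrcAddress < DestAddress THEN SrcAddress  ELSE DestAddress END AS AddrA,
                CASE WHEN SrcAddress < DestAddress THEN SrcPort     ELSE DestPort    END AS PortA,
                CASE WHEN SrcAddress < DestAddress THEN DestAddress ELSE SrcAddress  END AS AddrB,
                CASE WHEN SrcAddress < DestAddress THEN DestPort    ELSE SrcPort     END AS PortB,
                CASE WHEN SrcAddress < DestAddress THEN 1 ELSE 2 END AS Dir,
                Bytes
            FROM PacketHeaders
        ) AS t
        GROUP BY AddrA, PortA, AddrB, PortB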

    Read the article

  • Report Direct3D memory usage

    - by Jazz
    I have a Direct3D 9 application and I would like to monitor the memory usage. Is there a tool to know how much system and video memory is used by Direct3D? Ideally, it would also report how much is allocated for textures, vertex buffers, index buffers...

    Read the article

  • SQL Server: Is it possible to prevent SQL Agent from failing a step on error?

    - by Kenneth
    I have a stored procedure that runs custom backups for around 60 SQL Servers (a mix of 2000 through 2008 R2). Occasionally, due to issues outside of my control (backup device inaccessible, network error, etc.) an individual backup on one or two databases will fail. This causes the entire step to fail, which means any subsequent backup commands are not executed and half of the databases on a given server may not be backed up. On the 2005+ boxes I am using TRY/CATCH blocks to manage these problems and continue backing up the remaining databases. On a 2000 server, however, I have no way to prevent an error like this from failing the entire step:

        Msg 3201, Level 16, State 1, Line 1
        Cannot open backup device 'db-diff(\PATH\DB-DIFF-03-16-2010.DIF)'. Operating system error 5 (Access is denied.).
        Msg 3013, Level 16, State 1, Line 1
        BACKUP DATABASE is terminating abnormally.

    I am simply asking whether anything like TRY/CATCH is possible in SQL 2000? I realize there are no built-in methods for this, so I guess I am looking for some creativity. Even when wrapping each backup (or any failing statement) via sp_executesql, the job fails instantly. Example:

        DECLARE @x INT, @iReturn INT
        PRINT 'Executing statement that will fail with 208.'
        EXEC @iReturn = sp_executesql N'SELECT * FROM TABLETHATDOESNTEXIST;'
        PRINT CAST(@iReturn AS NVARCHAR) -- In SSMS this return code prints. Executed as a job it fails and aborts before this statement.
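
    One well-worn workaround on SQL 2000 (a sketch, not from the original post; paths and names are placeholders) is to run each BACKUP on its own connection, so a batch-aborting error kills only that child session: shell out via xp_cmdshell to osql with -b, which makes osql exit nonzero on error, then test the status xp_cmdshell hands back and carry on:

        -- Hypothetical per-database loop for SQL Server 2000.
        DECLARE @db sysname, @cmd varchar(500), @rc int
        DECLARE dbs CURSOR FOR
            SELECT name FROM master.dbo.sysdatabases WHERE name <> 'tempdb'
        OPEN dbs
        FETCH NEXT FROM dbs INTO @db
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- -b: osql returns a nonzero exit code if the BACKUP raises an error,
            -- but the failure stays in the child session and this loop keeps running.
            SET @cmd = 'osql -E -b -Q "BACKUP DATABASE [' + @db
                     + '] TO DISK = ''C:\Backups\' + @db + '.bak''"'
            EXEC @rc = master.dbo.xp_cmdshell @cmd, no_output
            IF @rc <> 0
                PRINT 'Backup failed for ' + @db + '; continuing with the next database.'
            FETCH NEXT FROM dbs INTO @db
        END
        CLOSE dbs
        DEALLOCATE dbs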

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:
    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    Anything else?

    Based on the responses I have gotten, let me clarify a little bit:
    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?
    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.
    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.
    ~ Andrew
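
    On the second idea, a sketch of the staging-table pattern (names are illustrative, following the question's schema): bulk-copy into an unindexed heap, then periodically move the rows across under a table lock, feeding them in clustered-key order:

        -- Hypothetical staging heap: no PK or clustered index, so inserts are cheap.
        CREATE TABLE [BulkDataStaging](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL
        )
        -- SqlBulkCopy targets BulkDataStaging; then, periodically:
        INSERT INTO [BulkData] WITH (TABLOCK)
        SELECT [ContainerId], [BinId], [Sequence], [ItemId], [Left], [Top], [Right], [Bottom]
        FROM [BulkDataStaging]
        ORDER BY [ContainerId], [BinId], [Sequence]  -- feed rows in clustered-key order
        TRUNCATE TABLE [BulkDataStaging]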

    Read the article

  • How should I use BIT in MS SQL 2005

    - by adopilot
    This is regarding SQL performance. I have a scalar-valued function for checking a specific condition in the database; it returns a BIT value, true or false. I do not know how I should fill the @bit parameter. If I write

        set @bit = convert(bit, 1)
    or
        set @bit = 1
    or
        set @bit = 'true'

    the function will work either way, but I do not know which method is recommended for daily use. Another question: I have a table in my database with around 4 million records, and the daily insert is about 4K records into that table. Now I want to add a CONSTRAINT on that table with the scalar-valued function I mentioned already, something like this:

        ALTER TABLE fin_stavke
        ADD CONSTRAINT fin_stavke_knjizenje CHECK ( dbo.fn_ado_chk_fin(id)=convert(bit,1))

    where field "id" is the primary key of table fin_stavke, and dbo.fn_ado_chk_fin looks like:

        create FUNCTION fn_ado_chk_fin
        (
            @stavka_id int
        )
        RETURNS bit
        AS
        BEGIN
            declare @bit bit
            if exists (select * from fin_stavke where id=@stavka_id and doc_id is null and protocol_id is null)
            begin
                set @bit=0
            end
            else
            begin
                set @bit=1
            end
            return @bit;
        END
        GO

    Will this type and method of check constraint badly affect performance on my table and on SQL Server in general? If there is a better way to add this control on the table, please let me know.
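
    For what it's worth, all three assignment styles from the question end up storing the same bit value; a quick sketch to see this:

        -- All three forms store 1: ints convert implicitly to bit, and the
        -- string literals 'true'/'false' map to 1 and 0 in SQL Server 2005.
        DECLARE @b1 bit, @b2 bit, @b3 bit
        SET @b1 = CONVERT(bit, 1)
        SET @b2 = 1
        SET @b3 = 'true'
        SELECT @b1 AS b1, @b2 AS b2, @b3 AS b3   -- returns 1, 1, 1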

    Read the article

  • Memory layout of executable

    - by Ross
    Hi all, when loading an executable, segments like the code, data, bss and so on need to be placed in memory. I am just wondering if someone could tell me where, on a standard x86 for example, the libc library is placed. Is that at the top or the bottom of memory? My guess is at the bottom, close to the application code, i.e. it would look something like this:

        ---------   0x1000
         Stack
           |
           V

           ^
           |
          Heap
        ----------
        Data + BSS
        ----------
         App Code
        ----------
          libc
        ---------   0x0000

    Thanks a lot, Ross

    Read the article

  • Help Finding Memory Leak

    - by Neal L
    Hi all, I am writing an iPad app that downloads a rather large .csv file and parses the file into objects stored in Core Data. The program keeps crashing, and I've run it along with the Allocations performance tool and can see that it's eating up memory. Nothing is alloc'ed or init'ed in the code, so why am I gobbling up memory? Code at: http://pastie.org/955960 Thanks! -Neal

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit null,
            BitField2 bit null,
            ...
            BitFieldN bit null
        )

    to

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit not null,
            BitField2 bit not null,
            ...
            BitFieldN bit not null
        )

        alter table MyTable add constraint DF_BitField1 default 0 for BitField1
        alter table MyTable add constraint DF_BitField2 default 0 for BitField2
        alter table MyTable add constraint DF_BitField3 default 0 for BitField3

    So I've just gone into SQL Server Management Studio and updated all these fields to non-nullable, default value 0. And guess what: when I try to save the change, Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form:

        update MyTable set BitField1 = 0 where BitField1 is null
        update MyTable set BitField2 = 0 where BitField2 is null

    but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
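
    One way to avoid writing the N statements by hand (a sketch, assuming every nullable bit column of MyTable should become NOT NULL with a default of 0) is to generate the script from the catalog and run the output against each database:

        -- Hypothetical generator: emits UPDATE + ALTER statements for every
        -- nullable bit column of MyTable. Review the output, then execute it.
        SELECT 'UPDATE MyTable SET [' + c.name + '] = 0 WHERE [' + c.name + '] IS NULL; '
             + 'ALTER TABLE MyTable ALTER COLUMN [' + c.name + '] bit NOT NULL; '
             + 'ALTER TABLE MyTable ADD CONSTRAINT [DF_' + c.name + '] DEFAULT 0 FOR [' + c.name + '];'
        FROM sys.columns c
        WHERE c.object_id = OBJECT_ID('dbo.MyTable')
          AND c.is_nullable = 1
          AND TYPE_NAME(c.user_type_id) = 'bit';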

    Read the article
