Search Results

Search found 2636 results on 106 pages for 'transaction isolation'.


  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF), T: (transaction log) and, of course, shared transparent usage of X: (tempdb). I've been reading around and get the impression that if you are using RAID, then adding multiple SQL Server NDF files sitting on R: within a filegroup won't yield any further improvement. Of course, adding another RAID array S: and putting an NDF file on that would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller MDFs sitting on one RAID array, SQL Server will perform growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sat on R:, would distribute the locking and growth operations, allowing more throughput. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may differ for tables/indices/logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?
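
    For reference, spreading a filegroup across more files is a one-off DDL change. A minimal sketch, assuming a hypothetical database SalesDB and the R: array described above (file names, sizes and paths are illustrative only):

        -- Add two secondary data files to the PRIMARY filegroup on R:.
        -- Pre-sizing them equally also avoids autogrow events entirely.
        ALTER DATABASE SalesDB
        ADD FILE
            (NAME = SalesDB_Data2, FILENAME = N'R:\Data\SalesDB_Data2.ndf',
             SIZE = 4GB, FILEGROWTH = 512MB),
            (NAME = SalesDB_Data3, FILENAME = N'R:\Data\SalesDB_Data3.ndf',
             SIZE = 4GB, FILEGROWTH = 512MB)
        TO FILEGROUP [PRIMARY];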

    Read the article

  • Automatically keeping two excel data tables in-sync (w/out VBA)

    - by Neil
    I'm putting together a workbook for tracking a stock portfolio. The primary sheet contains a table with the list of transactions. From this I would like to create an overview table on another sheet, with only one row per unique stock symbol, that includes things like cost basis, returns, etc. The problem is that nothing I've tried updates the overview table correctly when rows are added to the transaction table. The closest I've got is something like the following: http://www.get-digital-help.com/2009/04/14/create-a-unique-alphabetically-sorted-list-extracted-from-a-column/ However, this requires applying that formula to every cell in the primary column of the overview sheet. And even then, the range of the table isn't extended down to include new rows as they become valid. Essentially I'm looking for a way that auto-adds rows to a table and copies the formula when a different table changes, without using VBA. Trivial example data:

        Sheet1
        Symbol  Type  Shares  Price
        F       Buy   100     12
        MSFT    Buy   100     25
        MSFT    Sell  50      28
        F       Buy   100     16

        Sheet2
        Symbol  Quantity
        F       200
        MSFT    50

    Read the article

  • Could not retrieve backup settings for primary ID in Log shipping

    - by user1723139
    I am doing log shipping between two Amazon EC2 instances running Windows Server 2008 R2 with SQL Server 2008 R2 Standard Edition. Both instances are in the same domain and I can access the shared folders between the instances. The SQL Server service account and agent service account are all running under a domain account. When I activate log shipping (with standby-mode restore on the secondary server), the initial backup gets restored on the secondary. After that, the backup operation fails and I get the following error messages:

        Error: Could not retrieve backup settings for primary ID
        'xxxxxx-xxxx-xxxx-xxxx-4d772cd7337e'. (Microsoft.SqlServer.Management.LogShipping)
        Error: Failed to connect to server IP-0A7653F2. (Microsoft.SqlServer.ConnectionInfo)
        Error: A network-related or instance-specific error occurred while establishing
        a connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured to
        allow remote connections. (provider: Named Pipes Provider, error: 40 - Could
        not open a connection to SQL Server) (.Net SqlClient Data Provider)
        ----- END OF TRANSACTION LOG BACKUP -----

    Any ideas?
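
    As a first troubleshooting step, the backup settings the error refers to live in msdb on the primary server; a hedged sketch to confirm the primary ID and backup share the backup job is looking for:

        -- Run on the primary: list the log shipping primary configuration.
        SELECT primary_id, primary_database, backup_directory, backup_share
        FROM msdb.dbo.log_shipping_primary_databases;

    If the primary_id shown here does not match the one in the error, the configuration is stale and may need to be dropped and recreated.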

    Read the article

  • Slow IE8 Start-up due to LDAP DNS queries

    - by MikeJ-UK
    Recently (in the last few days), my installation of IE8 has been taking 15 to 20 seconds to load my home page. Specifically, the sequence of events (as reported by Wireshark) is:

    1. The browser issues a DNS A query to resolve the home page server's IP address.
    2. The browser then spends the next 15-20 seconds broadcasting DNS SRV _LDAP._TCP queries (roughly on a 2-second tick), to which it receives no answer (we have no LDAP servers).
    3. The browser re-issues the DNS A query and resolves the server's IP address again.
    4. Finally, the browser issues an HTTP GET for the home page.

    Does anyone know why this is happening? Possibly related to this question.

    EDIT: @Massimo, the LDAP query is:

        Domain Name System (query)
            Transaction ID: 0x11c5
            Flags: 0x0100 (Standard query)
            Questions: 1
            Answer RRs: 0
            Authority RRs: 0
            Additional RRs: 0
            Queries
                _LDAP._TCP: type SRV, class IN
                    Name: _LDAP._TCP
                    Type: SRV (Service location)
                    Class: IN (0x0001)

    Read the article

  • Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

    - by Imagineer
    I'm getting the above-mentioned error when backing up with ZRM, which uses mysqldump for backup:

        mysqldump --opt --extended-insert --single-transaction --create-options --default-character-set=utf8 --user=" " -p --all-databases "/nfs/backup/mysql01/dailyrun/20091216043001/backup.sql"
        mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table TICKET_ATTACHMENT at row: 2286

    I have increased 'max_allowed_packet' to 1G in /etc/my.cnf, which is the server setting, and for the client-side setting I've set it by running this command:

        mysql -u -p --max_allowed_packet=1G

    And I have verified that the client and server sides have the same value. This is the check of the client-side value, according to this forum posting http://forums.mysql.com/read.php?35,75794,261640

        mysql> SELECT @@MAX_ALLOWED_PACKET;
        +----------------------+
        | @@MAX_ALLOWED_PACKET |
        +----------------------+
        |           1073741824 |
        +----------------------+
        1 row in set (0.00 sec)

    And this is the check of the server value setting:

        mysql> SHOW VARIABLES;
        ...
        | max_allowed_packet | 1073741824 |
        ...

    I have run out of ideas, and have tried searching within Experts Exchange and googling for solutions, but so far none has worked. Reference: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html Anyone please advise, thank you.
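
    One avenue worth checking (a hedged suggestion, not a confirmed fix): the mysql command above only raises the limit for that single interactive session; mysqldump is a separate client program and reads its own --max_allowed_packet option. A sketch, with credentials and paths elided:

        # Pass the packet limit to mysqldump itself:
        mysqldump --max_allowed_packet=1G --opt --single-transaction \
            --all-databases > backup.sql

        # Or persist it for every mysqldump run in /etc/my.cnf:
        # [mysqldump]
        # max_allowed_packet = 1G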

    Read the article

  • MSSQL Backup Question

    - by MJ
    I'm currently taking over for someone who was in charge of backing up over 250 servers on different platforms, until we hire a replacement. The main question I have is: if we use backup software such as Symantec Backup Exec, does it perform the correct backup for MS SQL Server? I was listening to the Stack Overflow podcast, and I heard them say that you cannot just back up the SQL data files; you also need the transaction log. So, if we just back up the whole machine, would we be able to recover it correctly, since we would be backing up both the data file and the log? Thanks!
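
    For comparison, the native backup pair that a SQL-aware agent effectively performs looks like this; the database name and paths are hypothetical:

        -- Full database backup, then a transaction log backup.
        -- Restoring the pair recovers the database to the end of the log backup.
        BACKUP DATABASE AdventureWorks
            TO DISK = N'E:\Backups\AdventureWorks_full.bak' WITH INIT;
        BACKUP LOG AdventureWorks
            TO DISK = N'E:\Backups\AdventureWorks_log.trn';

    A plain file-level copy of a running MDF is not equivalent; agents rely on the SQL VSS Writer (or the SQL backup API) to get a consistent image.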

    Read the article

  • Changing Recovery Model in Replicated Database

    - by Rob
    I am now the proud owner of two servers that replicate with each other. I had nothing to do with the install, but (of course) now I have to support the databases. Both databases are in the Simple recovery model, but the users want to ensure as little data loss as possible, so I'm thinking that I should change the recovery model over to Full and start doing transaction log backups. I wasn't planning on backing up the subscribing database, only the publisher. Is this the right plan? Do I need to switch both the Subscriber and the Publisher to Full, or can I leave the Subscriber in Simple but have the Publisher in Full? When I change the recovery model in one (or both), do the databases need to be offline? Thanks
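
    For what it's worth, switching recovery model is an online operation, but the log chain only begins at the next full backup. A minimal sketch against a hypothetical PublisherDB:

        -- Switch to full recovery (no downtime required)...
        ALTER DATABASE PublisherDB SET RECOVERY FULL;
        -- ...then take a full backup to anchor the log chain,
        -- after which transaction log backups become meaningful.
        BACKUP DATABASE PublisherDB TO DISK = N'E:\Backups\PublisherDB_full.bak';
        BACKUP LOG PublisherDB TO DISK = N'E:\Backups\PublisherDB_log.trn';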

    Read the article

  • How to Shrink/Reset File stderr1 on a SAP System?

    - by Techboy
    I have a file called stderr1 in the work directory of several of the SAP servers in my production cluster. It has grown to around 19 GB, filling the hard disk on each server. I have deleted all trace files and WP files from within transaction SM50, but that hasn't deleted it (or renamed it to .old). If I try to rename or delete it manually, it says I can't because the file is in use. Please can you tell me how I can delete or shrink the stderr1 file?

    Read the article

  • Concurrent backups in SQL Server?

    - by Mikey Cee
    We currently have our backups managed by a third-party company. There are a bunch of agent jobs created that take full backups (4 times a day) and transaction log backups (4 times an hour). We now want to manage our backups in house, but don't want to disable the third party's jobs until we are sure that we have everything configured correctly internally. So I am proposing a short period (say, a couple of days) where backups are taken by both the old and the new system. I am wondering what the ramifications of having these two different systems both manage backups would be, and what the potential pitfalls of having backups taken simultaneously are. Is this even supported? If so, and bearing in mind that the system can cope with one backup without any noticeable performance degradation, is it fairly logical to assume that it should be able to cope with two simultaneous backups? Currently the load on the server is fairly light and it rarely struggles. Any advice is appreciated.
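
    One concrete pitfall: full backups reset the differential base, and log backups consume the log chain, so two independent schedules can silently break each other's restore sequence. SQL Server 2005 and later offer COPY_ONLY backups for exactly this kind of overlap; a sketch with a hypothetical database name:

        -- A copy-only full backup does not reset the differential base;
        -- a copy-only log backup does not break the log backup chain.
        BACKUP DATABASE SalesDB
            TO DISK = N'E:\Backups\SalesDB_copy.bak' WITH COPY_ONLY;
        BACKUP LOG SalesDB
            TO DISK = N'E:\Backups\SalesDB_copy_log.trn' WITH COPY_ONLY;

    Running one of the two systems copy-only during the trial period sidesteps the interference entirely.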

    Read the article

  • WS-AT Issue between WPS 6.2 and WAS 7.0

    - by AK
    Hi, I have a BPEL process running on WPS 6.2 trying to call a web service developed in RAD 7.5 and deployed on the RAD test environment. I have set up a WS-Transaction policy on both client and server. I get an error on WAS 7.0 saying:

        Must Understand check failed for headers:
        {http://schemas.xmlsoap.org/ws/2004/10/wscoor}CoordinationContext

    I tried generating the same web service in IBM WID 6.2 and deploying the EAR on WAS 7, and it works perfectly. Any thoughts? Is there a SOAP runtime mismatch? Help appreciated. -AK

    Read the article

  • MSSQL: Ultimate configuration for rebuilding indexes + statistics

    - by Niels Bosma
    My database is getting slower as it grows, even though I have a bunch of indexes set up. Yesterday I figured out that I need to set up a maintenance plan to rebuild the indexes etc. So my question is: what's the ultimate configuration for this? Do I need all of "Rebuild index task", "Reorganize index task" and "Update statistics task"? Is there anything else I need to set up, such as shrinking the database? (Today, the only maintenance plans I have are backups.) Does it matter in what order I run them? Are there any configuration options I should be aware of? I've read of problems with the log growing wild; how do I fix that? My transaction log is quite small and is usually a problem for me.
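
    As a rule of thumb (a sketch only; the table name is hypothetical): reorganize lightly fragmented indexes, rebuild heavily fragmented ones, and avoid shrinking, since a shrink re-fragments what was just rebuilt:

        -- Inspect fragmentation, then choose REORGANIZE (roughly 5-30%)
        -- or REBUILD (above 30%).
        SELECT index_id, avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED');

        ALTER INDEX ALL ON dbo.Orders REORGANIZE;  -- light, always online
        -- ALTER INDEX ALL ON dbo.Orders REBUILD;  -- heavier, full rebuild
        UPDATE STATISTICS dbo.Orders;              -- refresh optimizer stats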

    Read the article

  • Database checksum features - redundant? useful?

    - by Eloff
    Just about every mainstream DB has a feature to calculate checksums per page, per sector, or per record. Now, for a DB that does full recovery after any crash, like PostgreSQL, is a checksum even useful? There will be no data loss as long as the xlog is OK, no matter what kind of corruption happened to the data itself, because as the redo log is replayed, every committed transaction will be restored. So checksums are useless on restore. Doesn't the filesystem or disk keep checksums anyway to detect corruption? So unless the checksum is per record, all it does is tell you there is corruption (which the OS should be yelling at you about the minute you try to read it), so it's useless in operation too? I can't imagine how a checksum can be helpful in any sane database, but since they all use them, I'd say that's just a failure of imagination on my part. So how is it useful?
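
    In SQL Server terms, for example, page checksums are a per-database setting; a minimal sketch with a hypothetical database name:

        -- Write a checksum on every page as it is flushed to disk...
        ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;
        -- ...and verify every page checksum without running repairs.
        DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS, PHYSICAL_ONLY;

    The usual rationale: most filesystems do not checksum data blocks at all, so a page checksum is often the only end-to-end check between the buffer pool and the platter, and it catches corruption at read time rather than at restore time.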

    Read the article

  • SQL 2005 Log Shipping - Was working, now isn't!

    - by Jim
    Hello, I had log shipping working fine between two SQL 2005 servers. I suspect that a job was added to the source server which backed up the transaction log to disk (nothing to do with the existing log shipping job). As I understand it, if you do this then log shipping will stop working. Sure enough, it no longer works. I've deleted the job which had just been created. Log shipping still does not work. I've rebooted both servers and, again, log shipping does not work. I'm at a loss now... all I get is the following error:

        The log shipping secondary database XXXXXXXXXX has restore threshold of 45
        minutes and is out of sync. No restore was performed for 5882 minutes.
        Restored latency is 15 minutes. Check agent log and logshipping monitor
        information.

    Any help appreciated! Thanks in advance.
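
    The stray-log-backup theory fits: an extra BACKUP LOG breaks the restore chain, and deleting the job does not repair it; the missed log backup file has to be applied to the secondary manually (or the secondary re-initialized from a fresh full backup). To see where the chain stopped, a hedged sketch against the monitor tables:

        -- On the secondary: which log file was last restored, and when?
        SELECT secondary_database, last_restored_file, last_restored_date
        FROM msdb.dbo.log_shipping_monitor_secondary;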

    Read the article

  • Best solution for High Availability and SSRS on SQL Server 2008 R2?

    - by Chandra
    I have 2 physical servers with SQL Server 2008 R2: SQL Server 1 (active) and SQL Server 2 (passive). The web application is developed using the .NET 4.0 Framework. I want to know the best solution for having high availability and also having SSRS for reporting. Planned solution: mirroring for failover, and transactional replication for SSRS, since the mirrored database can only be used in failover scenarios. SSRS will be on the passive server, to reduce the load on the active server. Let me know if the solution is correct. Also suggest alternate approaches.
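
    For reference, the mirroring half of the plan reduces to a pair of statements once endpoints and security are in place; a heavily simplified sketch with hypothetical names (mirror side first, after restoring the database WITH NORECOVERY):

        -- On SQL Server 2 (mirror):
        ALTER DATABASE AppDB SET PARTNER = N'TCP://sqlserver1.domain.local:5022';
        -- On SQL Server 1 (principal):
        ALTER DATABASE AppDB SET PARTNER = N'TCP://sqlserver2.domain.local:5022';

    Note that with replication and SSRS running on it, the second server is no longer fully passive.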

    Read the article

  • How do you manage Labeled and All-Mail unread mails in Thunderbird with a GMail IMAP account?

    - by Edward Beach
    I use Thunderbird with Gmail via IMAP, and do so with multiple accounts. On the Gmail side I have filters that automatically assign labels to incoming mail and archive it, moving it out of the inbox. On the Thunderbird side, the new mail appears in the corresponding folder and in the All Mail folder; that's fine, but my problem is that both are marked as unread. Since I have many accounts, I use the unread mail view in Thunderbird's folder panel, and what I see is both folders highlighted as unread. When I read the message in one folder, the copy in the other only gets marked as read once I click on it and Thunderbird performs another IMAP transaction. Is there a configuration that will recognize the same mail in two different folders automatically?

    Read the article

  • Excel - Linking multiple source spreadsheets with variable amounts of rows to a destination spreadsh

    - by Emilio
    I have multiple source spreadsheets, each with a variable number of rows. An example might be one spreadsheet per bank account, with one row for each transaction, with a date and amount. One spreadsheet might have 30 rows, the other 50, and so on. I want to create another spreadsheet which links to the various source spreadsheets and lists an aggregate of all transactions from all sources. So with 3 source sheets of 30, 50 and 20 rows respectively, the destination sheet would have 100 rows. The number of rows (transactions) in the source sheets can grow or shrink over time. I'd like the destination sheet to show one contiguous list of transactions without gaps (spaces). How can I do this?

    Read the article

  • Is it possible to upgrade PHP to 5.3 on a Centos Kloxo installation?

    - by Malachi
    I have a VPS running CentOS with Kloxo on it, and I was wondering how I would upgrade PHP to 5.3 (it's currently running 5.2.6). When I try to do a yum update I get the following errors:

        Resolving Dependencies
        --> Running transaction check
        --> Processing Dependency: libpq.so.4 for package: lxphp
        ---> Package postgresql-libs.i386 0:8.3.7-umask.7 set to be updated
        --> Finished Dependency Resolution
        lxphp-5.2.1-400.i386 from installed has depsolving problems
          --> Missing Dependency: libpq.so.4 is needed by package lxphp-5.2.1-400.i386 (installed)
        Error: Missing Dependency: libpq.so.4 is needed by package lxphp-5.2.1-400.i386 (installed)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.

    Any help would be greatly appreciated.

    Read the article

  • SQL server Rebuild Index

    - by Uday
    How can we know, before rebuilding an index, how much space is required for the transaction log file? (I know we may need to consider the SORT_IN_TEMPDB option: if it is set to ON, then we need to ensure tempdb has enough space as well; if it is set to OFF, then the sorting and temporary index structures created during the build phase of the rebuild are placed in the same database.) The users I have checked with usually say: log file size = 1.5 * index size. And how much space is required in the filegroup for the data files? For example, consider one filegroup with 1 MDF + NDF files. This MSDN link has pretty good information about the prerequisites for rebuilding an index: http://msdn.microsoft.com/en-us/library/ms191183.aspx How can I tell, exactly or approximately, the required log/primary filegroup size (or that of any other filegroup)?
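
    One hedged way to approximate the space involved is to measure the index's current size, since the data filegroup needs roughly that much free space for the rebuilt copy (with the 1.5x figure quoted above as a log allowance). A sketch for a hypothetical table:

        -- Approximate size of each index on a table, in KB.
        SELECT i.name AS index_name,
               SUM(au.used_pages) * 8 AS used_kb
        FROM sys.indexes AS i
        JOIN sys.partitions AS p
            ON p.object_id = i.object_id AND p.index_id = i.index_id
        JOIN sys.allocation_units AS au
            ON au.container_id = p.partition_id
        WHERE i.object_id = OBJECT_ID('dbo.Orders')
        GROUP BY i.name;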

    Read the article

  • Correct password for ssh key rejected when ssh'd into machine

    - by user20342
    When I am logged into my machine directly, I can do all git operations, and when prompted for a password, the password is accepted. When I ssh into the same box and run git operations on the same repos, the password is rejected. The relevant section of .ssh/config looks like this:

        # Generic settings
        Host *
            ServerAliveInterval 600
            ControlPath /tmp/ssh-%r@%h:%p
            ControlMaster auto
            KeepAlive yes
            IdentityFile ~/.ssh/id_rsa.pub

    The transaction looks like this when I ssh into my box:

        {12-12-03 9:41}hbrown-wks2:~/workspace/spt/project@master??? hbrown% git pull
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Permission denied (publickey).
        fatal: Could not read from remote repository.
        Please make sure you have the correct access rights and the repository exists.

    Using bash does not appear to make a difference (i.e. ssh-agent /bin/bash). This is a recent development, but I can't cite the change that caused it.

    Read the article

  • MSDTC on server x is unavailable

    - by Fishcake
    I have Windows Server 2003 running in a virtual machine, running some software that is trying to update a database, within transactions, on my Windows 7 machine (the host for the VM). On my host I have edited the settings for the Local DTC by selecting the following:

        Client and Administration:
            Allow Remote Clients
            Allow Remote Administration
        Transaction Manager Communication:
            Allow Inbound
            Allow Outbound
            No Authentication Required

    However, when I try to run the software I receive this error: MSDTC on server 'x' is unavailable. While searching for fixes, most just suggest making sure the service is running, which I have. Cheers!

    Read the article

  • MySQL Windows vs. Linux: performance, caveats, pros and cons?

    - by gravyface
    Looking for (preferably) some hard data, or at least some experienced anecdotal responses, with regard to hosting a MySQL database (roughly 5k transactions a day, 60-70% more reads than writes, < 100k of data per transaction, i.e. no large binary objects like images, etc.) on Windows 2003/2008 vs. a Debian-based derivative (Ubuntu/Debian, etc.). This server will function only as a database server, with a separate web server on another physical box; this server will require remote access for management (SSH for Linux, RDP for Windows). I suspect that the Linux kernel/OS will compete less with the database for resources than Windows Server would, but I can't be certain of this. There's also the security footprint: even with Windows 2008, I'm thinking that the Linux box can be locked down more easily than the Windows server. Anyone have any experience with both configurations?

    Read the article

  • automating sql express backup via VSS backup

    - by Ornus
    I need to set up automated daily SQL database backups on my server (SQL Server Express, so no maintenance plans). To keep things simple I'm going to use a backup solution (JungleDisk) that uses VSS to back up the DB file. SQL Server fully supports VSS and freezes DB I/O on request, so I understand I'm taking snapshots. JungleDisk supports differential backups and compression, so it simplifies things and keeps the cost/bandwidth down. Is it enough to just back up the data file (MDF), or do I need to back up the transaction log (LDF) file as well? I'm OK with losing a day's worth of work (since the last backup). If I go this route, what's the best way to restore the database? Are there any issues with this approach I'm not aware of?
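
    As an alternative worth weighing (a sketch only): Express lacks SQL Agent, but native backups still work and can be scheduled with Windows Task Scheduler running sqlcmd. The database name and path are hypothetical:

        -- daily_backup.sql, run from a scheduled task via:
        --   sqlcmd -S .\SQLEXPRESS -E -i daily_backup.sql
        BACKUP DATABASE MyAppDB
            TO DISK = N'D:\Backups\MyAppDB_daily.bak'
            WITH INIT, CHECKSUM;

    A native .bak restores with a plain RESTORE DATABASE and has no dependence on the VSS snapshot chain.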

    Read the article

  • SQL Server backup and restore process

    - by Nai
    Just wondering what backup processes you guys have. I am currently operating a weekly full database backup with daily differential backups. My understanding is that with such a setup, the difference between the Full recovery model and the Simple recovery model is that with the Full recovery model, I will be able to use the transaction logs to restore my DB to a specific point in time after applying the latest differential backup. Assuming that in my scenario the last differential backup serves as my last and ultimate 'save point', I don't see a need to restore my DB to any finer-grained point using the logs. This brings me to my question: are there any additional benefits to be had from using the Full recovery model with my current backup process?
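
    For context, the point-in-time restore that the Full model (plus log backups) enables looks like this; names, paths and the timestamp are hypothetical:

        RESTORE DATABASE SalesDB FROM DISK = N'E:\Backups\SalesDB_full.bak'
            WITH NORECOVERY;
        RESTORE DATABASE SalesDB FROM DISK = N'E:\Backups\SalesDB_diff.bak'
            WITH NORECOVERY;
        RESTORE LOG SalesDB FROM DISK = N'E:\Backups\SalesDB_log.trn'
            WITH STOPAT = '2010-05-14 14:30:00', RECOVERY;

    Without log backups, the restore stops at the differential, which under the Simple model is the only option anyway.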

    Read the article

  • sql server log shipping

    - by voam
    I am setting up log shipping between two SQL Server 2008 servers connected by a VPN. As far as I know, as long as the SQL Server Agent is able to access the share on the primary/secondary servers, everything will work. When I set up log shipping on the primary server using SQL Server Management Studio, before I can set any of the "Secondary Database Settings" it asks me to connect to the secondary server. But I really don't want to open up a connection to the secondary server. I will be initialising the secondary database with a backup, so as long as the transaction logs get copied, everything should work. How do I work around the GUI not enabling any of the settings for the secondary server until I actually connect to it? Thanks in advance!

    Read the article
