Search Results

Search found 31902 results on 1277 pages for 'sql backup'.

  • Time Machine vs Source Control?

    - by Blub
    Finally got convinced to start using some kind of version control for my code instead of zipping up a copy of the project at the end of each day. Downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out, check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X but it looks like it's exactly what I'm looking for. Does such a program exist for Windows? Preferably free and with some kind of tagging system so I can tag a timestamp when the project is working etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.

    Read the article

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a dataflow task, I can slip a rowcount into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the rowcount was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number greater than 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?
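
    A possible workaround (a sketch only; the table and column names below are placeholders, not taken from the question) is to write the lookup so it always returns exactly one row by wrapping the columns in aggregates with sentinel defaults, so the variable assignment in the SQL task never fails and a later precedence constraint can test the sentinel:

        SELECT ISNULL(MIN(CustomerID), -1) AS CustomerID,  -- -1 signals "no row found"
               COUNT(*)                    AS MatchCount   -- branch on this downstream
        FROM   dbo.Customer
        WHERE  CustomerCode = ?                            -- OLE DB parameter marker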

    Read the article

  • What are the best tools for modeling a pre-existing SQL database structure?

    - by Ejoso
    I have a MS SQL database that has been running strong for 10+ years. I'd like to diagram the database structure, without spending hours laying it all out in Visio or something similar... I've seen nice models diagrammed before, but I have no idea how they were created. From what I've seen - those models were created in advance of the database itself to assist in clarifying the relationships... but my database already exists! Anyone have any suggestions for tools that would work, or methods I could employ to tease out a nice clean document describing my database structure? Thanks in advance!
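
    If a dedicated modeling tool ends up being overkill, one low-effort starting point (a sketch; it assumes only the SQL Server system catalog views, nothing from the question) is to pull the existing relationships straight out of the catalog and feed them into whatever diagramming or documentation format you prefer:

        -- List every foreign key with its child and parent table.
        SELECT fk.name                              AS ForeignKeyName,
               OBJECT_NAME(fk.parent_object_id)     AS ChildTable,
               OBJECT_NAME(fk.referenced_object_id) AS ParentTable
        FROM   sys.foreign_keys AS fk
        ORDER BY ChildTable, ParentTable;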

    Read the article

  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudo code):

        storedprocedure param1, param2, param3, param4
        begin
            if (param4 = 'Y')
            begin
                select * from SOME_VIEW order by somecolumn
            end
            else if (param1 is null)
            begin
                select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                  and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
                order by somecolumn
            end
            else
                select somethingcompletelydifferent
        end

    All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this:

        storedprocedure param1, param2, param3, param4
        begin
            if (param4 = 'Y')
            begin
                set param2 = null
                set param3 = null
            end
            if (param1 is null)
            begin
                select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                  and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
                order by somecolumn
            end
            else
                select somethingcompletelydifferent
        end

    ...and it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question: is this particular "feature" specific to SQL Server, or is it common among RDBMSs in general that queries that ran fine for two years just stop working as the data grows, and that the "new" execution plan destroys the ability of the database server to execute the query even though a logically equivalent alternative runs just fine? This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know if others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious whether others with large complex databases have similar experience in other products.
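
    For what it's worth, another commonly cited SQL Server workaround for this kind of plan regression (a sketch only, not necessarily the right fix for this particular view) is to ask for a fresh plan on every execution instead of resetting the parameters:

        select * from SOME_VIEW
        where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
          and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
        order by somecolumn
        option (recompile)  -- compile this statement with the actual parameter values each run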

    Read the article

  • Which method of SQL Server 2005 or 2008 Replication is best for ease of field changes?

    - by Rick
    We need 15-minute warm updates from one SQL Server to another. Log Shipping looks good and appears easy to set up. We are also looking into Transactional Replication. The data only needs to copy one way. We have two main requirements: 1) The destination database needs to be at most a 15-minute-old copy of the source. It needs to retry and get up to date if a network cable is unplugged for a while. 2) We would really like table changes in the source (fields added or modified) to carry over as easily as possible. Thanks in advance for all suggestions.

    Read the article

  • What would cause SQL 2008 Log Reader Agent to fail with "This process could not execute 'sp_replcmds' "?

    - by Rick
    I've seen this error message in other posts, but they didn't seem to help resolve our issue. We are trying this with two SQL Server 2008 servers. I backed up my database from the source server and then restored it on our destination server. We set up basic Transactional Replication. The Snapshot Agent is working fine. The Log Reader Agent fails with the error above. Is it most likely a login issue for this job, or a QueryTimeout?
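
    One cause worth ruling out (an assumption on my part; the question doesn't show the agent's security settings): after a backup/restore, the owner of the published database can end up invalid, and the Log Reader Agent then fails on sp_replcmds. A quick check and fix might look like this (the database name is a placeholder):

        -- Show the current owner of the published database.
        SELECT name, SUSER_SNAME(owner_sid) AS owner_name
        FROM   sys.databases
        WHERE  name = N'MyPublishedDb';

        -- If owner_name came back NULL, reset the owner to a valid login.
        ALTER AUTHORIZATION ON DATABASE::MyPublishedDb TO sa;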

    Read the article

  • Why can't I defragment my SQL 2008 .mdf file?

    - by LesterDove
    I am defragmenting a badly (95%) fragmented drive upon which large (35 GB) SQL Server 2008 .mdf files live. After defragmenting and viewing the exception report, I see that the production .mdf file that I'm most interested in could not be defragmented. I initially figured it was because MSSQL had an exclusive lock on the file, so I detached it and tried again. No luck - this particular .mdf file could not be defragmented. What am I missing? Most online references suggest that I should be able to defragment an .mdf at the file level. A note: yes, I'm talking about file defragmentation, not index defragmentation, which is already being done routinely, and which I'll re-run after this. Thanks!

    Read the article

  • SQL statement question. Need to query 3 tables in one go!

    - by Stefan
    Hey there, I have a SQL database. In this database are 3 tables I need to query. The first table has all the item info and is called item; the other two hold user activity: userComment for comments and userItem for votes. I currently have a function which uses this SQL query to get the latest more popular items (in terms of both votes and comments):

        $sql = "SELECT itemID, COUNT(*) AS cnt FROM (
                    SELECT `itemID` FROM `userItem`
                    WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY
                    UNION ALL
                    SELECT `itemID` FROM `userComment`
                    WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
                ) q
                GROUP BY `itemID`
                ORDER BY cnt DESC";

    I know how to change this for either votes alone or comments alone. HOWEVER - I need to query the database to only return the itemIDs of the ones which match specific conditions found only in the item table, namely WHERE categoryID = 'xx' AND typeID = 'xx'. If the SQL ninja could please help me on this one? Do I have to first return the results from the above query and then, for each row in the fetched array, check it against the item table to see if it fits the conditions and build a new array - or is that overkill? Thanks, Stefan
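
    It should not need a second pass: one option (a sketch that keeps the question's table and column names, everything else assumed) is to join the union of recent activity back to item and let the WHERE clause on item do the filtering in the same statement:

        SELECT i.itemID, COUNT(*) AS cnt
        FROM item AS i
        JOIN (
            SELECT `itemID` FROM `userItem`
            WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY
            UNION ALL
            SELECT `itemID` FROM `userComment`
            WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
        ) AS q ON q.itemID = i.itemID
        WHERE i.categoryID = 'xx'
          AND i.typeID = 'xx'
        GROUP BY i.itemID
        ORDER BY cnt DESC;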

    Read the article

  • Backing up a Windows 7 partition from a MacBook with no OS X

    - by mattcodes
    I have a 3-year-old MacBook with Windows 7 installed on a 40 GB partition and OS X on the other 40 GB (80 GB HD). I want to remove OS X as I'm at the limit of 40 GB on Windows and I have not logged on to Mac OS X since installing Win7 (don't flame me). So I want to delete the OS X partition and expand my Windows partition to 80 GB, BUT I still would like to be able to regularly (once a week/month) back up my Windows 7 partition - it took a while to set everything up right - not just docs and programs - so when the hard drive dies I want to be able to restore the partition and boot away (the daily volatile bits I can pull down from Dropbox and the project from source control). With Mac OS X I could use Winclone - and this worked flawlessly last time the HD failed, with XP - but in the absence of OS X I will need something else. I'm thinking: can I use a Linux live boot CD along with an external USB hard drive? Boot from CD and then dd? the partition to the USB? What Linux distro live CD should I use? I say dd as if I know what I am talking about (I don't). Is this the best way to back up a partition (when it will be restored to the same hardware as bootable)? What command?

    Read the article

  • Web & SQL Hosting 32 vs. 16 GB of RAM

    - by TravisK
    I'm in the market for a new dedicated host for my website. My question: I can pay more to upgrade to 32 GB of RAM, but that seems overkill for my website right now; in fact, 16 GB seems a little overkill. However, I run a lot of pretty intense full text searches for my site. I'm wondering if SQL Server would benefit, or could be configured to use the 32 GB of RAM if I purchase the additional memory, to help speed things up. I am assuming that most of my latency is caused by disk I/O and that for the extra money spent on RAM, I might not see any improvement in overall speed?
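
    If the larger box is chosen, note that SQL Server will only use the extra memory up to its configured cap; a sketch of raising that cap (the 28 GB figure is just an illustrative value that leaves headroom for the OS and web server):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- Value is in MB: cap SQL Server at roughly 28 GB, leaving the rest for the OS.
        EXEC sp_configure 'max server memory (MB)', 28672;
        RECONFIGURE;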

    Read the article

  • Windows disk change monitoring for malware analysis

    - by SuperDuck
    Not sure if this question belongs here, because it has some relation to both 'serverfault' (system backups) and 'stackoverflow' (software analysis). I'm looking for a solution to monitor disk changes on a Windows system and selectively revert them. It should be able to handle live files like registry parts, so it may need to be an offline backup solution. It shouldn't silently pass over files which the current admin user doesn't have permissions on (files with no permission entries or owned by the 'system' user). Registry change tracking would be a bonus but is not a requirement. I use virtual machines for malware analysis; there is no solution even to list file changes in disk snapshot files (delta VMDK). I currently use Ashampoo for monitoring changes. Though it's the best among similar tools, it's not good software and hasn't really evolved across the many 'platinum' and 'deluxe' versions released in the last 10 years (it even used non-resizable windows until the latest version). The real problem is it misses some disk/registry changes. Perhaps it only compares modification dates and doesn't catch a change if the dates are preserved. So, I think the solution should compare files using hashes, or file sizes at least. There is plenty of backup software out there and I'm sure one of them can handle this, offline or online.

    Read the article

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers using Windows XP Pro. I want to synchronize/back up a folder from one machine to another one. So far it's a simple problem, and I've used FreeFileSync for such operations, with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links; using such backup programs results in hard links being treated as multiple files and copied as such, greatly increasing folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd party tools to do it, as obviously the standard Windows shares approach won't work... I am guessing there might be some tool that can be installed on both machines and works in a server/client mode? Does anyone have any idea how to do this?

    Read the article

  • Is there a way to load an existing connection string for Linq to SQL from an app.config file?

    - by Brian Surowiec
    I'm running into a really annoying problem with my Linq to SQL project. When I add everything under the web project, everything goes as expected and I can tell it to use my existing connection string stored in the web.config file, and the Linq code pulls directly from the ConfigurationManager. This all turns ugly once I move the code into its own project. I've created an app.config file and put the connection string in there as it was in the web.config, but when I try to add another table, the IDE keeps forcing me to either hardcode the connection string or it creates a Settings file and puts it in there, which then adds a new entry into the app.config file with a new name. Is there a way to keep my Linq code in its own project yet still refer back to my config file, without the IDE continuously hardcoding the connection string or creating the Settings file? I'm converting part of my DAL over to use Linq to SQL, so I'd like to use the existing connection string that our old code is using, as well as keep the value in a common location - one spot instead of a number of spots. Manually changing the mode to WebSettings instead of AppSettings works until I try to add a new table; then it goes back to hardcoding the value or recreating the Settings file. I also tried switching the project type to a web project and then renaming my app.config to web.config, and then everything works as I'd like it to. I'm just not sure if there are any downfalls to keeping this as a web project since it really isn't one. The project only contains the Linq to SQL code and an implementation of my repository classes. My project layout looks like this:

        Website
            - connectionString.config
            - web.config (refers to connectionString.config)
        Middle Tier
            - Business Logic
            - Repository Interfaces
            - etc.
        DAL
            - Linq to SQL code
            - Existing SPROC code
            - connectionString.config (linked from the web project)
            - app.config (refers to connectionString.config)

    Read the article

  • How can I retrieve a MS SQL Express Database from a non-booting computer?

    - by Redandwhite
    A client has a very important database that has not been backed up in 6 months. The PC has promptly failed. The Windows directory is corrupt, and the computer will not boot. It had a Microsoft SQL Server Express 2005 database on it. I have access to the hard drive by booting in with an Ubuntu Live CD, but I am not sure if I can find the database. I am not sure what I am looking for, or where to look either. The dead machine had Windows XP on it.
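
    For reference, the data for a default SQL Server Express 2005 instance usually lives in .mdf and .ldf files (typically somewhere under Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data, though the path can vary). Once those files are copied off the old drive via the Ubuntu live session, a sketch of re-attaching them on a healthy SQL Server installation (file names and paths below are placeholders):

        -- Attach the recovered files on a working server.
        CREATE DATABASE ClientDb
        ON (FILENAME = N'C:\Recovered\ClientDb.mdf'),
           (FILENAME = N'C:\Recovered\ClientDb_log.ldf')
        FOR ATTACH;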

    Read the article

  • Is there any method of backing up Google Drive files in some sort of versioning system?

    - by VictorKilo
    Backstory: My company is utilizing Google Drive for our shared files. Each user has their own Drive account. In addition, we have a corporate Drive account which holds documents which are shared to each user. Each folder is shared to different users depending on their permissions and positions in the company. Many users are able to add files and update folders within this shared Drive account. This is fine. What is not fine is when someone deletes something that they shouldn't. I have little to no way of knowing when a file is deleted wrongfully. Furthermore, anything that gets deleted goes into the trash bin of the file's creator, so I can't just restore it from the trash. Question: Is there any method of backing up Google Drive files in some sort of versioning system that would allow me to revert files back to defined points in time? What I have tried: I currently have this corporate Drive account synced up to my personal computer through the Google Drive application. Each night, I run a backup on the files using Windows "Backup and Restore." This allows me to at least get back files that are lost, but I would like a cleaner method than this. It's very possible that I may not have the very latest version of a document on my computer when the utility runs.

    Read the article

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, huge mess for mysql), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows me to make decent backups. As more apps are added, it should not be necessary to change the backup script much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick

    Read the article

  • Problems with Database Search Code (ASP.NET VB)

    - by Phil
    Here is a sample of my database search code:

        Dim sql As String = "Select * From Table Where "

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            Dim andor As Boolean = AndOr1.SelectedValue 'selection can be AND or OR (0 / 1)

            'Code for when the user selects AND
            If NameSearch.Text.ToString IsNot String.Empty And andor = 0 Then
                sql += "Surname LIKE '%" & name & "%' AND "
            End If
            If EmailSearch.Text.ToString IsNot String.Empty And andor = 0 Then
                sql += "Email LIKE '%" & email & "%' AND "
            End If
            If CitySearchBox.Text.ToString IsNot String.Empty And andor = 0 Then
                sql += "City LIKE '%" & city & "%' AND "
            End If

            'Code for when the user selects OR
            If NameSearch.Text.ToString IsNot String.Empty And andor = 1 Then
                sql += "(Surname LIKE '%" & name & "%' OR "
            End If
            If EmailSearch.Text.ToString IsNot String.Empty And andor = 1 Then
                sql += "Email LIKE '%" & email & "%') OR "
            End If
            If CitySearchBox.Text.ToString IsNot String.Empty And andor = 1 Then
                sql += "(City LIKE '%" & city & "%' OR "
            End If

            sql = CleanString(sql)
        End Sub

    When the user selects AND (as andor.selectedvalue(0)) then the sql is produced fine, like this:

        Select * From Table Where Surname LIKE '%test%' AND Email LIKE '%test%' AND City LIKE '%test%'

    But if the user selects OR (as andor.selectedvalue(1)), nothing is outputted except:

        Select * From Table Where

    I'm sure the controls have values, so they are not string.empty, and when the user selects OR the correct value 1 is being assigned to andor.

    Read the article

  • How can I concisely copy multiple SQL rows, with minor modifications?

    - by Steve Jessop
    I'm copying a subset of some data, so that the copy will be independently modifiable in future. One of my SQL statements looks something like this (I've changed table and column names):

        INSERT Product (ProductRangeID, Name, Weight, Price, Color, And, So, On)
        SELECT @newrangeid AS ProductRangeID, Name, Weight, Price, Color, And, So, On
        FROM Product
        WHERE ProductRangeID = @oldrangeid and Color = 'Blue'

    That is, we're launching a new product range which initially just consists of all the blue items in some specified current range, under new SKUs. In future we may change the "blue-range" versions of the products independently of the old ones. I'm pretty new at SQL: is there something clever I should do to avoid listing all those columns, or at least avoid listing them twice? I can live with the current code, but I'd rather not have to come back and modify it if new columns are added to Product. In its current form it would just silently fail to copy the new column if I forget to do that, which should show up in testing but isn't great. I am copying every column except for the ProductRangeID (which I modify), the ProductID (incrementing primary key) and two DateCreated and timestamp columns (which take their auto-generated values for the new row). Btw, I suspect I should probably have a separate join table between ProductID and ProductRangeID. I didn't define the tables. This is in a T-SQL stored procedure on SQL Server 2008, if that makes any difference.
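
    There is no way to say "all columns except these" directly in an INSERT ... SELECT, but the column list can be generated from the catalog so new columns are picked up automatically. A sketch (the excluded column names follow the question's description; the timestamp column name is a placeholder - adjust to the real schema):

        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        -- Build a comma-separated list of every Product column except the ones
        -- that must not be copied verbatim.
        SELECT @cols = STUFF((
            SELECT ', ' + QUOTENAME(c.name)
            FROM   sys.columns AS c
            WHERE  c.object_id = OBJECT_ID(N'dbo.Product')
              AND  c.name NOT IN (N'ProductID', N'ProductRangeID', N'DateCreated', N'RowTimestamp')
            ORDER BY c.column_id
            FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '');

        SET @sql = N'INSERT dbo.Product (ProductRangeID, ' + @cols + N') '
                 + N'SELECT @newrangeid, ' + @cols
                 + N' FROM dbo.Product WHERE ProductRangeID = @oldrangeid AND Color = ''Blue'';';

        EXEC sp_executesql @sql,
             N'@newrangeid int, @oldrangeid int',
             @newrangeid = @newrangeid, @oldrangeid = @oldrangeid;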

    Read the article

  • How can I create two partitions and clone one to the other (using Clonezilla)?

    - by johnny
    I was hoping someone could help. I want to create a "backup" partition. I want to create two partitions on my drive. One holds a good install; I then want to use Clonezilla to copy the good partition to the broken/unused partition and have the restored partition boot up as usual. Example: C: goes bad. D: is a "good" copy of C. C gets a corrupt registry. I restore D to C, and C will then boot up as usual. So, I need to do the clone with Clonezilla and the restore with the same. I see the part_...clone and restore. Will this do it? How do I get the partitions? EDIT: I am using XP. How can I do this? Also, I know this is not the best thing for all occasions. I have an offline backup as well. I would like to have both. Thanks for any help. I'm using Clonezilla if it matters.

    Read the article

  • Drop all foreign keys in a table

    - by trnTash
    I had this script, which worked in SQL Server 2005:

        -- t-sql scriptlet to drop all constraints on a table
        DECLARE @database nvarchar(50)
        DECLARE @table nvarchar(50)

        set @database = 'dotnetnuke'
        set @table = 'tabs'

        DECLARE @sql nvarchar(255)
        WHILE EXISTS(select * from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
                     where constraint_catalog = @database and table_name = @table)
        BEGIN
            select @sql = 'ALTER TABLE ' + @table + ' DROP CONSTRAINT ' + CONSTRAINT_NAME
            from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
            where constraint_catalog = @database and table_name = @table
            exec sp_executesql @sql
        END

    It does not work in SQL Server 2008. How can I easily drop all foreign key constraints for a certain table? Does anyone have a better script?
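
    A variant that targets only foreign keys through the sys.foreign_keys catalog view (a sketch; the table name comes from the question, everything else is an assumption) behaves the same way on 2005 SP2+ and 2008:

        DECLARE @sql nvarchar(max) = N'';

        -- Build one ALTER TABLE ... DROP CONSTRAINT statement per foreign key on the table.
        SELECT @sql = @sql
            + N'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
            + N'.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
            + N' DROP CONSTRAINT ' + QUOTENAME(fk.name) + N';' + CHAR(10)
        FROM  sys.foreign_keys AS fk
        WHERE fk.parent_object_id = OBJECT_ID(N'dbo.tabs');

        EXEC sp_executesql @sql;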

    Read the article

  • Is SQL Express 2008 subject to the same 5 connection limitation as file sharing on Windows XP?

    - by RichieACC
    File sharing on Windows XP has a 5 client limitation. Our solution uses both file sharing and SQL Express. The way I see it, we have 2 options here:

        - Reload the machine that they want to use as a server with Windows Server, or
        - Supply them with a dedicated NAS server, and keep their server machine on Windows XP.

    The second option is the preferred one, for reasons I'm not going to go into. I just need to confirm that the 5 client limitation applies to the file sharing only.

    Read the article

  • Copy a hard drive from a failed desktop machine using a second working one.

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80 GB IDE drive. I also have PC-B, running Windows Vista; this is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave to PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy user folders (i.e. Documents and Settings/[username]/*) I am told that I cannot access the folders due to user permissions. I am doing this under an administrator account on PC-B. So the question is, how can I "back up" the data? Preferably without making any changes to the drive contents. The reason for this is that it is possible that PC-A is failing due to a bad PSU, so I intend to replace it before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.

    Read the article

  • Is 10% too much for autogrow on a 4 GB SQL Server DB?

    - by ntsue
    I am getting the following error:

        2011-03-07 21:59:35.73 spid64  Autogrow of file 'MYDB_DATA' in database 'MYDB' was cancelled by user or timed out after 16078 milliseconds. Use ALTER DATABASE to set a smaller FILEGROWTH value for this file or to explicitly set a new file size.

    I did some research, and I found that for large databases you should set autogrow to a fixed size (MB), and not to a percentage. I feel like this database is not large and I may not be addressing the correct issue by changing this value. Does anyone have any opinions? Thank you! EDIT: I should have specified SQL Server 2008 RC2 running on Windows Server 2008.
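
    For reference, the change the error message itself suggests - switching the data file from percentage growth to a fixed increment - is a one-liner (the 256 MB increment is just an illustrative value; the logical file name is taken from the error message):

        ALTER DATABASE MYDB
        MODIFY FILE (NAME = N'MYDB_DATA', FILEGROWTH = 256MB);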

    Read the article

  • Identity alternative for SQL Azure Federation: are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    As many developers are, I'm looking for a way to integrate my existing app with SQL Azure Federations, and replacing the Identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the debate about GUIDs or not, it's not my question: I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id, which is "farm-ready". I've read this SO post: "ID Generation for Sharded Database (Azure Federated Database)" and "Synchronizing Multiple Nodes in Windows Azure from MSDN Magazine", but this solution sounds a bit complicated for me. I'm thinking about creating (automatically) one Azure queue for each SQL table, containing a pre-loaded list of consecutive integers. When I want an Id value, I just have to get a message from the queue (which becomes invisible and is deleted on the way), which gives me the current available Id. As for the choice between "Windows Azure Queues" and "Windows Azure Service Bus Queues", I prefer "Windows Azure Queues", due to the "high" latency of Service Bus Queues. I don't think that the lack of an "ordering guarantee" in Azure Queues is a problem. What do you think about that idea of using Azure Queues to provide Id values? Do you see any argument to give up that idea? Do you have a better idea, or even a good practice, to provide integer ids in SQL Azure Federation databases? Thanks.
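
    One pattern sometimes used instead of queues (a sketch only; all names are placeholders): keep a small allocator table in a central, non-federated database and have each caller reserve a block of ids in a single atomic UPDATE, then hand them out locally from memory:

        -- One row per logical table whose ids are allocated centrally.
        CREATE TABLE dbo.IdAllocator (
            TableName sysname NOT NULL PRIMARY KEY,
            NextId    bigint  NOT NULL
        );

        -- Reserve a block of 100 ids; OUTPUT captures the old and new counter values.
        DECLARE @blockSize int = 100;
        DECLARE @range TABLE (FirstId bigint, LastId bigint);

        UPDATE dbo.IdAllocator
        SET    NextId = NextId + @blockSize
        OUTPUT deleted.NextId, inserted.NextId - 1 INTO @range (FirstId, LastId)
        WHERE  TableName = N'Orders';

        SELECT FirstId, LastId FROM @range;   -- ids FirstId..LastId are now reserved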

    Read the article
