Search Results

Search found 6200 results on 248 pages for 'recovery partition'.


  • Free space on SSD (over provisioning) per disk or per partition?

    - by Horst Walter
    It is recommended to keep some percentage of an SSD free for relocation (see "Is free space required on an SSD for performance?"). However, is this rule meant per partition or per disk (the whole SSD)? So, if I want to keep 20% free for performance reasons, is it acceptable for one partition to be 95% full while another is almost empty, as long as the overall free disk space is still 20%? Or does each partition have to fulfill the rule of 20% empty space?

    Read the article

  • How to copy VirtualBox VDI contents to a partition and dual boot the OS from it?

    - by Calmarius
    I'm a Linux user, but I keep a compressed Windows XP ISO on a pen drive for the cases where I absolutely need Windows to do something. This works in VirtualBox most of the time. But now I want to play some games, so I would like to run the Windows image natively. My computer doesn't have a CD drive, so I cannot just burn the ISO and install normally. What I'm trying to do is move the installed Windows image to a physical NTFS partition on my HDD and set up GRUB to let me dual boot it. I found many tutorials that deal with writing a VDI to a physical drive, but they assume I want to overwrite my entire drive. Moving the raw disk image with dd to the partition resulted in a corrupt partition. I also tried the VMDK trick to use that empty partition and install Windows on it. Although the text mode phase of the installation finishes without problems, the VM won't work: it either crashes and keeps rebooting, or freezes immediately (depending on how I created the VMDK, with -rawdisk /dev/sda3 or -rawdisk /dev/sda -partition 3).
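
    A hedged sketch of one way this is sometimes approached (not from the original question): a raw dump of the whole virtual disk carries its own MBR and partition table, which is why writing it verbatim into a partition corrupts it. Instead, the NTFS partition can be copied out of the raw image at the right offset. Device names, file names, and the start sector below are assumptions; verify them before running anything.

        # Convert the VDI to a raw image of the whole virtual disk
        # (VBoxManage clonemedium on newer releases, clonehd on older ones).
        VBoxManage clonehd WindowsXP.vdi winxp.img --format RAW

        # Find where the NTFS partition starts inside the image,
        # e.g. start sector 2048 in the listing.
        fdisk -l winxp.img

        # Copy only the NTFS partition out of the image into the target partition,
        # skipping the virtual disk's own partition table.
        sudo dd if=winxp.img of=/dev/sda3 bs=512 skip=2048

    A GRUB chainloader entry pointing at that partition is still needed afterwards, and Windows may refuse to boot until its boot sector and boot.ini are adjusted for the new disk layout.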

    Read the article

  • Hard drive partition size wrong. How do I resize without loss of data?

    - by BreezyChick89
    $ fsck
    fsck from util-linux 2.20.1
    e2fsck 1.42 (29-Nov-2011)
    The filesystem size (according to the superblock) is 610471680 blocks
    The physical size of the device is 536870911 blocks
    Either the superblock or the partition table is likely to be corrupt!

    It should be one partition, but it now shows 2.2 TB partitioned and 0.3 TB unpartitioned. How do I make the first partition correctly be 2.5 TB without destroying whatever is in either partition? I did not use RAID or anything. My devices have been repeatedly corrupted by thunderstorms. People elsewhere seem to recommend doing something like: sudo resize2fs /dev/sdc1 610471680
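
    A cautious sketch of the usual first steps (not from the original question), assuming the missing 0.3 TB sits directly after the partition, the device names from the question are correct, and a backup exists: grow the partition back to its original size so it matches the superblock before touching the filesystem.

        # Check what the partition table currently says (sizes in sectors).
        sudo umount /dev/sdc1
        sudo parted /dev/sdc unit s print

        # Grow partition 1 so it covers the unpartitioned space again.
        # resizepart needs parted >= 3.1; with older versions the partition has to be
        # deleted and re-created with the exact same start sector instead.
        sudo parted /dev/sdc resizepart 1 100%

        # With the partition back at its full size, the superblock and the device size
        # should agree again; only then run a full filesystem check.
        sudo e2fsck -f /dev/sdc1

    Shrinking the filesystem with resize2fs instead would risk cutting off data that lives beyond the current end of the partition, so growing the partition first is the safer direction.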

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful.

    About SQL Server disaster recovery

    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one is immune. What you can do is take all the actions needed to be ready for a disaster and get through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs, and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes will most probably require changing and tweaking the recovery plan.

    What is a good SQL Server disaster recovery plan?

    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.

    Why are transaction logs important?

    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to the state it was in before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, it’s not necessary to re-create the database from its last backup to repair it. The database might still be online, and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore-to-a-point-in-time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup and then restores transaction log backups, one after another, up to the recovery point.
    During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?

    SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match them to values in the MDF file. When you have finally read the transactions executed, you still have to create a script for them.

    How to easily repair a SQL database?

    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both the MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost but the LDF file is available.

    First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.

    To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
    Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and create a redo script by clicking the Create redo script icon in the menu.

    For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file without being shown in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but is quite useful in daily auditing scenarios.

    To repair your SQL database, all you have to do is execute the generated redo script against the restored database, using an integrated development environment tool such as SQL Server Management Studio or any other. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
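
    For comparison with the native restore-to-a-point-in-time option mentioned above, here is a minimal T-SQL sketch using the standard RESTORE commands; the database name, backup file names, and timestamp are assumptions for illustration.

        -- Restore the last full backup without recovering, so log backups can follow.
        RESTORE DATABASE AdventureWorks2014
            FROM DISK = N'D:\Backup\AW2014_full.bak'
            WITH NORECOVERY, REPLACE;

        -- Apply transaction log backups in order.
        RESTORE LOG AdventureWorks2014
            FROM DISK = N'D:\Backup\AW2014_log_01.trn'
            WITH NORECOVERY;

        -- Stop at the desired point in time and bring the database online.
        RESTORE LOG AdventureWorks2014
            FROM DISK = N'D:\Backup\AW2014_log_02.trn'
            WITH STOPAT = '2014-05-01 10:45:00', RECOVERY;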

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing new options for the general backup and restore of Essbase in the first part of this article, this second part deals with the also rather new feature of Transaction Logging and Replay, released in version 11.1, which enhances the existing restore options.

    Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1).

    Even if backups are done on a regular, frequent basis, subsequent data entries, loads, or calculations would not be reflected in a restored database. Activating Transaction Logging can fill that gap and provides you with an option to capture these post-backup transactions for later replay. The following table shows which transactions can be logged when Transaction Logging is enabled:

    To activate it, corresponding statements can be added to the Essbase.cfg file using the TRANSACTIONLOGLOCATION command. The complete syntax reads:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Here appname and dbname are optional parameters giving you the chance, in combination with the ENABLE or DISABLE keyword, to set Transaction Logging for certain applications or databases, or to exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If neither appname nor dbname is defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist or needs to be created before log information can be written to it. NATIVE is a reserved keyword that shouldn’t be changed.

    The following example shows how to first enable logging on a more general level for all databases in the application Sample, followed by a disabling statement on a more granular level for only the Basic database in the application Sample, hence excluding it from being logged:

        TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

    Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings.

    A replay of logged transactions, which may be required after restoring a database, can be done only by administrators. The following options are available: in Administration Services, select Replay Transactions from the right-click menu on the database. Here you can choose to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Or you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database. These sequence IDs (0, 1, 2 … 7 in the screenshot below) are assigned to each logged transaction, indicating the order in which the transaction was performed. This helps to ensure the integrity of the restored data after a replay, as the replay of transactions is enforced in the same order in which they were originally performed. So, for example, a calculation originally run after a data load cannot be replayed before the data load itself has been replayed. After a transaction is replayed, you can replay only transactions with a greater sequence ID.
    For example, replaying the transaction with sequence ID 4 includes all preceding transactions, while afterwards you can only replay transactions with a sequence ID of 5 or greater.

    Tip: After restoring a database from a backup you should always completely replay all logged transactions that were executed after the backup, before executing new transactions.

    But not only the transaction information itself needs to be logged and stored in a specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:

        ARBORPATH/app/appname/dbname/Replay

    These files are then used during the replay of a logged transaction. By default, Essbase archives only data load and rules files for client data loads, but in order to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is:

        TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]

    The [appname [dbname]] argument works the same way as for TRANSACTIONLOGLOCATION; the valid values for the OPTION argument are the following. Make the respective setting for which file copies should be archived, considering from which location transactions usually take place. Selecting the NONE option prevents Essbase from saving the respective files, and the data load cannot be replayed; in this case you must first manually load the data before you can replay the transactions.

    Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.

    You can find more detailed information in the following documents:

    Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1)
    Oracle Essbase Online Documentation (rel. 11.1.2.1)
    Enterprise Performance Management System Documentation (including previous releases)

    Or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page), or in the OU Learning Paths section, where Planning, Essbase, and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning, and Hyperion Web Analysis.

    Disclaimer: All methods and features mentioned in this article must be considered and tested carefully in relation to your environment, processes, and requirements.
    As guidance, please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.
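
    As a small illustration of how the two settings discussed above could sit together in Essbase.cfg (the application, database, log path, and the SERVER_CLIENT archiving choice are assumptions for this sketch):

        TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE ENABLE
        TRANSACTIONLOGDATALOADARCHIVE Sample Basic SERVER_CLIENT

    The first line enables transaction logging for the Basic database of the Sample application; the second archives data load and rules files for both server and client data loads, so those loads can be replayed later. As noted above, the Essbase server must be restarted for the changes to take effect.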

    Read the article

  • AST with fixed nodes instead of error nodes in antlr

    - by ahe
    I have an antlr generated Java parser that uses the C target and it works quite well. The problem is I also want it to parse erroneous code and produce a meaningful AST. If I feed it a minimal Java class with one import after which a semicolon is missing, it produces two "Tree Error Node" objects where the "import" token and the tokens for the imported class should be. But since it parses the following code correctly and produces the correct nodes for it, it must recover from the error by adding the semicolon or by resyncing. Is there a way to make antlr reflect this fixed input, which it produces internally, in the AST? Or can I at least get the tokens/text that produced the "Tree Error Node" objects somehow? In the C target's antlr3commontreeadaptor.c, around line 200, the following fragment indicates that the C target only creates dummy error nodes so far:

        static pANTLR3_BASE_TREE
        errorNode (pANTLR3_BASE_TREE_ADAPTOR adaptor, pANTLR3_TOKEN_STREAM ctnstream,
                   pANTLR3_COMMON_TOKEN startToken, pANTLR3_COMMON_TOKEN stopToken,
                   pANTLR3_EXCEPTION e)
        {
            // Use the supplied common tree node stream to get another tree from the factory
            // TODO: Look at creating the erronode as in Java, but this is complicated by the
            //       need to track and free the memory allocated to it, so for now, we just
            //       want something in the tree that isn't a NULL pointer.
            //
            return adaptor->createTypeText(adaptor, ANTLR3_TOKEN_INVALID, (pANTLR3_UINT8)"Tree Error Node");
        }

    Am I out of luck here, and would only the error nodes the Java target produces allow me to retrieve the text of the erroneous nodes?

    Read the article

  • How can I partition a vector?

    - by Karsten W.
    How can I build a function slice(x, n=2) which returns a list of vectors where each vector, except maybe the last, has length n? For example, slice(letters, 10) would return

        list(c("a", "b", "c", "d", "e", "f", "g", "h", "i", "j"),
             c("k", "l", "m", "n", "o", "p", "q", "r", "s", "t"),
             c("u", "v", "w", "x", "y", "z"))
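
    One possible sketch in base R (not from the original question): split the vector by a group index computed from the element positions.

        slice <- function(x, n = 2) {
          # elements 1..n get group 1, n+1..2n get group 2, and so on;
          # unname() drops the group labels that split() attaches
          unname(split(x, ceiling(seq_along(x) / n)))
        }

        slice(letters, 10)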

    Read the article

  • Table Partitioning

    - by Ankur Gahlot
    How advantageous is it to use partitioning of tables as compared to the normal approach? Is there a sample case or detailed comparative analysis that could statistically (I know this is too strong a word, but it would really help if it were illustrated by some numbers) emphasize the utility of the process. Thanks, Ankur
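
    The question doesn't name a database engine; as a hedged illustration of what table partitioning looks like in practice, here is a minimal SQL Server sketch (the table and the boundary dates are assumptions). The benefit shows up when queries filter on the partitioning column, because only the relevant partitions are scanned, and when old partitions can be switched out instead of deleted row by row.

        -- Map date ranges to partitions: each boundary value starts a new partition.
        CREATE PARTITION FUNCTION pfOrderDate (date)
            AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01');

        -- Keep all partitions on the PRIMARY filegroup for simplicity.
        CREATE PARTITION SCHEME psOrderDate
            AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

        -- The table is physically split by OrderDate; the partitioning column
        -- must be part of the clustered key.
        CREATE TABLE dbo.Orders (
            OrderID   int           NOT NULL,
            OrderDate date          NOT NULL,
            Amount    decimal(10,2) NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY (OrderID, OrderDate)
        ) ON psOrderDate (OrderDate);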

    Read the article

  • Recovering deleted data from an Android SD card?

    - by Nish
    I am trying to make an Android application which would try to recover deleted content from the SD card. How feasible is it? I have the following methods in mind: 1) Since the files are not actually deleted, can I access the file system to see files that have been marked to be overwritten? 2) Or will I have to do header/footer file carving? Is that possible from the application layer of Android? I am concerned about files stored on contiguous sectors, not the fragmented ones. I want to do a basic file retrieval. Please let me know some resources I can use to build this application.

    Read the article

  • Powermock Slows Down Test Startup on Eclipse/Fedora 10 when on NTFS partition

    - by MrWiggles
    I've just started having a proper play with Powermock and noticed that it slows down test startup immensely. A quick look at top while it was running shows that mount.ntfs-3g was taking up most of the CPU. I moved Eclipse and my source directory to ext3 partitions to see if that was the problem, and the tests now start up quicker, but there's still a noticeable delay. Is this normal with Powermock, or am I missing something obvious?

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions:

    Why do the .Net sequence numbers differ in size from the CLFS sequence numbers?
    Why are the .Net sequence numbers variable in length?
    Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words).
    If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s, instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one?
    Alternatively, why bother making SequenceNumber a value type at all?

    It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • firefox: getting access to the list of tabs/windows to restore on startup

    - by robb
    Sometimes ffox fails to restore the previously open tabs/windows. This might be happening when some of the urls to be opened are no longer reachable (e.g. behind a vpn) or after the underlying OS (Windows) has been forcibly restarted (e.g. to complete an automated patch installation). Anyway, after restarting, can this list of urls be recovered somehow? Say for example, I was daft enough to have clicked on "start new session". Can I still get access to the old list of open urls? There is the browser history of course, but it contains a lot of stuff - the urls that were open when ffox last exited are not obvious. It would be neat if they were marked in some way - tagged for example. .robb

    Read the article

  • using Linq to partition data into arrays

    - by user200295
    I have an array of elements where each element has a Flagged boolean value:

    1 flagged
    2 not flagged
    3 not flagged
    4 flagged
    5 not flagged
    6 not flagged
    7 not flagged
    8 flagged
    9 not flagged

    I want to break it into arrays based on the flagged indicator. Output:

    array 1 {1,2,3}
    array 2 {4,5,6,7}
    array 3 {8,9}
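
    A possible LINQ-to-Objects sketch (the Element type and its Value/Flagged members are assumptions based on the description above): start a new group index whenever a flagged element is seen, then group by that index.

        using System;
        using System.Linq;

        class Element
        {
            public int Value;      // assumed member names for illustration
            public bool Flagged;
        }

        class Program
        {
            static void Main()
            {
                var items = new[]
                {
                    new Element { Value = 1, Flagged = true },
                    new Element { Value = 2 }, new Element { Value = 3 },
                    new Element { Value = 4, Flagged = true },
                    new Element { Value = 5 }, new Element { Value = 6 }, new Element { Value = 7 },
                    new Element { Value = 8, Flagged = true },
                    new Element { Value = 9 },
                };

                int group = 0;
                // Each flagged element starts a new group; GroupBy collects the runs.
                // Note: the counter relies on the sequence being enumerated once, in order.
                var partitions = items
                    .Select(item => new { item, Group = item.Flagged ? ++group : group })
                    .GroupBy(x => x.Group, x => x.item)
                    .Select(g => g.Select(e => e.Value).ToArray())
                    .ToList();

                foreach (var part in partitions)
                    Console.WriteLine("{" + string.Join(",", part) + "}");
                // prints {1,2,3}  {4,5,6,7}  {8,9}
            }
        }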

    Read the article

  • How to find the latest row for each group of data

    - by Jason
    Hi All, I have a tricky problem that I'm trying to find the most effective method to solve. Here's a simplified version of my View structure.

    Table: Audits

    AuditID | PublicationID | AuditEndDate | AuditStartDate
    1       | 3             | 13/05/2010   | 01/01/2010
    2       | 1             | 31/12/2009   | 01/10/2009
    3       | 3             | 31/03/2010   | 01/01/2010
    4       | 3             | 31/12/2009   | 01/10/2009
    5       | 2             | 31/03/2010   | 01/01/2010
    6       | 2             | 31/12/2009   | 01/10/2009
    7       | 1             | 30/09/2009   | 01/01/2009

    There are 3 queries that I need from this. I need one to get all the data. The next gets only the history data (that is, everything but the latest data item by AuditEndDate), and the last query obtains only the latest data item (by AuditEndDate). There's an added layer of complexity in that I have a date restriction (on a per user/group basis) where certain user groups can only see between certain dates. You'll notice this in the where clause as AuditEndDate<=blah and AuditStartDate=blah.

    For each publication, select all the data available:

        select * from Audits
        where auditEndDate<='31/03/10' and AuditStartDate='06/06/2009';

    For each publication, select all the data but exclude the latest data available (by AuditEndDate):

        select * from Audits
        left join (select AuditId as aid, publicationID as pid and max(auditEndDate) as pend
                   from Audit
                   where auditenddate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid=Audit.pubid
        where pend!=Audits.auditenddate
          and auditEndDate<='31/03/10' and AuditStartDate='06/06/2009' /* user restrict */

    For each publication, select only the latest data available (by AuditEndDate):

        select * from Audits
        left join (select AuditId as aid, publicationID as pid and max(auditEndDate) as pend
                   from Audit
                   where auditenddate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid=Audit.pubid
        where pend=Audits.auditenddate
          and auditEndDate<='31/03/10' and AuditStartDate='06/06/2009' /* user restrict */

    So at the moment, queries 1 and 3 work fine, but query 2 just returns all the data instead of applying the restriction. Can anyone help me? Thanks, jason
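
    As a hedged illustration of the usual pattern for the "everything except the latest row per publication" case (column and table names are taken from the question, the date literal is illustrative, and a plain JOIN is used rather than the LEFT JOIN above):

        -- History only: join each audit to its publication's latest AuditEndDate
        -- and keep the rows that are strictly older than that maximum.
        SELECT a.*
        FROM Audits a
        JOIN (SELECT PublicationID, MAX(AuditEndDate) AS LastEnd
              FROM Audits
              WHERE AuditEndDate <= '2010-03-31'       -- user/group date restriction
              GROUP BY PublicationID) latest
          ON latest.PublicationID = a.PublicationID
        WHERE a.AuditEndDate < latest.LastEnd
          AND a.AuditEndDate <= '2010-03-31';          -- same restriction on the outer query

    Keeping the inner and outer date restrictions identical matters with this pattern; if they differ, a row can end up compared against a "latest" value computed over a different window.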

    Read the article

  • ideas for algorithm? sorting a list randomly with emphasis on variety

    - by Steve Eisner
    I have a table of items with [ID,ATTR1,ATTR2,ATTR3]. I'd like to select about half of the items, but try to get a random result set that is NOT clustered. In other words, there's a fairly even spread of ATTR1 values, ATTR2 values, and ATTR3 values. This does NOT necessarily represent the data as a whole, in other words, the total table may be generally concentrated on certain attribute values, but I'd like to select a subset with more variety. The attributes are not inter-related, so there's not really a correlation between ATTR1 and ATTR2. Any ideas for an efficient algorithm? Thanks! I don't really even know how to search for this :)

    Read the article

  • Losing partitions after every reboot

    - by Winston Smith
    I have an Acer laptop with one hard disk, which up until yesterday had 4 partitions:

    Recovery Partition (13GB)
    C: (140GB)
    D: (130GB)
    OEM Partition (10GB)

    I read that the OEM partition has all the stuff needed to restore the laptop to the factory settings, but since I'd already created restore disks and I needed the space, I wanted to get rid of it. Yesterday, I used diskpart to do that. In diskpart, I selected the OEM partition and issued the delete partition override command which removed it. Then I extended the D: partition into the unused space using windows disk management. Everything worked fine, until I rebooted my laptop, at which point the D: drive vanished. Looking in windows disk management again, I can see that there's an OEM partition of 140GB, which is obviously my D: drive. So I used EASEUS Partition Master and assigned a drive letter to the 'OEM' partition and I was able to access my files again. However, every time I reboot, it reverts back. How do I fix this permanently?
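
    One approach that is sometimes suggested for this symptom, offered here only as a hedged sketch: the extended partition may still carry the OEM partition type ID, which some tools treat specially, and diskpart can set the ID back to a plain NTFS data partition on an MBR disk. The disk and partition numbers below are assumptions; confirm them with list disk and list partition first, and back up the data before changing anything.

        rem Run inside diskpart; confirm the numbers before selecting anything.
        select disk 0
        list partition

        rem Select the partition that now shows up as OEM (number 3 is an assumption).
        select partition 3

        rem 07 is the standard NTFS data partition type on MBR disks; OVERRIDE is needed
        rem because diskpart otherwise refuses to touch OEM partitions.
        set id=07 override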

    Read the article

  • SQL Server Query Editors - any that warn of number of rows to be changed?

    - by ciaranarcher
    We're using SQL Server 2000 Query Analyser, and one issue we have is that very occasionally, when a user is updating our live database, they insert the incorrect/no(!) where clause. I know, not good, but it happens. Are there any editors that will warn of the number of rows that might be changed (if that is even possible) or even a way to configure an editor, if it is connected to a certain database, to prompt for confirmation before the query is run? We have a way to recover our data in the cases where we run an incorrect query, but it takes time, and I'm just seeing if there are any ways to catch the error, or at least give the user a second chance. Thanks in advance.
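
    Not an editor feature, but a common manual safeguard that fits this situation (a hedged sketch; the table, column, and predicate are made up): wrap the statement in an explicit transaction, check the affected row count, and only then commit.

        BEGIN TRAN;

        UPDATE dbo.Customers
        SET    Status = 'Closed'
        WHERE  LastOrderDate < '2000-01-01';   -- the clause you hope was not forgotten

        -- How many rows did that just touch?
        SELECT @@ROWCOUNT AS RowsAffected;

        -- If the number looks wrong:  ROLLBACK TRAN;
        -- If it looks right:          COMMIT TRAN;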

    Read the article

  • Does anyone have experience of using the new p4 replicate command in their Perforce back-up/restore process?

    - by Thomas Corriol
    Hi all, we recently performed an upgrade of our whole Perforce system to 2009.02. During this exercise, we noticed that the back-up/restore process that was installed here by the Perforce consultant a year ago was not completely working. Basically, the verify command has never worked (scary!). As we are obliged to revisit our back-up/restore scripts, I was toying with the idea of using the new p4 replicate command. The idea is to use it alongside an rsync of the data files, so that in case of a crash we will lose at worst an hour of work (if we execute them every hour). Does anyone have experience or an example of back-up/restore scripts using the p4 replicate command of the 2009.02 version? Thanks, Thomas

    Read the article

  • My database has suddenly been deleted on the server. How do I recover it?

    - by user2728312
    I'm running an application on a Windows server that connects to a SQL Server database. Today, when I opened SQL Server Management Studio, I was surprised that the database was not in the list of databases! I don't know the reason. I searched the server's files and the Recycle Bin, but I can't find the database. I kept my database at C:\db\myWeb.mdf and suddenly it's been removed! Can anyone tell me how to recover the database?

    Read the article
