Search Results

Search found 4705 results on 189 pages for 'export to csv'.

Page 74 of 189

  • wx_ref and custom wx_object's

    - by Iogann
    Hi! I am developing an MDI application with the help of wxErlang. I have a parent frame, implemented as a wx_object: -module(main_frame). -export([new/0, init/1, handle_call/3, handle_event/2, terminate/2]). -behaviour(wx_object). .... And I have a child frame, implemented as a wx_object too: -module(child_frame). -export([new/2, init/1, handle_call/3, handle_event/2, terminate/2]). -export([save/1]). -behaviour(wx_object). % some public API method save(Frame) -> wx_object:call(Frame, save). .... I want to call save/1 for the active child frame from the parent frame. Here is my code: ActiveChild = wxMDIParentFrame:getActiveChild(Frame), case wx:is_null(ActiveChild) of false -> child_frame:save(ActiveChild); _ -> ignore end This code fails because ActiveChild is a #wx_ref{} with state=[], but wx_object:call/2 needs a #wx_ref{} whose state is set to the pid of the process we are calling. What is the right way to do this? The only thing I could think of is to store a list of all created child frames with their pids in the parent frame and search for the pid in that list, but this is ugly.

    Read the article

  • Subversion (svn) beginner's questions

    - by Marius
    Hello, here's what I'm trying to do. I have a project in /var/www/project and I'd like to use svn for it. I've installed SVN on my Debian server for this purpose, but I don't understand how to use it, and googling got me even more confused. I'd like to create a repository at /var/svn/project and use it. After some changes occur, I'd like to export all the code back to /var/www/project. Now here's what I've done: I created a repository: svnadmin create /var/svn/project I imported the code: svn import /var/www/project file:///var/svn/project -m "Initial import" I checked out the code with the "Versions" client. Everything seems to work fine, but ... if I go to /var/svn/project, there are no source files from my project there or in any subdirectory, although the svn client is able to check out all of those files. So I've read that in svn, files are not stored separately, in either the Berkeley DB or the FSFS storage format. The question then is ... how do I export the source back to /var/www/project? If I do an svn export command on the /var/svn/project directory, it says I'm not in a working copy :(
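
    A minimal sketch of the export step being asked about (paths taken from the question; the --force flag is an assumption about the desired overwrite behaviour): svn export works against a repository URL or a working copy, not against the repository's on-disk storage directory, so pointing it at file:///var/svn/project rather than /var/svn/project avoids the "not in a working copy" error.

        # export the latest revision from the repository straight into the web root
        svn export --force file:///var/svn/project /var/www/project
        # or export from an existing working copy instead of the repository URL
        svn export --force /path/to/working-copy /var/www/project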

    Read the article

  • Migrate from Oracle to MySQL

    - by Cassy
    Hi all. We ran into serious performance problems with our Oracle database, and we would like to try to migrate to a MySQL-based database (either MySQL directly or, preferably, Infobright). The thing is, we need to let the old and the new system overlap for at least some weeks, if not months, before we actually know whether all features of the new database match our needs. So, here is our situation: The Oracle database consists of multiple tables, each with millions of rows. During the day, there are literally thousands of statements, which we cannot stop for migration. Every morning, new data is imported into the Oracle database, replacing some thousands of rows. Copying this process is not a problem, so we could, in theory, import into both databases in parallel. But, and here lies the challenge, for this to work we need an export from the Oracle database with a consistent state from one day. (We cannot export some tables on Monday and some others on Tuesday, etc.) This means that the export, at least, should finish in less than one day. Our first thought was to dump the schema, but I wasn't able to find a tool to import an Oracle dump file into MySQL. Exporting tables as CSV files might work, but I'm afraid it could take too long. So my question now is: What should I do? Is there any tool to import Oracle dump files into MySQL? Does anybody have any experience with such a large-scale migration? Thanks in advance, Cassy PS: Please don't suggest performance optimization techniques for Oracle, we have already tried a lot :-)
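
    A hedged sketch (not from the original question; credentials, table and column names are placeholders) of how the CSV export could be pinned to a single point in time: capture one SCN, run every per-table extract as a flashback query AS OF that SCN, and load the files into MySQL with LOAD DATA INFILE. All files then describe the same instant even if the whole dump takes hours, assuming flashback query is available and UNDO retention covers the export window.

        # grab one SCN up front so every table export sees the same snapshot
        echo "set heading off feedback off
        select current_scn from v\$database;" | sqlplus -s scott/tiger > scn.txt
        SCN=$(grep -o '[0-9]\+' scn.txt | head -1)

        # per table: spool a CSV as of that SCN; columns are concatenated by hand
        echo "set heading off feedback off pagesize 0 linesize 32767 trimspool on
        spool orders.csv
        select order_id || ',' || customer_id || ',' || amount from orders as of scn $SCN;
        spool off" | sqlplus -s scott/tiger

        # on the MySQL side, something like:
        #   LOAD DATA INFILE 'orders.csv' INTO TABLE orders FIELDS TERMINATED BY ',';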

    Read the article

  • Lost in Nodester Installation

    - by jslamka
    I am trying to install my own version of Nodester. I have tried on Ubuntu 12.04 LTS and now with CentOS. I am not the most skilled Linux user (~2 months use) so I am at a loss at this point. The instructions are located at https://github.com/nodester/nodester/wiki/Install-nodester#wiki-a. They ask you to "export paths (to make npm work)" with the lines necessary to accomplish this. cd ~ echo -e "root = ~/.node_libraries\nmanroot = ~/local/share/man\nbinroot = ~/bin" > ~/.npmrc echo -e "export PATH=&quot;\${PATH}:~/bin&quot;;" >> ~/.bashrc source ~/.bashrc I can accomplish all of this until I get to the source ~/.bashrc line. When I run that, I get the following: [root@MYSERVER ~]# source ~/.bashrc -bash: /root/.bashrc: line 13: syntax error near unexpected token ';;' -bash: /root/.bashrc: line 13: 'export PATH=&quot;${PATH}:~/bin&quot;;; I have tried changing the quot; to " and that didn't help. I tried changing quot; to colons and that didn't help. I also removed that and it didn't help (I am sure many of you at this point are probably wondering why I would even try those things). Does anyone have any insight as to what I need to do to get this to run properly?
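
    For reference, a corrected form of the offending line, under the assumption that the "&quot;" fragments in the wiki snippet are HTML-escaped double quotes that ended up pasted literally into ~/.bashrc:

        # what line 13 of ~/.bashrc was presumably meant to contain
        export PATH="${PATH}:~/bin"
        # after fixing the line, re-run:  source ~/.bashrc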

    Read the article

  • SQL Server – Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at the #TSQL2sDay hashtag. Let me start by saying: This code is a crazy hack that is never to be used unless you really, really have to. Really! And I don’t think there’s a time when you would really have to use it for real. Because it’s a hack there are a number of things that can go wrong, so play with it knowing that. I’ve managed to totally corrupt one database. :) Oh… and for those saying: yeah yeah.. you have a single table in a file group and you’re restoring that, I say “nay nay” to you. As we all know, SQL Server can’t do single table restores from backup. This is kind of an obvious thing due to various relational integrity (RI) concerns. Since we have to maintain that, we have to restore all tables represented in an RI graph. For this exercise I say BAH! to those concerns. Note that this method “works” only for simple tables that don’t have LOB or off-row data. The code can be expanded to include those but I’ve tried to leave things “simple”. Note that for this to work our table needs to be relatively static data-wise. This doesn’t work for OLTP tables. Products are a perfect example of static data. They don’t change much between backups, pretty much everything depends on them, and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode, for tables where the contents have been deleted or truncated but NOT when a table was dropped. Everything we’ll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable, so you have to act fast. The best thing probably is to put the database into single user mode ASAP while you’re performing this procedure and return it to multi user after you’re done. How do we do it? We will be using an undocumented but well-known DBCC command, DBCC PAGE, an undocumented function, sys.fn_dblog, and a little-known DATABASE RESTORE PAGE option. All tests will be on a copy of the Production.Product table in the AdventureWorks database called Production.Product1, because the original table has FK constraints that prevent us from truncating it for testing.
        -- create a duplicate table. This doesn't preserve indexes!
        SELECT *
        INTO AdventureWorks.Production.Product1
        FROM AdventureWorks.Production.Product
    After we run this code, take a full backup to perform further testing. First let’s see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE every row deletion is logged in the transaction log. With TRUNCATE only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple: all we have to look for is the row delete entry in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into the depths of IAM (Index Allocation Map) and PFS (Page Free Space) pages, but suffice it to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren’t.
    If we deep dive into the sys.fn_dblog output we can see that once you truncate a table, all the pages in all the intervals are deallocated, and this is shown in the PFS page transaction log entry as a deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format, which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta’s handy function dbo.HexStrToVarBin that converts a hexadecimal string into a varbinary value that can be easily converted to an integer, thus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this:
        -- [Page ID] is displayed in hex format.
        -- To convert it to a readable int we'll use the dbo.HexStrToVarBin function found at
        -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx
        -- This function must be installed in the master database
        SELECT Context, AllocUnitName, [Page ID], Description
        FROM sys.fn_dblog(NULL, NULL)
        WHERE [Current LSN] = '00000031:00000a46:007d'
    The pages at the end marked with 0x00--> are pages that are allocated in the extent but are not part of a table. We can inspect the raw content of each data page with a DBCC PAGE command:
        -- we need this trace flag to redirect output to the query window.
        DBCC TRACEON (3604);
        -- WITH TABLERESULTS gives us data in table format instead of message format
        -- we use format option 3 because it's the easiest to read and manipulate further on
        DBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS
    Since the DBCC PAGE output can be quite extensive I won’t put it here. You can see an example of it in the link at the beginning of this section.
    Getting deleted data back: When we run a delete statement every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It’s not. Only the pointers to the rows are removed, while the data itself is still on the data page. We just can’t access it with normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data. However, the restore doesn’t magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use DBCC PAGE to read data directly from all data pages and insert that data into a temporary table. To finish the RESTORE PAGE procedure we finally have to take a tail log backup (a simple backup of the transaction log) and restore it back. We can now insert data from the temporary table into our original table by hand.
    Getting truncated data back: When we run a truncate, the truncated data pages aren’t touched at all. Even the pointers to rows stay unchanged. Because of this, getting data back from a truncated table is simple: we just have to find out which pages belonged to our table and use DBCC PAGE to read data off of them. No restore is necessary. It turns out that the problem we had with finding the data pages is alleviated by not having to do a RESTORE PAGE procedure. Stop stalling… show me The Code!
This is the code for getting back deleted and truncated data back. It’s commented in all the right places so don’t be afraid to take a closer look. Make sure you have a full backup before trying this out. Also I suggest that the last step of backing and restoring the tail log is performed by hand. USE masterGOIF OBJECT_ID('dbo.HexStrToVarBin') IS NULL RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database' , 18, 1) SET NOCOUNT ONBEGIN TRY DECLARE @dbName VARCHAR(1000), @schemaName VARCHAR(1000), @tableName VARCHAR(1000), @fullBackupName VARCHAR(1000), @undeletedTableName VARCHAR(1000), @sql VARCHAR(MAX), @tableWasTruncated bit; /* THE FIRST LINE ARE OUR INPUT PARAMETERS In this case we're trying to recover Production.Product1 table in AdventureWorks database. My full backup of AdventureWorks database is at e:\AW.bak */ SELECT @dbName = 'AdventureWorks', @schemaName = 'Production', @tableName = 'Product1', @fullBackupName = 'e:\AW.bak', @undeletedTableName = '##' + @tableName + '_Undeleted', @tableWasTruncated = 0, -- copy the structure from original table to a temp table that we'll fill with restored data @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName + ' SELECT *' + ' INTO ' + @undeletedTableName + ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' + ' WHERE 1 = 0' EXEC (@sql) IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL DROP TABLE #PagesToRestore /* FIND DATA PAGES WE NEED TO RESTORE*/ CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT, [SQLtoExec] VARCHAR(1000)) -- DBCC PACE statement to run later RAISERROR ('Looking for deleted pages...', 10, 1) -- use T-LOG direct read to get deleted data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) EXEC('USE [' + @dbName + '];SELECT FileID, PageID, ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageIDFROM sys.fn_dblog(NULL, NULL)WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' + 'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');SELECT *FROM #PagesToRestore -- if upper EXEC returns 0 rows it means the table was truncated so find truncated pages IF (SELECT COUNT(*) FROM #PagesToRestore) = 0 BEGIN RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1) -- use T-LOG read to get truncated data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) -- dark magic happens here -- because truncation simply deallocates pages we have to find out which pages were deallocated. -- we can find this out by looking at the PFS page row's Description column. -- for every deallocated extent the Description has a CSV of 8 pages in that extent. -- then it's just a matter of parsing it. 
-- we also remove the pages in the extent that weren't allocated to the table itself -- marked with '0x00-->00' EXEC ('USE [' + @dbName + '];DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);INSERT INTO @truncatedPagesSELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages, CHARINDEX('';'', Description) AS IsMultipleDeallocsFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID, DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%'' AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;SELECT FileID, PageID , ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable , SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2 ) as FileID , CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageIDFROM ( SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs FROM ( SELECT *, CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart, CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd FROM @truncatedPages t1 CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end) ROW_NUMBER() OVER(ORDER BY number) as N FROM master..spt_values) t2 )t)t)tWHERE WasPageAllocatedToTable = ''Y''') SELECT @tableWasTruncated = 1 END DECLARE @lastID INT, @pagesCount INT SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount) IF @pagesCount = 0 RAISERROR ('No data pages to restore.', 18, 1) ELSE RAISERROR (@sql, 10, 1) -- If the table was truncated we'll read the data directly from data pages without restoring from backup IF @tableWasTruncated = 0 BEGIN -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200 WHILE @lastID <= @pagesCount BEGIN -- create CSV string of pages to restore SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID) FROM #PagesToRestore WHERE ID BETWEEN @lastID AND @lastID + 200 ORDER BY ID FOR XML PATH('')), 1, 1, '') SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + '''' RAISERROR ('Starting RESTORE command:' , 10, 1) WITH NOWAIT; RAISERROR (@sql , 10, 1) WITH NOWAIT; EXEC(@sql); RAISERROR ('Restore DONE' , 10, 1) WITH NOWAIT; SELECT @lastID = @lastID + 200 END /* If you have any differential or transaction log backups you should restore them here to bring the previously restored data pages up to date */ END DECLARE @dbccSinglePage TABLE ( [ParentObject] NVARCHAR(500), [Object] NVARCHAR(500), [Field] NVARCHAR(500), [VALUE] NVARCHAR(MAX) ) DECLARE @cols NVARCHAR(MAX), @paramDefinition NVARCHAR(500), @SQLtoExec VARCHAR(1000), @FileID VARCHAR(100), @PageID VARCHAR(100), @i INT = 1 -- Get deleted table columns from information_schema view -- Need sp_executeSQL because database name can't be passed in as variable SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNSWHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''ORDER BY ORDINAL_POSITIONFOR XML 
PATH('''')), 1, 2, '''')', @paramDefinition = N'@cols nvarchar(max) OUTPUT' EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT -- Loop through all the restored data pages, -- read data from them and insert them into temp table -- which you can then insert into the orignial deleted table DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR SELECT [FileID], [PageID], [SQLtoExec] FROM #PagesToRestore ORDER BY [FileID], [PageID] OPEN dbccPageCursor; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; WHILE @@FETCH_STATUS = 0 BEGIN RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT; SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i); RAISERROR (@sql, 10, 1) WITH NOWAIT; SELECT @sql = 'Running: ' + @SQLtoExec RAISERROR (@sql, 10, 1) WITH NOWAIT; -- if something goes wrong with DBCC execution or data gathering, skip it but print error BEGIN TRY INSERT INTO @dbccSinglePage EXEC (@SQLtoExec) -- make the data insert magic happen here IF (SELECT CONVERT(BIGINT, [VALUE]) FROM @dbccSinglePage WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('['+@dbName+'].['+@schemaName +'].['+@tableName+']') BEGIN DELETE @dbccSinglePage WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %') SELECT @sql = 'USE tempdb; ' + 'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' + 'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' + 'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' + STUFF((SELECT ' UNION ALL SELECT ' + STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END FROM ( -- the unicorn help here to correctly set ordinal numbers of columns in a data page -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21) SELECT [ParentObject], [Object], Field, VALUE, RIGHT('00000' + O1, 6) AS ParentObjectOrder, RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder FROM ( SELECT [ParentObject], [Object], Field, VALUE, REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1, REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2 FROM @dbccSinglePage WHERE t.ParentObject = ParentObject )t)t ORDER BY ParentObjectOrder, ObjectOrder FOR XML PATH('')), 1, 2, '') FROM @dbccSinglePage t GROUP BY ParentObject FOR XML PATH('') ), 1, 11, '') + ';' RAISERROR (@sql, 10, 1) WITH NOWAIT; EXEC (@sql) END END TRY BEGIN CATCH SELECT @sql = 'ERROR!!!' 
+ CHAR(10) + CHAR(13) + 'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() + CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID RAISERROR (@sql, 10, 1) WITH NOWAIT; END CATCH DELETE @dbccSinglePage SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13), @i = @i+1 RAISERROR (@sql, 10, 1) WITH NOWAIT; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; END CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)END TRYBEGIN CATCH SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0 BEGIN CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; ENDEND CATCH-- if the table was deleted we need to finish the restore page sequenceIF @tableWasTruncated = 0BEGIN -- take a log tail backup and then restore it to complete page restore process DECLARE @currentDate VARCHAR(30) SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112) RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT; RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;END-- The last step is manual. Insert data from our temporary table to the original deleted table The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with little experimentation you can get pretty close to it. One way to possible remove a dependency on a backup to retrieve deleted pages is to quickly run a similar script to the upper one that gets data directly from data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.

    Read the article

  • VAMT 3.0 Proxy Activate - No ‘Apply Confirmation ID’ option at all

    - by lez
    I tried to activate my Windows box in an isolated network zone, so I followed the process of 'Scenario 2: Proxy Activation' in http://technet.microsoft.com/en-us/library/hh825202.aspx using two VAMT 3.0 hosts. Everything went fine (actually I'm not sure which option to choose when exporting VAMT data to a .cilx file; I tried both 'Export products and product keys' and 'Export proxy activation data only', and whether this is a cause of the problem I have no idea), until I wanted to apply the CID and activate the isolated PC. In 'Activate', there is no 'Apply Confirmation ID' option at all; its only options are 'Acquire and save confirmation ID only' and 'Acquire confirmation ID, apply to selected machine(s) and activate'. The error message is 'cannot resolve remote name 'go.microsoft.com'' when I choose either of them, so it looks like acquiring a confirmation ID always needs to reach this URL. But I just want to apply the CID... Has anyone run into this? I searched the internet and there seems to be no answer... Any suggestion would be appreciated, thank you!

    Read the article

  • How to resolve 0xc000007b error opening Adobe After Effects?

    - by user225170
    I have installed Adobe CS6 on Windows 8 and get the error "0xc000007b" every time I open Adobe After Effects. All other Adobe software, including Photoshop and Premiere Pro, works perfectly. I have looked online extensively but have not found any solutions. What can I try to resolve this error? OS: Windows 8 (64-bit) - CPU: Intel Core i3-3110M 2.40GHz - RAM: 6GB DDR3 - FX: AMD Radeon HD 7670M When I open Adobe After Effects CS6 with Dependency Walker, this message appears: "Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module. Error: Modules with different CPU types were found. Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module." What can I do to resolve this error?

    Read the article

  • How to migrate from Natara DayNotez for Pocket PC / Windows Mobile

    - by piggymouse
    I've been using DayNotez as my notes manager since the old Palm PDA days. When I moved to Windows Mobile, I installed DayNotez there and migrated from the Palm version. Now I wish to move from DayNotez altogether (I currently consider Evernote as a decent cross-platform tool). Problem is, DayNotez doesn't let me export the notes (unless I want to transfer them one by one, which is a pain). Natara offers an export tool for Windows, but it only works for Palm HotSync (as it reads from the backed-up PDB file). DayNotez Desktop for Windows stores its local DB under "My Documents\Natara\DayNotez\" directory in a file named "[device name] DayNotez.dnz". Quick look within the file spots a string "Standard Jet DB" near the beginning, but I couldn't open it as a regular JET/MDB file. Any help would be greatly appreciated.

    Read the article

  • Mac Mail import from Entourage failed

    - by elhombre
    I am trying to import the entire folder structure from Entourage 2008 to Mac Mail on Mac OS X 10.5.8. First I wanted to export the whole e-mail database from Entourage to the Desktop as a .rge file. The export was aborted with a message indicating that my Entourage database is corrupt and that I should rebuild it. I then went over to Mac Mail and imported the Entourage folder structure through File/Import Mailboxes.../Entourage. The bulk import took some time and the progress bar never reached its full length, but in the end the import was reported as successful. When I checked the imported folder structure in Mac Mail I noticed that quite a few subfolders were missing. I checked Console and saw the following message: Mail[2899] Applescript Error: aliasForMailImporter got an error: AppleEvent handler failed. Question: What does that mean and what can I do to get my subfolder structure over completely?

    Read the article

  • Extract reply to addresses from all emails in an outlook folder

    - by RodH257
    I've got to send a bulk email out to a bunch of people in reply to emails they sent me. I've got all the original emails stored in a single Outlook folder. I want to extract all of the reply-to addresses from the emails in that folder so I can send an email to all of them. http://thetechieguy.wordpress.com/2008/04/22/extracting-email-address-from-outlook-2007-folder/ shows you how to export the FROM address of these emails; however, a large number of them came from a web service that sends emails 'on behalf of' the person I'm trying to mail, so I need the reply-to address, and the export wizard doesn't do that. Does anyone have a tip on how to do this? This is Outlook 2007 on Exchange 2010.

    Read the article

  • Error "fileid changed" when accessing files over NFS

    - by Roman Prikhodchenko
    I have an nfs-kernel-server configured and running on Ubuntu 10.04 Server. /export THIRD_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async) SECOND_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async) /export/ebs THIRD_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async) SECOND_SERVER_IP(rw,nohide,insecure,no_subtree_check,async) I mounted the exported folder to the second server: mount -t nfs4 -o proto=tcp,port=2049 NFS_SERVER_IP_HERE:/ebs /ebs and it works just fine. I mounted it to the third server but I cannot access files from it. ls -l /ebs ls: reading directory /ebs: Stale NFS file handle total 0 The syslog on the third server says: kernel: [11575.483720] NFS: server NFS_SERVER_IP_HERE error: fileid changed kernel: [11575.483722] fsid 0:14: expected fileid 0x2, got 0x6e001 Some info: uname -r 2.6.32-312-ec2 uname -m i686
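
    A hedged guess at a cause worth ruling out (not from the original post): the third server's two export lines both carry fsid=0, while the second server, which works, gets fsid=0 only on the NFSv4 root. Mirroring the working lines would look roughly like this:

        # /etc/exports on the NFS server for the third client: only the NFSv4
        # root keeps fsid=0, the child export uses nohide instead
        /export      THIRD_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async)
        /export/ebs  THIRD_SERVER_IP(rw,nohide,insecure,no_subtree_check,async)
        # then re-export and re-mount on the client:
        #   exportfs -ra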

    Read the article

  • error exporting data using mysql workbench

    - by Rajneesh Rana
    Hi, I have been getting a version mismatch warning when trying to export a data dump using MySQL Workbench. So, I copied mysqldump from the MySQL Server folder and placed it in the Workbench folder. Now when I try to export data I get the error Operation failed with exitcode -1073741819. Here is an entry from the log: 16:31:25 Dumping wordpress (wp_posts) Running: "mysqldump.exe" --defaults-extra-file="c:\docume~1\rajneesh.r\locals~1\temp\1\tmpxau7tz" --no-create-info=FALSE --order-by-primary=FALSE --force=FALSE --no-data=FALSE --tz-utc=TRUE --flush-privileges=FALSE --compress=FALSE --replace=FALSE --host=localhost --insert-ignore=FALSE --extended-insert=TRUE --user=root --quote-names=TRUE --hex-blob=FALSE --complete-insert=FALSE --add-locks=TRUE --port=3306 --disable-keys=TRUE --delayed-insert=FALSE --create-options=TRUE --delete-master-logs=FALSE --comments=TRUE --default-character-set=utf8 --max_allowed_packet=1G --flush-logs=FALSE --dump-date=TRUE --lock-tables=TRUE --allow-keywords=FALSE --events=FALSE "wordpress" "wp_posts" Operation failed with exitcode -1073741819 Please help me with these issues. Thank you
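
    A hedged workaround, not from the original post: exit code -1073741819 is 0xC0000005 (an access violation), so running the same dump directly from a command prompt at least separates a Workbench crash from a mysqldump problem. Host, user and output path below are assumptions:

        mysqldump --host=localhost --port=3306 --user=root -p --default-character-set=utf8 wordpress wp_posts > wp_posts.sql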

    Read the article

  • Bash color prompt and long commands

    - by Eric J.
    I'm colorizing parts of my bash prompt using ANSI escape sequences. This works great, until the command I'm currently typing in is long enough that it has to wrap. Instead of the rest of the command displaying on the next line, it wraps back to column 1 of the current line, overwriting the beginning of the prompt. I get that behavior with this prompt: export PS1="[\u][\033[0;32;40mdemo \033[0;33;40m1.5.40.b\033[0;37;40m] \w> \033[0m" but it works correctly with the same prompt, ANSI sequences removed: export PS1="[\u][demo 1.5.40.b] \w> " I'm connecting using the current version of PuTTY, with default PuTTY settings. The OS is Ubuntu 8.10.
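
    For reference, the usual fix (not part of the original question) is to wrap every non-printing escape sequence in \[ and \] so bash can compute the printable width of the prompt. Applied to the prompt above, it would look roughly like this:

        export PS1="[\u][\[\033[0;32;40m\]demo \[\033[0;33;40m\]1.5.40.b\[\033[0;37;40m\]] \w> \[\033[0m\]"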

    Read the article

  • Testing home directory scripts by setting $HOME to the location of the test directory

    - by intuited
    I have an interdependent collection of scripts in my ~/bin directory as well as a developed ~/.vim directory and some other libraries and such in other subdirectories. I've been versioning all of this using git, and have realized that it would be potentially very easy and useful to do development and testing of new and existing scripts, vim plugins, etc. using a cloned repo, and then pull the working code into my actual home directory with a merge. The easiest way to do this would seem to be to just change & export $HOME, e.g.
        cd ~/testing; git clone ~ home
        export HOME=~/testing/home
        cd ~
        screen -S testing-home
        # start vim, write/revise plugins, edit scripts, etc.
        # test revisions
    However, since I've never tried this before I'm concerned that some programs, environment variables, etc., may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about? Is there a much better way to do this sort of thing?
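
    One way to limit the blast radius, offered here as a suggestion rather than taken from the question: start a child shell with the overridden HOME instead of exporting it into the current session, so the rest of the desktop session keeps pointing at the real home directory.

        # run a login shell (and the screen session) whose HOME is the clone
        env HOME=~/testing/home bash -l
        # or scope the override to a single command:
        HOME=~/testing/home screen -S testing-home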

    Read the article

  • Exporting Speed Dial settings from within Firefox into Chrome Speed dial...

    - by Paul
    Win7 Home Premium 32-bit. I usually use Firefox as my browser of choice and have been using the Speed Dial add-on for ages. I have many sites added on multiple tabs within it. I have just installed the Google Chrome browser, which also has a Speed Dial extension. Is there a way to export/import/copy/sync the Speed Dial settings I have in Firefox to the Speed Dial extension in Chrome? I know I can export from Speed Dial in Firefox but can't see any easy way to import into Speed Dial in Chrome.

    Read the article

  • How to efficiently merge a lot of vCard files for the same person?

    - by mihi
    I currently have contact information in several places:
    - old PDA's address book
    - mobile phone's phone book (primarily name, phone number)
    - email client's address book (primarily name, email)
    - web mailer's address book (primarily name, email)
    - instant messenger's contact list (primarily name, im, email, birthday)
    And there are several social or business networking sites on the Internet where contacts provide information about themselves, like LinkedIn or XING. All those sources can export as vCard, but as you might imagine, I get a lot of vCards for the very same contact that way. Are there any tools where I can import them and then merge them (it may ask me which phone number is more current in case of field clashes, of course)? Bonus points if it can track which information I have discarded, so when I re-export all information from one of the sources I can't import to (networking sites), it won't ask me again if I want to overwrite the phone number of person X with the same ancient number... I hope you understand what I'm trying to accomplish; if not, just ask :-)

    Read the article

  • HP-UX - custom rsync path

    - by stack_zen
    Hi. There is a range of HP-UX 11.11 hosts on which I'm unable to install rsync (I'm limited to a non-privileged user). I've extracted the rsync binary and libpopt.sl, libiconv.sl and libintl.sl from the depots into one of that user's directories: /home/zenith/rsync/ The problem is, I can't get my RH Linux box communicating with it: rsync -e --rsync-path=/home/zenith/rsync/rsync --compress=9 -pgtov --filter=+rs_/'*.log' --exclude='*' [email protected]:/home/zenith/service/logs/ /u01/rsync_depot/service/192.102.14.18/ /usr/lib/dld.sl: Can't find path for shared library: libintl.sl /usr/lib/dld.sl: No such file or directory sh: 1644 Abort(coredump) I've added to the remote host's .profile: export SHLIB_PATH=/usr/lib:/home/zenith/rsync export PATH=$PATH:/home/zenith/rsync but still no libintl.sl is found. How can I initialize the correct env variable / get this to work?
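
    A sketch of one common workaround, offered as an assumption rather than a known fix for this exact setup: non-interactive ssh commands do not read ~/.profile, so SHLIB_PATH set there never reaches the remote rsync. A small wrapper that sets the library path and execs the real binary can be pointed to with --rsync-path (the wrapper's name and location below are made up):

        #!/bin/sh
        # saved as /home/zenith/rsync/rsync-wrapper.sh on the HP-UX host:
        # set the shared-library search path, then run the real rsync binary
        SHLIB_PATH=/usr/lib:/home/zenith/rsync
        export SHLIB_PATH
        exec /home/zenith/rsync/rsync "$@"

    After a chmod +x on the wrapper, the Linux-side command would point --rsync-path at it, while -e (which names the remote shell) gets ssh:

        rsync -e ssh --rsync-path=/home/zenith/rsync/rsync-wrapper.sh --compress=9 -pgtov --filter=+rs_/'*.log' --exclude='*' [email protected]:/home/zenith/service/logs/ /u01/rsync_depot/service/192.102.14.18/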

    Read the article

  • How to add chrome bookmarks to Windows 7 favorites folder (where IE favorites live)

    - by richardh
    I just switched to Windows 7 and love the taskbar and library features. If I make a desktop shortcut to a webpage, then it becomes searchable from the taskbar (i.e., press the Win/meta key and type the shortcut's name and it pops up). The IE bookmarks/favorites already come with shortcuts in your "Favorites" folder. Can I programmatically do this with my Chrome shortcuts? My first thought was to export bookmarks to IE, but I can't find an option in IE that allows me to export bookmarks/favorites as shortcuts. Thanks!

    Read the article

  • Two Tomcat SSL Providers & One FreeBSD

    - by mosg
    Hello everyone. Question: on FreeBSD 8 I need to have two HTTPS ports open (443 and 444, for example). In other words, I need two providers working simultaneously: an ordinary SSL certificate signed by Thawte on port 443, and a special Russian security provider (DIGTProvider, based on CryptoPro CSP software) on port 444. I also have to mention that the major provider is the second one. Here are some of the DIGTProvider options: add to ${JRE_HOME}/lib/security/java.security these lines: security.provider.N=com.digt.trusted.jce.provider.DIGTProvider ssl.SocketFactory.provider=com.digt.trusted.jsse.provider.DigtSocketFactory uncomment and edit the HTTPS section in conf/server.xml: sslProtocol="GostTLS" (added) edit bin/catalina.sh and add: export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/opt/cprocsp/lib/ia32" export JAVA_OPTS="${JAVA_OPTS} -Dcom.digt.trusted.jsse.server.certFile=/home//server-gost.cer -Dcom.digt.trusted.jsse.server.keyPasswd=11111111" As far as I know, if I just define two SSL connectors in Tomcat's server.xml configuration file, Tomcat will not start, because in a JRE you can use only one JSSE provider. Thanks for help.

    Read the article

  • How to move Mailboxes over from old Exchange 2007 to new EBS 2008 network?

    - by Qwerty
    This q is similar to: http://serverfault.com/questions/39070/how-to-move-exchange-2003-mailbox-or-store-from-2003-to-2007-on-separate-networks Basically I am trying to move our exchange mailboxes over to a test domain that is hosting EBS2008 with Exchange 2007. We plan to move as soon as we can when we have our exchange data over. I have tried moving a db with mailboxes over but cannot get it to mount in the new Exchange in any way possible, including mounting it onto a recovery store. From my understanding the ONLY prerequisite for moving Exchange DBs across is that it must have the same Organizational name (unlike previous versions of Exchange). If anyone has any insight as to why I cannot mount and simply reattach the mailboxes, please give me an idea as to what could be wrong. It should be as simple as this. Note that the DBs I have are in a clean state. I cannot use ExMerge because I am not running any mailboxes on 2003. I have also tried using a 32bit Vista machine with the Export-Mailbox cmdlet to extract mailboxes but anything I do to it results in Permission errors. I have tried to troubleshoot these with no success. I am running in full admin with proper exchange roles and yet it still gives me access denied errors: Export-Mailbox : MapiExceptionNetworkError: Unable to make admin interface conn ection to server. (hr=0x80040115, ec=-2147221227) Also some errors show in the management console: get-MailboxDatabase Completed Warning: ERROR: Could not connect to the Microsoft Exchange Information Store service on server TATOOINE.baytech.local. One of the following problems may be occurring: 1- The Microsoft Exchange Information Store service is not running. 2- There is no network connectivity to server TATOOINE.baytech.local. 3- You do not have sufficient permissions to perform this command. The following permissions are required to perform this command: Exchange View-Only Administrator and local administrators group for the target server. 4- Credentials have been cached for an unpriviledged user. Try removing the entry for this server from Stored User Names and Passwords. Why I have to use a 32bit machine to export a simple .pst file is beyond me... So yeah I am now out of ideas and any help would be great! Thanks in advance.

    Read the article

  • Mounting to external NFS from a KVM VM.

    - by jbfink
    I've got a machine acting as a KVM host and another machine that exports NFS to that KVM host. I'd like one of the internal VMs on the KVM host to be able to mount the NFS share. I can export to the KVM host IP fine and do a mount, but it doesn't work for the internal VM; I just get a mount failure with "reason given by server: Permission denied". I've already tried to re-export the NFS share from the host to the VM, but apparently doing two levels of NFS is not a Good Idea. Does anyone know how I might get this working?
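
    A hedged first thing to check (paths and addresses below are placeholders, not from the post): the export list on the NFS server has to permit whatever source address the VM's packets actually arrive from, which for a bridged guest is the guest's own IP/subnet. Widening the export and re-exporting often clears the "Permission denied":

        # /etc/exports on the NFS server; 192.168.122.0/24 is libvirt's default NAT subnet
        /srv/share   KVM_HOST_IP(rw,no_subtree_check)   192.168.122.0/24(rw,no_subtree_check)
        # apply without restarting:  exportfs -ra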

    Read the article

  • Error "fileid changed" when accessing files over NFS

    - by Roman Prikhodchenko
    I have an nfs-kernel-server configured and running on Ubuntu 10.04 Server. /export THIRD_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async) SECOND_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async) /export/ebs THIRD_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async) SECOND_SERVER_IP(rw,nohide,insecure,no_subtree_check,async) I mounted the exported folder to the second server: mount -t nfs4 -o proto=tcp,port=2049 NFS_SERVER_IP_HERE:/ebs /ebs and it works just fine. I mounted it to the third server but I cannot access files from it. ls -l /ebs ls: reading directory /ebs: Stale NFS file handle total 0 The syslog on the third server says: kernel: [11575.483720] NFS: server NFS_SERVER_IP_HERE error: fileid changed kernel: [11575.483722] fsid 0:14: expected fileid 0x2, got 0x6e001 Some info: uname -r 2.6.32-312-ec2 uname -m i686

    Read the article

  • PCI scan findings and problems with weak ciphers on ports 993, 443, 995, 465

    - by user64991
    From the PCI scan results: Synopsis: The remote service encrypts traffic using a protocol with known weaknesses. Description: The remote service accepts connections encrypted using SSL 2.0, which reportedly suffers from several cryptographic flaws and has been deprecated for several years. An attacker may be able to exploit these issues to conduct man-in-the-middle attacks or decrypt communications between the affected service and clients. See also: http://www.schneier.com/paper-ssl.pdf Solution: Consult the application's documentation to disable SSL 2.0 and use SSL 3.0 or TLS 1.0 instead. Risk Factor: Medium / CVSS Base Score: 2 (AV:R/AC:L/Au:NR/C:P/A:N/I:N/B:N) I have tried to change SSLProtocol all -SSLv2 to SSLProtocol -ALL +SSLv3 +TLSv1 and SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW to SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:!MEDIUM:!LOW:!SSLv2:!EXPORT but using SSLDigger it shows the same result. Is this the right way to do something like this?
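
    A hedged observation, not from the original post: ports 993, 995 and 465 are normally served by the IMAP/POP3 and SMTP daemons rather than Apache, so changing httpd's SSLProtocol/SSLCipherSuite only affects 443. If those services happen to be Dovecot and Postfix (an assumption about this box), the equivalent settings would live in their configs, roughly:

        # Dovecot (conf.d/10-ssl.conf or dovecot.conf; option name assumed for the
        # installed version) -- covers ports 993/995
        ssl_cipher_list = ALL:!ADH:!EXPORT:!SSLv2:!LOW:RC4+RSA:+HIGH:+MEDIUM

        # Postfix main.cf -- smtps on port 465 (syntax assumed for Postfix >= 2.5)
        smtpd_tls_mandatory_protocols = !SSLv2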

    Read the article

  • OS X Lion - Installing Oracle 10g Standard Edition

    - by Cellze
    I'm trying to install Oracle 10g onto OS X Lion. I previously achieved this on Snow Leopard by following http://blog.rayapps.com/2009/09/14/how-to-install-oracle-database-10g-on-mac-os-x-snow-leopard/ The issue I'm having is that the ulimit settings in the oracle user's .bash_profile cannot be modified. I have the following in the .bash_profile: export DISPLAY=:0.0 export ORACLE_BASE=$HOME umask 022 # must match `sysctl kern.maxprocperuid` ulimit -Hu 512 ulimit -Su 512 # must match `sysctl kern.maxfilesperproc` ulimit -Hn 10240 ulimit -Sn 10240 Upon applying the .bash_profile settings with . ~/.bash_profile I get the following error: -bash: ulimit: max user processes: cannot modify limit: Invalid argument This then results in $ sqlplus / as sysdba not functioning correctly, failing with Segmentation fault: 11. The output of $ ulimit -a is:
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 10240
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 512
        virtual memory          (kbytes, -v) unlimited
    If anyone knows how I can apply these ulimit settings to the oracle user I have created, to allow me to install sqlplus and therefore create a DB, that would be great.
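
    A sketch of one thing worth checking, offered as an assumption rather than a confirmed diagnosis: the comments in the profile itself say the ulimit values must match the kernel limits, and ulimit -u fails with "Invalid argument" when the requested value does not fit what the kernel allows. Reading the actual limits and either matching them or raising them as root would look like:

        # see what the kernel currently allows
        sysctl kern.maxprocperuid kern.maxfilesperproc
        # either raise the kernel limit as root ...
        sudo sysctl -w kern.maxprocperuid=512
        # ... or make .bash_profile match the reported value, e.g. (418 is just
        # an example; use whatever the sysctl above prints):
        ulimit -Su 418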

    Read the article

  • Openstack keystone issue

    - by kevin
    I am having an issue with Keystone. Keystone is configured with the users nova, glance and admin, and their endpoints are also defined. When performing keystone token-get it shows a token, but for commands like keystone user-list it shows: No handlers could be found for logger "keystoneclient.client" Unable to communicate with identity service: 404 Not Found The resource could not be found. (HTTP 404) But after setting these env variables it worked: export SERVICE_ENDPOINT=http://192.168.10.15:35357/v2.0 export SERVICE_TOKEN=token However, after that, keystone token-get shows: 'Client' object has no attribute 'service_catalog' Why is that, and how can it be fixed? Any ideas?
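
    A hedged sketch of the usual client setup (values below are placeholders): when SERVICE_TOKEN/SERVICE_ENDPOINT are set, the keystone client talks to the admin API with a bootstrap token and never builds a service catalog, which fits both error messages above. Authenticating as a real user instead would look roughly like this:

        unset SERVICE_TOKEN SERVICE_ENDPOINT
        export OS_USERNAME=admin
        export OS_PASSWORD=secret            # password is a placeholder
        export OS_TENANT_NAME=admin
        export OS_AUTH_URL=http://192.168.10.15:5000/v2.0
        keystone user-list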

    Read the article
