Search Results

Search found 67,192 results on 2,688 pages for 'excel external data'. This is page 94.


  • Can I recover data from an external HDD, or do I format it and lose it all?

    - by Col
    I have a 500 GB Maxtor external HDD but haven't used it for a year or so. I have plugged it into a new laptop, as the one I used it with before is busted. I know that there is a ton of data on the HDD that I would love to have the use of - mostly family and friends' photos, to be honest. But when I click on the HDD in Windows Explorer, the only option I am given is to reformat the drive and lose the data. I'd be grateful if anyone could tell me if there is a way to get the data off the external drive before formatting it and losing it all.

    Read the article

  • Is there a limit on the maximum number of files in an external hard drive folder?

    - by tfs
    I have a FAT32 external hard drive where I keep backups downloaded from a webserver. I have a directory with 30 subdirectories. One of the subdirectories contains 21381 files, and when I try to copy more files into this directory I get error 0x80070052. However, it's possible to copy one more file into this directory (only one) if I make its name shorter (8 characters instead of the 22 of its original name). How do I solve this problem? Right now I cannot synchronize the external hard disk files with the server files, which is very important for me.

    Read the article

  • Win7 not detecting external HDD but Ubuntu is detecting it. Why?

    - by unlimit
    I have a 500 GB Toshiba external HDD. Since yesterday Windows 7 has stopped detecting it; however, I do see it listed under the "Safely remove hardware and eject media" icon on the taskbar. I then tried the same external HDD on my Ubuntu install and it detected it just fine. Ubuntu and Windows 7 are on the same laptop; I have dual boot. Can someone tell me why this is happening? Am I missing a driver in Windows 7? Additional info:
    1. This drive has worked perfectly fine in the past.
    2. I have never formatted this drive.
    3. It just stopped working yesterday in Windows.

    Read the article

  • Pass a range into a custom function from within a cell

    - by Luis
    Hi, I'm using VBA in Excel and need to pass the values from two ranges into a custom function from within a cell's formula. The function looks like this:

        Public Function multByElement(range1 As String, range2 As String) As Variant
            Dim arr1() As Variant, arr2() As Variant
            arr1 = Range(range1).Value
            arr2 = Range(range2).Value
            If UBound(arr1) = UBound(arr2) Then
                Dim arrayA() As Variant
                ReDim arrayA(LBound(arr1) To UBound(arr1))
                For i = LBound(arr1) To UBound(arr1)
                    arrayA(i) = arr1(i) * arr2(i)
                Next i
                multByElement = arrayA
            End If
        End Function

    As you can see, I'm trying to pass the string representation of the ranges. In the debugger I can see that they are properly passed in, and the first visible problem occurs when it tries to read arr1(i), which fails with "subscript out of range". I have also tried passing in the range itself (i.e. range1 As Range...) but with no success. My best suspicion was that it has to do with the ActiveSheet, since it was called from a different sheet from the one with the formula (the sheet name is part of the string), but that was dispelled since I tried it both from within the same sheet and by specifying the sheet in the code. BTW, the formula in the cell looks like this:

        =AVERAGE(multByElement("A1:A3","B1:B3"))

    or, when I call it from a different sheet:

        =AVERAGE(multByElement("My Sheet1!A1:A3","My Sheet1!B1:B3"))
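
    A minimal sketch of one possible fix (an editor's illustration, not the asker's final code): Range(...).Value on a multi-cell, single-column range returns a two-dimensional array dimensioned (1 To n, 1 To 1), so reading arr1(i) with a single subscript raises "subscript out of range"; an unqualified Range() call also resolves against whatever sheet is active. The sketch assumes single-column ranges addressed relative to the calling sheet:

        Public Function multByElementFixed(range1 As String, range2 As String) As Variant
            ' Resolve the address strings against the sheet the formula lives on,
            ' not whichever sheet happens to be active.
            Dim ws As Worksheet, arr1 As Variant, arr2 As Variant
            Dim result() As Variant, i As Long
            Set ws = Application.Caller.Worksheet
            arr1 = ws.Range(range1).Value   ' 2-D array: (1 To n, 1 To 1)
            arr2 = ws.Range(range2).Value
            If UBound(arr1, 1) = UBound(arr2, 1) Then
                ReDim result(LBound(arr1, 1) To UBound(arr1, 1))
                For i = LBound(arr1, 1) To UBound(arr1, 1)
                    result(i) = arr1(i, 1) * arr2(i, 1)   ' note the second subscript
                Next i
                multByElementFixed = result
            End If
        End Function

    Declaring the parameters as Range instead of String (and calling =multByElementFixed(A1:A3,B1:B3)) would avoid both the string parsing and the active-sheet issue altogether.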

    Read the article

  • VBA: what does ReDim Preserve do, plus a simple array question

    - by every_answer_gets_a_point
    I am looking at someone else's VBA Excel code. They are doing ReDim Preserve dataMatrix(7, i) in both loops. What does this do? Also, it seems like the second loop just overwrites the data from the first loop; is that correct?

        Dim dataMatrix() As String
        Worksheets.Item("ETS").Select
        Do While Trim(Cells(r, 1)) <> ""
            Debug.Print "The line: ", Trim(Cells(r, 1)), r
            r = r + 1
            dataMatrix(1, i) = Trim(Cells(r, 1))    ' file name
            dataMatrix(2, i) = Trim(Cells(r, 2))    ' sample type
            dataMatrix(3, i) = Trim(Cells(r, 3))    ' sample name
            dataMatrix(4, i) = "ETS"
            dataMatrix(5, i) = Trim(Cells(r, 5))    ' Response
            dataMatrix(6, i) = Trim(Cells(r, 6))    ' ISTD Response
            dataMatrix(7, i) = Trim(Cells(r, 10))   ' Calculated Conc
            i = i + 1
            ReDim Preserve dataMatrix(7, i)
        Loop
        r = 5
        Worksheets.Item("ETG").Select
        Do While Trim(Cells(r, 1)) <> ""
            Debug.Print "The line: ", Trim(Cells(r, 1)), r
            r = r + 1
            dataMatrix(1, i) = Trim(Cells(r, 1))    ' file name
            dataMatrix(2, i) = Trim(Cells(r, 2))    ' sample type
            dataMatrix(3, i) = Trim(Cells(r, 3))    ' sample name
            dataMatrix(4, i) = "ETG"
            dataMatrix(5, i) = Trim(Cells(r, 5))    ' Response
            dataMatrix(6, i) = Trim(Cells(r, 6))    ' ISTD Response
            dataMatrix(7, i) = Trim(Cells(r, 10))   ' Calculated Conc
            i = i + 1
            ReDim Preserve dataMatrix(7, i)
        Loop
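
    As an editor's aside, a minimal standalone sketch of what ReDim Preserve does: it resizes a dynamic array while keeping the values already stored in it, and for a multi-dimensional array only the last dimension may be resized, which is why the code above grows the column index i. And since i is never reset between the two loops, the second loop appends new columns after the first loop's data rather than overwriting it:

        Sub ReDimPreserveDemo()
            Dim m() As String
            ReDim m(7, 0)             ' 8 rows (0-7), one column
            m(1, 0) = "first"
            ReDim Preserve m(7, 1)    ' grow to two columns; existing values survive
            m(1, 1) = "second"
            Debug.Print m(1, 0), m(1, 1)   ' prints: first  second
        End Sub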

    Read the article

  • "Can't get FindNext property of the Range class" error

    - by Lawrence Knowlton
    I am trying to parse a report in Excel 2007. It is basically a report of accounting charge exceptions. The report has sections, with a header for each type of exception, and some types of exceptions are deleted from the report. I'm using a Do While loop to find each header, and if the section needs to be deleted I have the code do so. If nothing needs to be deleted the code works fine, but right after a section is deleted I get an "Unable to get the FindNext property of the Range Class" error. Here is my code:

        Sub merge_All_Section_Headers()
        ' Description:
        ' The next portion of the macro will find and format the Transaction Source
        ' rows in the file by checking each row in column A for the following text:
        ' TRANSA. If a cell has this text in it, it is selected and a function called
        ' merge_text_cells is run, which concatenates each Transaction Source header
        ' row and deletes the text from the rest of the cells with broken-up text.

            lastRow = ActiveSheet.UsedRange.Rows.Count + 1
            Range(lastRow & ":" & lastRow).Delete
            ActiveSheet.PageSetup.Orientation = xlLandscape

            With ActiveSheet.Range("A:A")
                Dim searchString As String
                searchString = "TRANSA"
                'The following sets stringFound based on whether or not
                'the searchString (TRANSA) is found:
                Set stringFound = .Find(searchString, LookIn:=xlValues, lookat:=xlPart)
                If Not stringFound Is Nothing Then
                    firstLocation = stringFound.Address
                    Do
                        stringFound.Select
                        lastFound = stringFound.Address
                        merge_Text_Cells
                        If ((InStr(ActiveCell.Text, "CHARGE FILER") = 0) And _
                            (InStr(ActiveCell.Text, "CREDIT FILER") = 0) And _
                            (InStr(ActiveCell.Text, "PA MIDNIGHT FINAL") = 0) And _
                            (InStr(ActiveCell.Text, "BAD DEBT TURNOVER") = 0)) Then
                            section_Del 'Function that deletes unwanted sections
                        End If
                        Range(lastFound).Select
                        Set stringFound = .FindNext(stringFound)
                    Loop While Not stringFound Is Nothing And stringFound.Address <> firstLocation
                End If
            End With
        End Sub

    Like I said, it works fine when section_Del is commented out. Any ideas as to how to remedy this would be greatly appreciated. Thanks!
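
    Not the asker's final solution, but a common workaround sketched by the editor: deleting rows invalidates the internal state that FindNext relies on, so re-issuing Find anchored after the last hit avoids the lost context (firstLocation itself may also have been deleted, which the loop condition should tolerate). sectionShouldBeDeleted is a hypothetical placeholder for the InStr checks above, and the snippet belongs inside the existing With block:

        Do
            lastFound = stringFound.Address
            merge_Text_Cells
            If sectionShouldBeDeleted Then section_Del
            ' Re-run Find from the last known address instead of calling FindNext:
            ' the deletion above invalidates the saved Find context.
            Set stringFound = .Find(searchString, After:=Range(lastFound), _
                                    LookIn:=xlValues, lookat:=xlPart)
        Loop While Not stringFound Is Nothing And stringFound.Address <> firstLocation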

    Read the article

  • How do I repeat a function over several rows?

    - by ChrisBD
    I'll admit that I'm not an Excel guru, so maybe someone here can help me. On my worksheet I have several blocks of data. I calculate the sum of all items within column D of each block. Within each block I check the value of the cell in column C, and if it contains the letter "y" while the value in column D of that row is equal to zero, I must exclude the total sum of column D. Currently I do this by multiplying the sum by either 1 or 0, produced by running a test over the cell contents. Below is an example of what I am using to test rows 23 to 25 inclusive for data in column D. I also perform the same test on columns E and G, but the "y" character is always in column C, hence the absolute column reference.

        =IF(AND($C23="y",D23=0),0,1)*IF(AND($C24="y",D24=0),0,1)*IF(AND($C25="y",D25=0),0,1)

    There must be a more efficient way to do this. Ideally I would like to write a function that I can paste into a cell and then select the rows or cells over which I run the test. Can anyone point me in the right direction?
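
    One possible shape for such a function, sketched by the editor with illustrative names: a VBA UDF that takes the flag range and the value range and returns the 0-or-1 factor, so the cell formula shrinks to a single call per block:

        ' Returns 0 if any row has "y" in the flag column and 0 in the value
        ' column, otherwise 1. Both ranges are assumed to be the same size.
        Public Function ExcludeFactor(flags As Range, amounts As Range) As Long
            Dim i As Long
            ExcludeFactor = 1
            For i = 1 To flags.Cells.Count
                If LCase$(flags.Cells(i).Value) = "y" And amounts.Cells(i).Value = 0 Then
                    ExcludeFactor = 0
                    Exit Function
                End If
            Next i
        End Function

    Usage in a cell would then look something like =SUM(D23:D25)*ExcludeFactor($C23:$C25,D23:D25).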

    Read the article

  • Extract a tar with multiple tars inside?

    - by Andrew Fashion
    Is there a way to untar a file with multiple tars inside? It's supposed to just untar everything inside, including untarring the tars inside the tar... On Windows it does this; it's quite annoying that I can't figure it out on Linux. Here is what I am doing:

        # tar -xvf socialengine4.0.5p1.tar
        core-base-4.0.5.tar
        core-install-4.0.7.tar
        external-autocompleter-4.0.0.tar
        external-calendar-4.0.1.tar
        external-chootools-4.0.3.tar
        external-fancyupload-4.0.1.tar
        external-firebug-4.0.0.tar
        external-flowplayer-4.0.0.tar
        external-moocomet-4.0.0.tar
        external-moocrop-4.0.0.tar
        external-moolasso-4.0.0.tar
        external-mootools-4.0.2.tar
        external-mootree-4.0.0.tar
        external-open-flash-chart-4.0.0.tar
        external-smoothbox-4.0.0.tar
        external-swfobject-4.0.0.tar
        external-tagger-4.0.2.tar
        external-tinymce-4.0.2.tar
        library-engine-4.0.5.tar
        library-facebook-4.0.0.tar
        library-ofc-4.0.0.tar
        library-pear-4.0.1.tar
        library-scaffold-4.0.3.tar
        module-activity-4.0.5p1.tar
        module-announcement-4.0.3.tar
        module-authorization-4.0.5.tar
        module-core-4.0.5.tar
        module-fields-4.0.5p1.tar
        module-invite-4.0.3.tar
        module-messages-4.0.5.tar
        module-network-4.0.5p1.tar
        module-storage-4.0.4.tar
        module-user-4.0.5.tar
        widget-rss-4.0.2.tar
        widget-weather-4.0.0.tar
        changelog.html

        [root@D18634 se4]# ls -l
        total 36980
        -rw-r--r-- 1 1000 1000    27188 Oct 8 15:39 changelog.html
        -rw-r--r-- 1 1000 1000   359424 Oct 8 16:13 core-base-4.0.5.tar
        -rw-r--r-- 1 1000 1000  1122304 Oct 8 16:13 core-install-4.0.7.tar
        -rw-r--r-- 1 1000 1000    38400 Oct 8 16:13 external-autocompleter-4.0.0.tar
        -rw-r--r-- 1 1000 1000   100352 Oct 8 16:13 external-calendar-4.0.1.tar
        -rw-r--r-- 1 1000 1000    31232 Oct 8 16:13 external-chootools-4.0.3.tar
        -rw-r--r-- 1 1000 1000    66560 Oct 8 16:13 external-fancyupload-4.0.1.tar
        -rw-r--r-- 1 1000 1000    85504 Oct 8 16:13 external-firebug-4.0.0.tar
        -rw-r--r-- 1 1000 1000   216576 Oct 8 16:13 external-flowplayer-4.0.0.tar
        -rw-r--r-- 1 1000 1000    11776 Oct 8 16:13 external-moocomet-4.0.0.tar
        -rw-r--r-- 1 1000 1000    16384 Oct 8 16:13 external-moocrop-4.0.0.tar
        -rw-r--r-- 1 1000 1000    27648 Oct 8 16:13 external-moolasso-4.0.0.tar
        -rw-r--r-- 1 1000 1000  1445376 Oct 8 16:13 external-mootools-4.0.2.tar
        -rw-r--r-- 1 1000 1000    45568 Oct 8 16:13 external-mootree-4.0.0.tar
        -rw-r--r-- 1 1000 1000   330240 Oct 8 16:13 external-open-flash-chart-4.0.0.tar
        -rw-r--r-- 1 1000 1000    43008 Oct 8 16:13 external-smoothbox-4.0.0.tar
        -rw-r--r-- 1 1000 1000    18432 Oct 8 16:13 external-swfobject-4.0.0.tar
        -rw-r--r-- 1 1000 1000    19968 Oct 8 16:13 external-tagger-4.0.2.tar
        -rw-r--r-- 1 1000 1000  5711360 Oct 8 16:13 external-tinymce-4.0.2.tar
        -rw-r--r-- 1 1000 1000  1230848 Oct 8 16:13 library-engine-4.0.5.tar
        -rw-r--r-- 1 1000 1000    28672 Oct 8 16:13 library-facebook-4.0.0.tar
        -rw-r--r-- 1 1000 1000   125952 Oct 8 16:13 library-ofc-4.0.0.tar
        -rw-r--r-- 1 1000 1000  1715200 Oct 8 16:13 library-pear-4.0.1.tar
        -rw-r--r-- 1 1000 1000   340480 Oct 8 16:13 library-scaffold-4.0.3.tar
        -rw-r--r-- 1 1000 1000   354304 Oct 8 16:13 module-activity-4.0.5p1.tar
        -rw-r--r-- 1 root root   327680 Jan 8 02:37 module-albums-4.0.5.tar
        -rw-r--r-- 1 1000 1000    80896 Oct 8 16:13 module-announcement-4.0.3.tar
        -rw-r--r-- 1 1000 1000   147456 Oct 8 16:13 module-authorization-4.0.5.tar
        -rw-r--r-- 1 1000 1000  2643968 Oct 8 16:13 module-core-4.0.5.tar
        -rw-r--r-- 1 root root   665600 Jan 8 02:37 module-events-4.0.5.tar
        -rw-r--r-- 1 1000 1000   377344 Oct 8 16:13 module-fields-4.0.5p1.tar
        -rw-r--r-- 1 root root   501760 Jan 8 02:37 module-forum-4.0.5p1.tar
        -rw-r--r-- 1 1000 1000    81408 Oct 8 16:14 module-invite-4.0.3.tar
        -rw-r--r-- 1 1000 1000   147968 Oct 8 16:14 module-messages-4.0.5.tar
        -rw-r--r-- 1 1000 1000   111616 Oct 8 16:14 module-network-4.0.5p1.tar
        -rw-r--r-- 1 1000 1000    99840 Oct 8 16:14 module-storage-4.0.4.tar
        -rw-r--r-- 1 1000 1000   844288 Oct 8 16:14 module-user-4.0.5.tar
        -rw-r--r-- 1 root root 18094080 Jan 8 02:40 socialengine4.0.5p1.tar
        -rw-r--r-- 1 1000 1000    12288 Oct 8 16:14 widget-rss-4.0.2.tar
        -rw-r--r-- 1 1000 1000    13824 Oct 8 16:14 widget-weather-4.0.0.tar

    Read the article

  • How do I determine what Excel 2007 is removing when it repairs my file?

    - by sage
    Summary: Excel repairs my file and tells me what was removed; I go into the xml/zip structure to investigate, and I cannot figure out what was changed. Does anybody know what I can do to better understand what Excel changed? Is it futile to try to determine this? It feels like this should be possible and like I'm almost there...

    Details: When I open a file that I have renamed unnamed.xlsm, I receive the following notice: "Excel found unreadable content in 'unnamed.xlsm'. Do you want to recover the contents of this workbook? If you trust the source of this workbook, click Yes." I know the file is safe, I click Yes, and I receive a message that "Excel was able to open the file by repairing or removing the unreadable content." It provides the following summary, but also provides an xml file which seems to contain the same content, so I have not shown it:

        Removed Records: Shared formula from /xl/worksheets/sheet3.xml part
        Removed Records: Formula from /xl/calcChain.xml part (Calculation properties)

    In order to determine the issue, I have created a copy of the offending file, renamed it to have a '.zip' ending, opened up the files that Excel says it modified (sheet3), and perused the xml content, but this was not informative. I tried saving the repaired file and doing a simple diff on the xml for sheet3, but there are many changes and this is not informative either. I did the same thing for calcChain.xml and this was more useful. After saving the displayed xml with line breaks in text format, it was easy to identify the items that have been removed, but now I want to make sense of them. Perhaps they give clues of what happened to sheet3. The following comparison is long, but I don't know if the entire train of differences is relevant.

        FILE COMPARISON
        Produced: 1-7-2011 2:42:26 PM
        Mode: Just Differences
        Left file:  u:\My Documents\[redacted]\calcChain_orig.xml
        Right file: u:\My Documents\[redacted]\calcChain_rep.xml

        812  <c r="H18" i="8" />   <>   812  <c r="N2" i="8" />
        814  <c r="G18" />         +-
        816  <c r="D19" />         +-
        818  <c r="F19" />         +-
        820  <c r="E18" />         +-
        822  <c r="N2" i="8" />    +-
        824  <c r="H18" />         +-
                                   -+   820  <c r="H15" />
                                        821  <c r="H13" />
                                        822  <c r="O19" />
                                        823  <c r="O17" />
                                        824  <c r="O15" />
                                        825  <c r="M19" />
                                        826  <c r="M17" />
                                        827  <c r="M15" />
                                        828  <c r="M13" />
                                        829  <c r="J19" />
                                        830  <c r="J17" />
                                        831  <c r="J15" />
                                        832  <c r="J13" />
                                        833  <c r="O14" />
                                        834  <c r="H18" i="8" />
                                        835  <c r="G18" />
                                        836  <c r="D19" i="5" />
                                        837  <c r="F19" />
                                        838  <c r="E18" i="8" />
                                        839  <c r="H18" i="9" />
        827  <c r="H15" />         +-
        829  <c r="H13" />         +-
        831  <c r="O19" />         +-
        833  <c r="O17" />         +-
        835  <c r="O15" />         +-
        837  <c r="M19" />         +-
        839  <c r="M17" />         +-
        841  <c r="M15" />         +-
        843  <c r="M13" />         +-
        845  <c r="J19" />         +-
        847  <c r="J17" />         +-
        849  <c r="J15" />         +-
        851  <c r="J13" />         +-
        853  <c r="O14" />         +-
        1209 <c r="H48" />         +-
        1210 <c r="H62" />

    Read the article

  • Web-based data modelling and management tool

    - by pixeldude
    Is there a web-based tool available where I am able to...

    - define data models (like in a database admin tool)
    - fill in data (in custom web forms, not too generic) with basic features like completion
    - import data from CSV or Excel sheets
    - export data to CSV or SQL
    - create snapshots of my data models (versions, diff, etc.)
    - share my data models
    - discuss/collaborate with other people about my data models

    Well, I could develop something like this in PHP or with Ruby or whatever, but this is such a common task that application support could be a lot better, and it would be language- and database-independent. This would help to maintain data models in different versions, and you could share your data models with others, extend them with your team members, etc. There is a website called Freebase which allows you to define a data entity model and fill in data, and which also has export features, but I need to define my own data model with my own granularity and structure, and it should not be shared in public if I don't want it to be. How do you solve problems like this yourself?

    Read the article

  • Error in datatype (nvarchar instead of ntext)

    - by prabu R
    I am importing data from Excel (.xls) to SQL Server 2008 using SSIS. I have included IMEX=1 in the connection string of the Excel connection manager. But one column contains a value like the one below:

        4-Hour Engineer Dispatch
        ASPP Engr Dispatch 1: Up to 1 dispatch (8 hours) per year. Hours exceeding allocation billed @ 1.5x hourly rate w/ 8-hr min
        Engr Dispatch: 8-hrs to arrive on-site from Ciena's determination of need
        On-Site Engineer Dispatch - 8 Hour
        ASPP Engr Dispatch 8: Up to 8 dispatch (64 hours) per year. Hours exceeding allocation billed @ 1.5x hourly rate w/ 8-hr min
        Engr Dispatch: NBD to dispatch from Ciena's determination of need
        Per Incident On Site Support
        ASPP Engr Dispatch 12: Up to 12 dispatch (96 hours) per year. Hours exceeding allocation billed @ 1.5x hourly rate w/ 8-hr min
        Engr Dispatch: Next day to arrive on-site from Ciena's determination of need
        Resident Engineer
        Engr Dispatch: 2-hrs to arrive on-site from Ciena's determination of need
        Engr Dispatch: 4-hrs to arrive on-site from Ciena's determination of need
        ASPP Engr Dispatch 2: Up to 2 dispatch (16 hours) per year. Hours exceeding allocation billed @ 1.5x hourly rate w/ 8-hr min
        N

    Actually there are about 600 rows in that Excel file, but the value above only appears after row 450. So the datatype of that column is inferred as nvarchar(255) by default instead of ntext, and I am getting an error. Anybody please help out... Thanks in advance...

    Read the article

  • ASP code for uploading data

    - by vicky
    Hello everyone, I have this code for uploading an Excel file and saving the data into a database. I am not able to write the code for the database entry; someone please help.

        <%
        If (Request("FileName") <> "") Then
            Dim objUpload, lngLoop
            Response.Write(Server.MapPath("."))
            If Request.TotalBytes > 0 Then
                Set objUpload = New vbsUpload
                For lngLoop = 0 To objUpload.Files.Count - 1
                    'If accessing this page anonymously, the internet guest
                    'account must have write permission to the path below.
                    objUpload.Files.Item(lngLoop).Save "D:\PrismUpdated\prism_latest\Prism\uploadxl\"
                    Response.Write "File Uploaded"
                Next

                Dim FSYSObj, folderObj, process_folder
                process_folder = Server.MapPath(".") & "\uploadxl"
                Set FSYSObj = Server.CreateObject("Scripting.FileSystemObject")
                Set folderObj = FSYSObj.GetFolder(process_folder)
                Set filCollection = folderObj.Files

                Dim SQLStr
                SQLStr = "INSERT ALL INTO TABLENAME "
                For Each file In filCollection
                    file_name = file.Name
                    path = folderObj & "\" & file_name
                    Set objExcel_chk = CreateObject("Excel.Application")
                    Set ws1 = objExcel_chk.Workbooks.Open(path).Sheets(1)
                    row_cnt = 1
                    'For row_cnt = 6 To 7
                    '    If ws1.Cells(row_cnt, col_cnt).Value <> "" Then
                    '        col = col_cnt
                    '    End If
                    'Next
                    While (ws1.Cells(row_cnt, 1).Value <> "")
                        For col_cnt = 1 To 10
                            SQLStr = SQLStr & "VALUES('" & ws1.Cells(row_cnt, 1).Value & "')"
                        Next
                        row_cnt = row_cnt + 1
                    Wend
                    'objExcel_chk.Quit
                    objExcel_chk.Workbooks.Close
                    Set ws1 = Nothing
                    objExcel_chk.Quit
                    Response.Write(SQLStr)
                    'Set filobj = FSYSObj.GetFile(sub_fol_path & "\" & file_name)
                    'filobj.Delete
                Next
            End If
        End If
        %>

    Please tell me how to save this Excel data to the Oracle database. Any help would be appreciated.
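
    As a hedged sketch from the editor (not tested against this schema): classic ASP can perform the inserts with a parameterized ADODB command driven by the same row loop. The table name, column names and connection string below are hypothetical placeholders:

        <%
        Dim conn, cmd, colIdx
        Set conn = Server.CreateObject("ADODB.Connection")
        conn.Open "Provider=OraOLEDB.Oracle;Data Source=MYDB;User Id=usr;Password=pwd;"

        Set cmd = Server.CreateObject("ADODB.Command")
        Set cmd.ActiveConnection = conn
        cmd.CommandText = "INSERT INTO upload_rows " & _
                          "(c1,c2,c3,c4,c5,c6,c7,c8,c9,c10) " & _
                          "VALUES (?,?,?,?,?,?,?,?,?,?)"
        For colIdx = 1 To 10
            ' 200 = adVarChar, 1 = adParamInput, 4000 = max length
            cmd.Parameters.Append cmd.CreateParameter("p" & colIdx, 200, 1, 4000)
        Next

        ' Replaces the SQLStr concatenation inside the existing While loop:
        row_cnt = 1
        While (ws1.Cells(row_cnt, 1).Value <> "")
            For colIdx = 1 To 10
                cmd.Parameters(colIdx - 1).Value = ws1.Cells(row_cnt, colIdx).Value & ""
            Next
            cmd.Execute
            row_cnt = row_cnt + 1
        Wend
        conn.Close
        %>

    A parameterized command also avoids the quoting problems (and SQL injection risk) of building the INSERT string by concatenation.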

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post from the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010. A new Declarative Data Source Engine will be added to Telerik Reporting that will allow full control over data management and deliver significant gains in rendering performance and memory consumption. Some of the engine's new features will be:

    - Data source parameters - these parameters will be used to limit the data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side; only the queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on the queried data only, rather than on all data. As a result, only the queried data needs to be stored in memory, versus the whole dataset, which was the case with the old approach.

    - Support for stored procedures - they will assist in achieving a consistent implementation of logic across applications, and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored procedures will not only save development time, but will also improve performance, because each stored procedure is compiled on the database server once and then reused. In Telerik Reporting, stored procedures can also be parameterized, with elements of the SQL statement bound to parameters. These parameterized SQL queries are handled through the data source parameters and evaluated at run time. Using parameterized SQL queries will improve performance and decrease the memory footprint of your application, because they are applied directly on the database server and only the necessary data is downloaded to the middle tier or client machine.

    - Calculated fields through expressions - with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user-defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports.

    - Improved performance and an optimized in-memory OLAP engine - the new data source will come with several improvements in how aggregates are calculated and memory is managed. As a result, you may see between a 30% (for simpler reports) and 400% (for calculation-intensive reports) gain in rendering performance, and about a 50% decrease in memory consumption.

    - Full design-time support through wizards - declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reporting's essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available in VS2005/VS2008/VS2010 design-time.

    More features will be revealed on the product's "what's new" page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th, which will be dedicated to the updates in Telerik Reporting Q1 2010.

    Read the article

  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott

    This article describes an approach to the management of cross reference data for BizTalk. Some articles about the BizTalk Cross Referencing features can be found here:

    - http://home.comcast.net/~sdwoodgate/xrefseed.zip
    - http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx
    - http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx

    Options

    Current options for managing this data include:

    - Maintaining xml files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility.
    - Use of user interfaces that have been developed to manage this data: the BizTalk Cross Referencing Tool and the XRef XML Creation Tool.

    However, there are the following issues with the above options:

    - The BizTalk Cross Referencing Tool requires a separate database to manage, and the XRef XML Creation Tool has no means of persisting the data settings.
    - The BizTalk Cross Referencing Tool generates integers in the common id field. I prefer to use a string (e.g. acme.country.uk), as this is more readable (see naming conventions below).
    - Both UI tools continue to use BTSXRefImport.exe. This utility replaces all xref data, which can be a problem in continuous integration environments that support multiple clients or BizTalk target instances. If you upload the data for one client, it destroys the data for another client; yet in TFS, where builds run concurrently, this would break unit tests.

    Alternative Approach

    In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client data.

    Naming Conventions

    All data keys use namespace prefixing. The pattern will be <companyName>.<dataType>. The naming convention is to use lower casing for all items. The data must follow this pattern to isolate it from other companies' cross-reference data. The list below shows some sample data. (Note: this data uses the 'ID' cross-reference tables; the same principles apply to the 'value' cross-referencing tables.)

    - xref_AppType.appType - Application Types - e.g. acme.erp, acme.portal, acme.assetmanagement
    - xref_AppInstance.appInstance - Application Instances (each will have a corresponding application type) - e.g. acme.dynamics.ax, acme.dynamics.crm, acme.sharepoint, acme.maximo
    - xref_IDXRef.idXRef - Holds the cross reference data types - e.g. acme.taxcode, acme.country
    - xref_IDXRefData.CommonID - Holds each cross reference type value used by the canonical schemas - e.g. acme.vatcode.exmpt, acme.vatcode.std, acme.country.usa, acme.country.uk
    - xref_IDXRefData.AppID - Holds the value for each application instance and each xref type - e.g. GBP, USD

    SQL Scripts

    The data to be stored in the BizTalkMgmtDb xref tables is managed by SQL scripts stored in a database project in the Visual Studio solution:

    - Build.cmd - a sqlcmd script to deploy data by running the SQL scripts below. (This can be run as part of the MSBuild process.)
    - acme.purgexref.sql - SQL script to clear acme.* data from the xref tables. As such, this will not impact data for any other company.
    - acme.applicationInstances.sql - SQL script to insert application type and application instance data.
    - acme.vatcode.sql, acme.country.sql, etc. - a separate SQL script to insert each cross-reference data type and the application-specific values for these types.

    Read the article

  • Master Note for Generic Data Warehousing

    - by lajos.varady(at)oracle.com
    The complete and most recent version of this article can be viewed in the My Oracle Support Knowledge Section: Master Note for Generic Data Warehousing [ID 1269175.1].

    In this document: Purpose; Master Note for Generic Data Warehousing (Components covered; Oracle Database Data Warehousing specific documents for recent versions; Technology Network Product Homes; Master Notes available in My Oracle Support; White Papers; Technical Presentations).

    Platforms: 1-914CU. This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.

    Applies to: Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.2 - Release: 9.2 to 11.2. Information in this document applies to any platform.

    Purpose

    Provide a navigation path.

    Master Note for Generic Data Warehousing

    Components covered:

    - Read Only Materialized Views
    - Query Rewrite
    - Database Object Partitioning
    - Parallel Execution and Parallel Query
    - Database Compression
    - Transportable Tablespaces
    - Oracle Online Analytical Processing (OLAP)
    - Oracle Data Mining

    Oracle Database Data Warehousing specific documents for recent versions: 11g Release 2 (11.2), 11g Release 1 (11.1), 10g Release 2 (10.2), 10g Release 1 (10.1), 9i Release 2 (9.2), 9i Release 1 (9.0).

    Technology Network Product Homes: Oracle Partitioning, Advanced Compression, Oracle Data Mining, Oracle OLAP.

    Master Notes available in My Oracle Support

    These technical articles have been written by Oracle Support Engineers to provide proactive and top-level information and knowledge about the components of the database we handle under "Database Datawarehousing":

    - Note 1166564.1 Master Note: Transportable Tablespaces (TTS) -- Common Questions and Issues
    - Note 1087507.1 Master Note for MVIEW 'ORA-' error diagnosis, for Materialized View CREATE or REFRESH
    - Note 1102801.1 Master Note: How to Get a 10046 trace for a Parallel Query
    - Note 1097154.1 Master Note Parallel Execution Wait Events
    - Note 1107593.1 Master Note for the Oracle OLAP Option
    - Note 1087643.1 Master Note for Oracle Data Mining
    - Note 1215173.1 Master Note for Query Rewrite
    - Note 1223705.1 Master Note for OLTP Compression
    - Note 1269175.1 Master Note for Generic Data Warehousing

    White Papers

    Transportable Tablespaces:

    - Database Upgrade Using Transportable Tablespaces: Oracle Database 11g Release 1 (February 2009)
    - Platform Migration Using Transportable Database: Oracle Database 11g and 10g Release 2 (August 2008)
    - Database Upgrade using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007)
    - Platform Migration using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007)

    Parallel Execution and Parallel Query:

    - Best Practices for Workload Management of a Data Warehouse on the Sun Oracle Database Machine (June 2010)
    - Effective resource utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2 (Feb 2010)
    - Parallel Execution Fundamentals in Oracle Database 11g Release 2 (November 2009)
    - Parallel Execution with Oracle Database 10g Release 2 (June 2005)

    Oracle Data Mining:

    - Oracle Data Mining 11g Release 2 (March 2010)

    Partitioning:

    - Partitioning with Oracle Database 11g Release 2 (September 2009)
    - Partitioning in Oracle Database 11g (June 2007)

    Materialized Views and Query Rewrite:

    - Oracle Materialized Views and Query Rewrite (May 2005)
    - Improving Performance using Query Rewrite in Oracle Database 10g (December 2003)

    Database Compression:

    - Advanced Compression with Oracle Database 11g Release 2 (September 2009)
    - Table Compression in Oracle Database 10g Release 2 (May 2005)

    Oracle OLAP:

    - On-line Analytic Processing with Oracle Database 11g Release 2 (September 2009)
    - Using Oracle Business Intelligence Enterprise Edition with the OLAP Option to Oracle Database 11g (July 2008)

    Generic:

    - Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture (February 2010)
    - Optimizing and Protecting Storage with Oracle Database 11g Release 2 (November 2009)
    - Oracle Database 11g for Data Warehousing and Business Intelligence (August 2009)
    - Best practices for a Data Warehouse on Oracle Database 11g (September 2008)

    Technical Presentations

    A selection of ObE - Oracle by Example documents:

    - Generic: Using Basic Database Functionality for Data Warehousing (10g)
    - Partitioning: Manipulating Partitions in Oracle Database (11g Release 1); Using High-Speed Data Loading and Rolling Window Operations with Partitioning (11g Release 1); Using Partitioned Outer Join to Fill Gaps in Sparse Data (10g)
    - Materialized View and Query Rewrite: Using Materialized Views and Query Rewrite Capabilities (10g); Using the SQLAccess Advisor to Recommend Materialized Views and Indexes (10g)
    - Oracle OLAP: Using Microsoft Excel With Oracle 11g Cubes (how to analyze data in Oracle OLAP Cubes using Excel's native capabilities); Using Oracle OLAP 11g With Oracle BI Enterprise Edition (creating OBIEE metadata for OLAP 11g Cubes and querying those in BI Answers); Building OLAP 11g Cubes; Querying OLAP 11g Cubes; Creating Interactive APEX Reports Over OLAP 11g Cubes

    A selection of presentations from the BIWA website:

    - Extreme Data Warehousing With Exadata by Hermann Baer (July 2010) (slides 2.5MB, recording 54MB)
    - Data Mining Made Easy! Introducing Oracle Data Miner 11g Release 2 New "Work flow" GUI by Charlie Berger (May 2010) (slides 4.8MB, recording 85MB)
    - Best Practices for Deploying a Data Warehouse on Oracle Database 11g by Maria Colgan (December 2009) (slides 3MB, recording 18MB, white paper 3MB)

    Read the article

  • How to give my user permission to add/edit files on local apache server? [duplicate]

    - by Logan
    Possible Duplicate: How to make Apache run as current user

    I'm setting up my local test server again, and I seem to have forgotten how to successfully set up the LAMP server. I installed the LAMP server via the tasksel command and configured the /var/www directory according to a guide I found:

        After the LAMP server installation you will need write permissions to the
        /var/www directory. Follow these steps to configure permissions.

        Add your user to the www-data group:
            sudo usermod -a -G www-data <your user name>

        Now add the /var/www folder to the www-data group:
            sudo chgrp -R www-data /var/www

        Now give write permissions to the www-data group:
            sudo chmod -R g+w /var/www

    So the logan user is now part of the www-data group, and the file/folder permissions look like the output below:

        logan@computer:/var/www$ ls -lart
        total 172
        -rw-r--r--  1 www-data www-data  1997 Oct 23  2010 wp-links-opml.php
        -rw-r--r--  1 www-data www-data  3177 Nov  1  2010 wp-config-sample.php
        -rw-r--r--  1 www-data www-data  3700 Jan  8  2012 wp-trackback.php
        -rw-r--r--  1 www-data www-data   271 Jan  8  2012 wp-blog-header.php
        -rw-r--r--  1 www-data www-data   395 Jan  8  2012 index.php
        -rw-r--r--  1 www-data www-data  3522 Apr 10  2012 wp-comments-post.php
        -rw-r--r--  1 www-data www-data 19929 May  6  2012 license.txt
        -rw-r--r--  1 www-data www-data 18219 Sep 11 08:27 wp-signup.php
        -rw-r--r--  1 www-data www-data  2719 Sep 11 16:11 xmlrpc.php
        -rw-r--r--  1 www-data www-data  2718 Sep 23 12:57 wp-cron.php
        -rw-r--r--  1 www-data www-data  7723 Sep 25 01:26 wp-mail.php
        -rw-r--r--  1 www-data www-data  2408 Oct 26 15:40 wp-load.php
        -rw-r--r--  1 www-data www-data  4663 Nov 17 10:11 wp-activate.php
        -rw-r--r--  1 www-data www-data  9899 Nov 22 04:52 wp-settings.php
        -rw-r--r--  1 www-data www-data  9175 Nov 29 19:57 readme.html
        -rw-r--r--  1 www-data www-data 29310 Nov 30 08:40 wp-login.php
        drwxr-xr-x 14 root     root      4096 Dec 24 17:41 ..
        drwx------  9 www-data www-data  4096 Dec 26 16:11 wp-admin
        drwx------  9 www-data www-data  4096 Dec 26 16:11 wp-includes
        -rw-rw-rw-  1 www-data www-data  3448 Dec 26 16:14 wp-config.php
        drwxrwxr-x  5 www-data www-data  4096 Dec 26 16:14 .
        drwx------  6 www-data www-data  4096 Dec 26 16:19 wp-content

    Things work perfectly at http://localhost; I can view the website fine. The thing is that I will be working on a plugin for WordPress, and I don't want to deal with separate owners under the www directory when creating or modifying files/folders. When I give my user ownership of /var/www recursively as logan:www-data, I can create/modify files but cannot view http://localhost - I get a Forbidden error. I'm assuming this is because of Apache's configuration? Considering this is just a local test website, which approach is cleaner or easier: configuring Apache so that I can still view the site after a chown to logan:logan (letting me create files etc. without any sudo commands), or configuring the user groups so that the www-data user acts like my logan user (I don't know how that's possible - maybe putting www-data in the logan group)? Please shed some light on this subject. All I want is to be able to create/modify files under my user and still be able to successfully view http://localhost. I appreciate the help!

    Read the article

  • How do Microsoft Word and Excel run without any installation?

    - by Sathya
    I was having a discussion on bookmarks in Word with a friend, and he suggested I check out his implementation of a query in Word. Since I did not have Microsoft Word installed, I told him I wouldn't be able to test it. To this, he mentioned that he'd send the executables and they would work - I argued that without an installation this would fail. I was rather shocked when he sent me the standalone executables and, on running them, Word actually launched and I was able to use almost every functionality o_0 How is this possible? I've never installed Microsoft Office on my system, and this isn't any "portable" app or VMWare ThinStall (thanks nhinkle, didn't know about this). There are absolutely no Microsoft Office related files - except for winword.exe and excel.exe. Curiously, even Microsoft Excel works fine with just the standalone executable. winword.exe is about 38 MB, while excel.exe is just 35 KB, which makes it even stranger. I'm running Windows XP; the files were from Office 2003. I was discussing this on Chat prior to posting; here's the conversation.

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle's acquisition of Datanomic and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!

    As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

    Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack - a ready set of standard processes with standard interfaces that provide integrated:

    - Address verification and cleansing
    - Individual matching
    - Organization matching

    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries.

    (Figure: excerpt from EDQ's CDS Individual Name Standardization Dictionary)

    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a 'best of both worlds' situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate 'Identity Resolution' and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products.

    Here is a brief guide to the main services provided in the pack.

    Address Verification and Standardization

    (Figure: EDQ's CDS Address Cleaning Process)

    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. The Address Verification processor is wrapped in an EDQ process - this adds significant capabilities over calling the underlying Address Verification API directly, specifically:

    - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    - Optimization of address verification by pre-standardizing data where required
    - Formatting of output addresses into the input address fields normally used by applications
    - Adding descriptions of the address verification and geocoding return codes

    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

    Duplicate Prevention

    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while the application retains control of data integrity and transactional commits. The process is explained below:

    (Figure: EDQ Duplicate Prevention Architecture)

    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).

    Batch Matching

    In order to allow customers to use different match rules in batch versus real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.

    The Future

    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:

    - An out-of-the-box Data Quality Dashboard
    - Even more comprehensive international data handling
    - Address search (suggesting multiple results)
    - Integrated address matching

    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data, and a subscribing Hub stores only the source system IDs, the Foreign Keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems, and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style Hub as well.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules/practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.

    Read the article

  • Best practices for upgrading user data when updating versions of software

    - by Javy
    In my code I check the current version of the software on launch and compare it to the version stored in the user's data file(s). If the software version is newer, I call different methods to update the old data to the newer data version, where necessary. I usually have to write a new conversion method with each update that changes the user data in some way, and I cannot remove the old ones in case someone missed an update, so the app must be able to go through each method call in turn and upgrade the data step by step until it is current. With larger data sets, this could be a problem. In addition, I recently had a brief discussion with another StackOverflow user about this, and he indicated he always appended a date stamp to the filename to manage data versions, although his reasoning as to why this was better than storing the version data in the file itself was unclear. Since I've rarely seen management of user data versions covered in the books I've read, I'm curious what the best practices are for naming user data files and for updating older data to newer versions.
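
    For illustration only, a minimal sketch of the stepwise-upgrade pattern described above, with hypothetical method names - each converter knows how to move the data one version forward, and the loop walks the file up to the current version:

        Const CURRENT_VERSION As Long = 4   ' hypothetical current data version

        Sub UpgradeUserData(userData As Object, ByVal fileVersion As Long)
            ' UpgradeVxToVy are the per-release converters mentioned above
            ' (hypothetical names); each moves the data forward one version.
            Do While fileVersion < CURRENT_VERSION
                Select Case fileVersion
                    Case 1: UpgradeV1ToV2 userData
                    Case 2: UpgradeV2ToV3 userData
                    Case 3: UpgradeV3ToV4 userData
                End Select
                fileVersion = fileVersion + 1
            Loop
        End Sub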

    Read the article

  • Excel hyperlinks can be attached to a range of cells -- what is the use case for this?

    - by John Machin
    In Excel 2003 and 2007 (and presumably 2010), it is possible to attach a hyperlink to a single cell; this is well known. Excel also allows you to select a range for insertion; in that case, clicking on any cell in the range will jump to the target of the hyperlink. I can't find any web reference to this possibility. My question is: what is the use case for being able to do this? My only suggestion: the first worksheet is a menu for the remainder of the workbook, and each worksheet or topic has a hyperlink on the menu sheet. Each hyperlink occupies a 3x3 range of cells to make it easier for users in a hurry to click on the correct link. A side question: interestingly, Excel allows you to overlap ranges. Example: link from A1:C3 to file1, then link from B2:D4 to file2. The overlapped cells (B2:C3) now point to file2; only A1, A2, A3, B1, and C1 still point to file1. No warning is given about the overlap. What is the rationale for this behaviour?
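
    For reference, a sketch of creating such a range-attached link programmatically (an editor's illustration; file1.xlsx is a placeholder), roughly equivalent to selecting A1:C3 and using Insert > Hyperlink:

        Sub AddRangeHyperlink()
            ' Anchor accepts a Range; using a multi-cell range attaches the
            ' link to the whole block, as described in the question.
            ActiveSheet.Hyperlinks.Add _
                Anchor:=ActiveSheet.Range("A1:C3"), _
                Address:="file1.xlsx"
        End Sub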

    Read the article

  • How to import this data set into Excel? (column headings on each row, delimited by a colon)

    - by Anonymous
    I'm trying to import the following data set into Excel. I've had no luck with the text import wizard. I'd like Excel to make id, name, street, etc. the column names and insert each record onto a new row.

        , id: sdfg:435-345, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info

        , id: sdfg:435-345f, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info

    Is there any easy way to do this with Excel? I'm struggling to think of a way to convert this to a conventional CSV easily. As far as I can tell, I'd have to remove the labels from each line, enclose each line in quotes, and then delimit them with commas. Obviously that's made a little more difficult to script, seeing as some fields (the address, for instance) contain comma-delimited data. I'm not good with regex at all. What's the best way to tackle this?
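
    One possible approach, sketched by the editor as a VBA macro: because every field is introduced by a known label, each record can be split on the label positions rather than on commas, which sidesteps the embedded commas in the street and category fields. The sheet name "Raw" (one raw record per cell in column A) and the label list are assumptions based on the sample above:

        Sub ParseColonDelimitedRecords()
            Dim labels As Variant
            labels = Array("id:", "name:", "type:", "street:", "postalcode:", _
                           "city:", "telephoneNumber:", "mobileNumber:", "faxNumber:", _
                           "url:", "email:", "remark:", "geocode:", "category:")
            Dim src As Range, out As Worksheet
            Dim r As Long, c As Long, p1 As Long, p2 As Long
            Dim rec As String, v As String
            Set out = Worksheets.Add
            For c = 0 To UBound(labels)                  ' header row
                out.Cells(1, c + 1).Value = Replace(labels(c), ":", "")
            Next c
            r = 2
            For Each src In Worksheets("Raw").Range("A1", _
                    Worksheets("Raw").Cells(Rows.Count, 1).End(xlUp))
                rec = src.Value
                For c = 0 To UBound(labels)
                    p1 = InStr(rec, labels(c))
                    If p1 > 0 Then
                        p1 = p1 + Len(labels(c))
                        If c < UBound(labels) Then p2 = InStr(p1, rec, labels(c + 1)) Else p2 = 0
                        If p2 = 0 Then p2 = Len(rec) + 1
                        v = Trim$(Mid$(rec, p1, p2 - p1))
                        ' Strip the trailing comma that separates this field from the next label
                        If Right$(v, 1) = "," Then v = Left$(v, Len(v) - 1)
                        out.Cells(r, c + 1).Value = Trim$(v)
                    End If
                Next c
                r = r + 1
            Next src
        End Sub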

    Read the article

  • How to tackle archived WHOIS personal data with opt-out?

    - by defaye
    As far as I understand it, it is possible (in the UK at least) for non-trading individuals to opt out of having their address details displayed in the WHOIS information for a domain. What I want to know is: after opting out, how do individuals combat archived data? Is there any enforcement of this? How many WHOIS websites are there which archive data, and what rights do we have to force them to remove that data without paying absurd fees? And if one capitulates to these scoundrels, what is the point of paying for the removal of archived data if that data can presumably resurface in another WHOIS repository? In other words, what strategy is one supposed to take, besides being wiser after the fact?

    Read the article
