Search Results

Search found 4705 results on 189 pages for 'export to csv'.


  • Exporting many tables on Oracle

    - by Adomas
    Hi, I would like to know how to export many tables from an Oracle DB. I use exp.exe, which creates the file expdat.dmp, and so on. I choose to export only tables, and then I have to type in which ones. Is there any way to get all of them? Thanks.
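    A hedged sketch of the two classic exp modes that avoid listing tables one by one; the credentials and connect string below are placeholders, not values from the question:

        rem all tables owned by one schema
        exp scott/tiger@ORCL FILE=expdat.dmp OWNER=scott

        rem the entire database (requires a privileged account)
        exp system/manager@ORCL FILE=full.dmp FULL=Y

    With OWNER or FULL, the TABLES parameter (and its explicit table list) is not needed at all.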

    Read the article

  • Btrieve Date Integer

    - by nmiranda
    Hi everyone, this is my question: I'm migrating data from a Btrieve file (.dat) through Pervasive Control Center, and there is a field that is defined as an integer but holds a date. For example, the date '31/12/2009' (as seen in the legacy system) comes out as the number 733772 when I export it. The legacy system shows the date correctly, but I can't export it in the same format, or at least I can't convert it. Does anybody know how to convert this number in Excel or something?
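    The number looks like a day count whose day 1 is 01/01/0001 (day 733772 of that count is indeed 31/12/2009). Excel's date serials instead start at 01/01/1900 = 1, which for modern dates works out to an offset of 693,594. So one hedged conversion, assuming the exported value sits in A1:

        =A1-693594

    Format the cell as a date and 733772 becomes 31/12/2009 (Excel serial 40178).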

    Read the article

  • How to generate the EC2 certificate

    - by user192048
    While setting up EC2 access, it seems I need two files, the private key and the EC2 certificate:

        $ export EC2_PRIVATE_KEY=~/.ec2/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem
        $ export EC2_CERT=~/.ec2/cert-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem

    However, I did not find anywhere to download or create these. From the documentation: "The command line tools need access to the private key and X.509 certificate you generated after signing up for the Amazon EC2 service." I probably missed that step. Is it possible to generate it again?
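    If your account allows uploading your own X.509 certificate through the AWS Security Credentials page (an assumption about your account, not something stated in the question), you can create a fresh key/certificate pair locally; a minimal openssl sketch, with illustrative file names:

        openssl genrsa -out pk-mykey.pem 2048
        openssl req -new -x509 -key pk-mykey.pem -out cert-mykey.pem -days 1095

    Point EC2_PRIVATE_KEY and EC2_CERT at whatever you save the two files as.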

    Read the article

  • In elisp is there a difference between the regex [\\]documentclass and \\documentclass?

    - by mcheema
    I was playing around with the rx function for generating regular expressions from sexps in Elisp, but couldn't figure out how to generate the regular expression "\\documentclass" for use in org-export-latex-classes:

        (rx "\\documentclass")
        (rx "\\" "documentclass")
        (rx (char "\\") "documentclass")

    which, when evaluated, give respectively the following outputs:

        "\\\\documentclass"
        "\\\\documentclass"
        "[\\]documentclass"

    Is "\\documentclass" equivalent to "[\\]documentclass"? I think it is, but am not sure. Can I generate the former using rx? Edit: While the question was valid, I realize my motivation was not, because org-export-latex-classes uses strings, not regular expressions.
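    A quick sanity check, evaluated in *scratch*, suggesting the two notations are equivalent (a character alternative containing only a backslash matches exactly what an escaped backslash does):

        (string-match "\\\\documentclass" "\\documentclass")  ; => 0
        (string-match "[\\]documentclass" "\\documentclass")  ; => 0

    Both regexps match the literal string \documentclass at position 0.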

    Read the article

  • How to save Eclipse settings?

    - by aks
    I am using Rational Software Architect (RSA), which is Eclipse-based. I have changed a lot of settings under Window > Preferences. Now I want to export these settings and apply them directly to another instance of RSA installed on another machine. How do I export and then import them? A way to do this in plain Eclipse would also work.

    Read the article

  • "Blank SQL" error with phpPgAdmin

    - by Hoàng Long
    Here is my problem: I exported a database A_DB using the "export" function of phpPgAdmin. The dump file A_dump.sql includes both the database structure and the data. Then I created another blank database, B_DB, and tried to import A_dump.sql into it. Every time I do that, the transaction fails with no error reported, just: SQL error: In statement: Are there any logs in phpPgAdmin that would let me investigate this problem? I have been searching for an hour but still haven't found anything.
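    One hedged way to narrow it down is to bypass phpPgAdmin and replay the same dump with psql, which prints the failing statement and its line number instead of a blank error (database names from the question; the -U user is a placeholder):

        psql -U postgres -d B_DB -f A_dump.sql

    The PostgreSQL server log (the location depends on the distribution, often a pg_log/ directory under the data directory) should also record whichever statement aborted the transaction.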

    Read the article

  • Most efficient way to move a few SQL Server tables to SQLite?

    - by wom
    I have a fairly large SQL Server database; I'd like to pull 4 tables out and dump them directly into an sqlite.db for remote querying (via nightly batch). I was about to write a script to step through them (most likely on a Unix host, kicked off via cron), but there should be a simpler method to export the tables directly (SQLite is not an option in the included DTS Import/Export wizard). What would be the most efficient method of dumping the SQL Server tables to SQLite via batch?
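    Absent direct wizard support, a short script may be the simplest route after all. A minimal Python sketch, assuming the pyodbc package is available; the server, database, and table names are hypothetical:

        import pyodbc, sqlite3

        SRC = "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
        TABLES = ["Orders", "Customers", "Products", "Inventory"]  # placeholders

        src = pyodbc.connect(SRC)
        dst = sqlite3.connect("export.db")
        for t in TABLES:
            cur = src.cursor()
            cur.execute("SELECT * FROM " + t)
            cols = [d[0] for d in cur.description]
            # SQLite is dynamically typed, so declaring every column TEXT is good enough here
            dst.execute("DROP TABLE IF EXISTS " + t)
            dst.execute("CREATE TABLE %s (%s)" % (t, ", ".join(c + " TEXT" for c in cols)))
            dst.executemany(
                "INSERT INTO %s VALUES (%s)" % (t, ", ".join("?" * len(cols))),
                [tuple(r) for r in cur.fetchall()],
            )
        dst.commit()

    Scheduled from cron, this keeps the whole transfer in one process with no intermediate files.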

    Read the article

  • `export PS1='value'` does not propagate to (Korn) subshells for root?

    - by user319845
    Please consider the following /root/.profile:

        export PS1=value1
        export x=value2

    How come the login shell shows the expected prompt (and $x as value2), while subshells keep showing $x as value2 but $PS1 as '#'? Just in case: I'm trying this under OpenBSD. [Yeah, I know... What on earth am I doing with OpenBSD if I don't know this? Just toying... in an isolated, most definitely non-production VM =).]

    Read the article

  • Understanding NFS4 (Linux server)

    - by drumfire
    I've been a bit bothered by NFS4 on Linux. Some information 'out there' seems to conflict with other information, and other information appears hard to find. So here are a couple of things that caught my attention; hopefully someone out there can shed some light on them. This question focuses exclusively on NFS4 without Kerberos etc.

    1. Exports

    There is ambiguous information in the exports manpage on the structure of /etc/exports. To quote from exports(5): "Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list. The option list is used for all subsequent exports on that line only." What does "subsequent exports on that line only" mean?

    1.2 fsid=0 not required anymore?

    I was searching for fsid when I found a comment on the linux-nfs list stating that fsid=0 is not required anymore. Now I'm just confused: do I need it with nfs4 or not?!

    2. Non-exported directory still mountable

    Say I have the following tree:

        /exp
        /exp/users
        /exp/distr
        /exp/distr/archlinux
        /exp/distr/debian

    And I have the following fstab entries:

        /dev/disk/by-label/users  /mnt/users  ext4  defaults  0 0
        /dev/disk/by-label/distr  /mnt/distr  ext4  defaults  0 0
        /mnt/users                /exp/users  none  bind      0 0
        /mnt/distr                /exp/distr  none  bind      0 0

    And my exports file is exactly this:

        /exp        192.168.1.0/24(fsid=0,rw,async,no_subtree_check,no_root_squash)
        /exp/distr  192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)

    And exportfs -arv shows:

        exporting 192.168.1.0/24:/exp/distr
        exporting 192.168.1.0/24:/exp

    Then why am I able to do this on a client and get no error?

        mount -t nfs4 server:/exp/users /tmp/test

    Even though /exp/users is not exported? I didn't export this directory, and while I don't see the contents of /dev/disk/by-label/users unless I specify crossmnt, I am still able to write to the directory. Everything I write there goes to the underlying directory of /exp/users, which can be seen after umount /exp/users; ls /exp/users.

    3. The odd case of showmount -d server

    As stated by rpc.mountd(8), this command should display directories that are either currently mounted by clients or stale entries in /var/lib/nfs/rmtab, as can be read: "The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receiving a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export. (...) Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab." After reading this I surely wonder:

    - Isn't it terribly insecure to just expose this type of client information?
    - Aren't unaware server admins bound to have an rmtab with a lot of stale clients?
    - Is this the reason that clients that mount nfs4 directories with mount -v see output like "nothing was mounted" even though something was mounted?

    I have a lot of other questions regarding nfs4, but I'll keep it at this for the moment... :)

    Read the article

  • obiee memory usage

    - by user554629
    Heap memory is a frequent customer topic. Here's the quick refresher, oriented towards AIX, but the principles apply to other Unix implementations.

    1. 32-bit processes have a maximum addressability of 4GB, and a usable application heap size of 2-3GB. On AIX it is controlled by an environment variable:

        export LDR_CNTRL=....=MAXDATA=0x080000000   # 2GB (the leading zero is deliberate, not required)

    1a. It is possible to get a 3.25GB heap size for a 32-bit process using @DSA (Discontiguous Segment Allocation):

        export LDR_CNTRL=MAXDATA=0xd0000000@DSA     # 3.25GB, 32-bit only

    One side effect of using AIX segments "c" and "d" is that shared libraries will be loaded privately, and not shared. If you need the additional heap space, this is worth the trade-off. This option is frequently used for 32-bit Java.

    1b. 64-bit processes have no need for the @DSA option.

    2. 64-bit processes can double the 32-bit heap size to 4GB using:

        export LDR_CNTRL=....=MAXDATA=0x100000000   # 1 with 8 zeros

    2a. But this setting would place the same memory limitations on obiee as a 32-bit process.

    2b. The major benefit of 64-bit is to break the binds of 32-bit addressing. At a minimum, use 8GB:

        export LDR_CNTRL=....=MAXDATA=0x200000000   # 2 with 8 zeros

    2c. Many large customers are providing extra safety to their servers by using 16GB:

        export LDR_CNTRL=....=MAXDATA=0x400000000   # 4 with 8 zeros

    There is no performance penalty for providing virtual memory allocations larger than required by the application. If the server only uses 2GB of space in 64-bit, specifying 16GB just provides an upper-bound cushion. When an unexpected user query causes a sudden memory surge, the extra memory keeps the server running.

    3. The next benefit of 64-bit is that you can provide huge thread stack sizes for strange queries that might otherwise crash the server. nqsserver uses fast recursive algorithms to traverse complicated control structures, which means lots of thread space to hold the stack frames.

    3a. Stack frames mostly contain register values; 64-bit registers are twice as large as 32-bit. At a minimum you should quadruple the size of the server stack threads in NQSConfig.INI when migrating from 32- to 64-bit, to prevent a rogue query from crashing the server. Allocate more than is normally necessary, for safety.

    3b. There is no penalty for allocating more stack size than you need; it is just virtual memory. No real resources are consumed until the extra space is needed.

    3c. Increasing thread stack sizes may require the process heap size (MAXDATA) to be increased. Heap space is used for dynamic memory requests and for thread stacks. There is no performance penalty to run with large heap and thread stack sizes. In a 32-bit world, this safety would require careful planning to avoid exceeding 2GB of usable storage.

    3d. Increasing the number of threads also may require additional heap storage. Most thread stacks on obiee are allocated when the server is started, and the real memory usage increases as threads run work.

    Does 2.8GB sound like a lot of memory for an AIX application server? I guess it is what you are accustomed to seeing from "grandpa's applications".

    - One of the primary design goals of obiee is to trade memory for services (db, query caches, etc).
    - 2.8GB is still well under the 4GB heap size allocated with MAXDATA=0x100000000.
    - A 2.8GB process size is also possible even on 32-bit Windows applications.
    - It is not unusual to receive a sudden request for 30MB of contiguous storage on obiee.
    - This is not a memory leak; eventually the nqsserver storage will stabilize, but it may take days to do so.

    vmstat is the tool of choice to observe memory usage. On AIX, vmstat will show something that may be startling to some people: that available free memory (the fre column) is always trending toward zero... no available free memory. Some customers have concluded that "nearly zero memory free" means it is time to upgrade the server with more real memory. After the upgrade, the server again shows very little free memory available. Should you be concerned about this? Many customers are!! Here is what is happening:

    - AIX filesystems are built on a paging model. If you read/write a filesystem block, it is paged into memory (no read/write system calls).
    - This filesystem "page" has its own backing store on disk: the original filesystem block. When the system needs the real memory page holding the file block, there is no need to "page out". The page can be stolen immediately, because the original is still on disk in the filesystem.
    - The filesystem pages tend to collect: every filesystem block that was ever seen since system boot is available in memory. If another application needs the file block, it is retrieved with no physical I/O.

    What happens if the system does need the memory, to satisfy a 30MB heap request by nqsserver, for example? Since the filesystem blocks have their own backing store (not on a paging device), the kernel can just steal any filesystem block, on a least-recently-used basis, to satisfy a new real memory request for "computation pages". No cause for alarm. vmstat is accurately displaying whether all filesystem blocks have been touched and now reside in memory.

    Back to nqsserver: when should you be worried about its memory footprint? Answer: almost never. Stop monitoring it... stop fussing over it... stop trying to optimize it. This is a production application, and nqsserver uses the memory it requires to accomplish the job, based on demand.

    C'mon... never worry? I'm from New York... worry is what we do best. Ok, here is the metric you should be watching, using vmstat: are you paging? There are several columns of vmstat output:

        bash-2.04$ vmstat 3 3
        System configuration: lcpu=4 mem=4096MB
        kthr    memory              page              faults        cpu
        ----- ------------ ------------------------ ------------ -----------
         r  b    avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
         0  0 208492  2600   0   0   0   0    0   0  13   45  73  0  0 99  0
         0  0 208492  2600   0   0   0   0    0   0   9   12  77  0  0 99  0
         0  0 208492  2600   0   0   0   0    0   0   9   40  86  0  0 99  0

    - fre is the "available free memory" indicator that trends toward zero.
    - re is "re-page": the kernel steals a real memory page from one process and immediately repages it back to the original process.
    - pi is "page in": a process memory page previously paged out, now paged back in because the process needs it.
    - po is "page out": a process memory block was paged out, because it was needed by some other process.

    Light paging activity (re, pi, po) is not a cause for worry. Processes get started, need some memory, go away. Sustained paging activity is cause for concern: obiee users are having a terrible day if these counters are always changing.

    Hang on... if nqsserver needs that memory and I reduce MAXDATA to keep the process under control, won't the nqsserver process crash when the memory is needed? Yes, it will. It means that nqsserver is configured to require too much memory, and there are lots of options to reduce the real memory requirement:

    - number of threads
    - size of query cache
    - size of sort

    But I need nqsserver to keep running. Then real memory is over-committed. Many things can cause this:

    - Running all application processes on a single server (DB server, web servers, WebLogic/WebSphere, sawserver, nqsserver, etc.). You could move some of those to another host machine and communicate over the network. The need for real memory doesn't go away; it's just distributed to other host machines.
    - The AIX LPAR is configured with too little memory. The AIX admin needs to provide more real memory to the LPAR running obiee.
    - More memory for this LPAR affects other partitions. Then it's time to visit your friendly IBM rep and buy more memory.

    Read the article

  • Can't turn off Redirected Access on Cluster Shared Volumes (2008 R2 Failover Clustering)

    - by 562networks
    I read up on LH Mode and am still boggled by what it is and what it does. I pass all validation in the Failover Cluster wizard, but in the Event Viewer I get errors with Event IDs 5121 and 1034 related to one of the disks in the CSV for my Hyper-V machines. We have two disks in the CSV for our Hyper-V farm. Everything seems to work just fine, but I'm worried about the Event Viewer errors. I have also read that other people have had problems like mine when turning off Redirected Access.

    Read the article

  • IOError: [Errno 32] Broken pipe

    - by khati
    I got "IOError: [Errno 32] Broken pipe" while writing to a subprocess pipe on Linux. I am using Python to read each line of a CSV file and then write it into a database table. My code is roughly:

        from subprocess import Popen, PIPE

        f = open(path, 'r')
        command = ...  # command to connect to the database
        p = Popen(command, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, env=env)
        query = """COPY myTable(id, name, address) FROM STDIN WITH DELIMITER ';' CSV QUOTE '"';"""
        p.stdin.write(query.encode('ascii'))   # <-- exactly here I get the error:
                                               # IOError: [Errno 32] Broken pipe

    So when I run this program on Linux, I get "IOError: [Errno 32] Broken pipe", yet it works fine when I run it on Windows 7. Do I need to do some configuration on the Linux server? Any suggestions will be appreciated. Thank you.
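    A broken pipe at that write usually means the child process has already exited. A hedged debugging sketch, using only the standard subprocess API: hand the query to communicate() and inspect the child's stderr and exit code instead of writing to the dead pipe directly:

        out, err = p.communicate(query.encode('ascii'))
        print(p.returncode)
        print(err.decode('ascii', 'replace'))

    Whatever the database client printed before dying (bad credentials, an unknown option, etc.) will show up in err.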

    Read the article

  • Migrate AD DS Server 2003 to Server 2008 R2

    - by user2566483
    I would like to get a couple of opinions. I found this article online and want to know if it is good to follow: http://www.msserverpro.com/migrating-active-directory-domain-controller-from-windows-server-2003-sp2-to-windows-server-2008-r2/ A couple of things need to be done:

    1. Move all Active Directory settings from the old Server 2003 server to the new Server 2008 R2 server.
    2. Set up all users on the new server using csvde:

        csvde -f output.csv       (on the old server)
        csvde -i -f output.csv    (on the new server)

    Any advice would be greatly appreciated.

    Read the article

  • Windows: Handling Piped Commands Error Redirection

    - by jpmartins
    Warning: I am no expert at building scripts, and sorry for the lousy English. To generate a CSV from a database query, I'm using the following commands (the ">" characters appear to have been eaten by the forum; reconstructed here):

        ...
        CALL java.exe -classpath ... com.xigole.util.sql.Jisql -user dmfodbc -pf pwd.file -driver com.sybase.jdbc3.jdbc.SybDriver -cstring %constr% -c ; -input 42.sql -formatter csv -delimiter ; 2>>%LOGFILE% | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" >%FicheiroR% 2>>%LOGFILE%
        ...

    I'm using a Windows port of grep. The 2>>%LOGFILE% in both the java and the grep commands causes an error message indicating the file is in use by another process. The ugly workaround I have come up with is to redirect grep's errors to a temporary %LOGFILE%.aux:

        java ... | grep ... 2>%LOGFILE%.aux
        type %LOGFILE%.aux >> %LOGFILE%
        del %LOGFILE%.aux

    What is a better solution?
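    One hedged alternative that avoids having two writers on one file: group the whole pipeline in parentheses and redirect stderr once for the group (cmd.exe applies a redirection placed after a parenthesized block to everything inside it):

        ( java.exe ... | grep ... >%FicheiroR% ) 2>>%LOGFILE%

    Only one handle on %LOGFILE% is ever open this way, and the temporary .aux file with its type/del steps disappears.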

    Read the article

  • PowerShell BitLocker Recovery Key

    - by TheNoobofNoobs
    I'm trying to get a list of all computers that have a BitLocker recovery key (or recovery information, for that matter) populated in their respective fields in AD. I was unable even to start on a script, as I didn't know where to begin. I did find this online, but it doesn't appear to work:

        foreach ($comp in Get-ADComputer -Filter *) {
            Get-ADObject -Filter 'objectClass -eq "msFVE-RecoveryInformation"' -SearchBase $comp.DistinguishedName -Properties msFVE-RecoveryPassword, whenCreated |
                Sort-Object whenCreated |
                Select-Object msFVE-RecoveryPassword -Last 1
        }
        Export-Csv "FilePath.csv"

    Any ideas as to how I can go about this? Running Windows 7, PowerShell 3.0, Windows Server 2008 R2.
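    A hedged rework, assuming the ActiveDirectory module is loaded: the snippet above has two structural problems (the output of a foreach statement cannot feed a pipeline, and Export-Csv is never connected to the loop), both of which go away with ForEach-Object:

        Get-ADComputer -Filter * | ForEach-Object {
            Get-ADObject -Filter 'objectClass -eq "msFVE-RecoveryInformation"' `
                -SearchBase $_.DistinguishedName `
                -Properties msFVE-RecoveryPassword, whenCreated |
                Sort-Object whenCreated |
                Select-Object msFVE-RecoveryPassword -Last 1
        } | Export-Csv "FilePath.csv" -NoTypeInformation

    Computers with no msFVE-RecoveryInformation child objects simply contribute nothing to the output.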

    Read the article

  • Advanced (?) Excel sorting

    - by Preston Grayskull
    First of all, I'd like to admit that I don't really know anything about Excel, but I have tried to look up a solution to this in Excel books and by Googling. Here's what I'm trying to do: I have a really long spreadsheet. There are 7 columns in total, but only two columns that I'm interested in. Here's an example CSV that is much simpler than my actual dataset, but the search/sort is analogous:

        John, Apple
        Dave, Apple
        Dave, Orange
        Steve, Apple
        Steve, Orange
        Steve, Kiwi
        Bob, Apple
        Bob, Banana

    I'm interested in extracting the entire rows (all of the columns) that meet the following criteria:

    - ["Apple"] OR ["Apple" and "Orange"]
    - NOT ["Apple" and "Orange" and anything else]
    - NOT ["Apple" and anything that isn't Orange]

    So with the above CSV, I would get the entire rows for John and Dave, but not Steve and not Bob. I started doing this manually, and will likely finish by the time this question has an answer, but I would like to know this for future reference. Thanks!
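    One hedged approach, assuming names in column A and fruit in column B with data starting in row 2: add a helper column that is TRUE exactly when a person's rows consist of Apple alone or Apple plus Orange, then filter on it:

        =AND(COUNTIFS(A:A,A2,B:B,"Apple")>0, COUNTIF(A:A,A2)=COUNTIFS(A:A,A2,B:B,"Apple")+COUNTIFS(A:A,A2,B:B,"Orange"))

    The first condition requires an Apple row; the second requires that Apple and Orange rows account for all of that person's rows. For the sample data this yields TRUE for John's and Dave's rows and FALSE for Steve's (extra Kiwi) and Bob's (Banana instead of Orange).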

    Read the article

  • Q: MySQL Cluster - Data insertion in NDBCLUSTER table - errors out after 5 million rows

    - by Mata
    MySQL Cluster version: mysql-5.6.11 ndb-7.3.2
    Insert load: 50M-row dataset
    Data nodes: 3

        LOAD DATA INFILE '/input_50m/Table_1_sorted.csv'
        IGNORE INTO TABLE nw_ndb
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'

    We recently set up a new MySQL Cluster and are trying to load data from a flat file, but we get the error "Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER" when inserting 5 million rows into a single table. We are using the "LOAD DATA INFILE" command to load the data into the table from a CSV file. The server (mysqld, ndb nodes) has good hardware: 126 GB RAM, 32 GB allocated to mysqld. We tried the settings below with no effect:

        SET autocommit=0;
        SET FOREIGN_KEY_CHECKS=0;
        SET unique_checks=0;
        SET GLOBAL ndb_batch_size=8*1024*1024;
        SET GLOBAL ndb_cache_check_time = 1000;
        SET GLOBAL ndb_index_stat_cache_entries = 10000000;
        SET SESSION BULK_INSERT_BUFFER_SIZE=256217728;
        SET GLOBAL KEY_BUFFER_SIZE=256217728;

    Any clues?
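    One hedged avenue: with autocommit off, the whole load runs as a single NDB transaction, and NDB caps transaction size via kernel limits such as MaxNoOfConcurrentOperations, which a multi-million-row transaction can exhaust (an assumption about the failure mode, not something confirmed in the question). A common workaround is to split the file and commit per chunk; the chunk size below is illustrative:

        split -l 500000 /input_50m/Table_1_sorted.csv /input_50m/chunk_

    Then run LOAD DATA INFILE on each chunk with autocommit on, so no single transaction has to hold millions of pending row operations.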

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to the Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, fetch data, and save it daily as CSV files. I would like to install these in the cloud and get them running in an automated way, so that they run without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should also upload the CSV files to Amazon S3. Any idea how I can do this? I am not terribly versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
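    A minimal sketch of the automation half, assuming the AWS CLI is installed and configured on the instance; the script path, schedule, and bucket name are placeholders:

        # crontab entry: run the scraper daily at 02:00, then copy its CSVs to S3
        0 2 * * * /usr/bin/python /home/ec2-user/scraper.py && aws s3 cp /home/ec2-user/data/ s3://my-bucket/data/ --recursive

    Edit it into place with crontab -e; cron then runs the job unattended, and the copy to S3 means the CSVs survive an instance crash.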

    Read the article

  • How do I completely turn off Excel 2010 autoformatting?

    - by Samuel
    I am using a lot of CSV files at work with Excel 2010. These have no formatting, so Excel 2010 autoformats all the cells. I've found workarounds, but the ones I have found require action for each file or each cell (i.e. adding a single quote). My current workaround is using the "Show Formulas" option under Formula Auditing on the Formulas tab. This seems to show the raw data (since they are just CSV files, there aren't any formulas). If only I could keep this active so I don't have to keep turning it on.

    Read the article

  • Small-scale database options for .NET

    - by raney
    I have a .NET 4.0/WPF-based application I've developed and maintain for my company that acts as a friendly GUI central point of information, combining information pulled from a couple of SQL databases as well as CSV exports from a few other applications. I would like to build out my own database to support the entirety of the information that the application accesses, so that I could have a service running on my server that reads in the necessary remote SQL info and file exports, providing the user's application with a single database to connect to, as well as removing all of the file handling currently involved in the program (copying new CSV resources from a network location, reading them into memory on each launch). I have complete control and flexibility here as long as the user's experience isn't affected, and this is as much a learning experience as it is tidying up. The caveat is that I don't have much in the way of a budget. Right now I recognize my options to be:

    - SQL Express: I'm comfortable with the server setup, and I like ADO.NET and LINQ to SQL. I feel that I have the least to learn here, but it would let me focus on SQL in a familiar environment. Perhaps in conjunction with Entity Framework?
    - MongoDB: I don't know a whole lot about it, but I've heard the name enough to make me curious. Brief research seems friendly enough, and there is .NET support. I like working with open source projects.

    My questions are:

    - What's popular and extensible right now? I'm not far from starting to job-hunt, and I'd like this project to be relevant going forward.
    - What am I missing? Pros, cons? Other options? What plays well with .NET?
    - What are the things I should be considering, and the questions I should be asking, when making a decision like this?

    Thanks for your time.

    Read the article

  • Reporting Solution in PHP / CodeIgniter - Server side logic vs client side

    - by dot
    I'm building a report for an end user. They would like to see a list of all widgets... but they would also like to see widgets with missing attributes, like missing names or missing sizes. So I was thinking of creating one method that returns JSON data containing all widgets, and then using JavaScript to let them filter the data for missing values, instead of re-querying the database. Ultimately, they need to be able to save all "reports" (filtered versions of the data) to a CSV file. These are the two options I'm mulling over:

    Design 1: Create 3 separate methods in my controller/model, like:

    - get_all_data()
    - get_records_with_missing_names()
    - get_records_with_missing_size()

    When these methods are called, I would display the data on screen and give the user a button to save it to a CSV file.

    Design 2: Create one method called get_all_data(), and then somehow give the user tools in the view to filter the JSON data using tables etc., and then let them save subsets of the data (see the sketch below).

    The reality is that in order to display all the data, I still need to massage it, and therefore I already know which records are missing attributes, so I'd rather not create separate methods for each filter. I'm not sure how I would do that just yet, but at this point I would like to know some pros/cons of each method. Thanks.
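    A hedged sketch of the client-side half of Design 2, with hypothetical endpoint and field names (name, size) on the widgets in the JSON payload:

        // one payload from get_all_data(); each "report" is just a filter over it
        fetch('/reports/get_all_data')
          .then(response => response.json())
          .then(widgets => {
            const missingNames = widgets.filter(w => !w.name);
            const missingSize  = widgets.filter(w => w.size == null);
            // render tables / build the CSV download from these arrays
          });

    Each filtered array can then be serialized to CSV for the save button without another round-trip to the server.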

    Read the article

  • Is it okay to use a language that isn't supported by your company for some tasks?

    - by systempuntoout
    I work for a company that supports several languages: COBOL, VB6, C#, and Java. I use those languages for my primary work, but I often find myself coding minor programs (e.g. scripts) in Python, because I've found it to be the best tool for that type of task. For example: an analyst gives me a complex CSV file to populate some DB tables, so I use Python to parse it and create a DB script. What's the problem? The main problem I see is that a few parts of these quick & dirty scripts are slowly gaining importance, and:

    - My company does not support Python.
    - They're not version controlled (I back them up another way).
    - My coworkers do not know Python.
    - The analysts have even started referencing them in email ("launch the script that exports..."), so they are needed more often than I initially thought.

    I should add that these scripts are just utilities that are not part of the main project; they simply help get trivial tasks done in less time. For my own small tasks they help a lot. In short, if I won the lottery or were in an accident, my coworkers would need to keep the project alive without those scripts; they would spend more time fixing CSV errors by hand, for example. Is this a common scenario? Am I doing something wrong? What should I do?

    Read the article

  • batch: comparing filenames and renaming [migrated]

    - by user2978770
    I'm new to both this platform and batch programming, and I'm slowly but steadily going crazy :-( I'm studying in Germany and just started on a bigger project that mainly consists of analyzing data and finding algorithms in order to maintain a certain function of a system. In order to get started, I got a bunch of recorded data that, unfortunately, is not consistently named. Normally all files (all in one folder) should start with SPY.SPYNODE.SIDE and then go on with the specific names for each value or variable. However, the data logger messed up a couple of times and produced weird names like SP0E1A~1.csv (all files are .csv files). And that's when I figured that instead of renaming a couple of thousand files manually, I could "easily" use a batch file to do that job for me. And that's exactly when I started to go crazy :-) So far I came up with the following:

        FOR /R %%i in (%CD%) DO (
            set file1=%%i
            if not %file1%=="SPY.SPYNODE.SIDE" DO (
                set /p "filename" < %file1%
                rename %file1% %filename%
            )
        )

    So what I want it to do is this (in pseudocode):

    1. Look through the whole folder, file by file.
    2. Save the filename in the variable file1.
    3. If file1 does not partially equal SPY.SPYNODE.SIDE, open the file and save the first line (which contains the correct name of the file) in the variable filename.
    4. Rename the file with the correct filename.

    But so far it doesn't really work, and I don't know why. Could anybody give me a hint or some advice on how I should proceed? I really appreciate any kind of help!
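    A hedged rework under the question's own assumption that the first line of each misnamed file holds the correct name. Note three fixes relative to the attempt above: variables set inside a loop need delayed expansion (!var! instead of %var%), the if statement takes no DO, and set /p reads its value from a redirected file:

        @echo off
        setlocal EnableDelayedExpansion
        for %%F in (*.csv) do (
            set "name=%%~nF"
            rem skip files that already start with SPY.SPYNODE.SIDE (16 characters)
            if /i not "!name:~0,16!"=="SPY.SPYNODE.SIDE" (
                set /p newname=<"%%F"
                ren "%%F" "!newname!.csv"
            )
        )

    This assumes the first line contains a bare name without an extension; adjust the ren target if it already includes .csv.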

    Read the article

  • Where do I find scripts generated by SharePoint MCMS Migration Profiles

    - by HipCzeck
    I am attempting to migrate data from a Microsoft Content Management Server (MCMS) 2002 instance into a new Microsoft Office SharePoint Server (MOSS) 2007 installation, using the Manage Microsoft Content Management Server Migration Profiles tool in the Operations space of MOSS Central Administration. When analyzing the profile, I receive 4 warnings, all of which may be safely ignored, but when I actually execute the migration profile, I get the same warnings and an additional error with the description: Line 6: Incorrect syntax near ';'. I have seen this error numerous times when mucking about in SQL Server and recognize it as a Transact-SQL error message, but I can't find the actual SQL statement being executed, so I cannot determine the source of the error.

    EDIT: After enabling verbose logging on the MCMS 2002 Migration category and poring through the Unified Logging Service (ULS) logs, I received a more complete stack trace at the point of the error, and a couple more anomalies, listed below.

    Anomalies: The following is an abbreviated listing from the ULS logs around the time of the pre-migration analysis:

        01 MCMS 2002 Migration Verbose Start ConnectionCheck
        02 MCMS 2002 Migration Verbose End ConnectionCheck
        03 MCMS 2002 Migration Verbose Start DatabaseCheck
        04 MCMS 2002 Migration High Extra table SiteDeployLock will not be migrated
        05 MCMS 2002 Migration High Analysis: Extra index PK__SiteDeployLock__05D8E0BE
        06 MCMS 2002 Migration Verbose End DatabaseCheck
        07 MCMS 2002 Migration Medium Pre-migration analysis: RootCheckTask is skipped because database check is blocked.
        08 MCMS 2002 Migration Medium Pre-migration analysis: RightsGroupNameCheckTask is skipped because database check is blocked.
        09 MCMS 2002 Migration Medium Pre-migration analysis: InvalidNameCheckTask is skipped because database check is blocked.
        10 MCMS 2002 Migration Medium Pre-migration analysis: LeafNameCheckTask is skipped because database check is blocked.
        11 MCMS 2002 Migration Medium Pre-migration analysis: LeafLengthCheckTask is skipped because database check is blocked.
        12 MCMS 2002 Migration Medium Pre-migration analysis: TemplateNameCheckTask is skipped because database check is blocked.
        13 MCMS 2002 Migration Medium Pre-migration analysis: TemplateCollisionCheckTask is skipped because database check is blocked.
        14 MCMS 2002 Migration Medium Pre-migration analysis: PlaceholderCheckTask is skipped because database check is blocked.
        15 MCMS 2002 Migration Medium Pre-migration analysis: CheckedOutItemsCheckTask is skipped because database check is blocked.
        16 MCMS 2002 Migration Medium Pre-migration analysis: SubmittedItemsCheckTask is skipped because database check is blocked.
        17 MCMS 2002 Migration Medium Pre-migration analysis: DeletedItemsCheckTask is skipped because database check is blocked.
        18 MCMS 2002 Migration Medium Pre-migration analysis: UserCheckTask is skipped because database check is blocked.
        19 MCMS 2002 Migration Medium Pre-migration analysis: FileSizeCheckTask is skipped because database check is blocked.
        20 MCMS 2002 Migration Medium Pre-migration analysis: HostHeaderMapCheckTask is skipped because database check is blocked.
        21 MCMS 2002 Migration Verbose Start Server check
        22 MCMS 2002 Migration Verbose End Server check
        23 MCMS 2002 Migration Verbose Start Server emptyness check
        24 MCMS 2002 Migration Verbose End Server emptyness check
        25 MCMS 2002 Migration Medium PreMigrationAnalyzer: Dry run starts
        26 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        27 MCMS 2002 Migration High CleanLockProcedure: connection system lock is null
        28 MCMS 2002 Migration Verbose Finished all tasks
        29 MCMS 2002 Migration High PreMigrationAnalyzer ends with True
        30 MCMS 2002 Migration Verbose Migration profile status is changed to AnalysisPassed

    Specifically, the two High-level alerts on lines 04 and 05 are reflected in the migration report as warnings, both when running pre-migration analysis and when running the migration profile. In addition, two other warnings appear in the migration report indicating two tables containing data (LayoutProperty and NodeLayout) that should be empty. According to the documentation, warnings are not sufficient cause to stop migration from occurring. Other anomalies are on lines 07-20, indicating a series of tests that are skipped "because database check is blocked"; the ULS gives no additional warnings to indicate that the database check was blocked or exited under exceptional circumstances. After switching the profile from pre-migration analysis to exporting, there is one Medium-level warning that LastChangeTime is not set or incorrect (null). As with all the skipped test names and SQL table names from the warnings, the major search engines are unable (with the exception of LayoutProperty) to find any reference to these objects or tests. Finally, the section of the log covering the actual live migration attempt:

        01 MCMS 2002 Migration Medium LastChangeTime is not set or incorrect. (null)
        02 MCMS 2002 Migration Verbose Set export lock
        03 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        04 MCMS 2002 Migration Verbose CleanLockProcedure: end.
        05 MCMS 2002 Migration Verbose Prepare for export
        06 MCMS 2002 Migration Verbose Open connection...
        07 MCMS 2002 Migration Verbose Create temporary stored procedures
        08 MCMS 2002 Migration Verbose Create temporary tables...
        09 MCMS 2002 Migration Verbose Initialize temporary tables...
        10 MCMS 2002 Migration Verbose InitializeTemporaryTables: start
        11 MCMS 2002 Migration Verbose Initialize export table...
        12 MCMS 2002 Migration Verbose InitializeExportTable: start
        13 MCMS 2002 Migration Verbose CleanLockProcedure: start.
        14 MCMS 2002 Migration Verbose CleanLockProcedure: end.
        15-17 MCMS 2002 Migration High Migration throws exception: Line 6: Incorrect syntax near ';'.. (truncated stack trace; full version below)
        18 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. Start.
        19 MCMS 2002 Migration Verbose MigrationProfile: GetInstance. End.
        20 MCMS 2002 Migration Verbose Migration profile status is changed to Failed

    The stack trace of the failed parsing of the SQL command appears on lines 15-17; here is the full, cleaned-up version:

        Migration throws exception: Line 6: Incorrect syntax near ';'..
          at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
          at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
          at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
          at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
          at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
          at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteImmediate(String command)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationBatchCommand.ExecuteWaitingCommands()
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDBSerializer.SerializeSelectedExportObject(StringCollection objectAttribs)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeExportTable(ScopeType scopeType)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeTemporaryTables(DateTime lastChangeTime)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis, SqlConnection connection)
          at Microsoft.SharePoint.Publishing.Internal.Administration.MigrationDataAccess.InitializeDatabase(DateTime lastChangeTime, Boolean isAnalysis)
          at Microsoft.SharePoint.Publishing.Administration.ContentMigration.Export(MigrationDataAccess dataAccess)
          at Microsoft.SharePoint.Publishing.Administration.ContentMigration.MigrateInternal()

    None of this log information indicates which SQL command failed the parser check. I've checked the SQL servers hosting the source and destination databases for a trace of the query, but neither seems to have triggered the parse-failure condition; that appears to have happened on the SharePoint server. Are there any other locations I should investigate that might tell me where to find the source of the error?

    Read the article
