Search Results

Search found 1968 results on 79 pages for 'pickle dump'.

Page 5 of 79

  • Postgres pg_dump dumps database in a different order every time

    - by behrk2
    Hello, I am writing a PHP script (which also uses Linux bash commands) that runs through test cases as follows, using a PostgreSQL database (8.4.2):
    1. Create a DB.
    2. Modify the DB.
    3. Store a database dump of the DB (pg_dump).
    4. Regression-test by repeating steps 1 and 2, taking another dump, and comparing it (diff) with the original dump from step 3.
    However, I am finding that pg_dump does not always dump the database the same way: it dumps things in a different order every time. So when I diff the two dumps, the files compare as different even though they actually hold the same data, just in a different order. Is there a different way I can go about doing the pg_dump? Thanks!
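
    pg_dump makes no guarantee about row order inside a table, so byte-for-byte diffs of two data dumps are unreliable. A minimal sketch of one workaround, assuming a hypothetical table users with primary key id: dump the schema separately and export each table's rows in a deterministic order via COPY.

      # dump the schema only (object order is stable enough to diff)
      pg_dump --schema-only mydb > schema.sql

      # export each table's rows sorted by primary key so diffs are stable
      psql -d mydb -c "COPY (SELECT * FROM users ORDER BY id) TO STDOUT" > users.tsv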

    Read the article

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux prod machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it into my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error:
    Query: CREATE TABLE `auth_group` ( `id` int(11) NOT NULL auto_increment, `name` varchar(80) collate utf8_unicode_ci NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `name` (`name`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
    Error occured at: 2010-06-26 17:16:14, Line no.: 30
    Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121)
    Notice that this is the first CREATE TABLE statement in the SQL dump file. The error occurs both on MySQL Community Server 5.1.41 and 5.1.48, and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what is different in my configuration since the reinstall, or why it has this effect. Restoring from an SQL dump is something I need to keep doing, so I need a permanent fix and not a tailored workaround.
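
    MySQL's errno 121 is "duplicate key on write or update"; hitting it on a CREATE TABLE with InnoDB usually means a foreign-key constraint name in the dump already exists in InnoDB's internal data dictionary (for example, left over from a half-dropped table). The perror utility bundled with MySQL decodes the number; the output shown is typical but may vary by version:

      # decode a MySQL errno from the command line
      perror 121
      # MySQL error code 121: Duplicate key on write or update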

    Read the article

  • Error importing large MySQL dump file which includes binary BLOBs in Windows

    - by Daniel Magliola
    I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing this from the command line, and I'm getting a very weird error:
    ERROR 2005 (HY000) at line 3118: Unknown MySQL server host '+?*á±dÆ-N+Æ·h^ye"p-i+ Z+-$?P+Y.8+|?+l8/l¦¦î7æ¦X¦XE.ºG[ ;-ï?éµ?º+¦¦].?+f9d릦'+ÿG?-0à¡úè?-?ù??¥'+NÑ' (11004)
    I'm attaching the screenshot because I'm assuming the binary data will get lost. I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 GB), which is not insanely large but not trivially small either, and the fact that many of these tables have JPG images in them (which is why the file is 2 GB, for the most part). Also, the dump was taken on a Linux machine and I'm importing it into Windows; I'm not sure if that could add to the problems (I understand it shouldn't). That binary garbage is why I think the images in the file might be a problem, but I've been able to import similar dumps from the same hosting company in the past, so I'm not sure what the issue might be. Also, looking into this file (and line 3118 in particular) is nearly impossible given its size (I'm not really handy with Linux command-line tools like grep, sed, etc.). The file might be corrupted, but I'm not exactly sure how to check it: what I downloaded was a .gz file, which I "tested" with WinRAR, and it says it looks OK (I'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that. Any ideas what could be going on, or how to get past this error? I'm not very attached to the data in particular, since I just want this as a copy for dev, so if I have to lose a few records, I'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel
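
    An error like this often means raw binary BLOB data has broken the text-mode import pipeline. A hedged sketch of a safer round trip using mysqldump's --hex-blob option, which encodes binary columns as hex literals so the dump stays pure text (the database name is a placeholder):

      # on the source (Linux) host: make the dump text-safe
      mysqldump --hex-blob --default-character-set=utf8 mydb | gzip > mydb.sql.gz

      # verify the gzip archive's integrity before importing
      gunzip -t mydb.sql.gz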

    Read the article

  • nginx: dump HTTP requests for debugging

    - by Alexander Gladysh
    Ubuntu 10.04.2, nginx 0.7.65. I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump the whole HTTP request data for such queries (i.e. dump all request headers and the body somewhere I can read them). Can I do this with nginx? Alternatively, is there some HTTP server that allows me to do this out of the box, to which I can proxy these requests by means of nginx?
    Update: Note that this box has a bunch of normal traffic, and I would like to avoid capturing all of it at a low level (say, with tcpdump) and filtering it out later. I think it would be much easier to filter the good traffic first in a rewrite rule (fortunately I can write one quite easily in this case), and then deal with the bogus traffic only. And I do not want to channel bogus traffic to another box just to be able to capture it there with tcpdump.
    Update 2: To give a bit more detail, the bogus requests have a parameter named (say) foo in their GET query string (the value of the parameter can differ). Good requests are guaranteed never to have this parameter. If I can filter by this in tcpdump or ngrep somehow, no problem, I'll use those.
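
    Since the bogus requests are identifiable by a query-string parameter, ngrep (mentioned above) can filter on it directly and print the full request with its headers. A minimal sketch; the interface name and the marker parameter foo are placeholders:

      # capture only requests whose GET query string contains "foo=",
      # printed line-by-line with headers (and any body in the same packets)
      ngrep -d eth0 -q -W byline 'GET /[^ ]*\?[^ ]*foo=' 'tcp dst port 80'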

    Read the article

  • Script not working as expected: files dumped to the wrong path

    - by user3319390
    I have a script that needs to read a log file, pull the matching cname and status code (scode) from each line, and dump the line into per-day files (access and error logs) under $cname/$year/$month/$day/:
    #!/bin/sh
    #base_dir="/home/vizion/Desktop"
    path="/home/vizion/Desktop/adn_DF9D_20140515_0005.log"
    name=$(basename "$path" ".log")
    for x in *.log; do
        year=${x:9:4}
        month=${x:13:2}
        day=${x:15:2}
    done
    while read -r line
    do
        cname=$(echo ${line} | awk '{split($7,c,"/"); print c[3]}')
        scode=$(echo ${line} | awk -F"[ ]" '{print $9}')
        [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
        [[ ( ${scode} -ge 200 ) && ( ${scode} -le 399 ) ]] && {
            # [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
            echo ${line} >> /home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_access.log
        }
        [[ ( ${scode} -ge 400 ) && ( ${scode} -le 599 ) ]] && {
            [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day"
            echo ${line} >> ${cname}_${name}_error.log
        }
    done < $path
    I am able to filter the logs, but they are not being dumped to the exact location; they end up elsewhere. Please suggest a correction to the script.
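
    The likely bug is visible in the error branch: it appends to ${cname}_${name}_error.log with no directory prefix, so error logs land in whatever the current working directory happens to be. A sketch of the corrected append, mirroring the access-log line above it:

      # write the error log into the same dated directory as the access log
      echo ${line} >> /home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_error.log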

    Read the article

  • import svn history

    - by Corey Watts
    I had to wipe our svn server, but I failed to "dump" the repositories before installing a new OS. However, I had a complete backup of every file in each repository. I've since transferred all the old files back over. Unfortunately the version history is completely gone. I still have all the old incremental files, and svn can see each revision with the "verify" command, but I'm wondering if it is possible to import the old history directly from the actual files (not a dump file)?
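
    If the backup preserved the repository's on-disk layout (including the db/ directory with its revision files), svnadmin can usually read it directly, and the missing dump can be regenerated after the fact. A hedged sketch; the repository path is a placeholder:

      # check the restored repository's integrity first
      svnadmin verify /srv/svn/myrepo

      # if verify passes, the full history can still be exported
      svnadmin dump /srv/svn/myrepo > myrepo.dump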

    Read the article

  • Availability of downloadable DNSBL dumps?

    - by mtah
    I need to cross-reference data involving a large number of IP addresses against known public proxies, spam-listed IPs, etc. For obvious performance and network-load reasons, obtaining a regularly updated dump for offline processing would be preferable. I currently use http://www.dnsbl.manitu.net/download/nixspam-ip.dump.gz - a digest of ix.dnsbl.manitu.net with 40,000 entries updated every 15 minutes. I'd like something more substantial, though, so my question is: does such a thing exist?

    Read the article

  • restore -A usage

    - by Martin v. Löwis
    I have created a number of dump files using Linux dump(8), using the -A option to get a table of contents on disk (the backups themselves are on tape). Now I'm trying to look into these archive files using restore -i -A <archive>. However, this insists on asking what tape to use, and complains if I say none. What am I doing incorrectly? I was hoping I could use these archive index files without having to insert the tape.

    Read the article

  • Dump SQL Server Stored Procedures to Files

    - by Jake Wharton
    Is there a non-interactive (read: scriptable) way to dump all stored procedures to disk? We keep versions of our stored procedures in the repository to track changes and for deployment and rollback purposes. Currently, whenever we want to modify a stored procedure, we have to pull it out of the DB directly before beginning the change.
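
    One non-interactive route is to query the procedure source straight out of the catalog views with sqlcmd. A rough sketch, assuming the server name, database name, and output path are placeholders:

      # dump every stored procedure's definition from sys.sql_modules
      sqlcmd -S myserver -d mydb -E -y 0 -o procedures.sql -Q "SELECT m.definition FROM sys.sql_modules m JOIN sys.procedures p ON p.object_id = m.object_id"

    This writes everything to one file; splitting it into one file per procedure for version control would take a little extra scripting around the same query.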

    Read the article

  • Restoring MySQL dump - ERROR 2006 (HY000)

    - by matcheek
    I am just trying to restore a MySQL dump. Below are the command and the error message. Can anybody give me some clues on how to approach this?
    10:54:16 Restoring C:\Users\matcheek\Documents\dumps\Dump20120405-1.sql
    Running: mysql.exe "--defaults-extra-file="d:\temp\tmpbvhy4i.cnf" " --host=127.0.0.1 --user=root --port=3306 --default-character-set=utf8 --comments < "C:\\Users\\matcheek\\Documents\\dumps\\Dump20120405-1.sql"
    ERROR 2006 (HY000) at line 271: MySQL server has gone away
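
    ERROR 2006 during a restore very often means a single statement in the dump (such as a bulk INSERT with blob data) exceeds the server's max_allowed_packet. A hedged sketch of the usual fix, raising the limit on both sides (256 MB is an arbitrary example; the database name is a placeholder):

      # server side: raise the limit on the running server (needs SUPER)
      mysql -u root -p -e "SET GLOBAL max_allowed_packet = 268435456;"

      # client side: pass the same limit while replaying the dump
      mysql --max_allowed_packet=256M -u root -p mydb < Dump20120405-1.sql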

    Read the article

  • LINQPad - Dump extension method - I want one!

    - by gav
    Hi, LINQPad is amazing; particularly useful is the Dump() extension method, which renders objects and structs of almost any type, anonymous or not, to the console. Initially, when I moved to Visual Studio 2010, I tried to make my own Dump method using a delegate to get the values to render for anonymous types etc. It's getting pretty complicated though, and while it was fun and educational at first, what I need now is a solid implementation. Having checked out the LINQPad code in Reflector, I am even more certain that I'm not going to get the implementation right. Is there a free library I can include to provide the Dump functionality? Thanks, Gavin

    Read the article

  • Import SQL dump into MySQL

    - by BryanWheelock
    I'm confused about how to import a SQL dump file. I can't seem to import the database without creating it first in MySQL. This is the error displayed when database_name has not yet been created (username = a user with access to the database on the original server; database_name = the database's name on the original server):
    $ mysql -u username -p -h localhost database_name < dumpfile.sql
    Enter password:
    ERROR 1049 (42000): Unknown database 'database_name'
    If I log into MySQL as root, create the database, the user, and the grants:
    mysql -u root
    create database database_name;
    create user username; # same username as the user from the database I got the dump from
    grant all privileges on database_name.* to username@"localhost" identified by 'password';
    exit
    and then attempt to import the SQL dump again:
    $ mysql -u username -p database_name < dumpfile.sql
    Enter password:
    ERROR 1007 (HY000) at line 21: Can't create database 'database_name'; database exists
    How am I supposed to import the SQL dump file?
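
    The second error suggests the dump file itself contains a CREATE DATABASE statement (at line 21), which collides with the database created by hand. Two hedged ways out:

      # option 1: import as root and let the dump's own CREATE DATABASE run
      mysql -u root -p < dumpfile.sql

      # option 2: keep the hand-made database and strip the statement
      # (sketch; assumes CREATE DATABASE sits alone on its own line)
      grep -v '^CREATE DATABASE' dumpfile.sql | mysql -u username -p database_name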

    Read the article

  • dump csv from sqlalchemy

    - by afilatun
    For some reason, I want to dump a table from a database (sqlite3) in the form of a CSV file. I'm using a Python script with Elixir (based on SQLAlchemy) to modify the database. I was wondering if there is any way to dump the table I use to CSV. I've seen the SQLAlchemy serializer, but it doesn't seem to be what I want. Am I doing it wrong? Should I call the sqlite3 Python module after closing my SQLAlchemy session, to dump to a file instead? Or should I use something homemade?
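
    Since the backing store is SQLite, one simple route that sidesteps SQLAlchemy entirely is the sqlite3 command-line shell's built-in CSV mode. A small sketch; the database and table names are placeholders:

      # export one table to CSV, including a header row
      sqlite3 -header -csv mydb.sqlite "SELECT * FROM mytable;" > mytable.csv

    Run it after the SQLAlchemy session is committed or closed, so the file reflects the latest writes.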

    Read the article

  • Unable to load the Starteam Dump into SVN

    - by ssarivis
    Hi, I have a dump created from StarTeam 2008 R2 (10.4.7-64) using SVN Importer 1.1-M8, but when I try to import the dump it gives me an error:
    * adding path : tags/Test/GH/13_Environment/Process/Capgemini EN Template - Business Case.doc ...svnadmin: File already exists: filesystem 'help\db', transaction '2-2', path 'tags/Test/GH/13_Environment/Process/Capgemini EN Template - Business Case.doc'
    I can see from the svnadmin load output that the file has already been added, so maybe the dump created by SVN Importer is not correct. Can anyone guide me on how to solve this?

    Read the article

  • Generate MySQL data dump in SQL from PHP

    - by Álvaro G. Vicario
    I'm writing a PHP script to generate SQL dumps from my database for version-control purposes. It already dumps the data structure by running the appropriate SHOW CREATE ... query. Now I want to dump the data itself, but I'm unsure about the best method. My requirements are:
    - one record per row
    - rows sorted by primary key
    - SQL that is valid and exact no matter the data type (integers, strings, binary data...)
    - dumps that are identical when the data has not changed
    I can detect and run mysqldump as an external command, but that adds an extra system requirement, and I would need to parse the output to remove headers and footers with dump information I don't need (such as server version or dump date). I'd love to keep my script as simple as possible so it can be held in a standalone file. What are my alternatives?
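
    For comparison, mysqldump can already meet each of those requirements with real flags, which may make external invocation plus light post-processing the least code. A hedged sketch (the database name is a placeholder):

      # one INSERT per row, rows ordered by primary key, binary columns as hex,
      # and no header/footer comments that change between otherwise identical dumps
      mysqldump --skip-extended-insert --order-by-primary --hex-blob --skip-comments mydb > mydb.sql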

    Read the article

  • How do I generate a core dump from apache

    - by blockhead
    I have a script which intermittently returns a white screen of death in Firefox, and "Error 324 (net::ERR_EMPTY_RESPONSE): Unknown error." in Chrome. When I try to access the script using a PHP HTTP client (like Zend_Http_Client), I intermittently get an exception (sorry, I don't have the exact message on me at the moment). I suspect a segfault. This is further supported by lines in my error log that look like this:
    [Thu Mar 18 16:03:02 2010] [notice] child pid 845 exit signal Segmentation fault (11)
    Now, I'm running Red Hat, and I know that Red Hat doesn't generate core dumps out of the box. I followed the instructions at http://kbase.redhat.com/faq/docs/DOC-5353, but I'm not seeing any core dumps. How do I generate a core dump?
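
    Beyond the kernel settings in that Red Hat KB article, Apache itself needs an unlimited core size in the environment that launches httpd, plus a directory it can write dumps into. A hedged sketch (the paths are examples):

      # allow cores and give the apache user somewhere writable to put them
      mkdir -p /tmp/apache-cores
      chown apache:apache /tmp/apache-cores
      ulimit -c unlimited        # in the shell or init script that starts httpd

      # then in httpd.conf, before restarting:
      #   CoreDumpDirectory /tmp/apache-cores
      apachectl restart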

    Read the article

  • Timing issue with AutoHotkey script, fails to dump or open destination file

    - by learnerforever
    I've created an AutoHotkey script to quickly dump selected text into my jot file on the desktop, and I think I'm facing a timing error. The script works like this: select text while reading a text file, browsing the internet, reading a PDF, etc.; hit Ctrl+J; the selected text is dumped into my jot file. When I press Ctrl+J very quickly, the text sometimes doesn't show up in my jot file, and when I keep Ctrl+J pressed for a long time, many copies of the text appear. Could somebody please point out what's wrong with this script and how I can improve it?
    ^j::
    Clipboard := ""  ; clear
    Send, ^c  ; simulate Ctrl+C (= selection in clipboard)
    selection = %Clipboard%  ; save the content of the clipboard
    FileAppend, `n%selection%`n, C:\Users\jagrati\Desktop\jots.txt
    return
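
    The race is likely between Send, ^c and the read of %Clipboard%: the copy may not have completed when FileAppend runs. AutoHotkey's ClipWait command exists for exactly this; a hedged sketch of the hotkey with a one-second wait (kept in AutoHotkey to match the script above):

      ^j::
      Clipboard := ""  ; clearing first lets ClipWait detect the fresh copy
      Send, ^c
      ClipWait, 1  ; wait up to 1 second for the clipboard to fill
      if ErrorLevel  ; ClipWait timed out: nothing was copied
          return
      FileAppend, `n%Clipboard%`n, C:\Users\jagrati\Desktop\jots.txt
      return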

    Read the article

  • COM+ automatic collection of a process dump file and process termination at high call time

    - by immi
    Hi, I want to configure my machine for automatic collection of a process dump file and process termination when call time goes high, as described in http://support.microsoft.com/kb/910904. But after applying the registry settings according to the KB article, I am not getting the required behaviour: only a warning is logged when call time goes high (which is the default behaviour). I am running Windows Server 2003 with SP2. Is there anything I am missing, for example restarting the COM+ runtime? Regards

    Read the article

  • Doing a mysql dump causes swapping issues

    - by DFischer
    I do a mysqldump manually every night. I just noticed that after it finishes, the website is very slow to access. When I take a look at free -mh, I see that the server is now swapping when it wasn't before the mysqldump. What am I to do in this case? Just restart the server every time I back up? That doesn't seem very effective. The raw database dump is 1.1 GB.
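
    The swapping usually comes from mysqldump buffering entire tables in memory and the dump flooding the page cache. A hedged sketch combining two mitigations with standard Linux tools (the database name and backup path are placeholders):

      # --quick streams rows instead of buffering each table in memory;
      # ionice/nice keep the nightly dump from starving the rest of the box
      ionice -c3 nice -n 19 mysqldump --single-transaction --quick mydb | gzip > /backup/mydb.sql.gz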

    Read the article
