Search Results

Search found 2190 results on 88 pages for 'pg dump'.

Page 17 of 88

  • Making evince the default application for embedding PDFs in firefox in ubuntu

    - by Seamus
    While I'm on the subject of complaining about things that upgrading to Ubuntu 10.04 broke (see here), another problem that happened on moving from 9.04 to 9.10 (I think) is this: Firefox used to open embedded PDF files with evince and that was fine. Now it opens them with okular, which I don't like for several reasons (the Pg Up and Pg Dn keys don't work, and I don't like the KDE save-file dialogue box because I'm used to GNOME). So my question is: how do I change which program is used to embed PDFs in Firefox in Ubuntu?
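
    For reference, a minimal sketch of changing the desktop-wide PDF handler with xdg-utils. The assumption here is that in-page embedding in Firefox of that era goes through a browser plugin (mozplugger on many setups), so the plugin's own configuration may also need adjusting; treat this as a starting point rather than a confirmed fix:

        # See which application currently owns the PDF MIME type
        xdg-mime query default application/pdf

        # Make Evince the desktop default (evince.desktop ships with Ubuntu's evince package)
        xdg-mime default evince.desktop application/pdf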

    Read the article

  • Can snort output an alert for a portscan (sfPortscan) to syslog?

    - by Jamie McNaught
    I've been working on this for too long now. I'm sure the answer should be obvious, but... The Snort manual (http://www.snort.org/assets/125/snort_manual-2_8_5_1.pdf) lists two logging outputs for sfPortscan on pg 39 (pg 40 according to Acrobat Reader): "Unified Output" and "Log File Output". I'm guessing the former refers to the "unified" output mode, which makes me think the answer is "No, snort cannot output alerts for detected portscans to syslog." The config file I've been using is (yes, very basic I know):

        alert tcp any 80 -> any any (msg:"TestTestTest"; content: "testtesttest"; sid:123)
        preprocessor sfportscan: proto { all } \
            memcap { 10000000 } \
            scan_type { all } \
            sense_level { high } \
            logfile { pscan.log }

    A simple nmap triggers output to pscan.log. Can anyone confirm this? Or point out how I do this?
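
    For reference, a hedged sketch of Snort's alert_syslog output plugin line; whether sfPortscan events (generator 122) actually reach syslog through it is exactly the open question here, so this is something to test rather than a confirmed answer (facility and priority are examples):

        # snort.conf - route alerts to syslog
        output alert_syslog: LOG_AUTH LOG_ALERT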

    Read the article

  • Exported CSV file columns are not lined up correctly with pgAdmin

    - by user938363
    We exported a pg 9.3 table to a CSV file in pgAdmin. The problem is that from about the 10th line, the columns were misaligned and did not line up correctly with the columns above. We tried a few times and every output had the same problem. We followed the instructions at http://www.question-defense.com/2010/10/15/how-to-export-from-pgadmin-export-pgadmin-data-to-csv for the export. The only difference is that UTF8 was selected instead of the local charset. What's the right way to export CSV in pg?
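
    For reference, a minimal sketch of exporting the same table straight from psql, which produces properly quoted CSV independent of the GUI settings (table, database and file names are placeholders):

        psql -U myuser -d mydb \
          -c "\copy mytable TO 'mytable.csv' WITH CSV HEADER"

    Misaligned rows in a viewer are often just embedded commas or line breaks inside text fields; with proper CSV quoting the file parses correctly even if it doesn't line up visually in a plain-text editor.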

    Read the article

  • Deleted All Snapshots, Now Won't Boot VM with Snapshot not found error

    - by Jharwood
    I've just tried deleting the snapshots from this virtual machine running ESXi 5, so that I can grow the thick-provisioned partition. I now get the error message below when I try to start the VM, and the VM also can't be grown above 0 MB, I assume for the same reason. I've checked the datastore and the original VMDK is still there.

        Reason: The system cannot find the file specified.
        Cannot open the disk 'VM1-PG-000002.vmdk' or one of the snapshot disks it depends on.
        VMware ESX cannot find the virtual disk "VM1-PG-000002.vmdk". Verify the path is valid and try again.

    How do I tell ESXi 5 to use the proper VMDK?
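
    For reference, a hedged sketch of where the stale reference usually lives, assuming SSH access to the host; the datastore path is a placeholder, and pointing the VM straight at the base disk discards anything that was only written to the missing snapshot, so this is a place to look rather than a confirmed fix:

        # From the ESXi shell, see which VMDK the VM configuration points at
        grep fileName /vmfs/volumes/datastore1/VM1/VM1.vmx

        # e.g. scsi0:0.fileName = "VM1-PG-000002.vmdk"  (a line like this would be edited
        # to reference the base "VM1-PG.vmdk" if the snapshot chain is truly gone)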

    Read the article

  • insert, delete etc. keys not working on Cherry Strait keyboard

    - by Brabster
    Hey folks, I got a Cherry Strait USB wired keyboard for Xmas and I've been unable to get several keys working under Ubuntu 10.10 or Win XP. The non-working keys include Print Screen, Insert, End, Delete, Pg Up, Pg Dn and Home; I've not been able to identify any others that aren't working. I'm not sure of the best approach to determine what's wrong. Is there any way to confirm whether there is any output from the keyboard in response to pressing those keys, ideally in Ubuntu as that's where I spend most of my time, so that I know if it's a fault with the device itself? (I've tried different USB ports and also hitting those keys whilst in the Keyboard Shortcuts app, with no response to these keys.)
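
    For reference, a minimal sketch of checking for raw key events under Ubuntu; if nothing at all is printed when a suspect key is pressed, the keyboard isn't sending a scancode for it:

        # Run in a terminal, focus the small window that appears, then press the suspect keys
        xev | grep -A2 --line-buffered KeyPress

        # Lower-level check that bypasses X entirely (needs the evtest package; pick the keyboard device)
        sudo evtest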

    Read the article

  • send different object value to different functions

    - by user295189
    I have the code below. I want to send the value1 data (n.value1s, n.value1sIDs, n.value1sNames, n.value1sColors, n.descriptions) to pg.loadLinkedvalue1s(n), and the value2 data to pg.loadLinkedvalue2s(n). How do I do that in JavaScript without having to rewrite the complete function? Please see the code below:

        if(n.id == "row"){
            n.rs = n.parentElement;
            if(n.rs.multiSelect == 0){
                n.selected = 1;
                this.selectedRows = [ n ];
                if(this.lastClicked && this.lastClicked != n){
                    selectionChanged = 1;
                    this.lastClicked.selected = 0;
                    this.lastClicked.style.color = "000099";
                    this.lastClicked.style.backgroundColor = "";
                }
            } else {
                n.selected = n.selected ? 0 : 1;
                this.getSelectedRows();
            }
            this.lastClicked = n;

            // value1 collections
            n.sortOrders = new Array();
            n.value1s = new Array();
            n.value1sIDs = new Array();
            n.value1sNames = new Array();
            n.value1sColors = new Array();
            n.descriptions = new Array();

            // value2 collections
            n.value2s = new Array();
            n.value2IDs = new Array();
            n.value2Names = new Array();
            n.value2Colors = new Array();
            n.value2SortOrders = new Array();
            n.value2Descriptions = new Array();

            var value1s = myOfficeFunction.DOMArray(n.all.value1s.all.value1);
            var value2s = myOfficeFunction.DOMArray(n.all.value1s.all.value2);

            for(var i = 0, j = 0, k = 1; i < value1s.length; i++){
                n.sortOrders[j] = k++;
                n.value1s[j] = value1s[i].v;
                n.value1sIDs[j] = value1s[i].i;
                n.value1sColors[j] = value1s[i].c;
                alert(n.value1sColors[j]);
                var vals = value1s[i].innerText.split(String.fromCharCode(127));

                n.value2SortOrders[j] = k++;
                n.value2s[j] = value2s[i].v;
                n.value2IDs[j] = value2s[i].i;
                n.value2Colors[j] = value2s[i].c;
                var value2Vals = value2s[i].innerText.split(String.fromCharCode(127));

                if(vals.length == 2){
                    alert(n.value1sColors[j]);
                    n.value1sNames[j] = vals[0];
                    n.descriptions[j++] = vals[1];
                }
                if(value2Vals.length == 2){
                    n.value2Names[j] = value2Vals[0];
                    alert(n.value2Names[j]);
                    n.value2Descriptions[j++] = value2Vals[1];
                    alert(n.value2Descriptions[j++]);
                }
            }

            // want to run this with value1 only
            pg.loadLinkedvalue1s(n);
            // want to run this with value2 only
            pg.loadLinkedvalue2s(n);
        }

    Read the article

  • Cannot connect to postgresql on port 5432

    - by Assaf Lavie
    I installed the Bitnami Django stack which included PostgreSQL 8.4. When I run psql -U postgres I get the following error:

        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    PG is definitely running and the pg_hba.conf file looks like this:

        # TYPE  DATABASE    USER        CIDR-ADDRESS          METHOD
        # "local" is for Unix domain socket connections only
        local   all         all                               md5
        # IPv4 local connections:
        host    all         all         127.0.0.1/32          md5
        # IPv6 local connections:
        host    all         all         ::1/128               md5

    What gives? "Proof" that pg is running:

        root@assaf-desktop:/home/assaf# ps axf | grep postgres
        14338 ?        S      0:00 /opt/djangostack-1.3-0/postgresql/bin/postgres -D /opt/djangostack-1.3-0/postgresql/data -p 5432
        14347 ?        Ss     0:00  \_ postgres: writer process
        14348 ?        Ss     0:00  \_ postgres: wal writer process
        14349 ?        Ss     0:00  \_ postgres: autovacuum launcher process
        14350 ?        Ss     0:00  \_ postgres: stats collector process
        15139 pts/1    S+     0:00  \_ grep --color=auto postgres
        root@assaf-desktop:/home/assaf# netstat -nltp | grep 5432
        tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      14338/postgres
        tcp6       0      0 ::1:5432                :::*                    LISTEN      14338/postgres
        root@assaf-desktop:/home/assaf#
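
    For reference, a minimal sketch of two ways to reach a server whose Unix socket lives somewhere other than /var/run/postgresql; the Bitnami install directory shown in the ps output is used here as the likely socket location, which is an assumption to verify:

        # Connect over TCP instead of the Unix socket
        psql -h 127.0.0.1 -p 5432 -U postgres

        # Or point psql at the directory that actually holds .s.PGSQL.5432
        psql -h /opt/djangostack-1.3-0/postgresql -U postgres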

    Read the article

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the payment gateway/PG website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from Hostgator/HG. Options:

    1. I take SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is being shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours' downtime (which is OK).

    2. I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I've got a few doubts in this case:

       a) Would I need to re-register with existing PGs (Paypal, Google Checkout, Authorize.net) if I switch to a subdomain? Re-registering is not an option for me.

       b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved."

    Please advise.

    Read the article

  • OpenSSL Handshake Failure (14094410) - Erroneous Client Certificate Check from Mobile Phone

    - by Clayton Sims
    I'm running a proxy server through Apache with modssl, which we're using to proxy POSTs from mobile devices to another internal server. This works successfully for most clients, but requests from a specific phone model (Nokia 2690) are showing a bizarre handshake failure. It looks as though OpenSSL is either requesting (or attempting to read an unsolicited) client certificate from the phone (which is especially bizarre because j2me's kssl implementation doesn't support client certs). I've disabled client certificates with the SSLVerifyClient none directive in both the virtual host conf and the modssl conf. The trace from error.log on debug level is (details redacted):

        [client 41.220.207.10] Connection to child 0 established (server www.myserver.org:443)
        [info] Seeding PRNG with 656 bytes of entropy
        [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/accept initialization
        [debug] ssl_engine_io.c(1882): OpenSSL: read 11/11 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d0] (BIO dump follows)
        [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1882): OpenSSL: read 49/49 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90db] (BIO dump follows)
        [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 read client hello A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write server hello A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write certificate A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write server done A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 flush data
        [debug] ssl_engine_io.c(1882): OpenSSL: read 5/5 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d0] (BIO dump follows)
        [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1882): OpenSSL: read 2/2 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d5] (BIO dump follows)
        [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+
        [debug] ssl_engine_kernel.c(1879): OpenSSL: Read: SSLv3 read client certificate A
        [debug] ssl_engine_kernel.c(1898): OpenSSL: Exit: failed in SSLv3 read client certificate A
        [client 41.220.207.10] SSL library error 1 in handshake (server www.myserver.org:443)
        [info] SSL Library Error: 336151568 error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        [client 41.220.207.10] Connection closed to child 0 with abortive shutdown (server www.myserver.org:443)

    I've tried enabling all ciphers and all protocols temporarily with modssl, neither of which seemed to be the issue. The phone should be using RSA_RC4_128_MD5 and SSLv3, all of which are available. Am I missing something more fundamental about what's failing here? It seemed like the certificate request might have been part of a renegotiation failure. I tried enabling SSLInsecureRenegotiation On on the virtual host, in case it was an issue of the phone's SSL not supporting the new protocol, but to no avail. Currently running: Apache/2.2.16 (Ubuntu), mod_ssl/2.2.16, OpenSSL/0.9.8o, Apache proxy_html/3.0.1.
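
    For reference, a minimal sketch of reproducing the phone's side of the handshake from another machine, assuming an OpenSSL build that still ships SSLv3 and the RC4-MD5 cipher (modern builds may not):

        # Offer only SSLv3 with RC4-MD5, roughly what the handset should negotiate
        openssl s_client -connect www.myserver.org:443 -ssl3 -cipher RC4-MD5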

    Read the article

  • Subversion - Retrieval of mergeinfo unsupported

    - by jamesthomson
    Hi, I've recently updated my Subversion package on Debian Etch to 1.5.1 via a back-port. I've gone through what I believe are all the appropriate steps but cannot for the life of me get past the following error message when I try to merge:

        Retrieval of mergeinfo unsupported by '.'

    The '.' isn't important, as I get the same message whether I'm SSH'd on to the server or using TortoiseSVN through Windows. I'll take you through what I did to upgrade and test, step by step.

    Update of Subversion: added the following line to /etc/apt/sources.list:

        deb http://www.backports.org/debian etch-backports main contrib non-free

    and then ran:

        apt-get -s -t etch-backports install subversion

    Checked the version of the subversion installation by running svnadmin --version, which gave the following output:

        svnadmin, version 1.5.1 (r32289)
           compiled Dec 11 2008, 18:10:14

    Checked the client too using svn --version and got the following:

        svn, version 1.5.1 (r32289)
           compiled Dec 11 2008, 18:10:14

    OK, so all looking good so far. Now I just need to upgrade the repository. After plenty of research, the most foolproof way to do this seemed to be to dump the repository and then load it again. So here's what I did:

        svnadmin dump /var/svn/repo > repo.dump
        rm -aR /var/svn/repo/*
        svnadmin create /var/svn/repo
        svnadmin load < repo.dump

    All that seemed to work fine. I then checked to see if the repository had been upgraded by looking at the contents of /var/svn/repo/db/format, which gave:

        3
        layout sharded 1000

    Again this indicated a Subversion 1.5 repository, so all looking good. Now I try to do a merge using the Subversion client in Debian:

        svn mergeinfo https://mysvn/repo .

    and I get the following error:

        svn: Retrieval of mergeinfo unsupported by '.'

    I get the same error message whether I'm using the Debian shell on the same server or connecting via TortoiseSVN and a Windows box. If I browse to the repository using my web browser, the version number at the bottom reads: Powered by Subversion version 1.4.2 (r22196). In case it helps, the created date on mod_dav_svn.so is 2009-08-06 18:29. I just cannot figure out why I'm getting this message, so any help pointing me in the right direction would be greatly appreciated. All the forum and mailing list posts I found relating to this error were solved by doing an svnadmin upgrade, though I have actually tried that and still no joy. Thanks in advance, James.
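
    For reference, a hedged sketch of checking whether the Apache side was actually upgraded; the browser footer reporting 1.4.2 suggests mod_dav_svn (shipped separately in libapache2-svn on Debian) is still the old version, which is an inference from the question rather than a confirmed diagnosis:

        # apt-get -s only simulates an install; double-check what is really on the box
        dpkg -l subversion libapache2-svn

        # Pull mod_dav_svn from the same backport and restart Apache
        apt-get -t etch-backports install libapache2-svn
        /etc/init.d/apache2 restart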

    Read the article

  • Free converter for JPEG to PDF

    - by Codeslayer
    Hello all, Is there any free converter available which will take multiple JPEG files and dump them into a single PDF file? The reason for this is that I have multiple scanned JPEG files which I want to combine into a single PDF. I am not scanning the images as TIFF files because they take a huge amount of space. Thanx.
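
    For reference, a minimal sketch using ImageMagick's convert, which is free and handles exactly this; page order follows the shell's file ordering, so name the scans accordingly:

        convert page-01.jpg page-02.jpg page-03.jpg output.pdf
        # or simply:
        convert *.jpg output.pdf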

    Read the article

  • Practically expected transfer rates for sdhc class 6

    - by bobobobo
    I went and bought an expensive SanDisk Extreme III SDHC 8GB class 6 chip. However, when I dump data from the card to the machine via a USB 2.0 cable, it only gets 5.0 MB/second maximum according to Windows 7's Explorer. It can still take up to 20 minutes to dump the card when it's near full. This is so far below the rated 20 MB/s transfer speed I can't believe it. Is this normal, or might I have a defective chip?

    Read the article

  • Convert video to apng/png?

    - by acidzombie24
    I tried to do this with ffmpeg but failed (I also failed at making animated GIFs). Is there a simple-to-use free program (command line is OK) to convert videos to animated PNGs? As long as it doesn't dump the video frame by frame into PNG files and create a monster-size PNG, then I should like it. (I didn't see an option to make ffmpeg not dump every frame.) From the wiki: http://en.wikipedia.org/wiki/APNG
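
    For reference, a minimal sketch with a newer ffmpeg than was available when this was asked; recent builds include an APNG muxer, so no intermediate per-frame PNG files are needed (the -plays 0 option makes the animation loop forever):

        ffmpeg -i input.mp4 -plays 0 output.apng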

    Read the article

  • Is there an extensible structured file analyzer, like network analysis tools?

    - by ???
    There are many network analysis tools like Wireshark, Sniffer Pro and Omnipeek which can dump packet data in a structured manner. I'm writing my own general-purpose file analyzer, which can dump JPEG, PNG, EXE, ELF, ASN.1 DER-encoded files, etc. in a tree style. There are so many file formats in the world that I can't handle them all. So I'm wondering if there's some software already out there, with a pluggable architecture and a large established file-format repository?

    Read the article

  • Lots of dropped packets when tcpdumping on busy interface

    - by Frands Hansen
    My challenge: I need to do tcpdumping of a lot of data - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:

    - Log all traffic in promiscuous mode from 2 interfaces
    - Those interfaces are not assigned an IP address
    - pcap files must be rotated per ~1G
    - When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compresses the data, gives it a reasonable filename and moves it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring disk and when I have 100G left I will start wiping the oldest files - this should be fine.

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel   <-- This is my concern

    How can I avoid so many packets being dropped? These things I did already try or look at:

    - Changed the value of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help - actually it took care of just around half of the dropped packets.
    - I have also looked at gulp - the problem with gulp is that it does not support multiple interfaces in one process and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient and I don't have a machine with 10TB+ RAM that I can run wireshark on, so that's out.

    Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
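
    For reference, a hedged sketch of two knobs that often reduce kernel drops with reasonably recent tcpdump/libpcap builds; the buffer size is an example value to tune, not a recommendation:

        # Enlarge the capture buffer for this run (-B takes KiB units)
        tcpdump -n -B 524288 -C 1000 -z /data/compress.sh -i any \
            -w /data/livedump/capture.pcap $FILTER

        # Make the rmem sysctl changes survive reboots
        echo 'net.core.rmem_max = 33554432' >> /etc/sysctl.conf && sysctl -p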

    Read the article

  • Parsing Wiki XML Dumps ver0.4 just got tough

    - by syed
    Hello, I am trying to parse a Wikipedia XML dump using "Parse-MediaWikiDump-1.0.4" along with the "Wikiprep.pl" script. I guess this script works fine with ver0.3 Wiki XML dumps but not with the latest ver0.4 dumps. I get the following error:

        Can't locate object method "page" via package "Parse::MediaWikiDump::Pages" at wikiprep.pl line 390.

    Also, under the "Parse-MediaWikiDump-1.0.4" documentation at http://search.cpan.org/~triddle/Parse-MediaWikiDump-1.0.4/lib/Parse/MediaWikiDump/Pages.pm, I read: "LIMITATIONS Version 0.4 This class was updated to support version 0.4 dump files from a MediaWiki instance but it does not currently support any of the new information available in those files." Any workarounds would help me get to the next level. Note: one may wonder why we cannot directly use a SAX or StAX parser instead; the Wikipedia dump is a 25GB-plus single file, so stack/memory issues are obvious. Hence the above Perl script resolves that issue, but currently I am stuck with this version problem.

    Read the article

  • WinDbg fails to find symbol file reporting 'unrecognized OMF sig'

    - by sean e
    I have received a 64-bit dump of a 32-bit app that was running on Win7 x64. I am able to load it in WinDbg (hint: !wow64exts.sw) running on a 64-bit OS. The symbols for most of my DLLs are loaded properly. The PDB for one, though, does not load. The same PDB does load properly for the same DLL when reading a 32-bit dump on a different system. I've also confirmed that the DLL and PDB match each other via the chkmatch utility. I tried .symopt +40 but the PDB still didn't load. I did !sym noisy then .reload - WinDbg reported:

        DBGHELP: unrecognized OMF sig: 811f1121
        *** ERROR: Symbol file could not be found.  Defaulted to export symbols

    Any ideas on what to try to get WinDbg to load my PDB when reading a 64-bit dump?
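
    For reference, a hedged sketch of the usual next diagnostic steps in WinDbg; the module name and local PDB path are placeholders, and none of this is a confirmed fix for the OMF-signature message:

        $$ point the symbol path at both the symbol server and the local PDB directory
        .sympath srv*c:\symcache*http://msdl.microsoft.com/download/symbols;c:\local\pdbs
        $$ force a fresh, verbose symbol load for just the problem module
        !sym noisy
        .reload /f mymodule.dll
        $$ show which symbol file, if any, got matched
        lmvm mymodule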

    Read the article

  • StreamWriter Problem - 2 Spaces Written as Hex '20 c2 a0' instead of Hex '20 20'

    - by Daver
    I'm writing a bunch of strings to a file using a StreamWriter, but I've discovered a problem when I look at the created file in hex: one of the spaces (x20) is replaced with a non-breaking space (xc2 a0) when there are 2 spaces separating words. I don't know if this is a big deal, but I would like to know if there is an easy resolution to this? Here's what I'm seeing:

        20 c2 a0 53 57 45 45 50    Dump = "  SWEEP"

    But I would like it to always be:

        20 20 53 57 45 45 50       Dump = "  SWEEP"

    Note that the c2 a0 isn't visible here, but the dump looks something like 'A.' when I use the Notepad++ Hex plugin. Does anyone have any ideas? Cheers and thanks in advance; -Daver

    Read the article

  • SVN: is it possible to delete, for good, a branch that was copied, removed, etc.?

    - by dimus
    I have to remove a branch from svn history for good. Normally I would use:

        svnadmin dump /path/to/repo | svndumpfilter --drop-empty-revs --renumber-revs exclude /branches/bad_branch

    However, this branch was not just created but also moved and then removed, and the dump script fails to process downstream information with messages like:

        Invalid copy source path '/branches/bad_branch'

    So I imagine 2 ways to cope with the problem:

    1. Keep only the last few revisions of the history and put the current repository as an archive on the web.
    2. Make a dump up to the revision where the bad_branch was created and apply the rest of the changes as a patch, therefore losing the history of a few recent commits.

    Is there a better, cleaner way to deal with this?
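
    For reference, a hedged sketch of making option 2 concrete; the revision number and repository URL are placeholders, and everything after the cut still has to be reapplied by hand (as a patch or a single catch-up commit):

        # Find the revision in which the branch first appeared
        svn log -v --stop-on-copy http://svnserver/repo/branches/bad_branch | tail

        # Rebuild a new repository from everything before that revision
        svnadmin dump /path/to/repo -r 0:199 > clean.dump
        svnadmin create /path/to/newrepo
        svnadmin load /path/to/newrepo < clean.dump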

    Read the article

  • Identify cause of hundreds of AJP threads in Tomcat

    - by Rich
    We have two Tomcat 6.0.20 servers fronted by Apache, with communication between the two using AJP. Tomcat in turn consumes web services on a JBoss cluster. This morning, one of the Tomcat machines was using 100% of CPU on 6 of the 8 cores on our machine. We took a heap dump using JConsole, and then tried to connect JVisualVM to get a profile to see what was taking all the CPU, but this caused Tomcat to crash. At least we had the heap dump! I have loaded the heap dump into Eclipse MAT, where I have found that we have 565 instances of java.lang.Thread. Some of these, obviously, are entirely legitimate, but the vast majority are named "ajp-6009-XXX" where XXX is a number. I know my way around Eclipse MAT pretty well, but haven't been able to find an explanation for it. If anyone has some pointers as to why Tomcat may be doing this, or some hints on finding out why using Eclipse MAT, that'd be appreciated!
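
    For reference, a hedged sketch of what a live thread dump (rather than a heap dump) would show next time, since stack traces reveal what the ajp-6009-* threads are actually doing; the pid is a placeholder:

        # Take a few thread dumps a few seconds apart and compare them
        jstack -l <tomcat-pid> > threads-$(date +%s).txt

        # Quick count of AJP connector threads and the states they are in
        grep -c 'ajp-6009' threads-*.txt
        grep -A1 'ajp-6009' threads-*.txt | grep java.lang.Thread.State | sort | uniq -c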

    Read the article

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stackoverflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A can result in the following JSON output:

        [
            {
                "question": "What is rest?",
                "date_created": "20/02/2010",
                "votes": 1
            },
            {
                "question": "Which database to use for ...",
                "date_created": "20/07/2009",
                "votes": 5
            }
        ]

    If I want to manipulate and present the data in any way that I want, would it be wise to dump it in a local database? At some point, I will also want to retrieve all answers for each question and store them in a local database. The workflow I'm thinking of is:

    1. User logs in.
    2. Web services retrieve all questions asked by the logged-in user and dump them in a local database.
    3. User wants all answers for a specific question; another web service does the retrieval and dumps them in a local database.
    4. After the user logs out, delete from the local database all questions and answers from that user.

    Read the article

  • Postgres: clear entire database before re-creating / re-populating from bash script

    - by Hoff
    hi folks, I'm writing a shell script (will become a cronjob) that will:

    1. dump my production database
    2. import the dump into my development database

    Between step 1 and 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this:

        #!/bin/bash
        time=`date '+%Y'-'%m'-'%d'`

        # 1. export(dump) the current production database
        pg_dump -U production_db_name > /backup/dir/backup-${time}.sql

        # missing step: drop all tables from development database so it can be re-populated

        # 2. load the backup into the development database
        psql -U development_db_name < backup/dir/backup-${time}.sql

    Many thanks in advance! Martin
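
    For reference, a minimal sketch of the missing step, assuming a dev_db database and a role allowed to drop it (dropdb/createdb ship with PostgreSQL); dropping just the schema works when dropping the whole database isn't permitted:

        # Option A: recreate the whole development database
        dropdb dev_db && createdb dev_db

        # Option B: wipe only the public schema inside the existing database
        psql -d dev_db -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"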

    Read the article

  • ASP.Net: Finding the cause of OutOfMemoryExceptions

    - by Keith Bloom
    I'm trying to track down the cause of an OutOfMemoryException for a website. The site has ~12,000 .aspx pages, and the last time it crashed I captured a memory dump using adplus. After some investigation I found a lot of heap fragmentation: there are around 100 MB of free blocks which can't be assigned. Digging deeper, one of the Large Object Heaps is fragmented, and the cause seems to be string interning as described here: http://stackoverflow.com/questions/686950/large-object-heap-fragmentation. Could this be caused by the number of pages in the site? As they are all compiled, they sit in memory, and by looking at the dump they are interned and pinned, which I think means they stick around for a while. I would find this odd as there are many sites with more pages, but dynamic compilation could account for the growth in memory. What other methods are there for finding the cause of the memory leak? I have tried to capture a dump using adplus in hang mode but this fails and the IIS worker process gets recycled.
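
    For reference, a hedged sketch of the SOS commands usually run against such a dump in WinDbg to see what is sitting on and pinning the Large Object Heap; this assumes the SOS/CLR bitness matches the dump, and interpreting the output is still manual work:

        $$ load SOS for the runtime in the dump (.NET 2.0/3.5 shown; use "clr" instead on 4.0+)
        .loadby sos mscorwks
        $$ per-segment view of the GC heaps, including the Large Object Heap
        !eeheap -gc
        $$ object types living on the LOH (objects of 85000 bytes or more)
        !dumpheap -stat -min 85000
        $$ counts of handles, including pinned ones
        !gchandles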

    Read the article

  • Selecting keys based on metadata, possible with Amazon S3?

    - by nbv4
    I'm sending files to my S3 bucket that are basically gzipped database dumps. The keys are a human-readable date ("2010-05-04.dump"), and along with that, I'm setting a metadata field to the UNIX time of the dump. I want to write a script that retrieves the latest dump from the bucket, that is to say, the key with the largest UNIX-time metadata value. Is this possible with Amazon S3, or is this not how S3 is meant to work? I'm using both the command line tool aws and the python library boto.
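
    For reference, a minimal sketch of the usual workaround: S3 listings can only be ordered by key, not by user metadata, but because these keys are ISO-style dates they already sort chronologically. Shown with the modern AWS CLI, which postdates the question:

        # Lexicographically last key == most recent dump for YYYY-MM-DD style names
        latest=$(aws s3 ls s3://my-bucket/ | awk '{print $4}' | sort | tail -n 1)
        aws s3 cp "s3://my-bucket/${latest}" .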

    Read the article

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files?

    Edit: I can't assume that mysql will have permission to write to files.
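
    For reference, a hedged sketch of the per-table loop. Note that separate mysqldump invocations each take and release their own locks, so consistency across the whole set needs something extra - a surrounding FLUSH TABLES WITH READ LOCK held by another session, or --single-transaction on InnoDB - which is why this is a starting point rather than a complete answer (database name is a placeholder):

        DB=mydb
        for t in $(mysql -N -B -e "SHOW TABLES" "$DB"); do
            # table definition only
            mysqldump --no-data "$DB" "$t" > "${t}.schema.sql"
            # table data only
            mysqldump --no-create-info "$DB" "$t" > "${t}.data.sql"
        done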

    Read the article
