Search Results

Search found 4835 results on 194 pages for 'svn dump'.


  • Is there an extensible structured file analyzer, like network analysis tools?

    - by ???
    There are many network analysis tools, such as Wireshark, Sniffer Pro, and OmniPeek, which can dump packet data in a structured manner. I'm writing my own general-purpose file analyzer, which can dump JPEG, PNG, EXE, ELF, ASN.1 DER-encoded files, etc. in tree style. There are so many file formats in the world that I can't handle them all, so I'm wondering: is there some software already out there with a pluggable architecture and a large established repository of file formats?
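
    For illustration, the kind of pluggable architecture the question asks about can be sketched in a few lines of Python; the plugin registry and the PNG dumper below are hypothetical, not an existing tool:

        # A minimal sketch of a pluggable file analyzer: plugins register the
        # magic bytes they recognize and a dump routine.
        import struct

        PLUGINS = []

        def plugin(magic):
            """Register a dump function for files starting with the given magic bytes."""
            def register(func):
                PLUGINS.append((magic, func))
                return func
            return register

        @plugin(b"\x89PNG\r\n\x1a\n")
        def dump_png(data, indent=0):
            # PNG: 8-byte signature, then chunks of (length, type, data, crc).
            pos = 8
            while pos + 8 <= len(data):
                length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
                print(" " * indent + "chunk %s, %d bytes" % (ctype.decode("ascii"), length))
                pos += 12 + length  # 8-byte header + data + 4-byte crc

        def analyze(path):
            with open(path, "rb") as f:
                data = f.read()
            for magic, func in PLUGINS:
                if data.startswith(magic):
                    return func(data)
            print("no plugin for %r" % data[:8])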

    Read the article

  • Lots of dropped packets when tcpdumping on a busy interface

    - by Frands Hansen
    My challenge: I need to capture a lot of traffic with tcpdump - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:

    - Log all traffic in promiscuous mode from 2 interfaces
    - Those interfaces are not assigned an IP address
    - pcap files must be rotated per ~1G
    - When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename, and moving it to an archive location. I cannot specify two interfaces, so I have chosen to use filters and dump from any interface. Right now I do no housekeeping, but I plan on monitoring disk space, and when I have 100G left I will start wiping the oldest files - this should be fine.

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel  <-- This is my concern

    How can I avoid so many packets being dropped? Things I have already tried or looked at: I changed the values of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help - it took care of around half of the dropped packets. I have also looked at gulp - the problem with gulp is that it does not support multiple interfaces in one process, and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient, and I don't have a machine with 10TB+ RAM that I can run Wireshark on, so that's out. Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
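
    For the housekeeping part, a small sketch of the planned "wipe the oldest files" step, written in Python; the archive path and the 10 TB cap are taken from the question, and none of this is a tested production script:

        # Delete the oldest capture files once the archive exceeds a size cap.
        import os

        ARCHIVE_DIR = "/data/livedump"   # assumed archive location
        MAX_BYTES = 10 * 1024 ** 4       # ~10 TB cap from the question

        def prune(directory, max_bytes):
            files = [os.path.join(directory, name) for name in os.listdir(directory)]
            files = [f for f in files if os.path.isfile(f)]
            files.sort(key=os.path.getmtime)  # oldest first
            total = sum(os.path.getsize(f) for f in files)
            for path in files:
                if total <= max_bytes:
                    break
                total -= os.path.getsize(path)
                os.remove(path)

        if __name__ == "__main__":
            prune(ARCHIVE_DIR, MAX_BYTES)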

    Read the article

  • Setting up a subdomain SSL with custom port

    - by Webnet
    I'm setting up a subdomain on a dedicated server that I'm going to use for SVN services. The SVN server is up and running; I just need to set up the subdomain. HTTPS has been switched to a custom port because there's a conflict with a port forward pointing to another server. Should I do this through GoDaddy or Apache?

    Read the article

  • NHibernate ActiveRecord LINQ Contains problem

    - by Robert Ivanc
    Hi, I am having problems with the following query in Castle ActiveRecord 2.12:

        var q = from o in SodisceFMClientVAR.Queryable
                where taxnos2.Contains(o.TaxFileNo)
                select o;

    taxnos2 is an array of strings. When run, I get an exception:

        System.ArgumentOutOfRangeException: Index was out of range. Must be
        non-negative and less than the size of the collection.
        Parameter name: index

        at Castle.ActiveRecord.ActiveRecordBase.ExecuteQuery(IActiveRecordQuery query)
        at Castle.ActiveRecord.Linq.LinqResultWrapper`1.Populate()
        at Castle.ActiveRecord.Linq.LinqResultWrapper`1.GetEnumerator()
        at NHibernate.Linq.Query`1.GetEnumerator()
        at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
        at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
        at prosoft.skb.insolventnostDataAccess.InsolventnostDataAccAR.GetOurUsersListLS(ICollection`1 taxNos) in C:\svn\skb\insolventnostWithAR\prosoft.skb.insolventnostDataAccess\InsolventnostDataAR.cs:line 214
        at prosoft.skb.insolventnostDataFromWS.InsolventnostFromWS.filterByOurUsers(IEnumerable`1 odprtiPostopki) in C:\svn\skb\insolventnostWithAR\prosoft.skb.insolventnostDataFromWS\InsolventnostFromWS.cs:line 237
        at prosoft.skb.insolventnostDataFromWS.InsolventnostFromWS.SyncData() in C:\svn\skb\insolventnostWithAR\prosoft.skb.insolventnostDataFromWS\InsolventnostFromWS.cs:line 53

    Does Contains even work in LINQ for NHibernate? I couldn't find anything via Google... Is there a workaround? Thanks!

    Read the article

  • Parsing Wiki XML Dumps ver0.4 just got tough

    - by syed
    Hello, I am trying to parse a Wikipedia XML dump using Parse-MediaWikiDump-1.0.4 along with the wikiprep.pl script. I guess this script works fine with v0.3 Wiki XML dumps, but not with the latest v0.4 dumps. I get the following error:

        Can't locate object method "page" via package "Parse::MediaWikiDump::Pages" at wikiprep.pl line 390.

    Also, under the Parse-MediaWikiDump-1.0.4 documentation at http://search.cpan.org/~triddle/Parse-MediaWikiDump-1.0.4/lib/Parse/MediaWikiDump/Pages.pm, I read: "LIMITATIONS Version 0.4: This class was updated to support version 0.4 dump files from a MediaWiki instance but it does not currently support any of the new information available in those files." Any workarounds would help me get to the next level. Note: one may wonder why we cannot directly use a SAX or StAX parser instead; the Wikipedia dump is a single file of 25 GB plus, so memory issues are obvious with any approach that loads it whole. Hence the above Perl script resolves this issue, but currently I am stuck with this version problem.
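
    On that side note: a streaming parse really does keep memory bounded if each element is discarded after use. A rough sketch with Python's standard library; the element names and the export-0.4 namespace follow the MediaWiki export schema but should be verified against the actual dump:

        # Stream a MediaWiki XML dump without loading it all into memory.
        import xml.etree.ElementTree as ET

        NS = "{http://www.mediawiki.org/xml/export-0.4/}"  # assumed for a v0.4 dump

        def iter_pages(path):
            for event, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == NS + "page":
                    yield elem.findtext(NS + "title")
                    elem.clear()  # free the element so memory stays bounded

        for title in iter_pages("enwiki-latest-pages-articles.xml"):
            print(title)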

    Read the article

  • WinDbg fails to find symbol file reporting 'unrecognized OMF sig'

    - by sean e
    I have received a 64-bit dump of a 32-bit app that was running on Win7 x64. I am able to load it in WinDbg (hint: !wow64exts.sw) running on a 64-bit OS. The symbols for most of my DLLs are loaded properly; the pdb for one, though, does not load. The same pdb does load properly for the same DLL when reading a 32-bit dump on a different system. I've also confirmed that the DLL and pdb match each other via the chkmatch utility. I tried .symopt +40, but the pdb still didn't load. I did !sym noisy and then .reload - WinDbg reported:

        DBGHELP: unrecognized OMF sig: 811f1121
        *** ERROR: Symbol file could not be found. Defaulted to export symbols

    Any ideas on what to try to get WinDbg to load my pdb when reading a 64-bit dump?

    Read the article

  • StreamWriter Problem - 2 Spaces Written as Hex '20 c2 a0' instead of Hex '20 20'

    - by Daver
    I'm writing a bunch of strings to a file using a StreamWriter, but I've discovered a problem when I look at the created file in hex: one of the spaces (0x20) is replaced with a non-breaking space (0xc2 0xa0) when two spaces separate words. I don't know if this is a big deal, but I would like to know if there is an easy resolution. Here's what I'm seeing:

        20 c2 a0 53 57 45 45 50    Dump = "  SWEEP"

    But I would like it to always be:

        20 20 53 57 45 45 50       Dump = "  SWEEP"

    Note that the c2 a0 isn't visible here, but the dump looks something like 'A.' when I use the Notepad++ hex plugin. Does anyone have any ideas? Cheers and thanks in advance; -Daver
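
    For reference, 0xc2 0xa0 is exactly the UTF-8 encoding of U+00A0 (no-break space), which suggests the second space is already a non-breaking space in the source string before it ever reaches the writer. A quick demonstration, sketched in Python for brevity:

        # A plain space is the single byte 0x20 in UTF-8; U+00A0 becomes C2 A0.
        print(" ".encode("utf-8").hex())              # 20
        print("\u00a0".encode("utf-8").hex())         # c2a0
        print(" \u00a0SWEEP".encode("utf-8").hex())   # 20c2a05357454550, as in the question

        # One way to normalize such strings back to ASCII spaces:
        clean = " \u00a0SWEEP".replace("\u00a0", " ")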

    Read the article

  • svnkit: Problem getting entry name

    - by GreenKiwi
    I'm trying to create an SVN Eclipse EFS plugin and have problems getting the names of entries. When I make this call on SVNRepository:

        // Fetches the contents of a directory into the provided collection
        // object and returns the directory entry itself.
        SVNDirEntry getDir(String path, long revision, boolean includeCommitMessages, Collection entries)

    it correctly returns the entry for the provided path; however, it doesn't set the "name" value on the returned entry. Note that the items returned in the collection are all OK. Does anyone know why this is? And/or if there is a workaround? See:
    http://svnkit.com/javadoc/org/tmatesoft/svn/core/io/SVNRepository.html
    http://svnkit.com/javadoc/org/tmatesoft/svn/core/io/SVNRepository.html#getDir(java.lang.String, long, boolean, java.util.Collection)

    Read the article

  • How to catch prompts with PySVN?

    - by Geoff
    I'm new to Python and PySVN in general, and I'm trying to export my SVN repository using pysvn. Here's my code:

        import pysvn

        # set up svn login data
        def svn_credentials(realm, username, may_save):
            return True, svn_login_name, svn_login_password, False

        # establish connection
        svn_client = pysvn.Client()
        svn_client.callback_get_login = svn_credentials

        # export data
        svn_client.export('server-path-goes-here', 'client-path-goes-here', force=True)

    This works fine, but if the password is wrong or the user name is unknown, this code just sits. I believe it's being presented with a login prompt on the SVN side, but I'm at a loss as to how to check what's happening with callback_get_login. Any help would be greatly appreciated.
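
    One hedged guess at the hang: pysvn invokes callback_get_login again every time the server rejects the credentials, so a callback that unconditionally returns True can loop forever. A sketch that gives up after the first rejection instead, so the export fails with an authentication error (the callback signature is from the pysvn docs; the retry counting is an assumption):

        import pysvn

        svn_login_name = "user"       # placeholders, as in the question
        svn_login_password = "secret"

        attempts = {"count": 0}

        def svn_credentials(realm, username, may_save):
            # Called again on every rejection; returning False as the first
            # element tells pysvn to give up instead of re-prompting.
            attempts["count"] += 1
            if attempts["count"] > 1:
                return False, "", "", False
            return True, svn_login_name, svn_login_password, False

        svn_client = pysvn.Client()
        svn_client.callback_get_login = svn_credentials
        svn_client.export('server-path-goes-here', 'client-path-goes-here', force=True)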

    Read the article

  • Identify cause of hundreds of AJP threads in Tomcat

    - by Rich
    We have two Tomcat 6.0.20 servers fronted by Apache, with communication between the two using AJP. Tomcat in turn consumes web services on a JBoss cluster. This morning, one of the Tomcat machines was using 100% of CPU on 6 of the 8 cores on our machine. We took a heap dump using JConsole, and then tried to connect JVisualVM to get a profile to see what was taking all the CPU, but this caused Tomcat to crash. At least we had the heap dump! I have loaded the heap dump into Eclipse MAT, where I have found that we have 565 instances of java.lang.Thread. Some of these, obviously, are entirely legitimate, but the vast majority are named "ajp-6009-XXX" where XXX is a number. I know my way around Eclipse MAT pretty well, but haven't been able to find an explanation for it. If anyone has some pointers as to why Tomcat may be doing this, or some hints on finding out why using Eclipse MAT, that'd be appreciated!

    Read the article

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stack Overflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A can result in the following JSON output:

        [
            {
                "question": "What is rest?",
                "date_created": "20/02/2010",
                "votes": 1
            },
            {
                "question": "Which database to use for ...",
                "date_created": "20/07/2009",
                "votes": 5
            }
        ]

    If I want to manipulate and present the data in any way that I want, would it be wise to dump it into a local database? At some point, I will also want to retrieve all answers for each question and store them in a local database. The workflow I'm thinking of is:

    1. User logs in.
    2. Web services retrieve all questions asked by the logged-in user and dump them into a local database.
    3. User wants all answers for a specific question; another web service does the retrieval and dumps them into the local database.
    4. After the user logs out, delete from the local database all questions and answers from that user.
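
    As an illustration of that workflow, caching the fetched questions in a local SQLite table and clearing them at logout might look like the following sketch; the schema and function names are made up:

        # Illustrative cache for fetched questions: store on login, drop on logout.
        import json
        import sqlite3

        conn = sqlite3.connect("cache.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS questions (
                            user TEXT, question TEXT, date_created TEXT, votes INTEGER)""")

        def cache_questions(user, questions_json):
            rows = [(user, q["question"], q["date_created"], q["votes"])
                    for q in json.loads(questions_json)]
            conn.executemany("INSERT INTO questions VALUES (?, ?, ?, ?)", rows)
            conn.commit()

        def clear_user(user):
            # Called at logout: drop everything cached for this user.
            conn.execute("DELETE FROM questions WHERE user = ?", (user,))
            conn.commit()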

    Read the article

  • Subversion and project management web-based super tool. Like Team Foundation Server, but not TFS.

    - by Rob Stevenson-Leggett
    Hi, we're currently looking at an IT upgrade and I'm after recommendations for a tool which can do some or all of the following:

    - SVN management (authz, web viewer, commit log, diff)
    - Create template projects (one click, e.g. create me a microsite with this name in SVN and give these people access)
    - Reporting on code churn and time spent on tasks on a per-project basis
    - User story management

    Basically like Team Foundation Server, but something that integrates with SVN properly (the reason for this: we have a wide range of skill sets and not everyone can use a TFS client). Is there a combination of Trac plugins plus something that can create Trac instances (a la Dreamhost's admin panel) that can achieve this? On a side note, does anyone have any experience of version-controlling designery-type files, e.g. PSDs and Illustrator files? Any advice at all appreciated. Cheers, Rob

    Read the article

  • Postgres: clear entire database before re-creating / re-populating from bash script

    - by Hoff
    Hi folks, I'm writing a shell script (which will become a cron job) that will:

    1. Dump my production database
    2. Import the dump into my development database

    Between step 1 and step 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this:

        #!/bin/bash
        time=`date '+%Y'-'%m'-'%d'`

        # 1. export (dump) the current production database
        pg_dump production_db_name > /backup/dir/backup-${time}.sql

        # missing step: drop all tables from the development database so it can be re-populated

        # 2. load the backup into the development database
        psql development_db_name < /backup/dir/backup-${time}.sql

    Many thanks in advance! Martin
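
    For the missing step, one common approach (shown in the script's own shell; the database names are the question's placeholders) is to drop and recreate the schema so the restore starts from a clean slate:

        # missing step, sketched: wipe the development database before the restore
        psql development_db_name -c 'DROP SCHEMA public CASCADE; CREATE SCHEMA public;'

        # alternative: drop and recreate the whole database
        # dropdb development_db_name && createdb development_db_name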

    Read the article

  • pysvn client.log() returning empty dictionary

    - by nashr rafeeg
    I have the following script that I am using to get the log messages from SVN:

        import pysvn

        class svncheck():
            def __init__(self, svn_root="http://10.11.25.3/svn/Moodle/modules", svn_user=None, svn_password=None):
                self.user = svn_user
                self.password = svn_password
                self.root = svn_root

            def diffrence(self):
                client = pysvn.Client()
                client.commit_info_style = 1
                client.callback_notify = self.notify
                client.callback_get_login = self.credentials
                log = client.log(
                    self.root,
                    revision_start=pysvn.Revision(pysvn.opt_revision_kind.number, 0),
                    revision_end=pysvn.Revision(pysvn.opt_revision_kind.number, 5829),
                    discover_changed_paths=True,
                    strict_node_history=True,
                    limit=0,
                    include_merged_revisions=False,
                )
                print log

            def notify(event_dict):
                print event_dict
                return

            def credentials(realm, username, may_save):
                return True, self.user, self.password, True

        s = svncheck()
        s.diffrence()

    When I run this script, it returns a list of empty-looking log objects:

        [<PysvnLog ''>, <PysvnLog ''>, <PysvnLog ''>, ...

    Any idea what I am doing wrong here? I am using pysvn version 1.7.2 built against svn version 1.6.5. Cheers, Nash
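
    Incidentally, the <PysvnLog ''> repr does not necessarily mean the entries are empty: each one is a dictionary-like object. A small sketch for inspecting the fields the pysvn documentation lists on a log entry, continuing from the question's log variable (verify the field names against your pysvn version):

        # Print the documented fields of each log entry instead of its repr.
        for entry in log:
            print("r%s by %s: %s" % (entry["revision"].number,
                                     entry["author"],
                                     entry["message"]))
            for change in entry["changed_paths"]:
                print("    %s %s" % (change["action"], change["path"]))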

    Read the article

  • ASP.Net: Finding the cause of OutOfMemoryExceptions

    - by Keith Bloom
    I'm trying to track down the cause of an OutOfMemoryException for a website. This site has ~12,000 .aspx pages, and the last time it crashed I captured a memory dump using adplus. After some investigation I found a lot of heap fragmentation; there are around 100MB of free blocks which can't be assigned. Digging deeper, one of the Large Object Heaps is fragmented, and the cause seems to be string interning, as described here: http://stackoverflow.com/questions/686950/large-object-heap-fragmentation. Could this be caused by the number of pages in the site? As they are all compiled, they sit in memory, and by looking at the dump they are interned and PINNED, which I think means they stick around for a while. I would find this odd, as there are many sites with more pages, but dynamic compilation could account for the growth in memory. What other methods are there for finding the cause of the memory leak? I have tried to capture a dump using adplus in hang mode, but this fails and the IIS worker process gets recycled.

    Read the article

  • Does deleting a branch in git remove it from the history?

    - by Ken Liu
    Coming from svn, just starting to become familiar with git. When a branch is deleted in git, is it removed from the history? In svn, you can easily recover a branch by reverting the delete operation (reverse merge). Like all deletes in svn, the branch is never really deleted, it's just removed from the current tree. If the branch is actually deleted from the history in git, what happens to the changes that were merged from that branch? Are they retained?

    Read the article

  • Getting a "global name not defined" error

    - by nashr rafeeg
    I have the following class:

        import socket
        import gntp

        class notify():
            def __init__(self, server="localhost", port=23053):
                self.host = server
                self.port = port
                register = gntp.GNTPRegister()
                register.add_header('Application-Name', "SVN Monitor")
                register.add_notification("svnupdate", True)
                growl(register)

            def svn_update(self, author="Unknown", files=0):
                notice = gntp.GNTPNotice()
                notice.add_header('Application-Name', "SVN Monitor")
                notice.add_header('Notification-Name', "svnupdate")
                notice.add_header('Notification-Title', "SVN Commit")
                # notice.add_header('Notification-Icon', "")
                notice.add_header('Notification-Text', Msg)
                growl(notice)

            def growl(data):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.connect((self.host, self.port))
                s.send(data)
                response = gntp.parse_gntp(s.recv(1024))
                print response
                s.close()

    But whenever I try to use this class via the following code, I get 'NameError: global name 'growl' is not defined':

        from growlnotify import *
        n = notify()
        n.svn_update()

    Anyone have an idea what is going on here? Cheers, Nash
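
    The error itself is plain Python name resolution: growl is defined as a method of notify, so the bare call growl(register) looks up a global name that does not exist. A minimal sketch of the fix, keeping the question's gntp usage as-is (svn_update needs the same change, and its Msg variable is still undefined):

        import socket
        import gntp

        class notify():
            def __init__(self, server="localhost", port=23053):
                self.host = server
                self.port = port
                register = gntp.GNTPRegister()
                register.add_header('Application-Name', "SVN Monitor")
                register.add_notification("svnupdate", True)
                self.growl(register)              # call through the instance

            def growl(self, data):                # methods need an explicit self
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.connect((self.host, self.port))
                s.send(data)
                response = gntp.parse_gntp(s.recv(1024))
                print response
                s.close()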

    Read the article

  • Selecting keys based on metadata, possible with Amazon S3?

    - by nbv4
    I'm sending files to my S3 bucket that are basically gzipped database dumps. The keys are a human-readable date ("2010-05-04.dump"), and along with that, I'm setting a metadata field to the UNIX time of the dump. I want to write a script that retrieves the latest dump from the bucket; that is to say, I want the key with the largest unix-time metadata value. Is this possible with Amazon S3, or is this not how S3 is meant to work? I'm using both the command line tool aws and the Python library boto.
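
    S3 has no server-side query on metadata, so the selection has to happen client-side after listing the keys. A sketch with boto; the metadata field name 'unixtime' and the bucket name are assumptions:

        # List the bucket, fetch each key's metadata, and keep the key with the
        # largest unix-time value. S3 cannot do this filtering server-side.
        from boto.s3.connection import S3Connection

        conn = S3Connection()                       # credentials from the environment
        bucket = conn.get_bucket("my-dump-bucket")  # hypothetical bucket name

        latest = None
        for listed in bucket.list():
            key = bucket.get_key(listed.name)       # extra request, needed to see metadata
            stamp = int(key.get_metadata("unixtime") or 0)
            if latest is None or stamp > latest[0]:
                latest = (stamp, key)

        if latest:
            latest[1].get_contents_to_filename("latest.dump")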

    Read the article

  • Steps needed to install a PHP application (Mantis) on Windows Home Server - How please?

    - by Brian Frost
    I'm using Windows Home Server and have already managed to install SVN on it, allowing me to use TortoiseSVN on client PCs sharing a repository on the server via SVN's service and port. I'd now like to install a bug tracker hosted on this server. I'm not fussy about which one, but I saw Mantis, which is a PHP application and looks OK for my purpose. This is where I get weak on such stuff: what steps do I need to take to install and configure PHP (and presumably MySQL) to get Mantis working? It is an HTTP application. As an alternative answer, I'd be happy to use another, more easily installed, bug tracker that has a server service and a port of its own. I'll appreciate any comments.

    Read the article

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files? Edit: I can't assume that mysql will have permission to write to files.
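
    One hedged approach, sketched below in Python: hold FLUSH TABLES WITH READ LOCK on a single connection while per-table mysqldump runs write one schema file and one data file per table, then release the lock. This needs no FILE privilege, since the client writes the files. The pymysql driver is an assumption; --no-data and --no-create-info are standard mysqldump flags, and mysqldump is assumed to pick up credentials from ~/.my.cnf:

        # Keep the database read-locked on one connection while separate
        # mysqldump processes write one schema file and one data file per table.
        import subprocess
        import pymysql

        DB = "mydb"
        conn = pymysql.connect(user="dumper", password="secret", database=DB)
        cur = conn.cursor()
        cur.execute("FLUSH TABLES WITH READ LOCK")   # blocks writes for the whole dump
        try:
            cur.execute("SHOW TABLES")
            for (table,) in cur.fetchall():
                with open("%s.schema.sql" % table, "w") as out:
                    subprocess.check_call(
                        ["mysqldump", "--no-data", DB, table], stdout=out)
                with open("%s.data.sql" % table, "w") as out:
                    subprocess.check_call(
                        ["mysqldump", "--no-create-info", DB, table], stdout=out)
        finally:
            cur.execute("UNLOCK TABLES")
            conn.close()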

    Read the article

  • MySQL Import Database Error because of Extended Inserts

    - by Castgame
    Hello all, I'm importing a 400MB (uncompressed) MySQL database. I'm using BigDump, and I am getting this error:

        Stopped at the line 387. At this place the current query includes more than 300 dump lines. That can happen if your dump file was created by some tool which doesn't place a semicolon followed by a linebreak at the end of each query, or if your dump contains extended inserts. Please read the BigDump FAQs for more infos.

    I believe the file does contain extended inserts; however, I have no way to regenerate the database, as it has been deleted from the old server. How can I import this database or convert it to be importable? Thanks for any help. Best, Nick

    EDIT: It appears the only viable answer is to separate the extended inserts, but I still need help figuring out how to split the file, as the answer below suggests. Please help. Thank you.
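
    If splitting is the route, the extended INSERTs can be rewritten as one-row INSERT statements so each one ends with a semicolon and a linebreak. A rough Python sketch; it will mis-split string values that themselves contain the sequence '),(' so treat it as a starting point, not a finished tool:

        # Naive splitter: turn "INSERT INTO t VALUES (a),(b),(c);" into three
        # single-row INSERT statements. A real tool needs a proper SQL tokenizer.
        import re

        def split_extended_inserts(in_path, out_path):
            with open(in_path) as src, open(out_path, "w") as dst:
                for line in src:
                    m = re.match(r"(INSERT INTO .*? VALUES )(\(.*\));\s*$", line, re.S)
                    if m:
                        head, values = m.groups()
                        for row in values.split("),("):
                            dst.write("%s(%s);\n" % (head, row.strip("()")))
                    else:
                        dst.write(line)

        split_extended_inserts("dump.sql", "split-dump.sql")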

    Read the article

  • Browsing to Subversion repository location indicated in Apache conf file

    - by sonrael
    I have Subversion 1.6.6 and Apache 2.2.14 installed and working. I have made the following changes to the Apache httpd.conf file:

        # Uncommented by me for Subversion installation
        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_fs_module modules/mod_dav_fs.so

        # Added by me for Subversion installation
        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so

        <Location /svn>
            DAV svn
            SVNPath C:\Users\RED\Repositories
        </Location>

    When I navigate to localhost, Apache is working properly, but if I try to go to localhost/svn the browser just hangs waiting for a response from the server. What is supposed to happen here? Does it have to do with the fact that I'm behind a wireless router on a dynamic IP address (although I can access localhost no problem, so...)? As you can see, I'm on Windows (Vista).

    Read the article

  • Free E-Book - TortoiseSVN and Subversion Cookbook - Oracle Edition

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/24/free-e-book---tortoisesvn-and-subversion-cookbook---oracle-edition.aspx

    At http://www.red-gate.com/products/oracle-development/education/entrypage/svn-tortoise-oracle-ebook?utm_source=simpletalk&utm_medium=pubemail&utm_ad_content=SVNOraclecookbook-20130624&utm_campaign=sourcecontrolfororacle&utm_term=main, Redgate are offering a free e-book, TortoiseSVN and Subversion Cookbook - Oracle Edition: "Download your free copy of TortoiseSVN and Subversion Cookbook - Oracle Edition and use these recipes to work better, faster, and do things you never knew you could do with SVN. If you're new to source control, this book provides a concise guide to getting the most out of Subversion." Those of you using Oracle for your back-end database may be interested in a free trial of Source Control for Oracle.

    Read the article
