Search Results

Search found 1864 results on 75 pages for 'dump'.

Page 43/75 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • cvs to mercurial conversion gets tags wrong

    - by Mark Borgerding
    I've tried all the recommended conversion techniques. Mostly they manage to get the latest version of the files right, but every one of them trashes my history: many (most?) of the tags from my CVS project have at least one file in error when I run "hg up $tag". My CVS repo is not all that complicated. Why can't anything convert it? I'd like to dump CVS and convert to Mercurial, but not without history. To recap my frustration: I tried hg convert (with --branchsort, --timesort, and fuzz=0); I tried cvs2svn and then hg convert; tailor does not work with recent versions of Mercurial; fromcvs disappeared from the face of the earth; hg-cvs-import has been abandoned for four years and doesn't work with recent versions of hg. I have tried using the two most recent versions of Mercurial (1.5 and 1.5.1).

    Read the article

  • Attach data to Richtext using flex, mysql and php(newbie)

    - by dmschenk
    I'm trying to attach data to a RichText field in Flash Builder. The data is an HTML string in a MySQL database, and I'm connecting using PHP; the PHP script just converts the data to XML. The connection works, because I can easily dump the data into a DataGrid and see the string. When I try to hook the data to a RichText field, though, I get "object CallResponder", so I am connected, but I'm not sure how to break the result down so that it is just an HTML string for the text field. Thanks.

        protected function creationCompleteHandler(event:FlexEvent):void {
            getAllAbout_fxResult.token = aboutfxService.getAllAbout_fx();
        }

        <s:RichText x="10" y="10" text="{getAllAbout_fxResult}"
                    creationComplete="creationCompleteHandler(event)"
                    width="300" height="300"/>

    Read the article

  • Preserve images in Excel headers using Apache POI

    - by ddm
    I am trying to generate Excel reports using Apache POI 3.6 (latest). Since POI has limited support for header and footer generation (text only), I decided to start from a blank Excel file with the header already prepared, and fill the cells using POI (cf. question 714172). Unfortunately, when opening the workbook with POI and writing it immediately back to disk (without any cell manipulation), the header seems to be lost. Here is the code I used to test this behavior:

        public final class ExcelWorkbookCreator {
            public static void main(String[] args) {
                FileOutputStream outputStream = null;
                try {
                    outputStream = new FileOutputStream(new File("dump.xls"));
                    InputStream inputStream = ExcelWorkbookCreator.class.getResourceAsStream("report_template.xls");
                    HSSFWorkbook workbook = new HSSFWorkbook(inputStream, true);
                    workbook.write(outputStream);
                } catch (Exception exception) {
                    throw new RuntimeException(exception);
                } finally {
                    if (outputStream != null) {
                        try {
                            outputStream.close();
                        } catch (IOException exception) {
                            // Nothing much to do
                        }
                    }
                }
            }
        }

    Read the article

  • Anyone have an XSL to convert Boost.Test XML logs to a presentable format?

    - by Stuart Lange
    I have some C++ projects running through cruisecontrol.net. As a part of the build process, we compile and run Boost.Test unit test suites. I have these configured to dump XML log files. While the format is similar to JUnit/NUnit, it's not quite the same (and lacks some information), so cruisecontrol.net is unable to pick them up. I am wondering if anyone has created (or knows of) an existing XSL transform that will convert Boost.Test results to JUnit/NUnit format, or alternatively, directly to a presentable (html) format. Thanks!

    Read the article

  • serializing JSON files with newlines in Python

    - by user248237
    I am using json and jsonpickle sometimes to serialize objects to files, using the following function:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f)
            f.close()

    The problem is that if I serialize a dictionary, for example, using json_serialize(mydict, myfilename), then the entire serialization gets put on one line. This means that I can't grep the file for entries to be inspected by hand, like I would a CSV file. Is there a way to make it so each element of an object (e.g. each entry in a dict, or each element in a list) is placed on a separate line in the JSON output file? Thanks.
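
    A minimal sketch of one way to get there with the standard json/simplejson module (the function name below is illustrative, and jsonpickle is left aside): dump's indent argument spreads the output over multiple lines, one entry per line, which makes the file greppable.

        import json  # simplejson exposes the same dump() signature

        def json_serialize_pretty(obj, filename):
            # indent=4 makes dump() emit one key or list element per line
            # instead of one long line.
            with open(filename, 'w') as f:
                json.dump(obj, f, indent=4, sort_keys=True)

        json_serialize_pretty({'a': 1, 'b': [2, 3]}, 'mydict.json')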

    Read the article

  • Visual Studio Extensions

    - by John Maloney
    I have a project that generates text (representing an interface and a class) based on metadata. I would like to take this generated code and insert it as a new class and interface directly into the currently opened solution, under a specific project and directory. I will create the menu tool that generates the class, but what I don't know how to do is gain access to the following from within my custom Visual Studio extension: (1) iterate the current solution and find a project to dump the generated code into; (2) open a new file window within Visual Studio and inject the generated text that comes from my tool directly into that window; (3) create a new folder in a specific project within the current solution. EDIT - To clarify: I need to open a new file (e.g. Right Click on a Project - Add - New Class) and insert text into it from within my custom Visual Studio extension. Thanks.

    Read the article

  • Best way to fetch data from a single database table with multiple threads?

    - by Ravi Bhatt
    Hi, we have a system where we collect data on user activity across multiple web sites every second. We dump that data into database X (say, MS SQL Server). We now need to fetch the data from this single table in database X and insert it into database Y (say, MySQL). We want to fetch time-based data from database X through multiple threads so that we fetch as fast as we can, and once it is fetched and stored in database Y, we will delete it from database X. Are there any best practices for this sort of design? Any specific things to take care of in table design, like sharding or something? Are there any other things we need to watch for to make sure we fetch it as fast as we can from threads running on multiple machines? Thanks in advance! Ravi
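
    One common shape for this, sketched below in Python: partition the time range into contiguous slices and hand each slice to a worker thread, so no two workers ever touch the same rows and the follow-up DELETE on database X stays safe. The slice size, dates, and the body of process_slice are all illustrative assumptions, not a prescription.

        import threading
        from datetime import datetime, timedelta

        def process_slice(start, end):
            # In a real system this would SELECT the rows from database X
            # (WHERE ts >= start AND ts < end), bulk-insert them into
            # database Y, and only then DELETE the same range from X.
            print("handling slice %s .. %s" % (start, end))

        def partition(begin, finish, step):
            # Yield contiguous, non-overlapping (start, end) time slices.
            cur = begin
            while cur < finish:
                yield cur, min(cur + step, finish)
                cur += step

        threads = []
        for start, end in partition(datetime(2010, 4, 1),
                                    datetime(2010, 4, 2),
                                    timedelta(hours=6)):
            t = threading.Thread(target=process_slice, args=(start, end))
            threads.append(t)
            t.start()
        for t in threads:
            t.join()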

    Read the article

  • T-Sql SPROC - Parse C# Datatable to XML in Database (SQL Server 2005)

    - by Goober
    Scenario: I've got an application written in C# that needs to dump some data to a database every minute. Because it's not me who wrote the spec, I have been told to store the data as XML in the SQL Server database and NOT TO USE the "bulk upload" feature. Essentially I just want a single stored procedure that takes XML (which I would produce from my DataTable) and inserts it into the database, and to call it every minute. Current situation: I've heard about the use of sp_xml_preparedocument, but I'm struggling to understand most of the examples I've seen (my XML is far narrower than my C# ability). Question: I would really appreciate someone either pointing me in the direction of a worthwhile tutorial or helping explain things. EDIT - Using SQL Server 2005.

    Read the article

  • Problem while loading the application on iPad?

    - by chaitanya
    Hi, I developed a simple application for the iPad, and I want to test how it works on the device. I have a paid developer licence, I have added the device ID, created the app ID, and downloaded the provisioning profile using both. I built the app for the iPad the same way we build for the iPhone. I sent the provisioning profile and .ipa file to my friend to load onto the iPad (the same device I added at developer.apple.com). When he tried to drag and drop the provisioning file onto the device from iTunes, it gave this error: "abc.mobileprovision" was not copied to the iPad, because it cannot be played on this iPad. I am not able to understand what the exact error is. Can anyone please let me know how to get the application onto the iPad device?

    Read the article

  • "tailing" a binary file based on string location using bash?

    - by ilitirit
    I've got a bunch of binary files, each containing an embedded string near the end of the file but at a different place in each (the string occurs only once per file). I need to extract the part of the file starting at the location of the string through to the end of the file, and dump it into a new file. E.g. if the file's contents are "AWREDEDEDEXXXERESSDSDS" and the string of interest is "XXX", then the part of the file I need is "XXXERESSDSDS". What's the easiest way to do this in bash?
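
    The question asks for bash, but the logic is easiest to state byte-for-byte; here is an illustrative Python sketch of it (the filenames and the "XXX" marker come from the example above): find the byte offset of the embedded string and copy everything from there to the end.

        def tail_from_marker(path, marker, out_path):
            # Read the file as raw bytes; the marker is stated to occur
            # exactly once, near the end of each file.
            data = open(path, 'rb').read()
            pos = data.find(marker)
            if pos < 0:
                raise ValueError('marker not found in %s' % path)
            open(out_path, 'wb').write(data[pos:])

        tail_from_marker('input.bin', b'XXX', 'input.tail')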

    Read the article

  • Simple image server

    - by Joel
    I have a bunch of images that I need others to browse via a web browser, in pretty much the same way as Apache-Gallery. I'd like to be able to dump all my images in a directory so that users hitting http://server:port/directory would see small thumbnails, and selecting an image would load it full size on a page with options to browse to the previous or next image. I'm looking for a non-Apache solution, much like the wonderful Python simple HTTP server, that can be launched anywhere with minimal configuration and fuss, e.g.:

        python -m SimpleHTTPServer 8000

    In fact, the Python solution above is pretty much what I want, except it doesn't thumbnail the images; it's just a plain directory listing. I'm happy to use an app written in any common language, so long as it is self-contained and can run on Linux on a custom port (and, to re-iterate, not an Apache module).
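
    A minimal sketch in the same spirit, assuming PIL (or Pillow) is installed; everything else is the standard library. It pre-generates thumbnails and a bare index page, then serves the directory on a custom port. It stops short of the previous/next navigation, and all names in it are illustrative.

        import os
        import SimpleHTTPServer
        import SocketServer
        from PIL import Image  # assumed dependency

        PORT = 8000
        THUMB_DIR = 'thumbs'

        def build_gallery(image_dir='.'):
            # Create a thumbnail for each image and write an index.html
            # in which every thumbnail links to the full-size file.
            if not os.path.isdir(THUMB_DIR):
                os.mkdir(THUMB_DIR)
            links = []
            for name in sorted(os.listdir(image_dir)):
                if name.lower().endswith(('.jpg', '.jpeg', '.png', '.gif')):
                    src = os.path.join(image_dir, name)
                    thumb = os.path.join(THUMB_DIR, name)
                    im = Image.open(src)
                    im.thumbnail((128, 128))
                    im.save(thumb)
                    links.append('<a href="%s"><img src="%s"/></a>' % (name, thumb))
            open('index.html', 'w').write('\n'.join(links))

        build_gallery()
        httpd = SocketServer.TCPServer(('', PORT), SimpleHTTPServer.SimpleHTTPRequestHandler)
        httpd.serve_forever()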

    Read the article

  • MySQL configuration that allows for locking many tables?

    - by Floyd Bonne
    I need to do a mysqldump on a DB with ~700 tables, and when I try with my current configuration I get an error:

        mysqldump: Got error: 1016: Can't open file: './my_db/content_node_field_instance.frm' (errno: 24) when using LOCK TABLES

    Searching around, I've found that this happens because mysqldump tries to lock all the tables and fails because there are "too many". I know I can pass lock-tables=no and get a dump, but that way my DB might end up in an inconsistent state. So, does anyone know which MySQL configuration setting I need to change in order to be able to do the locking I need? I'm running 5.1.37-1ubuntu5.1 with MyISAM. Thanks!

    Read the article

  • Problems with contenttypes when loading a fixture in Django

    - by gerdemb
    I am having trouble loading Django fixtures into my MySQL database because of contenttypes conflicts. First I tried dumping the data from only my app, like this:

        ./manage.py dumpdata escola > fixture.json

    but I kept getting missing-foreign-key problems, because my app "escola" uses tables from other applications. I kept adding additional apps until I got to this:

        ./manage.py dumpdata contenttypes auth escola > fixture.json

    Now the problem is the following constraint violation when I try to load the data as a test fixture:

        IntegrityError: (1062, "Duplicate entry 'escola-t23aluno' for key 2")

    It seems the problem is that Django is trying to dynamically recreate contenttypes with different primary key values that conflict with the primary key values from the fixture. This appears to be the same as the bug documented here: http://code.djangoproject.com/ticket/7052 The problem is that the recommended workaround is to dump the contenttypes app, which I'm already doing! What gives? If it makes any difference, I do have some custom model permissions, as documented here: http://docs.djangoproject.com/en/dev/ref/models/options/#permissions
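
    A workaround commonly suggested for this situation (a sketch, assuming Django 1.1 or newer, where dumpdata gained an --exclude option) is the opposite of dumping contenttypes: leave that app out of the dump entirely, so the content types the test database creates dynamically never collide with rows coming from the fixture:

        ./manage.py dumpdata --exclude contenttypes auth escola > fixture.json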

    Read the article

  • remuxing m4v files from h264 AVI files

    - by crankharder
    I have a bunch of, I think, x264-encoded AVIs that I'd like to convert to m4v so that I can play them with QuickTime. Here's how I created them. First I dump the VOB from the DVD with this:

        $ mplayer -dumpstream -dumpfile new.vob dvd://1

    Then I compress it:

        $ mencoder -oac copy -o new.avi -ovc x264 -x264encopts crf=18 new.vob

    I tried converting them to m4v, but it's blowing up. I tried dumping the h264/aac streams:

        $ mplayer new.avi -dumpvideo -dumpfile new.h264
        $ mplayer new.avi -dumpaudio -dumpfile new.aac

    and remuxing(?) with MP4Box, but I'm getting an error:

        $ MP4Box -add new.h264#video -add new.aac#audio new.m4v
        Cannot find H264 start code
        Error importing new.h264#video: BitStream Not Compliant

    So I'm not sure what to do now...

    Read the article

  • GLib Hash Table - Pointer

    - by Mike
    I'm trying to increment the value of a specific key when it is found. For some reason I keep getting the (pointer) address when I dump all key:value pairs from the hash table. Output:

        a: 153654132 // should be 5
        b: 1
        c: 153654276 // should be 3
        d: 1
        e: 1
        f: 153654420 // should be 3

        int proc() {
            struct st stu;
            gpointer ok, ov;
            // ... some non-related code here
            if (!g_hash_table_lookup_extended(table, key, &ok, &ov)) {
                stu.my_int = g_malloc(sizeof(guint));
                *(stu.my_int) = 0;
                g_hash_table_insert(table, g_strdup(key), GINT_TO_POINTER(1));
            } else {
                stu.my_int = g_malloc(sizeof(guint));
                *(stu.my_int)++;
                g_hash_table_insert(table, g_strdup(key), stu.my_int);
            }
        }

    Any ideas will be appreciated.

    Read the article

  • PHP DOMNode entities and nodeValue

    - by Obsidian
    When getting the nodeValue of a DOMNode object whose nodeValue contains entities (e.g. &gt;), PHP converts the entity into its printable character (i.e. >). Does anyone know of a way to get it to keep the entity? It really messes up string comparisons when it converts to something unexpected. The following code reproduces the problem; you will notice the length of the dump is 3 when it should be 6.

        <?php
        $xml = '<?xml version="1.0"?>
        <root>
        <element>&gt;</element>
        </root>';

        $a = new DOMDocument();
        $a->loadXML($xml);
        var_dump($a->childNodes->item(0)->nodeValue);

    Read the article

  • .Net: Adding files and folders to SETUP Project programmatically

    - by Manish
    So, here is the scenario: I want to create an installer which would just dump a few files and folders at a location specified by the user. But the problem is that these files have to be picked up from a fixed source folder before the installer is built. Also, these files may change at any time, and then a new version of the installer has to be created again, so this needs to be done programmatically. Also, how can I add some code to setup projects? (New to SETUP PROJECTS.) How? Any ideas/comments appreciated...

    Read the article

  • PDB file from different versions of Visual Studio

    - by m3rLinEz
    I have an old DLL file which was built with VC++ 6. Now I need to investigate a crash dump, but I don't have the DLL's PDB available. The stack trace reported by WinDbg is also inaccurate. Is it possible to rebuild the project with a later version of Visual Studio (2003, 2005, or 2008), have the PDB generated, and use this to map addresses to symbols in the old DLL? Is there something like a VC 6.0-compatible mode for building the project? Obtaining VC++ 6 is one option, but it looks like VS 6.0 has already vanished from the MSDN subscriber download page :( Thanks!

    Read the article

  • Looking for an alternative to cfdump

    - by invertedSpear
    I think I just realized how restrictive my web host is when they wouldn't let me use cfdump. This actually kind of angers me, because really, what harm is dump going to do? Anyway, my question is: has anyone written a cfdump alternative that will kick out complex types of data, or can anyone link me to a site with a code example? I can't really use CFCs or UDFs either, because guess what, they're blocked too. Anyway, I'm looking for something simple that I can just paste into my CFML, and I will be happy. It's sad that I used to be able to do this but have forgotten a lot of that skill set since I moved into Flex and AS. Oh, and they're using CF7, so no CF8 or 9 tricks ;-) Thanks in advance.

    Read the article

  • Disabling Xdebug's dumping of caught exceptions

    - by nuqqsa
    By default Xdebug will dump any exception, regardless of whether it is caught or not:

        try {
            throw new Exception();
        } catch (Exception $e) {
        }
        echo 'life goes on';

    With Xdebug enabled and the default settings, this piece of code will actually output something like the following (nicely formatted):

        ( ! ) Exception: in /test.php on line 3
        Call Stack
        #   Time     Memory   Function    Location
        1   0.0003   52596    {main}( )   ../test.php:0
        life goes on

    Is it possible to disable this behaviour and have it dump only the uncaught exceptions? Thanks in advance. UPDATE: I'm about to conclude that this is a bug, since xdebug.show_exception_trace is disabled by default yet it doesn't behave as expected (using Xdebug v2.0.5 with PHP 5.2.10 on Ubuntu 9.10).

    Read the article

  • [Gdata] GetAuthSubToken returns None

    - by Matt
    Hey guys, I am a little lost on how to get the auth token. Here is the code I am using on the return from authorizing my app:

        client = gdata.service.GDataService()
        gdata.alt.appengine.run_on_appengine(client)
        sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
        client.UpgradeToSessionToken(sessionToken)
        logging.info(client.GetAuthSubToken())

    What gets logged is "None", so that doesn't seem right :-( If I use this:

        temp = client.upgrade_to_session_token(sessionToken)
        logging.info(dump(temp))

    I get this:

        {'scopes': ['http://www.google.com/calendar/feeds/'], 'auth_header': 'AuthSub token=CNKe7drpFRDzp8uVARjD-s-wAg'}

    So I can see that I am getting an AuthSub token, and I guess I could just parse that and grab the token, but that doesn't seem like the way things should work. If I try to use AuthSubTokenInfo I get this:

        Traceback (most recent call last):
          File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__
            handler.get(*groups)
          File "controllers/indexController.py", line 47, in get
            logging.info(client.AuthSubTokenInfo())
          File "/Users/matthusby/Dropbox/appengine/projects/FBCal/gdata/service.py", line 938, in AuthSubTokenInfo
            token = self.token_store.find_token(scopes[0])
        TypeError: 'NoneType' object is unsubscriptable

    So it looks like my token_store is not getting filled in correctly; is that something I should be doing myself? Also, I am using gdata 2.0.9. Thanks, Matt

    Read the article

  • Filtering Filenames with bash

    - by Stefan Liebenberg
    I have a directory full of log files in the form ${name}.log.${year}${month}${day}, so that they look like this:

        logs/
            production.log.20100314
            production.log.20100321
            production.log.20100328
            production.log.20100403
            production.log.20100410
            ...
            production.log.20100314
            production.log.old

    I'd like to use a bash script to filter out all the logs older than X months and dump them into *.log.old:

        X=6 # months
        LIST=*.log.*
        for file in $LIST; do
            is_older=$(file_is_older_than_months "${file}" "${X}")  # pseudocode
            if ${is_older}; then
                cat "${file}" >> production.log.old
                rm "${file}"
            fi
        done

    How can I get all the files older than X months? And how can I avoid having the *.log.old file included in the LIST variable? Thank you, Stefan
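
    Since the date lives in the filename rather than in the file's mtime, one illustrative approach is to parse the YYYYMMDD suffix and compare it against a cutoff. Here is a Python sketch of that logic (the six-month cutoff is approximated as 183 days); it also skips production.log.old automatically, because that name carries no date suffix:

        import glob
        import os
        import re
        from datetime import datetime, timedelta

        CUTOFF = datetime.now() - timedelta(days=183)  # roughly 6 months
        DATED = re.compile(r'\.log\.(\d{8})$')

        old = open('production.log.old', 'ab')
        for path in sorted(glob.glob('*.log.*')):
            m = DATED.search(path)
            if m is None:
                continue  # e.g. production.log.old itself
            stamp = datetime.strptime(m.group(1), '%Y%m%d')
            if stamp < CUTOFF:
                old.write(open(path, 'rb').read())
                os.remove(path)
        old.close()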

    Read the article

  • Garbage collection of Strings returned from C# method calls in ascx pages

    - by Icarus
    Hi, for a web application developed on ASP.NET, we are finding that user control (.ascx) files return long strings as the results of method calls. These are embedded in the .ascx pages using the special <% %> tags. When performing memory dump analysis for the application, we find that many of those strings are not being garbage collected. Also, the .ascx pages are compiled to temporary DLLs, and those are held in memory. Is this responsible for causing the long strings to remain in memory and not be garbage collected? Note: the strings are larger than 85K in size.

    Read the article

  • How to return an image in an HTTP response with CherryPy

    - by colinmarc
    I have code which generates a Cairo ImageSurface, and I expose it like so:

        def preview(...):
            surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height)
            ...
            cherrypy.response.headers['Content-Type'] = "image/png"
            return surface.get_data()
        preview.exposed = True

    This doesn't work (browsers report that the image has errors). I've tested that surface.write_to_png('test.png') works, but I'm not sure what to dump the data into to return it; I'm guessing some file-like object? According to the pycairo documentation, get_data() returns a buffer. I've also now tried:

        tempf = os.tmpfile()
        surface.write_to_png(tempf)
        return tempf

    Also, is it better to create and hold this image in memory (like I'm trying to do) or to write it to disk as a temp file and serve it from there? I only need the image once; then it can be discarded.
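
    For context, get_data() returns the surface's raw ARGB pixel buffer, not PNG bytes, which is consistent with the browser complaining. A minimal sketch of one fix, relying on write_to_png() accepting any file-like object: render the PNG into an in-memory buffer and return its contents. The drawing code and dimensions below are placeholders.

        import cStringIO

        import cairo
        import cherrypy

        class Preview(object):
            def preview(self):
                width, height = 300, 200  # illustrative dimensions
                surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height)
                # ... draw on the surface here ...
                buf = cStringIO.StringIO()
                surface.write_to_png(buf)  # encodes PNG into the buffer
                cherrypy.response.headers['Content-Type'] = 'image/png'
                return buf.getvalue()
            preview.exposed = True

    Since the image is needed only once, keeping it in memory like this avoids the bookkeeping of temp files entirely.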

    Read the article
