Search Results

Search found 1968 results on 79 pages for 'pickle dump'.

Page 18 of 79

  • LINQ to Twitter Queries with LINQPad

    - by Joe Mayo
    LINQPad is a popular utility for .NET developers who use LINQ a lot. In addition to standard SQL queries, LINQPad also supports other types of LINQ providers, including LINQ to Twitter. The following sections explain how to set up LINQPad for making queries with LINQ to Twitter. LINQPad comes in a couple of versions; this example uses LINQPad4, which runs on the .NET Framework 4.0.
    1. The first thing you'll need to do is set up a reference to LinqToTwitter.dll. From the Query menu, select Query Properties. Click the Browse button and find the LinqToTwitter.dll binary. You should see something similar to the Query Properties window below.
    2. While you have the Query Properties window open, add the namespace for the LINQ to Twitter types: click the Additional Namespace Imports tab and type in LinqToTwitter. The results are shown below.
    3. The default query type, when you first start LINQPad, is C# Expression, but you'll need to change this to support multiple statements. Change the Language dropdown, on the main window, to C# Statements.
    4. To query LINQ to Twitter, instantiate a TwitterContext by typing the following into the LINQPad Query window:
        var ctx = new TwitterContext();
    Note: if you're getting syntax errors, go back and make sure you did steps 2 and 3 properly.
    5. Next, add a query, but don't materialize it, like this:
        var tweets = from tweet in ctx.Status
                     where tweet.Type == StatusType.Public
                     select new { tweet.Text, tweet.Geo, tweet.User };
    6. Next, you want the output to be displayed in the LINQPad grid, so do a Dump, like this:
        tweets.Dump();
    The following image shows the final results.
    That was an unauthenticated query, but you can also perform authenticated queries with LINQ to Twitter's support of OAuth. Here's an example that uses the PinAuthorizer (type this into the LINQPad Query window):
        var auth = new PinAuthorizer
        {
            Credentials = new InMemoryCredentials
            {
                ConsumerKey = "",
                ConsumerSecret = ""
            },
            UseCompression = true,
            GoToTwitterAuthorization = pageLink => Process.Start(pageLink),
            GetPin = () =>
            {
                // This executes after the user authorizes, which begins with the call to auth.Authorize() below.
                Console.WriteLine("\nAfter you authorize this application, Twitter will give you a 7-digit PIN number.\n");
                Console.Write("Enter the PIN number here: ");
                return Console.ReadLine();
            }
        };
        // Start the authorization process (launches the Twitter authorization page).
        auth.Authorize();
        var ctx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/");
        var tweets = from tweet in ctx.Status
                     where tweet.Type == StatusType.Public
                     select new { tweet.Text, tweet.Geo, tweet.User };
        tweets.Dump();
    This code is very similar to what you'll find in the LINQ to Twitter downloadable source code solution, in the LinqToTwitterDemo project. For obvious reasons, I changed the values assigned to ConsumerKey and ConsumerSecret, which you'll have to obtain by visiting http://dev.twitter.com and registering your application. One tip: you'll probably want to make this easier on yourself by creating your own DLL that encapsulates all of the OAuth logic, then call a method or property on your custom class that returns a fully functioning TwitterContext. This will help you avoid adding all this code every time you want to make a query. Now you know how to set up LINQPad for LINQ to Twitter, perform unauthenticated queries, and perform queries with OAuth.
    Joe

    Read the article

  • how to do loop for array which have different data for each array

    - by Suriani Salleh
    I have this XML file that I need to convert from XML to MySQL. If it had only one array, I would know how to do it. My question is how to extract these two arrays, where each array has different data values: for the first array, pmIntervalTxEthMaxUtilization data: 0,74,0,0,48, and for the second array, pmIntervalRxPowerLevel data: -79,-68,-52 and pmIntervalTxPowerLevel data: 13,11,-55. Can someone help guide me in writing PHP code to extract this XML file into MySQL?
        <mi>
          <mts>20130618020000</mts>
          <gp>900</gp>
          <mt>pmIntervalRxUndersizedFrames</mt>   [this is the first array]
          <mt>pmIntervalRxUnicastFrames</mt>
          <mt>pmIntervalTxUnicastFrames</mt>
          <mt>pmIntervalRxEthMaxUtilization</mt>
          <mt>pmIntervalTxEthMaxUtilization</mt>
          <mv>
            <moid>port:1:3:23-24</moid>
            <sf>FALSE</sf>
            <r>0</r>   [the data for the 1st array I want to insert in the DB]
            <r>0</r>
            <r>0</r>
            <r>5</r>
            <r>0</r>
          </mv>
        </mi>
        <mi>
          <mts>20130618020000</mts>
          <gp>900</gp>
          <mt>pmIntervalRxSES</mt>   [this is the second array]
          <mt>pmIntervalRxPowerLevel</mt>
          <mt>pmIntervalTxPowerLevel</mt>
          <mv>
            <moid>client:1:3:23-24</moid>
            <sf>FALSE</sf>
            <r>0</r>   [the data for the 2nd array I want to insert in the DB]
            <r>-79</r>
            <r>13</r>
          </mv>
        </mi>
    This is the code for one array that I wrote. I don't know how to write code for two arrays, because the fields appear two times and have different data values for each array:
        // Loop through the specified xpath
        foreach ($xml->mi->mv as $subchild) {
            $port_no = $subchild->moid;
            $rx_ses = $subchild->r[0];
            $rx_es = $subchild->r[1];
            $tx_power = $subchild->r[10];
            // dump into database;
            ...........................
        }
    I have done a little research on it; this is the outcome:
        $i = 0;
        while ($i < 5) {
            // Loop through the specified xpath
            foreach ($xml->md->mi->mv as $subchild) {
                $port_no = $subchild->moid;
                $rx_uni = $subchild->r[10];
                $tx_uni = $subchild->r[11];
                $rx_eth = $subchild->r[16];
                $tx_eth = $subchild->r[17];
                // dump into database;
                ..............................
                $i++;
                if ($i == 5) break;
            }
        }
        // Loop through the specified xpath
        foreach ($xml->mi->mv as $subchild) {
            $port_no = $subchild->moid;
            $rx_ses = $subchild->r[0];
            $rx_es = $subchild->r[1];
            $tx_power = $subchild->r[10];
            // dump into database;
            .......................
        }
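    The crux here is that each <mi> block defines its own list of counter names, so the name-to-value pairing has to be computed per block rather than hard-coded with indexes like r[10]. Below is a minimal sketch of that pairing logic, written in Python with xml.etree for brevity (the same zip-the-names-with-the-values idea carries straight over to PHP's SimpleXML); the XML is a trimmed version of the sample above, and the row handling is a placeholder:
        import xml.etree.ElementTree as ET

        xml_doc = """
        <md>
          <mi>
            <mt>pmIntervalRxUndersizedFrames</mt>
            <mt>pmIntervalTxEthMaxUtilization</mt>
            <mv><moid>port:1:3:23-24</moid><r>0</r><r>5</r></mv>
          </mi>
          <mi>
            <mt>pmIntervalRxPowerLevel</mt>
            <mt>pmIntervalTxPowerLevel</mt>
            <mv><moid>client:1:3:23-24</moid><r>-79</r><r>13</r></mv>
          </mi>
        </md>"""

        root = ET.fromstring(xml_doc)
        for mi in root.iter("mi"):
            names = [mt.text for mt in mi.findall("mt")]    # this block's column names
            for mv in mi.findall("mv"):
                values = [r.text for r in mv.findall("r")]  # values arrive in the same order
                row = dict(zip(names, values))
                row["moid"] = mv.findtext("moid")
                print(row)  # placeholder: build an INSERT for the matching table instead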

    Read the article

  • Math with Timestamp

    - by Knut Vatsendvik
    Got this little SQL quiz from a colleague: how do you add or subtract exactly 1 second from a Timestamp? Sounded simple enough at first glance, but it was a bit trickier than expected. If the data type had been a Date, we knew that we could add or subtract days, minutes or seconds using + or -:
        sysdate + 1           -- to add one day
        sysdate - (1 / 24)    -- to subtract one hour
        sysdate + (1 / 86400) -- to add one second
    Would the same arithmetic work with Timestamp as with Date? Let's test it out with the following query:
        SELECT systimestamp, systimestamp + (1 / 86400) FROM dual;
        ----------
        03.05.2010 22.11.50,240887 +02:00
        03.05.2010
    The first result line shows us the system time down to fractions of seconds. The second result line shows the result as a Date (as used for date calculation), meaning the granularity is now reduced to a second. By using the PL/SQL dump() function, we can confirm this with the following query:
        SELECT dump(systimestamp), dump(systimestamp + (1 / 86400)) FROM dual;
        ----------
        Typ=188 Len=20: 218,7,5,4,8,53,9,0,200,46,89,20,2,0,5,0,0,0,0,0
        Typ=13 Len=8: 218,7,5,4,10,53,10,0
    where Typ=13 is a runtime representation for Date. So how can we increase the precision to include fractions of a second? After investigating a bit, we found that the interval data type INTERVAL DAY TO SECOND can be used, with the result of the addition or subtraction being a Timestamp. Let's try our first query again, now using the interval data type:
        SELECT systimestamp, systimestamp + INTERVAL '0 00:00:01.0' DAY TO SECOND(1) FROM dual;
        ----------
        03.05.2010 22.58.32,723659000 +02:00
        03.05.2010 22.58.33,723659000 +02:00
    Yes, it worked! To finish the story, here is one example showing how to specify an interval of 2 days, 6 hours, 30 minutes, 4 seconds and 111 thousandths of a second:
        INTERVAL '2 6:30:4.111' DAY TO SECOND(3)
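    As a side note for readers outside Oracle: the same pitfall doesn't exist in, say, Python, where datetime arithmetic preserves fractional seconds by default. A hedged sketch of the analogous calculation (purely an illustration, not part of the original quiz):
        from datetime import datetime, timedelta

        now = datetime.now()
        # mirrors INTERVAL '2 6:30:4.111' DAY TO SECOND(3)
        delta = timedelta(days=2, hours=6, minutes=30, seconds=4, milliseconds=111)
        print(now)          # microsecond precision...
        print(now + delta)  # ...is preserved through the arithmetic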

    Read the article

  • How to break WinDbg in an anonymous method?

    - by Richard Berg
    Title kinda says it all. The usual SOS command !bpmd doesn't do a lot of good without a name. Some ideas I had:
    - Dump every method, then use !bpmd -md when you find the corresponding MethodDesc. Not practical in real-world usage, from what I can tell. Even if I wrote a macro to limit the dump to anonymous types/methods, there's no obvious way to tell them apart.
    - Use Reflector to dump the MSIL name. Doesn't help when dealing with dynamic assemblies and/or Reflection.Emit. Visual Studio's inability to read local vars inside such scenarios is the whole reason I turned to WinDbg in the first place...
    - Set the breakpoint in VS, wait for it to hit, then change to WinDbg using the noninvasive trick. Attempting to detach from VS causes it to hang (along with the app). I think this is due to the fact that the managed debugger is a "soft" debugger via thread injection instead of a standard "hard" debugger. Or maybe it's just a VS bug specific to Silverlight (would hardly be the first I've encountered).
    - Set a breakpoint on some other location known to call into the anonymous method, then single-step your way in. My backup plan, though I'd rather not resort to it if this Q&A reveals a better way.

    Read the article

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the file first; however, this incurs a 1000 millisecond delay when serializing the file and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash that provides an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with:
        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end
        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds
    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options:
    - Create a Sinatra application to store this hash with an API to modify/access it.
    - Create a C application to do the same as #1, but a lot faster.
    The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best answer credit.
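    For anyone comparing across languages: the analogous measurement in Python uses pickle, which has the same serialize-to-share cost profile. A minimal sketch of the benchmark only (not a fix for the sharing problem; timings are machine-dependent):
        import pickle
        import random
        import time

        # roughly the same 501 x 10,001 matrix of 0s and 1s as the Ruby snippet
        a = [[1 if random.randrange(10) == 0 else 0 for _ in range(10_001)]
             for _ in range(501)]

        t0 = time.perf_counter()
        blob = pickle.dumps(a, protocol=pickle.HIGHEST_PROTOCOL)
        t1 = time.perf_counter()
        pickle.loads(blob)
        t2 = time.perf_counter()
        print(f"dumps: {(t1 - t0) * 1000:.0f} ms, loads: {(t2 - t1) * 1000:.0f} ms")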

    Read the article

  • JVM segmentation faults due to "Invalid memory access of location"

    - by Dan
    I have a small project written in Scala 2.9.2 with unit tests written using ScalaTest. I use SBT for compiling and running my tests. Running sbt test on my project makes the JVM segfault regularly, but just compiling and running my project from SBT works fine. Here is the exact error message:
        Invalid memory access of location 0x8 rip=0x10959f3c9
        [1] 11925 segmentation fault sbt
    I cannot locate a core dump anywhere, but would be happy to provide it if it can be obtained. Running java -version results in this:
        java version "1.6.0_37"
        Java(TM) SE Runtime Environment (build 1.6.0_37-b06-434-11M3909)
        Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01-434, mixed mode)
    But I've also got Java 7 installed (though I was never able to actually run a Java program with it, afaik). Another issue that may be related: some of my test cases contain titles including parentheses like ( and ). SBT or ScalaTest (not sure) will consequently insert square brackets in the middle of the output. For example, a test case with the name (..)..(..) might suddenly look like (..[)..](..). Any help resolving these issues is much appreciated :-)
    EDIT: I installed the Java 7 JDK, so now java -version shows the right thing:
        java version "1.7.0_07"
        Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
        Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)
    This also means that I now get a more detailed segfault error and a core dump:
        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x000000010a71a3e3, pid=16830, tid=19459
        #
        # JRE version: 7.0_07-b10
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.3-b01 mixed mode bsd-amd64 compressed oops)
        # Problematic frame:
        # V  [libjvm.dylib+0x3cd3e3]
    And the dump.

    Read the article

  • Sharing large objects between ruby processes without a performance hit

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the file first; however, this incurs a 1000 millisecond delay when serializing the file and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash that provides an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with:
        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end
        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds

    Read the article

  • How can I store HTML in a Doctrine YML fixture

    - by argibson
    I am working with a CMS-type site in Symfony 1.4 (Doctrine 1.2) and one of the things that is frustrating me is not being able to store HTML pages in YML fixtures. Instead I have to create SQL backups of the data if I want to drop and rebuild, which is a bit of a pest when Symfony/Doctrine has a fantastic mechanism for doing exactly this. I could write a mechanism that reads in a set of HTML files for each page and fills the data in that way (or even write it as a task). But before I go down that road, I am wondering if there is any way for HTML to be stored in a YML fixture so that Doctrine can simply import it into the database.
    Update: I have tried using symfony doctrine:data-dump and symfony doctrine:data-load; however, despite the dump correctly creating the fixture with the HTML, the load task appears to 'skip' the value of the column with the HTML and enters everything else into the row. In the database the field doesn't show up as 'NULL' but rather empty, so I believe Doctrine is adding the value of the column as ''. Below is a sample of the YML fixture that symfony doctrine:data-dump created. I have tried running symfony doctrine:data-load against various forms of this, including removing all the escaped characters (new lines and quotes, leaving only angle brackets), but it still doesn't work.
        Product_69:
          name: 'My Product'
          Developer: Developer_30
          tagline: 'Text that briefly describes the product'
          version: '2008'
          first_published: ''
          price_code: A79
          summary: ''
          box_image: ''
          description: "<div id=\"featureSlider\">\n <ul class=\"slider\">\n <li class=\"sliderItem\" title=\"Summary\">\n <div class=\"feature\">\n Some text goes in here</div>\n </li>\n </ul>\n </div>\n"
          is_visible: true
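    One YAML-side observation, hedged: the YAML format itself can hold multi-line HTML without any escaping by using a literal block scalar (|); whether Doctrine's loader round-trips it is exactly what the question is about. A quick illustration with PyYAML (Python used only to show the YAML behavior, not the Symfony tooling):
        import yaml

        fixture = yaml.safe_load("""
        Product_69:
          name: 'My Product'
          description: |
            <div id="featureSlider">
              <ul class="slider">
                <li class="sliderItem" title="Summary">Some text goes in here</li>
              </ul>
            </div>
        """)
        print(fixture["Product_69"]["description"])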

    Read the article

  • shoulda macros with rspec2 beta 5 and rails3 beta2

    - by Millisami
    I've set up Rspec2 beta5 and shoulda as follows to use shoulda macros inside rspec model tests.
    Gemfile:
        group :test do
          gem "rspec", ">= 2.0.0.beta.4"
          gem "rspec-rails", ">= 2.0.0.beta.4"
          gem 'shoulda', :git => 'git://github.com/bmaddy/shoulda.git'
          gem "faker"
          gem "machinist"
          gem "pickle", :git => 'git://github.com/codegram/pickle.git'
          gem 'capybara', :git => 'git://github.com/jnicklas/capybara.git'
          gem 'database_cleaner', :git => 'git://github.com/bmabey/database_cleaner.git'
          gem 'cucumber-rails', :git => 'git://github.com/aslakhellesoy/cucumber-rails.git'
        end
    spec_helper.rb:
        Dir["#{File.dirname(__FILE__)}/support/**/*.rb"].each {|f| require f}
        require 'shoulda'
        Rspec.configure do |config|
    spec/models/outlet_spec.rb:
        require 'spec_helper'
        describe Outlet do
          it { should validate_presence_of(:name) }
        end
    And when I run the spec, I get the following error:
        [~/rails_apps/rails3_apps/automation (master)?] ? spec spec/models/outlet_spec.rb
        DEPRECATION WARNING: RAILS_ROOT is deprecated! Use Rails.root instead.
        (called from join at /home/millisami/.rvm/gems/ruby-1.9.1-p378%rails3/bundler/gems/shoulda-87e75311f83548760114cd4188afa4f83fecdc22-master/lib/shoulda/autoload_macros.rb:40)
        F
        1) Outlet
           Failure/Error: it { should validate_presence_of(:name) }
           undefined method `validate_presence_of' for #<Rspec::Core::ExampleGroup::Nested_1:0xc4dc138 @__memoized={}>
           # ./spec/models/outlet_spec.rb:4:in `block (2 levels) in <top (required)>'
        Finished in 0.0399 seconds
        1 example, 1 failures
    Why the "undefined method"? Is shoulda getting loaded?

    Read the article

  • Producing Mini Dumps for _caught_ SEH exceptions in mixed code DLL

    - by Assaf Lavie
    I'm trying to use code similar to clrdump to create mini dumps in my managed process. This managed process invokes C++/CLI code which invokes some native C++ static lib code, wherein SEH exceptions may be thrown (e.g. the occasional access violation). C# WinForms -> C++/CLI DLL -> Static C++ Lib -> ACCESS VIOLATION Our policy is to produce mini dumps for all SEH exceptions (caught & uncaught) and then translate them to C++ exceptions to be handled by application code. This works for purely native processes just fine; but when the application is a C# application - not so much. The only way I see to produce dumps from SEH exceptions in a C# process is to not catch them - and then, as unhandled exceptions, use the Application.ThreadException handler to create a mini dump. The alternative is to let the CLR translate the SEH exception into a .Net exception and catch it (e.g. System.AccessViolationException) - but that would mean no dump is created, and information is lost (stack trace information in Exception isn't as rich as the mini dump). So how can I handle SEH exceptions by both creating a minidump and translating the exception into a .Net exception so that my application may try to recover?

    Read the article

  • svn import, dont modify revision OR modify the list of files in a transaction

    - by Vaughan Durno
    Hi. I've gained so much knowledge/insight from this site in the past few years; now I'm actually hoping to get some enlightenment. The scenario is as follows: you have the general structure of the repo (trunk, branches, tags), but added to the layout you have another directory called 'db_revs'. Now in the pre-commit hook, you take a dump of a specific database (the specifics are irrelevant) into a temporary file, say /tmp/REV.sql (REV being the HEAD revision number of the repo, or the transaction). OK, all is well, and you can just import that temp file into the repo at /db_revs/REV.sql. Now obviously that import, even though it's happening during a commit, increments the revision of the repo. So when you do a commit at some point to say 'test.php' in the trunk and it completes at say revision 159, then the pre-commit runs as it should and the DB dump gets imported, but then you are sitting with a tree in the repo-browser where 'trunk' is at revision 159, and 'db_revs', which has the imported dump, is at 158 (I've made it so that the filename matches the revision, i.e. 159.sql, but that file is then at revision 158).
    NB: If you're doing an import in a pre-commit hook, you need to add some logic to not perform the import, say by checking first for the existence of the temp file, otherwise it will cause, um, a stack overflow and your PC will quickly crawl to a standstill.
    So I wanted to know if it was possible to make an import not commit its changes. I realise I might be barking up the wrong tree to begin with, so I have another idea of doing this, which brings me to the second part of my question: would it be possible to modify the list of files that the transaction is about to commit to the repo? I know this can be done to a WC, but that won't help, as a WC is a checked-out copy of say the trunk, so I'm not sure how you would add a file to the 'db_revs' folder which is above trunk?
    Any help is greatly appreciated.
    Cheers, Vaughan

    Read the article

  • How is it that json serialization is so much faster than yaml serialization in python?

    - by guidoism
    I have code that relies heavily on yaml for cross-language serialization and while working on speeding some stuff up I noticed that yaml was insanely slow compared to other serialization methods (e.g., pickle, json). So what really blows my mind is that json is so much faster than yaml when the output is nearly identical.
        >>> import yaml, cjson; d={'foo': {'bar': 1}}
        >>> yaml.dump(d, Dumper=yaml.SafeDumper)
        'foo: {bar: 1}\n'
        >>> cjson.encode(d)
        '{"foo": {"bar": 1}}'
        >>> import yaml, cjson;
        >>> timeit("yaml.dump(d, Dumper=yaml.SafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        44.506911039352417
        >>> timeit("yaml.dump(d, Dumper=yaml.CSafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
        16.852826118469238
        >>> timeit("cjson.encode(d)", setup="import cjson; d={'foo': {'bar': 1}}", number=10000)
        0.073784112930297852
    PyYaml's CSafeDumper and cjson are both written in C so it's not like this is a C vs Python speed issue. I've even added some random data to it to see if cjson is doing any caching, but it's still way faster than PyYaml. I realize that yaml is a superset of json, but how could the yaml serializer be 2 orders of magnitude slower with such simple input?
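    One pitfall worth ruling out when timing this, stated as a hedged aside: PyYaml only provides the C-backed classes when it was built against libyaml, and falls back to pure Python otherwise, so it's worth confirming which implementation you are actually benchmarking. The standard guard looks like this:
        import yaml
        try:
            from yaml import CSafeDumper as Dumper  # available only if built with libyaml
        except ImportError:
            from yaml import SafeDumper as Dumper   # pure-Python fallback

        print(yaml.dump({'foo': {'bar': 1}}, Dumper=Dumper))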

    Read the article

  • Ruby custom class to and from YAML;

    - by Sanarothe
    Hi. I'm having trouble deserializing a Ruby class that I wrote to YAML.
    Where I want to be: I want to be able to pass one object around as a full 'question', which includes the question text, some possible answers (for multiple choice) and the correct answer. One module (the encoder) takes input, builds a 'question' class out of it and appends it to the question pool. Another module reads a question pool and builds an array of 'question' objects.
    Where I am currently:
    Sample question pool:
        --- |
        --- !ruby/object:MultiQ
        a: "no"
        answer: "no"
        b: "no"
        c: "no"
        d: "no"
        text: "yes?"
    Encoder dump to YAML file (object is a MultiQ filled up with input, see below):
        def dump(file, object)
          File.open(file, 'a') do |out|
            YAML.dump(object.to_yaml, out)
          end
          object = nil
        end
    MultiQ class definition:
        class MultiQ
          attr_accessor :text, :answer, :a, :b, :c, :d
          def initialize(text, answer, a, b, c, d)
            @text = text
            @answer = answer
            @a = a
            @b = b
            @c = c
            @d = d
          end
        end
    The decoder (I've been trying different things, so what's here wasn't my first or best guess, but I'm at a loss and the documentation doesn't really explain things thoroughly enough):
        File.open("test_set.yaml") do |yf|
          YAML.load_documents(yf) { |item|
            new = YAML.object_maker(MultiQ, item)
            puts new
          }
        end
    Questions you can answer: How do I achieve my goal? What methods should I use, between parsing, loading files or documents, to successfully deserialize a Ruby class? I've already looked over the YAML Rdoc and I didn't absorb very much, so please don't just link me to it. What other methods would you suggest using? Is there a better way to store questions like this? Should I be using a document DB, relational DB, XML? Some other format?
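    One hedged observation before any answer: the encoder above passes object.to_yaml (already a YAML string) to YAML.dump, which appears to be why the sample pool starts with a literal block ('--- |') wrapping a second document; dumping the object itself should produce a single, loadable document. For comparison, the same round trip in Python with PyYAML looks like this (an illustration of the concept only, not Ruby API):
        import yaml

        class MultiQ(object):
            def __init__(self, text, answer, a, b, c, d):
                self.text, self.answer = text, answer
                self.a, self.b, self.c, self.d = a, b, c, d

        q = MultiQ("yes?", "no", "no", "no", "no", "no")
        doc = yaml.dump(q)                # one document, tagged !!python/object:__main__.MultiQ
        restored = yaml.unsafe_load(doc)  # rebuilds a MultiQ (trusted input only)
        print(restored.text, restored.answer)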

    Read the article

  • A UnicodeDecodeError that occurs with json in python on Windows, but not Mac.

    - by ventolin
    On Windows, I have the following problem:
        >>> string = "Don´t Forget To Breathe"
        >>> import json,os,codecs
        >>> f = codecs.open("C:\\temp.txt","w","UTF-8")
        >>> json.dump(string,f)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "C:\Python26\lib\json\__init__.py", line 180, in dump
            for chunk in iterable:
          File "C:\Python26\lib\json\encoder.py", line 294, in _iterencode
            yield encoder(o)
        UnicodeDecodeError: 'utf8' codec can't decode bytes in position 3-5: invalid data
    (Notice the non-ascii apostrophe in the string.) However, my friend, on his Mac (also using Python 2.6), can run through this like a breeze:
        > string = "Don´t Forget To Breathe"
        > import json,os,codecs
        > f = codecs.open("/tmp/temp.txt","w","UTF-8")
        > json.dump(string,f)
        > f.close(); open('/tmp/temp.txt').read()
        '"Don\\u00b4t Forget To Breathe"'
    Why is this? I've also tried using UTF-16 and UTF-32 with json and codecs, but to no avail.
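    A hedged reading of the difference, with a sketch: in Python 2 that literal is a byte string in the console's encoding (typically cp1252 on a Western-locale Windows, UTF-8 on the Mac), and json assumes byte strings are UTF-8, hence the decode error on Windows only. Decoding explicitly before dumping sidesteps the mismatch (the cp1252 name below is an assumption about the Windows console):
        # Python 2
        import codecs
        import json

        string = "Don´t Forget To Breathe"  # bytes in the console's encoding
        text = string.decode("cp1252")      # or write a unicode literal: u"Don´t ..."

        f = codecs.open("C:\\temp.txt", "w", "UTF-8")
        json.dump(text, f)                  # unicode in, so no implicit UTF-8 decode
        f.close()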

    Read the article

  • SvnDumpFilter 2,3: Error parsing header. How to fix?

    - by flashnik
    I use SVN from CollabNet, version 1.6.9. I tried to separate my project into two parts with SvnDumpFilter and encountered an error because some folders (including the one I want to separate) had been moved. Then I googled that SvnDumpFilter 2 and 3 can solve this problem. I tried to use them, but in both cases encountered an error: Error parsing header. Here is the beginning of the source dump:
        SVN-fs-dump-format-version: 2

        UUID: REP_GUID

        Revision-number: 0
        Prop-content-length: 56
        Content-length: 56

        K 8
        svn:date
        V 27
        2008-10-28T07:01:45.445155Z
        PROPS-END

        Revision-number: 1
        Prop-content-length: 151
        Content-length: 151

        K 7
        svn:log
        V 48
        ?±???·???µ??-?‡?°???‚??, ?????‚?????°?? ?±?‹?»?°
        K 10
        svn:author
        V 8
        flashnik
        K 8
        svn:date
        V 27
        2008-10-29T20:18:56.633888Z
        PROPS-END

        Node-path: Foo
        Node-kind: dir
        Node-action: add
        Prop-content-length: 10
        Content-length: 10

        PROPS-END

        Node-path: Foo/Bar
        Node-kind: dir
        Node-action: add
        Prop-content-length: 10
        Content-length: 10

        PROPS-END

        Node-path: Foo/Bar/example.doc
        Node-kind: file
        Node-action: add
        Prop-content-length: 59
        Text-content-length: 181248
        Text-content-md5: f14c77a031ab2de001ac5239427ceded
        Text-content-sha1: 95470e8d29bf76b00485c4fa33f4029f5c2386cb
        Content-length: 181307

        K 13
        svn:mime-type
        V 24
        application/octet-stream
        ....Some binary code and so on
    SvnDumpFilter3 produces the following part before dying:
        SVN-fs-dump-format-version: 2

        UUID: REP_GUID

        Revision-number: 0
        Prop-content-length: 56
        Content-length: 56

        K 8
        svn:date
        V 27
        2008-10-28T07:01:45.445155Z
        PROPS-END

        Revision-number: 1
        Prop-content-length: 151
        Content-length: 151

        K 7
        svn:log
        V 48
        ?±???·???µ??-?‡?°???‚??, ?????‚?????°?? ?±?‹?»?°
        K 10
        svn:author
        V 8
        flashnik
        K 8
        svn:date
        V 27
        2008-10-29T20:18:56.633888Z
        PROPS-END
    What's wrong? How can I fix it? Does it work with my Subversion version?

    Read the article

  • Crash generated during destruction of hash_map

    - by Alien01
    I am using hash_map in my application as:
        typedef hash_map<DWORD,CComPtr<IInterfaceXX>> MapDword2Interface;
    In the main application I am using a static instance of this map:
        static MapDword2Interface m_mapDword2Interface;
    I have got one crash dump from one of the client machines which points to a crash in the clearing of this map. I opened that crash dump and here is the assembly during debugging:
        call std::list<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> >,std::allocator<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> > > >::clear
        mov eax,dword ptr [CMainApp::m_mapDword2Interface+8 (49XXXXX)]
    Here is the code where the crash dump is pointing. The code below is from the STL <list> file:
        void clear()
        {    // erase all
        #if _HAS_ITERATOR_DEBUGGING
            this->_Orphan_ptr(*this, 0);
        #endif /* _HAS_ITERATOR_DEBUGGING */
            _Nodeptr _Pnext;
            _Nodeptr _Pnode = _Nextnode(_Myhead);
            _Nextnode(_Myhead) = _Myhead;
            _Prevnode(_Myhead) = _Myhead;
            _Mysize = 0;
            for (; _Pnode != _Myhead; _Pnode = _Pnext)
            {    // delete an element
                _Pnext = _Nextnode(_Pnode);
                this->_Alnod.destroy(_Pnode);
                this->_Alnod.deallocate(_Pnode, 1);
            }
        }
    The crash is pointing to the this->_Alnod.destroy(_Pnode); statement in the above code. I am not able to guess what the reason could be. Any ideas? How can I make sure that, even if there is something wrong with the map, it does not crash?

    Read the article

  • Symbols (pdb) for native dll are not loaded due to post build step

    - by sean e
    I have a native release DLL that is built with symbols. There is a post-build step that modifies the DLL. The post-build step does some compression and probably appends some data. The PDB file is still valid; however, neither WinDbg nor Visual Studio 2008 will load the symbols for the DLL after the post-build step. What bits in either the PDB file or the DLL do we need to modify to get either WinDbg or Visual Studio to load the symbols when it loads a dump in which our release DLL is referenced? Is it file size that matters? A checksum or hash? A timestamp? Should we modify the dump? Or modify the PDB? Or modify the DLL before it is shipped? (We know the PDB is valid because we are able to use it to manually get symbol names for addresses in dump callstacks that reference the released DLL. It's just a total pain in the *ss to do it by hand for every address in a callstack in all the threads.)

    Read the article

  • do the Python libraries have a natural dependence on the global namespace?

    - by msw
    I first ran into this when trying to determine the relative performance of two generators:
        t = timeit.repeat('g.get()', setup='g = my_generator()')
    So I dug into the timeit module and found that the setup and statement are evaluated with their own private, initially empty namespaces, so naturally the binding of g never becomes accessible to the g.get() statement. The obvious solution is to wrap them into a class, thus adding to the global namespace. I bumped into this again when attempting, in another project, to use the multiprocessing module to divide a task among workers. I even bundled everything nicely into a class, but unfortunately the call
        pool.apply_async(runmc, arg)
    fails with a PicklingError, because buried inside the work object that runmc instantiates is (effectively) an assignment:
        self.predicate = lambda x, y: x > y
    so the whole object can't be (understandably) pickled, whereas:
        def foo(x, y):
            return x > y
        pickle.dumps(foo)
    is fine. The sequence
        bar = lambda x, y: x > y
    yields True from callable(bar) and from type(bar), but it Can't pickle <function <lambda> at 0xb759b764>: it's not found as __main__.<lambda>. I've given only code fragments because I can easily fix these cases by merely pulling them out into module or object level defs. The bug here appears to be in my understanding of the semantics of namespace use in general. If the nature of the language requires that I create more def statements I'll happily do so; I fear that I'm missing an essential concept though. Why is there such a strong reliance on the global namespace? Or, what am I failing to understand? Namespaces are one honking great idea -- let's do more of those!
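    For later readers, a sketch of the usual module-level workarounds (my_generator is a stand-in here, and the globals= keyword needs Python 3.5+): hand timeit an explicit namespace instead of relying on a shared one, and give multiprocessing a def at module scope, since pickle serializes functions by importable name rather than by value:
        import pickle
        import timeit

        def my_generator():  # stand-in for the generator under test
            n = 0
            while True:
                yield n
                n += 1

        g = my_generator()
        print(timeit.timeit('next(g)', globals={'g': g}, number=10000))

        def predicate(x, y):  # module level, so pickle can find __main__.predicate
            return x > y

        blob = pickle.dumps(predicate)
        print(pickle.loads(blob)(2, 1))  # True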

    Read the article

  • Connect to Facebook with Javascript Client Library using Adobe AIR

    - by Chuck Hriczko
    I'm having an issue connecting to Facebook through the JavaScript Client Library in Adobe AIR. It works fine in the browser, but in Adobe AIR it seems to not be able to find the Facebook functions. Here is some code I copied from the Facebook website (oh, and I have the xd_receiver.htm file in the correct path too):
        <textarea style="width: 1000px; height: 300px;" id="_traceTextBox">
        </textarea>
        <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
        //
        var api = FB.Facebook.apiClient;
        // require user to login
        api.requireLogin(function(exception){
            FB.FBDebug.logLevel=1;
            FB.FBDebug.dump("Current user id is " + api.get_session().uid);
            // Get friends list
            // 5-14-09: this code below is broken, correct code follows
            //api.friends_get(null, function(result){
            //    Debug.dump(result, 'friendsResult from non-batch execution ');
            //});
            api.friends_get(new Array(), function(result, exception){
                FB.FBDebug.dump(result, 'friendsResult from non-batch execution ');
            });
        });
        //]]>
        </script>

    Read the article

  • overloading "<<" with a struct (no class) cout style

    - by monkeyking
    I have a struct that I'd like to output using either 'std::cout' or some other output stream. Is this possible without using classes? Thanks.
        #include <iostream>
        #include <fstream>

        template <typename T>
        struct point{
          T x;
          T y;
        };

        template <typename T>
        std::ostream& dump(std::ostream &o,point<T> p) const{
          o<<"x: " << p.x <<"\ty: " << p.y <<std::endl;
        }

        template<typename T>
        std::ostream& operator << (std::ostream &o,const point<T> &a){
          return dump(o,a);
        }

        int main(){
          point<double> p;
          p.x=0.1;
          p.y=0.3;
          dump(std::cout,p);
          std::cout << p ;//how?
          return 0;
        }
    I tried different syntaxes but I can't seem to make it work.

    Read the article

  • Using placeholders/variables in a sed command

    - by jesse_galley
    I want to store a specific part of a matched result as a variable to be used for replacement later. I would like to keep this in a one-liner instead of finding the variable I need beforehand. When configuring Apache and using mod_rewrite, you can specify specific parts of patterns to be used as variables, like this:
        RewriteRule ^www.example.com/page/(.*)$ http://www.example.com/page.php?page=$1 [R=301,L]
    The part of the pattern match that's contained inside the parentheses is stored as $1 for use later. So if the URL was www.example.com/page/home, it would be replaced with www.example.com/page.php?page=home. The "home" part of the match was saved in $1 because it was the part of the pattern inside the parentheses.
    I want something like this functionality with a sed command. I need to automatically replace many strings in a SQL dump file, to add DROP TABLE IF EXISTS commands before each CREATE TABLE, but I need to know the table name to do this. So if the dump file contains something like:
        ...
        CREATE TABLE `orders`
        ...
    I need to run something like:
        cat dump.sql | sed "s/CREATE TABLE `(.*)`/DROP TABLE IF EXISTS $1\N CREATE TABLE `$1`/g"
    to get the result of:
        ...
        DROP TABLE IF EXISTS `orders`
        CREATE TABLE `orders`
        ...
    I'm using the mod_rewrite syntax in the sed command as a logical example of what I'm trying to do. Any suggestions?
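    For what it's worth, the same capture-and-reuse idea expressed with Python's re.sub (a sketch for comparison only; in sed itself the backreference syntax is \1 rather than $1, with groups written \(...\) in basic regular expressions or (...) under sed -E):
        import re

        line = "CREATE TABLE `orders` ("
        fixed = re.sub(
            r"CREATE TABLE `(.*?)`",
            r"DROP TABLE IF EXISTS `\1`;\nCREATE TABLE `\1`",
            line,
        )
        print(fixed)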

    Read the article

  • error exporting data using mysql workbench

    - by Rajneesh Rana
    Hi, I have been getting a warning of a version mismatch when trying to export a data dump using MySQL Workbench. So I copied mysqldump from the MySQL server folder and placed it in the Workbench folder. Now when I am trying to export data, I am getting the error:
        Operation failed with exitcode -1073741819
    Here is an entry from the log:
        16:31:25 Dumping wordpress (wp_posts)
        Running: "mysqldump.exe" --defaults-extra-file="c:\docume~1\rajneesh.r\locals~1\temp\1\tmpxau7tz" --no-create-info=FALSE --order-by-primary=FALSE --force=FALSE --no-data=FALSE --tz-utc=TRUE --flush-privileges=FALSE --compress=FALSE --replace=FALSE --host=localhost --insert-ignore=FALSE --extended-insert=TRUE --user=root --quote-names=TRUE --hex-blob=FALSE --complete-insert=FALSE --add-locks=TRUE --port=3306 --disable-keys=TRUE --delayed-insert=FALSE --create-options=TRUE --delete-master-logs=FALSE --comments=TRUE --default-character-set=utf8 --max_allowed_packet=1G --flush-logs=FALSE --dump-date=TRUE --lock-tables=TRUE --allow-keywords=FALSE --events=FALSE "wordpress" "wp_posts"
        Operation failed with exitcode -1073741819
    Please help me with these issues. Thank you.

    Read the article

  • tcpdump filter that excludes private ip traffic

    - by Kyle Brandt
    For a generic filter to exclude all traffic in my dump that is between private IP addresses, I came up with the following:
        sudo tcpdump -n ' (not (
            (src net 172.16.0.0/20 or src net 10.0.0.0/8 or src net 192.168.0.0/16)
            and
            (dst net 172.16.0.0/20 or dst net 10.0.0.0/8 or dst net 192.168.0.0/16)
        )) and (not (
            (dst net 172.16.0.0/20 or dst net 10.0.0.0/8 or dst net 192.168.0.0/16)
            and
            (src net 172.16.0.0/20 or src net 10.0.0.0/8 or src net 192.168.0.0/16)
        ))' -w test2.dump
    Seems pretty excessive, but it also seems to work. Is this filter a lot longer than it needs to be, and is there a better way to express this logic? Or is there anything wrong with the filter?

    Read the article

  • jetty crash trouble shooting

    - by user886356
    Recently I switched to Amazon EC2 + Jetty 9 + Oracle JDK7_u45 for cost saving. I found the Jetty server is very unstable. It crashes randomly without any JVM dump file. I tried to enable stdout with dumpBeforeStop=TRUE, but it won't append the dump messages to stderrout.log before the crash. It doesn't seem to be related to OutOfMemoryError, as I enabled the GC verbose options and found it still has plenty of available memory before the crash:
        : 162604K->3340K(176960K), 0.2240040 secs] 248332K->89101K(373568K), 0.2736860 secs] [Times: user=0.01 sys=0.01, real=0.28 secs]
    I tried to downgrade to Jetty 8 with different JDK combinations (JDK 6 / JDK 7). Still got the same problem. I tried to remove all JVM options and use "sudo java -jar start.jar" to run Jetty. It still crashes. Any other way to shoot the problem?

    Read the article

  • How to get more NFS packet details from Wireshark?

    - by Joe Swanson
    How can I get Wireshark to give me details about NFS packets at a fine level of granularity? Specifically, I am interested in looking at the "Stable" option toward the bottom of the NFS detail tree. When I analyze captured packets (whether captured directly via Wireshark, imported from a tshark dump, or imported from a tcpdump dump), I do not see a "Network File System" section in the packet details. I only get general TCP information. It recognizes that a packet is destined for an NFS port, but I am not able to see these details. Any ideas?

    Read the article
