Search Results

Search found 4528 results on 182 pages for 'michael paul'.

  • Sweave can't see a vector if run from a function?

    - by PaulHurleyuk
    I have a function that sets a vector to a string, copies a Sweave document with a new name and then runs that Sweave. Inside the Sweave document I want to use the vector I set in the function, but it doesn't seem to see it. (Edit: I changed this function to use tempdir() as suggested by Dirk.) I created a Sweave file test_sweave.rnw:

        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{Sweave}
        \begin{document}
        \title{Test Sweave Document}
        \author{gb02413}
        \maketitle
        <<>>=
        ls()
        Sys.time()
        print(paste("The chosen study was ",chstud,sep=""))
        @
        \end{document}

    and I have this function:

        onOK <- function(){
            chstud <- "test"
            message(paste("Chosen Study is ",chstud,sep=""))
            newfile <- paste(chstud,"_report",sep="")
            mypath <- paste(tempdir(),"\\",sep="")
            setwd(mypath)
            message(paste("Copying test_sweave.Rnw to ",
                          paste(mypath,newfile,".Rnw",sep=""),sep=""))
            file.copy("c:\\local\\test_sweave.Rnw",
                      paste(mypath,newfile,".Rnw",sep=""), overwrite=TRUE)
            Sweave(paste(mypath,newfile,".Rnw",sep=""))
            require(tools)
            texi2dvi(file = paste(mypath,newfile,".tex",sep=""), pdf = TRUE)
        }

    If I run the code from the function directly, the resulting file has this output for ls():

        > ls()
        [1] "chstud"  "mypath"  "newfile" "onOK"

    However, if I call onOK() I get this output:

        > ls()
        [1] "onOK"

    and the print(...chstud...) call generates an error. I suspect this is an environment problem, but I assumed that because the call to Sweave occurs within the onOK function, it would be in the same environment and would see all the objects created within the function. How can I get the Sweave process to see the chstud vector? Thanks, Paul.
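
    Sweave evaluates code chunks in the global environment, not in the frame of the function that called Sweave(), which is exactly what the ls() output above shows. A minimal sketch of one workaround, assuming a global assignment is acceptable (it will overwrite any existing global chstud):

        onOK <- function(){
            chstud <- "test"
            # Sweave chunks run in globalenv(), so export the value there
            # before kicking off the report:
            assign("chstud", chstud, envir = globalenv())
            # ... copy and Sweave the file as before ...
        }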

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor can be garbage collected after my code has finished using it. I have tried the suggestion in the answers from the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way?

    EDIT: I currently have the following test based on Paul Stovell's answer, which succeeds:

        [TestMethod]
        public void ReleaseTest()
        {
            WindsorContainer container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);

            Assert.AreEqual(0, ReleaseTester.refCount);
            var weakRef = new WeakReference(container.Resolve<ReleaseTester>());
            Assert.AreEqual(1, ReleaseTester.refCount);

            GC.Collect();
            GC.WaitForPendingFinalizers();

            Assert.AreEqual(0, ReleaseTester.refCount, "Component not released");
        }

        private class ReleaseTester
        {
            public static int refCount = 0;

            public ReleaseTester()
            {
                refCount++;
            }

            ~ReleaseTester()
            {
                refCount--;
            }
        }

    Am I right assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy?
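
    For what it's worth, the WeakReference approach above is the standard way to assert collectability. A minimal sketch of the general shape, independent of Windsor (names are illustrative, and it assumes a release build, since the debug JIT can keep locals alive until the end of the method):

        private static WeakReference GetWeakReference(Func<object> factory)
        {
            // The indirection matters: a strong local reference in the
            // test body would root the object and defeat the test.
            return new WeakReference(factory());
        }

        [TestMethod]
        public void InstanceCanBeGarbageCollected()
        {
            WeakReference weakRef = GetWeakReference(() => new object());

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();   // finalizable objects need a second pass

            Assert.IsFalse(weakRef.IsAlive, "Instance was kept alive");
        }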

  • Am I mocking this helper function right in my Django test?

    - by CppLearner
    lib.py:

        from django.core.urlresolvers import reverse

        def render_reverse(f, kwargs):
            """
            kwargs is a dictionary, usually of the form {'args': [cbid]}
            """
            return reverse(f, **kwargs)

    tests.py:

        from lib import render_reverse, print_ls

        class LibTest(unittest.TestCase):

            def test_render_reverse_is_correct(self):
                #with patch('webclient.apps.codebundles.lib.reverse') as mock_reverse:
                with patch('django.core.urlresolvers.reverse') as mock_reverse:
                    from lib import render_reverse
                    mock_f = MagicMock(name='f', return_value='dummy_views')
                    mock_kwargs = MagicMock(name='kwargs', return_value={'args': ['123']})
                    mock_reverse.return_value = '/natrium/cb/details/123'
                    response = render_reverse(mock_f(), mock_kwargs())
                self.assertTrue('/natrium/cb/details/' in response)

    But instead, I get:

        File "/var/lib/graphyte-webclient/graphyte-webenv/lib/python2.6/site-packages/django/core/urlresolvers.py", line 296, in reverse
            "arguments '%s' not found." % (lookup_view_s, args, kwargs))
        NoReverseMatch: Reverse for 'dummy_readfile' with arguments '('123',)' and keyword arguments '{}' not found.

    Why is it calling reverse instead of my mock_reverse? (It is looking up my urls.py!) The author of the Mock library, Michael Foord, did a video cast here (around 9:17), and in the example he passed the mock object request to the view function index. Furthermore, he patched Poll and assigned an expected return value. Isn't that what I am doing here? I patched reverse? Thanks.
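
    The patch target looks like the problem: lib.py binds reverse into its own namespace at import time (from django.core.urlresolvers import reverse), so patching django.core.urlresolvers.reverse replaces an attribute that lib never looks at again; the real reverse is still reachable through lib.reverse and dutifully consults urls.py. The usual fix is to patch the name where it is looked up. A sketch of the test body under that assumption:

        # Patch lib's copy of the name, not the module it was imported from:
        with patch('lib.reverse') as mock_reverse:
            mock_reverse.return_value = '/natrium/cb/details/123'
            response = render_reverse('dummy_views', {'args': ['123']})
        self.assertTrue('/natrium/cb/details/' in response)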

  • Xcode "Build and Archive" from command line

    - by Dan Fabulich
    Xcode 3.2 provides an awesome new feature under the Build menu, "Build and Archive", which generates an .ipa file suitable for Ad Hoc distribution. You can also open the Organizer, go to "Archived Applications," and "Submit Application to iTunesConnect."

    Is there a way to use "Build and Archive" from the command line (as part of a build script)? I'd assume that xcodebuild would be involved somehow, but the man page doesn't seem to say anything about this.

    UPDATE: Michael Grinich requested clarification; here's what exactly you can't do with command-line builds, features you can ONLY do with Xcode's Organizer after you "Build and Archive."

    You can click "Share Application..." to share your IPA with beta testers. As Guillaume points out below, due to some Xcode magic, this IPA file does not require a separately distributed .mobileprovision file that beta testers need to install; that's magical. No command-line script can do it. For example, Arrix's script (submitted May 1) does not meet that requirement.

    More importantly, after you've beta tested a build, you can click "Submit Application to iTunes Connect" to submit that EXACT same build to Apple, the very binary you tested, without rebuilding it. That's impossible from the command line, because signing the app is part of the build process; you can sign bits for Ad Hoc beta testing OR you can sign them for submission to the App Store, but not both. No IPA built on the command line can be beta tested on phones and then submitted directly to Apple.

    I'd love for someone to come along and prove me wrong: both of these features work great in the Xcode GUI and cannot be replicated from the command line.

  • How to get the second sub element of every element in a list in R

    - by PaulHurleyuk
    I know I've come across this problem before, but I'm having a bit of a mental block at the moment, and as I can't find it on SO, I'll post it here so I can find it next time.

    I have a dataframe that contains a field representing an ID label. This label has two parts, an alpha prefix and a numeric suffix. I want to split it apart and create two new fields with these values in.

        df <- structure(list(lab = c("N00", "N01", "N02", "B00", "B01", "B02",
                             "Z21", "BA01", "NA03")), .Names = "lab",
                        row.names = c(NA, -9L), class = "data.frame")
        df$pre <- strsplit(df$lab, "[0-9]+")
        df$suf <- strsplit(df$lab, "[A-Z]+")

    which gives:

           lab pre    suf
        1  N00   N  , 00
        2  N01   N  , 01
        3  N02   N  , 02
        4  B00   B  , 00
        5  B01   B  , 01
        6  B02   B  , 02
        7  Z21   Z  , 21
        8 BA01  BA  , 01
        9 NA03  NA  , 03

    So the first strsplit works fine, but the second gives a list whose elements each hold two items, an empty string and the result I want, and stuffs them both into the dataframe column. How can I select the second sub-element from each element of the list? (Or is there a better way to do this?) Thanks, Paul.
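
    Each element of the second strsplit result is c("", suffix), so the job is to take item 2 from every list element. A sketch of two options; the sub() route avoids the empty first element entirely:

        # Take the second piece of each split:
        df$suf <- sapply(strsplit(df$lab, "[A-Z]+"), `[`, 2)

        # Or strip instead of splitting:
        df$pre <- sub("[0-9]+$", "", df$lab)
        df$suf <- sub("^[A-Z]+", "", df$lab)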

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and development servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to Staging Server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to Production Server: Here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live, development server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config Files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing Data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost?

    I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
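
    For the production push, the same bare-repo-plus-hook arrangement is generally safe; the usual warnings are about pushing into a checked-out (non-bare) repository, which this setup avoids. A minimal sketch of the production hook, with a hypothetical work-tree path (checking out into a work tree also avoids re-cloning on every push):

        #!/bin/sh
        # hooks/post-receive on the production bare repo
        GIT_WORK_TREE=/var/www/production git checkout -f master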

  • Convert a raw string to an array of big-endian words with Ruby

    - by Zag zag..
    Hello, I would like to convert a raw string to an array of big-endian words. As an example, here is a JavaScript function that does it well (by Paul Johnston):

        /*
         * Convert a raw string to an array of big-endian words
         * Characters >255 have their high-byte silently ignored.
         */
        function rstr2binb(input)
        {
          var output = Array(input.length >> 2);
          for(var i = 0; i < output.length; i++)
            output[i] = 0;
          for(var i = 0; i < input.length * 8; i += 8)
            output[i>>5] |= (input.charCodeAt(i / 8) & 0xFF) << (24 - i % 32);
          return output;
        }

    I believe the Ruby equivalent can be String#unpack(format). However, I don't know what the correct format parameter should be. Thank you for any help. Regards
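
    If I'm reading the JavaScript right, the format you want is "N*": N is a 32-bit unsigned integer in network (big-endian) byte order, and * repeats it for the rest of the string. One caveat, sketched below: unpack("N*") only consumes whole 4-byte groups, whereas the JavaScript version keeps a zero-padded trailing partial word, so pad first if inputs may not be a multiple of 4 bytes:

        # "N" = 32-bit unsigned, network (big-endian) byte order
        def rstr2binb(input)
          padded = input + "\0" * ((4 - input.length % 4) % 4)
          padded.unpack("N*")
        end

        rstr2binb("abcd")   # => [1633837924], i.e. 0x61626364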

  • Problems using (building?) native gem extensions on OS X

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have Xcode 3.2.1 installed, with gcc 4.2.1. Ruby 1.8.6, because I'm lazy and a scaredy cat and don't want to upgrade yet. Ruby is running in 32-bit mode. I built this ruby from scratch when my MBP ran OS X 10.4.

    When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's Nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right, but gem install wasn't throwing any errors. So I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of 'foo' with different width due to prototype."

    I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my Xcode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OS X to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael

  • Ruby and public_method_defined? : strange behaviour

    - by aXon
    Hi there. Whilst reading through the book "The Well-Grounded Rubyist", I came across some strange behaviour. The idea behind the code is using one's own method_missing method. The only thing I am not able to grasp is why this code gets executed, as I do not have any Person.all_with_* class methods defined, which in turn means that self.public_method_defined?(attr) returns true (attr is friends and then hobbies).

        #!/usr/bin/env ruby1.9
        class Person
          PEOPLE = []
          attr_reader :name, :hobbies, :friends

          def initialize(mame)
            @name = name
            @hobbies = []
            @friends = []
            PEOPLE << self
          end

          def has_hobby(hobby)
            @hobbies << hobby
          end

          def has_friend(friend)
            @friends << friend
          end

          def self.method_missing(m, *args)
            method = m.to_s
            if method.start_with?("all_with_")
              attr = method[9..-1]
              if self.public_method_defined?(attr)
                PEOPLE.find_all do |person|
                  person.send(attr).include?(args[0])
                end
              else
                raise ArgumentError, "Can't find #{attr}"
              end
            else
              super
            end
          end
        end

        j = Person.new("John")
        p = Person.new("Paul")
        g = Person.new("George")
        r = Person.new("Ringo")
        j.has_friend(p)
        j.has_friend(g)
        g.has_friend(p)
        r.has_hobby("rings")

        Person.all_with_friends(p).each do |person|
          puts "#{person.name} is friends with #{p.name}"
        end
        Person.all_with_hobbies("rings").each do |person|
          puts "#{person.name} is into rings"
        end

    The output is:

         is friends with
         is friends with
         is into rings

    which is really understandable, as there is nothing to be executed.
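
    Two separate things are going on here. public_method_defined?("friends") returns true because attr_reader :friends defines a public instance method named friends; the check is against that reader, not against the synthesized all_with_* class methods, so the method_missing branch is behaving as designed. The blank names come from the initialize(mame) parameter typo: inside the body, name resolves to the attr_reader (still nil at that point), so every Person ends up with an empty name. A sketch of the one-character fix:

        def initialize(name)   # was: def initialize(mame)
          @name = name         # with the typo, `name` called the reader,
          @hobbies = []        # which still returned nil
          @friends = []
          PEOPLE << self
        end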

  • What do you mean by expressiveness in a programming language?

    - by prosseek
    I see the word 'expressiveness' a lot when people want to stress that one language is better than another, but I don't see exactly what they mean by it.

      • Is it verboseness/succinctness? I mean, if one language can write something down more shortly than another, does that mean expressiveness? Please refer to my other question: http://stackoverflow.com/questions/2411772/article-about-code-density-as-a-measure-of-programming-language-power

      • Is it the power of the language? Paul Graham says that one language is more powerful than another in the sense that one language can do things the other can't (for example, Lisp can do things with macros that other languages can't).

      • Is it just something that makes life easier? Regular expressions can be one example.

      • Is it a different way of solving the same problem, something like SQL for the search problem?

    What do you think about the expressiveness of a programming language? Can you show expressiveness using some code? And what is the relationship between expressiveness and DSLs? Do people come up with DSLs to get expressiveness?

  • Nokogiri changing custom elements

    - by dagda1
    Hi, I have sample HTML that I have marked up with some special tags that will be used by a different program. An example of the HTML is below; note the <START:organization>..<END> elements.

        <html>
          <head/>
          <body>
            <ul>
              <li>
                <START:organization> Advanced Integrated Pest Management <END>
              </li>
              <li>
                <START:organization> American Bakers Association <END>
              </li>
            </ul>
          </body>
        </html>

    I wanted to use nokogiri to preprocess the HTML to easily remove irrelevant tags like <script>. I created the following extension to the nokogiri Document class:

        module Nokogiri
          module HTML
            class Document
              def prepare_html
                xpath("//script").remove
                to_html.remove_new_lines
              end
            end
          end
        end

    The problem is that nokogiri is changing the <START:organization> element to <organization>. Is there any way that I can preserve the HTML so as to maintain my custom markup tags? Thanks Paul

  • Can I do this?? Trying to load an object from within itself.

    - by Smikey
    Hi all! I have an object which conforms to the NSCoding protocol. The object contains a method to save itself to memory, like so:

        - (void)saveToFile {
            NSMutableData *data = [[NSMutableData alloc] init];
            NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data];
            [archiver encodeObject:self forKey:kDataKey];
            [archiver finishEncoding];
            [data writeToFile:[self dataFilePath] atomically:YES];
            [archiver release];
            [data release];
        }

    This works just fine. But I would also like to initialise an empty version of this object and then call its own 'load' method to load whatever data exists on file into its variables. So I've created the following:

        - (void)loadFromFile {
            NSString *filePath = [self dataFilePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
                NSData *data = [[NSMutableData alloc] initWithContentsOfFile:[self dataFilePath]];
                NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
                self = [unarchiver decodeObjectForKey:kDataKey];
                [unarchiver finishDecoding];
            }
        }

    Now this second method doesn't manage to load any variables. Is this line not possible, perhaps?

        self = [unarchiver decodeObjectForKey:kDataKey];

    Ultimately I would like to use the code like this: one viewController takes user-entered input, creates an object and saves it to memory using [anObject saveToFile]; and a second viewController creates an empty object, then initialises its values to those stored on file by calling [emptyObject loadFromFile]. Any suggestions on how to make this work would be hugely appreciated. Thanks :) Michael
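
    Assigning to self inside an ordinary instance method only rebinds the method's local variable; the caller's pointer still references the old, empty object, which is why nothing appears to load. The conventional shape is a factory method on the class that returns the unarchived instance. A sketch in the same manual retain/release style (loadObjectFromFile: is a hypothetical name; kDataKey is the key used in saveToFile):

        + (id)loadObjectFromFile:(NSString *)filePath {
            if (![[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
                return nil;
            }
            NSData *data = [[NSData alloc] initWithContentsOfFile:filePath];
            NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
            // decodeObjectForKey: hands back the instance built by
            // initWithCoder:; return it rather than assigning to self.
            id obj = [[unarchiver decodeObjectForKey:kDataKey] retain];
            [unarchiver finishDecoding];
            [unarchiver release];
            [data release];
            return [obj autorelease];
        }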

  • What Getters and Setters should and shouldn't do.

    - by cyclotis04
    I've run into a lot of differing opinions on getters and setters lately, so I figured I should make it into its own question.

    A previous question of mine received an immediate comment (later deleted) stating that setters shouldn't have any side effects, and that a SetProperty method would be a better choice. Indeed, this seems to be Microsoft's opinion as well. However, their properties often raise events, such as Resized when a form's Width or Height property is set. OwenP also states "you shouldn't let a property throw exceptions, properties shouldn't have side effects, order shouldn't matter, and properties should return relatively quickly." Yet Michael Stum states that exceptions should be thrown while validating data within a setter.

    If your setter doesn't throw an exception, how could you effectively validate data, as so many of the answers to this question suggest? What about when you need to raise an event, like nearly all of Microsoft's controls do? Aren't you then at the mercy of whoever subscribed to your event? If their handler performs a massive amount of work, or throws an error itself, what happens to your setter? Finally, what about lazy loading within the getter? This too could violate the previous guidelines.

    What is acceptable to place in a getter or setter, and what should be kept in only accessor methods?
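
    As a concrete reference point for the trade-offs above, here is a sketch of the middle-ground convention that Windows Forms controls loosely follow (names are illustrative): validate cheaply and throw on clearly invalid input, skip no-op assignments, and raise the event only after the field has actually changed.

        private int width;

        public event EventHandler Resized;

        public int Width
        {
            get { return width; }
            set
            {
                if (value < 0)
                    throw new ArgumentOutOfRangeException("value");
                if (width == value)
                    return;             // no-op guard: no event, no work
                width = value;
                EventHandler handler = Resized;   // copy guards against
                if (handler != null)              // unsubscribe races
                    handler(this, EventArgs.Empty);
            }
        }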

  • Can ScalaCheck/Specs warnings safely be ignored when using SBT with ScalaTest?

    - by pdbartlett
    I have a simple FunSuite-based ScalaTest:

        package pdbartlett.hello_sbt

        import org.scalatest.FunSuite

        class SanityTest extends FunSuite {

          test("a simple test") {
            assert(true)
          }

          test("a very slightly more complicated test - purposely fails") {
            assert(42 === (6 * 9))
          }
        }

    which I'm running with the following SBT project config:

        import sbt._

        class HelloSbtProject(info: ProjectInfo) extends DefaultProject(info) {
          // Dummy action, just to show config working OK.
          lazy val solveQ = task { println("42"); None }

          // Managed dependencies
          val scalatest = "org.scalatest" % "scalatest" % "1.0" % "test"
        }

    However, when I run sbt test I get the following warnings:

        ...
        [info] == test-compile ==
        [info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
        [info] Compiling test sources...
        [info] Nothing to compile.
        [warn] Could not load superclass 'org.scalacheck.Properties' : java.lang.ClassNotFoundException: org.scalacheck.Properties
        [warn] Could not load superclass 'org.specs.Specification' : java.lang.ClassNotFoundException: org.specs.Specification
        [warn] Could not load superclass 'org.specs.Specification' : java.lang.ClassNotFoundException: org.specs.Specification
        [info] Post-analysis: 3 classes.
        [info] == test-compile ==

    For the moment I'm assuming these are just "noise" (caused by the unified test interface?) and that I can safely ignore them. But it is slightly annoying to some inner OCD part of me (though not so annoying that I'm prepared to add dependencies for the other frameworks).

    Is this a correct assumption, or are there subtle errors in my test/config code? If it is safe to ignore, is there any other way to suppress these warnings, or do people routinely include all three frameworks so they can pick and choose the best approach for different tests?

    TIA, Paul.

    (ADDED: scala v2.7.7 and sbt v0.7.4)

  • Google Maps API v3 - infoWindows all have same content

    - by paulriedel
    Hello, I've been having problems with infoWindows and Google Maps API v3. Initially, I ran into the problem everyone else has of closing infoWindows when opening a new one. I thought I had solved the problem by defining "infowindow" beforehand. Now they close when I click on a new marker, but the content is the same. How should I restructure my code to make sure the content is the right one each time, and only one infoWindow is open at a given time? Thank you! Paul

        var allLatLngs = new Array();
        var last = 0;
        var infowindow;

        function displayResults(start, count){
            if(start === undefined){
                start = last;
            }
            if(count === undefined){
                count = 20;
            }
            jQuery.each(jsresults, function(index, value) {
                if(index >= start && index < start+count){
                    var obj = jQuery.parseJSON(value);
                    $("#textresults").append(index + ": <strong>" + obj.name + "</strong> " + Math.round(obj.distanz*100)/100 + " km entfernt" + "<br/>");

                    var myLatlng = new google.maps.LatLng(obj.geo_lat, obj.geo_lon);
                    allLatLngs.push(myLatlng);

                    var contentString = '<strong>'+obj.name+'</strong>';
                    infowindow = new google.maps.InfoWindow({
                        content: contentString
                    });

                    var marker = new google.maps.Marker({
                        position: myLatlng,
                        //title:"Hello World!"
                    });
                    marker.setMap(map);

                    google.maps.event.addListener(marker, 'click', function() {
                        if (infowindow) {
                            infowindow.close(map,marker);
                        }
                        infowindow.open(map,marker);
                    });
                }
            });
            last = start+count;
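
    The content is always the same because infowindow is a single variable that every click handler closes over; by the time any marker is clicked, it holds the last InfoWindow created in the loop. One fix is to embrace the sharing: keep one InfoWindow for the whole map and set its content per marker, capturing each marker's own string in a closure. A sketch (attachInfo is a hypothetical helper):

        // One shared InfoWindow; setContent swaps the text per marker.
        var infowindow = new google.maps.InfoWindow();

        function attachInfo(marker, contentString) {
            google.maps.event.addListener(marker, 'click', function() {
                infowindow.setContent(contentString);
                infowindow.open(map, marker);
            });
        }

        // inside the jQuery.each loop, after creating each marker:
        attachInfo(marker, '<strong>' + obj.name + '</strong>');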

  • Android ListView: how to select an item?

    - by mmo
    I am having trouble with a ListView I created: I want an item to get selected when I click on it. My code for this looks like:

        protected void onResume() {
            ...
            ListView lv = getListView();
            lv.setOnItemSelectedListener(new OnItemSelectedListener() {
                public void onItemSelected(AdapterView<?> adapterView, View view, int pos, long id) {
                    Log.v(TAG, "onItemSelected(..., " + pos + ",...) => selected: " + getSelectedItemPosition());
                }
                public void onNothingSelected(AdapterView<?> adapterView) {
                    Log.v(TAG, "onNothingSelected(...) => selected: " + getSelectedItemPosition());
                }
            });
            lv.setOnItemClickListener(new OnItemClickListener() {
                public void onItemClick(AdapterView<?> adapterView, View view, int pos, long id) {
                    lv.setSelection(pos);
                    Log.v(TAG, "onItemClick(..., " + pos + ",...) => selected: " + getSelectedItemPosition());
                }
            });
            ...
        }

    When I run this and click e.g. on the second item (i.e. pos=1) I get:

        04-03 23:08:36.994: V/DisplayLists(663): onItemClick(..., 1,...) => selected: -1

    i.e. even though the OnItemClickListener is called with the proper argument and calls setSelection(1), there is no item selected (and hence OnItemSelectedListener.onItemSelected(...) is never called) and getSelectedItemPosition() still yields -1 after the setSelection(1) call. What am I missing? Michael

    PS: My list does have >= 2 elements...
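
    A sketch of what I believe is going on: setSelection() drives the trackball/D-pad selection state, which Android clears as soon as the screen is touched, so in touch mode the selection never sticks and the selected-item callbacks never fire. For a persistent, visible selection under touch, the usual route is the checked-item API instead:

        final ListView lv = getListView();
        lv.setChoiceMode(ListView.CHOICE_MODE_SINGLE);
        lv.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View view, int pos, long id) {
                lv.setItemChecked(pos, true);   // survives touch-mode clicks
                Log.v(TAG, "checked: " + lv.getCheckedItemPosition());
            }
        });

    For the checked state to be visible, the row layout has to be checkable, e.g. android.R.layout.simple_list_item_single_choice.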

  • routes as explained in RoR tutorial 2nd Ed?

    - by 7stud
    The author, Michael Hartl, says:

        Here the rule: get "static_pages/home" maps requests for the URI /static_pages/home to the home action in the StaticPages controller.

    How? The type of request is given, the URL is given, but where is the mapping to a controller and action? My tests all pass, though. I also tried deleting all the actions in the StaticPagesController, which just looks like this:

        class StaticPagesController < ApplicationController

          def home
          end

          def about
          end

          def help
          end

          def contact
          end
        end

    ...and my tests still pass, which is puzzling.

    The 2nd edition of the book (online) is really frustrating. Specifically, the section about making changes to the Guardfile is impossible to follow. For instance, if I instruct you to edit this file:

        blah blah blah
        dog dog dog
        beetle beetle beetle
        jump jump jump

    and make these changes:

        blah blah blah
        .
        .
        .
        go go go
        .
        .
        .
        jump jump jump

    ...would you have any idea where the line 'go go go' should be in the code? And the hint for exercise 3.5-1 is flat out wrong. If the author would put up a comment section at the end of every chapter, the rails community could self-edit the book.
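
    Two conventions do the work here. The one-argument route is shorthand: Rails splits the string on "/" and treats it as controller#action. And the tests keep passing with empty (or even deleted) actions because of implicit rendering: when the action body does nothing, or no matching action method exists at all, Rails still renders the template whose name matches, app/views/static_pages/home.html.erb in this case. A sketch of what the shorthand expands to (Rails 3.2-era syntax):

        # config/routes.rb
        get 'static_pages/home', :to => 'static_pages#home'
        # GET /static_pages/home  =>  StaticPagesController#home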

  • Border of single th spreads to neighboring th when colspan set on td row below

    - by Samuel Hapak
    Given the following HTML:

        <table>
          <tr><th>First</th><th class='second'>Second</th><th class='third'>Third</th><th>Fourth</th></tr>
          <tr><td>Mike</td><td colspan=2>John</td><td>Paul</td></tr>
        </table>

    and the following CSS:

        table {
            border-collapse: collapse;
        }
        td, th {
            border: 1px black solid;
        }
        td {
            border-top: none;
        }
        th {
            border-bottom: none;
        }
        th.second {
            border-bottom: 3px green solid;
        }
        th.third {
        }

    I would expect as a result one table with a 3px solid green line below the second th cell. Instead, in Chrome I have a solid green border below both the second and the third th cell. In Firefox the results are just as expected. Is this a browser bug, or is my code illegal? You can see the example at http://jsfiddle.net/tt6aP/3/

    PS: Try setting th.third { border-bottom: 2px solid red; } and then raising it to 3px. This is even more strange.

    Screenshots (not reproduced here): Expected, Chrome, Firefox.

  • Issue with class design to model user preferences for different classes

    - by Mulone
    Hi all, I'm not sure how to design a couple of classes in my app. Basically, this is the situation:

      • each user can have many preferences
      • each preference can refer to an object of different classes (e.g. album, film, book etc.)
      • the preference is expressed as a set of values (e.g. score, etc.)

    The problem is that many users can have preferences on the same objects, e.g.:

        John: score=5 for filmid=apocalypsenow
        Paul: score=3 for filmid=apocalypsenow

    and naturally I don't want to duplicate the film object in each user. So I could create a class called "Preference" holding a score and then a target object, something like:

        User {
          hasMany preferences
        }

        Preference {
          belongsTo User
          double score
          Film target
          Album target
          //etc
        }

    and then define just one target. Then I would create an interface for the target classes (album, film etc.):

        interface canBePreferred {
          hasMany preferences
        }

    and implement all of those classes. This could work, but it looks pretty ugly and it would require a lot of joins to work. Do you have some patterns I could use to model this nicely? Cheers, Mulone

  • Can't get node.js built on cygwin

    - by mwt
    Following the instructions here: https://github.com/ry/node/wiki/Building-node.js-on-Cygwin-(Windows)

    I've tried installing on two machines, either of which I'd be happy to get up and running.

    WinXP: On 'make', I get:

        Build failed:  -> task failed <err #2>: {task: libv8.a SConstruct -> libv8.a}

    According to the instructions, this is caused by having $SHELL set to a Windows-style path, but I've set it to /bin/bash and get the same error.

    Win7: On './configure', I get:

        $ ./configure
        Checking for program g++ or c++          : /usr/bin/g++
        Checking for program cpp                 : /usr/bin/cpp
        Checking for program ar                  : /usr/bin/ar
        Checking for program ranlib              : /usr/bin/ranlib
        Checking for g++                         : ok
        Checking for program gcc or cc           : /usr/bin/gcc
        0 [main] python 1092 C:\bin\python.exe: *** fatal error - unable to remap \\?\C:\lib\python2.6\lib-dynload\_functools.dll to same address as parent: 0x360000 != 0x3E0000
        Stack trace:
        Frame     Function  Args
        002891E8  6102749B  (002891E8, 00000000, 00000000, 00000000)
        002894D8  6102749B  (61177B80, 00008000, 00000000, 61179977)
        0028A508  61004AFB  (611A136C, 61241CF4, 00360000, 003E0000)
        End of stack trace
        0 [main] python 3536 fork: child 1092 - died waiting for dll loading, errno 11
        /Users/Michael/Desktop/node/wscript:177: error: could not configure a c compiler!

    I've run 'rebaseall' and restarted the machine but still get that error.

    Edit: Ok, rebaseall was apparently erroring on some mingw stuff, so I edited the rebaseall script to fix that, and now it configures on Win7. The new problem is that it emits the exact same error as my XP machine when I try to make. This is on tag v0.3.5.

  • casting vs using the 'as' keyword in the CLR

    - by Frank V
    I'm learning about design patterns and because of that I've ended up using a lot of interfaces. One of my "goals" is to program to an interface, not an implementation. What I've found is that I'm doing a lot of casting or object type conversion. What I'd like to know is if there is a difference between these two methods of conversion:

        public interface IMyInterface
        {
            void AMethod();
        }

        public class MyClass : IMyInterface
        {
            public void AMethod()
            {
                //Do work
            }

            // other helper methods....
        }

        public class Implementation
        {
            IMyInterface _MyObj;
            MyClass _myCls1;
            MyClass _myCls2;

            public Implementation()
            {
                _MyObj = new MyClass();

                // What is the difference here:
                _myCls1 = (MyClass)_MyObj;
                _myCls2 = (_MyObj as MyClass);
            }
        }

    If there is a difference, is there a cost difference, and how does this affect my program? Hopefully this makes sense. Sorry for the bad example; it is all I could think of...

    Update: What is "in general" the preferred method? (I had a question similar to this posted in the 'answers'. I moved it up here at the suggestion of Michael Haren. Also, I want to thank everyone who's provided insight and perspective on my question.)
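
    The two forms differ only when the conversion fails, which also drives the usual convention: cast when a failure would be a bug (fail fast with an exception), and use as plus a null check when the object legitimately may not be of that type. A sketch:

        _myCls1 = (MyClass)_MyObj;      // throws InvalidCastException on failure
        _myCls2 = _MyObj as MyClass;    // yields null on failure;
                                        // reference/nullable types only

        if (_myCls2 != null)
        {
            _myCls2.AMethod();          // the null check replaces a try/catch
        }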

  • User-defined margins in WPF printing

    - by MTR
    Most printing samples for WPF go like this:

        PrintDialog dialog = new PrintDialog();
        if (dialog.ShowDialog() == true)
        {
            StackPanel myPanel = new StackPanel();
            myPanel.Margin = new Thickness(15);

            Image myImage = new Image();
            myImage.Width = dialog.PrintableAreaWidth;
            myImage.Stretch = Stretch.Uniform;
            myImage.Source = new BitmapImage(new Uri("pack://application:,,,/Images/picture.bmp"));
            myPanel.Children.Add(myImage);

            myPanel.Measure(new Size(dialog.PrintableAreaWidth, dialog.PrintableAreaHeight));
            myPanel.Arrange(new Rect(new Point(0, 0), myPanel.DesiredSize));

            dialog.PrintVisual(myPanel, "A Great Image.");
        }

    What I don't like about this is that they always set the margin to a fixed value. But in PrintDialog the user has the option to choose an individual margin that no sample cares about. If the user now selects a margin that is larger than the fixed margin set by the program, the printout is truncated.

    Is there a way to get the user-selected margin value from PrintDialog? TIA, Michael
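
    As far as I can tell, PrintDialog itself doesn't expose a user-margin property; the closest query System.Printing offers is the imageable area the driver reports for the chosen queue and ticket. A sketch, under the assumption that the user's page setup is folded into the ticket's capabilities:

        // using System.Printing;
        PrintCapabilities caps = dialog.PrintQueue.GetPrintCapabilities(dialog.PrintTicket);
        PageImageableArea area = caps.PageImageableArea;
        if (area != null)
        {
            // Derive margins from the printable region rather than
            // hard-coding Thickness(15):
            myPanel.Margin = new Thickness(
                area.OriginWidth,
                area.OriginHeight,
                dialog.PrintableAreaWidth - area.OriginWidth - area.ExtentWidth,
                dialog.PrintableAreaHeight - area.OriginHeight - area.ExtentHeight);
        }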

  • dojo.parser.parse only working first time it's called.

    - by crazymerlin
    I have a page where, when a user clicks on a link for some reporting tools, it first asks them to enter some report parameters. I get the parameters dialog as a form using AJAX, based on the id of the link. Each dialog has some dojo controls on it, so I need to parse them when the dialog appears, because it is not originally part of the page. The first dialog called works fine, but subsequent calls to dialogs fail to parse the dojo controls. Example:

        showParametersDialog : function(doc) {
            var content = doc.firstChild.firstChild.data;
            var container = document.createElement('div');
            container.id = 'dialog';
            container.innerHTML = content;
            container.style.background = 'transparent';
            container.style.position = 'absolute';
            container.style.top = (document.body.clientHeight / 2) - 124 + "px";
            container.style.left = (document.body.clientWidth / 2) - 133 + "px";
            container.style.display = 'block';
            document.body.appendChild(container);

            // set up date fields
            var date_from = dojo.byId('date_from');
            var date_to = dojo.byId('date_to');
            try {
                date_from.value = dojo.date.locale.format(new Date(), {selector: 'date'});
                date_to.value = dojo.date.locale.format(new Date(), {selector: 'date'});
            } catch(e) {
                var now = new Date();
                date_from.value = String(now.getMonth() + "/" + now.getDate() + "/" + now.getFullYear());
                date_to.value = String(now.getMonth() + "/" + now.getDate() + "/" + now.getFullYear());
            }

            dojo.parser.parse();
        }

    All dialogs have the common date fields. So when I call this dialog the first time, dojo.parser.parse() parses the controls on the dialog, but only the first time... after that, no dojo. Any thoughts? Thanks, Paul.
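
    A parameterless dojo.parser.parse() re-scans the whole document, so the second call runs into the widgets the first call already instantiated; dijit registers widgets by id, and the duplicate-id collision stops the parse before the new controls are reached. A sketch of one way around it, assuming Dojo 1.4+ for dijit.findWidgets:

        // Before building the new dialog, tear down the previous one and
        // its widgets so the ids ('dialog', 'date_from', ...) can be reused:
        var oldDialog = dojo.byId('dialog');
        if (oldDialog) {
            dojo.forEach(dijit.findWidgets(oldDialog), function(w) {
                w.destroyRecursive(false);
            });
            oldDialog.parentNode.removeChild(oldDialog);
        }

        // ... create and append the new container as before, then parse
        // just the new subtree instead of the whole page:
        dojo.parser.parse(container);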

  • OpenGL Mipmapping: how does OpenGL decide on map level?

    - by Droozle
    Hi, I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps. The following code is the original texture creation code from the class (slightly modified for clarity):

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glDisable(texData.textureTarget);

    This is my version with mipmap support:

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glDisable(texData.textureTarget);

    The code does not generate errors (gluBuild2DMipmaps returns 0) and the textures are rendered without problems. However, I do not see any difference. The scene I render consists of "flat, square tiles" at z=0; it's basically a 2D scene. I zoom in and out by using glScale() before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s.

    My question is: since I do not move the camera position, but only use scaling of the whole scene, does this mean OpenGL cannot decide on the appropriate mipmap level and uses the full texture size (level 0)? Paul
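
    To the direct question: mipmap level selection is driven by the screen-space derivatives of the texture coordinates, so a glScale() zoom does change the level OpenGL picks; scaling is not the problem. Two other things are worth checking. ofTexture has historically defaulted to GL_TEXTURE_RECTANGLE_ARB, and rectangle textures do not support mipmaps; and GL_TEXTURE_MAG_FILTER only accepts GL_NEAREST or GL_LINEAR, so a mipmap mode there is an invalid enum. A sketch, assuming the rectangle-texture default is the culprit:

        // Force plain GL_TEXTURE_2D before allocating textures so mipmaps
        // apply (openFrameworks call; the assumption is that the default
        // target is the ARB rectangle one, which ignores mipmaps):
        ofDisableArbTex();

        // The MAG filter cannot use mipmap modes; only MIN can:
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);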

  • How to detect a Socket disconnection?

    - by AngryHacker
    I've implemented a task using the async Sockets pattern in Silverlight 3. I started with Michael Schwarz's implementation and built on top of that. So basically, my Silverlight app establishes a persistent socket connection to a device, and then data flows both ways as necessary between the device and the Silverlight app.

    One thing I am struggling with is how to detect disconnection. I can think of two approaches:

      • Keep-alive. I know this can be done at the sockets level, but I am not sure how to do it in an async model. How would the Socket class let me know there has been a disconnection?

      • Manual keep-alive. Basically, the Silverlight app sends a dummy packet every 20 seconds or so; if it fails, I'd assume disconnection. However, incredibly, SocketAsyncEventArgs.SocketError always reports Success, even if I simply unplug the device that the Silverlight app is connected to. I am not sure whether this is a bug, or whether I need to upgrade to SL4.

    Any ideas, direction or implementation would be appreciated.
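
    For what it's worth, a peer that vanishes without closing (cable pulled, device powered off) sends nothing at all, so TCP has no error to report until a send or a timeout forces the issue; that is why the periodic dummy write is the standard technique, and why the unplug is not reported immediately. The failure also tends to surface on a later completed operation rather than at call time. A sketch of the usual detection points in the receive completion callback (HandleDisconnect and ProcessData are hypothetical):

        void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
        {
            if (e.SocketError != SocketError.Success)
            {
                HandleDisconnect();     // hard error (reset, abort, ...)
            }
            else if (e.BytesTransferred == 0)
            {
                HandleDisconnect();     // graceful close by the peer
            }
            else
            {
                ProcessData(e);         // normal data; re-post the receive
            }
        }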