Search Results

Search found 16166 results on 647 pages for 'conexant high def audio'.

  • Show image with link based on first directory with Javascript

    - by Nestorete
    I need some help, please: I want to show an image on my page depending on the first directory of the URL. For example, any of these URLs should show image1.jpg:

        www.mysite.com/audio/amplifiers/400wats.html
        www.mysite.com/audio/
        www.mysite.com/audio/amplifiers/

    Any of these others should show image2.jpg:

        www.mysite.com/video/spots/40wats.html
        www.mysite.com/video/amplifiers/400wats.html
        www.mysite.com/video/lighting/laser.html
        www.mysite.com/video/laser/

    At the moment I can show the image only if the URL is exactly the first directory, but not for internal directories or documents. This is the script that I'm using right now:

        <script type="text/javascript">
        switch (location.pathname) {
            case "/audio/":
                document.write("From Web<BR>")
                break
            case "/video/":
                document.write('<A HREF="slides.htm" target="_blank"><IMG SRC="/adman/banners/joinvip.gif" WIDTH=728 HEIGHT=90 BORDER=0></A>')
                break
            default:
                document.write('<A HREF="http://www.apple.com" target="_blank"><IMG SRC="http://www.amd.com/us-en/assets/content_type/DownloadableAssets/NEW_PIB_728x90.gif" WIDTH=728 HEIGHT=90 BORDER=0></A>')
                break
        }
        </script>

    Thank you

  • Getting Argument Names In Ruby Reflection

    - by Joe Soul-bringer
    I would like to do some fairly heavy-duty reflection in the Ruby programming language. I would like to create a function which would return the names of the arguments of various calling functions higher up the call stack (just one higher would be enough, but why stop there?). I could use Kernel.caller, go to the file, and parse the argument list, but that would be ugly and unreliable.

    The function that I would like would work in the following way:

        module A
          def method1( tuti, fruity)
            foo
          end
          def method2(bim, bam, boom)
            foo
          end
          def foo
            print caller_args[1].join(",") # the "1" means one step up the call stack
          end
        end

        A.method1 # prints "tuti,fruity"
        A.method2 # prints "bim, bam, boom"

    I would not mind using ParseTree or some similar tool for this task, but looking at ParseTree, it is not obvious how to use it for this purpose. Creating a C extension like this is another possibility, but it would be nice if someone had already done it for me.

    Edit 2: I can see that I'll probably need some kind of C extension. I suppose that means my question is what combination of C extension would work most easily. I don't think caller plus ParseTree would be enough by themselves.

    As far as why I would like to do this goes, rather than saying "automatic debugging", perhaps I should say that I would like to use this functionality to do automatic checking of the calling and return conditions of functions. Say:

        def add x, y
          check_positive
          return x + y
        end

    where check_positive would throw an exception if x and y weren't positive (obviously, there would be more to it than that, but hopefully this gives enough motivation).

  • Django: how to cleanup form fields and avoid code duplication

    - by Alexander Konstantinov
    Quite often I need to filter some form data before using it (saving to the database etc.). Let's say I want to strip whitespace and replace repeating whitespace with a single space in most of the text fields, in many forms. It's not difficult to do this using clean_<fieldname> methods:

        # Simplified model with two text fields
        class MyModel(models.Model):
            title = models.CharField()
            description = models.CharField()

        # Model-based form
        class MyForm(forms.ModelForm):
            class Meta:
                model = MyModel

            def clean_title(self):
                title = self.cleaned_data['title']
                return re.sub(r'\s{2,}', ' ', title.strip())

            def clean_description(self):
                description = self.cleaned_data['description']
                return re.sub(r'\s{2,}', ' ', description.strip())

    It does exactly what I need, and has a nice side effect which I like: if the user enters only whitespace, the field will be considered empty and therefore invalid (if it is required), and I don't even have to throw a ValidationError. The obvious problem here is code duplication. Even if I create some function for that, say my_text_filter, I'll have to call it for every text field in all my forms:

        from myproject.filters import my_text_filter

        class MyForm(forms.ModelForm):
            class Meta:
                model = MyModel

            def clean_title(self):
                return my_text_filter(self.cleaned_data['title'])

            def clean_description(self):
                return my_text_filter(self.cleaned_data['description'])

    The question: is there any standard and simple way in Django (I use version 1.2, if that matters) to do this (for example, by adding a property validators = {'title': my_text_filter, 'description': my_text_filter} to MyModel), or at least some more or less standard workaround? I've read about form validation and validators in the documentation, but couldn't find what I need there.
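    One way to avoid repeating the clean_<fieldname> methods is a small form mixin that post-processes every string value in cleaned_data. This is only a sketch, not a built-in Django facility; the mixin name and the choice to touch every string field are assumptions of mine:

        import re
        from django import forms

        class WhitespaceCleanupMixin(object):
            """Strip and collapse whitespace in every string field after normal cleaning."""

            def clean(self):
                cleaned_data = super(WhitespaceCleanupMixin, self).clean()
                for name, value in cleaned_data.items():
                    if isinstance(value, basestring):  # use str on Python 3
                        cleaned_data[name] = re.sub(r'\s{2,}', ' ', value.strip())
                return cleaned_data

        class MyCleanForm(WhitespaceCleanupMixin, forms.ModelForm):
            class Meta:
                model = MyModel

    Note that, unlike per-field clean_* methods, this runs after the required-field checks, so a whitespace-only value would no longer fail the "required" validation the way the original approach does.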

  • How test that ASP.NET MVC route redirects to other site?

    - by Matt Lacey
    Due to a printing error in some promotional material, I have a site that is receiving a lot of requests which should be for one site arriving at another. That is, the valid sites are http://site1.com/abc and http://site2.com/def, but people are being told to go to http://site1.com/def. I have control over site1 but not site2. site1 contains logic for checking that the first part of the route is valid, in an action filter like this:

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            base.OnActionExecuting(filterContext);

            if ((!filterContext.ActionParameters.ContainsKey("id"))
                || (!manager.WhiteLabelExists(filterContext.ActionParameters["id"].ToString())))
            {
                if (filterContext.ActionParameters["id"].ToString().ToLowerInvariant().Equals("def"))
                {
                    filterContext.HttpContext.Response.Redirect("http://site2.com/def", true);
                }

                filterContext.Result = new ViewResult { ViewName = "NoWhiteLabel" };
                filterContext.HttpContext.Response.Clear();
            }
        }

    I'm not sure how to test the redirection to the other site, though. I already have tests for redirecting to "NoWhiteLabel" using the MvcContrib Test Helpers, but these aren't able to handle (as far as I can see) this situation. How do I test the redirection to another site?

  • Swig: No Constructor defined

    - by wheaties
    I added %allowexcept to my *.i file when building a Python <-- C++ bridge using SWIG. Then I removed that preprocessing directive. Now I can't get the Python code that SWIG produces to recognize the constructor of a C++ class. Here's the class file:

        #include <exception>

        class Swig_Foo{
        public:
            Swig_Foo(){}
            virtual ~Swig_Foo(){}
            virtual getInt() =0;
            virtual throwException() { throw std::exception(); }
        };

    And here's the code SWIG produces from it:

        class Swig_Foo:
            __swig_setmethods__ = {}
            __setattr__ = lambda self, name, value: _swig_setattr(self, Swig_Foo, name, value)
            __swig_getmethods__ = {}
            __getattr__ = lambda self, name: _swig_getattr(self, Swig_Foo, name)
            def __init__(self): raise AttributeError, "No constructor defined"
            __repr__ = _swig_repr
            __swig_destroy__ = _foo.delete_Swig_Foo
            __del__ = lambda self : None;
            def getInt(*args): return apply(_foo.Swig_Foo_getInt, args)
            def throwOut(*args): return apply(_foo.Swig_Foo_throwOut, args)
        Swig_Foo_swigregister = _foo.Swig_Foo_swigregister
        Swig_Foo_swigregister(Swig_Foo)

    The problem is the def __init__(self): raise AttributeError, "No constructor defined" portion. It never did this before I added the directive, and now that I've removed it, I can't get it to create a real constructor. All the other classes have actual constructors. Quite baffled. Anyone know how I can get it to stop doing this?

  • Unit testing and mocking email sender in Python with Google AppEngine

    - by CVertex
    I'm a newbie to Python and App Engine. I have this code that sends an email based on request params, after some auth logic. In my unit tests (I'm using GAEUnit), how do I confirm that an email with specific contents was sent? In other words, how do I mock the emailer with a fake emailer to verify that send was called?

        class EmailHandler(webapp.RequestHandler):
            def bad_input(self):
                self.response.set_status(400)
                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write("<html><body>bad input </body></html>")

            def get(self):
                to_addr = self.request.get("to")
                subj = self.request.get("subject")
                msg = self.request.get("body")

                if not mail.is_email_valid(to_addr):
                    # Return an error message...
                    # self.bad_input()
                    pass

                # authenticate here

                message = mail.EmailMessage()
                message.sender = "[email protected]"
                message.to = to_addr
                message.subject = subj
                message.body = msg
                message.send()

                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write("<html><body>success!</body></html>")

    And the unit tests:

        import unittest
        from webtest import TestApp
        from google.appengine.ext import webapp
        from email import EmailHandler

        class SendingEmails(unittest.TestCase):

            def setUp(self):
                self.application = webapp.WSGIApplication([('/', EmailHandler)], debug=True)

            def test_success(self):
                app = TestApp(self.application)
                response = app.get('http://localhost:8080/[email protected]&body=blah_blah_blah&subject=mySubject')
                self.assertEqual('200 OK', response.status)
                self.assertTrue('success' in response)
                # somehow, assert email was sent
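    One common approach is to monkey-patch the send method on mail.EmailMessage inside the test and record what it was called with, so nothing is actually delivered. This is a sketch of my own, not a GAEUnit feature; the sent_messages list and fake_send helper are illustrative names:

        from google.appengine.api import mail

        def test_email_is_sent(self):
            sent_messages = []
            original_send = mail.EmailMessage.send

            def fake_send(message):
                # Record the message instead of delivering it.
                sent_messages.append(message)

            mail.EmailMessage.send = fake_send
            try:
                app = TestApp(self.application)
                app.get('/?to=someone%40example.com&subject=mySubject&body=hello')
                self.assertEqual(1, len(sent_messages))
                self.assertEqual('mySubject', sent_messages[0].subject)
                self.assertEqual('hello', sent_messages[0].body)
            finally:
                mail.EmailMessage.send = original_send  # always restore the real method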

  • Ruby-on-rails: routing problem: controller action looks for show when it should look for finalize

    - by cbrulak
    Background: I'm trying to use the twitter gem with Ruby on Rails. In routes.rb:

        map.resources :twitter_sessions
        map.finalize_twitter_sessions 'twitter_sessions/finalize',
          :controller => 'twitter_sessions', :action => 'finalize'

    (twitter_sessions is the controller for the Twitter sessions in my app). The view has one file, new.html.erb, and is very simple:

        <% form_tag(twitter_sessions_path) do |f| %>
          <p><%= submit_tag "twitter!" %></p>
        <% end %>

    And twitter_sessions_controller.rb:

        def new
        end

        def create
          oauth.set_callback_url(finalize_twitter_sessions_url)
          session['rtoken'] = oauth.request_token.token
          session['rsecret'] = oauth.request_token.secret
          redirect_to oauth.request_token.authorize_url
        end

        def destroy
          reset_session
          redirect_to new_session_path
        end

        def finalize
          oauth.authorize_from_request(session['rtoken'], session['rsecret'], params[:oauth_verifier])
          profile = Twitter::Base.new(oauth).verify_credentials
          session['rtoken'] = session['rsecret'] = nil
          session[:atoken] = oauth.access_token.token
          session[:asecret] = oauth.access_token.secret
          sign_in(profile)
          redirect_back_or root_path
        end

    However, after I click the "twitter" button, I get this error:

        401 Unauthorized
        .../gems/oauth-0.3.6/lib/oauth/consumer.rb:200:in `token_request'
        .../gems/oauth-0.3.6/lib/oauth/consumer.rb:128:in `get_request_token'
        .../gems/twitter-0.9.2/lib/twitter/oauth.rb:32:in `request_token'
        .../gems/twitter-0.9.2/lib/twitter/oauth.rb:25:in `set_callback_url'
        app/controllers/twitter_sessions_controller.rb:7:in `create'

    If I go to the finalize URL, http://localhost:3000/twitter_sessions/finalize, directly, I get this error:

        Unknown action
        No action responded to show. Actions: create, destroy, finalize, isLoggedInToBeta, login_required, and new

    Any ideas? Thanks

  • Is it appropriate to use Django signals within the same app

    - by Alex Lebedev
    I'm trying to add email notification to my app in the cleanest way possible. When certain fields of a model change, the app should send a notification to a user. Here's my old solution:

        from django.contrib.auth import User

        class MyModel(models.Model):
            user = models.ForeignKey(User)
            field_a = models.CharField()
            field_b = models.CharField()

            def save(self, *args, **kwargs):
                old = self.__class__.objects.get(pk=self.pk) if self.pk else None
                super(MyModel, self).save(*args, **kwargs)
                if old and old.field_b != self.field_b:
                    self.notify("b-changed")
                # Several more events here
                # ...

            def notify(self, event):
                subj, text = self._prepare_notification(event)
                send_mail(subj, body, settings.DEFAULT_FROM_EMAIL, [self.user.email], fail_silently=True)

    This worked fine while I had one or two notification types, but after that it just felt wrong to have so much code in my save() method. So I changed the code to be signal-based:

        from django.db.models import signals

        def remember_old(sender, instance, **kwargs):
            """pre_save handler to save clean copy of original record into `old` attribute"""
            instance.old = None
            if instance.pk:
                try:
                    instance.old = sender.objects.get(pk=instance.pk)
                except ObjectDoesNotExist:
                    pass

        def on_mymodel_save(sender, instance, created, **kwargs):
            old = instance.old
            if old and old.field_b != instance.field_b:
                self.notify("b-changed")
            # Several more events here
            # ...

        signals.pre_save.connect(remember_old, sender=MyModel, dispatch_uid="mymodel-remember-old")
        signals.post_save.connect(on_mymodel_save, sender=MyModel, dispatch_uid="mymodel-on-save")

    The benefit is that I can separate the event handlers into a different module, reducing the size of models.py, and I can enable/disable them individually. The downside is that this solution is more code, and the signal handlers are separated from the model itself, so an unknowing reader can miss them altogether. So, colleagues, do you think it's worth it?

  • cannot override a concrete member without a third member that's overridden by both

    - by huynhjl
    What does the following error message mean?

        cannot override a concrete member without a third member that's overridden by both
        (this rule is designed to prevent ``accidental overrides'');

    I was trying to do stackable trait modifications. It's a little bit after the fact, since I already have a hierarchy in place and I'm trying to modify the behavior without having to rewrite a lot of code. I have a base class called AbstractProcessor that defines an abstract method, sort of like this:

        class AbstractProcessor {
          def onPush(i:Info): Unit
        }

    I have a couple of existing traits, to implement different onPush behaviors:

        trait Pass1 {
          def onPush(i:Info): Unit = { ... }
        }

        trait Pass2 {
          def onPush(i:Info): Unit = { ... }
        }

    So that allows me to use new AbstractProcessor with Pass1 or new AbstractProcessor with Pass2. Now I would like to do some processing before and after the onPush call in Pass1 and Pass2, while minimizing code changes to AbstractProcessor, Pass1 and Pass2. I thought of creating a trait that does something like this:

        trait Custom extends AbstractProcessor {
          abstract override def onPush(i:Info): Unit = {
            // do stuff before
            super.onPush(i)
            // do stuff after
          }
        }

    And using it with new AbstractProcessor with Pass1 with Custom, and I got that error message.

  • Authentication Problem - not recognizing 'else' - Ruby on rails...

    - by bgadoci
    I can't seem to figure out what I am doing wrong here. I have implemented the Super Simple Authentication from Ryan Bates' tutorial, and while the login portion is functioning correctly, I can't get an error message and redirect to happen correctly for a bad login. Ryan Bates admits in his comments that he left this out, but I can't seem to implement his recommendation. Basically what is happening is that when someone logs in correctly, it works. When a bad password is entered, it does the same redirect and flashes 'Successfully logged in' even though they are not. The admin links do not show (which is correct; those are the links protected by the <% if admin? %>), but I need it to say 'failed login' and redirect to login_path. Here is my code:

    SessionsController:

        class SessionsController < ApplicationController
          def create
            if session[:password] = params[:password]
              flash[:notice] = 'Successfully logged in'
              redirect_to posts_path
            else
              flash[:notice] = "whoops"
              redirect_to login_path
            end
          end

          def destroy
            reset_session
            flash[:notice] = 'Successfully logged out'
            redirect_to posts_path
          end
        end

    ApplicationController:

        class ApplicationController < ActionController::Base
          helper_method :admin?

          protected

          def authorize
            unless admin?
              flash[:error] = "unauthorized request"
              redirect_to posts_path
              false
            end
          end

          def admin?
            session[:password] == "123456"
          end

          helper :all # include all helpers, all the time
          protect_from_forgery # See ActionController::RequestForgeryProtection for details
        end

  • When to use "property" builtin: auxiliary functions and generators

    - by Seth Johnson
    I recently discovered Python's property built-in, which disguises class method getters and setters as a class's property. I'm now being tempted to use it in ways that I'm pretty sure are inappropriate.

    Using the property keyword is clearly the right thing to do if class A has a property _x whose allowable values you want to restrict; i.e., it would replace the getX() and setX() construction one might write in C++. But where else is it appropriate to make a function a property? For example, if you have

        class Vertex(object):
            def __init__(self):
                self.x = 0.0
                self.y = 1.0

        class Polygon(object):
            def __init__(self, list_of_vertices):
                self.vertices = list_of_vertices

            def get_vertex_positions(self):
                return zip( *( (v.x,v.y) for v in self.vertices ) )

    is it appropriate to add

        vertex_positions = property( get_vertex_positions )

    ? Is it ever ok to make a generator look like a property? Imagine if a change in our code meant that we no longer stored Polygon.vertices the same way. Would it then be ok to add this to Polygon?

        @property
        def vertices(self):
            for v in self._new_v_thing:
                yield v.calculate_equivalent_vertex()
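    A small sketch of the tradeoff being asked about: a property is recomputed on every access, so it always reflects the current vertex list, whereas a value computed once in __init__ can go stale. This is only an illustration of the pattern from the question, not a recommendation:

        class Polygon(object):
            def __init__(self, list_of_vertices):
                self.vertices = list_of_vertices

            @property
            def vertex_positions(self):
                # Recomputed on each access, so it tracks changes to self.vertices.
                return zip(*((v.x, v.y) for v in self.vertices))

    One caution with the generator-as-property idea: each access would return a fresh, single-use iterator, which can surprise callers who expect a list-like value.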

  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge sorting algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see everything is OK and I should get a nice N*logN curve as a result, but the picture differs a bit: the timing curve has unexpected "jumps" in it. Things I've tried to investigate the issue:

    - PyPy. The curve is OK.
    - Disabled the GC using the gc module. Wrong guess: debug output showed that it doesn't even run until the end of the test.
    - Memory profiling using meliae: nothing special or suspicious.
    - I had another implementation (a recursive one using the same merge function); it acts in a similar way.

    The more full test cycles I create, the more "jumps" there are in the curve. So how can this behaviour be explained and, hopefully, fixed?

    UPD: changed lists to collections.deque
    UPD2: added the full test code
    UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2 GHz notebook. I tried to turn off most other processes: the number of spikes went down, but at least one of them was still there.
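    As a side note on measurement, one way to damp timer noise from the OS scheduler and other processes is to take the best of several repeats with the standard timeit module. This is only a measurement sketch (the time_sort helper is my own name), not an explanation of the curve:

        import random
        import timeit

        def time_sort(sort_fn, data, loops=100, repeats=5):
            """Return the best-of-N total time for sorting `data`, damping one-off spikes."""
            timer = timeit.Timer(lambda: sort_fn(list(data)))
            return min(timer.repeat(repeat=repeats, number=loops))

        test_data = random.sample(range(100000), 1000)
        print(time_sort(sort, test_data[:500]))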

  • Replace textfields with dropdown select fields

    - by 47
    I have three model classes that look as below:

        class Model(models.Model):
            model = models.CharField(max_length=20, blank=False)
            manufacturer = models.ForeignKey(Manufacturer)
            date_added = models.DateField(default=datetime.today)

            def __unicode__(self):
                name = ''+str(self.manufacturer)+" "+str(self.model)
                return name

        class Series(models.Model):
            series = models.CharField(max_length=20, blank=True, null=True)
            model = models.ForeignKey(Model)
            date_added = models.DateField(default=datetime.today)

            def __unicode__(self):
                name = str(self.model)+" "+str(self.series)
                return name

        class Manufacturer(models.Model):
            MANUFACTURER_POPULARITY_CHOICES = (
                ('1', 'Primary'),
                ('2', 'Secondary'),
                ('3', 'Tertiary'),
            )
            manufacturer = models.CharField(max_length=15, blank=False)
            date_added = models.DateField(default=datetime.today)
            manufacturer_popularity = models.CharField(max_length=1, choices=MANUFACTURER_POPULARITY_CHOICES)

            def __unicode__(self):
                return self.manufacturer

    I want to have the fields for model, series and manufacturer represented as dropdowns instead of text fields. I have customized the model forms as below:

        class SeriesForm(ModelForm):
            series = forms.ModelChoiceField(queryset=Series.objects.all())

            class Meta:
                model = Series
                exclude = ('model', 'date_added',)

        class ModelForm(ModelForm):
            model = forms.ModelChoiceField(queryset=Model.objects.all())

            class Meta:
                model = Model
                exclude = ('manufacturer', 'date_added',)

        class ManufacturerForm(ModelForm):
            manufacturer = forms.ModelChoiceField(queryset=Manufacturer.objects.all())

            class Meta:
                model = Manufacturer
                exclude = ('date_added',)

    However, the dropdowns are populated with the unicode in the respective class... how can I further customize this to get the end result I want? Also, how can I populate the forms with the correct data for editing? Currently only SeriesForm is populated. The starting point of all this is from another class whose declaration is as below:

        class CommonVehicle(models.Model):
            year = models.ForeignKey(Year)
            series = models.ForeignKey(Series)
            ....

            def __unicode__(self):
                name = ''+str(self.year)+" "+str(self.series)
                return name
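    To control the text shown for each dropdown option without changing __unicode__, Django's documented hook is to subclass ModelChoiceField and override label_from_instance. A sketch; the field names, form name and label strings here are illustrative assumptions:

        from django import forms

        class ManufacturerChoiceField(forms.ModelChoiceField):
            def label_from_instance(self, obj):
                # Show just the manufacturer name rather than the full unicode output.
                return obj.manufacturer

        class SeriesChoiceField(forms.ModelChoiceField):
            def label_from_instance(self, obj):
                return obj.series or u"(unnamed series)"

        class VehicleModelForm(forms.ModelForm):
            manufacturer = ManufacturerChoiceField(queryset=Manufacturer.objects.all())

            class Meta:
                model = Model
                exclude = ('date_added',)

    For editing, instantiating the form with instance=some_object (or initial={...}) pre-selects the matching dropdown entries.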

  • Python: Networked IDLE?

    - by Rosarch
    Is there any existing web app that lets multiple users work with an interactive IDLE-type session at once? Something like:

        IDLE 2.6.4
        Morgan: >>> letters = list("abcdefg")
        Morgan: >>> # now, how would you iterate over letters?
        Jack:   >>> for char in letters:
                        print "char %s" % char
        char a
        char b
        char c
        char d
        char e
        char f
        char g
        Morgan: >>> # nice nice

    If not, I would like to create one. Is there some module I can use that simulates an interactive session? I'd want an interface like this:

        class InteractiveSession():
            ''' An interactive Python session '''

            def putLine(line):
                ''' Evaluates line '''
                pass

            def outputLines():
                ''' A list of all lines that have been output by the session '''
                pass

            def currentVars():
                ''' A dictionary of currently defined variables and their values '''
                pass

    (Although that last function would be more of an extra feature.)

    To formulate my problem another way: I'd like to create a new front end for IDLE. How can I do this?
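    The standard library's code module can host exactly this kind of session. Below is a rough sketch along the lines of the interface above, using code.InteractiveInterpreter and capturing stdout per submitted line; the class and method names just mirror the question and are not an existing module:

        import code
        import sys
        from io import StringIO

        class InteractiveSession(object):
            """A minimal shared Python session built on code.InteractiveInterpreter."""

            def __init__(self):
                self._interp = code.InteractiveInterpreter(locals={})
                self._output = []

            def put_line(self, line):
                old_stdout = sys.stdout
                sys.stdout = buf = StringIO()
                try:
                    self._interp.runsource(line)
                finally:
                    sys.stdout = old_stdout
                self._output.append(buf.getvalue())

            def output_lines(self):
                return list(self._output)

            def current_vars(self):
                return dict(self._interp.locals)

        session = InteractiveSession()
        session.put_line('letters = list("abcdefg")')
        session.put_line('print(letters[0])')
        print(session.output_lines())                # ['', 'a\n']
        print('letters' in session.current_vars())   # True

    Multi-line constructs (like the for loop in the transcript) would additionally need buffering until runsource reports a complete statement.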

  • Refresh decorator

    - by Morgoth
    I'm trying to write a decorator that 'refreshes' after being called, but where the refreshing only occurs once after the last function exits. Here is an example:

        @auto_refresh
        def a():
            print "In a"

        @auto_refresh
        def b():
            print "In b"
            a()

    If a() is called, I want the refresh function to be run after exiting a(). If b() is called, I want the refresh function to be run after exiting b(), but not after a() when called by b(). Here is an example of a class that does this:

        class auto_refresh(object):
            def __init__(self, f):
                print "Initializing decorator"
                self.f = f

            def __call__(self, *args, **kwargs):
                print "Before function"
                if 'refresh' in kwargs:
                    refresh = kwargs.pop('refresh')
                else:
                    refresh = False
                self.f(*args, **kwargs)
                print "After function"
                if refresh:
                    print "Refreshing"

    With this decorator, if I run

        b()
        print '---'
        b(refresh=True)
        print '---'
        b(refresh=False)

    I get the following output:

        Initializing decorator
        Initializing decorator
        Before function
        In b
        Before function
        In a
        After function
        After function
        ---
        Before function
        In b
        Before function
        In a
        After function
        After function
        Refreshing
        ---
        Before function
        In b
        Before function
        In a
        After function
        After function

    So when written this way, not specifying the refresh argument means that refresh is defaulted to False. Can anyone think of a way to change this so that refresh is True when not specified? Changing the refresh = False to refresh = True in the decorator does not work:

        Initializing decorator
        Initializing decorator
        Before function
        In b
        Before function
        In a
        After function
        Refreshing
        After function
        Refreshing
        ---
        Before function
        In b
        Before function
        In a
        After function
        Refreshing
        After function
        Refreshing
        ---
        Before function
        In b
        Before function
        In a
        After function
        Refreshing
        After function

    because refresh then gets called multiple times in the first and second case, and once in the last case (when it should be once in the first and second case, and not in the last case).
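    One way to get "refresh exactly once, after the outermost decorated call returns" without threading a refresh flag through the calls is to track nesting depth in the decorator. A sketch, where the refresh() stub stands in for whatever the real refresh step is:

        import functools

        def refresh():
            print("Refreshing")

        class auto_refresh(object):
            """Call refresh() only when the outermost decorated call finishes."""
            _depth = 0  # shared across all decorated functions

            def __init__(self, f):
                self.f = f
                functools.update_wrapper(self, f)

            def __call__(self, *args, **kwargs):
                auto_refresh._depth += 1
                try:
                    return self.f(*args, **kwargs)
                finally:
                    auto_refresh._depth -= 1
                    if auto_refresh._depth == 0:
                        refresh()

        @auto_refresh
        def a():
            print("In a")

        @auto_refresh
        def b():
            print("In b")
            a()

        b()   # In b, In a, Refreshing -- refresh runs once, after b() exits
        a()   # In a, Refreshing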

  • How to get user input before saving a file in Sublime Text

    - by EddieJessup
    I'm making a plugin in Sublime Text that prompts the user for a password to encrypt a file before it's saved. There's a hook in the API that's executed before a save is executed, so my naïve implementation is:

        class TranscryptEventListener(sublime_plugin.EventListener):

            def on_pre_save(self, view):
                # If document is set to encode on save
                if view.settings().get('ON_SAVE'):
                    self.view = view
                    # Prompt user for password
                    message = "Create a Password:"
                    view.window().show_input_panel(message, "", self.on_done, None, None)

            def on_done(self, password):
                self.view.run_command("encode", {"password": password})

    The problem with this is that, by the time the input panel appears for the user to enter their password, the document has already been saved (despite the trigger being 'on_pre_save'). Then, once the user hits enter, the document is encrypted fine, but the situation is that there's a saved plaintext file and a modified buffer filled with the encrypted text. So I need to make Sublime Text wait until the user has input the password before carrying out the save. Is there a way to do this? At the moment I'm just manually re-saving once the encryption has been done:

        def on_pre_save(self, view, encode=False):
            if view.settings().get('ON_SAVE') and not view.settings().get('ENCODED'):
                self.view = view
                message = "Create a Password:"
                view.window().show_input_panel(message, "", self.on_done, None, None)

        def on_done(self, password):
            self.view.run_command("encode", {"password": password})
            self.view.settings().set('ENCODED', True)
            self.view.run_command('save')
            self.view.settings().set('ENCODED', False)

    but this is messy, and if the user cancels the encryption then the plaintext file gets saved, which isn't ideal. Any thoughts?

    Edit: I think I could do it cleanly by overriding the default save command. I hoped to do this by using the on_text_command or on_window_command triggers, but it seems that the save command doesn't trigger either of these (maybe it's an application command? But there's no on_application_command). Is there just no way to override the save function?

  • How should I avoid memoization causing bugs in Ruby?

    - by Andrew Grimm
    Is there a consensus on how to avoid memoization causing bugs due to mutable state? In this example, a cached result had its state mutated, and therefore gave the wrong result the second time it was called.

        class Greeter
          def initialize
            @greeting_cache = {}
          end

          def expensive_greeting_calculation(formality)
            case formality
              when :casual then "Hi"
              when :formal then "Hello"
            end
          end

          def greeting(formality)
            unless @greeting_cache.has_key?(formality)
              @greeting_cache[formality] = expensive_greeting_calculation(formality)
            end
            @greeting_cache[formality]
          end
        end

        def memoization_mutator
          greeter = Greeter.new
          first_person = "Bob"
          # Mildly contrived in this case,
          # but you could encounter this in more complex scenarios
          puts(greeter.greeting(:casual) << " " << first_person)  # => Hi Bob
          second_person = "Sue"
          puts(greeter.greeting(:casual) << " " << second_person) # => Hi Bob Sue
        end

        memoization_mutator

    Approaches I can see to avoid this are:

    1. greeting could return a dup or clone of @greeting_cache[formality].
    2. greeting could freeze the result of @greeting_cache[formality]. That'd cause an exception to be raised when memoization_mutator appends strings to it.
    3. Check all code that uses the result of greeting to ensure none of it does any mutating of the string.

    Is there a consensus on the best approach? Is the only disadvantage of doing (1) or (2) decreased performance? (I also suspect freezing an object may not work fully if it has references to other objects.)

    Side note: this problem doesn't affect the main application of memoization: as Fixnums are immutable, calculating Fibonacci sequences doesn't have problems with mutable state. :)

  • FIFO dequeueing in python?

    - by Aaron Ramsey
    Hello again, everybody. I'm looking to make a functional (not necessarily optimally efficient, as I'm very new to programming) FIFO queue, and am having trouble with my dequeueing. My code looks like this:

        class QueueNode:
            def __init__(self, data):
                self.data = data
                self.next = None

            def __str__(self):
                return str(self.data)

        class Queue:
            def __init__(self):
                self.front = None
                self.rear = None
                self.size = 0

            def enqueue(self, item):
                newnode = QueueNode(item)
                newnode.next = None
                if self.size == 0:
                    self.front = self.rear = newnode
                else:
                    self.rear = newnode
                    self.rear.next = newnode.next
                self.size = self.size+1

            def dequeue(self):
                dequeued = self.front.data
                del self.front
                self.size = self.size-1
                if self.size == 0:
                    self.rear = None
                print self.front  # for testing

    If I do this and dequeue an item, I get the error "AttributeError: Queue instance has no attribute 'front'." I guess my function doesn't properly assign the new front of the queue? I'm not sure how to fix it, though. I don't really want to start from scratch, so if there's a tweak to my code that would work, I'd prefer that; I'm not trying to minimize runtime so much as just get a feel for classes and things of that nature. Thanks in advance for any help, you guys are the best.
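    For reference, here is a minimal sketch of a singly linked FIFO whose dequeue re-links the head pointer instead of deleting the attribute. The names roughly mirror the code above, but the implementation is my own illustration, not a fix applied to it:

        class Node:
            def __init__(self, data):
                self.data = data
                self.next = None

        class FifoQueue:
            def __init__(self):
                self.front = None
                self.rear = None
                self.size = 0

            def enqueue(self, item):
                node = Node(item)
                if self.rear is None:         # empty queue
                    self.front = self.rear = node
                else:
                    self.rear.next = node     # old tail points at the new node
                    self.rear = node
                self.size += 1

            def dequeue(self):
                if self.front is None:
                    raise IndexError("dequeue from empty queue")
                node = self.front
                self.front = node.next        # advance the head pointer
                if self.front is None:
                    self.rear = None
                self.size -= 1
                return node.data

        q = FifoQueue()
        q.enqueue(1)
        q.enqueue(2)
        print(q.dequeue(), q.dequeue())       # 1 2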

  • Why is the Stream/lazy val implementation faster than the ListBuffer one?

    - by anrizal
    I coded the following implementation of lazy sieve algorithms using Stream and lazy val below:

        def primes(): Stream[Int] = {
          lazy val ps = 2 #:: sieve(3)
          def sieve(p: Int): Stream[Int] = {
            p #:: sieve(
              Stream.from(p + 2, 2).
                find(i => ps.takeWhile(j => j * j <= i).
                  forall(i % _ > 0)).get)
          }
          ps
        }

    and the following implementation using (mutable) ListBuffer:

        import scala.collection.mutable.ListBuffer

        def primes(): Stream[Int] = {
          def sieve(p: Int, ps: ListBuffer[Int]): Stream[Int] = {
            p #:: {
              val nextprime = Stream.from(p + 2, 2).
                find(i => ps.takeWhile(j => j * j <= i).
                  forall(i % _ > 0)).get
              sieve(nextprime, ps += nextprime)
            }
          }
          sieve(3, ListBuffer(3))
        }

    When I did primes().takeWhile(_ < 1000000).size, the first implementation is 3 times faster than the second one. What's the explanation for this?

    I edited the second version: it should have been sieve(3, ListBuffer(3)) instead of sieve(3, ListBuffer()).

  • Why does one of these statements compile in Scala but not the other?

    - by Jeff
    (Note: I'm using Scala 2.7.7 here, not 2.8.) I'm doing something pretty simple -- creating a map based on the values in a simple, 2-column CSV file -- and I've completed it easily enough, but I'm perplexed at why my first attempt didn't compile. Here's the code:

        // Returns Iterator[String]
        private def getLines = Source.fromFile(csvFilePath).getLines

        // This doesn't compile:
        def mapping: Map[String,String] = {
          Map(getLines map { line: String =>
            val pairArr = line.split(",")
            pairArr(0) -> pairArr(1).trim()
          }.toList:_*)
        }

        // This DOES compile
        def mapping: Map[String,String] = {
          def strPair(line: String): (String,String) = {
            val pairArr = line.split(",")
            pairArr(0) -> pairArr(1).trim()
          }
          Map(getLines.map( strPair(_) ).toList:_*)
        }

    The compiler error is:

        CsvReader.scala:16: error: value toList is not a member of (String) => (java.lang.String, java.lang.String)
        [scalac] possible cause: maybe a semicolon is missing before `value toList'?
        [scalac]     }.toList:_*)
        [scalac]      ^
        [scalac] one error found

    So what gives? They seem like they should be equivalent to me, apart from the explicit function definition (vs. anonymous in the non-working example) and () vs. {}. If I replace the curly braces with parentheses in the non-working example, the error is "';' expected, but 'val' found." But if I remove the local variable definition and split the string twice AND use parens instead of curly braces, it compiles. Can someone explain this difference to me, preferably with a link to Scala docs explaining the difference between parens and curly braces when used to surround method arguments?

  • Django formset unit test

    - by Py
    I can't running Unit Test with formset. I try to do a test: class NewClientTestCase(TestCase): def setUp(self): self.c = Client() def test_0_create_individual_with_same_adress(self): post_data = { 'ctype': User.CONTACT_INDIVIDUAL, 'username': 'dupond.f', 'email': '[email protected]', 'password': 'pwd', 'password2': 'pwd', 'civility': User.CIVILITY_MISTER, 'first_name': 'François', 'last_name': 'DUPOND', 'phone': '+33 1 34 12 52 30', 'gsm': '+33 6 34 12 52 30', 'fax': '+33 1 34 12 52 30', 'form-0-address1': '33 avenue Gambetta', 'form-0-address2': 'apt 50', 'form-0-zip_code': '75020', 'form-0-city': 'Paris', 'form-0-country': 'FRA', 'same_for_billing': True, } response = self.c.post(reverse('client:full_account'), post_data, follow=True) self.assertRedirects(response, '%s?created=1' % reverse('client:dashboard')) and i have this error: ValidationError: [u'ManagementForm data is missing or has been tampered with'] My view : def full_account(request, url_redirect=''): from forms import NewUserFullForm, AddressForm, BaseArticleFormSet fields_required = [] fields_notrequired = [] AddressFormSet = formset_factory(AddressForm, extra=2, formset=BaseArticleFormSet) if request.method == 'POST': form = NewUserFullForm(request.POST) objforms = AddressFormSet(request.POST) if objforms.is_valid() and form.is_valid(): user = form.save() address = objforms.forms[0].save() if url_redirect=='': url_redirect = '%s?created=1' % reverse('client:dashboard') logon(request, form.instance) return HttpResponseRedirect(url_redirect) else: form = NewUserFullForm() objforms = AddressFormSet() return direct_to_template(request, 'clients/full_account.html', { 'form':form, 'formset': objforms, 'tld_fr':False, }) and my form file : class BaseArticleFormSet(BaseFormSet): def clean(self): msg_err = _('Ce champ est obligatoire.') non_errors = True if 'same_for_billing' in self.data and self.data['same_for_billing'] == 'on': same_for_billing = True else: same_for_billing = False for i in [0, 1]: form = self.forms[i] for field in form.fields: name_field = 'form-%d-%s' % (i, field ) value_field = self.data[name_field].strip() if i == 0 and self.forms[0].fields[field].required and value_field =='': form.errors[field] = msg_err non_errors = False elif i == 1 and not same_for_billing and self.forms[1].fields[field].required and value_field =='': form.errors[field] = msg_err non_errors = False return non_errors class AddressForm(forms.ModelForm): class Meta: model = Address address1 = forms.CharField() address2 = forms.CharField(required=False) zip_code = forms.CharField() city = forms.CharField() country = forms.ChoiceField(choices=CountryField.COUNTRIES, initial='FRA')

  • Python Turtle Graphics, how to plot functions over an interval?

    - by TheDragonAce
    I need to plot a function over a specified interval. The function is f1, which is shown below in the code, and the interval is [-7, -3]; [-1, 1]; [3, 7] with a step of .01. When I execute the program, nothing is drawn. Any ideas? import turtle from math import sqrt wn = turtle.Screen() wn.bgcolor("white") wn.title("Plotting") mypen = turtle.Turtle() mypen.shape("classic") mypen.color("black") mypen.speed(10) while True: try: def f1(x): return 2 * sqrt((-abs(abs(x)-1)) * abs(3 - abs(x))/((abs(x)-1)*(3-abs(x)))) * \ (1 + abs(abs(x)-3)/(abs(x)-3))*sqrt(1-(x/7)**2)+(5+0.97*(abs(x-0.5)+abs(x+0.5))-\ 3*(abs(x-0.75)+abs(x+0.75)))*(1+abs(1-abs(x))/(1-abs(x))) mypen.penup() step=.01 startf11=-7 stopf11=-3 startf12=-1 stopf12=1 startf13=3 stopf13=7 def f11 (startf11,stopf11,step): rc=[] y = f1(startf11) while y<=stopf11: rc.append(startf11) #y+=step mypen.setpos(f1(startf11)*25,y*25) mypen.dot() def f12 (startf12,stopf12,step): rc=[] y = f1(startf12) while y<=stopf12: rc.append(startf12) #y+=step mypen.setpos(f1(startf12)*25, y*25) mypen.dot() def f13 (startf13,stopf13,step): rc=[] y = f1(startf13) while y<=stopf13: rc.append(startf13) #y+=step mypen.setpos(f1(startf13)*25, y*25) mypen.dot() f11(startf11,stopf11,step) f12(startf12,stopf12,step) f13(startf13,stopf13,step) except ZeroDivisionError: continue

  • ruby/datamapper: Refactor class methods to module

    - by DeSchleib
    Hello, i've the following code and tried the whole day to refactor the class methods to a sperate module to share the functionality with all of my model classes. Code (http://pastie.org/974847): class Merchant include DataMapper::Resource property :id, Serial [...] class << self @allowed_properties = [:id,:vendor_id, :identifier] alias_method :old_get, :get def get *args [...] end def first_or_create_or_update attr_hash [...] end end end I'd like to archive something like: class Merchant include DataMapper::Resource include MyClassFunctions [...] end module MyClassFunctions def get [...] def first_or_create_or_update[...] end => Merchant.allowed_properties = [:id] => Merchant.get( :id=> 1 ) But unfortunately, my ruby skills are to bad. I read a lot of stuff (e.g. here) and now i'm even more confused. I stumbled over the following two points: alias_method will fail, because it will dynamically defined in the DataMapper::Resource module. How to get a class method allowed_properties due including a module? What's the ruby way to go? Many thanks in advance.

  • Problem binding a name to a class in Django

    - by martinthenext
    Hello! I've got a view function that has to decide which form to use depending on some conditions. The two forms look like that: class OpenExtraForm(forms.ModelForm): class Meta: model = Extra def __init__(self, *args, **kwargs): super(OpenExtraForm, self).__init__(*args, **kwargs) self.fields['opening_challenge'].label = "lame translation" def clean_opening_challenge(self): challenge = self.cleaned_data['opening_challenge'] if challenge is None: raise forms.ValidationError('??????? ???, ??????????? ?????? ???. ???????????') return challenge class HiddenExtraForm(forms.ModelForm): class Meta: model = Extra exclude = ('opening_challenge') def __init__(self, *args, **kwargs): super(HiddenExtraForm, self).__init__(*args, **kwargs) The view code goes like that: @login_required def manage_extra(request, extra_id=None, hidden=False): if not_admin(request.user): raise Http404 if extra_id is None: # Adding a new extra extra = Extra() if hidden: FormClass = HiddenExtraForm else: FormClass = OpenExtraForm else: # Editing an extra extra = get_object_or_404(Extra, pk=extra_id) if extra.is_hidden(): FromClass = HiddenExtraForm else: FormClass = OpenExtraForm if request.POST: form = FormClass(request.POST, instance=extra) if form.is_valid(): form.save() return HttpResponseRedirect(reverse(view_extra, args=[extra.id])) else: form = FormClass(instance=extra) return render_to_response('form.html', { 'form' : form, }, context_instance=RequestContext(request) ) The problem is somehow if extra.is_hidden() returns True, the statement FromClass = HiddenExtraForm doesn't work. I mean, in all other conditions that are used in the code it works fine: the correct Form classes are intantiated and it all works. But if extra.is_hidden(), the debugger shows that the condition is passed and it goes to the next line and does nothing! As a result I get a UnboundLocalVar error which says FormClass hasn't been asssigned at all. Any ideas on what's happening?

  • Returning the same type the function was passed

    - by Ken Bloom
    I have the following code implementation of Breadth-First search:

        trait State{
          def successors:Seq[State]
          def isSuccess:Boolean = false
          def admissableHeuristic:Double
        }

        def breadthFirstSearch(initial:State):Option[List[State]] = {
          val open = new scala.collection.mutable.Queue[List[State]]
          val closed = new scala.collection.mutable.HashSet[State]
          open.enqueue(initial::Nil)
          while (!open.isEmpty){
            val path:List[State]=open.dequeue()
            if(path.head.isSuccess) return Some(path.reverse)
            closed += path.head
            for (x <- path.head.successors)
              if (!closed.contains(x)) open.enqueue(x::path)
          }
          return None
        }

    If I define a subtype of State for my particular problem:

        class CannibalsState extends State {
          //...
        }

    What's the best way to make breadthFirstSearch return the same subtype as it was passed? Supposing I change this so that there are 3 different state classes for my particular problem and they share a common supertype:

        abstract class CannibalsState extends State {
          //...
        }

        class LeftSideOfRiver extends CannibalsState {
          //...
        }

        class InTransit extends CannibalsState {
          //...
        }

        class RightSideOfRiver extends CannibalsState {
          //...
        }

    How can I make the types work out so that breadthFirstSearch infers that the correct return type is CannibalsState when it's passed an instance of LeftSideOfRiver? Can this be done with an abstract type member, or must it be done with generics?
