Search Results

Search found 5146 results on 206 pages for 'foo chow'.


  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background: I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process, and I hope to move them to a shared Redis.

    I have a dozen or so caches of simple, small-sized business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId, and one "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply:

        Dictionary<int, Foo> _cacheFooById;
        Dictionary<int, HashSet<int>> _indexFooIdsByOwnerId;

    Reads come straight from here, and writes go here and to the RDBMS. I usually maintain this invariant: "For a given group [say, by OwnerId], the whole group is in cache or none of it is." So when I cache-miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates keep the index up to date and respect the invariant. When an owner calls GetMyFoos I never have to worry that some are cached and some aren't.

    What I did already: The first/simplest answer seems to be plain ol' SET and GET with a composite key and a JSON value:

        SET("ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo));

    I later decided I liked:

        HSET("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));

    That lets me get all the values in one cache with HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like:

        UpdateCache(myFoo);
        AddToIndex(myFoo);

    that translates into:

        HSET("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));
        var myFoos = JsonDeserialize(HGET("ServiceCache:FooIndex", theFoo.OwnerId));
        myFoos.Add(theFoo.FooId);
        HSET("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos));

    However, this is broken in two ways. First, two concurrent operations can read/modify/write at the same time; the latter "wins" the final HSET and the former's index update is lost. Second, another operation could read the index in between the first and second lines, and it would miss a Foo that it should find.

    So how do I index properly? I think I could use a Redis set instead of a JSON-encoded value for the index. That would solve part of the problem, since "add-to-index-if-not-already-present" would be atomic. I also read about using MULTI as a "transaction", but it doesn't seem to do what I want. Am I right that I can't really do MULTI; HGET; {update}; HSET; EXEC, since it doesn't even perform the HGET until I issue the EXEC?

    I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys, so it's back to SET/GET instead of HSET/HGET - and now I need a new index-like thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like:

        while (!succeeded)
        {
            WATCH("ServiceCache:Foo:" + theFoo.FooId);
            WATCH("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId);
            WATCH("ServiceCache:FooIndexAll");
            MULTI();
            SET("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo));
            SADD("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId);
            SADD("ServiceCache:FooIndexAll", theFoo.FooId);
            EXEC();
            // TODO somehow set succeeded properly
        }

    Finally, I'd have to translate this pseudocode into real code depending on how my client library uses WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together.

    All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing.

    How do I lock properly? Even if I had no indexes, there's still a (probably rare) race condition:

        A: HGET - cache miss
        B: HGET - cache miss
        A: SELECT
        B: SELECT
        A: HSET
        C: HGET - cache hit
        C: UPDATE
        C: HSET
        B: HSET  ** stale data clobbering C's update

    Note that C could just be a really fast A. Again, I think WATCH, MULTI, retry would work, but... ick. I know that in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or should I make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id}? And how would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
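
    To make the retry loop concrete, here is roughly how it comes out in Python with the redis-py client (an assumption on my part - the question doesn't name a client library). pipeline() is the "context" that hooks WATCH/MULTI/EXEC together, and catching WatchError is the "somehow set succeeded" part:

        import json
        import redis

        r = redis.Redis()

        def cache_foo(foo):
            """Optimistic-concurrency upsert: retry until no watched key changes.

            foo is assumed to be a plain dict with FooId and OwnerId keys.
            """
            foo_key = "ServiceCache:Foo:%s" % foo["FooId"]
            owner_key = "ServiceCache:FooIndexByOwner:%s" % foo["OwnerId"]
            all_key = "ServiceCache:FooIndexAll"
            with r.pipeline() as pipe:
                while True:
                    try:
                        pipe.watch(foo_key, owner_key, all_key)  # WATCH before writing
                        pipe.multi()                             # start queueing writes
                        pipe.set(foo_key, json.dumps(foo))
                        pipe.sadd(owner_key, foo["FooId"])
                        pipe.sadd(all_key, foo["FooId"])
                        pipe.execute()  # EXEC; raises if a watched key changed
                        return
                    except redis.WatchError:
                        continue  # someone raced us; re-WATCH and retry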


  • Testing warnings with doctest

    - by Eli Courtwright
    I'd like to use doctests to test the presence of certain warnings. For example, suppose I have the following module:

        # testdocs.py
        from warnings import warn

        class Foo(object):
            """
            Instantiating Foo always gives a warning:

            >>> foo = Foo()
            testdocs.py:14: UserWarning: Boo!
              warn("Boo!", UserWarning)
            >>>
            """
            def __init__(self):
                warn("Boo!", UserWarning)

    If I run python -m doctest testdocs.py to run the doctest in my class and make sure that the warning is printed, I get:

        testdocs.py:14: UserWarning: Boo!
          warn("Boo!", UserWarning)
        **********************************************************************
        File "testdocs.py", line 7, in testdocs.Foo
        Failed example:
            foo = Foo()
        Expected:
            testdocs.py:14: UserWarning: Boo!
            warn("Boo!", UserWarning)
        Got nothing
        **********************************************************************
        1 items had failures:
           1 of   1 in testdocs.Foo
        ***Test Failed*** 1 failures.

    It looks like the warning is getting printed but not captured or noticed by doctest. I'm guessing this is because warnings are printed to sys.stderr instead of sys.stdout. But this happens even when I say sys.stderr = sys.stdout at the end of my module. So is there any way to use doctests to test for warnings? I can find no mention of this one way or the other in the documentation or in my Google searching.
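
    For the narrower goal of asserting that a warning is raised, one option (my suggestion, not from the question) is to capture warnings explicitly inside the doctest with warnings.catch_warnings, which sidesteps the stderr problem entirely:

        from warnings import catch_warnings, simplefilter, warn

        class Foo(object):
            """
            Instantiating Foo always gives a warning:

            >>> with catch_warnings(record=True) as caught:
            ...     simplefilter("always")
            ...     foo = Foo()
            >>> "%s: %s" % (caught[0].category.__name__, caught[0].message)
            'UserWarning: Boo!'
            """
            def __init__(self):
                warn("Boo!", UserWarning)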


  • System.Dynamic bug?

    - by ControlFlow
    While I was playing with the C# 4.0 dynamic, I found strange things happening with code like this:

        using System.Dynamic;

        sealed class Foo : DynamicObject
        {
            public override bool TryInvoke(
                InvokeBinder binder, object[] args, out object result)
            {
                result = new object();
                return true;
            }

            static void Main()
            {
                dynamic foo = new Foo();
                var t1 = foo(0);
                var t2 = foo(0);
                var t3 = foo(0);
                var t4 = foo(0);
                var t5 = foo(0);
            }
        }

    OK, it works but... take a look at the IntelliTrace window: every invocation (and other operations on the dynamic object too) causes strange exceptions to be thrown and caught twice! I understand that sometimes the exception mechanism may be used for optimizations. For example, the first call on a dynamic object may be routed to a stub delegate that simply throws an exception - a signal to the dynamic binder to resolve the correct member and re-point the delegate, so the next call through the same delegate is performed without any checks. But... the behavior of the code above looks very strange. Maybe throwing and catching exceptions twice per operation on a DynamicObject is a bug?


  • Why does Java's invokevirtual need to resolve the called method's compile-time class?

    - by Chris
    Consider this simple Java class:

        class MyClass {
            public void bar(MyClass c) {
                c.foo();
            }
        }

    I want to discuss what happens on the line c.foo(). At the bytecode level, the meat of c.foo() will be the invokevirtual opcode, and, according to the documentation for invokevirtual, more or less the following will happen:

    1. Look up the foo method defined in the compile-time class MyClass. (This involves first resolving MyClass.)
    2. Do some checks, including: verify that foo is not an initialization method, and verify that calling MyClass.foo wouldn't violate any protected modifiers.
    3. Figure out which method to actually call. In particular, look up c's runtime type. If that type has foo(), call that method and return. If not, look up c's runtime type's superclass; if that type has foo, call that method and return. If not, look up c's runtime type's superclass's superclass, and so on. If no suitable method can be found, then error.

    Step 3 alone seems adequate for figuring out which method to call and verifying that said method has the correct argument/return types. So my question is why step 1 gets performed in the first place. Possible answers seem to be:

    1. You don't have enough information to perform step 3 until step 1 is complete. (This seems implausible at first glance, so please explain.)
    2. The linking or access-modifier checks done in steps 1 and 2 are essential to prevent certain bad things from happening, and those checks must be performed based on the compile-time type rather than the run-time type hierarchy. (Please explain.)
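
    As an aside, step 3 by itself is just a walk up the runtime class chain. In Python terms (an analogy only - not JVM code), the lookup looks like:

        def resolve_virtual(receiver, name):
            """Find 'name' on the receiver's runtime class or its nearest superclass."""
            for klass in type(receiver).__mro__:   # runtime type first, then its supers
                if name in klass.__dict__:         # the first class that defines it wins
                    return klass.__dict__[name]
            raise AttributeError("no suitable method found: %s" % name)

        # resolve_virtual(c, "foo")(c)  # call the resolved method on the receiver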


  • How can I make this work with deep properties

    - by Martin Robins
    Given the following code...

        class Program
        {
            static void Main(string[] args)
            {
                Foo foo = new Foo
                {
                    Bar = new Bar { Name = "Martin" },
                    Name = "Martin"
                };
                DoLambdaStuff(foo, f => f.Name);
                DoLambdaStuff(foo, f => f.Bar.Name);
            }

            static void DoLambdaStuff<TObject, TValue>(TObject obj, Expression<Func<TObject, TValue>> expression)
            {
                // Set up and test "getter"...
                Func<TObject, TValue> getValue = expression.Compile();
                TValue stuff = getValue(obj);

                // Set up and test "setter"...
                ParameterExpression objectParameterExpression = Expression.Parameter(typeof(TObject)),
                    valueParameterExpression = Expression.Parameter(typeof(TValue));
                Expression<Action<TObject, TValue>> setValueExpression = Expression.Lambda<Action<TObject, TValue>>(
                    Expression.Block(
                        Expression.Assign(
                            Expression.Property(objectParameterExpression,
                                ((MemberExpression)expression.Body).Member.Name),
                            valueParameterExpression)
                    ),
                    objectParameterExpression, valueParameterExpression
                );
                Action<TObject, TValue> setValue = setValueExpression.Compile();
                setValue(obj, stuff);
            }
        }

        class Foo
        {
            public Bar Bar { get; set; }
            public string Name { get; set; }
        }

        class Bar
        {
            public string Name { get; set; }
        }

    The call to DoLambdaStuff(foo, f => f.Name) works OK because I am accessing a shallow property. However, the call to DoLambdaStuff(foo, f => f.Bar.Name) fails: although the creation of the getValue function works fine, the creation of the setValueExpression fails because I am attempting to access a deep property of the object. Can anybody please help me modify this so that I can create the setValueExpression for deep properties as well as shallow ones? Thanks.
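
    The shape of the fix - walk to the object that owns the leaf property, then assign on that object - is easier to see outside of expression trees. The same idea sketched in Python (hypothetical helper names, for illustration only):

        from functools import reduce

        def get_deep(obj, path):
            """Resolve a dotted path like 'bar.name' one attribute at a time."""
            return reduce(getattr, path.split("."), obj)

        def set_deep(obj, path, value):
            """Walk to the parent of the leaf attribute, then assign on the parent."""
            head, _, leaf = path.rpartition(".")
            parent = reduce(getattr, head.split("."), obj) if head else obj
            setattr(parent, leaf, value)

        # set_deep(foo, "bar.name", "Martin")  # the f => f.Bar.Name case
        # set_deep(foo, "name", "Martin")      # the shallow case still works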


  • Unit testing with Mocks when SUT is leveraging the Task Parallel Library

    - by StevenH
    I am trying to unit test / verify that a method is being called on a dependency by the system under test. The dependency is IFoo. The dependent class is IBar, implemented as Bar. Bar will call Start() on each IFoo in a new (System.Threading.Tasks.)Task when Start() is called on the Bar instance.

    Unit test (Moq):

        [Test]
        public void StartBar_ShouldCallStartOnAllFoo_WhenFoosExist()
        {
            // ARRANGE
            // Create a foo, and set up an expectation
            var mockFoo0 = new Mock<IFoo>();
            mockFoo0.Setup(foo => foo.Start());
            var mockFoo1 = new Mock<IFoo>();
            mockFoo1.Setup(foo => foo.Start());

            // Add mock objects to a collection
            var foos = new List<IFoo> { mockFoo0.Object, mockFoo1.Object };

            IBar sutBar = new Bar(foos);

            // ACT
            sutBar.Start(); // Should call mockFoo.Start()

            // ASSERT
            mockFoo0.VerifyAll();
            mockFoo1.VerifyAll();
        }

    Implementation of IBar as Bar:

        class Bar : IBar
        {
            private IEnumerable<IFoo> Foos { get; set; }

            public Bar(IEnumerable<IFoo> foos)
            {
                Foos = foos;
            }

            public void Start()
            {
                foreach (var foo in Foos)
                {
                    Task.Factory.StartNew(() => { foo.Start(); });
                }
            }
        }

    It appears that the issue is due to the fact that the call to foo.Start() takes place on another thread (/task), and the mock (Moq framework) can't detect it. But I could be wrong. Thanks.
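
    The suspected race is real in any task library: the assertions can run before the tasks do. The usual fix is to keep the task handles and wait on them before verifying. Sketched in Python with concurrent.futures and unittest.mock (an analogy to the Moq setup, not Moq itself):

        from concurrent.futures import ThreadPoolExecutor, wait
        from unittest.mock import Mock

        class Bar:
            """Analogue of the C# Bar: starts every foo on a worker thread."""
            def __init__(self, foos):
                self.foos = foos

            def start(self):
                pool = ThreadPoolExecutor()
                # Keep the futures: they are the handle that lets callers (and
                # tests) wait until every foo.start() has actually executed.
                futures = [pool.submit(foo.start) for foo in self.foos]
                wait(futures)
                pool.shutdown()

        def test_start_calls_start_on_all_foos():
            foos = [Mock(), Mock()]
            Bar(foos).start()
            for foo in foos:
                foo.start.assert_called_once()  # passes only because start() waited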


  • Count all lists of adjacent nodes stored in an array.

    - by Ben Brodie
    There are many naive approaches to this problem, but I'm looking for a good solution. Here is the problem (it will be implemented in Java):

    You have a function foo(int a, int b) that returns true if 'a' is "adjacent" to 'b' and false if it is not. You have an array such as [1,4,5,9,3,2,6,15,89,11,24], but in reality much longer - around 120 elements - and it will be evaluated over and over (it's a genetic algorithm fitness function), which is why efficiency is important.

    I want a function that returns the length of each possible 'list' of adjacencies in the array, but not the 'lists' that are simply subsets of a larger list. For example, if foo(1,4) and foo(4,5) are true, foo(5,9) is false, foo(9,3), foo(3,2) and foo(2,6) are true, and foo(6,15) is false, then there are 'lists' (1,4,5) and (9,3,2,6), of lengths 3 and 4. I don't want it to return (3,2,6), though, because that is simply a subset of (9,3,2,6). Thanks.
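
    Language aside (the question targets Java, but the scan is the same anywhere), a single linear pass suffices, since a maximal run automatically excludes its own sub-runs - O(n) calls to foo per fitness evaluation. A sketch in Python:

        def run_lengths(values, foo):
            """Lengths of all maximal runs of pairwise-adjacent elements."""
            lengths = []
            run = 1  # the current run always contains at least one element
            for a, b in zip(values, values[1:]):
                if foo(a, b):
                    run += 1                 # extend the current run
                else:
                    if run > 1:
                        lengths.append(run)  # close off a maximal run
                    run = 1
            if run > 1:
                lengths.append(run)          # the array may end mid-run
            return lengths

        # run_lengths([1, 4, 5, 9, 3, 2, 6, 15, 89, 11, 24], foo) -> [3, 4]
        # (assuming foo is true exactly for the pairs given in the question)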


  • Is it valid to use unsafe struct * as an opaque type instead of IntPtr in .NET Platform Invoke?

    - by David Jeske
    .NET Platform Invoke advocates declaring pointer types as IntPtr. For example:

        [DllImport("user32.dll")]
        static extern IntPtr SendMessage(IntPtr hWnd, UInt32 Msg, Int32 wParam, Int32 lParam);

    However, I find when interfacing with interesting native interfaces that have many pointer types, flattening everything into IntPtr makes the code very hard to read and removes the typical type checking that a compiler can do for me. I've been using a pattern where I declare an unsafe struct to be an opaque pointer type. I can store this pointer type in a managed object, and the compiler can type-check it for me. For example:

        class Foo
        {
            unsafe struct FOO { }; // opaque type
            unsafe FOO* my_foo;

            static class Native
            {
                [DllImport("mydll")]
                internal extern static unsafe FOO* get_foo();
                [DllImport("mydll")]
                internal extern static unsafe void do_something_foo(FOO* foo);
            }

            public unsafe Foo()
            {
                this.my_foo = Native.get_foo();
            }

            public unsafe void do_something_foo()
            {
                Native.do_something_foo(this.my_foo);
            }
        }

    While this example may not seem different from using IntPtr, when there are several pointer types moving between managed and native code, using these opaque pointer types for type checking is a godsend. I have not run into any trouble using this technique in practice. However, I also have not seen any examples of anyone else using it, and I wonder why. Is there any reason the above code is invalid in the eyes of the .NET runtime?

    My main question is about how the .NET GC system treats "unsafe FOO* my_foo". Is this pointer something the GC is going to try to trace, or is it simply going to ignore it? My hope is that because the underlying type is a struct, and it's declared unsafe, the GC will ignore it. However, I don't know for sure. Thoughts?
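
    Incidentally, the same opaque-handle idiom exists in Python's ctypes, where an empty Structure subclass recovers exactly this kind of pointer-level type checking over a bare c_void_p. A sketch by analogy (the library name is hypothetical, as in the question):

        import ctypes

        class FOO(ctypes.Structure):
            """Opaque type: no fields; only ever handled through a pointer."""

        lib = ctypes.CDLL("mydll.so")  # hypothetical library
        lib.get_foo.restype = ctypes.POINTER(FOO)
        lib.do_something_foo.argtypes = [ctypes.POINTER(FOO)]

        foo = lib.get_foo()
        lib.do_something_foo(foo)  # passing a POINTER to some other struct would raise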


  • Converting an int64 value to a Number object in JavaScript

    - by Matt
    I have a COM object which has a method that returns an unsigned int64 (VT_UI8) value. We have an HTML page containing some JavaScript which can load the COM object and call that method to retrieve the value:

        var foo = MyCOMObject.GetInt64Value();

    This value can easily be displayed to the user in a message dialog using:

        alert(foo);

    or displayed on the page by:

        document.getElementById('displayToUser').innerHTML = foo;

    However, we cannot use this value as a Number (e.g. if we try to multiply it by 2) without the page throwing "Number expected" errors. If we check typeof(foo) it returns "unknown". I've found a workaround for this by doing the following:

        document.getElementById('displayToUser').innerHTML = foo;
        var bar = parseInt(document.getElementById('displayToUser').innerHTML);
        alert(bar * 2);

    What I need to know is how to make that process more efficient. Specifically, is there a way to cast foo to a String explicitly, rather than having to set some document element's innerHTML to foo and then retrieve it from that? I wouldn't mind calling something like:

        alert(parseInt((string)foo) * 2);

    Even better would be a way to directly convert the int64 to a Number, without going through the String conversion, but I hold out less hope for that.


  • Saving multiple objects in a single call in rails

    - by CaptnCraig
    I have a method in Rails that is doing something like this:

        a = Foo.new("bar")
        a.save
        b = Foo.new("baz")
        b.save
        ...
        x = Foo.new("123", :parent_id => a.id)
        x.save
        ...
        z = Foo.new("zxy", :parent_id => b.id)
        z.save

    The problem is that this takes longer and longer the more entities I add. I suspect this is because it has to hit the database for every record. Since they are nested, I know I can't save the children before the parents are saved, but I would like to save all of the parents at once, and then all of the children. It would be nice to do something like:

        a = Foo.new("bar")
        b = Foo.new("baz")
        ...
        saveall(a, b, ...)
        x = Foo.new("123", :parent_id => a.id)
        ...
        z = Foo.new("zxy", :parent_id => b.id)
        saveall(x, ..., z)

    That would do it all in only two database hits. Is there an easy way to do this in Rails, or am I stuck doing it one at a time?
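
    Whatever the Rails answer turns out to be, the shape of the win is the same in any stack: one statement per batch instead of one per row, with parents flushed first so the children can reference their ids. A Python/sqlite3 sketch of that two-batch idea (table layout invented for illustration):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE foos (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER)")

        # Batch 1: all parents in a single statement.
        parents = [("bar",), ("baz",)]
        conn.executemany("INSERT INTO foos (name) VALUES (?)", parents)

        # Look up the generated parent ids, then batch 2: all children at once.
        ids = dict(conn.execute("SELECT name, id FROM foos"))
        children = [("123", ids["bar"]), ("zxy", ids["baz"])]
        conn.executemany("INSERT INTO foos (name, parent_id) VALUES (?, ?)", children)
        conn.commit()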


  • Vim + OmniCppComplete and completing members of class members

    - by Robert S. Barnes
    I've noticed that I can't seem to complete members of class members using OmniCppComplete. For example, given the following files:

        // foo.h
        #include <string>

        class foo {
        public:
            void set_str(const std::string &);
            std::string get_str_reverse( void );

        private:
            std::string str;
        };

        // foo.cpp
        #include "foo.h"

        using std::string;

        string foo::get_str_reverse ( void )
        {
            string temp;
            temp.assign(str);
            reverse(temp.begin(), temp.end());

            return temp;
        }   /* ----- end of method foo::get_str ----- */

        void foo::set_str ( const string &s )
        {
            str.assign(s);
        }   /* ----- end of method foo::set_str ----- */

    I've set up tags for stdlibc++ and generated the tags for these two files using:

        ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q .

    When I type "temp." in the .cpp I get a list of string member functions as expected. But if I type "str." omnicomplete spits out "Pattern Not Found". I've noticed that the "temp." completion only works if I have the using std::string; declaration. How do I get completion to work on my class members?


  • Having difficulty finishing Michael Hartl's tutorial. Help?

    - by konzepz
    Following Michael Hartl's (amazing) Ruby on Rails Tutorial, on the final section I get the following errors:

        1) User micropost associations status feed should include the microposts of followed users
           Failure/Error: @user.feed.should include(mp3)
           expected [#<Micropost id: 2, content: "Foo bar", user_id: 1, created_at: "2011-01-12 21:22:41", updated_at: "2011-01-12 22:22:41">, #<Micropost id: 1, content: "Foo bar", user_id: 1, created_at: "2011-01-11 22:22:41", updated_at: "2011-01-12 22:22:41">]
           to include #<Micropost id: 3, content: "Foo bar", user_id: 2, created_at: "2011-01-12 22:22:41", updated_at: "2011-01-12 22:22:41">
           Diff:
           @@ -1,2 +1,2 @@
           -#<Micropost id: 3, content: "Foo bar", user_id: 2, created_at: "2011-01-12 22:22:41", updated_at: "2011-01-12 22:22:41">
           +[#<Micropost id: 2, content: "Foo bar", user_id: 1, created_at: "2011-01-12 21:22:41", updated_at: "2011-01-12 22:22:41">, #<Micropost id: 1, content: "Foo bar", user_id: 1, created_at: "2011-01-11 22:22:41", updated_at: "2011-01-12 22:22:41">]
           # ./spec/models/user_spec.rb:214

        2) Micropost from_users_followed_by should include the followed user's microposts
           Failure/Error: Micropost.from_users_followed_by(@user).should include(@other_post)
           expected [#<Micropost id: 1, content: "foo", user_id: 1, created_at: "2011-01-12 22:22:46", updated_at: "2011-01-12 22:22:46">]
           to include #<Micropost id: 2, content: "bar", user_id: 2, created_at: "2011-01-12 22:22:46", updated_at: "2011-01-12 22:22:46">
           Diff:
           @@ -1,2 +1,2 @@
           -#<Micropost id: 2, content: "bar", user_id: 2, created_at: "2011-01-12 22:22:46", updated_at: "2011-01-12 22:22:46">
           +[#<Micropost id: 1, content: "foo", user_id: 1, created_at: "2011-01-12 22:22:46", updated_at: "2011-01-12 22:22:46">]
           # ./spec/models/micropost_spec.rb:75

        Finished in 9.18 seconds
        153 examples, 2 failures

    It seems mp3 is not included in the feed. Any ideas on how to fix it, or where to look for possible errors in the code? I compared the files with Hartl's original code; they seem identical. Thanks.


  • Can PHP dissect its own syntax?

    - by Nathan Long
    Can PHP dissect its own syntax? For example, I'd like to write a function that takes an input like $object->attribute and says to itself: "OK, he's giving me $foo->bar, which means he must think $foo is an object with a property called bar. Before I try accessing bar and potentially get a 'Trying to get property of non-object' error, let me check whether $foo is even an object."

    The end goal is to echo a value if it is set, and fail silently if not. I want to avoid repetition like this:

        <input value="<? if(is_object($foo) && isset($foo->bar)){ echo $foo->bar; }?>"/>

    ...and to avoid writing a function that does the above but has to have the object and attribute passed in separately, like this:

        <input value="<? echoAttribute($foo,'bar') ?>" />

    ...but to instead write something which:

    - preserves the object->attribute syntax
    - is flexible: can also handle array keys or regular variables

    Like this:

        <input value="<? echoIfSet($foo->bar); ?>" />
        <input value="<? echoIfSet($baz['buzz']); ?>" />
        <input value="<? echoIfSet($moo); ?>" />

    But this all depends on PHP being able to tell me "what kind of thing am I asking for when I say $object->attribute or $array[$key]?", so that my function can handle each according to its own type. Is this possible?
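
    The obstacle is that $foo->bar is evaluated before echoIfSet ever runs. One general dodge (shown in Python as an analogy, since the pattern is language-neutral) is to pass the access itself, deferred inside a closure, and let the helper swallow the failure:

        def echo_if_set(access):
            """Print the value if the deferred access succeeds; stay silent otherwise."""
            try:
                value = access()
            except (AttributeError, KeyError, TypeError, NameError):
                return  # fail silently, as the question asks
            if value is not None:
                print(value)

        # Usage: defer evaluation with a lambda so the helper controls the failure.
        # echo_if_set(lambda: foo.bar)
        # echo_if_set(lambda: baz["buzz"])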


  • Is it normal for C++ static initialization to appear twice in the same backtrace?

    - by Joseph Garvin
    I'm trying to debug a C++ program compiled with GCC that freezes at startup. GCC protects a function's static local variables with a mutex, and it appears that waiting to acquire such a lock is why it freezes.

    How this happens is rather confusing. First module A's static initialization occurs (there are __static_init functions GCC invokes that are visible in the backtrace), which calls a function Foo() that has a static local variable. The static local variable is an object whose constructor calls through several layers of functions, then suddenly the backtrace has a few ??'s, and then it is in the static initialization of a second module B (the __static functions occur all over again), which then calls Foo(). But since Foo() never returned the first time, the mutex on the local static variable is still held, and it locks.

    How can one static init trigger another? My first theory was shared libraries - that module A would be calling some function in module B that would cause module B to load, thus triggering B's static init - but that doesn't appear to be the case. Module A doesn't use module B at all.

    So I have a second (and horrifying) guess. Say that:

    1. Module A uses some templated function, or a function in a templated class, e.g. foo<int>::bar()
    2. Module B also uses foo<int>::bar()
    3. Module A doesn't depend on module B at all

    At link time, the linker has two instances of foo<int>::bar(), but this is OK because template functions are marked as weak symbols. At runtime, module A calls foo<int>::bar, and the static init of module B is triggered, even though module A doesn't depend on module B! Why? Because the linker decided to go with module B's instance of foo<int>::bar instead of module A's instance at link time.

    Is this particular scenario valid? Or should one module's static init never trigger static init in another module?


  • Setting an Excel Range with an Array using Python and comtypes?

    - by technomalogical
    Using comtypes to drive Excel from Python, it seems some magic happening behind the scenes is not converting tuples and lists to VARIANT types:

        # Range("C14:D21") has values.
        # Setting the Value on the Range with a VARIANT should work, but
        # a list or tuple is not getting converted properly, it seems.
        >>> from comtypes.client import CreateObject
        >>> xl = CreateObject("Excel.application")
        >>> xl.Workbooks.Open(r'C:\temp\my_file.xlsx')
        >>> xl.Visible = True
        >>> vals = tuple([(x, y) for x, y in zip('abcdefgh', xrange(8))])
        # creates:
        # (('a', 0), ('b', 1), ('c', 2), ('d', 3), ('e', 4), ('f', 5), ('g', 6), ('h', 7))
        >>> sheet = xl.Workbooks[1].Sheets["Sheet1"]
        >>> sheet.Range["C14","D21"].Value()
        (('foo',1),('foo',2),('foo',3),('foo',4),('foo',6),('foo',6),('foo',7),('foo',8))
        >>> sheet.Range["C14","D21"].Value[()] = vals
        # no error, but this blanks out the cells in the range

    According to the comtypes docs:

        When you pass simple sequences (lists or tuples) as VARIANT parameters, the
        COM server will receive a VARIANT containing a SAFEARRAY of VARIANTs with
        the typecode VT_ARRAY | VT_VARIANT.

    This seems to be in line with what MSDN says about passing an array to a Range's Value. I also found this page showing something similar in C#. Can anybody tell me what I'm doing wrong?

    EDIT: I've come up with a simpler example that performs the same way (in that it does not work):

        >>> from comtypes.client import CreateObject
        >>> xl = CreateObject("Excel.application")
        >>> xl.Workbooks.Add()
        >>> sheet = xl.Workbooks[1].Sheets["Sheet1"]
        # at this point, I manually typed values into the range A1:B3
        >>> sheet.Range("A1","B3").Value()
        ((u'AAA', 1.0), (u'BBB', 2.0), (u'CCC', 3.0))
        >>> sheet.Range("A1","B3").Value[()] = [(x, y) for x, y in zip('xyz', xrange(3))]
        # Using a list comprehension, per @Mike's comment
        # However, this still blanks out my range :(
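
    One low-tech way to narrow down where the conversion goes wrong (a debugging sketch, not a known fix) is to bisect on shape: write a scalar, then a single row, then two rows, reading each back to see what Excel actually stored. It reuses only the calls already shown above:

        # Assumes xl and sheet are set up as in the session above.
        probes = [
            ("C14", "C14", "scalar"),              # one cell, plain value
            ("C14", "D14", (("a", 0),)),           # one row as a 2-D tuple
            ("C14", "D15", (("a", 0), ("b", 1))),  # two rows
        ]
        for start, end, value in probes:
            rng = sheet.Range[start, end]
            rng.Value[()] = value
            print("%s:%s -> %r" % (start, end, rng.Value()))  # read back what stuck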


  • Javascript: "Dangling" Reference to DOM element?

    - by Channel72
    It seems that in JavaScript, if you have a reference to some DOM element and then modify the DOM by adding additional elements to document.body, your DOM reference becomes invalidated. Consider the following code:

        <html>
        <head>
            <script type="text/javascript">
                function work() {
                    var foo = document.getElementById("foo");
                    alert(foo == document.getElementById("foo"));
                    document.body.innerHTML += "<div>blah blah</div>";
                    alert(foo == document.getElementById("foo"));
                }
            </script>
        </head>
        <body>
            <div id="foo" onclick='work()'>Foo</div>
        </body>
        </html>

    When you click on the DIV, this alerts "true" and then "false". In other words, after modifying document.body, the reference to the DIV element is no longer valid. This behavior is the same in Firefox and MSIE. Some questions:

    1. Why does this occur?
    2. Is this behavior specified by the ECMAScript standard, or is it a browser-specific issue?

    Note: there's another question posted on Stack Overflow that seems to refer to the same issue, but neither the question nor the answers are very clear.


  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: What do you use for configurable (and preferably captured) logging inside the C++ bits of your Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    1. Have a module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    2. Have every compiled function that does logging accept optional log parameters:

        # foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    3. Some other way?

    The second one seems somewhat cleaner, but it requires altering function signatures (or using kwargs and parsing them). The first is perhaps somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks.
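
    To make option 1 concrete, here is roughly what the pure-Python side of the module-wide call and the capturing context could look like (configure_log and CaptureLog are the question's own placeholder names; these bodies are my assumption):

        import contextlib
        import io
        import sys

        _sink, _level = sys.stderr, "WARNING"

        def configure_log(stream, level):
            """Module-wide setup (option 1): one sink and level for all logging."""
            global _sink, _level
            _sink, _level = stream, level

        @contextlib.contextmanager
        def capture_log(level="DEBUG"):
            """Swap the sink for an in-memory buffer; restore it afterwards."""
            old_sink, old_level = _sink, _level
            buf = io.StringIO()
            configure_log(buf, level)
            try:
                yield buf
            finally:
                configure_log(old_sink, old_level)

        # with capture_log() as log:
        #     assert foo.bar() == 5
        #     assert "Bar returning 5" in log.getvalue()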


  • Python: circular imports needed for type checking

    - by phild
    First of all: I do know that there are already many questions and answers on the topic of circular imports, and the answer is more or less "design your module/class structure properly and you will not need circular imports". That is true. I tried very hard to come up with a proper design for my current project, and in my opinion I succeeded. But my specific problem is the following: I need a type check in a module that is already imported by the module containing the class to check against. But that throws an import error. Like so:

        # foo.py
        from bar import Bar

        class Foo(object):
            def __init__(self):
                self.__bar = Bar(self)

        # bar.py
        from foo import Foo

        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                if not isinstance(arg_instance_of_foo, Foo):
                    raise TypeError()

    Solution 1: If I modify it to check the type by a string comparison, it works (with the import of Foo dropped, since it is no longer needed). But I don't really like this solution: a string comparison is relatively expensive for a simple type check, and it could become a problem when it comes to refactoring.

        # bar_modified.py
        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                if not arg_instance_of_foo.__class__.__name__ == "Foo":
                    raise TypeError()

    Solution 2: I could also pack the two classes into one module. But my project has lots of different classes like the Bar example, and I want to separate them into different module files.

    Since neither of my own two solutions is an option for me: does anyone have a nicer solution to this problem?
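
    A third option worth considering (my suggestion, not from the question): defer the import to call time, so the name is only resolved once both modules have finished importing. The isinstance check stays, and the cycle disappears:

        # bar.py - no top-level import of foo, so importing bar never recurses
        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                from foo import Foo  # resolved lazily, on first construction
                if not isinstance(arg_instance_of_foo, Foo):
                    raise TypeError()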


  • How can I lookup an attribute in any scope by name?

    - by Wai Yip Tung
    How can I look up an attribute in any scope by name? My first attempt was to use globals() and locals(), e.g.:

        >>> def foo(name):
        ...     a = 1
        ...     print globals().get(name), locals().get(name)
        ...
        >>> foo('a')
        None 1
        >>> b = 1
        >>> foo('b')
        1 None
        >>> foo('foo')
        <function foo at 0x014744B0> None

    So far so good. However, it fails to look up any built-in names:

        >>> range
        <built-in function range>
        >>> foo('range')
        None None
        >>> int
        <type 'int'>
        >>> foo('int')
        None None

    Any idea how to look up built-in attributes?
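
    Built-in names live in neither of the dicts checked above; they come from the builtins module (__builtin__ on Python 2), which can serve as a third fallback. A sketch that searches the caller's scopes in order:

        import builtins  # on Python 2: import __builtin__ as builtins
        import inspect

        def lookup(name):
            """Search the caller's locals, then its globals, then the built-ins."""
            frame = inspect.currentframe().f_back  # the caller's frame
            try:
                if name in frame.f_locals:
                    return frame.f_locals[name]
                if name in frame.f_globals:
                    return frame.f_globals[name]
                return getattr(builtins, name, None)
            finally:
                del frame  # break the reference cycle with the frame object

        # lookup('range') -> <built-in function range>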


  • Getting registry information using Python

    - by Willy
    I am trying to pull registry info from many servers and put it all into one txt file. I got the code working fine in a .bat file. I hear that there is a way simpler way to do this in Python; I am intrigued and delighted to hear this. Can anyone help finish my code?

    My working .bat file:

        echo rfsqlcl01app >> foo.txt
        reg query "\\rfsqlcl01app\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo GLADGSQL01 >> foo.txt
        reg query "\\GLADGSQL01\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo GLADGWEB01 >> foo.txt
        reg query "\\GLADGWEB01\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo PAPERVISION >> foo.txt

    My Python code structure so far:

        >>> server_list = open('server_test.txt', 'r')
        >>> for line in server_list:
        ...     print r'reg query \\%s\blah\blah\blah' % line.strip()
        reg query \\foo\blah\blah\blah
        reg query \\moo\blah\blah\blah
        reg query \\boo\blah\blah\blah
        >>> server_list.close()
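
    For what it's worth, the standard library can do the whole job without shelling out to reg.exe: _winreg (winreg on Python 3) connects to remote registries directly, provided the Remote Registry service is running on the targets. A sketch built around the key from the .bat file above:

        import winreg  # _winreg on Python 2

        KEY = (r"SOFTWARE\Network Associates\TVD\Shared Components"
               r"\On Access Scanner\McShield\Configuration\Default")

        with open("server_test.txt") as servers, open("foo.txt", "w") as out:
            for server in (line.strip() for line in servers):
                out.write(server + "\n")
                reg = winreg.ConnectRegistry(r"\\" + server, winreg.HKEY_LOCAL_MACHINE)
                key = winreg.OpenKey(reg, KEY)
                try:
                    i = 0
                    while True:  # enumerate name/value pairs until EnumValue raises
                        name, value, _ = winreg.EnumValue(key, i)
                        out.write("%s = %s\n" % (name, value))
                        i += 1
                except OSError:  # WindowsError on Python 2: no more values
                    pass
                finally:
                    winreg.CloseKey(key)
                    winreg.CloseKey(reg)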


  • echo -e acts differently when run in a script by root on ubuntu

    - by ekrub
    When running a bash script on Ubuntu 9.10, I get different behavior from bash echo's -e option depending on whether or not I'm running as root. Consider this script:

        $ cat echo-test
        if [ "`whoami`" = "root" ]; then
            echo "Running as root"
        fi
        echo Testing /bin/echo -e
        /bin/echo -e "foo\nbar"
        echo Testing bash echo -e
        echo -e "foo\nbar"

    When run as a non-root user, I see this output:

        $ ./echo-test
        Testing /bin/echo -e
        foo
        bar
        Testing bash echo -e
        foo
        bar

    When run as root, I see this output:

        $ sudo ./echo-test
        Running as root
        Testing /bin/echo -e
        foo
        bar
        Testing bash echo -e
        -e foo
        bar

    Notice the "-e" being echoed in the last case ("-e foo" instead of "foo" on the second-to-last line). When running the script as root, echo interprets the escapes without needing -e, and echoes the option itself literally as an argument. I can understand some subtle differences in behavior between /bin/echo and bash echo, but I would expect bash echo to behave the same no matter which user invokes it. Does anyone know why this is the case? Is this a bug in bash echo? FYI - I'm running GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu).


  • OmniCppComplete: Completing on Class Members which are STL containers

    - by Robert S. Barnes
    Completion on class members which are STL containers is failing; completion on local objects which are STL containers works fine. For example, given the following files:

        // foo.h
        #include <string>

        class foo {
        public:
            void set_str(const std::string &);
            std::string get_str_reverse( void );

        private:
            std::string str;
        };

        // foo.cpp
        #include "foo.h"

        using std::string;

        string foo::get_str_reverse ( void )
        {
            string temp;
            temp.assign(str);
            reverse(temp.begin(), temp.end());

            return temp;
        }   /* ----- end of method foo::get_str ----- */

        void foo::set_str ( const string &s )
        {
            str.assign(s);
        }   /* ----- end of method foo::set_str ----- */

    I've set up tags for stdlibc++ and generated the tags for these two files using:

        ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q .

    When I type "temp." in the .cpp I get a list of string member functions as expected. But if I type "str." omnicppcomplete spits out "Pattern Not Found". I've noticed that the "temp." completion only works if I have the using std::string; declaration. How do I get completion to work on my class members which are STL containers?


  • boost::python string-convertible properties

    - by Checkers
    I have a C++ class which has the following methods:

        class Bar {
            ...
            const Foo& getFoo() const;
            void setFoo(const Foo&);
        };

    where class Foo is convertible to std::string (it has an implicit constructor from std::string and an std::string cast operator). I define a Boost.Python wrapper class which, among other things, defines a property based on the previous two functions:

        class_<Bar>("Bar")
            ...
            .add_property(
                "foo",
                make_function(
                    &Bar::getFoo, return_value_policy<return_by_value>()),
                &Bar::setFoo)
            ...

    I also mark the class as convertible to/from std::string:

        implicitly_convertible<std::string, Foo>();
        implicitly_convertible<Foo, std::string>();

    But at runtime I still get a conversion error when trying to access this property:

        TypeError: No to_python (by-value) converter found for C++ type: Foo

    How do I achieve the conversion without too much boilerplate of wrapper functions? (I already have all the conversion functions in class Foo, so duplication is undesirable.)


  • SQL Server stored procedure return code oddity

    - by gbn
    Hello. The client that calls this code is restricted and can only deal with return codes from stored procs, so we modified our usual contract to RETURN -1 on error and default to RETURN 0 if no error. If the code hits the inner CATCH block, the return code defaults to -4. Where does this come from - does anyone know?

        IF OBJECT_ID('dbo.foo') IS NOT NULL
            DROP TABLE dbo.foo
        GO
        CREATE TABLE dbo.foo (
            KeyCol char(12) NOT NULL,
            ValueCol xml NOT NULL,
            Comment varchar(1000) NULL,
            CONSTRAINT PK_foo PRIMARY KEY CLUSTERED (KeyCol)
        )
        GO
        IF OBJECT_ID('dbo.bar') IS NOT NULL
            DROP PROCEDURE dbo.bar
        GO
        CREATE PROCEDURE dbo.bar
            @Key char(12),
            @Value xml,
            @Comment varchar(1000)
        AS
        SET NOCOUNT ON
        DECLARE @StartTranCount tinyint;
        BEGIN TRY
            SELECT @StartTranCount = @@TRANCOUNT;
            IF @StartTranCount = 0 BEGIN TRAN;
            BEGIN TRY
                --SELECT @StartTranCount = 'fish'
                INSERT dbo.foo (KeyCol, ValueCol, Comment)
                VALUES (@Key, @Value, @Comment);
            END TRY
            BEGIN CATCH
                IF ERROR_NUMBER() = 2627 --PK violation
                    UPDATE dbo.foo
                    SET ValueCol = @Value, Comment = @Comment
                    WHERE KeyCol = @Key;
                ELSE
                    RAISERROR ('Tits up', 16, 1);
            END CATCH
            IF @StartTranCount = 0 COMMIT TRAN;
        END TRY
        BEGIN CATCH
            IF @StartTranCount = 0 AND XACT_STATE() <> 0 ROLLBACK TRAN;
            RETURN -1
        END CATCH
        --Without this, we'll send -4 if we hit the UPDATE CATCH block above
        --RETURN 0
        GO

        --Run with RETURN 0 and fish line commented out
        DECLARE @rtn int
        EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar />', 'testing'
        SELECT @rtn;
        SELECT * FROM dbo.foo

        DECLARE @rtn int
        EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar2 />', 'testing2'
        --updated OK but we get @rtn = -4
        SELECT @rtn;
        SELECT * FROM dbo.foo

        --uncomment fish line
        DECLARE @rtn int
        EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar />', 'testing'
        --Hit outer CATCH, @rtn = -1 as expected
        SELECT @rtn;
        SELECT * FROM dbo.foo


  • How do I write an RSpec test to unit-test this interesting metaprogramming code?

    - by Kyle Kaitan
    Here's some simple code that, for each argument specified, will add specific get/set methods named after that argument. If you write attr_option :foo, :bar, then you will see #foo/#foo= and #bar/#bar= instance methods on Config:

        module Configurator
          class Config
            def initialize()
              @options = {}
            end

            def self.attr_option(*args)
              args.each do |a|
                if not self.method_defined?(a)
                  define_method "#{a}" do
                    @options[:"#{a}"] ||= {}
                  end
                  define_method "#{a}=" do |v|
                    @options[:"#{a}"] = v
                  end
                else
                  throw Exception.new("already have attr_option for #{a}")
                end
              end
            end
          end
        end

    So far, so good. I want to write some RSpec tests to verify this code is actually doing what it's supposed to. But there's a problem! If I invoke attr_option :foo in one of the test methods, that method is now forever defined on Config, so a subsequent test will fail when it shouldn't, because foo is already defined:

        it "should support a specified option" do
          c = Configurator::Config
          c.attr_option :foo
          # ...
        end

        it "should support multiple options" do
          c = Configurator::Config
          c.attr_option :foo, :bar, :baz # Error! :foo already defined
                                         # by a previous test.
          # ...
        end

    Is there a way I can give each test an anonymous "clone" of the Config class which is independent of the others?
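
    The "anonymous clone" being asked for is a fresh, throwaway subclass per test, so class-level mutations die with it. In Ruby that would be Class.new(Configurator::Config); the same shape sketched in Python for comparison:

        import unittest

        class Config:
            """Stand-in for Configurator::Config with a class-level registry."""
            options = []

            @classmethod
            def attr_option(cls, *names):
                cls.options = cls.options + list(names)  # mutates the class itself

        class ConfigTest(unittest.TestCase):
            def fresh_config(self):
                # Anonymous subclass: mutations land here, never on Config.
                return type("ConfigClone", (Config,), {"options": []})

            def test_single_option(self):
                c = self.fresh_config()
                c.attr_option("foo")
                self.assertEqual(c.options, ["foo"])

            def test_multiple_options(self):
                c = self.fresh_config()
                c.attr_option("foo", "bar", "baz")  # no clash with the other test
                self.assertEqual(c.options, ["foo", "bar", "baz"])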

