Search Results

Search found 10517 results on 421 pages for 'foo bar'.


  • Boost's "cstdint" Usage

    - by patt0h
    Boost's C99 stdint implementation is awfully handy. One thing bugs me, though: it dumps all of its typedefs into the boost namespace. This leaves me with three choices when using this facility:

    1. Use "using namespace boost".
    2. Use "using boost::[u]<type><width>_t".
    3. Explicitly refer to the target type with the boost:: prefix; e.g., boost::uint32_t foo = 0;

    Option 1 rather defeats the point of namespaces. Even if it is used within a local scope (e.g., within a function), things like function arguments still have to be prefixed as in option 3. Option 2 is better, but there are a bunch of these types, so it can get noisy. Option 3 adds an extreme level of noise; the boost:: prefix is often as long as the type in question. My question is: what would be the most elegant way to bring all of these types into the global namespace? Should I just write a wrapper around boost/cstdint.hpp that uses option 2 and be done with it? (A hedged sketch of such a wrapper follows this item.) Also, wrapping the header like so didn't work on VC++ 10 (problems with standard library headers):

        namespace Foo {
        #include <boost/cstdint.hpp>
        using namespace boost;
        }
        using namespace Foo;

    Even if it did work, I guess it would cause ambiguity problems with the ::boost namespace.

    Read the article
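    A minimal sketch of the option-2 wrapper header discussed above. The file name and the exact list of using-declarations are assumptions; in practice you would pull in whichever of Boost's fixed-width typedefs you actually use:

        // my_stdint.hpp -- hypothetical wrapper header, not part of Boost
        #ifndef MY_STDINT_HPP
        #define MY_STDINT_HPP

        #include <boost/cstdint.hpp>

        // Individual using-declarations bring only the fixed-width names
        // into the global namespace, so ::boost itself stays unambiguous.
        using boost::int8_t;
        using boost::int16_t;
        using boost::int32_t;
        using boost::int64_t;
        using boost::uint8_t;
        using boost::uint16_t;
        using boost::uint32_t;
        using boost::uint64_t;

        #endif // MY_STDINT_HPP

    Client code would then include my_stdint.hpp instead of boost/cstdint.hpp and write uint32_t foo = 0; directly. On platforms whose <stdint.h> already puts these names in the global namespace the declarations may collide, so treat this as a sketch rather than a drop-in solution.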

  • How to store a file handle in a Perl class

    - by Haiyuan Zhang
    Please look at the following code first:

        #! /usr/bin/perl

        package foo;

        sub new {
            my $pkg = shift;
            my $self = {};
            my $self->{_fd} = undef;
            bless $self, $pkg;
            return $self;
        }

        sub Setfd {
            my $self = shift;
            my $fd = shift;
            $self_->{_fd} = $fd;
        }

        sub write {
            my $self = shift;
            print $self->{_fd} "hello word";
        }

        my $foo = new foo;

    My intention is to store a file handle within a class, using a hash. The file handle is undefined at first but can be initialized afterwards by calling the Setfd function. write can then be called to actually write the string "hello word" to the file indicated by the file handle, supposing the handle is the result of a successful open for writing. But the Perl compiler just complains that there is a syntax error in the "print" line. Can anyone tell me what's wrong here? Thanks in advance.

    Read the article

  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010 Beta 2:

        template <typename T>
        struct dummy
        {
            static T foo(void) { return T(); }
        };

        int main(void)
        {
            typedef dummy<bool> dummy_type;
            auto x = [](void){ bool b = dummy_type::foo(); };

            // auto x = [](void){ bool b = dummy<bool>::foo(); }; // works
        }

    The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases:

        // crashes the compiler, credit to Tarydon
        int main(void)
        {
            struct dummy {};
            auto x = [](void){ dummy d; };
        }

        // works as expected
        int main(void)
        {
            typedef int integer;
            auto x = [](void){ integer i = 0; };
        }

    I don't have g++ 4.5 available to test it right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug. (The crash is definitely a bug.) For now, I have filed two bug reports. All code snippets above should compile; the error has to do with using the scope resolution operator on locally defined scopes (spotted by dvide), and the crash bug has to do with... who knows. :) (A short conforming example appears after this item.)

    Update: According to the bug reports, they have both been fixed for the next release of Visual Studio 2010.

    Read the article
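    For reference, a small sketch of what a conforming C++11 compiler is expected to accept: a lambda body is looked up in its enclosing scope, so both local typedefs and local classes are usable inside it. Current compilers accept this; the MSVC 2010 beta behaviour described above was a compiler bug.

        #include <iostream>

        int main()
        {
            typedef int integer;          // local typedef
            struct Local { int v = 7; };  // local class

            auto x = []{
                integer i = 35;           // visible: the lambda body shares the
                Local l;                  // enclosing block's scope for lookup
                return i + l.v;
            };

            std::cout << x() << std::endl;  // prints 42
        }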

  • How to perform Rails model validation checks within model but outside of filters using ledermann-rails-settings and extensions

    - by user1277160
    Background: I'm using ledermann-rails-settings (https://github.com/ledermann/rails-settings) on a Rails 2/3 project to virtually extend the model with certain attributes that don't necessarily need to be placed into the DB in a wide table, and it's working out swimmingly for our needs. An additional reason I chose this gem is the post "How to create a form for the rails-settings plugin", which ties ledermann-rails-settings more closely to the model for the purpose of clean form_for usage for administrator GUI support. It's a perfect solution for addressing form_for support, although...

    Something that I'm running into now is properly validating the dynamic getters/setters before values are passed to the ledermann-rails-settings module. At the moment they are saved immediately, regardless of whether the model validation has actually fired -- I can see through script/console that validation errors are being raised.

    Example: I would like to validate that the attribute :foo is within the range 0..100 for decimal usage (or even a regex). I've found from the quoted post that I can use standard Rails validators (surprise, surprise), but I want to halt on actually saving any values until those validations pass -- ensure that the user of the GUI has given 61.43 as a numerical value. The following code has been borrowed from the quoted post:

        class User < ActiveRecord::Base
          has_settings

          validates_inclusion_of :foo, :in => 0..100

          def self.settings_attr_accessor(*args)

            >>SOME SORT OF UNLESS MODEL.VALID? CHECK HERE

            args.each do |method_name|
              eval "
                def #{method_name}
                  self.settings.send(:#{method_name})
                end

                def #{method_name}=(value)
                  self.settings.send(:#{method_name}=, value)
                end
              "
            end

            >>END UNLESS
          end

          settings_attr_accessor :foo
        end

    Anyone have any thoughts on pulling the state of the model at this point, outside of having to put this into a before filter? The goal is to be able to use the standard validations and avoid rolling custom validation checks for each new settings_attr_accessor that is added. Thanks!

    Read the article

  • Rails syntax for comments in templates: is this bug understood?

    - by brahn
    Using Rails 2.3.2, I have a partial _foo.rhtml that begins with a comment, as follows:

        <% # here is a comment %>
        <li><%= foo %></li>

    When I render the partial from a view in the traditional way, e.g.

        <% some_numbers = [1, 2, 3, 4, 5] %>
        <ul>
          <%= render :partial => "foo", :collection => some_numbers %>
        </ul>

    I found that the <li> and </li> tags are omitted in the output -- i.e. the resulting HTML is

        <ul>
          1 2 3 4 5
        </ul>

    However, I can solve this problem by fixing _foo.rhtml to eliminate the space between the <% and the #, so that the partial now reads:

        <%# here is a comment %>
        <li><%= foo %></li>

    My question: what's going on here? E.g., is <% # comment %> simply incorrect syntax for including comments in a template? Or is the problem more subtle? Thanks!

    Read the article

  • Could I do this blind relative to absolute path conversion (for perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths, so I'm playing with dotdots and indices. For those who are curious: I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check whether they're valid and to more easily convert them to their absolute path equivalents.

    I've gone through a number of (probably foolish) iterations trying to get it to work -- mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1), but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general, or specifically on the lack of support for non-consecutive dotdots, would be welcome.

    Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I don't need to do, because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation. These are my example paths that I need to convert, from:

        //depot/foo/../bar/single.c
        //depot/foo/docs/../../other/double.c
        //depot/foo/usr/bin/../../../else/more/triple.c

    to:

        //depot/bar/single.c
        //depot/other/double.c
        //depot/else/more/triple.c

    And my script:

        begin
          paths = File.open(ARGV[0]).readlines
          puts(paths)
          new_paths = Array.new
          paths.each { |path|
            folders = path.split('/')
            if ( folders.include?('..') )
              num_dotdots = 0
              first_dotdot = folders.index('..')
              last_dotdot = folders.rindex('..')
              folders.each { |item|
                if ( item == '..' )
                  num_dotdots += 1
                end
              }
              if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant?
                folders.slice!(first_dotdot - num_dotdots..last_dotdot) # dependent on consecutive dotdots only
              end
            end
            folders.map! { |elem|
              if ( elem !~ /\n/ )
                elem = elem + '/'
              else
                elem = elem
              end
            }
            new_paths << folders.to_s
          }
          puts(new_paths)
        end

    Read the article

  • How safe and reliable are C++ String Literals?

    - by DoctorT
    So, I'm wanting to get a better grasp on how string literals in C++ work. I'm mostly concerned with situations where you're assigning the address of a string literal to a pointer and passing it around. For example:

        char* advice = "Don't stick your hands in the toaster.";

    Now let's say I just pass this string around by copying pointers for the duration of the program. Sure, it's probably not a good idea, but I'm curious what would actually be going on behind the scenes. For another example, let's say we make a function that returns a string literal:

        char* foo()
        {
            // function does stuff

            return "Yikes!"; // somebody's feeble attempt at an error message
        }

    Now let's say this function is called very often, and the string literal is only used about half the time it's called:

        // situation #1: it's just randomly called without heed to the return value
        foo();

        // situation #2: the returned string is kept and used for who knows how long
        char* retVal = foo();

    In the first situation, what's actually happening? Is the string just created but not used, and never deallocated? In the second situation, is the string going to be maintained as long as the user finds need for it? What happens when it isn't needed anymore -- will that memory be freed up then (assuming nothing points to that space anymore)?

    Don't get me wrong, I'm not planning on using string literals like this. I'm planning on using a container to keep my strings in check (probably std::string). I'm mostly just wanting to know whether these situations could cause problems either for memory management or corrupted data. (A small sketch of the relevant lifetime rule follows this item.)

    Read the article
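    A small sketch of the lifetime rule the question is really about, with only the function name invented: string literals have static storage duration, so a pointer to one stays valid for the entire run of the program. Nothing is allocated per call and nothing needs to be freed.

        #include <iostream>

        // Returning a pointer to a literal is safe: "Yikes!" lives in static
        // storage for the whole program. The pointer should be const char*,
        // because the characters themselves are not writable.
        const char* error_message()
        {
            return "Yikes!";
        }

        int main()
        {
            const char* msg = error_message(); // no allocation happened here
            std::cout << msg << '\n';          // still valid at any later point
        }

    The cases that do cause trouble are writing through such a pointer (undefined behaviour) and assuming two identical literals share the same address (unspecified).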

  • Make function non-recursive

    - by user69514
    I'm not sure how to make this function non-recursive. Any ideas? (A general explicit-stack sketch follows this item.)

        void foo(int a, int b){
            while( a < len && arr[a][b] != -1){
                if(++a == len){
                    a = 0;
                    b++;
                }
            }
            if( a == len){
                size++;
                return;
            }
            if( a < (len-1)){
                arr[a][b] = 1;
                arr[a][(b+1)] = 1;
                foo(a, b);
                arr[a][b] = -1;
                arr[a][(b+1)] = -1;
            }
            if( a < (len-1) && arr[(a+1)][b] == -1){
                arr[a][b] = 0;
                arr[(a+1)][b] = 0;
                foo(a,b);
                arr[a][b] = -1;
                arr[(a+1)][b] = -1;
            }
        }

    Read the article
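    A general sketch of the explicit-stack technique (illustrative names only, not the poster's exact function): recursion that does work both before and after the recursive call can be flattened by keeping a stack of frames together with a stage marker recording how far each frame has progressed.

        #include <stack>

        // Stage 0: the frame has not run yet.
        // Stage 1: its recursive call has returned; only cleanup work remains.
        struct Frame {
            int a, b;
            int stage;
        };

        // Trivial stubs so the sketch compiles on its own; in the real function
        // these would be the base-case test and the code that ran before/after
        // the recursive call (e.g. the arr[][] marking and unmarking above).
        static bool should_recurse(int a, int b) { return a < b; }
        static void work_before(int, int) {}
        static void work_after(int, int) {}

        void foo_iterative(int a, int b) {
            std::stack<Frame> frames;
            frames.push({a, b, 0});
            while (!frames.empty()) {
                Frame &f = frames.top();   // deque-backed, so the reference
                if (f.stage == 0) {        // stays valid across the push below
                    f.stage = 1;
                    if (should_recurse(f.a, f.b)) {
                        work_before(f.a, f.b);
                        frames.push({f.a + 1, f.b, 0}); // replaces the recursive call
                    }
                } else {
                    work_after(f.a, f.b);
                    frames.pop();
                }
            }
        }

        int main() { foo_iterative(0, 5); }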

  • Why do I get errors when using unsigned integers in an expression with C++?

    - by neuviemeporte
    Given the following piece of (pseudo-C++) code:

        float x=100, a=0.1;
        unsigned int height = 63, width = 63;
        unsigned int hw=31;

        for (int row=0; row < height; ++row)
        {
            for (int col=0; col < width; ++col)
            {
                float foo = x + col - hw + a * (col - hw);
                cout << foo << " ";
            }
            cout << endl;
        }

    The values of foo are screwed up for half of the array, in the places where (col - hw) is negative. I figured that because col is an int and comes first, this part of the expression would be converted to int and become negative. Unfortunately, apparently it isn't: I get an overflow of an unsigned value, and I have no idea why.

    How should I resolve this problem? Use casts for the whole expression or part of it? What kind of casts (C-style or static_cast<...>)? Is there any overhead to using casts (I need this to work fast!)? (A hedged sketch of the cast-based fix follows this item.)

    EDIT: I changed all my unsigned ints to regular ones, but I'm still wondering why I got that overflow in this situation.

    Read the article
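    A hedged sketch of the cast-based fix, assuming the sizes genuinely need to stay unsigned (the loop is reduced to one dimension; variable names follow the question). The key point: col - hw with an unsigned operand is performed in unsigned arithmetic, so a "negative" result wraps around to a huge value. Casting both operands to int first keeps the difference negative, and only then is it converted to float.

        #include <iostream>

        int main()
        {
            const float x = 100.0f, a = 0.1f;
            const unsigned int width = 63;
            const unsigned int hw = 31;

            for (unsigned int col = 0; col < width; ++col)
            {
                // Do the subtraction in a signed type, then reuse the result.
                const int offset = static_cast<int>(col) - static_cast<int>(hw);
                const float foo = x + offset + a * offset;
                std::cout << foo << " ";
            }
            std::cout << "\n";
        }

    On typical platforms the static_cast here costs nothing at run time; it only changes which arithmetic the compiler generates.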

  • What is the difference between .get() and .fetch(1)

    - by AutomatedTester
    I have written an app, and part of it uses a URL parser to get certain data in a REST-type manner. So if you put /foo/bar as the path it will find all the bar items, and if you put /foo it will return all items below foo. My app has a query like

        data = Paths.all().filter('path =', self.request.path).get()

    which works brilliantly. Now I want to send this to the UI using templates:

        {% for datum in data %}
        <div class="content">
            <h2>{{ datum.title }}</h2>
            {{ datum.content }}
        </div>
        {% endfor %}

    When I do this I get a "data is not iterable" error. So I updated the Django to

        {% for datum in data.all %}

    which now appears to pull more data than I was giving it somehow -- it shows all data in the datastore, which is not ideal. So I removed the .all from the Django and changed the datastore query to

        data = Paths.all().filter('path =', self.request.path).fetch(1)

    which now works as I intended. In the documentation it says:

        The db.get() function fetches an entity from the datastore for a Key (or list of Keys).

    So my question is: why can I iterate over a query when it returns with fetch() but not with get()? Where has my understanding gone wrong?

    Read the article

  • Method having an abstract class as a parameter

    - by Ferhat
    I have an abstract class A, from which I have derived the classes B and C. Class A provides an abstract method DoJOB(), which is implemented by both derived classes. There is a class X which has methods that need to call DoJOB(). The class X may not contain any code like B.DoJOB() or C.DoJOB(). Example:

        public class X
        {
            private A foo;

            public X(A concrete)
            {
                foo = concrete;
            }

            public FunnyMethod()
            {
                foo.DoJOB();
            }
        }

    While instantiating class X I want to decide which derived class (B or C) must be used. I thought about passing an instance of B or C using the constructor of X:

        X kewl = new X(new C());
        kewl.FunnyMethod(); // calls C.DoJOB()

        kewl = new X(new B());
        kewl.FunnyMethod(); // calls B.DoJOB()

    My test showed that declaring a method with a parameter of type A is not working. Am I missing something? How can I implement this correctly? (A is abstract; it cannot be instantiated.) (A hedged C++ analogue of the pattern follows this item.)

    Read the article
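    The pattern described above is plain subtype polymorphism and works in any language that supports it. The snippet in the question looks like C#, but here is a hedged C++ analogue with the same names, offered as a sketch of the idea rather than a fix for whatever the original test got wrong:

        #include <iostream>
        #include <memory>

        // Abstract base: cannot be instantiated, but is a perfectly good
        // declared type for a constructor parameter or a member.
        struct A {
            virtual ~A() = default;
            virtual void DoJOB() = 0;
        };

        struct B : A { void DoJOB() override { std::cout << "B::DoJOB\n"; } };
        struct C : A { void DoJOB() override { std::cout << "C::DoJOB\n"; } };

        class X {
        public:
            explicit X(std::unique_ptr<A> concrete) : foo(std::move(concrete)) {}
            void FunnyMethod() { foo->DoJOB(); } // dispatches to B or C at run time
        private:
            std::unique_ptr<A> foo;
        };

        int main() {
            X kewl(std::make_unique<C>());
            kewl.FunnyMethod();   // prints C::DoJOB

            X other(std::make_unique<B>());
            other.FunnyMethod();  // prints B::DoJOB
        }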

  • Cannot overload function

    - by anio
    So I've got a templatized class, and I want to overload the behavior of a function when I have a specific type, say char. For all other types, let them do their own thing. However, C++ won't let me overload the function. Why can't I overload this function? I really do not want to do template specialization, because then I've got to duplicate the entire class. Here is a toy example demonstrating the problem: http://codepad.org/eTgLG932 -- the same code is posted here for your reading pleasure:

        #include <iostream>
        #include <cstdlib>
        #include <string>

        struct Bar
        {
            std::string blah() { return "blah"; }
        };

        template <typename T>
        struct Foo
        {
        public:
            std::string doX() { return m_getY(my_t); }

        private:
            std::string m_getY(char* p_msg) { return std::string(p_msg); }
            std::string m_getY(T* p_msg) { return p_msg->blah(); }

            T my_t;
        };

        int main(int, char**)
        {
            Foo<char> x;
            Foo<Bar> y;

            std::cout << "x " << x.doX() << std::endl;

            return EXIT_SUCCESS;
        }

    Thank you everyone for your suggestions. Two valid solutions have been presented: I can either specialize the doX method or specialize the m_getY method. At the end of the day I prefer to keep my specializations private rather than public, so I'm accepting Krill's answer. (A hedged sketch of the member-specialization approach follows this item.)

    Read the article
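    A hedged sketch of the member-specialization approach mentioned above: only the one member function is specialized for char, so the rest of the class template is not duplicated. The names follow the question, but the pointer parameters from the original are simplified to references so the sketch compiles and runs on its own.

        #include <iostream>
        #include <string>

        struct Bar {
            std::string blah() { return "blah"; }
        };

        template <typename T>
        struct Foo {
        public:
            std::string doX() { return m_getY(my_t); }
        private:
            std::string m_getY(T& v) { return v.blah(); } // generic case
            T my_t{};
        };

        // Explicit specialization of a single member: only Foo<char> gets this
        // body; every other Foo<T> keeps the generic m_getY above.
        template <>
        std::string Foo<char>::m_getY(char& c) { return std::string(1, c); }

        int main() {
            Foo<char> x;
            Foo<Bar>  y;
            std::cout << "x " << x.doX() << "\n";
            std::cout << "y " << y.doX() << "\n";
        }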

  • How to unit test generic classes

    - by Rowland Shaw
    I'm trying to set up some unit tests for an existing compact framework class library. However, I've fallen at the first hurdle, where it appears that the test framework is unable to load the types involved (even though they're both in the class library being tested):

        Test method MyLibrary.Tests.MyGenericClassTest.MyMethodTest threw exception:
        System.MissingMethodException: Could not load type 'MyLibrary.MyType' from
        assembly 'MyLibrary, Version=1.0.3778.36113, Culture=neutral, PublicKeyToken=null'..

    My code is loosely:

        public class MyGenericClass<T> : List<T> where T : MyType, new()
        {
            public bool MyMethod(T foo)
            {
                throw new NotImplementedException();
            }
        }

    With test methods:

        public void MyMethodTestHelper<T>() where T : MyType, new()
        {
            MyGenericClass<T> target = new MyGenericClass<T>();
            foo = new T();
            expected = true;
            actual = target.MyMethod(foo);
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void MyMethodTest()
        {
            MyMethodTestHelper<MyType>();
        }

    I'm a bit stumped though, as I can't even get it to break in the debugger to get to the inner exception, so what else do I check?

    EDIT: this does seem to be something specific to the Compact Framework -- recompiling the class libraries and the unit tests for the full framework gives the expected output (i.e. the debugger stops when I'm going to throw a NotImplementedException).

    Read the article

  • Howto: Access a second related model in a nested attribute builder block

    - by Joe Cairns
    I have a basic has_many through relationship:

        class Foo < ActiveRecord::Base
          has_many :bars, :dependent => :destroy
          has_many :wtfs :through => :bars

          accepts_nested_attributes_for :bars, :wtfs
        end

    On my crud forms I have a builder block for the wtf, but I need the label to come from the bar (an attribute called label for instance). What's the proper method to do this? Here's the most simple scaffold:

        <h1>New foo</h1>

        <% form_for(@foo) do |f| %>
          <%= f.error_messages %>
          <p>
            <%= f.label :name %><br />
            <%= f.text_field :name %>
          </p>

          <h2>Bars</h2>
          <% f.fields_for :wtfs do |builder| %>
            <%= builder.hidden_field :bar_id %>
            <p>
              <%= builder.text_field :wtf_data_i_need_to_set %>
            </p>
          <% end %>

          <p>
            <%= f.submit 'Create' %>
          </p>
        <% end %>

        <%= link_to 'Back', foos_path %>

    Read the article

  • Multiple leaf methods problem in composite pattern

    - by Ondrej Slinták
    At work, we are developing a PHP application that will later be re-programmed in Java. With some basic knowledge of Java, we are trying to design everything to be easily rewritten, without any headaches. An interesting problem came up when we tried to implement the composite pattern with a huge number of methods in the leaves. What we are trying to achieve (not using interfaces, it's just an example):

        class Composite { ... }

        class LeafOne {
            public function Foo( );
            public function Moo( );
        }

        class LeafTwo {
            public function Bar( );
            public function Baz( );
        }

        $c = new Composite( Array( new LeafOne( ), new LeafTwo( ) ) );

        // will call method Foo in all classes in composite that contain this method
        $c->Foo( );

    It seems like a pretty classic composite pattern, but the problem is that we will have quite a lot of leaf classes, and each of them might have ~5 methods (of which a few might differ from the others). One of our solutions, which seems to be the best one so far and might actually work, is using the __call magic method to call methods in the leaves. Unfortunately, we don't know whether there is an equivalent of it in Java.

    So the actual question is: is there a better solution for this, using code that could eventually be easily re-coded in Java? Or do you recommend any other solution? Perhaps there's some different, better pattern I could use here. In case anything is unclear, just ask and I'll edit this post.

    Read the article

  • In ASP.NET, is it possible to output cache by host name? i.e. varybyhost or varybyhostheader?

    - by Pure.Krome
    Hi folks, I've got a website that has a number of host headers. Depending on the host header, the results are different -- both visually (themed) and in data. So let's imagine I have a website called 'Foo' that returns search results (original, eh?). Now, the same code runs both sites. It is physically the same server/website (using host headers):

        www.foo.com
        www.foo.com.au

    Now, if I go to the .com site, it's themed in blue. If I go to the .com.au site, it's themed in red. And the data is different for the same search, based on the host name (i.e. US results for .com, AU results for .com.au).

    So, if I wish to use OutputCaching, can this be handled / differ by the host name? I don't want the first person to go to the .com site and grab the results, and then a second person to go to my .com.au site with the same search data and get the theme and results for the .com site. Possible?

    Read the article

  • regexp target last main li in list

    - by veilig
    I need to target the opening tag of the last top-level LI in a list that may or may not contain sublists in various positions -- without using CSS or JavaScript. Is there a simple/elegant regexp that can help with this? I'm no guru with them, but it appears that the need for greedy/non-greedy selectors when I'm selecting all the middle text (.*) / (.+) changes as nested lists are added and moved around in the list, and this is throwing me off.

        $pattern = '/^(<ul>.*)<li>(.+<\/li><\/ul>)$/';
        $replacement = '$1<li id="lastLi">$3';

    Perhaps there is an easier approach? Converting to XML to target the LI and then converting back? For example:

    Single Element:

        <ul>
          <li>TARGET</li>
        </ul>

    Multiple Elements:

        <ul>
          <li>foo</li>
          <li>TARGET</li>
        </ul>

    Nested Lists before end:

        <ul>
          <li>
            foo
            <ul>
              <li>bar</li>
            </ul>
          <li>
          <li>TARGET</li>
        </ul>

    Nested List at end:

        <ul>
          <li>foo</li>
          <li>
            TARGET
            <ul>
              <li>bar</li>
            </ul>
          </li>
        </ul>

    Read the article

  • mysql_affected_rows() returns 0 for UPDATE statement even when an update actually happens

    - by Alex Moore
    I am trying to get the number of rows affected by a simple MySQL update query. However, when I run the code below, PHP's mysql_affected_rows() always equals 0 -- no matter whether foo is already 1 (in which case the function should correctly return 0, since no rows were changed) or foo currently equals some other integer (in which case the function should return 1).

        $updateQuery = "UPDATE myTable SET foo=1 WHERE bar=2";
        mysql_query($updateQuery);

        if (mysql_affected_rows() > 0) {
            echo "affected!";
        } else {
            echo "not affected"; // always prints not affected
        }

    The UPDATE statement itself works -- the INT gets changed in my database. I have also double-checked that the database connection isn't being closed beforehand or anything funky. Keep in mind that mysql_affected_rows doesn't necessarily require you to pass a connection link identifier, though I've tried that too. Details on the function: mysql_affected_rows.

    Any ideas?

    SOLUTION: The part I didn't mention turned out to be the cause of my woes here. This PHP file was being called ten times consecutively in an AJAX call, and I was only looking at the value returned on the last call, i.e. a big fat 0. My apologies!

    Read the article

  • Is a private method in a Spring service implementation class thread safe?

    - by Roger Ray
    I have a service in a project using the Spring framework:

        public class MyServiceImpl implements IMyService {

            public MyObject foo(SomeObject obj) {
                MyObject myobj = this.mapToMyObject(obj);
                myobj.setLastUpdatedDate(new Date());
                return myobj;
            }

            private MyObject mapToMyObject(SomeObject obj){
                MyObject myojb = new MyObject();
                ConvertUtils.register(new MyNullConvertor(), String.class);
                ConvertUtils.register(new StringConvertorForDateType(), Date.class);
                BeanUtils.copyProperties(myojb , obj);
                ConvertUtils.deregister(Date.class);
                return myojb;
            }
        }

    Then I have a class that calls foo() from multiple threads, and there the problem starts. In some of the threads I get an error when calling BeanUtils.copyProperties(myojb, obj), saying:

        Cannot invoke com.my.MyObject.setStartDate - java.lang.ClassCastException@2da93171

    Obviously, this is caused by ConvertUtils.deregister(Date.class), which is supposed to be called after BeanUtils.copyProperties(myojb, obj). It looks like one of the threads deregistered the Date class while another thread was just about to call BeanUtils.copyProperties(myojb, obj).

    So my question is: how do I make the private method mapToMyObject() thread safe? Or, simply, how do I make BeanUtils thread safe when it's used in a private method? And will the problem still be there if I keep the code this way but instead call this foo() method from a servlet? If many servlets call it at the same time, would that be a multi-threaded case as well?

    Read the article

  • No "redefinition of default parameter error" for class template member function?

    - by STingRaySC
    Why does the following give no compilation error?

        // T.h
        template<class T>
        class X
        {
        public:
            void foo(int a = 42);
        };

        // Main.cpp
        #include "T.h"
        #include <iostream>

        template<class T>
        void X<T>::foo(int a = 13)
        {
            std::cout << a << std::endl;
        }

        int main()
        {
            X<int> x;
            x.foo(); // prints 42
        }

    It seems as though the 13 is just silently ignored by the compiler. Why is this? The kooky thing is that if the template declaration is in Main.cpp instead of a header file, I do indeed get the default parameter redefinition error. Now, I know the compiler will complain about this if it were just an ordinary (non-template) function. What does the standard have to say about default parameters in class template member functions or function templates? (A hedged sketch of the usual convention follows this item.)

    Read the article
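    Whatever a particular compiler diagnoses, the usual convention is the one sketched below, assuming nothing beyond the question's own code: state the default argument once, in the declaration, and leave it off the out-of-line definition. For a template, the definition normally lives in the same header so every translation unit can instantiate it.

        #include <iostream>

        // T.h (sketch)
        template <class T>
        class X
        {
        public:
            void foo(int a = 42);   // the default argument lives here, and only here
        };

        // Out-of-line definition: no default argument repeated.
        template <class T>
        void X<T>::foo(int a)
        {
            std::cout << a << std::endl;
        }

        int main()
        {
            X<int> x;
            x.foo();   // prints 42
            x.foo(13); // prints 13
        }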

  • Am I doing something wrong here (references in C++)?

    - by m4design
    I've been playing around with references (I'm still having issues in this regard).

    1. I would like to know whether this is acceptable code:

        int & foo(int &y) {
            return y; // is this wrong?
        }

        int main() {
            int x = 0;
            cout << foo(x) << endl;
            foo(x) = 9; // is this wrong?
            cout << x << endl;
            return 0;
        }

    2. Also, this is from an exam sample:

        Week & Week::highestSalesWeek(Week aYear[52])
        {
            Week max = aYear[0];
            for(int i = 1; i < 52; i++)
            {
                if (aYear[i].getSales() > max.getSales())
                    max = aYear[i];
            }
            return max;
        }

    It asks about the mistake in this code and how to fix it. My guess is that it returns a reference to a local. The fix is:

        Week & max = aYear[0];

    Is this correct/enough? (A small sketch contrasting the two cases in question 1 follows this item.)

    Read the article
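    A small sketch contrasting the two cases from question 1 (only the commented-out function name is invented): returning the parameter's referent is fine, because the referred-to object belongs to the caller and outlives the call, while returning a reference to a local copy would dangle.

        #include <iostream>
        using std::cout;

        // Fine: y refers to an object owned by the caller, so the returned
        // reference is still valid after foo() returns. This is also why
        // foo(x) = 9 is legal -- the call expression is an lvalue naming x.
        int & foo(int &y) {
            return y;
        }

        // Broken (kept as a comment on purpose): 'copy' dies when the function
        // returns, so the returned reference would dangle.
        // int & bad(int &y) { int copy = y; return copy; }

        int main() {
            int x = 0;
            foo(x) = 9;          // writes through the reference into x
            cout << x << "\n";   // prints 9
            return 0;
        }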

  • How to support comparisons for QVariant objects containing a custom type?

    - by Tyler McHenry
    According to the Qt documentation, QVariant::operator== does not work as one might expect if the variant contains a custom type:

        bool QVariant::operator== ( const QVariant & v ) const

        Compares this QVariant with v and returns true if they are equal; otherwise
        returns false. In the case of custom types, their equalness operators are not
        called. Instead the values' addresses are compared.

    How are you supposed to get this to behave meaningfully for your custom types? In my case, I'm storing an enumerated value in a QVariant. In a header:

        enum MyEnum { Foo, Bar };

        Q_DECLARE_METATYPE(MyEnum);

    Somewhere in a function:

        QVariant var1 = QVariant::fromValue<MyEnum>(Foo);
        QVariant var2 = QVariant::fromValue<MyEnum>(Foo);
        assert(var1 == var2); // Fails!

    What do I need to do differently in order for this assertion to be true? I understand why it's not working: each variant stores a separate copy of the enumerated value, so they have different addresses. I want to know how I can change my approach to storing these values in variants so that either this is not an issue, or they both reference the same underlying variable. I don't think it's possible for me to get around needing equality comparisons to work. (A hedged workaround sketch follows this item.)

    The context is that I am using this enumeration as the UserData in items in a QComboBox, and I want to be able to use QComboBox::findData to locate the item index corresponding to a particular enumerated value.

    Read the article
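    One hedged workaround sketch, assuming the Qt 4-era behaviour quoted above: pull the enum back out of each QVariant with value<T>() and compare the enums themselves instead of the variants. The helper name myEnumVariantsEqual is invented for the sketch.

        #include <QVariant>
        #include <cassert>

        enum MyEnum { Foo, Bar };
        Q_DECLARE_METATYPE(MyEnum)

        // Compares the payloads rather than relying on QVariant::operator==,
        // which (per the quoted docs) falls back to an address comparison
        // for custom types.
        bool myEnumVariantsEqual(const QVariant &a, const QVariant &b)
        {
            return a.canConvert<MyEnum>() && b.canConvert<MyEnum>()
                && a.value<MyEnum>() == b.value<MyEnum>();
        }

        int main()
        {
            QVariant var1 = QVariant::fromValue<MyEnum>(Foo);
            QVariant var2 = QVariant::fromValue<MyEnum>(Foo);
            assert(myEnumVariantsEqual(var1, var2)); // compares Foo == Foo
            return 0;
        }

    For the QComboBox::findData use case specifically, another option worth weighing is storing the value as a plain int (QVariant(int(Foo))), for which QVariant's own comparison already behaves as expected.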

  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However, there is at least one situation in which merging non-contiguous changesets is (probably) not going to break anything: when there are no intervening changes on the affected individual files. (At least, it wouldn't break any worse than a series of cherry-picked merges checked in each time, and this way you would discover breakage before checking in.)

    For instance, let's say you have a Main and a Development branch. They start out identical (e.g. after a release) and contain two files, foo.cs and bar.cs. Alice makes a change in Development\foo.cs and checks it in as changeset #1001. Bob makes a change in Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003.

    Now we could, in theory, merge both changes #1001 and #1003 from dev to main in a single operation. If we try to merge at the branch level, dev-to-main, we will have to do it as two operations. In this simple, contrived example it's easy enough to merge the one file, but in the real world, where many files would be involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by workitem" is not implemented in TFS.

    Read the article

  • When do Symfony's user attributes get written to session?

    - by Rob Wilkerson
    I have a Symfony app that populates the "widgets" of a portal application, and I'm noticing something that seems odd. The portal app has iframes that make calls to the Symfony app. On each of those calls, a random user key is passed on the query string. The Symfony app stores that key in its session using myUser->setAttribute(). If the incoming value is different from what it has in session, it overwrites the session value. In pseudo-code (written as if it were synchronous for clarity, even though it may not be):

        # Widget request arrives with ?foo=bar
        if the user attribute 'foo' does not equal 'bar'
            overwrite the user attribute 'foo' with 'bar'
        end

    What I'm noticing is that, on a portal page with multiple widgets (read: multiple requests coming in more or less simultaneously) where the value needs to be overwritten, each request tries to overwrite. Is this a timing problem? When I look at the log prints, I'd expect the first request that arrives to overwrite and subsequent requests to see that the user attribute they received matches what was just put into the cache by the initial request. In this scenario, it could be that subsequent requests begin (and are checked) even before the first one -- the one that should overwrite the cached value -- has completely finished.

    Are session values not really available to subsequent requests until one request has completed entirely, or could there be something else that I'm missing? Thanks.

    Read the article

  • Django: Create custom template tag -> ImportError

    - by Alexander Scholz
    I'm sorry to ask this again, but I tried several solutions from Stack Overflow and some tutorials and I couldn't create a custom template tag. All I get is ImportError: No module named test_tag when I try to start the server via python manage.py runserver. I created a very basic template tag (found here: django templatetag?) like so:

    My folder structure:

        demo
            manage.py
            test
                __init__.py
                settings.py
                urls.py
                ...
                templatetags
                    __init__.py
                    test_tag.py

    test_tag.py:

        from django import template

        register = template.Library()

        @register.simple_tag
        def test_tag(input):
            if "foo" == input:
                return "foo"
            if "bar" == input:
                return "bar"
            if "baz" == input:
                return "baz"
            return ""

    index.html:

        {% load test_tag %}
        <html>
        <head>
        ...
        </head>
        <body>
        {% cms_toolbar %}
        {% foobarbaz "bar" %}
        {% foobarbaz "elephant" %}
        {% foobarbaz "foo" %}
        </body>
        </html>

    and my settings.py:

        INSTALLED_APPS = (
            ...
            'test_tag',
            ...
        )

    Please let me know if you need further information from my settings.py, and what I did wrong that I can't even start the server. (If I delete 'test_tag' from INSTALLED_APPS I can start the server, but then I get the error that test_tag is not known, of course.) Thanks

    Read the article
