Search Results

Search found 3923 results on 157 pages for 'confused about webdav'.


  • git: setting a single tracking remote from a public repo.

    - by Gauthier
    I am confused with remote branches. My local repo:

        (local) ---A---B---C-master

    My remote repo (called int):

        (int) ---A---B---C---D---E-master

    What I want to do is to set up the local repo's master branch to follow that of int:

        (local) ---A---B---C---D---E-master-remotes/int/master

    so that when int changes to:

        (int) ---A---B---C---D---E---F-master

    I can run git pull from the local repo's master and get:

        (local) ---A---B---C---D---E---F-master-remotes/int/master

    Here's what I have tried:

    1. git fetch int gets me all the branches of int as remote branches. This can get messy, since int might have hundreds of branches.
    2. git fetch int master gets me the commits, but no ref to them, only FETCH_HEAD. No remote branch either.
    3. git fetch int master:new_master works, but I don't want a new name every time I update, and no remote branch is set up.
    4. git pull int master does what I want, but there is still no remote branch set up. I feel that it is OK to do so (that's the best I have now), but I read here and there that with the remote set up, a plain git pull is enough.
    5. git branch --track new_master int/master, as per http://www.gitready.com/beginner/2009/03/09/remote-tracking-branches.html. I get "not a valid object name: int/master". git remote -v does show me that int is defined and points at the correct location (so 1 worked). What I miss is the int/master branch, which is precisely what I want to get.
    6. git fetch int master:int/master. Well, int/master is created, but it is no remote branch.

    So to summarize, I've tried some stuff with no luck. I would expect 2 to give me the remote branch to master in the repo int. The solution I use now is option 3. I read somewhere that you could change some config file by hand, but isn't that a bit cumbersome?

    The "cumbersome" way of editing the config file did work:

        [branch "master"]
            remote = int
            merge = master

    It can be done from the command line:

        $ git config branch.master.remote int
        $ git config branch.master.merge master

    Any reason why option 2 above wouldn't do that automatically? Even in that case, git pull fetches all branches from the remote.
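
    A hedged sketch of one way to get what the question asks for (remote name taken from the question; the refspec syntax is standard git): a one-branch fetch refspec creates the remote-tracking branch that option 2 skips, and two config keys make plain git pull use it.

        # create remotes/int/master without fetching every branch
        $ git fetch int +refs/heads/master:refs/remotes/int/master

        # make master pull from int's master from now on
        $ git config branch.master.remote int
        $ git config branch.master.merge refs/heads/master

        # optional: restrict what a plain fetch/pull downloads, so
        # `git pull` stops fetching all of int's branches
        $ git config remote.int.fetch '+refs/heads/master:refs/remotes/int/master'

        $ git pull    # now fetches and merges int's master only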

    Read the article

  • Multiple layouts in Rails [Newbie Q]

    - by BriteLite
    Hi. As a newb, I decided to build a "home inventory" application. I am now stuck on how to programmatically select a layout based on what type of item is being viewed in the browser. According to my planning so far, I should have created a few models to represent the types of items I can find in my home: Furniture, Electronics and Books.

        class Book < ActiveRecord::Base
        end

        class Furniture < ActiveRecord::Base
        end

        class Electronic < ActiveRecord::Base
        end

    Now the Book model has things like isbn, pages, address, and category. The Furniture model has things like color, price, address, and category. Electronics has things like name, voltage, address, and category.

    Here is where I got confused. I know the property address is going to be the same for all of them. I also know that I will need to create multiple layouts for the 3 different types of items, to show the different properties of said items with appropriate graphics and stylesheets. But how will I go about deciding which category the item is, so I can determine which layout to render? This is how I would do it:

        class DisplayController < ApplicationController
          def display
            @item = params[:item]
            if @item.category == "electronics"
              render :layout => 'electronics'
            end
          end
        end

    In my routes.rb:

        map.display ':item', :controller => 'display', :action => 'display'

    I only have one concern with this: I will probably add a lot of categories later on, and I think there should be a more DRY way of dealing with them, rather than hardcoding each one. I understand that I need to add HTML tags to my layout to display relevant information for that particular category.

    ---- Questions ----

    1. Is this the right way to approach this type of problem?
    2. Will this approach be compatible when I decide to add a gem like thinking_sphinx to run search?
    3. What issues do you see with my approach, and how can I make it better? I was reading something about "Polymorphic Associations"; does that apply in this case, since category exists for all items?
    4. Also, I was trying to get a route to render a URL like "http://localhost/living-room-tv".
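
    One hedged way to avoid hardcoding a branch per category (assuming each item record has a category column, and a layout file of the same name exists) is Rails' layout method with a symbol, so new categories only need a new layout file, not new controller code:

        class DisplayController < ApplicationController
          layout :layout_for_item

          def display
            # finder is hypothetical; look the item up however fits your schema
            @item = Item.find_by_slug(params[:item])
          end

          private

          # Rails calls this when rendering; returning "electronics" renders
          # app/views/layouts/electronics.html.erb
          def layout_for_item
            @item ? @item.category : 'application'
          end
        end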

    Read the article

  • template pass by const reference

    - by 7vies
    Hi, I've looked over a few similar questions, but I'm still confused. I'm trying to figure out how to explicitly (not by compiler optimization etc.) and C++03-compatibly avoid copying an object when passing it to a template function. Here is my test code:

        #include <iostream>
        using namespace std;

        struct C
        {
            C() { cout << "C()" << endl; }
            C(const C&) { cout << "C(C)" << endl; }
            ~C() { cout << "~C()" << endl; }
        };

        template<class T> void f(T) { cout << "f<T>" << endl; }
        template<> void f(C c) { cout << "f<C>" << endl; }          // (1)
        template<> void f(const C& c) { cout << "f<C&>" << endl; }  // (2)

        int main()
        {
            C c;
            f(c);
            return 0;
        }

    (1) accepts the object of type C and makes a copy. Here is the output:

        C()
        C(C)
        f<C>
        ~C()
        ~C()

    So I tried to specialize with a const C& parameter (2) to avoid the copy, but that simply doesn't work (apparently the reason is explained in this question). Well, I could "pass by pointer", but that's kind of ugly. So is there some trick that would allow me to do this nicely?

    EDIT: Oh, probably I wasn't clear. I already have a template function:

        template<class T> void f(T) { ... }

    But now I want to specialize this function to accept a const& to another object:

        template<> void f(const SpecificObject&) { ... }

    However, it only gets called if I define it as:

        template<> void f(SpecificObject) { ... }
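
    For what it's worth, a plain non-template overload (rather than an explicit specialization) is the usual C++03 trick here: overload resolution prefers the non-template function for a C argument, and it binds by const reference, so no copy is made. A minimal sketch:

        #include <iostream>

        struct C {
            C() { std::cout << "C()\n"; }
            C(const C&) { std::cout << "C(C)\n"; }
            ~C() { std::cout << "~C()\n"; }
        };

        template<class T> void f(T) { std::cout << "f<T>\n"; }

        // non-template overload: chosen for C arguments, no copy
        void f(const C&) { std::cout << "f(const C&)\n"; }

        int main() {
            C c;
            f(c);    // prints C() then f(const C&): one construction, zero copies
            return 0;
        }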

    Read the article

  • Tab Bar and Nav Controller: Where did I go wrong in my Interface Builder wiring?

    - by editor
    Even if you don't know how I've shot myself in the foot (a story I've tried to lay out below), if you think I've done a good job showing the parameters of my problem, I'd appreciate an upvote so that I might be able to grab some attention for my question.

    I've been working on an iPhone application in Xcode and Interface Builder, of the Tab Bar project type. After getting a table view of topics (business sectors) working fine, I realized that I would need to add a Navigation Controller to allow the user to drill into a subtopics (subsectors) table. As a green Objective-C developer, that was confusing, but I managed to get it working by reading various documentation and trying out a few different IB options.

    My current setup is a Tab Bar Controller with tab 1 as a Navigation Controller and tab 2 a plain view with a Table View placed into it. The wiring works: I can log when table rows are selected, and I'm ready to push a new View Controller onto the stack so that I can display the subtopics Table View.

    My problem: for some reason the first tab's Table View is a delegate and dataSource of the second tab. It doesn't make sense to me, and I can't figure out why that's the only setup that works. Here is the wiring:

    - Navigation Controller (Sectors) is a delegate of Tab Bar
    - Navigation Bar is a delegate of Navigation Controller (Sectors)
    - View Controller (Sectors) has a view of Table View
    - Table View (in Navigation Controller (Sectors)) is a delegate of First View Controller (Companies)
    - Table View (in Navigation Controller (Sectors)) is a dataSource outlet of First View Controller (Companies)
    - First View Controller (Companies) has a view of Table View
    - Table View (in First View Controller (Companies)) is not hooked up to a dataSource outlet and is not a delegate

    When I click the tab buttons and look at the Inspector, I see that the first tab is correctly hooked up to my MainWindow.xib and the second tab has selected a nib called SecondView.xib. It's in the File's Owner of MainWindow.xib where I inherit UITableViewDataSource and UITableViewDelegate (and also UITabBarControllerDelegate) in the .h, and in the .m where I implement the delegate methods.

    Why does this setup only work when the Table View in my first tab (View Controller (Sectors)) is a delegate and dataSource of the second tab? I'm confused: why wouldn't it need to be hooked up to the Navigation Controller-enabled tab in which the Table View is seen (Navigation Controller (Sectors))? The Table View seen on the second tab has no dataSource and is not a delegate.

    I'm having trouble getting pushViewController to fire (self.navigationController is not nil, but the new View Controller still doesn't load), and I suspect that I need to work out this IB wiring issue before I can troubleshoot why the Nav Controller won't push a new View Controller onto the stack.

        if (nil == self.navigationController) {
            NSLog(@"self.navigationController is nil.");
        } else {
            NSLog(@"self.navigationController is not nil.");
            SectorList *subsectorViewController = [[SectorList alloc] initWithNibName:@"SectorList" bundle:nil];
            subsectorViewController.title = @"Subsectors";
            [[self navigationController] pushViewController:subsectorViewController animated:YES];
            [subsectorViewController release];
        }
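
    For comparison, a hedged all-code version of the same stack, with no IB wiring (class names taken from the question; the tab bar controller and companies controller are assumed to exist): the essential point is that the table controller must be the root of the navigation controller for its self.navigationController to be non-nil and for pushes to work.

        SectorList *sectorsVC = [[SectorList alloc] initWithNibName:@"SectorList" bundle:nil];
        UINavigationController *sectorsNav =
            [[UINavigationController alloc] initWithRootViewController:sectorsVC];
        [sectorsVC release];

        // Tab 2 stays a plain view controller; the table it shows needs
        // ITS OWN delegate/dataSource, never the first tab's.
        tabBarController.viewControllers =
            [NSArray arrayWithObjects:sectorsNav, companiesVC, nil];
        [sectorsNav release];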

    Read the article

  • C++ global operator not playing well with template class

    - by John
    OK, I found some similar posts on Stack Overflow, but I couldn't find any that pertained to my exact situation, and I was confused by some of the answers given. So here is my problem: I have a template matrix class as follows:

        template <typename T, size_t ROWS, size_t COLS>
        class Matrix
        {
        public:
            template<typename, size_t, size_t> friend class Matrix;

            Matrix( T init = T() ) : _matrix(ROWS, vector<T>(COLS, init))
            {
                /*for( int i = 0; i < ROWS; i++ ) {
                    _matrix[i] = new vector<T>( COLS, init );
                }*/
            }

            Matrix<T, ROWS, COLS> & operator+=( const T & value )
            {
                for( vector<T>::size_type i = 0; i < this->_matrix.size(); i++ )
                {
                    for( vector<T>::size_type j = 0; j < this->_matrix[i].size(); j++ )
                    {
                        this->_matrix[i][j] += value;
                    }
                }
                return *this;
            }

        private:
            vector< vector<T> > _matrix;
        };

    and I have the following global function template:

        template<typename T, size_t ROWS, size_t COLS>
        Matrix<T, ROWS, COLS> operator+( const Matrix<T, ROWS, COLS> & lhs,
                                         const Matrix<T, ROWS, COLS> & rhs )
        {
            Matrix<T, ROWS, COLS> returnValue = lhs;
            return returnValue += lhs;
        }

    To me, this seems right. However, when I try to compile the code, I get the following error (thrown from the operator+ function):

        binary '+=' : no operator found which takes a right-hand operand of type
        'const matrix::Matrix<T,ROWS,COLS>' (or there is no acceptable conversion)

    I can't figure out what to make of this. Any help is greatly appreciated!
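
    A hedged reading of the error: Matrix only declares operator+=(const T&), i.e. matrix plus scalar, so returnValue += lhs (a whole Matrix) has no viable overload. (Note also that the intended right-hand operand was probably rhs, not lhs.) A sketch of the missing member:

        Matrix<T, ROWS, COLS> & operator+=( const Matrix<T, ROWS, COLS> & other )
        {
            // typename is required for the dependent name vector<T>::size_type
            for( typename vector<T>::size_type i = 0; i < _matrix.size(); i++ )
                for( typename vector<T>::size_type j = 0; j < _matrix[i].size(); j++ )
                    _matrix[i][j] += other._matrix[i][j];
            return *this;
        }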

    Read the article

  • jQuery multiple class selection

    - by morpheous
    I am a bit confused by this: I have a page with a set of buttons (i.e. elements with class attribute 'button'). The buttons belong to one of two classes (grp1 and grp2). These are my requirements:

    1. For buttons with class enabled, when the mouse hovers over them, a 'button-hover' class is added to them (i.e. the element the mouse is hovering over). Otherwise, the hover event is ignored.
    2. When one of the buttons with class grp2 is clicked on (it has to be 'enabled' first), I disable (i.e. remove) the 'enabled' class for all elements with class 'enabled' (I should probably be selecting for elements with class 'button' AND 'enabled', but I am having enough problems as it is, so I need to keep things simple for now).

    This is what my page looks like:

        <html>
        <head>
        <title>Demo</title>
        <style type="text/css">
        .button {border: 1px solid gray; color: gray}
        .enabled {border: 1px solid red; color: red}
        .button-hover {background-color: blue; }
        </style>
        <script type="text/javascript" src="jquery.js"></script>
        </head>
        <body>
        <div class="btn-cntnr">
        <span class="grp1 button enabled">button 1</span>
        <span class="grp2 button enabled">button 2</span>
        <span class="grp2 button enabled">button 3</span>
        <span class="grp2 button enabled">button 4</span>
        </div>
        <script type="text/javascript">
        /* <![CDATA[ */
        $(document).ready(function(){
            $(".button.enabled").hover(function(){
                $(this).toggleClass('button-hover');
            }, function() {
                $(this).toggleClass('button-hover');
            });
            $('.grp2.enabled').click(function(){
                $(".grp2").removeClass('enabled');
            });
        });
        /* ]]> */
        </script>
        </body>
        </html>

    Currently, when a button with class 'grp2' is clicked on, the other elements with class 'grp2' have the 'enabled' class removed (works correctly). HOWEVER, I notice that even though the elements no longer have an 'enabled' class, SOMEHOW the hover behaviour is still applied to them (WRONG). Once an element has been 'disabled', I no longer want it to respond to the hover() event. How may I implement this behaviour, and what is wrong with the code above, i.e. why is it not working? (I am still learning jQuery.)
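
    A hedged sketch of one fix: handlers bound at load time stay bound after the class is removed, so either re-check the class inside the handler, or use delegated events, which re-match the selector on every event (.on needs jQuery 1.7+; the older equivalents were .live/.delegate):

        $(document).ready(function () {
            $('.button').hover(function () {
                // check the class on every hover, not once at bind time
                if ($(this).hasClass('enabled')) {
                    $(this).addClass('button-hover');
                }
            }, function () {
                $(this).removeClass('button-hover');
            });

            // delegated click: only fires while the element still matches
            $('.btn-cntnr').on('click', '.grp2.enabled', function () {
                $('.grp2').removeClass('enabled');
            });
        });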

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work??

    - by themoondothshine
    Hey all, I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context:

    - I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so.
    - An application is linked against libsome1.so.
    - This application uses libdl.so to dynamically load another module, say libmagic.so.
    - Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION.
    - So I next try to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. This works... or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1).
    - However, when some data structures are serialized to disk, I noticed some corruption. If, in the application's directory, I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the same corruption does not happen.

    I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2)... Nothing seems to work!!! Help!!!!!!

    Edit: I should have mentioned it earlier, but the app in question is Firefox, and libsome1.so is the libsqlite3.so shipped with it. I don't quite have the option of recompiling them. Also, using version scripts to hide symbols seems to be the only solution right now. So what really happens when symbols are hidden? Do they become 'local' to the SO? Does rtld have no knowledge of their existence? What happens when an exported function refers to a hidden symbol?
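
    For reference, a minimal version script of the kind described (symbol names are hypothetical). Symbols under local: do become local to the SO: they are dropped from the dynamic symbol table, so rtld never uses or sees them for cross-library resolution, while internal references to them were already bound inside the SO at static link time:

        /* libmagic.map -- used as: gcc -shared -Wl,--version-script=libmagic.map ... */
        {
            global:
                magic_open;      /* the three exported entry points */
                magic_process;
                magic_close;
            local:
                *;               /* everything else is hidden */
        };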

    Read the article

  • Multi-threaded random_r is slower than the single-threaded version

    - by Nixuz
    The following program is essentially the same as the one described here. When I compile and run the program with two threads (NTHREADS == 2), I get the following run times:

        real    0m14.120s
        user    0m25.570s
        sys     0m0.050s

    When it is run with just one thread (NTHREADS == 1), I get significantly better run times, even though it is only using one core:

        real    0m4.705s
        user    0m4.660s
        sys     0m0.010s

    My system is dual core; I know random_r is thread-safe, and I am pretty sure it is non-blocking. When the same program is run without random_r, with a calculation of cosines and sines used as a replacement, the dual-threaded version runs in about half the time, as expected.

        #include <pthread.h>
        #include <stdlib.h>
        #include <stdio.h>

        #define NTHREADS 2
        #define PRNG_BUFSZ 8
        #define ITERATIONS 1000000000

        void* thread_run(void* arg)
        {
            int r1, i, totalIterations = ITERATIONS / NTHREADS;
            for (i = 0; i < totalIterations; i++) {
                random_r((struct random_data*)arg, &r1);
            }
            printf("%i\n", r1);
        }

        int main(int argc, char** argv)
        {
            struct random_data* rand_states = (struct random_data*)calloc(NTHREADS, sizeof(struct random_data));
            char* rand_statebufs = (char*)calloc(NTHREADS, PRNG_BUFSZ);
            pthread_t* thread_ids;
            int t = 0;
            thread_ids = (pthread_t*)calloc(NTHREADS, sizeof(pthread_t));

            /* create threads */
            for (t = 0; t < NTHREADS; t++) {
                initstate_r(random(), &rand_statebufs[t], PRNG_BUFSZ, &rand_states[t]);
                pthread_create(&thread_ids[t], NULL, &thread_run, &rand_states[t]);
            }
            for (t = 0; t < NTHREADS; t++) {
                pthread_join(thread_ids[t], NULL);
            }
            free(thread_ids);
            free(rand_states);
            free(rand_statebufs);
        }

    I am confused why, when generating random numbers, the two-threaded version performs much worse than the single-threaded version, considering random_r is meant to be used in multi-threaded applications.
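
    A hedged explanation consistent with these numbers is false sharing: both random_data states come from one calloc and typically land on the same cache line, so every random_r write ping-pongs that line between the two cores. One way to test the hypothesis is to pad each state out to its own cache line:

        /* sketch: give each thread's PRNG state a private cache line
           (64 bytes is an assumption; check your CPU) */
        #define CACHE_LINE 64
        struct padded_random_data {
            struct random_data data;
            char pad[CACHE_LINE - (sizeof(struct random_data) % CACHE_LINE)];
        };
        /* then: calloc(NTHREADS, sizeof(struct padded_random_data)) and pass
           &states[t].data to initstate_r/random_r as before */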

    Read the article

  • WCF app based on the Peer Chat sample does not work

    - by splate
    I converted a VB .NET 3.5 app to use peer-to-peer WCF, using the available Microsoft example of the Chat app. I made sure that I copied the app.config file from the sample (modifying the names for my app) and added the appropriate references. I followed all the tutorials and added the appropriate tags and structure in my app code. Everything runs without any errors, but the clients only get messages from themselves and not from the other clients. The sample chat application does run just fine with multiple clients.

    The only difference I could find is that the server in the sample targets framework 2.0, but I assume that is wrong and it is building in at least 3.0, or the System.ServiceModel reference would break. Is there something that has to be registered that the sample is doing behind the scenes, or is the sample a special project type? I am confused. My next step is to copy all my classes and logic from my app to the sample app, but that is likely a lot of work.

    Here is my client App.config:

        <client>
          <endpoint name="thldmEndPoint"
                    address="net.p2p://thldmMesh/thldmServer"
                    binding="netPeerTcpBinding"
                    bindingConfiguration="PeerTcpConfig"
                    contract="THLDM_Client.IGameService"></endpoint>
        </client>
        <bindings>
          <netPeerTcpBinding>
            <binding name="PeerTcpConfig" port="0">
              <security mode="None"></security>
              <resolver mode="Custom">
                <custom address="net.tcp://localhost/thldmServer"
                        binding="netTcpBinding"
                        bindingConfiguration="TcpConfig"></custom>
              </resolver>
            </binding>
          </netPeerTcpBinding>
          <netTcpBinding>
            <binding name="TcpConfig">
              <security mode="None"></security>
            </binding>
          </netTcpBinding>
        </bindings>

    Here is my server App.config:

        <services>
          <service name="System.ServiceModel.PeerResolvers.CustomPeerResolverService">
            <host>
              <baseAddresses>
                <add baseAddress="net.tcp://localhost/thldmServer"/>
              </baseAddresses>
            </host>
            <endpoint address="net.tcp://localhost/thldmServer"
                      binding="netTcpBinding"
                      bindingConfiguration="TcpConfig"
                      contract="System.ServiceModel.PeerResolvers.IPeerResolverContract">
            </endpoint>
          </service>
        </services>
        <bindings>
          <netTcpBinding>
            <binding name="TcpConfig">
              <security mode="None"></security>
            </binding>
          </netTcpBinding>
        </bindings>

    Thanks ahead of time.
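
    One hedged guess worth checking: the custom resolver address in both configs is net.tcp://localhost/thldmServer, so if the clients run on different machines, each one resolves the mesh against itself and never learns about the other peers. Pointing every client at the one machine actually running CustomPeerResolverService would look roughly like this (hostname hypothetical):

        <custom address="net.tcp://resolver-machine/thldmServer"
                binding="netTcpBinding"
                bindingConfiguration="TcpConfig" />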

    Read the article

  • Weirdness debugging Visual Studio C++ 2008

    - by Jeff Dege
    I have a legacy C++ app that, in its most recent incarnation, we've been building with makefiles and VS2003's command-line tool. I'm trying to get it to build using VS2008 and MSBuild. The build is working OK, but I'm getting errors where I'd never seen errors before, and stepping through in VS2008's debugger only confuses me.

    The app links a number of static libraries, which fall into two categories: those that are part of the same application suite, and those that are shared between a number of application suites. Originally, I had a .csproj file for each static library and two .sln files: one for the application suite (including the suite-specific libraries) and one for the non-suite-specific shared libraries. The shared libraries were included in the link; their projects were not included in the application suite .sln.

    The application instantiates an object from a class that is defined in one of the shared libraries. The class has a member object of a class that wraps a linked list. The constructor of the linked-list class sets its "head" pointer to null. When I run the app and try to add an element to the linked list, I get an error: the head pointer contains the value 0xCCCCCCCC.

    So I step through with the debugger, and see weirdness. When the current line in the debugger is in a source file belonging to the static library, the head pointer contains 0x00000000. When I step into the constructor, I can see the pointer being set to that value, and when I've stepped into any other method of the class, I can see that the head pointer still contains 0x00000000. But when I step out into methods that are defined in the application suite .sln, it contains 0xCCCCCCCC. It's not like it's being overwritten; it changes back and forth depending upon which source file I am currently debugging.

    So I included the shared library's project in the application suite .sln, and now I see the head pointer containing 0xCCCCCCCC all the time. It looks like the constructor of the linked-list class is not being called. So now I'm entirely confused. Anyone have any ideas?

    Read the article

  • iPhone memory management (with specific examples/questions)

    - by donkim
    Hey all. I know this question's been asked, but I still don't have a clear picture of memory management in Objective-C. I feel like I have a pretty good grasp of it, but I'd still like some correct answers for the following code. I have a series of examples that I'd love for someone(s) to clarify.

    1. Setting a value for an instance variable. Say I have an NSMutableArray variable. In my class, when I initialize it, do I need to call a retain on it? Do I do

        fooArray = [[[NSMutableArray alloc] init] retain];

    or

        fooArray = [[NSMutableArray alloc] init];

    Does doing [[NSMutableArray alloc] init] already set the retain count to 1, so I wouldn't need to call retain on it? On the other hand, if I called a method that I know returns an autoreleased object, I would for sure have to call retain on it, right? Like so:

        fooString = [[NSString stringWithFormat:@"%d items", someInt] retain];

    2. Properties. I ask about the retain because I'm a bit confused about how @property's automatic setter works. If I had set fooArray to be a @property with retain set, Objective-C would automatically create the following setter, right?

        - (void)setFooArray:(NSMutableArray *)anArray {
            [fooArray release];
            fooArray = [anArray retain];
        }

    So, if I had code like this: self.fooArray = [[NSMutableArray alloc] init]; (which I believe is valid code), Objective-C creates a setter method that calls retain on the value assigned to fooArray. In this case, will the retain count actually be 2?

    3. Correct way of setting a value of a property. I know there are questions on this and (possibly) debates, but which is the right way to set a @property? This?

        self.fooArray = [[NSMutableArray alloc] init];

    Or this?

        NSMutableArray *anArray = [[NSMutableArray alloc] init];
        self.fooArray = anArray;
        [anArray release];

    I'd love to get some clarification on these examples. Thanks!
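
    For example 3, annotating the second form with retain counts shows why it is the conventional one under manual reference counting (pre-ARC assumed):

        NSMutableArray *anArray = [[NSMutableArray alloc] init]; // count 1, owned here
        self.fooArray = anArray;                                 // setter retains: count 2
        [anArray release];                                       // count 1, owned by self only

        // By contrast, self.fooArray = [[NSMutableArray alloc] init];
        // leaves count 2 with only one owner that will ever release: a leak.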

    Read the article

  • How do I remove descendant elements from a jQuery wrapped set?

    - by Bungle
    I'm a little confused about which jQuery method and/or selectors to use when trying to select an element and then remove certain descendant elements from the wrapped set. For example, given the following HTML:

        <div id="article">
          <div id="inset">
            <ul>
              <li>This is bullet point #1.</li>
              <li>This is bullet point #2.</li>
              <li>This is bullet point #3.</li>
            </ul>
          </div>
          <p>This is the first paragraph of the article</p>
          <p>This is the second paragraph of the article</p>
          <p>This is the third paragraph of the article</p>
        </div>

    I want to select the article:

        var $article = $('#article');

    but then remove <div id="inset"></div> and its descendants from the wrapped set. I tried the following:

        var $article = $('#article').not('#inset');

    but that didn't work, and in retrospect, I think I can see why. I also tried using remove() unsuccessfully. What would be the correct way to do this?

    Ultimately, I need to set this up in such a way that I can define a configuration array, such as:

        var selectors = [
          { select: '#article', exclude: ['#inset'] }
        ];

    where select defines a single element that contains text content, and exclude is an optional array that defines one or more selectors to disregard text content from. Given the final wrapped set with the excluded elements removed, I would like to be able to call jQuery's text() method to end up with the following text:

        This is the first paragraph of the article.This is the second paragraph of the article.This is the third paragraph of the article.

    The configuration array doesn't need to work exactly like that, but it should provide roughly equivalent configuration potential. Thanks for any help you can provide!
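
    A hedged sketch along the lines of the configuration idea: clone() the container, remove() the excluded subtrees from the clone (so the page itself is untouched), then take text() of what's left:

        function extractText(config) {
            var $clone = $(config.select).clone();
            $.each(config.exclude || [], function (i, sel) {
                $clone.find(sel).remove();   // drop excluded descendants from the copy
            });
            return $clone.text();
        }

        // usage with the configuration shape from the question
        var text = extractText({ select: '#article', exclude: ['#inset'] });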

    Read the article

  • Visual Studio 2010 colourizers, IntelliSense and the rest: where to start?

    - by Owen
    OK, before I begin, I realize that there is a lot of documentation on this subject, but I have thus far failed to get even basic colourization working for VS2010. My goal is simply to get to a point where I can open a document and everything is coloured red; from there I can implement the relevant parsing logic.

    Here's what I have tried/found:

    1. Downloaded all the relevant SDKs and such. Found the Ook sample (http://code.msdn.microsoft.com/ookLanguage): didn't build, didn't work.
    2. Knowing almost nothing about MEF, read through "Implementing a Language Service By Using the Managed Package Framework" (http://msdn.microsoft.com/en-us/library/bb166533(v=VS.100).aspx). This was pretty much a copy and paste of all the basic stuff here, plus updating some references which were out of date with the sample; see http://social.msdn.microsoft.com/Forums/en-US/vsx/thread/a310fe67-afd2-4592-b295-3fc86fec7996

    Now, I have got to a point where, when running the package, MEF appears to have hooked up correctly (I know this because with the debugger open I can see that the package's Initialize and FDoIdle methods are being hit). When I open a file of the extension I have registered with the ProvideLanguageExtensionAttribute, everything dies as if in an endless loop, yet no debug symbols are hit (though they are loaded).

    Looking at the Ook sample and the MEF examples, they seem to be totally different approaches to the same problem. In the Ook sample there are notions of Classifications and Completion controllers which aren't mentioned in the MEF example. Also, they don't seem to create a Package or Language service, so I have no idea how it should work. With the MEF example, my assumption is that I need to hook into IScanner.ScanTokenAndProvideInfoAboutIt to provide syntax highlighting? Which would be fine if I could ever hit this method.

    So my first question, I guess, is: which approach should I be taking here? Or do they both somehow tie together?

    My second question is: where can I find a basic, fully working project that implements bog-standard basic syntax highlighting and IntelliSense for VS2010?

    Thirdly, in the MEF example, when I created a Package there were a bunch of test projects created for me. It appears that the integration tests launch the VS2010 test rig somehow, but the test fails. It would be good to write my service with tests, but I have no idea what/how I can test each interaction, so any references to testing language services would be helpful.

    Finally, please throw any resource/book links my way that I may find useful.

    Cheers, Chris.

    N.B. Sorry, I realize this is part question, part rant, but I have never been so confused.
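
    For the "everything red" milestone, the editor-extension (MEF/classifier) route needs no Package or language service at all. A hedged minimal classifier looks roughly like this (content type and classification names are assumptions; usings from Microsoft.VisualStudio.Text.* and the ClassificationTypeDefinition/format exports that actually paint the colour are omitted):

        [Export(typeof(IClassifierProvider))]
        [ContentType("text")]
        internal sealed class RedClassifierProvider : IClassifierProvider
        {
            [Import]
            internal IClassificationTypeRegistryService Registry = null;

            public IClassifier GetClassifier(ITextBuffer buffer)
            {
                return new RedClassifier(Registry.GetClassificationType("red.everything"));
            }
        }

        internal sealed class RedClassifier : IClassifier
        {
            private readonly IClassificationType _red;
            public RedClassifier(IClassificationType red) { _red = red; }

            public event EventHandler<ClassificationChangedEventArgs> ClassificationChanged
            { add { } remove { } }

            public IList<ClassificationSpan> GetClassificationSpans(SnapshotSpan span)
            {
                // classify whatever span the editor hands us as "red.everything"
                return new List<ClassificationSpan> { new ClassificationSpan(span, _red) };
            }
        }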

    Read the article

  • Java compiler rejects variable declaration with parameterized inner class

    - by Johansensen
    I have some Groovy code which works fine in the Groovy bytecode compiler, but the Java stub generated by it causes an error in the Java compiler. I think this is probably yet another bug in the Groovy stub generator, but I really can't figure out why the Java compiler doesn't like the generated code. Here's a truncated version of the generated Java class (please excuse the ugly formatting):

        @groovy.util.logging.Log4j()
        public abstract class AbstractProcessingQueue<T> extends nz.ac.auckland.digitizer.AbstractAgent implements groovy.lang.GroovyObject {

            protected int retryFrequency;
            protected java.util.Queue<nz.ac.auckland.digitizer.AbstractProcessingQueue.ProcessingQueueMember<T>> items;

            public AbstractProcessingQueue(int processFrequency, int timeout, int retryFrequency) {
                super((int)0, (int)0);
            }

            private enum ProcessState implements groovy.lang.GroovyObject {
                NEW, FAILED, FINISHED;
            }

            private class ProcessingQueueMember<E> extends java.lang.Object implements groovy.lang.GroovyObject {
                public ProcessingQueueMember(E object) {}
            }
        }

    The offending line in the generated code is this:

        protected java.util.Queue<nz.ac.auckland.digitizer.AbstractProcessingQueue.ProcessingQueueMember<T>> items;

    which produces the following compile error:

        [ERROR] C:\Documents and Settings\Administrator\digitizer\target\generated-sources\groovy-stubs\main\nz\ac\auckland\digitizer\AbstractProcessingQueue.java:[14,96]
        error: improperly formed type, type arguments given on a raw type

    The column index of 96 in the compile error points to the <T> parameterization of the ProcessingQueueMember type. But ProcessingQueueMember is not a raw type as the compiler claims; it is a generic type:

        private class ProcessingQueueMember<E> extends java.lang.Object implements groovy.lang.GroovyObject { ...

    I am very confused as to why the compiler thinks that the type Queue<ProcessingQueueMember<T>> is invalid. The Groovy source compiles fine, and the generated Java code looks perfectly correct to me too. What am I missing here? Is it something to do with the fact that the type in question is a nested class?

    (In case anyone is interested, I have filed this bug report relating to the issue in this question.)

    Edit: Turns out this was indeed a stub compiler bug. The issue is now fixed in 1.8.9, 2.0.4 and 2.1, so if you're still having this issue, just upgrade to one of those versions. :)
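
    For the record, javac rejects any parameterized member type whose enclosing type is written raw, and an inner (non-static) member class of a generic class must carry the outer type arguments too, which is exactly what the generated stub fails to do. A standalone illustration:

        class Outer<T> {
            class Inner<E> { }

            // javac: "improperly formed type, type arguments given on a raw type"
            // Outer.Inner<String> bad;

            Outer<String>.Inner<String> ok;   // the fully qualified form compiles
        }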

    Read the article

  • javascript error: variable is not defined

    - by Hristo
        to is not defined
        [Break on this error] setTimeout('updateChat(from, to)', 1);

    I'm getting this error... I'm using Firebug to test, and this comes up in the Console. The error corresponds to line 71 of chat.js, and the whole function that wraps this line is:

        function updateChat(from, to) {
            $.ajax({
                type: "POST",
                url: "process.php",
                data: {
                    'function': 'getFromDB',
                    'from': from,
                    'to': to
                },
                dataType: "json",
                cache: false,
                success: function(data) {
                    if (data.text != null) {
                        for (var i = 0; i < data.text.length; i++) {
                            $('#chat-box').append($("<p>"+ data.text[i] +"</p>"));
                        }
                        document.getElementById('chat-box').scrollTop = document.getElementById('chat-box').scrollHeight;
                    }
                    instanse = false;
                    state = data.state;
                    setTimeout('updateChat(from, to)', 1); // gives error
                },
            });
        }

    This links to process.php with the function call getFromDB, and the code for that is:

        case ('getFromDB'):
            // get the sender and receiver user IDs from their user names
            $from = mysql_real_escape_string($_POST['from']);
            $query = "SELECT `user_id` FROM `Users` WHERE `user_name` = '$from' LIMIT 1";
            $result = mysql_query($query) or die(mysql_error());
            $row = mysql_fetch_assoc($result);
            $fromID = $row['user_id'];

            $to = mysql_real_escape_string($_POST['to']);
            $query = "SELECT `user_id` FROM `Users` WHERE `user_name` = '$to' LIMIT 1";
            $result = mysql_query($query) or die(mysql_error());
            $row = mysql_fetch_assoc($result);
            $toID = $row['user_id'];

            $query = "SELECT * FROM `Messages` WHERE `from_id` = '$fromID' AND `to_id` = '$toID' LIMIT 1";
            $result = mysql_query($query);
            while($row = mysql_fetch_assoc($result)) {
                $text[] = $line = $row['message'];
                $log['text'] = $text;
            }
            break;

    So I'm confused by the line that is giving the error: setTimeout('updateChat(from, to)', 1); aren't the parameters to updateChat the same parameters that came into the function? Or are they being pulled in from somewhere else, so that I have to define to and from elsewhere? Any ideas how to fix this error?

    Thanks, Hristo
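
    A short note on why this fails: the string form of setTimeout is eval'd later in global scope, where from and to don't exist; passing a function instead keeps both names in the closure. A sketch of the fixed line:

        // instead of: setTimeout('updateChat(from, to)', 1);
        setTimeout(function () {
            updateChat(from, to);   // `from` and `to` captured from the enclosing call
        }, 1);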

    Read the article

  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it.

    My question isn't really about getting it working. I'm more confused about the certificates and where they need to go. Here are my two programs that do work:

    Server:

        #!/bin/env python
        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)  # ??????
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('Dmgr-cert.pem')                                   # ??????
        ctx.load_verify_locations('./CAcert.pem')

        server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        server.bind(('', 50000))
        server.listen(3)
        a, b = server.accept()
        c = a.recv(1024)
        print(c)

    Client:

        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)                                    # ??????????
        ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
        ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')           # ?????????
        ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))
        a = Tester(2, 2)
        b = pickle.dumps(a)
        sock.send("Hello, world")
        sock.flush()
        sock.send(b)
        sock.shutdown()
        sock.close()

    I found this information in ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections marked with "# ??????". I don't get why the certificate and private key are needed on both the client and the server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right?

    I tried alternately removing either the pkey or cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'),
        ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!
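
    A hedged reading of what the handshake failure is saying: this setup is mutual (two-way) TLS. Because the server passes VERIFY_FAIL_IF_NO_PEER_CERT, it demands a client certificate, so each side needs its OWN cert and its OWN private key, plus the CA cert to verify the other side. Nothing asymmetric is undermined: each key is generated on its machine and never distributed; only certificates and the CA cert travel. A sketch of the one-way variant, where the client authenticates the server but not vice versa:

        # client context for one-way TLS: no client key/cert needed,
        # only the public CA cert used to verify the server
        from OpenSSL import SSL

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
        ctx.load_verify_locations('./CAcert.pem')
        # ...and on the server, drop VERIFY_FAIL_IF_NO_PEER_CERT so it
        # stops requiring a certificate from the client.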

    Read the article

  • When mocking a class with Moq, how can I CallBase for just specific methods?

    - by Daryn
    I really appreciate Moq's loose mocking behaviour that returns default values when no expectations are set. It's convenient and saves me code, and it also acts as a safety measure: dependencies won't get unintentionally called during the unit test (as long as they are virtual).

    However, I'm confused about how to keep these benefits when the method under test happens to be virtual. In this case I do want to call the real code for that one method, while still having the rest of the class loosely mocked.

    All I have found in my searching is that I could set mock.CallBase = true to ensure that the method gets called. However, that affects the whole class. I don't want to do that, because it puts me in a dilemma about all the other properties and methods in the class that hide call dependencies: if CallBase is true, then I have to either

    1. set up stubs for all of the properties and methods that hide dependencies, even though my test doesn't think it needs to care about those dependencies, or
    2. hope that I don't forget to set up any stubs (and that no new dependencies get added to the code in the future), and risk unit tests hitting a real dependency.

    Q: With Moq, is there any way to test a virtual method when I've mocked the class to stub just a few dependencies? I.e. without resorting to CallBase = true and having to stub all of the dependencies?

    Example code to illustrate (uses MSTest, InternalsVisibleTo DynamicProxyGenAssembly2). In the following example, TestNonVirtualMethod passes, but TestVirtualMethod fails: it returns null.

        public class Foo
        {
            public string NonVirtualMethod() { return GetDependencyA(); }
            public virtual string VirtualMethod() { return GetDependencyA(); }

            internal virtual string GetDependencyA() { return "! Hit REAL Dependency A !"; }

            // [... Possibly many other dependencies ...]
            internal virtual string GetDependencyN() { return "! Hit REAL Dependency N !"; }
        }

        [TestClass]
        public class UnitTest1
        {
            [TestMethod]
            public void TestNonVirtualMethod()
            {
                var mockFoo = new Mock<Foo>();
                mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);

                string result = mockFoo.Object.NonVirtualMethod();
                Assert.AreEqual(expectedResultString, result);
            }

            [TestMethod]
            public void TestVirtualMethod() // Fails
            {
                var mockFoo = new Mock<Foo>();
                mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);
                // (I don't want to set up GetDependencyB ... GetDependencyN here)

                string result = mockFoo.Object.VirtualMethod();
                Assert.AreEqual(expectedResultString, result);
            }

            string expectedResultString = "Hit mock dependency A - OK";
        }
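
    A hedged note: later Moq 4.x releases added a per-setup CallBase(), which does exactly this; check whether your Moq version has it (the older workaround was a partial mock with explicit stubs for everything):

        var mockFoo = new Mock<Foo>();                       // loose, CallBase = false
        mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);
        mockFoo.Setup(m => m.VirtualMethod()).CallBase();    // real code for this one method

        string result = mockFoo.Object.VirtualMethod();
        Assert.AreEqual(expectedResultString, result);       // A is stubbed, the rest stay loose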

    Read the article

  • C++ vector of strings, pointers to functions, and the resulting frustration.

    - by Kyle
    So I am a first-year computer science student, and for one of my final projects I need to write a program that takes a vector of strings and applies various functions to it. Unfortunately, I am really confused about how to use pointers to pass the vector from function to function. Below is some sample code to give an idea of what I am talking about. I also get an error message when I try to dereference any pointer. Thanks.

        #include <iostream>
        #include <cstdlib>
        #include <vector>
        #include <string>
        using namespace std;

        vector<string>::pointer function_1(vector<string>::pointer ptr);
        void function_2(vector<string>::pointer ptr);

        int main()
        {
            vector<string>::pointer ptr;
            vector<string> svector;
            ptr = &svector[0];
            function_1(ptr);
            function_2(ptr);
        }

        vector<string>::pointer function_1(vector<string>::pointer ptr)
        {
            string line;
            for(int i = 0; i < 10; i++)
            {
                cout << "enter some input ! \n"; // I need to be able to pass a reference to the vector
                getline(cin, line);              // through various functions and have the results
                *ptr.pushback(line);             // reflected in main(). But I cannot use member functions
            }                                    // of vector with a dereferenced pointer.
            return(ptr);
        }

        void function_2(vector<string>::pointer ptr)
        {
            for(int i = 0; i < 10; i++)
            {
                cout << *ptr[i] << endl;
            }
        }
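
    A hedged rewrite using references: vector<string>::pointer is just string*, a pointer to one element, not a handle to the vector itself. Passing the vector by reference lets member functions work, with the changes visible back in main():

        #include <iostream>
        #include <string>
        #include <vector>
        using namespace std;

        void function_1(vector<string>& v)       // reference to the whole vector
        {
            string line;
            for (int i = 0; i < 10; i++) {
                cout << "enter some input ! \n";
                getline(cin, line);
                v.push_back(line);               // member calls are fine on a reference
            }
        }

        void function_2(const vector<string>& v)
        {
            for (vector<string>::size_type i = 0; i < v.size(); i++)
                cout << v[i] << endl;
        }

        int main()
        {
            vector<string> svector;
            function_1(svector);                 // fills svector in place
            function_2(svector);
        }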

    Read the article

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that going into a catch block has some significant cost when executing a program; however, I was wondering if entering a try{} block also has any impact, so I started looking for an answer on Google. There were many opinions, but no benchmarking at all. Some answers I found were:

    1. Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum?
    2. Try Catch Performance Java
    3. Java try catch blocks

    However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a CSV file with this format:

        host;ip;number;date;status;email;uid;name;lastname;promo_code;

    where everything after status is optional and will not even have the corresponding ';'. So when parsing, a validation has to be done to see if the value is there; this is where the try/catch issue came to my mind. The current code that I inherited at my company does this:

        StringTokenizer st = new StringTokenizer(line, ";");
        String host = st.nextToken();
        String ip = st.nextToken();
        String number = st.nextToken();
        String date = st.nextToken();
        String status = st.nextToken();
        String email = "";
        try {
            email = st.nextToken();
        } catch (NoSuchElementException e) {
            email = "";
        }

    and it repeats what is done for email with uid, name, lastname and promo_code. I changed everything to:

        if (st.hasMoreTokens()) {
            email = st.nextToken();
        }

    and in fact it performs faster. When parsing a file that doesn't have the optional columns, here are the average times:

        --- Trying:   122 milliseconds
        --- Checking:  33 milliseconds

    However, here's what confused me, and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version, so my question is: does the try block really not have any performance impact on my code? The average times for this example are:

        --- Trying:   105 milliseconds
        --- Checking:  43 milliseconds

    Can somebody explain what's going on here? Thanks a lot.
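
    A hedged explanation of the numbers: entering a try block is essentially free (it is bookkeeping in the method's compile-time exception table), but every thrown NoSuchElementException pays for object construction, and in particular for fillInStackTrace(). A classic way to see that the stack walk is the expensive part is an exception that skips it:

        // Throwable whose stack trace is skipped; throwing it is far cheaper
        // than throwing a normal exception (exact numbers vary by JVM).
        class CheapException extends RuntimeException {
            @Override
            public synchronized Throwable fillInStackTrace() {
                return this;   // don't walk the stack
            }
        }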

    Read the article

  • How to create multiple Repository object inside a Repository class using Unit Of Work?

    - by Santosh
    I am a newbie to MVC3 application development. Currently, we need the following application technologies as requirements:

    1. MVC3 framework
    2. an IoC framework (Autofac) to manage object creation dynamically
    3. Moq for unit testing
    4. Entity Framework
    5. the Repository and Unit of Work patterns over the model classes

    I have gone through many articles to get a basic idea about the above points, but I am still a little bit confused about the "Repository and Unit of Work patterns". Basically, what I understand is that Unit of Work is a pattern followed along with the Repository pattern in order to share a single DB context among all repository objects. So here is my design:

    IUnitOfWork.cs:

        public interface IUnitOfWork : IDisposable
        {
            IPermitRepository Permit_Repository { get; }
            IRebateRepository Rebate_Repository { get; }
            IBuildingTypeRepository BuildingType_Repository { get; }
            IEEProjectRepository EEProject_Repository { get; }
            IRebateLookupRepository RebateLookup_Repository { get; }
            IEEProjectTypeRepository EEProjectType_Repository { get; }
            void Save();
        }

    UnitOfWork.cs:

        public class UnitOfWork : IUnitOfWork
        {
            #region Private Members

            private readonly CEEPMSEntities context = new CEEPMSEntities();
            private IPermitRepository permit_Repository;
            private IRebateRepository rebate_Repository;
            private IBuildingTypeRepository buildingType_Repository;
            private IEEProjectRepository eeProject_Repository;
            private IRebateLookupRepository rebateLookup_Repository;
            private IEEProjectTypeRepository eeProjectType_Repository;

            #endregion

            #region IUnitOfWork Implementation

            public IPermitRepository Permit_Repository
            {
                get
                {
                    if (this.permit_Repository == null)
                    {
                        this.permit_Repository = new PermitRepository(context);
                    }
                    return permit_Repository;
                }
            }

            public IRebateRepository Rebate_Repository
            {
                get
                {
                    if (this.rebate_Repository == null)
                    {
                        this.rebate_Repository = new RebateRepository(context);
                    }
                    return rebate_Repository;
                }
            }

            #endregion
        }

    PermitRepository.cs:

        public class PermitRepository : IPermitRepository
        {
            #region Private Members

            private CEEPMSEntities objectContext = null;
            private IObjectSet<Permit> objectSet = null;

            #endregion

            #region Constructors

            public PermitRepository()
            {
            }

            public PermitRepository(CEEPMSEntities _objectContext)
            {
                this.objectContext = _objectContext;
                this.objectSet = objectContext.CreateObjectSet<Permit>();
            }

            #endregion

            public IEnumerable<RebateViewModel> GetRebatesByPermitId(int _permitId)
            {
                // need to implement
            }
        }

    PermitController.cs:

        public class PermitController : Controller
        {
            #region Private Members

            IUnitOfWork CEEPMSContext = null;

            #endregion

            #region Constructors

            public PermitController(IUnitOfWork _CEEPMSContext)
            {
                if (_CEEPMSContext == null)
                {
                    throw new ArgumentNullException("Object can not be null");
                }
                CEEPMSContext = _CEEPMSContext;
            }

            #endregion
        }

    So here I am wondering how to generate a new repository, for example "TestRepository.cs", using the same pattern, in a situation where I can create more than one repository object, like:

        RebateRepository rebateRepo = new RebateRepository();
        AddressRepository addressRepo = new AddressRepository();

    because whatever repository object I want to create, I first need an object of UnitOfWork, as implemented in the PermitController class. If I followed the same approach in each individual repository class, that would again break the principle of Unit of Work and create multiple instances of the object context. Any idea or suggestion will be highly appreciated. Thank you.
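
    Following the pattern already in the question, each new repository becomes another lazily created property on UnitOfWork, so the shared context never leaks out and controllers never new up repositories themselves. A sketch (type names hypothetical):

        // in IUnitOfWork:
        //     ITestRepository Test_Repository { get; }

        // in UnitOfWork:
        private ITestRepository test_Repository;

        public ITestRepository Test_Repository
        {
            get
            {
                if (this.test_Repository == null)
                {
                    this.test_Repository = new TestRepository(context);  // same shared context
                }
                return test_Repository;
            }
        }

        // usage in a controller: CEEPMSContext.Test_Repository.SomeQuery();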

    Read the article

  • Representing a DateTime in the remote user's local time

    - by TwoSecondsBefore
    Hello! I was confused in the problem of time zones. I am writing a web application that will contain some news with dates of publication, and I want the client to see the date of publication of the news in the form of corresponding local time. However, I do not know in which time zone the client is located. I have three questions. I have to ask just in case: does DateTimeOffset.UtcNow always returns the correct UTC date and time, regardless of whether the server is dependent on daylight savings time? For example, if the first time I get the value of this property for two minutes before daylight savings time (or before the transition from daylight saving time back) and the second time in 2 minutes after the transfer, whether the value of properties in all cases differ by only 4 minutes? Or here require any further logic? (Question #1) Please see the following example and tell me what you think. I posted the news on the site. I assume that DateTimeOffset.UtcNow takes into account the time zone of the server and the daylight savings time, and so I immediately get the correct UTC server time when pressing the button "Submit". I write this value to a MS SQL database in the field of type datetime2(0). Then the user opens a page with news and no matter how long after publication. This may occur even after many years. I did not ask him to enter his time zone. Instead, I get the offset of his current local time from UTC using the javascript function following way: function GetUserTimezoneOffset() { var offset = new Date().getTimezoneOffset(); return offset; } Next I make the calculation of the date and time of publication, which will show the user: public static DateTime Get_Publication_Date_In_User_Local_DateTime( DateTime Publication_Utc_Date_Time_From_DataBase, int User_Time_Zone_Offset_Returned_by_Javascript) { int userTimezoneOffset = User_Time_Zone_Offset_Returned_by_Javascript; // For // example Javascript returns a value equal to -300, which means the // current user's time differs from UTC to 300 minutes. Ie offset // is UTC +6. In this case, it may be the time zone UTC +5 which // currently operates summer time or UTC +6 which currently operates the // standard time. // Right? (Question #2) DateTimeOffset utcPublicationDateTime = new DateTimeOffset(Publication_Utc_Date_Time_From_DataBase, new TimeSpan(0)); // get an instance of type DateTimeOffset for the // date and time of publication for further calculations DateTimeOffset publication_DateTime_In_User_Local_DateTime = utcPublicationDateTime.ToOffset(new TimeSpan(0, - userTimezoneOffset, 0)); return publication_DateTime_In_User_Local_DateTime.DateTime;// return to user } Is the value obtained correct? Is this the right approach to solving this problem? (Question #3)

    Read the article

  • Returning different data types C#

    - by user1810659
    I have created a class library (DLL) with many different methods, and they return different types of data (string, string[], double, double[]). Therefore I created one class, called CustomDataType, containing all the different data types for all the methods, so each method in the library can return an object of the custom class and in this way return multiple data types. I have done it like this:

        public class CustomDataType
        {
            public double Value;
            public string Timestamp;
            public string Description;
            public string Unit;

            // special for GetParameterInfo
            public string OpcItemUrl;
            public string Source;
            public double Gain;
            public double Offset;
            public string ParameterName;
            public int ParameterID;

            public double[] arrayOfValue;
            public string[] arrayOfTimestamp;
            public string[] arrayOfParameterName;
            public string[] arrayOfUnit;
            public string[] arrayOfDescription;
            public int[] arrayOfParameterID;
            public string[] arrayOfItemUrl;
            public string[] arrayOfSource;
            public string[] arrayOfModBusRegister;
            public string[] arrayOfGain;
            public string[] arrayOfOffset;
        }

    The library contains methods like these:

        public CustomDataType GetDeviceParameters(string deviceName)
        {
            // ...................... code
            getDeviceParametersObj.arrayOfParameterName;
            return getDeviceParametersObj;
        }

        public CustomDataType GetMaxMin(string parameterName, string period, string maxMin)
        {
            // ..................................... code
            getMaxMingObj.Value = (double)reader["MaxMinValue"];
            getMaxMingObj.Timestamp = reader["MeasurementDateTime"].ToString();
            getMaxMingObj.Unit = reader["Unit"].ToString();
            getMaxMingObj.Description = reader["Description"].ToString();
            return getMaxMingObj;
        }

        public CustomDataType GetSelectedMaxMinData(string[] parameterName, string period, string mode)
        {
            // ................................ code
            selectedMaxMinObj.arrayOfValue = MaxMinvalueList.ToArray();
            selectedMaxMinObj.arrayOfTimestamp = MaxMintimeStampList.ToArray();
            selectedMaxMinObj.arrayOfDescription = MaxMindescriptionList.ToArray();
            selectedMaxMinObj.arrayOfUnit = MaxMinunitList.ToArray();
            return selectedMaxMinObj;
        }

    As illustrated, the different methods return different data, and it works fine for me. But when I import the DLL and want to use the methods, Visual Studio shows all the fields of the CustomDataType class as suggestions for every method, even though they return different data. This is illustrated in the picture below.

    As we can see from the picture, with all the different return fields suggested, the user can get confused and choose the wrong return data for some of the methods. So my question is: how can I improve this, so that Visual Studio suggests just the return data belonging to each method?
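
    One hedged way to get IntelliSense to offer only meaningful members is a small result class per method (or per family of methods) instead of one catch-all type. A sketch:

        // returned by GetMaxMin: nothing but the four fields it actually fills
        public class MaxMinResult
        {
            public double Value;
            public string Timestamp;
            public string Unit;
            public string Description;
        }

        public MaxMinResult GetMaxMin(string parameterName, string period, string maxMin)
        {
            var result = new MaxMinResult();
            // ... fill the four fields from the reader, as before ...
            return result;
        }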

    Read the article

  • How to convert a JSON array of paths to images stored on a server into a JavaScript array to display them, using AJAX?

    - by MichaelF
    I need the HTML file / AJAX code to take the JSON message and store the PATHS as a JavaScript array. Then my buildImage function can display the first image in the array. I'm new to AJAX and believe my misunderstanding lies in the conversion of the JSON to JavaScript. I'm also confused about whether my code creates a JSON array or object, or either. I might also need to download a library to my app to handle JSON?

    Below is a PHP file loading the paths of the images. I believe json_encode is converting the PHP array into a JSON message.

        <?php
        include("mysqlconnect.php");

        $select_query = "SELECT `ImagesPath` FROM `offerstbl` ORDER by `ImagesId` DESC";
        $sql = mysql_query($select_query) or die(mysql_error());
        $data = array();
        while($row = mysql_fetch_array($sql,MYSQL_BOTH)){
            $data[] = $row['ImagesPath'];
        }
        echo $images = json_encode($data);
        ?>

    Below is the script that is going to be loaded in a Cordova app.

        <!DOCTYPE html>
        <html>
        <head>
        <link rel="stylesheet" href="css/styles.css">
        <link rel="stylesheet" href="css/cascading.css">
        <script>
        function importJson(str) {
            // console.log(typeof xmlhttp.responseText);
            if (str=="") {
                document.getElementById("content").innerHTML="";
                return;
            }
            if (window.XMLHttpRequest) {
                // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp=new XMLHttpRequest();
            } else {
                // code for IE6, IE5
                xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange=function() {
                if (xmlhttp.readyState==4) {
                    //var images = JSON.parse(xmlhttp.responseText);
                    document.getElementById("content").innerHTML=xmlhttp.responseText;
                }
            }
            xmlhttp.open("GET","http://server/content.php");
            xmlhttp.send();
        }

        function buildImage(src) {
            var img = document.createElement('img');
            img.src = src;
            alert("1");
            document.getElementById('content').appendChild(img);
        }

        for (var i = 0; i < images.length; i++) {
            buildImage(images[i]);
        }
        </script>
        </head>
        <body onload="importJson();">
        <div class="contents" id="content"></div>
        <img src="img/logo.png" height="10px" width="10px" onload="buildImage();">
        </body>
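
    A hedged sketch of the missing step: the paths only become a JavaScript array after JSON.parse inside the readyState callback (the commented-out line was close), and the build loop has to run there too, not at page load, before the response exists. JSON.parse is standard; a json2.js shim covers very old WebViews:

        xmlhttp.onreadystatechange = function () {
            if (xmlhttp.readyState == 4) {
                var images = JSON.parse(xmlhttp.responseText); // JSON text -> JS array
                for (var i = 0; i < images.length; i++) {
                    buildImage(images[i]);                     // paths exist only here
                }
            }
        };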

    Read the article

  • Spring OpenSessionInViewFilter with @Transactional annotation

    - by Gautam
    This is regarding Spring's OpenSessionInViewFilter used with the @Transactional annotation at the service layer. I have gone through so many Stack Overflow posts on this, but I am still confused about whether I should use OpenSessionInViewFilter or not to avoid LazyInitializationException. It would be a great help if somebody could help me find answers to the queries below.

    1. Is it bad practice to use OpenSessionInViewFilter in an application with a complex schema?
    2. Can using this filter cause the N+1 problem?
    3. If we are using OpenSessionInViewFilter, does it mean @Transactional is not required?

    Below is my Spring config file:

        <context:component-scan base-package="com.test"/>
        <context:annotation-config/>

        <bean id="messageSource" class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
            <property name="basename" value="resources/messages" />
            <property name="defaultEncoding" value="UTF-8" />
        </bean>

        <bean id="propertyConfigurer"
              class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
              p:location="/WEB-INF/jdbc.properties" />

        <bean id="dataSource"
              class="org.apache.commons.dbcp.BasicDataSource"
              destroy-method="close"
              p:driverClassName="${jdbc.driverClassName}"
              p:url="${jdbc.databaseurl}"
              p:username="${jdbc.username}"
              p:password="${jdbc.password}" />

        <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource" ref="dataSource" />
            <property name="configLocation">
                <value>classpath:hibernate.cfg.xml</value>
            </property>
            <property name="configurationClass">
                <value>org.hibernate.cfg.AnnotationConfiguration</value>
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">${jdbc.dialect}</prop>
                    <prop key="hibernate.show_sql">true</prop>
                    <!-- <prop key="hibernate.hbm2ddl.auto">create</prop> -->
                </props>
            </property>
        </bean>

        <tx:annotation-driven />

        <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory" />
        </bean>
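
    For completeness, a hedged sketch of how the filter is registered if you do adopt it (web.xml): the filter binds one Hibernate Session to the whole request so lazy loading works during view rendering, while @Transactional still controls the actual transactions, so the two are complementary rather than alternatives.

        <filter>
            <filter-name>openSessionInView</filter-name>
            <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
            <init-param>
                <param-name>sessionFactoryBeanName</param-name>
                <param-value>sessionFactory</param-value>
            </init-param>
        </filter>
        <filter-mapping>
            <filter-name>openSessionInView</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>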

    Read the article

  • Java array of arrays [matrix] of an integer partition with a fixed number of terms

    - by user335209
    Hello, for my study purposes I need to build an array of arrays filled with the partitions of an integer with a fixed number of terms. That is, given an integer, suppose 10, and given the fixed number of terms, suppose 5, I need to populate an array like this:

        10 0 0 0 0
         9 0 0 0 1
         8 0 0 0 2
         7 0 0 0 3
        ............
         9 0 0 1 0
         8 0 0 1 1
        .............
         7 0 1 1 0
         6 0 1 1 1
        ............
        ...........
         0 6 1 1 1
        .............
         0 0 0 0 10

    I am pretty new to Java and am getting confused with all the for loops. Right now my code can do the partition of the integer, but unfortunately not with a fixed number of terms:

        public class Partition {
            private static int[] riga;

            private static void printPartition(int[] p, int n) {
                for (int i = 0; i < n; i++)
                    System.out.print(p[i] + " ");
                System.out.println();
            }

            private static void partition(int[] p, int n, int m, int i) {
                if (n == 0)
                    printPartition(p, i);
                else
                    for (int k = m; k > 0; k--) {
                        p[i] = k;
                        partition(p, n - k, n - k, i + 1);
                    }
            }

            public static void main(String[] args) {
                riga = new int[6];
                for (int i = 0; i < riga.length; i++) {
                    riga[i] = 0;
                }
                partition(riga, 6, 1, 0);
            }
        }

    The output I get from it is like this:

        1 5
        1 4 1
        1 3 2
        1 3 1 1
        1 2 3
        1 2 2 1
        1 2 1 2
        1 2 1 1 1

    So what I'm actually trying to understand is how to produce the version with a fixed number of terms, which would be the columns of my array. I am stuck trying to find a way to make it less dynamic. Any help?
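
    A hedged sketch of the fixed-terms version: since rows like 9 0 0 0 1 contain zeros and order matters, what's wanted is every way to write n as exactly p.length non-negative terms. Recursing over positions (rather than over values until nothing remains) does that; the row order may differ slightly from the example above.

        // fill positions left to right; the last slot takes whatever is left
        static void partition(int[] p, int remaining, int pos) {
            if (pos == p.length - 1) {
                p[pos] = remaining;
                printPartition(p, p.length);
                return;
            }
            for (int k = remaining; k >= 0; k--) {
                p[pos] = k;
                partition(p, remaining - k, pos + 1);
            }
        }

        // usage: partition(new int[5], 10, 0);  // 1001 rows for n = 10, 5 terms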

    Read the article
