Search Results

Search found 2886 results on 116 pages for 'behaviour'.


  • When mocking a class with Moq, how can I CallBase for just specific methods?

    - by Daryn
    I really appreciate Moq's loose mocking behaviour that returns default values when no expectations are set. It's convenient, it saves me code, and it also acts as a safety measure: dependencies won't get unintentionally called during the unit test (as long as they are virtual). However, I'm confused about how to keep these benefits when the method under test happens to be virtual. In this case I do want to call the real code for that one method, while still having the rest of the class loosely mocked.

    All I have found in my searching is that I could set mock.CallBase = true to ensure that the method gets called. However, that affects the whole class. I don't want to do that, because it puts me in a dilemma about all the other properties and methods in the class that hide call dependencies. If CallBase is true then I have to either:

    - Setup stubs for all of the properties and methods that hide dependencies, even though my test doesn't think it needs to care about those dependencies, or
    - hope that I don't forget to Setup any stubs (and that no new dependencies get added to the code in the future), and risk unit tests hitting a real dependency.

    Q: With Moq, is there any way to test a virtual method when I have mocked the class to stub just a few dependencies? That is, without resorting to CallBase = true and having to stub all of the dependencies?

    Example code to illustrate (uses MSTest, and InternalsVisibleTo DynamicProxyGenAssembly2). In the following example, TestNonVirtualMethod passes, but TestVirtualMethod fails - it returns null.

        public class Foo
        {
            public string NonVirtualMethod() { return GetDependencyA(); }
            public virtual string VirtualMethod() { return GetDependencyA(); }

            internal virtual string GetDependencyA() { return "! Hit REAL Dependency A !"; }

            // [... Possibly many other dependencies ...]
            internal virtual string GetDependencyN() { return "! Hit REAL Dependency N !"; }
        }

        [TestClass]
        public class UnitTest1
        {
            [TestMethod]
            public void TestNonVirtualMethod()
            {
                var mockFoo = new Mock<Foo>();
                mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);

                string result = mockFoo.Object.NonVirtualMethod();

                Assert.AreEqual(expectedResultString, result);
            }

            [TestMethod]
            public void TestVirtualMethod() // Fails
            {
                var mockFoo = new Mock<Foo>();
                mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);
                // (I don't want to setup GetDependencyB ... GetDependencyN here)

                string result = mockFoo.Object.VirtualMethod();

                Assert.AreEqual(expectedResultString, result);
            }

            string expectedResultString = "Hit mock dependency A - OK";
        }
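
    A possible way out, not shown in the question: recent versions of Moq (4.x) allow CallBase to be enabled per setup rather than for the whole mock. A minimal sketch, assuming such a Moq version and reusing Foo and expectedResultString from the example above:

        [TestMethod]
        public void TestVirtualMethod_PerSetupCallBase()
        {
            var mockFoo = new Mock<Foo>();   // still a loose mock
            mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);

            // Run the real body of just this one method; every other
            // virtual member keeps the loose-mock default behaviour.
            mockFoo.Setup(m => m.VirtualMethod()).CallBase();

            string result = mockFoo.Object.VirtualMethod();
            Assert.AreEqual(expectedResultString, result);
        }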

    Read the article

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a user to a publication. Users can take multiple roles within multiple publications. Example table setup:

        "users"        : user_id, firstname, lastname
        "publications" : publication_id, name
        "link_writers" : user_id, publication_id
        "link_editors" : user_id, publication_id

    Current pseudo-SQL:

        SELECT * FROM (
            (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
            UNION
            (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
        ) AS dt
        JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

        SELECT dt2.user_id, dt2.publication_id, dt2.role FROM (
            (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id
             FROM link_writers)
            UNION
            (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id
             FROM link_editors)
        ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:

        "link_publishers"       : user_id, publisher_group_id
        "link_publisher_groups" : publisher_group_id, publication_id

    So in that instance, the query forming part of my UNION would be:

        SELECT 'publisher' AS role, lp.user_id, lpg.publication_id
        FROM link_publishers AS lp
        JOIN link_publisher_groups AS lpg
          ON lpg.publisher_group_id = lp.publisher_group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles?

    EDIT: (EXPLAIN screenshot omitted.) The bottom bit in red is the "WHERE firstname LIKE '%Jenkz%'"; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%'. Hence the large row count, but I think this is unavoidable - unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison inside the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams" etc.
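
    On the "index across concatenated fields" question: a B-tree index cannot help a leading-wildcard LIKE '%Jenkz%' in any case, but a FULLTEXT index can cover both name columns at once if word-prefix matching is acceptable. A hedged sketch, assuming MyISAM (or InnoDB on MySQL 5.6+, which is when InnoDB gained FULLTEXT support); ft_users_name is an illustrative index name:

        -- Index both name columns together
        ALTER TABLE users ADD FULLTEXT INDEX ft_users_name (firstname, lastname);

        -- 'Jenkz*' matches words starting with "Jenkz" in either column
        SELECT user_id
        FROM users
        WHERE MATCH(firstname, lastname) AGAINST('Jenkz*' IN BOOLEAN MODE);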

    Read the article

  • Directly Jump to another C++ function

    - by maligree
    I'm porting a small academic OS from TriCore to ARM Cortex (Thumb-2 instruction set). For the scheduler to work, I sometimes need to JUMP directly to another function without modifying the stack or the link register. On TriCore (or rather, on tricore-g++), this wrapper template (for any three-argument function) works:

        template< class A1, class A2, class A3 >
        inline void __attribute__((always_inline))
        JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 )
        {
            typedef void (* __attribute__((interrupt_handler)) Jump3)( A1, A2, A3);
            ( (Jump3)func )( a1, a2, a3 );
        }

        // example for using the template:
        JUMP3( superDispatch, this, me, next );

    This generates the assembler instruction J (a.k.a. JUMP) instead of CALL, leaving the stack and CSAs unchanged when jumping to the (otherwise normal) C++ function superDispatch(SchedulerImplementation* obj, Task::Id from, Task::Id to).

    Now I need equivalent behaviour on ARM Cortex (or rather, for arm-none-linux-gnueabi-g++), i.e. to generate a B (a.k.a. BRANCH) instruction instead of BLX (a.k.a. BRANCH with link and exchange). But there is no interrupt_handler attribute for arm-g++, and I could not find any equivalent attribute. So I tried to resort to asm volatile and writing the asm code directly:

        template< class A1, class A2, class A3 >
        inline void __attribute__((always_inline))
        JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 )
        {
            asm volatile (
                "mov.w r0, %1;"
                "mov.w r1, %2;"
                "mov.w r2, %3;"
                "b %0;"
                :
                : "r"(func), "r"(a1), "r"(a2), "r"(a3)
                : "r0", "r1", "r2" );
        }

    So far, so good - in my theory, at least. Thumb-2 requires function arguments to be passed in registers, i.e. r0..r2 in this case, so it should work. But then the linker dies with undefined reference to `r6' on the closing bracket of the asm statement, and I don't know what to make of it. OK, I'm not an expert in C++, and the asm syntax is not very straightforward... so has anybody got a hint for me? A hint to the correct __attribute__ for arm-g++ would be one way; a hint to fix the asm code would be another. Another way maybe would be to tell the compiler that a1..a3 should already be in the registers r0..r2 when the asm statement is entered (I looked into that a bit, but did not find any hint).
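
    For what it's worth, the plain b instruction only accepts a label, not a register, which is likely why the register operand leaked through to the linker as the symbol r6; bx does take a register. A hedged, untested sketch using GCC's local register variables to pin the arguments to the AAPCS argument registers:

        template <class A1, class A2, class A3>
        inline void __attribute__((always_inline))
        JUMP3(void (*func)(A1, A2, A3), A1 a1, A2 a2, A3 a3)
        {
            // Pin the arguments to r0..r2, then branch without link so the
            // stack and LR stay untouched. bx also handles the Thumb bit.
            register A1 arg0 asm("r0") = a1;
            register A2 arg1 asm("r1") = a2;
            register A3 arg2 asm("r2") = a3;
            asm volatile("bx %0"
                         :
                         : "r"(func), "r"(arg0), "r"(arg1), "r"(arg2));
        }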

    Read the article

  • Distinguishing between failure and end of file in read loop

    - by celtschk
    The idiomatic loop to read from an istream is

        while (thestream >> value) {
            // do something with value
        }

    Now this loop has one problem: it will not distinguish whether the loop terminated due to end of file or due to an error. For example, take the following test program:

        #include <iostream>
        #include <sstream>

        void readbools(std::istream& is)
        {
            bool b;
            while (is >> b) {
                std::cout << (b ? "T" : "F");
            }
            std::cout << " - " << is.good() << is.eof() << is.fail() << is.bad() << "\n";
        }

        void testread(std::string s)
        {
            std::istringstream is(s);
            is >> std::boolalpha;
            readbools(is);
        }

        int main()
        {
            testread("true false");
            testread("true false tr");
        }

    The first call to testread contains two valid bools, and therefore is not an error. The second call ends with a third, incomplete bool, and therefore is an error. Nevertheless, the behaviour of both is the same: in the first case, reading the boolean value fails because there is none, while in the second case it fails because it is incomplete, and in both cases EOF is hit. Indeed, the program above outputs the same line twice:

        TF - 0110
        TF - 0110

    To solve this problem, I thought of the following solution:

        while (thestream >> std::ws && !thestream.eof() && thestream >> value) {
            // do something with value
        }

    The idea is to detect regular EOF before actually trying to extract the value. Because there might be whitespace at the end of the file (which would not be an error, but would cause the read of the last item to not hit EOF), I first discard any whitespace (which cannot fail) and then test for EOF. Only if I'm not at the end of file do I try to read the value. For my example program, it indeed seems to work, and I get

        TF - 0100
        TF - 0110

    So in the first case (correct input), fail() returns false. Now my question: is this solution guaranteed to work, or was I just (un-)lucky that it happened to give the desired result? Also: is there a simpler (or, if my solution is wrong, a correct) way to get the desired result?
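
    Wrapped as a helper, the proposed idiom might look like this (a minimal sketch, not part of the original question): it returns false on clean EOF, and leaves fail() set only for genuinely malformed input:

        #include <istream>

        template <typename T>
        bool read_value(std::istream& is, T& value)
        {
            if (!(is >> std::ws) || is.eof())
                return false;                       // regular end of file
            return static_cast<bool>(is >> value);  // false means a real error
        }

        // Usage:
        //   while (read_value(thestream, value)) { /* use value */ }
        //   if (thestream.fail()) { /* malformed input */ }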

    Read the article

  • What is wrong with these jquery statements?

    - by Pandiya Chendur
    I can't get this jQuery statement to work on page load, but it works once I refresh the page (F5):

        <script type="text/javascript">
            var itemsPerPage = 5;
            $(document).ready(function() {
                getRecordspage(0, itemsPerPage);
                var maxvalues = $("#HfId").val();
                alert(maxvalues);
                $(".pager").pagination(maxvalues, {
                    // my syntax
                });
            });
        </script>

    On the initial page load, alert(maxvalues) shows nothing. But when I refresh, it shows the value of maxvalues from the hidden field HfId, because it is assigned in the function getRecordspage. Why this strange behaviour? Any suggestion?

    EDIT:

        function getRecordspage(curPage) {
            $.ajax({
                type: "POST",
                url: "Default.aspx/GetRecords",
                data: "{'currentPage':" + (curPage + 1) + ",'pagesize':5}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(jsonObj) {
                    $("#ResultsDiv").empty();
                    $("#HfId").val("");
                    var strarr = jsonObj.d.split('##');
                    var jsob = jQuery.parseJSON(strarr[0]);
                    var divs = '';
                    $.each(jsob.Table, function(i, employee) {
                        divs += '<div class="resultsdiv"><br />'
                              + '<span class="resultName">' + employee.Emp_Name + '</span>'
                              + '<span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;'
                              + '<span class="resultfieldvalues">' + employee.Desig_Name + '</span><br /><br />'
                              + '<span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;'
                              + '<span class="resultfieldvalues">' + employee.SalaryBasis + '</span>'
                              + '<span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;'
                              + '<span class="resultfieldvalues">' + employee.FixedSalary + '</span>'
                              + '<span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;'
                              + '<span class="resultfieldvalues">' + employee.Address + '</span></div>';
                    });
                    $("#ResultsDiv").append(divs);
                    $(".resultsdiv:even").addClass("resultseven");
                    $(".resultsdiv").hover(function() {
                        $(this).addClass("resultshover");
                    }, function() {
                        $(this).removeClass("resultshover");
                    });
                    $("#HfId").val(strarr[1]);
                }
            });
        }
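
    A sketch of the usual fix: $.ajax is asynchronous and returns before success has run, so #HfId is still empty when pagination is initialised. Initialising the pager from inside the success callback lines the two up; the onDone parameter below is an addition, not part of the original code:

        $(document).ready(function() {
            getRecordspage(0, itemsPerPage, function() {
                // Runs only after the AJAX response has populated #HfId.
                var maxvalues = $("#HfId").val();
                $(".pager").pagination(maxvalues, {
                    // pagination options as before
                });
            });
        });

        function getRecordspage(curPage, pageSize, onDone) {
            $.ajax({
                // ...same settings as above...
                success: function(jsonObj) {
                    // ...existing rendering code...
                    $("#HfId").val(strarr[1]);
                    if (onDone) onDone();   // hand control back once the value exists
                }
            });
        }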

    Read the article

  • Any suggestions for improvement on this style for BDD/TDD?

    - by Sean B
    I was tinkering with doing the setups for our unit test specifications, which go like:

        Specification for SUT when behaviour X happens in scenario Y
        Given that this thing
        And also this other thing
        When I do X...
        Then It should do ...
        And It should also do ...

    I wrapped each of the steps of the GivenThat in Actions. Any feedback on whether separating the steps with Actions is good, bad, or whether there is a better way to make the GivenThat clear?

        /// <summary>
        /// Given a product is setup for injection
        /// And Product Image Factory Is Stubbed();
        /// And Product Size Is Stubbed();
        /// And Drawing Scale Is Stubbed();
        /// And Product Type Is Stubbed();
        /// </summary>
        protected override void GivenThat()
        {
            base.GivenThat();

            Action givenThatAProductIsSetupforInjection = () =>
            {
                var randomGenerator = new RandomGenerator();
                this.Position = randomGenerator.Generate<Point>();
                this.Product = new Diffuser
                {
                    Size = new RectangularProductSize(2.Inches()),
                    Position = this.Position,
                    ProductType = Dep<IProductType>()
                };
            };

            Action andProductImageFactoryIsStubbed = () =>
                Dep<IProductBitmapImageFactory>().Stub(f => f.GetInstance(Dep<IProductType>())).Return(ExpectedBitmapImage);

            Action andProductSizeIsStubbed = () =>
            {
                Stub<IDisplacementProduct, IProductSize>(p => p.Size);
                var productBounds = new ProductBounds(Width.Feet(), Height.Feet());
                Dep<IProductSize>().Stub(s => s.Bounds).Return(productBounds);
            };

            Action andDrawingScaleIsStubbed = () =>
                Dep<IDrawingScale>().Stub(s => s.PixelsPerFoot).Return(PixelsPerFoot);

            Action andProductTypeIsStubbed = () =>
                Stub<IDisplacementProduct, IProductType>(p => p.ProductType);

            givenThatAProductIsSetupforInjection();
            andProductImageFactoryIsStubbed();
            andProductSizeIsStubbed();
            andDrawingScaleIsStubbed();
            andProductTypeIsStubbed();
        }
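
    For comparison, one commonly suggested alternative (a sketch, not from the original post) replaces the local Actions with private methods; the composed GivenThat reads the same, but a failure's stack trace then names the step:

        protected override void GivenThat()
        {
            base.GivenThat();
            GivenThatAProductIsSetupForInjection();
            AndProductImageFactoryIsStubbed();
            AndProductSizeIsStubbed();
            AndDrawingScaleIsStubbed();
            AndProductTypeIsStubbed();
        }

        private void GivenThatAProductIsSetupForInjection()
        {
            var randomGenerator = new RandomGenerator();
            this.Position = randomGenerator.Generate<Point>();
            this.Product = new Diffuser
            {
                Size = new RectangularProductSize(2.Inches()),
                Position = this.Position,
                ProductType = Dep<IProductType>()
            };
        }

        // ...one private method per remaining step, with the same bodies
        // as the Actions above...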

    Read the article

  • PHP Changing Class Variables Outside of Class

    - by Jamie Bicknell
    Apologies for the wording of this question; I'm having difficulty explaining what I'm after, but hopefully it makes sense. Let's say I have a class, and I wish to pass a variable through one of its methods; then I have another method which outputs this variable. That's all fine, but what I'm after is that if I update the variable which was originally passed, and do this outside the class methods, the change should be reflected in the class. I've created a very basic example:

        class Test
        {
            private $var = '';

            function setVar($input)
            {
                $this->var = $input;
            }

            function getVar()
            {
                echo 'Var = ' . $this->var . '<br />';
            }
        }

    If I run

        $test = new Test();
        $string = 'Howdy';
        $test->setVar($string);
        $test->getVar();

    I get

        Var = Howdy

    However, this is the flow I would like:

        $test = new Test();
        $test->setVar($string);
        $string = 'Hello';
        $test->getVar();
        $string = 'Goodbye';
        $test->getVar();

    Expected output:

        Var = Hello
        Var = Goodbye

    I don't know the correct name for this, and I've tried using references to the original variable but with no luck. I've come across this in the past with PDO prepared statements - see Example #2:

        $stmt = $dbh->prepare("INSERT INTO REGISTRY (name, value) VALUES (?, ?)");
        $stmt->bindParam(1, $name);
        $stmt->bindParam(2, $value);

        // insert one row
        $name = 'one';
        $value = 1;
        $stmt->execute();

        // insert another row with different values
        $name = 'two';
        $value = 2;
        $stmt->execute();

    I know I can change the variable to public and do the following, but it isn't quite the same as how the PDO class handles it, and I'm really looking to mimic that behaviour:

        $test = new Test();
        $test->setVar($string);
        $test->var = 'Hello';
        $test->getVar();
        $test->var = 'Goodbye';
        $test->getVar();

    Any help, ideas, pointers, or advice would be greatly appreciated. Thanks.
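
    For reference, the bindParam-style behaviour can be mimicked by accepting and storing the argument by reference; this is a sketch of that change to the Test class above (note both ampersands are required):

        class Test
        {
            private $var = '';

            // Accept by reference AND assign by reference, so the property
            // stays bound to the caller's variable, like bindParam() does.
            function setVar(&$input)
            {
                $this->var = &$input;
            }

            function getVar()
            {
                echo 'Var = ' . $this->var . '<br />';
            }
        }

        $test = new Test();
        $string = 'Howdy';
        $test->setVar($string);
        $string = 'Hello';
        $test->getVar();   // Var = Hello
        $string = 'Goodbye';
        $test->getVar();   // Var = Goodbye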

    Read the article

  • C++: Why does gcc prefer non-const over const when accessing operator[]?

    - by JonasW
    This question might be more appropriately asked about C++ in general, but as I am using gcc on Linux, that's the context. Consider the following program:

        #include <iostream>
        #include <map>
        #include <string>

        using namespace std;

        template <typename TKey, typename TValue>
        class Dictionary
        {
        public:
            map<TKey, TValue> internal;

            TValue & operator[](TKey const & key)
            {
                cout << "operator[] with key " << key << " called " << endl;
                return internal[key];
            }

            TValue const & operator[](TKey const & key) const
            {
                cout << "operator[] const with key " << key << " called " << endl;
                return internal.at(key);
            }
        };

        int main(int argc, char* argv[])
        {
            Dictionary<string, string> dict;
            dict["1"] = "one";
            cout << "first one: " << dict["1"] << endl;
            return 0;
        }

    When executing the program, the output is:

        operator[] with key 1 called
        operator[] with key 1 called
        first one: one

    What I would like is to have the compiler choose the operator[] const method in the second call instead. The reason is that without having used dict["1"] before, the call to operator[] causes the internal map to create the data that does not exist, even if the only thing I wanted was some debugging output - which of course is a fatal application error. The behaviour I am looking for would be something like the C# indexer, which has a get and a set operation, and where you can throw an exception if the getter tries to access something that doesn't exist:

        class MyDictionary<TKey, TVal>
        {
            private Dictionary<TKey, TVal> dict = new Dictionary<TKey, TVal>();

            public TVal this[TKey idx]
            {
                get
                {
                    if (!dict.ContainsKey(idx))
                        throw new KeyNotFoundException("...");
                    return dict[idx];
                }
                set
                {
                    dict[idx] = value;
                }
            }
        }

    Thus, I wonder why gcc prefers the non-const call over the const call when non-const access is not required.
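
    The usual workaround follows from the fact that overload resolution picks the const overload only when the object expression itself is const; it never falls back to the const version just because the result is only read. A minimal sketch against the code above:

        const Dictionary<string, string>& view = dict;   // const alias of dict

        dict["1"] = "one";                          // non-const operator[]: may insert
        cout << "first one: " << view["1"] << endl; // const operator[]: calls at()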

    Read the article

  • Large memory chunk not garbage collected

    - by Niels
    In a hunt for a memory leak in my app, I chased down a behaviour I can't understand. I allocate a large memory block, but it doesn't get garbage-collected, resulting in an OOM, unless I explicitly null the reference in onDestroy.

    In this example I have two almost identical activities that switch between each other. Both have a single button. On pressing the button, MainActivity starts OOMActivity, and OOMActivity returns by calling finish(). After pressing the buttons a few times, Android throws an OOMException. If I add onDestroy to OOMActivity and explicitly null the reference to the memory chunk, I can see in the log that the memory is correctly freed. Why doesn't the memory get freed automatically without the nulling?

    MainActivity:

        package com.example.oom;

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;

        public class MainActivity extends Activity implements OnClickListener {
            private int buttonId;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                System.gc();
                Button OOMButton = new Button(this);
                OOMButton.setText("OOM");
                buttonId = OOMButton.getId();
                setContentView(OOMButton);
                OOMButton.setOnClickListener(this);
            }

            @Override
            public void onClick(View v) {
                if (v.getId() == buttonId) {
                    Intent leakIntent = new Intent(this, OOMActivity.class);
                    startActivity(leakIntent);
                }
            }
        }

    OOMActivity:

        public class OOMActivity extends Activity implements OnClickListener {
            private static final int WASTE_SIZE = 20000000;
            private byte[] waste;
            private int buttonId;

            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                Button BackButton = new Button(this);
                BackButton.setText("Back");
                buttonId = BackButton.getId();
                setContentView(BackButton);
                BackButton.setOnClickListener(this);
                waste = new byte[WASTE_SIZE];
            }

            public void onClick(View view) {
                if (view.getId() == buttonId) {
                    finish();
                }
            }
        }
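
    For reference, the workaround described above looks like this in OOMActivity; it makes the array collectable as soon as the activity is destroyed, even if the Activity object itself lingers for a while:

        @Override
        protected void onDestroy() {
            super.onDestroy();
            waste = null;   // drop the only reference to the 20 MB array
        }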

    Read the article

  • Navigating to rootViewController of non-visible UINavigationController CRASH

    - by Bertie
    First off, I'm not sure if this is exactly the issue, but it appears to be central to the problem. There are vast gaps in what I know, and this may be something very obvious.

    I have a split view controller with various master and detail views. From one 'menu' branch I want to return to the root view of both the master and detail navigation controllers when the user taps a 'Done' button. Initially I had the button positioned on the navigation bar of the master view. This is the code I was using:

        - (IBAction)doneClicked:(id)sender
        {
            UINavigationController *detailNav = [self.splitViewController.viewControllers objectAtIndex:1];
            NSArray *allDetailViewControllers = detailNav.viewControllers;
            HomePage *destinationDetailVC = [allDetailViewControllers objectAtIndex:0];
            destinationDetailVC.splitViewBarButtonItem = self.splitViewButton;
            [detailNav popToRootViewControllerAnimated:NO];
            [self.navigationController popToRootViewControllerAnimated:NO];
        }

    That works fine, but if the device is in portrait mode it means the user has to open the menus to access the button. So I decided to put the 'Done' button onto the detail view's navigation bar instead. Using a delegate declared in the detail VC and adopted by the master VC, I am now using this method:

        - (void)theUserClickedDoneInTheDetailView:(DetailVC *)controller withButton:(UIBarButtonItem *)splitViewButton
        {
            UINavigationController *detailNav = [self.splitViewController.viewControllers objectAtIndex:1];
            NSArray *allDetailViewControllers = detailNav.viewControllers;
            HomePage *destinationDetailVC = [allDetailViewControllers objectAtIndex:0];
            destinationDetailVC.splitViewBarButtonItem = splitViewButton;
            [detailNav popToRootViewControllerAnimated:NO];
            [self.navigationController popToRootViewControllerAnimated:NO];
        }

    Both methods are in the MasterVC.m file. The only difference between the two is that one is passed a UIBarButtonItem to use, and the other has already had it set in the detail view. That seems to be working, because I do get a button.

    When the device is in landscape mode, both methods work fine; all behaviour is as expected. When the device is in portrait, the first method still works fine. The second appears to work fine, and the detail view becomes the root view with the button for the menus present, BUT as soon as I turn the device to landscape it crashes. The only thing I can think of is that it is because the master view is hidden. Can anyone help...?

    Read the article

  • Is it OK to put a standard, pure C header #include directive inside a namespace?

    - by mic_e
    I've got a project with a class log in the global namespace (::log). So, naturally, after #include <cmath>, the compiler gives an error message each time I try to instantiate an object of my log class, because <cmath> pollutes the global namespace with lots of three-letter functions, one of them being the logarithm function log(). So there are three possible solutions, each having its own ugly side effects.

    1. Move the log class to its own namespace and always access it with its fully qualified name. I really want to avoid this because the logger should be as convenient as possible to use.

    2. Write a mathwrapper.cpp file which is the only file in the project that includes <cmath>, and make all the required <cmath> functions available through wrappers in a namespace math. I don't want to use this approach because I would have to write a wrapper for every single required math function, and it would add an additional call penalty (cancelled out partially by the -flto compiler flag).

    3. The solution I'm currently considering: replace

        #include <cmath>

    by

        namespace math {
            #include "math.h"
        }

    and then call the logarithm function via math::log(). I have tried it out, and it does indeed compile, link and run as expected. It does, however, have multiple downsides:

    - It's (obviously) impossible to use <cmath>, because the <cmath> code accesses the functions by their fully qualified names, and it's deprecated to use <math.h> in C++.
    - I've got a really, really bad feeling about it, like I'm going to get attacked and eaten alive by raptors.

    So my question is: Is there any recommendation/convention/etc. that forbids putting include directives in namespaces? Could anything go wrong with different C standard library implementations (I use glibc), different compilers (I use g++ 4.7, -std=c++11), or linking? Have you ever tried doing this? Are there any alternate ways to banish the math functions from the global namespace?

    I've found several similar questions on Stack Overflow, but most were about including other C++ headers, which obviously is a bad idea, and those that weren't made contradictory statements about linking behaviour for C libraries. Also, would it be beneficial to additionally put the #include <math.h> inside extern "C" {}?

    Read the article

  • <o:massAttribute> affects other components in the same <h:panelGrid>

    - by Ignacio Ayuste
    I'm using the new version of OmniFaces 1.8.1, and in particular I've started to use the new tag <o:massAttribute>. Basically, I have the following form with conditionally rendered and disabled fields:

        <h:form id="formABMProducto">
            <h:panelGrid id="datosProducto" columns="4">
                <o:massAttribute name="rendered" value="#{cc.attrs.page != 'baja'}">
                    <h:outputLabel for="codigo" ... />
                    <h:inputText id="codigo" ... />
                    <rich:message for="codigo" />
                    <h:panelGroup />
                </o:massAttribute>
                <o:massAttribute name="rendered" value="#{cc.attrs.page eq 'baja'}">
                    <h:outputLabel for="codigo" ... />
                    <rich:autocomplete id="codigoProducto" ... />
                    <rich:message for="codigo" />
                    <h:panelGroup />
                </o:massAttribute>
                <o:massAttribute name="disabled" value="#{cc.attrs.disableComponents}">
                    <h:outputLabel for="nombre" ... />
                    <h:inputTextarea id="nombre" ... />
                    <rich:message for="nombre" />
                    <span />
                    <h:outputLabel for="descripcion" ... />
                    <h:inputTextarea id="descripcion" ... />
                    <rich:message for="descripcion" />
                    <span />
                </o:massAttribute>
                <h:outputLabel value="#{msgs['producto.abm.panel.proveedor.tipo']}" for="CmbTipoProveedor"/>
                <rich:select id="CmbTipoProveedor" ... />
                <rich:message for="CmbTipoProveedor" />
                <a4j:commandButton ... />
            </h:panelGrid>
        </h:form>

    However, when I open the page, the third <o:massAttribute> is also disabling other input fields, codigo and codigoProducto. I think this isn't the expected behaviour.

    Read the article

  • Parameter pack argument consumption

    - by yuri kilochek
    It is possible to get the first element of a parameter pack, like this:

        template <typename... Elements>
        struct type_list
        {
        };

        template <typename TypeList>
        struct type_list_first_element
        {
        };

        template <typename FirstElement, typename... OtherElements>
        struct type_list_first_element<type_list<FirstElement, OtherElements...>>
        {
            typedef FirstElement type;
        };

        int main()
        {
            typedef type_list<int, float, char> list;
            typedef type_list_first_element<list>::type element;
            return 0;
        }

    but it is not possible to similarly get the last element, like this:

        template <typename... Elements>
        struct type_list
        {
        };

        template <typename TypeList>
        struct type_list_last_element
        {
        };

        template <typename LastElement, typename... OtherElements>
        struct type_list_last_element<type_list<OtherElements..., LastElement>>
        {
            typedef LastElement type;
        };

        int main()
        {
            typedef type_list<int, float, char> list;
            typedef type_list_last_element<list>::type element;
            return 0;
        }

    with gcc 4.7.1 complaining:

        error: 'type' in 'struct type_list_last_element<type_list<int, float, char>>' does not name a type

    Which paragraphs from the standard describe this behaviour? It seems to me that template parameter packs are greedy, in the sense that they consume all matching arguments, which in this case means that OtherElements consumes all three arguments (int, float and char) and then there is nothing left for LastElement, so the compilation fails. Am I correct in that assumption?

    EDIT: To clarify: I am not asking how to extract the last element from the parameter pack; I know how to do that. What I actually want is to pick the pack apart from the back as opposed to the front, and as such recursing all the way to the back for each element would be inefficient. Apparently reversing the sequence beforehand is the most sensible choice.
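
    The relevant wording appears to be in [temp.deduct.type] (14.8.2.5/9 in C++11): if the template argument list contains a pack expansion that is not the last template argument, the entire template argument list is a non-deduced context, so type_list<OtherElements..., LastElement> simply never matches during partial specialization matching. For completeness, the usual O(n) recursive workaround (which the EDIT above explains is exactly what the author wants to avoid) looks like this:

        template <typename TypeList>
        struct type_list_last_element;

        // Base case: a single-element list; the last element is that element.
        template <typename OnlyElement>
        struct type_list_last_element<type_list<OnlyElement>>
        {
            typedef OnlyElement type;
        };

        // Recursive case: peel off the first element and recurse.
        template <typename FirstElement, typename... OtherElements>
        struct type_list_last_element<type_list<FirstElement, OtherElements...>>
            : type_list_last_element<type_list<OtherElements...>>
        {
        };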

    Read the article

  • Restrict sprite movement to vertical and horizontal

    - by Daniel Granger
    I have been battling with this for some time and my noob brain can't quite work it out. I have a standard tile map and currently use the following code to move my enemy sprite around the map:

        -(void) movePlayer:(ccTime)deltaTime
        {
            if (CGPointEqualToPoint(self.position, requestedPosition)) return;

            float step = kPlayerSpeed * deltaTime;
            float dist = ccpDistance(self.position, requestedPosition);
            CGPoint vectorBetweenAB = ccpSub(self.position, requestedPosition);

            if (dist <= step) {
                self.position = requestedPosition;
                [self popPosition];
            } else {
                CGPoint normVectorBetweenAB = ccpNormalize(vectorBetweenAB);
                CGPoint movementVectorForThisFrame = ccpMult(normVectorBetweenAB, step);

                if (abs(vectorBetweenAB.x) > abs(vectorBetweenAB.y)) {
                    if (vectorBetweenAB.x > 0) {
                        [self runAnimation:walkLeft];
                    } else {
                        [self runAnimation:walkRight];
                    }
                } else {
                    if (vectorBetweenAB.y > 0) {
                        [self runAnimation:walkDown];
                    } else {
                        [self runAnimation:walkUp];
                    }
                }

                if (self.position.x > movementVectorForThisFrame.x) {
                    movementVectorForThisFrame.x = -movementVectorForThisFrame.x;
                }
                if (self.position.y > movementVectorForThisFrame.y) {
                    movementVectorForThisFrame.y = -movementVectorForThisFrame.y;
                }

                self.position = ccpAdd(self.position, movementVectorForThisFrame);
            }
        }

    movePlayer: is called by the class's updateWithDeltaTime: method. The ivar requestedPosition is also set in updateWithDeltaTime:; it basically gets the next point to move to out of a queue. These points can be anywhere on the map, so if they are in a diagonal direction from the enemy, the enemy sprite will move directly to that point.

    But how do I change the above code to restrict the movement to vertical and horizontal movement only, so that the enemy's movement 'staircases' its way along a diagonal path, taking the Manhattan distance (I think it's called)? This is shown by my crude drawing below: S is the start point, F the finish, and the numbers mark the intermediate points along the path that create the staircase-type diagonal movement.

    Finally, I intend to be able to toggle this behaviour on and off, so that I can choose whether the enemy moves freely around the map or is restricted to this horizontal/vertical movement only.

        | | | | | | | | | |
        | | | | | | | | | |
        | | |F| | | | | | |
        | | |5|4| | | | | |
        | | | |3|2| | | | |
        | | | | |1|S| | | |
        | | | | | | | | | |
        | | | | | | | | | |
        | | | | | | | | | |
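
    One possible approach (a sketch, untested; restrictToAxes is a hypothetical BOOL ivar for the toggle): collapse each frame's movement onto the dominant axis, so the sprite alternates horizontal and vertical legs and walks a staircase along the diagonal. This would sit just before the final position update in the method above:

        // Collapse the per-frame movement onto one axis when restricted.
        if (restrictToAxes) {
            if (fabsf(vectorBetweenAB.x) > fabsf(vectorBetweenAB.y)) {
                movementVectorForThisFrame.y = 0.0f;   // horizontal leg
            } else {
                movementVectorForThisFrame.x = 0.0f;   // vertical leg
            }
        }
        self.position = ccpAdd(self.position, movementVectorForThisFrame);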

    Read the article

  • Passing a panel widget to a function

    - by user2939801
    I have written a quite complex script which calls a server handler to refresh elements in a grid at the press of a button. For code reuse and consistent behaviour, I want to call that server handler directly during the initial painting of the grid.

    When the server handler gets called by clicking on the button, all expected widgets are available and can be queried with e.parameter.widget etc. When I call the function directly and pass it the panel variable, the value of e is just 'AbsolutePanel'. Is there some way I can emulate the addCallbackElement way of passing the entire panel, and all the widgets it contains, to the function? Or a way of automatically firing a server handler on script start?

    Please forgive any syntax errors below; I have pruned 500 lines of code down to the pertinent bits! Thanks, Tony

        function doGet() {
            var app = UiApp.createApplication();
            var mainPanel = app.createAbsolutePanel();
            var monthsAbbr = ['Jan.', 'Feb.', 'Mar.', 'Apr.', 'May.', 'Jun.',
                              'Jul.', 'Aug.', 'Sep.', 'Oct.', 'Nov.', 'Dec.'];
            var Dates = Array();
            var period = 5;
            var dateHidden = Array();
            var dayOfMonth = new Date(((period * 28) + 15887) * 86400000);
            var dateString = '';
            var dayOfWeek = 0;

            for (var i = 0; i < 84; i++) {
                dateString = dayOfMonth.getDate() + ' ' + monthsAbbr[dayOfMonth.getMonth()] + ' ' + (dayOfMonth.getFullYear() - 2000);
                Dates[i] = dateString;
                dateHidden[i] = app.createHidden('dates' + i, dateString).setId('dates' + i);
                mainPanel.add(dateHidden[i]);
                dayOfMonth = new Date(dayOfMonth.getTime() + 86400000);
            }

            var buttonReset = app.createButton('Reset').setId('buttonReset');
            var handlerChange = app.createServerHandler('myHandlerChange');
            handlerChange.addCallbackElement(mainPanel);
            mainPanel.add(buttonReset.addChangeHandler(handlerChange));
            app.add(mainPanel);

            myHandlerChange(mainPanel);
            return app;
        }

        function myHandlerChange(e) {
            var app = UiApp.getActiveApplication();
            Logger.log('Here are the widgets passed into the function: ' + Utilities.jsonStringify(e));
            return app;
        }
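
    A hedged idea (not from the original script): rather than passing the panel itself, build a plain object shaped like the event a real ServerHandler receives, so myHandlerChange(e) can read e.parameter identically in both paths. callHandlerDirectly and its dates argument are hypothetical:

        function callHandlerDirectly(dates) {
            var fakeEvent = { parameter: {} };
            for (var i = 0; i < dates.length; i++) {
                fakeEvent.parameter['dates' + i] = dates[i];  // mirror the hidden fields
            }
            myHandlerChange(fakeEvent);
        }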

    Read the article

  • BIND DNS Master with Zerigo Slaves - BIND won't update the slave servers

    - by Anthony
    I've tried to resolve this myself and have looked through Google and Stack but haven't found the answer I'm looking for. Currently, on a VPS server, I have BIND installed as a MASTER DNS server. I use Zerigo's DNS service as SLAVE servers for public use. The master doesn't receive queries; its job is simply to create and modify DNS entries locally, which the slaves then serve.

    Here is an excerpt of the BIND log (I set it to INFO event logging):

        14-Apr-2012 23:00:00.234 general: info: received control channel command 'reload'
        14-Apr-2012 23:00:00.234 general: info: loading configuration from 'C:\DNS\BIND\etc\named.conf'
        14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv4 port range: [1024, 65535]
        14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv6 port range: [1024, 65535]
        14-Apr-2012 23:00:00.250 general: info: reloading configuration succeeded
        14-Apr-2012 23:00:00.250 general: info: reloading zones succeeded
        14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR started
        14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR ended
        14-Apr-2012 23:16:23.015 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR started
        14-Apr-2012 23:16:23.031 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR ended

    As you can see, there is no problem with Zerigo's DNS servers requesting new DNS data - when I force a reload, that is; I don't believe, given the way they are set up as slaves, that they poll for changes. The problem is the other direction: the MASTER is not updating the SLAVE servers when reload is run (on the master) by a batch job on a 15-minute timer.

    Below is my named.conf:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "REMOVED FOR SECURITY";
        };

        acl "trusted" {
            174.36.24.251/32;
            68.71.141.22/32;
            localhost;
        };

        options {
            version "not currently available";
            directory "C:\DNS\BIND\etc";
            allow-query { trusted; };
        };

        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

        logging {
            channel simple_log {
                file "C:\DNS\BIND\logging\bind.log" versions 3 size 5m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
        };

        zone "ajmakeup.com" in {
            type master;
            file "c:\dns\BIND\zones\db.ajmakeup.com.txt";
            allow-transfer { 174.36.24.251; 68.71.141.22; };
            allow-update { none; };
        };

    Does my problem have something to do with 'allow-query' under options? You will notice that 'allow-transfer' is set explicitly on each DNS zone. In case you need it, here is my rndc.conf:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "REMOVED FOR SECURITY";
        };

        options {
            default-key "rndc-key";
            default-server 127.0.0.1;
            default-port 953;
        };

        server localhost {
            key "rndc-key";
        };

    Note: I am using WebsitePanel as my hosting panel, which is why it creates the zone entries the way it does. Although I know I can change this behaviour, I do not wish to do so, nor do I believe it is the root of the problem. Thanks for your help.
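
    For reference, the standard way to make a BIND master push changes to its slaves is DNS NOTIFY. A sketch of the zone with it enabled is below; note that a NOTIFY only helps if the zone's serial number is incremented on each change, otherwise the slaves will conclude they are already up to date:

        zone "ajmakeup.com" in {
            type master;
            file "c:\dns\BIND\zones\db.ajmakeup.com.txt";
            notify yes;
            also-notify { 174.36.24.251; 68.71.141.22; };
            allow-transfer { 174.36.24.251; 68.71.141.22; };
            allow-update { none; };
        };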

    Read the article

  • Permissions restoring from Time Machine - Finder copy vs "cp" copy

    - by Ben Challenor
    Note: this question was starting to sprawl, so I rewrote it.

    I have a folder that I'm trying to restore from a Time Machine backup. Using cp -R works fine, but certain folders cannot be restored with either the Time Machine UI or Finder. Other users have reported similar errors, and the cp -R workaround was suggested (e.g. Restoring from Time Machine - Permissions Error). But I wanted to understand:

    1. Why cp -R works when the Finder and the Time Machine UI do not.
    2. Whether I could prevent the errors by changing file permissions before the backup.

    There do indeed seem to be some permissions that Finder works with and some that it does not. I've narrowed the errors down to folders with the user ben (that's me) and the group wheel. Here's a simplified reproduction. I have four folders with the owner/group combinations I've seen so far:

        ben ~/Desktop/test $ ls -lea
        total 16
        drwxr-xr-x   7 ben   staff   238 27 Nov 14:31 .
        drwx------+ 17 ben   staff   578 27 Nov 14:29 ..
         0: group:everyone deny delete
        -rw-r--r--@  1 ben   staff  6148 27 Nov 14:31 .DS_Store
        drwxr-xr-x   3 ben   staff   102 27 Nov 14:30 ben-staff
        drwxr-xr-x   3 ben   wheel   102 27 Nov 14:30 ben-wheel
        drwxr-xr-x   3 root  admin   102 27 Nov 14:31 root-admin
        drwxr-xr-x   3 root  wheel   102 27 Nov 14:31 root-wheel

    Each contains a single file called file with the same owner/group:

        ben ~/Desktop/test $ cd ben-staff
        ben ~/Desktop/test/ben-staff $ ls -lea
        total 0
        drwxr-xr-x  3 ben  staff  102 27 Nov 14:30 .
        drwxr-xr-x  7 ben  staff  238 27 Nov 14:31 ..
        -rw-r--r--  1 ben  staff    0 27 Nov 14:30 file

    In the backup, they look like this:

        ben /Volumes/Deimos/Backups.backupdb/Ben’s MacBook Air/Latest/Macintosh HD/Users/ben/Desktop/test $ ls -leA
        total 16
        -rw-r--r--@ 1 ben   staff  6148 27 Nov 14:34 .DS_Store
         0: group:everyone deny write,delete,append,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 ben   staff   102 27 Nov 14:51 ben-staff
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 ben   wheel   102 27 Nov 14:51 ben-wheel
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 root  admin   102 27 Nov 14:52 root-admin
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 root  wheel   102 27 Nov 14:52 root-wheel
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown

    Of these, ben-staff can be restored with Finder without errors. root-wheel and root-admin ask for my password and then restore without errors. But ben-wheel does not prompt for my password and gives the error:

        The operation can’t be completed because you don’t have permission to access “file”.

    Interestingly, I can restore the file from this folder by dragging it directly to my local drive (instead of dragging its parent folder), but when I do so its permissions are changed to ben/staff. Here are the permissions after the restore for the three folders that worked correctly, and the file from ben-wheel that was changed to ben/staff:

        ben ~/Desktop/test-restore $ ls -leA
        total 16
        -rw-r--r--@ 1 ben   staff  6148 27 Nov 14:46 .DS_Store
        drwxr-xr-x  3 ben   staff   102 27 Nov 14:30 ben-staff
        -rw-r--r--  1 ben   staff     0 27 Nov 14:30 file
        drwxr-xr-x  3 root  admin   102 27 Nov 14:31 root-admin
        drwxr-xr-x  3 root  wheel   102 27 Nov 14:31 root-wheel

    Can anyone explain this behaviour? Why do Finder and the Time Machine UI break with the ben/wheel permissions? And why does cp -R work (even without sudo)?

    Read the article

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API implemented using ServiceStack which is hosted in IIS. While performing load testing of the API, we discovered that the response times are good but deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users the average response times sit below 500 ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrents per server. However, as soon as we increase the number of total concurrent users, we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    The memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. A perfmon screenshot (omitted here) showed some key counters during a load test with a total of 10,000 concurrent users; the highlighted counter was requests/second, and towards the right of the capture the requests-per-second graph becomes really erratic. This is the main indicator for slow response times: as soon as we see that pattern, we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
            <applicationPool
                maxConcurrentRequestsPerCPU="5000"
                maxConcurrentThreadsPerCPU="0"
                requestQueueLimit="5000" />
        </system.web>

    More details: the purpose of the API is to combine data from various external sources and return the result as JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource will fetch all the data required, and any subsequent requests for the same resource will get results from the cache. We have a 'cache runner' implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch the data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?
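
    One speculative check on that last question (not from the original post): sample the CLR thread pool's headroom while the load test runs. If the available worker count collapses to zero as latency spikes, the pool, not CPU or RAM, is the bottleneck, and raising its floor avoids the slow thread-injection ramp. The numbers below are illustrative only:

        // Sample periodically during the load test.
        int availableWorkers, availableIo;
        ThreadPool.GetAvailableThreads(out availableWorkers, out availableIo);
        int minWorkers, minIo;
        ThreadPool.GetMinThreads(out minWorkers, out minIo);
        Console.WriteLine("workers available: {0} (min {1})", availableWorkers, minWorkers);

        // If headroom hits zero under load, try raising the floor (tune under test):
        ThreadPool.SetMinThreads(200, 200);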

    Read the article

  • Apache doesn't run multiple requests

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC:

        #!/usr/bin/python -u
        import cgi, errno, fcntl, os, os.path, sys, time

        print("""Content-Type: text/html; charset=utf-8

        <!doctype html>
        <html lang="en">
            <head>
                <meta charset="utf-8" />
                <title>IPC test</title>
            </head>
            <body>
        """)

        ftempname = '/tmp/ipc-messages'
        master = not os.path.exists(ftempname)
        if master:
            fmode = 'w'
        else:
            fmode = 'r'

        print('<p>Opening file</p>')
        sys.stdout.flush()
        ftemp = open(ftempname, fmode)
        print('<p>File opened</p>')

        if master:
            print('<p>Operating as master</p>')
            sys.stdout.flush()
            for i in range(10):
                print('<p>' + str(i) + '</p>')
                sys.stdout.flush()
                time.sleep(1)
            ftemp.close()
            os.remove(ftempname)
        else:
            print('<p>Operating as a slave</p>')
            ftemp.close()

        print("""
            </body>
        </html>""")

    The 'server-push' portion works; that is, for the first request I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started; they only start after the first request has finished. Any ideas on why, and how to fix it?

    Edit: I see the same non-concurrent behaviour with vanilla PHP, running this:

        <!doctype html>
        <html lang="en"> <!-- $Id: $ -->
            <head>
                <meta charset="utf-8" />
                <title>IPC test</title>
            </head>
            <body>
                <p>
                <?php
                    function echofl($str)
                    {
                        echo $str . "</b>\n";
                        ob_flush();
                        flush();
                    }

                    define('tempfn', '/tmp/emailsync');

                    if (file_exists(tempfn))
                        $perms = 'r+';
                    else
                        $perms = 'w';

                    assert($fsync = fopen(tempfn, $perms));
                    assert(chmod(tempfn, 0600));

                    if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) {
                        assert($wouldblock);
                        $master = false;
                    } else {
                        $master = true;
                    }

                    if ($master) {
                        echofl('Running as master.');
                        assert(fwrite($fsync, 'content') != false);
                        assert(sleep(5) == 0);
                        assert(flock($fsync, LOCK_UN));
                    } else {
                        echofl('Running as slave.');
                        echofl(fgets($fsync));
                    }

                    assert(fclose($fsync));
                    echofl('Done.');
                ?>
                </p>
            </body>
        </html>

    Read the article

  • GNU/Linux swapping blocks system

    - by Ole Tange
    I have used GNU/Linux on systems from 4 MB RAM to 512 GB RAM. When they start swapping, most of the time you can still log in and kill off the offending process - you just have to be 100-1000 times more patient.

    On my new 32 GB system that has changed: it blocks when it starts swapping. Sometimes with full disk activity, but other times with no disk activity. To examine what might be the issue, I have written this program. The idea is:

    1. Grab 3% of the memory free right now
    2. If that caused swap to increase: stop
    3. Keep the chunk used for 30 seconds by forking off
    4. Goto 1

        #!/usr/bin/perl

        sub freekb {
            my $free = `free|grep buffers/cache`;
            my @a = split / +/, $free;
            return $a[3];
        }

        sub swapkb {
            my $swap = `free|grep Swap:`;
            my @a = split / +/, $swap;
            return $a[2];
        }

        my $swap = swapkb();
        my $lastswap = $swap;
        my $free;
        while ($lastswap >= $swap) {
            print "$swap $free";
            $lastswap = $swap;
            $swap = swapkb();
            $free = freekb();
            my $used_mem = "x" x (1024 * $free * 0.03);
            if (not fork()) {
                sleep 30;
                exit();
            }
        }
        print "Swap increased $swap $lastswap\n";

    Running the program forever ought to keep the system at the limit of swapping, but only grabbing a minimal amount of swap, and doing that very slowly (i.e. a few MB at a time at most). If I run:

        forever free | stdbuf -o0 timestamp > freelog

    I ought to see swap slowly rising every second. (forever and timestamp are from https://github.com/ole-tange/tangetools.) But that is not the behaviour I see: I see swap increasing in jumps, and the system is completely blocked during these jumps.

    Here the system is blocked for 30 seconds while swap usage increases by 1 GB:

        secs
        169.527 Swap: 18440184 154184 18286000
        170.531 Swap: 18440184 154184 18286000
        200.630 Swap: 18440184 1134240 17305944
        210.259 Swap: 18440184 1076228 17363956

    Blocked: 21 secs. Swap increase 2400 MB:

        307.773 Swap: 18440184 581324 17858860
        308.799 Swap: 18440184 597676 17842508
        330.103 Swap: 18440184 2503020 15937164
        331.106 Swap: 18440184 2502936 15937248

    Blocked: 20 secs. Swap increase 2200 MB:

        751.283 Swap: 18440184 885288 17554896
        752.286 Swap: 18440184 911676 17528508
        772.331 Swap: 18440184 3193532 15246652
        773.333 Swap: 18440184 1404540 17035644

    Blocked: 37 secs. Swap increase 2400 MB:

        904.068 Swap: 18440184 613108 17827076
        905.072 Swap: 18440184 610368 17829816
        942.424 Swap: 18440184 3014668 15425516
        942.610 Swap: 18440184 2073580 16366604

    This is bad enough, but what is even worse is that the system sometimes stops responding at all - even if I wait for hours. I have the feeling it is related to the swapping issue, but I cannot tell for sure.

    My first idea was to tweak /proc/sys/vm/swappiness from 60 to 0 or 100, just to see if that had any effect at all. 0 did not have an effect, but 100 did cause the problem to arise less often.

    How can I prevent the system from blocking for such a long time? Why does it decide to swap out 1-3 GB when less than 10 MB would suffice?

    Read the article

  • How to prevent delays associated with IPv6 AAAA records?

    - by Nic
    Our Windows servers are registering IPv6 AAAA records with our Windows DNS servers. However, we don't have IPv6 routing enabled on our network, so this frequently causes stall behaviours. Microsoft RDP is the worst offender: when connecting to a server that has an AAAA record in DNS, the remote desktop client will try IPv6 first and won't fall back to IPv4 until the connection times out. Power users can work around this by connecting to the IP address directly. Resolving the IPv4 address with ping -4 hostname.foo always works instantly.

    What can I do to avoid this delay?

    - Disable IPv6 on the clients? Nope: Microsoft says IPv6 is a mandatory part of the Windows operating system, there are too many clients to ensure the setting is applied everywhere consistently, and it will cause more problems later when we finally implement IPv6.
    - Disable IPv6 on the servers? Nope: same mandatory-component problem, it requires an inconvenient registry hack to disable the entire IPv6 stack, ensuring this is correctly set on all servers is inconvenient, and it will cause more problems later when we finally implement IPv6.
    - Mask IPv6 records on the user-facing DNS recursor? Nope: we're using NLnet Unbound and it doesn't support that.
    - Prevent registration of IPv6 AAAA records on the Microsoft DNS server? I don't think that's even possible.

    At this point, I'm considering writing a script that purges all AAAA records from our DNS zones. Please, help me find a better way.

    UPDATE: DNS resolution is not the problem. As @joeqwerty points out in his answer, the DNS records are returned instantly. Both A and AAAA records are immediately available. The problem is that some clients (mstsc.exe) will preferentially attempt a connection over IPv6 and take a while to fall back to IPv4. This seems like a routing problem. The ping command produces a "General failure" error message because the destination address is unroutable:

        C:\Windows\system32>ping myhost.mydomain

        Pinging myhost.mydomain [2002:1234:1234::1234:1234] with 32 bytes of data:
        General failure.
        General failure.
        General failure.
        General failure.

        Ping statistics for 2002:1234:1234::1234:1234:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    I can't get a packet capture of this behaviour. Running this (failing) ping command does not produce any packets in Microsoft Network Monitor. Similarly, attempting a connection with mstsc.exe to a host with an AAAA record produces no traffic until it falls back to IPv4.

    UPDATE: Our hosts are all using publicly-routable IPv4 addresses. I think this problem might come down to a broken 6to4 configuration; 6to4 behaves differently on hosts with public IP addresses vs RFC 1918 addresses.

    UPDATE: There is definitely something fishy with 6to4 on my network. When I disable 6to4 on the Windows client, connections resolve instantly:

        netsh int ipv6 6to4 set state disabled

    But as @joeqwerty says, this only masks the problem. I'm still trying to find out why IPv6 communication on our network is completely non-working.
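
    One other avenue that might be worth testing (syntax from memory, so verify against the show command first): Windows picks IPv6 first because of its RFC 3484 prefix policy table, and that table can be edited so IPv4 is preferred without disabling IPv6 anywhere. The precedence/label values below are examples only:

        netsh interface ipv6 show prefixpolicies
        rem Raise the precedence of IPv4-mapped addresses above ::/0
        netsh interface ipv6 set prefixpolicy ::ffff:0:0/96 60 4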

    Read the article

  • Ubuntu 10.04: OpenVZ Kernel and pure-ftpd issues on HOST (no guest setup yet)

    - by Seidr
    After compiling and installing the OpenVZ flavour of kernel under Ubuntu 10.04, I am unable to browse to certain directories when connecting to the pure-ftpd server. The clients drop into PASSIVE mode, which is fine, and this behaviour was happening before the change of kernel. However, now when I browse to certain directories the connection just gets dropped. This only happens with a few directories under one login (web in specific), whereas with another login it happens as soon as I connect.

    I've got the nf_conntrack_ftp kernel module installed (required to keep track of passive FTP connections as I understand it, and an alias of the ip_conntrack_ftp module), however this has provided no alleviation of my problem. This module was actually required upon initial setup of my OS to get passive FTP working correctly. However, when I compiled the OpenVZ kernel a lot of these modules were missing (iptables, conntrack etc.), so I recompiled the kernel with the missing modules, but to no effect.

    I've turned the verbosity for the pure-ftpd server up, and still no clues have been spotted in either syslog or the transfer log. Neither did an strace provide any clues (that I could discern anyway) - although one strange thing is that, both in the output to the client and in the strace, I notice that it does in fact probe the directory and return the number of matches; it just fails after that.

    One more thing to mention: if I FTP using the same credentials locally, everything works fine. This suggests that it is in fact an issue with either the conntrack_ftp module not functioning as expected, or a deeper networking issue.

    The kernel was compiled and installed following the instructions at https://help.ubuntu.com/community/OpenVZ - bar the changes to the kernel configuration (such as adding iptables as a module). Below is an example of the log sent to the client (under FileZilla):

        Status:   Resolving address of xxxx.co.uk
        Status:   Connecting to 78.46.xxx.xxx:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 4 of 10 allowed.
        Response: 220-Local time is now 08:52. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command:  USER xxx
        Response: 331 User xxx OK. Password required
        Command:  PASS ********
        Response: 230-User xxx has group access to: client1 sshusers
        Response: 230 OK. Current restricted directory is /
        Command:  OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is your current location
        Status:   Directory listing successful
        Status:   Retrieving directory listing...
        Command:  CWD /web
        Response: 250 OK. Current directory is /web
        Command:  TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command:  PORT 10,0,2,30,14,143
        Response: 500 I won't open a connection to 10.0.2.30 (only to 188.220.xxx.xxx)
        Command:  PASV
        Response: 227 Entering Passive Mode (78,46,79,147,234,110)
        Command:  MLSD
        Response: 150 Accepted data connection
        Response: 226-ASCII
        Response: 226-Options: -a -l
        Response: 226 57 matches total
        Error:    Could not read from transfer socket: ECONNRESET - Connection reset by peer
        Error:    Failed to retrieve directory listing

    Any suggestions please? I'm willing to try anything!

    Read the article

  • Auth-Type := Reject in RADIUS users file matches inner tunnel request but sends Access-Accept

    - by mgorven
    I have WPA2 802.1X EAP authentication set up using FreeRADIUS 2.1.8 on Ubuntu 10.04.4 talking to OpenLDAP, and can successfully authenticate using PEAP/MSCHAPv2, TTLS/MSCHAPv2 and TTLS/PAP (both via the AP and using eapol_test). I am now trying to restrict access to specific SSIDs based on the LDAP groups which the user belongs to. I have configured group membership checking in /etc/freeradius/modules/ldap like so:

        groupname_attribute = cn
        groupmembership_filter = "(|(&(objectClass=posixGroup)(memberUid=%{User-Name}))(&(objectClass=posixGroup)(uniquemember=%{User-Name})))"

    and I have configured extraction of the SSID from Called-Station-Id into Called-Station-SSID based on the Mac Auth wiki page. In /etc/freeradius/eap.conf I have enabled copying attributes from the outer tunnel into the inner tunnel, and usage of the inner tunnel response in the outer tunnel (for both PEAP and TTLS). I had the same behaviour before changing these options, however.

        copy_request_to_tunnel = yes
        use_tunneled_reply = yes

    I'm running eapol_test like this to test the setup:

        eapol_test -c peap-mschapv2.conf -a 172.16.0.16 -s testing123 -N 30:s:01-23-45-67-89-01:Example-EAP

    with the following peap-mschapv2.conf file:

        network={
            ssid="Example-EAP"
            key_mgmt=WPA-EAP
            eap=PEAP
            identity="mgorven"
            anonymous_identity="anonymous"
            password="foobar"
            phase2="autheap=MSCHAPV2"
        }

    With the following in /etc/freeradius/users:

        DEFAULT Ldap-Group == "employees"

    and running freeradius -X, I can see that the LDAP group retrieval works, and that the SSID is extracted:

        Debug: [ldap] performing search in dc=example,dc=com, with filter (&(cn=employees)(|(&(objectClass=posixGroup)(memberUid=mgorven))(&(objectClass=posixGroup)(uniquemember=mgorven))))
        Debug: rlm_ldap::ldap_groupcmp: User found in group employees
        ...
        Info: expand: %{7} -> Example-EAP

    Next I try to only allow access to users in the employees group (regardless of SSID), so I put the following in /etc/freeradius/users:

        DEFAULT Ldap-Group == "employees"
        DEFAULT Auth-Type := Reject

    But this immediately rejects the Access-Request in the outer tunnel because the anonymous user is not in the employees group. So I modify it to only match inner tunnel requests like so:

        DEFAULT Ldap-Group == "employees"
        DEFAULT FreeRADIUS-Proxied-To == "127.0.0.1"
                Auth-Type := Reject,
                Reply-Message = "User does not belong to any groups which may access this SSID."

    Now users which are in the employees group are authenticated, but so are users which are not in the employees group. I see the reject entry being matched, and the Reply-Message is set, but the client receives an Access-Accept:

        Debug: rlm_ldap::ldap_groupcmp: Group employees not found or user is not a member.
        Info: [files] users: Matched entry DEFAULT at line 209
        Info: ++[files] returns ok
        ...
        Auth: Login OK: [mgorven] (from client test port 0 cli 02-00-00-00-00-01 via TLS tunnel)
        Info: WARNING: Empty section. Using default return values.
        ...
        Info: [peap] Got tunneled reply code 2
            Auth-Type := Reject
            Reply-Message = "User does not belong to any groups which may access this SSID."
        ...
        Info: [peap] Got tunneled reply RADIUS code 2
            Auth-Type := Reject
            Reply-Message = "User does not belong to any groups which may access this SSID."
        ...
        Info: [peap] Tunneled authentication was successful.
        Info: [peap] SUCCESS
        Info: [peap] Saving tunneled attributes for later
        ...
        Sending Access-Accept of id 11 to 172.16.2.44 port 60746
            Reply-Message = "User does not belong to any groups which may access this SSID."
            User-Name = "mgorven"

    and eapol_test reports:

        RADIUS message: code=2 (Access-Accept) identifier=11 length=233
            Attribute 18 (Reply-Message) length=64
                Value: 'User does not belong to any groups which may access this SSID.'
            Attribute 1 (User-Name) length=9
                Value: 'mgorven'
        ...
        SUCCESS

    Why isn't the request being rejected, and is this the right way to implement this?

    Read the article

  • IP address reuse on macvlan devices

    - by Alex Bubnoff
    I'm trying to create an easy to use and reasonably simple testing environment for a product, and I'm running into some strange macvlan behaviour. What I'm trying to achieve: a toolset for one-line start/stop of LXC containers (via Docker) bound to external IPs (I have enough of them on the host machine). So, I'm doing something like this:

        docker run -d -name=container_name container_image
        pipework eth1 container_name ip/prefix_len@gateway

    and pipework here does this:

        GUEST_IFNAME=ph$NSPID$eth1
        ip link add link eth1 dev $GUEST_IFNAME type macvlan mode bridge
        ip link set eth1 up
        ip link set $GUEST_IFNAME netns $NSPID
        ip netns exec $NSPID ip link set $GUEST_IFNAME name eth1
        ip netns exec $NSPID ip addr add $IPADDR dev eth1
        ip netns exec $NSPID ip route delete default
        ip netns exec $NSPID ip link set eth1 up
        ip netns exec $NSPID ip route replace default via $GATEWAY
        ip netns exec $NSPID arping -c 1 -A -I eth1 $IPADDR

    And it works the first time per IP. But from the second time onwards, packets for the container's IP never reach the container, even though the configuration looks fine. So it looks like this:

    External machine:

        ping 212.76.131.212
        ....silence....

    Host machine:

        root@ubuntu:~# ip link show eth1
        2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff
        root@ubuntu:~# ip addr show eth1
        2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff
        root@ubuntu:~# tcpdump -v -i eth1 icmp
        tcpdump: WARNING: eth1: no IPv4 address assigned
        tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
        00:00:46.542042 IP (tos 0x0, ttl 60, id 9623, offset 0, flags [DF], proto ICMP (1), length 84)
            5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2345, length 64
        00:00:47.549969 IP (tos 0x0, ttl 60, id 9624, offset 0, flags [DF], proto ICMP (1), length 84)
            5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2346, length 64
        00:00:48.558143 IP (tos 0x0, ttl 60, id 9625, offset 0, flags [DF], proto ICMP (1), length 84)
            5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2347, length 64
        00:00:49.566319 IP (tos 0x0, ttl 60, id 9626, offset 0, flags [DF], proto ICMP (1), length 84)
            5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2348, length 64
        00:00:50.573999 IP (tos 0x0, ttl 60, id 9627, offset 0, flags [DF], proto ICMP (1), length 84)
            5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2349, length 64
        ^C
        5 packets captured
        5 packets received by filter
        0 packets dropped by kernel
        1 packet dropped by interface

    Host machine, netns of container:

        root@ubuntu:~# ip netns exec 32053 ip link show eth1
        48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN
            link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff
        root@ubuntu:~# ip netns exec 32053 ip addr show eth1
        48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN
            link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff
            inet 212.76.131.212/29 scope global eth1
            inet6 fe80::b012:f7ff:fecc:a19d/64 scope link
               valid_lft forever preferred_lft forever
        root@ubuntu:~# ip netns exec 32053 tcpdump -v -i eth1 icmp
        tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
        ....silence....
        ^C
        0 packets captured
        0 packets received by filter
        0 packets dropped by kernel

    So, can anyone say what this could be? Could it be caused by anything other than a bug in the macvlan implementation? Are there any tools I can use to debug this configuration?
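
    A hedged debugging sketch (an assumption about the cause, not confirmed by the captures above): this symptom matches a stale ARP entry on the upstream gateway, which keeps mapping the reused IP to the previous container's MAC, since every new macvlan device gets a fresh random MAC and a single gratuitous ARP announcement is easy to miss. Under that assumption, giving the macvlan a deterministic MAC and announcing the address more aggressively might help; the MAC value below is hypothetical:

        # Hypothetical fixed MAC; any deterministic, locally-administered
        # mapping from IP to MAC would do ("02:" marks it locally administered)
        FIXED_MAC="02:00:d4:4c:83:d4"
        # Create the macvlan with an explicit MAC instead of a random one
        ip link add link eth1 dev $GUEST_IFNAME address $FIXED_MAC type macvlan mode bridge
        # ...move it into the namespace and configure $IPADDR as before, then
        # re-announce the mapping with unsolicited ARP requests (-U) as well
        # as gratuitous ARP replies (-A), since one announcement can be lost
        ip netns exec $NSPID arping -c 3 -U -I eth1 $IPADDR
        ip netns exec $NSPID arping -c 3 -A -I eth1 $IPADDR

    Comparing the gateway's ARP table (or arp -n on another host in the same /29) before and after reusing the IP would confirm or rule this out.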

    Read the article

  • Difference between `curl -I` and `curl -X HEAD`

    - by chmeee
    I was watching the funny server type from http://www.reddit.com with curl -I http://www.reddit.com when I guessed that curl -X HEAD http://www.reddit.com would do the same. But, in fact, it doesn't. I'm curious about why. This is what I observe running the two commands:

    curl -I: works as expected, outputs the headers and exits.
    curl -X HEAD: does not show anything and seems to wait for user input.

    But, sniffing with tshark, I see the second command actually sends the same HTTP query and receives the correct answer, but it does not show it and it doesn't close the connection.

    curl -I:

        0.000000 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=47267342 TSER=0 WS=6
        0.045392 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=2552532839 TSER=47267342 WS=1
        0.045441 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=47267353 TSER=2552532839
        0.045623 333.33.33.33 -> 213.248.111.106 HTTP HEAD / HTTP/1.1
        0.091665 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=2552532886 TSER=47267353
        0.861782 213.248.111.106 -> 333.33.33.33 HTTP HTTP/1.1 200 OK
        0.861830 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47267557 TSER=2552533656
        0.862127 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [FIN, ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47267557 TSER=2552533656
        0.910810 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [FIN, ACK] Seq=321 Ack=156 Win=6432 Len=0 TSV=2552533705 TSER=47267557
        0.910880 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=156 Ack=322 Win=6912 Len=0 TSV=47267570 TSER=2552533705

    curl -X HEAD:

        34.106389 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=47275868 TSER=0 WS=6
        34.149507 213.248.111.90 -> 333.33.33.33 TCP http > 51690 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=3920268348 TSER=47275868 WS=1
        34.149560 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=47275879 TSER=3920268348
        34.149646 333.33.33.33 -> 213.248.111.90 HTTP HEAD / HTTP/1.1
        34.191484 213.248.111.90 -> 333.33.33.33 TCP http > 51690 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=3920268390 TSER=47275879
        34.192657 213.248.111.90 -> 333.33.33.33 TCP [TCP Dup ACK 15#1] http > 51690 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=3920268390 TSER=47275879
        34.823399 213.248.111.90 -> 333.33.33.33 HTTP HTTP/1.1 200 OK
        34.823453 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47276048 TSER=3920269022

    Any idea why this difference in behaviour?
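
    A hedged explanation, based on curl's documented behaviour rather than anything in the question itself: -X only replaces the method token in the request line; curl otherwise still behaves as if it had sent a GET, so it waits for a response body that a HEAD response, by definition, never carries, and the connection sits open until the server or a timeout closes it. -I (or --head), by contrast, tells curl the transfer has no body, so it prints the headers and exits. A quick way to see both behaviours side by side:

        # Prints the headers and exits: curl knows a HEAD response has no body
        curl -I http://www.reddit.com
        # Same request on the wire, but curl waits for a body that never comes;
        # --max-time bounds the hang so the difference is easy to observe
        curl -i -X HEAD --max-time 5 http://www.reddit.com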

    Read the article

< Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >