Search Results

Search found 2288 results on 92 pages for 'bugs bugs'.

Page 77 of 92

  • .NET and C# Exceptions. What is it reasonable to catch.

    - by djna
    Disclaimer: I'm from a Java background. I don't do much C#. There's a great deal of transfer between the two worlds, but of course there are differences, and one is in the way exceptions tend to be thought about. I recently answered a C# question suggesting that under some circumstances it's reasonable to do this: try { some work } catch (Exception e) { commonExceptionHandler(); } (The reasons why are immaterial.) I got a response that I don't quite understand: "until .NET 4.0, it's very bad to catch Exception. It means you catch various low-level fatal errors and so disguise bugs. It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed, so even if the callExceptionReporter function tries to log and quit, it may not even get to that point (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)." Maybe I'm more confused than I realise, but I don't agree with some of that. Please would other folks comment. I understand that there are many low-level exceptions we don't want to swallow. My commonExceptionHandler() function could reasonably rethrow those. This seems consistent with this answer to a related question, which does say "Depending on your context it can be acceptable to use catch(...), providing the exception is re-thrown." So I conclude that using catch (Exception) is not always evil; silently swallowing certain exceptions is. As for the phrase "until .NET 4 it is very bad to catch Exception": what changes in .NET 4? Is this a reference to AggregateException, which may give us some new things to do with exceptions we catch, but which I don't think changes the fundamental "don't swallow" rule? The next phrase really bothers me. Can this be right? "It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)." My understanding is that if some low-level code had lowLevelMethod() { try { lowestLevelMethod(); } finally { some really important stuff } } and in my code I call it like this: try { lowLevelMethod() } catch (Exception e) { exception handling and maybe rethrowing } then whether or not I catch Exception has no effect whatever on the execution of the finally block. By the time we leave lowLevelMethod() the finally has already run. If the finally is going to do any of the bad things, such as corrupt my disk, then it will do so; my catching the exception made no difference. If the exception reaches my catch block I need to do the right thing, but I can't be the cause of mis-executing finally blocks.
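    To make the ordering point concrete, here is a minimal self-contained C# sketch (the method names are illustrative, not taken from the question). The inner finally runs while the exception unwinds, before the caller's catch (Exception) body ever executes, and known-fatal exception types can still be rethrown rather than swallowed:

      using System;

      class FinallyOrderDemo
      {
          static void LowestLevelMethod()
          {
              throw new InvalidOperationException("boom");
          }

          static void LowLevelMethod()
          {
              try
              {
                  LowestLevelMethod();
              }
              finally
              {
                  // Runs while the exception unwinds out of LowLevelMethod,
                  // before the caller's catch block body is entered.
                  Console.WriteLine("low-level finally");
              }
          }

          static void Main()
          {
              try
              {
                  LowLevelMethod();
              }
              catch (Exception e)
              {
                  // The finally above has already executed by this point.
                  Console.WriteLine("caller caught: " + e.Message);

                  // Rethrow the kinds of exceptions we never want to swallow.
                  if (e is OutOfMemoryException || e is StackOverflowException)
                      throw;
              }
          }
      }

    Running it prints "low-level finally" followed by "caller caught: boom", regardless of whether the caller catches Exception or something narrower.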

    Read the article

  • Why is FLD1 loading NaN instead?

    - by Bernd Jendrissek
    I have a one-liner C function that is just return value * pow(1.+rate, -delay); - it discounts a future value to a present value. The interesting part of the disassembly is 0x080555b9 : neg %eax 0x080555bb : push %eax 0x080555bc : fildl (%esp) 0x080555bf : lea 0x4(%esp),%esp 0x080555c3 : fldl 0xfffffff0(%ebp) 0x080555c6 : fld1 0x080555c8 : faddp %st,%st(1) 0x080555ca : fxch %st(1) 0x080555cc : fstpl 0x8(%esp) 0x080555d0 : fstpl (%esp) 0x080555d3 : call 0x8051ce0 0x080555d8 : fmull 0xfffffff8(%ebp) While single-stepping through this function, gdb says (rate is 0.02, delay is 2; you can see them on the stack): (gdb) si 0x080555c6 30 return value * pow(1.+rate, -delay); (gdb) info float R7: Valid 0x4004a6c28f5c28f5c000 +41.68999999999999773 R6: Valid 0x4004e15c28f5c28f6000 +56.34000000000000341 R5: Valid 0x4004dceb851eb851e800 +55.22999999999999687 R4: Valid 0xc0008000000000000000 -2 =R3: Valid 0x3ff9a3d70a3d70a3d800 +0.02000000000000000042 R2: Valid 0x4004ff147ae147ae1800 +63.77000000000000313 R1: Valid 0x4004e17ae147ae147800 +56.36999999999999744 R0: Valid 0x4004efb851eb851eb800 +59.92999999999999972 Status Word: 0x1861 IE PE SF TOP: 3 Control Word: 0x037f IM DM ZM OM UM PM PC: Extended Precision (64-bits) RC: Round to nearest Tag Word: 0x0000 Instruction Pointer: 0x73:0x080555c3 Operand Pointer: 0x7b:0xbff41d78 Opcode: 0xdd45 And after the fld1: (gdb) si 0x080555c8 30 return value * pow(1.+rate, -delay); (gdb) info float R7: Valid 0x4004a6c28f5c28f5c000 +41.68999999999999773 R6: Valid 0x4004e15c28f5c28f6000 +56.34000000000000341 R5: Valid 0x4004dceb851eb851e800 +55.22999999999999687 R4: Valid 0xc0008000000000000000 -2 R3: Valid 0x3ff9a3d70a3d70a3d800 +0.02000000000000000042 =R2: Special 0xffffc000000000000000 Real Indefinite (QNaN) R1: Valid 0x4004e17ae147ae147800 +56.36999999999999744 R0: Valid 0x4004efb851eb851eb800 +59.92999999999999972 Status Word: 0x1261 IE PE SF C1 TOP: 2 Control Word: 0x037f IM DM ZM OM UM PM PC: Extended Precision (64-bits) RC: Round to nearest Tag Word: 0x0020 Instruction Pointer: 0x73:0x080555c6 Operand Pointer: 0x7b:0xbff41d78 Opcode: 0xd9e8 After this, everything goes to hell. Things get grossly over or undervalued, so even if there were no other bugs in my freeciv AI attempt, it would choose all the wrong strategies. Like sending the whole army to the arctic. (Sigh, if only I were getting that far.) I must be missing something obvious, or getting blinded by something, because I can't believe that fld1 should ever possibly fail. Even less that it should fail only after a handful of passes through this function. On earlier passes the FPU correctly loads 1 into ST(0). The bytes at 0x080555c6 definitely encode fld1 - checked with x/... on the running process. What gives?

    Read the article

  • Google Code + SVN or GitHub + Git

    - by Nazgulled
    Let me start by telling you that I have never used anything besides SVN and I'm also a Windows user. I have a couple of simple projects that are open source; others are on their way once I'm happy enough to release their source code. Either way, I was thinking of using Google Code and SVN to share the source code of my projects instead of providing a link to the source on my website. That has always been a pain because I had to update the binaries and the code every time I released a new version. This would also give me a backup of my code somewhere other than just my local machine (I used to have a local Subversion server running). What I want from a service like this is very simple... I just want a place to store my source code that people can download if they want, that lets me control revisions, and that provides a simple and easy issue system so people can submit bugs and the like. I guess both of them have this. But I don't want to host any binaries on their websites; I want the binaries hosted on my website so I can control download statistics with my own scripts. I also don't need wiki pages, as I prefer to keep all the documentation on my own website. Do either of these services provide a way to "disable" features like wiki and downloads so they aren't shown at all for my project(s)? Now, I'm sure there are lots of pros and cons about using Google Code with SVN and GitHub with Git (of course), but here's what's important to me about each one and why I like them: Google Code: As with any Google page, the complexity is almost non-existent. Almost everyone has a Google account, which is nice if people want to report problems using the issues system. GitHub: May (or may not) be a little more complex (not a problem for me though) than Google's pages, but... it has a much prettier interface than Google's service. It needs people to be registered on GitHub to post about issues. I like the fact that with Git you have your own revisions locally (can I use TortoiseGit for this?). Basically that's it, not much, I know... What other common pros and cons can you tell me about each site/software? Keep in mind that my projects are simple; I'm probably the only one who will ever develop them on these repositories (or maybe not, but for now I will).

    Read the article

  • What classes should I map against with NHibernate?

    - by apollodude217
    Currently, we use NHibernate to map business objects to database tables. Said business objects enforce business rules: the set accessors will throw an exception on the spot if the contract for that property is violated. The properties also enforce relationships with other objects (sometimes bidirectional!). Well, whenever NHibernate loads an object from the database (e.g. when ISession.Get(id) is called), the set accessors of the mapped properties are used to put the data into the object. What's good is that the middle tier of the application enforces business logic. What's bad is that the database does not. Sometimes crap finds its way into the database. If crap is loaded into the application, it bails (throws an exception). Sometimes it clearly should bail because it cannot do anything, but what if it can continue working? E.g., an admin tool that gathers real-time reports runs a high risk of failing unnecessarily instead of allowing an admin to even fix a (potential) problem. I don't have an example on me right now, but in some instances, letting NHibernate use the "front door" properties that also enforce relationships (especially bidirectional ones) leads to bugs. What are the best solutions? Currently, I will, on a per-property basis, create a "back door" just for NHibernate: public virtual int Blah {get {return _Blah;} set {/*enforces BR's*/}} protected virtual int _Blah {get {return blah;} set {blah = value;}} private int blah; I showed the above in C# 2 style (no auto-implemented properties) to demonstrate how this gives us basically three layers of, or views onto, blah! While this certainly works, it does not seem ideal, as it requires the BL to provide one (public) interface for the app at large and another (protected) interface for the data access layer. There is an additional problem: to my knowledge, NHibernate does not give you a way to distinguish between the name of the property in the BL and the name of the property in the entity model (i.e. the name you use when you query, e.g. via HQL--whenever you give NHibernate the name (string) of a property). This becomes a problem when, at first, the BR's for some property Blah are no problem, so you refer to it in your O/R mapping... but later you have to add some BR's that do become a problem, so you have to change your O/R mapping to use a new _Blah property, which breaks all existing queries using "Blah" (a common problem with programming against strings). Has anyone solved these problems?!
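    One way to attack the load-time part of this, sketched below with a hypothetical entity (the access-strategy name is an assumption about NHibernate's mapping options, not something taken from the post), is to keep the rule-enforcing public property for the application while telling NHibernate to hydrate the backing field directly, so queries can keep using the name "Blah":

      using System;

      // Business object: the public property enforces business rules;
      // NHibernate is asked to bypass the setter when hydrating from the database.
      public class Widget
      {
          private int blah;

          public virtual int Blah
          {
              get { return blah; }
              set
              {
                  if (value < 0)                      // illustrative business rule
                      throw new ArgumentOutOfRangeException("value");
                  blah = value;
              }
          }
      }

      // Hypothetical Widget.hbm.xml fragment (the access strategy name is assumed):
      //   <property name="Blah" access="nosetter.camelcase" />
      //
      // Queries still refer to "Blah", but on load NHibernate writes the private
      // "blah" field, so suspect rows do not trip the business-rule setter.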

    Read the article

  • C++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched stackoverflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (vc++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
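    For what it's worth, one conventional shape for this kind of mix-and-match is policy-based design: each trait becomes a small class that carries its own state and non-virtual member functions, and the optimizer inherits from whichever combination is chosen, so everything is resolved statically. A minimal C++ sketch with purely illustrative names (nothing here is taken from the original optimizer code):

      #include <iostream>

      // Each policy supplies its own state and non-virtual member functions.
      struct LmqnUpdate {
          double direction;                     // stands in for the LMQN vector
          LmqnUpdate() : direction(0.0) {}
          void update() { direction += 1.0; }
      };

      struct BfgsUpdate {
          double approxHessian;                 // stands in for the BFGS matrix
          BfgsUpdate() : approxHessian(1.0) {}
          void update() { approxHessian *= 2.0; }
      };

      struct GradientLineSearch {
          double lineSearch() { return 0.5; }   // would use gradient evaluations
      };

      struct FunctionOnlyLineSearch {
          double lineSearch() { return 0.25; }  // function evaluations only
      };

      // The optimizer mixes in one update policy and one line-search policy;
      // all calls are dispatched at compile time, with no virtual functions.
      template <class UpdatePolicy, class LineSearchPolicy>
      class Optimizer : private UpdatePolicy, private LineSearchPolicy {
      public:
          void iterate() {
              this->update();
              std::cout << "step " << this->lineSearch() << '\n';
          }
      };

      int main() {
          Optimizer<BfgsUpdate, GradientLineSearch> a;
          a.iterate();
          Optimizer<LmqnUpdate, FunctionOnlyLineSearch> b;
          b.iterate();
          return 0;
      }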

    Read the article

  • Setting default values for inherited property without using accessor in Objective-C?

    - by Ben Stock
    I always see people debating whether or not to use a property's setter in the -init method. I don't know enough about the Objective-C language yet to have an opinion one way or the other. With that said, lately I've been sticking to ivars exclusively. It seems cleaner in a way. I don't know. I digress. Anyway, here's my problem … Say we have a class called Dude with an interface that looks like this: @interface Dude : NSObject { @private NSUInteger _numberOfGirlfriends; } @property (nonatomic, assign) NSUInteger numberOfGirlfriends; @end And an implementation that looks like this: @implementation Dude - (instancetype)init { self = [super init]; if (self) { _numberOfGirlfriends = 0; } return self; } @end Now let's say I want to extend Dude. My subclass will be called Playa. And since a playa should have mad girlfriends, when Playa gets initialized, I don't want him to start with 0; I want him to have 10. Here's Playa.m: @implementation Playa - (instancetype)init { self = [super init]; if (self) { // Attempting to set the ivar directly will result in the compiler saying, // "Instance variable `_numberOfGirlfriends` is private." // _numberOfGirlfriends = 10; <- Can't do this. // Thus, the only way to set it is with the mutator: self.numberOfGirlfriends = 10; // Or: [self setNumberOfGirlfriends:10]; } return self; } @end So what's an Objective-C newcomer to do? Well, I mean, there's only one thing I can do, and that's set the property. Unless there's something I'm missing. Any ideas, suggestions, tips, or tricks? Sidenote: The other thing that bugs me about setting the ivar directly — and what a lot of ivar-proponents say is a "plus" — is that there are no KVO notifications. A lot of times, I want the KVO magic to happen. 50% of my setters end in [self setNeedsDisplay:YES], so without the notification, my UI doesn't update unless I remember to manually call -setNeedsDisplay. That's a bad example, but the point stands. I utilize KVO all over the place, so without notifications, things can act wonky. Anyway, any info is much appreciated. Thanks!
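    One possible way out, assuming the superclass's interface can be changed (an assumption, since the question treats it as fixed), is to declare the ivar @protected so the subclass can assign it directly in -init, bypassing the accessor and therefore any KVO notifications or setter side effects. A sketch:

      #import <Foundation/Foundation.h>

      // The ivar is now @protected, so subclasses may assign it directly.
      @interface Dude : NSObject {
          @protected
          NSUInteger _numberOfGirlfriends;
      }
      @property (nonatomic, assign) NSUInteger numberOfGirlfriends;
      @end

      @implementation Dude
      - (instancetype)init {
          self = [super init];
          if (self) {
              _numberOfGirlfriends = 0;
          }
          return self;
      }
      @end

      @interface Playa : Dude
      @end

      @implementation Playa
      - (instancetype)init {
          self = [super init];
          if (self) {
              // Direct ivar access: no setter logic, no KVO notification.
              _numberOfGirlfriends = 10;
          }
          return self;
      }
      @end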

    Read the article

  • Linked lists in Java - help with assignment

    - by user368241
    Representation of a string in linked lists. Every node in the list has 3 fields: the letter itself; the number of times it appears consecutively; and a pointer to the next node in the list. The following class CharNode represents a node in the list: public class CharNode { private char _data; private int _value; private CharNode _next; public CharNode (char c, int val, CharNode n) { _data = c; _value = val; _next = n; } public CharNode getNext() { return _next; } public void setNext (CharNode node) { _next = node; } public int getValue() { return _value; } public void setValue (int v) { _value = v; } public char getData() { return _data; } public void setData (char c) { _data = c; } } The class StringList represents the whole list: public class StringList { private CharNode _head; public StringList() { _head = null; } public StringList (CharNode node) { _head = node; } } Add methods to the class StringList according to the details below (I will add methods gradually according to my specific questions). (Pay attention: these are methods from the class String, and we want to implement them for the representation of a string as a list, as explained above.) public int indexOf (int ch) - returns the index, within the string it is operated on, of the first appearance of the char "ch". If the char "ch" doesn't appear in the string, returns -1. If the value of fromIndex isn't in the range, returns -1. Pay attention to all the possible error cases. Write the time complexity and space complexity of every method that you write. Make sure the methods you write are efficient. It is NOT allowed to use ready-made Java classes. It is NOT allowed to convert to a String and use String operations. Here is my try at writing the method indexOf (int ch). Kindly assist me with fixing the bugs so I can move on. public int indexOf (int ch) { int count = 0; CharNode pose = _head; if (pose == null ) { return -1; } for (pose = _head; pose!=null && pose.getNext()!='ch'; pose = pose.getNext()) { count++; } if (pose!=null) return count; else return -1; }
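    For comparison, here is one way the traversal could look; this is only a sketch, not necessarily what the assignment expects. The key points are that the comparison should be against the current node's data rather than the next node, and that each node stands for getValue() consecutive copies of its character, so the index has to advance by the whole run:

      public int indexOf (int ch) {
          int index = 0;
          CharNode pos = _head;
          while (pos != null) {
              if (pos.getData() == (char) ch) {
                  return index;            // first copy in this run is the first appearance
              }
              index += pos.getValue();     // skip the whole run of repeated characters
              pos = pos.getNext();
          }
          return -1;                       // ch does not appear in the string
      }
      // Time complexity: O(n) in the number of nodes; space complexity: O(1).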

    Read the article

  • What are some things you'd like fresh college grads to know?

    - by bradhe
    So I proposed this to the Reddit community and I'd like to get SO's perspective on this. This is pretty much the copypasta of what I put there. I was thinking about this last night and thought it would be neat to compile a list. I'm still a pretty fresh college grad -- been in industry for 2 years -- but I think that I might have a few interesting things to offer. You don't know as much as you think you do. Somehow, college students think they know a lot more than they do (or maybe that was just me). Likewise, they think they can do more than they actually can. You should fairly assess your skills. QA people are not out to get you. Humans introduce bugs into code. It's not (necessarily) a personal reflection on you and your skills if your code has a bug and it's caught by the QA/testing team. Listen to your senior (developers). They are not actually fuddy-duddies who don't know about the new L337 hax in Ruby (okay, sometimes they are, but still...). They have a wealth of knowledge that you can learn from, and it's in your best interest to do so. You will most likely not be doing what you want to for a while. This is mostly true in the corporate world -- startups are a different matter. Also, this is due to more than just the economy, man! Junior devs need to earn their keep, so to speak. Everyone wants to be lead dev on the next project, and there are a lot of people in line ahead of you! For every elite developer there are 100 average developers. Joel Spolsky, I'm looking at you. Somehow this concept of ninja coders has really ingrained itself in our culture. While I encourage you to be the best you can be, don't be disappointed if people aren't writing blog posts about you in the near future. Anyone else have anything they'd like to see added to this list?

    Read the article

  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far, I've managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm), after adding new features that subtly broke old features, or, depending on how you wish to look at it, introduced some new "features" of their own. But unit testing seems to be an extremely brittle mechanism. You can test for something in "perfect" conditions, but you don't get to see how your code performs when stuff breaks. For instance, take a crawler; let's say it crawls a few specific sites for data X. Do you simply save sample pages, test against those, and hope that the sites never change? This would work fine as regression tests, but what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing its job because the site changed something that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code? The above example is a bit contrived, and something I haven't run into (in case you haven't guessed). Let me pick something I have, though. How do you test that an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or another, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as it's supposed to; but does it? The developer tests it personally when he first writes it, by purposely setting up a gateway that drops packets to simulate a bad network. A few months later, someone checks in some code that modifies something subtly, so the degradation isn't detected in time, or the application doesn't even recognize the degradation. This is never caught, because you can't run real-world tests like this using unit tests, can you? Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks okay, but the data isn't really. You want the code to handle that, so you write some code that you think does that. How do you test that it does exactly that for the life of the application? Can you? Hence, brittleness. Unit tests seem to test the code only in perfect conditions (and this is promoted, with mock objects and such), not what it will face in the wild. Don't get me wrong, I think unit tests are great, but a test suite composed only of them seems to be a smart way to introduce subtle bugs into your code while feeling overconfident about its reliability. How do I address the above situations? If unit tests aren't the answer, what is? Thanks!
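    For the network case there is at least a partial answer, sketched below in C++ with made-up names: put the transport behind a small interface so a test can substitute a deliberately unreliable implementation and assert on the graceful-degradation behaviour, instead of depending on a hand-configured packet-dropping gateway:

      #include <cstdlib>
      #include <string>

      // Production code talks to the network only through this interface.
      struct Transport {
          virtual ~Transport() {}
          virtual bool send(const std::string& packet) = 0;
      };

      // Test double that drops roughly lossPercent percent of the packets it is given.
      struct FlakyTransport : Transport {
          explicit FlakyTransport(int lossPercent) : lossPercent_(lossPercent) {}
          virtual bool send(const std::string&) {
              return (std::rand() % 100) >= lossPercent_;
          }
          int lossPercent_;
      };

      // A test can construct a FlakyTransport with 30% loss, hand it to
      // DoSomethingOverTheNetwork(Transport&), and assert that the call retries,
      // times out, or reports the failure cleanly rather than crashing.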

    Read the article

  • How does your team work together in a remote setup?

    - by Carl Rosenberger
    Hi, we are a distributed team working on the object database db4o. The way we work: We try to program in pairs only. We use Skype and VNC or SharedView to connect and work together. In our online Tuesday meeting every week (usually about 1 hour) we talk about the tasks done last week we create new pairs for the next week with a random generator so knowledge and friendship distribute evenly we set the priority for any new tasks or bugs that have come in each team picks the tasks it likes to do from the highest prioritized ones. From Tuesday to Wednesday we estimate tasks. We have a unit of work we call "Ideal Developer Session" (IDS), maybe 2 or 3 hours of working together as a pair. It's not perfectly well defined (because we know estimation always is inaccurate) but from our past shared experience we have a common sense of what an IDS is. If we can't estimate a task because it feels too long for a week we break it down into estimatable smaller tasks. During a short meeting on Wednesday we commit to a workload we feel is well doable in a week. We commit to complete. If a team runs out of committed tasks during the week, it can pick new ones from the prioritized queue we have in Jira. When we started working this way, some of us found that remote pair programming takes a lot of energy because you are so focussed. If you pair program for more than 5 or 6 hours per day, you get drained. On the other hand working like this has turned out to be very efficient. The knowledge about our codebase is evenly distributed and we have really learnt lots from eachother. I would be very interested to hear about the experiences from other teams working in a similar way. Things like: How often do you meet? Have you tried different sprint lengths (one week, two week, longer) ? Which tools do you use? Which issue tracker do you use? What do you do about time zone differences? How does it work for you to integrate new people into the team? How many hours do you usually work per week? How does your management interact with the way you are working? Do you get put on a waterfall with hard deadlines? What's your unit of work? What is your normal velocity? (units of work done per week) Programming work should be fun and for us it usually is great fun. I would be happy about any new ideas how to make it even more fun and/or more efficient.

    Read the article

  • C++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?

    Read the article

  • C++ stream as a parameter when overloading operator<<

    - by TheOm3ga
    I'm trying to write my own logging class and use it as a stream: logger L; L << "whatever" << std::endl; This is the code I started with: #include <iostream> using namespace std; class logger{ public: template <typename T> friend logger& operator <<(logger& log, const T& value); }; template <typename T> logger& operator <<(logger& log, T const & value) { // Here I'd output the values to a file and stdout, etc. cout << value; return log; } int main(int argc, char *argv[]) { logger L; L << "hello" << '\n' ; // This works L << "bye" << "alo" << endl; // This doesn't work return 0; } But I was getting an error when trying to compile, saying that there was no definition for operator<<: pruebaLog.cpp:31: error: no match for ‘operator<<’ in ‘operator<< [with T = char [4]](((logger&)((logger*)operator<< [with T = char [4]](((logger&)(& L)), ((const char (&)[4])"bye")))), ((const char (&)[4])"alo")) << std::endl’ So, I've been trying to overload operator<< to accept this kind of stream, but it's driving me mad. I don't know how to do it. I've been looking at, for instance, the definition of std::endl in the ostream header file, and I've written a function with this header: logger& operator <<(logger& log, const basic_ostream<char,char_traits<char> >& (*s)(basic_ostream<char,char_traits<char> >&)) But no luck. I've tried the same using templates instead of directly using char, and also tried simply using "const ostream& os", and nothing. Another thing that bugs me is that, in the error output, the first argument for operator<< changes: sometimes it's a reference to a pointer, sometimes it looks like a double reference...
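    The usual fix, sketched here with member operators (the same idea works with the friend free function), is to add an overload whose parameter is a plain function pointer taking and returning std::ostream&. std::endl is itself a function template, so template argument deduction cannot deduce T from it and the generic overload alone never matches; the extra overload gives it a concrete target type:

      #include <iostream>

      class logger {
      public:
          template <typename T>
          logger& operator<<(const T& value) {
              std::cout << value;                      // forward to file, stdout, etc.
              return *this;
          }

          // Accepts manipulators such as std::endl and std::flush.
          logger& operator<<(std::ostream& (*manip)(std::ostream&)) {
              manip(std::cout);
              return *this;
          }
      };

      int main() {
          logger L;
          L << "hello" << '\n';
          L << "bye" << "alo" << std::endl;            // now compiles
          return 0;
      }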

    Read the article

  • Chrome is creating duplicate sessions with the same id

    - by dlwiest
    I encountered an issue while I was revising my session library today, and this might be the first time I've ever seen a browser-specific problem on a back end script. I hope somebody can shed some light. Basically how the session library works is: when instantiated, it checks for a cookie called 'id' (in the form of a uniqid result) on the client machine. If a cookie is found, the script checks that and a hashed copy of the user agent string against entries in a session table. If a matching entry is found, the script resumes the session. If no cookie named 'id' is found, or if no matching entry exists in the sessions table, the script creates both. Fairly standard, I think. Now here's the weird part: in Firefox, everything works as predicted. The user gets one session, which he'll always resume upon connection, as long as 24 hours of inactivity has not elapsed. But when I visit the page in Chrome, even though it looks the same and appears to be executing queries in the same order, I see two entries in the session table. The sessions share an agent string, but the ids are different, and timestamp logs indicate that the ghost session is being created shortly (within a second) after the one created for the user. For debugging purposes, I've been printing queries to the screen as they're executed, and this is an example of what I'm seeing when Chrome should be opening one session and is somehow opening two instead: // Attempting to resume a session SELECT id FROM sessions WHERE id = '4fd24a5cd8df12.62439982' AND agent = '9bcd5c6aac911f8bcd938a9563bc4eca' // No result, so it creates a new one INSERT INTO sessions (id, agent, start, last) VALUES ('4fd24ef0347f26.72354606', '9bcd5c6aac911f8bcd938a9563bc4eca', '1339182832', '1339182832') // Clear old sessions DELETE FROM sessions WHERE last < 1339096432 And here's what I'm seeing in the database afterward: id, agent, start, last 4fd24ef0347f26.72354606, 9bcd5c6aac911f8bcd938a9563bc4eca, 1339182832, 1339182832 4fd24ef0857f94.72251285, 9bcd5c6aac911f8bcd938a9563bc4eca, 1339182833, 1339182833 Am I missing something obvious? The only thing I can think of is that Chrome might be creating a hidden session in the background, possibly to crawl the page. If that's the case though, it could become a problem later, when I begin associating active sessions with entries in the users table. I've been looking for possible bugs in my script, but I haven't found anything so far, and everything works as expected in Firefox.

    Read the article

  • Run CGI in IIS 7 to work with GET without Requiring POST Request

    - by Mohamed Meligy
    I'm trying to migrate an old CGI application from an existing Windows 2003 server (IIS 6.0), where it works just fine, to a new Windows 2008 server with IIS 7.0, where we're getting the following problem: After setting up the module handler and everything, I find that I can only access the CGI application (rdbweb.exe) if I'm calling it via a POST request (form submit from another page). If I just type in the URL of the file (issuing a GET request) I get the following error: HTTP Error 502.2 - Bad Gateway The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are "Exception EInOutError in module rdbweb.exe at 00039B44. I/O error 6. ". This is a very old application for one of our clients. When we tried to call the vendor they said we would need to pay a ~$3000 annual support fee just to start talking about it. Of course I'm trying to avoid that! Note that: If we create a normal HTML form that submits to "rdbweb.exe", the CGI works normally. We can't use this as a workaround though, because some pages in the application link to "rdbweb.exe" with a normal link, not a form submit. If we run "rdbweb.exe" from a console (Command Prompt) window rather than IIS, we get the normal HTML we'd typically expect, no problem. We have tried the following: Ensuring the CGI module mapped to "rdbweb.exe" in IIS has all permissions (read, write, execute) enabled and that all verbs are allowed, not just specific ones; we also tried allowing GET and POST explicitly. Ensuring the application pool has "Enable 32-Bit Applications" set to true. Ensuring the website runs with an account that has full permissions on the "rdbweb.exe" file and the whole website (although we know "read" and "execute" should be enough). Ensuring the machine-wide IIS setting for "ISAPI and CGI Restrictions" has the full path to "rdbweb.exe" allowed. Making sure we have the latest Windows updates (for IIS 6 we found Knowledge Base articles describing bugs that require hotfixes, but nothing similar was found for IIS 7). Changing the module from CGI to FastCGI did not work either. Now the only remaining possibility we have investigated is the following Microsoft Knowledge Base article: http://support.microsoft.com/kb/145661 - which is about: CGI Error: The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are: The article suggests the following solution: Modify the source code for the CGI application header output. The following is an example of a correct header: print "HTTP/1.0 200 OK\n"; print "Content-Type: text/html\n\n\n"; Unfortunately we do not have the source to try this out, and I'm not sure anyway whether this is the issue we're having. Can you help me with this problem? Is there a way to make the application work without requiring a POST request? Note that on the old IIS 6 server the application works just fine, and I could not find any special IIS configuration there whose equivalent I might try on IIS 7.

    Read the article

  • ubuntu wifi disconnection & frustratingly connects to unavailable wifi

    - by ashishsony
    Hi, I have already posted this here: here. This has happened before with the Ubuntu 9.10 Beta 2 build too: my wifi disconnects if I'm idle for 5 minutes, so I can't leave my laptop to download anything... I have to keep using it continuously. As soon as I leave it idle for about 5 minutes, the wifi disconnects and the pop-up asking for the wifi password appears, with the password already filled in. I just click on connect and it connects again. So what's the use of asking for the password if the pre-filled password works correctly? And this is happening on Ubuntu 10.04 Beta 2 too. The workaround is to open any menu, like the Applications menu in the taskbar, and keep it open; in this state the Ubuntu idle detection never activates and so the wifi never gets disconnected. I have confirmed this many times. This keeps repeating again and again and I don't know why. The second thing I want to report is that there is no clear way to report this bug to Ubuntu. launchpad.net talks of going through a bug reporting process which is done against a definite package; now how does a user know which package is causing this error? There should be a clearer process for reporting such bugs to the Ubuntu team. Thirdly, the apport utility that reports crashing apps is totally useless on 10.04 Beta 2: it collects information and then reports that I can't submit the report because I don't have 100 other packages updated, without which I can't submit the report. Surely on a beta build there will always be packages continuously being updated, so no system would ever be reported as fully updated, and so no practical apport reporting is possible? Please address these issues; all this is really frustrating. I'm a big fan of Ubuntu but these things really bug me. And just to add, fourthly, the suspend/hibernate feature has never worked on my Toshiba M70-113 laptop, on any Ubuntu version; I always have to hard reboot after putting it into suspend/hibernate mode. On Windows this has never been the case; why can't Ubuntu beat Windows in such cases too? I would really like to see this soon. Most importantly, when the router switches off and the wifi signal goes away, why does Ubuntu keep trying to connect to that very wifi network, and when it can't connect, show the prompt to connect manually, with the wifi key already filled in? What's the use of saving the key when it still has to ask me whether to connect or not? And if the network isn't available, just wait until it is available. I only have the option to cancel, and if I cancel it won't auto-connect! What the heck? One can see in the image that it says "authentication required by wireless network" when there isn't any such network available, as the router has gone down!

    Read the article

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of 1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches 2) geographically distributed backups to protect against local disasters 3) testing backups to ensure that they can be restored as needed Generally I take an onsite backup daily, and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do some from the local backup. Should a catastrophic event destroy the servers and local backups, then the offsite weekly tape backup would be used to restore the files. I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I have recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits. I perform test restores to ensure the backups function properly. Should the remote backups experience a dataloss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a dataloss was not noticed for a few weeks(such as a bug/security breach/virus), the data could be restored from an older backup. By doing this, the only scenario that poses a risk to complete data loss would be one where both the local, remote, and servers all experienced a data loss in the same time period. I'm willing to risk that happening since the odds of that trifecta negligibly small, and the data isn't THAT valuable to me. So I hope I have emphasized that I don't need redundancy in my offsite backups because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course there are some that take multiple offsite backups, because the data is so incredibly valuable that they don't even want to risk that trifecta disaster, but in the majority of cases the trifecta disaster is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ!!! I think I have justified my backup strategy and the majority of businesses who use offsite tape backups do not have any additional redundancy beyond what is mentioned above(recovery blocks, par files, historical snapshots). Now I would like to eliminate the use of tapes for offsite backups, and instead use a backup service. Most however are extremely costly for $/gb/month storage. I don't mind paying for transfer bandwidth, but the cost of storage is way to high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but for my scenario, I am simply replacing my offsite backups with such a service. So there is no need for RAID, and absolutely no value in another layer of backups of backups. My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups(backups of my backups) as part of their packages, and thus are more reasonably priced?" NOT my question: "Is this a flawed strategy?" I don't care if you think this is a good strategy or not. I know it pretty standard. Very few people make an extra copy of their offsite backups. They already have local backups that they can use to replace the remote backups if something catastrophic happens at the remote site. Please limit your responses to the question posed. 
Sorry if I seem a little abrasive, but I had some trolls on my last post who didn't read my requirements or my question, and who went off answering a totally different question. I made the question pretty clear, but didn't try to justify my strategy, because I wasn't asking whether my strategy was justifiable. So I apologize for the length; it really shouldn't have needed to be this long, but there are many people here who sidetrack questions by responding without addressing the question at hand.

    Read the article

  • What is wrong in my DKIM setup? I'm getting all fails

    - by djechelon
    I own a domain name I have implemented SPF and DKIM to avoid my mails being junked. I have also upgraded to DMARC in monitor mode. Since I received a few failure reports recently I wanted to investigate more. I have only one server sending outbound emails, running postfix + dkimproxy. I trust that dkimproxy has no major software bugs resulting in bad messages. I have tested ReturnPath's automated DKIM test and this is the part related to DKIM/DomainKeys DKIM Results ============ Result = failed: invalid key for signature: Syntax error in tag: \"v Domain = domain.org Selector = sel DNS Record(s) = sel._domainkey.domain.org TXT "v=1; p=MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsMMLhxzXkU+tagc44oMi7eX2BsFb8BsWeT8MRL+hxi4Lsosx7tuPm90iYgilNteyJoXuSP5SUf8B2tDAifdzYQhfhctr0hX9b6ocBCukGq5p0GHpNsCPWyFvxZsCkGqLRmkfb0c36quEAWBeQLe4Z/BwXBBiW1g96WFNb2/GRI1+9OHhligdfuo4PPuU+xiwX4GB0Ik50cJL4xTdBf7lrFwoGYa03ZkXuzKxeGE4cTk50OeIs6eqrzAfbmej4nCex2qGOUt1TWI7ZvCY7u3Gxj+XKaE7VFrQACZof+NP0k2pXPHg9saGJqZrr2i6+RoxGD0w/ibjAWij9enwqlnv2ORsZfe+FmXNOLJAhlYvhHaruubDpte1c7V3ZKDceM45ZawnVmSdLCfBrMbsqipzy8NXN5MxuANYFBkx5EDT+Ieab+zqcnf08m9bgDc4RXMYppDT1/lUy6On+nyfZEnJWiH3BUtgxS8X0uXciXbsooTmPnpkzzvvKXAE/Tv3XqL90q51geqP0EmaZI6lRTpiqoX7zFGlEBiiF7/u8oheszATks8LsNZ/boTFy0OVldbYNhxlIuRmqeXkqD6+kM5ObKtMEv3AdaeBiZmvyJTP8tCsSmPt+e954RLlz2HaDjjNnZNgsj/39U2RzZsFbVqW6uyQh36/y1X4joOiPf366GkCAwEAAQ==; t=s" Public Key Length = 4096 DomainKeys Results ================== Domain = domain.org Selector = sel DNS Record(s) = sel._domainkey.domain.org TXT "v=1; p=MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsMMLhxzXkU+tagc44oMi7eX2BsFb8BsWeT8MRL+hxi4Lsosx7tuPm90iYgilNteyJoXuSP5SUf8B2tDAifdzYQhfhctr0hX9b6ocBCukGq5p0GHpNsCPWyFvxZsCkGqLRmkfb0c36quEAWBeQLe4Z/BwXBBiW1g96WFNb2/GRI1+9OHhligdfuo4PPuU+xiwX4GB0Ik50cJL4xTdBf7lrFwoGYa03ZkXuzKxeGE4cTk50OeIs6eqrzAfbmej4nCex2qGOUt1TWI7ZvCY7u3Gxj+XKaE7VFrQACZof+NP0k2pXPHg9saGJqZrr2i6+RoxGD0w/ibjAWij9enwqlnv2ORsZfe+FmXNOLJAhlYvhHaruubDpte1c7V3ZKDceM45ZawnVmSdLCfBrMbsqipzy8NXN5MxuANYFBkx5EDT+Ieab+zqcnf08m9bgDc4RXMYppDT1/lUy6On+nyfZEnJWiH3BUtgxS8X0uXciXbsooTmPnpkzzvvKXAE/Tv3XqL90q51geqP0EmaZI6lRTpiqoX7zFGlEBiiF7/u8oheszATks8LsNZ/boTFy0OVldbYNhxlIuRmqeXkqD6+kM5ObKtMEv3AdaeBiZmvyJTP8tCsSmPt+e954RLlz2HaDjjNnZNgsj/39U2RzZsFbVqW6uyQh36/y1X4joOiPf366GkCAwEAAQ==; t=s" The mail displays an anonymised DNS record with genuine public key. It reports an error in tag v. A few hours ago I noticed my v tag was v=DKIM1 instead of v=1 as specified in RFC. I thought it was an error made by me during the initial setup months ago and fixed to v=1, but anyway I received one DMARC success from Google. Let me explain better: I enforced DMARC a couple of days ago. On 4/16 morning I got a mail from Google telling me that DMARC fully passes, then since 4/17 I get all failures. Then I discovered the v=DKIM1 tag and replaced with v=1 without success I have not modified my DNS records before that. So, keeping in topic with the question, why does ReturnPath refuse my DKIM DNS record? Is something wrong in my DKIM implementation at DNS level? [Add] I have just tried port25.com's tester but at least DKIM passes ---------------------------------------------------------- DomainKeys check details: ---------------------------------------------------------- Result: permerror (DK_STAT_BADKEY: Unusable key, public if verifying, private if signing.) ID(s) verified: header.From=########### DNS record(s): sel._domainkey.domain.org. 
1800 IN TXT ""v=1; p=MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsMMLhxzXkU+tagc44oMi7eX2BsFb8BsWeT8MRL+hxi4Lsosx7tuPm90iYgilNteyJoXuSP5SUf8B2tDAifdzYQhfhctr0hX9b6ocBCukGq5p0GHpNsCPWyFvxZsCkGqLRmkfb0c36quEAWBeQLe4Z/BwXBBiW1g96WFNb2/GRI1+9OHhligdfuo4PPuU+xiwX4GB0Ik50cJL4xTdBf7lrFwoGYa03ZkXuzKxeGE4cTk50OeIs6eqrzAfbmej4nCex2qGOUt1TWI7ZvCY7u3Gxj+XKaE7VFrQACZof+NP0k2pXPHg9saGJqZrr2i6+RoxGD0w/ibjAWij9enwqlnv2ORsZfe+FmXNOLJAhlYvhHaruubDpte1c7V3ZKDceM45ZawnVmSdLCfBrMbsqipzy8NXN5MxuANYFBkx5EDT+Ieab+zqcnf08m9bgDc4RXMYppDT1/lUy6On+nyfZEnJWiH3BUtgxS8X0uXciXbsooTmPnpkzzvvKXAE/Tv3XqL90q51geqP0EmaZI6lRTpiqoX7zFGlEBiiF7/u8oheszATks8LsNZ/boTFy0OVldbYNhxlIuRmqeXkqD6+kM5ObKtMEv3AdaeBiZmvyJTP8tCsSmPt+e954RLlz2HaDjjNnZNgsj/39U2RzZsFbVqW6uyQh36/y1X4joOiPf366GkCAwEAAQ==; t=s"" ---------------------------------------------------------- DKIM check details: ---------------------------------------------------------- Result: pass (matches From: #########) ID(s) verified: header.d=domain.org Canonicalized Headers: message-id:<[email protected]>'0D''0A' date:Thu,'20'18'20'Apr'20'2013'20'11:40:26'20'+0200'0D''0A' from:#############'0D''0A' mime-version:1.0'0D''0A' to:[email protected]'0D''0A' subject:Test'0D''0A' content-type:text/plain;'20'charset=ISO-8859-15;'20'format=flowed'0D''0A' content-transfer-encoding:7bit'0D''0A' dkim-signature:v=1;'20'a=rsa-sha1;'20'c=relaxed;'20'd=domain.org;'20'h='20'message-id:date:from:mime-version:to:subject:content-type'20':content-transfer-encoding;'20's=dom;'20'bh=uoq1oCgLlTqpdDX/iUbLy7J1Wi'20'c=;'20'b= Canonicalized Body: '0D''0A' DNS record(s): sel._domainkey.domain.org. 1800 IN TXT ""v=1; p=MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsMMLhxzXkU+tagc44oMi7eX2BsFb8BsWeT8MRL+hxi4Lsosx7tuPm90iYgilNteyJoXuSP5SUf8B2tDAifdzYQhfhctr0hX9b6ocBCukGq5p0GHpNsCPWyFvxZsCkGqLRmkfb0c36quEAWBeQLe4Z/BwXBBiW1g96WFNb2/GRI1+9OHhligdfuo4PPuU+xiwX4GB0Ik50cJL4xTdBf7lrFwoGYa03ZkXuzKxeGE4cTk50OeIs6eqrzAfbmej4nCex2qGOUt1TWI7ZvCY7u3Gxj+XKaE7VFrQACZof+NP0k2pXPHg9saGJqZrr2i6+RoxGD0w/ibjAWij9enwqlnv2ORsZfe+FmXNOLJAhlYvhHaruubDpte1c7V3ZKDceM45ZawnVmSdLCfBrMbsqipzy8NXN5MxuANYFBkx5EDT+Ieab+zqcnf08m9bgDc4RXMYppDT1/lUy6On+nyfZEnJWiH3BUtgxS8X0uXciXbsooTmPnpkzzvvKXAE/Tv3XqL90q51geqP0EmaZI6lRTpiqoX7zFGlEBiiF7/u8oheszATks8LsNZ/boTFy0OVldbYNhxlIuRmqeXkqD6+kM5ObKtMEv3AdaeBiZmvyJTP8tCsSmPt+e954RLlz2HaDjjNnZNgsj/39U2RzZsFbVqW6uyQh36/y1X4joOiPf366GkCAwEAAQ==; t=s"" Public key used for verification: sel._domainkey.domain.org (4096 bits)

    Read the article

  • NPM not installing dependencies?

    - by neezer
    Having trouble getting NPM to install dependencies with npm install -d in my project directory with a defined package.json file. Here's my package.json: https://gist.github.com/3068312 And after wiping my project root's node modules folder (rm -rf node_modules), I run npm install -d in my project root and am greeted with this: (ssh) /vagrant git:master ? npm install -d npm info it worked if it ends with ok npm info using [email protected] npm info using [email protected] npm info preinstall [email protected] npm http GET https://registry.npmjs.org/sinon npm http GET https://registry.npmjs.org/underscore npm http GET https://registry.npmjs.org/mocha npm http GET https://registry.npmjs.org/request npm http 304 https://registry.npmjs.org/sinon npm http 304 https://registry.npmjs.org/underscore npm http 304 https://registry.npmjs.org/mocha npm http 304 https://registry.npmjs.org/request npm info into /vagrant [email protected] npm info into /vagrant [email protected] npm info into /vagrant [email protected] npm info into /vagrant [email protected] npm info installOne [email protected] npm info installOne [email protected] npm info installOne [email protected] npm info installOne [email protected] npm info unbuild /vagrant/node_modules/underscore npm info unbuild /vagrant/node_modules/mocha npm info unbuild /vagrant/node_modules/sinon npm info unbuild /vagrant/node_modules/request npm ERR! error installing [email protected] npm info unbuild /vagrant/node_modules/underscore npm ERR! error rolling back [email protected] Error: UNKNOWN, unknown error '/vagrant/node_modules/underscore' npm ERR! Error: ENOENT, no such file or directory '/vagrant/node_modules/underscore/package.json' npm ERR! You may report this log at: npm ERR! <http://bugs.debian.org/npm> npm ERR! or use npm ERR! reportbug --attach /vagrant/npm-debug.log npm npm ERR! npm ERR! System Linux 3.2.0-23-generic npm ERR! command "node" "/usr/bin/npm" "install" "-d" npm ERR! cwd /vagrant npm ERR! node -v v0.6.12 npm ERR! npm -v 1.1.4 npm ERR! path /vagrant/node_modules/underscore/package.json npm ERR! code ENOENT npm ERR! message ENOENT, no such file or directory '/vagrant/node_modules/underscore/package.json' npm ERR! errno {} npm ERR! error installing [email protected] npm info unbuild /vagrant/node_modules/request npm ERR! error rolling back [email protected] Error: UNKNOWN, unknown error '/vagrant/node_modules/request' npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /vagrant/npm-debug.log npm not ok If I rerun npm install -d, the error changes to whatever the next package is... if I keep running it it over and over again, it eventually doesn't complain anymore and outputs: (ssh) /vagrant git:master ? npm install -d npm info it worked if it ends with ok npm info using [email protected] npm info using [email protected] npm info preinstall [email protected] npm info build /vagrant npm info linkStuff [email protected] npm info install [email protected] npm info postinstall [email protected] npm info ok However, none of the dependencies for any of these packages get installed. For instance, cheerio has a few dependencies, so when I try running my test suite, I'm greeted with: (ssh) /vagrant git:master ? mocha --compilers coffee:coffee-script --watch spec/* node.js:201 throw e; // process.nextTick error, or 'error' event on first tick ^ Error: Cannot find module 'cheerio-select' at Function._resolveFilename (module.js:332:11) at Function._load (module.js:279:25) at Module.require (module.js:354:17) What gives? 
I'm on Ubuntu Precise64 in a Vagrant virtual box.

    Read the article

  • Apache SSL reverse proxy to a Embed Tomcat

    - by ggarcia24
    I'm trying to put in place a reverse proxy for an application that is running a tomcat embed server over SSL. The application needs to run over SSL on the port 9002 so I have no way of "disabling SSL" for this app. The current setup schema looks like this: [192.168.0.10:443 - Apache with mod_proxy] --> [192.168.0.10:9002 - Tomcat App] After googling on how to make such a setup (and testing) I came across this: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/861137 Which lead to make my current configuration (to try to emulate the --secure-protocol=sslv3 option of wget) /etc/apache2/sites/enabled/default-ssl: <VirtualHost _default_:443> SSLEngine On SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key SSLProxyEngine On SSLProxyProtocol SSLv3 SSLProxyCipherSuite SSLv3 ProxyPass /test/ https://192.168.0.10:9002/ ProxyPassReverse /test/ https://192.168.0.10:9002/ LogLevel debug ErrorLog /var/log/apache2/error-ssl.log CustomLog /var/log/apache2/access-ssl.log combined </VirtualHost> The thing is that the error log is showing error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol Complete request log: [Wed Mar 13 20:05:57 2013] [debug] mod_proxy.c(1020): Running scheme https handler (attempt 0) [Wed Mar 13 20:05:57 2013] [debug] mod_proxy_http.c(1973): proxy: HTTP: serving URL https://192.168.0.10:9002/ [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2011): proxy: HTTPS: has acquired connection for (192.168.0.10) [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2067): proxy: connecting https://192.168.0.10:9002/ to 192.168.0.10:9002 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2193): proxy: connected / to 192.168.0.10:9002 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2444): proxy: HTTPS: fam 2 socket created to connect to 192.168.0.10 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2576): proxy: HTTPS: connection complete to 192.168.0.10:9002 (192.168.0.10) [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection to child 0 established (server demo1agrubu01.demo.lab:443) [Wed Mar 13 20:05:57 2013] [info] Seeding PRNG with 656 bytes of entropy [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/connect initialization [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: unknown state [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1897): OpenSSL: read 7/7 bytes from BIO#7f122800a100 [mem: 7f1230018f60] (BIO dump follows) [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1830): +-------------------------------------------------------------------------+ [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1869): | 0000: 15 03 01 00 02 02 50 ......P | [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1875): +-------------------------------------------------------------------------+ [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1903): OpenSSL: Exit: error in unknown state [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] SSL Proxy connect failed [Wed Mar 13 20:05:57 2013] [info] SSL Library Error: 336032002 error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 0 with abortive shutdown (server example1.domain.tld:443) [Wed Mar 13 20:05:57 2013] [error] (502)Unknown error 502: proxy: pass request body failed to 172.31.4.13:9002 
(192.168.0.10) [Wed Mar 13 20:05:57 2013] [error] [client 192.168.0.10] proxy: Error during SSL Handshake with remote server returned by /dsfe/ [Wed Mar 13 20:05:57 2013] [error] proxy: pass request body failed to 192.168.0.10:9002 (172.31.4.13) from 172.31.4.13 () [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2029): proxy: HTTPS: has released connection for (172.31.4.13) [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1884): OpenSSL: Write: SSL negotiation finished successfully [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 6 with standard shutdown (server example1.domain.tld:443) If I do a wget --secure-protocol=sslv3 --no-check-certificate https://192.168.0.10:9002/ it works perfectly, but from apache is not working. I'm on an Ubuntu Server with the latest updates running apache2 with mod_proxy and mod_ssl enabled: ~$ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" ~# dpkg -s apache2 ... Version: 2.2.22-1ubuntu1.2 ... ~# dpkg -s openssl ... Version: 1.0.1-4ubuntu5.7 ... Hope that anyone may help

    Read the article

  • Mysqld not starting due to apparent db corruption

    - by pitosalas
    I am very new at admining mysql, and bad for me, something caused the db to get clobbered. There are many error messages in the log that I am not sure how to safely proceed. Can you give some tips? Here's the log: 110107 15:07:15 mysqld started 110107 15:07:15 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 110107 15:07:15 InnoDB: Starting log scan based on checkpoint at InnoDB: log sequence number 35 515914826. InnoDB: Doing recovery: scanned up to log sequence number 35 515915839 InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 1 row operations to undo InnoDB: Trx id counter is 0 1697553664 110107 15:07:15 InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percents: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 InnoDB: Apply batch completed InnoDB: Starting rollback of uncommitted transactions InnoDB: Rolling back trx with id 0 1697553198, 1 rows to undoInnoDB: Error: trying to access page number 3522914176 in space 0, InnoDB: space name ./ibdata1, InnoDB: which is outside the tablespace bounds. InnoDB: Byte offset 0, len 16384, i/o type 10 110107 15:07:15InnoDB: Assertion failure in thread 3086403264 in file fil0fil.c line 3922 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html InnoDB: about forcing recovery. mysqld got signal 11; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=0 read_buffer_size=131072 max_used_connections=0 max_connections=100 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 217599 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd=(nil) Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... Cannot determine thread, fp=0xbffc55ac, backtrace may not be correct. Stack range sanity check OK, backtrace follows: 0x8139eec 0x83721d5 0x833d897 0x833db71 0x832aa38 0x835f025 0x835f7a3 0x830a77e 0x8326b57 0x831c825 0x8317b8d 0x82a9e66 0x8315732 0x834fc9a 0x828d7c3 0x81c29dd 0x81b5620 0x813d9fe 0x40fdf3 0x80d5ff1 New value of fp=(nil) failed sanity check, terminating stack trace! Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and follow instructions on how to resolve the stack trace. 
Resolved stack trace is much more helpful in diagnosing the problem, so please do resolve it
The manual page at http://www.mysql.com/doc/en/Crashing.html contains information that should help you find out what is causing the crash.
110107 15:07:15 mysqld ended
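The assertion points at tablespace corruption, so the forced-recovery procedure the log itself links to is the usual next step. A hedged sketch of what that looks like in my.cnf (the value is illustrative; the manual advises starting at 1 and raising it, no higher than needed, only if mysqld still will not start):

    [mysqld]
    # Illustrative only: lets mysqld start despite the detected corruption so the
    # data can be dumped; remove this option once recovery is complete.
    innodb_force_recovery = 1

With the server up in this mode, a common recovery path is to take a full mysqldump of every database, move the old ibdata1 and ib_logfile* files aside, restart without the option so InnoDB rebuilds a clean tablespace, and then reload the dump.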

    Read the article

  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our mySQL configuration for our large Magento website. The reason I believe that mySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted mySQL earlier after tweaking some settings, and so the results here may not be 100% accurate): >> MySQLTuner 1.3.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering [OK] Currently running supported MySQL version 5.5.37-35.0 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM [--] Data in MyISAM tables: 7G (Tables: 332) [--] Data in InnoDB tables: 213G (Tables: 8714) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [--] Data in MEMORY tables: 0B (Tables: 353) [!!] Total fragmented tables: 5492 -------- Security Recommendations ------------------------------------------- [!!] User '@host5.server1.autopartsnetwork.com' has no password set. [!!] User '@localhost' has no password set. [!!] User 'root@%' has no password set. -------- Performance Metrics ------------------------------------------------- [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B) [--] Reads / Writes: 95% / 5% [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads) [!!] Maximum possible memory usage: 220.0G (174% of installed RAM) [OK] Slow queries: 0% (6K/5M) [OK] Highest usage of available connections: 5% (61/1024) [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads) [OK] Query cache efficiency: 66.9% (3M cached / 5M selects) [!!] Query cache prunes per day: 3486361 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts) [!!] Joins performed without indexes: 1328 [OK] Temporary tables created on disk: 11% (126K on disk / 1M total) [OK] Thread cache hit rate: 99% (61 created / 42K connections) [!!] Table cache hit rate: 19% (9K open / 49K opened) [OK] Open file limit used: 2% (712/25K) [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks) [!!] InnoDB buffer pool / data size: 32.0G/213.4G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increasing the query_cache size over 128M may reduce performance Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 512M) [see warning above] join_buffer_size (> 128.0M, or always use indexes with joins) table_cache (> 12288) innodb_buffer_pool_size (>= 213G) My my.cnf configuration is as follows... 
[client] port = 3306 [mysqld_safe] nice = 0 [mysqld] tmpdir = /var/lib/mysql/tmp user = mysql port = 3306 skip-external-locking character-set-server = utf8 collation-server = utf8_general_ci event_scheduler = 0 key_buffer = 512M max_allowed_packet = 64M thread_stack = 512K thread_cache_size = 512 sort_buffer_size = 24M read_buffer_size = 8M read_rnd_buffer_size = 24M join_buffer_size = 128M # for some nightly processes client sessions set the join buffer to 8 GB auto-increment-increment = 1 auto-increment-offset = 1 myisam-recover = BACKUP max_connections = 1024 # max connect errors artificially high to support behaviors of NetScaler monitors max_connect_errors = 999999 concurrent_insert = 2 connect_timeout = 5 wait_timeout = 180 net_read_timeout = 120 net_write_timeout = 120 back_log = 128 # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384) table_open_cache = 12288 tmp_table_size = 512M max_heap_table_size = 512M bulk_insert_buffer_size = 512M open-files-limit = 8192 open-files = 1024 query_cache_type = 1 # large query limit supports SOAP and REST API integrations query_cache_limit = 4M # larger than 512 MB query cache size is problematic; this is typically ~60% full query_cache_size = 512M # set to true on read slaves read_only = false slow_query_log_file = /var/log/mysql/slow.log slow_query_log = 0 long_query_time = 0.2 expire_logs_days = 10 max_binlog_size = 1024M binlog_cache_size = 32K sync_binlog = 0 # SSD RAID10 technically has a write capacity of 10000 IOPS innodb_io_capacity = 400 innodb_file_per_table innodb_table_locks = true innodb_lock_wait_timeout = 30 # These servers have 80 CPU threads; match 1:1 innodb_thread_concurrency = 48 innodb_commit_concurrency = 2 innodb_support_xa = true innodb_buffer_pool_size = 32G innodb_file_per_table innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 2G skip-federated [mysqldump] quick quote-names single-transaction max_allowed_packet = 64M I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
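Based on the tuner warnings above, the quickest wins are capping worst-case memory (1024 connections times roughly 184 MB of per-thread buffers is what drives the 220 GB figure), shrinking the query cache, and turning on the slow query log to find the 20,000+ ms SELECT categories. A hedged sketch of the kind of adjustments that follow from those warnings; exact values depend on installed RAM and real concurrency, so treat these as starting points rather than a tested configuration:

    [mysqld]
    max_connections   = 256     # observed peak was 61; 1024 inflates the worst case
    join_buffer_size  = 8M      # add the missing indexes instead of a 128M buffer
    sort_buffer_size  = 4M
    query_cache_size  = 128M    # the 512M cache is showing millions of prunes per day
    table_open_cache  = 16384   # raise gradually while watching the open file limit
    slow_query_log    = 1       # needed to identify the slow SELECTs New Relic reports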

    Read the article

  • Ajax Control Toolkit December 2013 Release

    - by Stephen.Walther
    Today, we released a new version of the Ajax Control Toolkit that contains several important bug fixes and new features. The new release contains a new Tabs control that has been entirely rewritten in jQuery. You can download the December 2013 release of the Ajax Control Toolkit at http://Ajax.CodePlex.com. Alternatively, you can install the latest version directly from NuGet: The Ajax Control Toolkit and jQuery The Ajax Control Toolkit now contains two controls written with jQuery: the ToggleButton control and the Tabs control.  The goal is to rewrite the Ajax Control Toolkit to use jQuery instead of the Microsoft Ajax Library gradually over time. The motivation for rewriting the controls in the Ajax Control Toolkit to use jQuery is to modernize the toolkit. We want to continue to accept new controls written for the Ajax Control Toolkit contributed by the community. The community wants to use jQuery. We want to make it easy for the community to submit bug fixes. The community understands jQuery. Using the Ajax Control Toolkit with a Website that Already uses jQuery But what if you are already using jQuery in your website?  Will adding the Ajax Control Toolkit to your website break your existing website?  No, and here is why. The Ajax Control Toolkit uses jQuery.noConflict() to avoid conflicting with an existing version of jQuery in a page.  The version of jQuery that the Ajax Control Toolkit uses is represented by a variable named actJQuery.  You can use actJQuery side-by-side with an existing version of jQuery in a page without conflict.Imagine, for example, that you add jQuery to an ASP.NET page using a <script> tag like this: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="TestACTDec2013.WebForm1" %> <!DOCTYPE html> <html > <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <script src="Scripts/jquery-2.0.3.min.js"></script> <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:TabContainer runat="server"> <ajaxToolkit:TabPanel runat="server"> <HeaderTemplate> Tab 1 </HeaderTemplate> <ContentTemplate> <h1>First Tab</h1> </ContentTemplate> </ajaxToolkit:TabPanel> <ajaxToolkit:TabPanel runat="server"> <HeaderTemplate> Tab 2 </HeaderTemplate> <ContentTemplate> <h1>Second Tab</h1> </ContentTemplate> </ajaxToolkit:TabPanel> </ajaxToolkit:TabContainer> </div> </form> </body> </html> The page above uses the Ajax Control Toolkit Tabs control (TabContainer and TabPanel controls).  The Tabs control uses the version of jQuery that is currently bundled with the Ajax Control Toolkit (jQuery version 1.9.1). The page above also includes a <script> tag that references jQuery version 2.0.3.  You might need that particular version of jQuery, for example, to use a particular jQuery plugin. The two versions of jQuery in the page do not create a conflict. This fact can be demonstrated by entering the following two commands in the JavaScript console window: actJQuery.fn.jquery $.fn.jquery Typing actJQuery.fn.jquery will display the version of jQuery used by the Ajax Control Toolkit and typing $.fn.jquery (or jQuery.fn.jquery) will show the version of jQuery used by other jQuery plugins in the page.      Preventing jQuery from Loading Twice So by default, the Ajax Control Toolkit will not conflict with any existing version of jQuery used in your application. However, this does mean that if you are already using jQuery in your application then jQuery will be loaded twice. 
For performance reasons, you might want to avoid loading the jQuery library twice. By taking advantage of the <remove> element in the AjaxControlToolkit.config file, you can prevent the Ajax Control Toolkit from loading its version of jQuery. <ajaxControlToolkit> <scripts> <remove name="jQuery.jQuery.js" /> </scripts> <controlBundles> <controlBundle> <control name="TabContainer" /> <control name="TabPanel" /> </controlBundle> </controlBundles> </ajaxControlToolkit> Be careful here:  the name of the script being removed – jQuery.jQuery.js – is case-sensitive. If you remove jQuery then it is your responsibility to add the exact same version of jQuery back into your application.  You can add jQuery back using a <script> tag like this: <script src="Scripts/jquery-1.9.1.min.js"></script>     Make sure that you add the <script> tag before the server-side <form> tag or the Ajax Control Toolkit won’t detect the presence of jQuery. Alternatively, you can use the ToolkitScriptManager like this: <ajaxToolkit:ToolkitScriptManager runat="server"> <Scripts> <asp:ScriptReference Name="jQuery.jQuery.js" /> </Scripts> </ajaxToolkit:ToolkitScriptManager> The Ajax Control Toolkit is tested against the particular version of jQuery that is bundled with the Ajax Control Toolkit. Currently, the Ajax Control Toolkit uses jQuery version 1.9.1. If you attempt to use a different version of jQuery with the Ajax Control Toolkit then you will get the exception jQuery 1.9.1 is required in your JavaScript console window: If you need to use a different version of jQuery in the same page as the Ajax Control Toolkit then you should not use the <remove> element. Instead, allow the Ajax Control Toolkit to load its version of jQuery side-by-side with the other version of jQuery. Lots of Bug Fixes As usual, we implemented several important bug fixes with this release. The bug fixes concerned the following three controls: Tabs control – In the course of rewriting the Tabs control to use jQuery, we fixed several bugs related to the Tabs control. AjaxFileUpload control – We resolved an issue concerning the AjaxFileUpload and the TMP directory. HTMLEditor control – We updated the HTMLEditor control to use the new Ajax Control Toolkit bundling and minification framework. Summary I would like to thank the Superexpert team for their hard work on this release. Many long hours of coding and testing went into making this release possible.

    Read the article

  • Week in Geek: New Security Flaw Confirmed for Internet Explorer Edition

    - by Asian Angel
    This week we learned how to use a PC to stay entertained while traveling for the holidays, create quality photo prints with free software, share links between any browser and any smartphone, create perfect Christmas photos using How-To Geek’s 10 best how-to photo guides, and had fun decorating Firefox with a collection of Holiday 2010 Personas themes. Photo by Repoort. Random Geek Links Photo by Asian Angel. Critical 0-Day Flaw Affects All Internet Explorer Versions, Microsoft Warns Microsoft has confirmed a zero-day vulnerability affecting all supported versions of Internet Explorer, including IE8, IE7 and IE6. Note: Article contains link to Microsoft Security Advisory detailing two work-arounds until a security update is released. Hackers targeting human rights, indie media groups Hackers are increasingly hitting the Web sites of human rights and independent media groups in an attempt to silence them, says a new study released this week by Harvard University’s Berkman Center for Internet & Society. OpenBSD: audits give no indication of back doors So far, the analyses of OpenBSD’s crypto and IPSec code have not provided any indication that the system contains back doors for listening to encrypted VPN connections. But the developers have already found two bugs during their current audits. Sophos: Beware Facebook’s new facial-recognition feature Facebook’s new facial recognition software might result in undesirable photos of users being circulated online, warned a security expert, who urged users to keep abreast with the social network’s privacy settings to prevent the abovementioned scenario from becoming a reality. Microsoft withdraws flawed Outlook update Microsoft has withdrawn update KB2412171 for Outlook 2007, released last Patch Tuesday, after a number of user complaints. Skype: Millions still without service Skype was still working to right itself going into the holiday weekend from a major outage that began this past Wednesday. Mozilla improves sync setup and WebGL in Firefox 4 beta 8 Firefox 4.0 beta 8 brings better support for WebGL and introduces an improved setup process for Firefox Sync that simplifies the steps for configuring the synchronization service across multiple devices. Chrome OS the litmus test for cloud The success or failure of Google’s browser-oriented Chrome OS will be the litmus test to decide if the cloud is capable of addressing user needs for content and services, according to a new Ovum report released Monday. FCC Net neutrality rules reach mobile apps The Federal Communications Commission (FCC) finally released its long-expected regulations on Thursday and the related explanations total a whopping 194 pages. One new item that was not previously disclosed: mobile wireless providers can’t block “applications that compete with the provider’s” own voice or video telephony services. KDE and the Document Foundation join Open Invention Network The KDE e.V. and the Document Foundation (TDF) have both joined the Open Invention Network (OIN) as licensees, expanding the organization’s roster of supporters. Report: SEC looks into Hurd’s ousting from HP The scandal surrounding Mark Hurd’s departure from the world’s largest technology company in August has officially drawn attention from the U.S. Securities and Exchange Commission. Report: Google requests delay of new Google TVs Google TV is apparently encountering a bit of static that has resulted in a programming change. 
Geek Video of the Week This week we have a double dose of geeky video goodness for you with the original Mac vs PC video and the trailer for the sequel. Photo courtesy of Peacer. Mac vs PC Photo courtesy of Peacer. Mac vs PC 2 Trailer Random TinyHacker Links Awesome Tools To Extract Audio From Video Here’s a list of really useful, and free tools to rip audio from videos. Getting Your iPhone Out of Recovery Mode Is your iPhone stuck in recovery mode? This tutorial will help you get it out of that state. Google Shared Spaces Quickly create a shared space and collaborate with friends online. McAfee Internet Security 2011 – Upgrade not worthy of a version change McAfee has released their 2011 version of security products. And as this review details, the upgrades are minimal when compared to their 2010 products. For more information, check out the review. 200 Countries Plotted Hans Rosling’s famous lectures combine enormous quantities of public data with a sport’s commentator’s style to reveal the story of the world’s past, present and future development. Now he explores stats in a way he has never done before – using augmented reality animation. Super User Questions Enjoy looking through this week’s batch of popular questions and answers from Super User. How to restore windows 7 to a known working state every time it boots? Is there an easy way to mass-transfer all files between two computers? Coffee spilled inside computer, damaged hard drive Computer does not boot after ram upgrade Keyboard not detected when trying to install Ubuntu 10.10 How-To Geek Weekly Article Recap Have you had a super busy week while preparing for the holiday weekend? Then here is your chance to get caught up on your reading with our five hottest articles for the week. Ask How-To Geek: Rescuing an Infected PC, Installing Bloat-free iTunes, and Taming a Crazy Trackpad How to Use the Avira Rescue CD to Clean Your Infected PC Eight Geektacular Christmas Projects for Your Day Off VirtualBox 4.0 Rocks Extensions and a Simplified GUI Ask the Readers: How Many Monitors Do You Use with Your Computer? One Year Ago on How-To Geek Here are more great articles from one year ago for you to read and enjoy during the holiday break. Enjoy Distraction-Free Writing with WriteMonkey Shutter is a State of Art Screenshot Tool for Ubuntu Get Hex & RGB Color Codes the Easy Way Find User Scripts for Your Favorite Websites the Easy Way Access Your Unsorted Bookmarks the Easy Way (Firefox) The Geek Note That “wraps” things up for this week and we hope that everyone enjoys the rest of their holiday break! Found a great tip during the break? Then be sure to send it in to us at [email protected]. Photo by ArSiSa7. Latest Features How-To Geek ETC How to Use the Avira Rescue CD to Clean Your Infected PC The Complete List of iPad Tips, Tricks, and Tutorials Is Your Desktop Printer More Expensive Than Printing Services? 20 OS X Keyboard Shortcuts You Might Not Know HTG Explains: Which Linux File System Should You Choose? HTG Explains: Why Does Photo Paper Improve Print Quality? Simon’s Cat Explores the Christmas Tree! [Video] The Outdoor Lights Scene from National Lampoon’s Christmas Vacation [Video] The Famous Home Alone Pizza Delivery Scene [Classic Video] Chronicles of Narnia: The Voyage of the Dawn Treader Theme for Windows 7 Cardinal and Rabbit Sharing a Tree on a Cold Winter Morning Wallpaper An Alternate Star Wars Christmas Special [Video]

    Read the article

  • Parallelism in .NET – Part 18, Task Continuations with Multiple Tasks

    - by Reed
In my introduction to Task continuations I demonstrated how the Task class provides a more expressive alternative to traditional callbacks.  Task continuations provide a much cleaner syntax than traditional callbacks, but there are other reasons to switch to using continuations… Task continuations provide a clean syntax, and a very simple, elegant means of synchronizing asynchronous method results with the user interface.  In addition, continuations provide a very simple, elegant means of working with collections of tasks.
Prior to .NET 4, working with multiple related asynchronous method calls was very tricky.  If, for example, we wanted to run two asynchronous operations, followed by a single method call which we wanted to run when the first two methods completed, we’d have to program all of the handling ourselves.  We would likely need to take some approach such as using a shared callback which synchronized against a common variable, or using a WaitHandle shared within the callbacks to allow one to wait for the second.  Although this could be accomplished easily enough, it requires manually placing this handling into every algorithm which requires this form of blocking.  This is error prone, difficult, and can easily lead to subtle bugs.
Similar to how the Task class provides static methods to block until multiple tasks have completed, TaskFactory contains static methods which allow a continuation to be scheduled upon the completion of multiple tasks: TaskFactory.ContinueWhenAll. This allows you to easily specify a single delegate to run when a collection of tasks has completed.  For example, suppose we have a class which fetches data from the network.  This can be a long running operation, and potentially fail in certain situations, such as a server being down.  As a result, we have three separate servers which we will “query” for our information.  Now, suppose we want to grab data from all three servers, and verify that the results are the same from all three.
With traditional asynchronous programming in .NET, this would require using three separate callbacks, and managing the synchronization between the various operations ourselves.  The Task and TaskFactory classes simplify this for us, allowing us to write:
var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) );
var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) );
var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) );
var result = Task.Factory.ContinueWhenAll(
    new[] { server1, server2, server3 },
    (tasks) =>
    {
        // Propagate exceptions (see below)
        Task.WaitAll(tasks);
        return this.CompareTaskResults(
            tasks[0].Result, tasks[1].Result, tasks[2].Result);
    });
This is clean, simple, and elegant.  The one complication is the Task.WaitAll(tasks); statement.
Although the continuation will not complete until all three tasks (server1, server2, and server3) have completed, there is a potential snag.  If the networkClass.GetResults method fails, and raises an exception, we want to make sure to handle it cleanly.  By using Task.WaitAll, any exceptions raised within any of our original tasks will get wrapped into a single AggregateException by the WaitAll method, providing us a simplified means of handling the exceptions.  If we wait on the continuation, we can trap this AggregateException, and handle it cleanly.  Without this line, it’s possible that an exception could remain uncaught and unhandled by a task, which later might trigger a nasty UnobservedTaskException.  This would happen any time two of our original tasks failed.
Just as we can schedule a continuation to occur when an entire collection of tasks has completed, we can just as easily set up a continuation to run when any single task within a collection completes.  If, for example, we didn’t need to compare the results of all three network locations, but only needed to use one, we could still schedule three tasks.  We could then have our completion logic work on the first task which completed, and ignore the others.  This is done via TaskFactory.ContinueWhenAny:
var server1 = Task.Factory.StartNew( () => networkClass.GetResults(firstServer) );
var server2 = Task.Factory.StartNew( () => networkClass.GetResults(secondServer) );
var server3 = Task.Factory.StartNew( () => networkClass.GetResults(thirdServer) );
var result = Task.Factory.ContinueWhenAny(
    new[] { server1, server2, server3 },
    (firstTask) =>
    {
        return this.ProcessTaskResult(firstTask.Result);
    });
Here, instead of working with all three tasks, we’re just using the first task which finishes.  This is very useful, as it allows us to easily work with results of multiple operations, and “throw away” the others.  However, you must take care when using ContinueWhenAny to properly handle exceptions.  At some point, you should always wait on each task (or use the Task.Result property) in order to propagate any exceptions raised from within the task.  Failing to do so can lead to an UnobservedTaskException.
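As a concrete illustration of that last point, here is a small hedged sketch of how the ContinueWhenAll example above might be consumed. It assumes result is the continuation task from the first example and that CompareTaskResults returns a value we can log; reading result.Result blocks until the continuation finishes and re-throws anything the server queries raised, wrapped in an AggregateException:
try
{
    // Blocks until the continuation completes and observes any exceptions
    // raised by the three GetResults calls.
    var comparison = result.Result;
    Console.WriteLine("Servers agree: {0}", comparison);
}
catch (AggregateException ae)
{
    foreach (var inner in ae.Flatten().InnerExceptions)
    {
        Console.WriteLine("A server query failed: {0}", inner.Message);
    }
}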

    Read the article

  • Securing an ASP.NET MVC 2 Application

    - by rajbk
This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 Application called Northwind Traders Human Resources.  The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL Server Management Studio shows the change. I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls. The reporting relationship for Northwind Employees is shown below.
The requirements for our application are as follows:
Employees can see their LastName, FirstName, Title, Address and Salary
Employees are allowed to edit only their Address information
Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports
Employees cannot see records of non-immediate reports
Employees are allowed to edit only the Salary and Title information of their immediate reports
Employees are not allowed to edit the Address of an immediate report
Employees should be authenticated into the system
Employees by default get the “Employee” role. If a user has direct reports, they will also get assigned a “Manager” role.
We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect against Cross Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, into loading a page which contains a malicious request where, without Steven’s knowledge, a form on the site posts information back to the Northwind HR website using Steven’s credentials. Michael could use this technique to give himself a raise :-)
UI Notes
The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew who has an employeeID of 2 (Employees/Edit/2), she will get a “Not Authorized” error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him.
Implementation Notes
All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie.
private void SetAuthenticationCookie(int employeeID, List<string> roles)
{
    HttpCookiesSection cookieSection = (HttpCookiesSection) ConfigurationManager.GetSection("system.web/httpCookies");
    AuthenticationSection authenticationSection = (AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication");
    FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket(
        1,
        employeeID.ToString(),
        DateTime.Now,
        DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes),
        false,
        string.Join("|", roles.ToArray()));
    String encryptedTicket = FormsAuthentication.Encrypt(authTicket);
    HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket);
    if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL)
    {
        authCookie.Secure = true;
    }
    HttpContext.Current.Response.Cookies.Add(authCookie);
}
We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier.
protected void Application_AuthenticateRequest(Object sender, EventArgs e){ if (Context.User != null) { string cookieName = FormsAuthentication.FormsCookieName; HttpCookie authCookie = Context.Request.Cookies[cookieName]; if (authCookie == null) return; FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); string[] roles = authTicket.UserData.Split(new char[] { '|' }); FormsIdentity fi = (FormsIdentity)(Context.User.Identity); Context.User = new System.Security.Principal.GenericPrincipal(fi, roles); }} We ensure that a user has permissions to view a record by creating a custom attribute AuthorizeToViewID that inherits from ActionFilterAttribute. public class AuthorizeToViewIDAttribute : ActionFilterAttribute{ IEmployeeRepository employeeRepository = new EmployeeRepository(); public override void OnActionExecuting(ActionExecutingContext filterContext) { if (filterContext.ActionParameters.ContainsKey("id") && filterContext.ActionParameters["id"] != null) { if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"])) { return; } } throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it"); }} We add the AuthorizeToView attribute to any Action method that requires authorization. [HttpPost][Authorize(Order = 1)]//To prevent CSRF[ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]//See AuthorizeToViewIDAttribute class[AuthorizeToViewID(Order = 3)] [ActionName("Edit")]public ActionResult Update(int id){ var employeeToEdit = employeeRepository.GetEmployee(id); if (employeeToEdit != null) { //Employees can edit only their address //A manager can edit the title and salary of their subordinate string[] whiteList = (employeeToEdit.IsSubordinate) ? new string[] { "Title", "Salary" } : new string[] { "Address" }; if (TryUpdateModel(employeeToEdit, whiteList)) { employeeRepository.Save(employeeToEdit); return RedirectToAction("Details", new { id = id }); } else { ModelState.AddModelError("", "Please correct the following errors."); } } return View(employeeToEdit);} The Authorize attribute is added to ensure that only authorized users can execute that Action. We use the TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) that a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specify the order in which the attributes are executed. The Edit View uses the AntiForgeryToken helper to render the hidden token: ......<% using (Html.BeginForm()) {%><%=Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt)%><%= Html.ValidationSummary(true, "Please correct the errors and try again.") %><div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %></div><div class="editor-field">...... The application uses View specific models for ease of model binding. 
public class EmployeeViewModel{ public int EmployeeID; [Required] [DisplayName("Last Name")] public string LastName { get; set; } [Required] [DisplayName("First Name")] public string FirstName { get; set; } [Required] [DisplayName("Title")] public string Title { get; set; } [Required] [DisplayName("Address")] public string Address { get; set; } [Required] [DisplayName("Salary")] [Range(500, double.MaxValue)] public decimal Salary { get; set; } public bool IsSubordinate { get; set; }} To help with displaying readonly/editable fields, we use a helper method. //Simple extension method to display a TextboxFor or DisplayFor based on the isEditable variablepublic static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, bool isEditable){ if (isEditable) { return htmlHelper.TextBoxFor(expression); } else { return htmlHelper.DisplayFor(expression); }} The helper method is used in the view like so: <%=Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate)%> As mentioned in this post, there is a much easier way to update properties on an object. Download Demo Project VS 2008, ASP.NET MVC 2 RTM Remember to change the connectionString to point to your Northwind DB NorthwindHR.zip Feedback and bugs are always welcome :-)
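The post does not show the repository side of IsAuthorizedToView, but a minimal sketch consistent with the rules above (an employee may view their own record or that of a direct report) might look like the following. The ReportsTo property maps to Northwind's self-referencing column and GetEmployee is the repository method already used in the Update action, so treat this as an assumption rather than the shipped implementation:
public bool IsAuthorizedToView(int employeeID)
{
    // SetAuthenticationCookie stored the EmployeeID as the forms ticket name,
    // so the current user's id can be read back from the identity.
    int currentID = int.Parse(HttpContext.Current.User.Identity.Name);
    if (employeeID == currentID)
    {
        return true; // own record
    }
    var employee = GetEmployee(employeeID);
    // Direct reports only; any other record is rejected.
    return employee != null && employee.ReportsTo == currentID;
}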

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >