Search Results

Search found 13749 results on 550 pages for 'reason'.

Page 22/550 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Python Twisted Client Connection Lost

    - by MovieYoda
    I have this twisted client, which connects to a twisted server holding an index. I ran this client from the command line and it worked fine. Now I have modified it to run in a loop (see main()) so that I can keep querying. But the client runs only once; the next time around it simply says connection lost \n Connection lost - goodbye!. What am I doing wrong? In the loop I am reconnecting to the server; is that wrong?

        from twisted.internet import reactor
        from twisted.internet import protocol
        from settings import AS_SERVER_HOST, AS_SERVER_PORT

        # a client protocol
        class Spell_client(protocol.Protocol):
            """Once connected, send a message, then print the result."""

            def connectionMade(self):
                self.transport.write(self.factory.query)

            def dataReceived(self, data):
                "As soon as any data is received, write it back."
                if data == '!':
                    self.factory.results = ''
                else:
                    self.factory.results = data
                self.transport.loseConnection()

            def connectionLost(self, reason):
                print "\tconnection lost"

        class Spell_Factory(protocol.ClientFactory):
            protocol = Spell_client

            def __init__(self, query):
                self.query = query
                self.results = ''

            def clientConnectionFailed(self, connector, reason):
                print "\tConnection failed - goodbye!"
                reactor.stop()

            def clientConnectionLost(self, connector, reason):
                print "\tConnection lost - goodbye!"
                reactor.stop()

        # this connects the protocol to a server running on port 8090
        def main():
            print 'Connecting to %s:%d' % (AS_SERVER_HOST, AS_SERVER_PORT)
            while True:
                print
                query = raw_input("Query:")
                if query == '':
                    return
                f = Spell_Factory(query)
                reactor.connectTCP(AS_SERVER_HOST, AS_SERVER_PORT, f)
                reactor.run()
                print f.results
                return

        if __name__ == '__main__':
            main()
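
    The probable cause (my inference, not stated in the original post): Twisted's reactor is not restartable, so once clientConnectionLost() calls reactor.stop(), the second reactor.run() cannot bring it back up and every later connection dies immediately. A minimal sketch of one workaround, keeping a single reactor alive and chaining queries from the factory callbacks (it reuses Spell_client from the post, and blocking raw_input is tolerable here only because nothing else needs the reactor between queries):

        from twisted.internet import reactor, protocol
        from settings import AS_SERVER_HOST, AS_SERVER_PORT

        class Spell_Factory(protocol.ClientFactory):
            protocol = Spell_client  # the protocol class from the original post

            def __init__(self, query):
                self.query = query
                self.results = ''

            def clientConnectionLost(self, connector, reason):
                print self.results
                ask_next_query()     # chain the next query instead of reactor.stop()

            def clientConnectionFailed(self, connector, reason):
                print "\tConnection failed - goodbye!"
                reactor.stop()

        def ask_next_query():
            query = raw_input("Query:")
            if query == '':
                reactor.stop()       # stop only once the user is done
                return
            reactor.connectTCP(AS_SERVER_HOST, AS_SERVER_PORT, Spell_Factory(query))

        reactor.callWhenRunning(ask_next_query)
        reactor.run()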


  • Why is a NullReferenceException thrown when a ToolStrip button is clicked twice with code in the `Click` event handler?

    - by Patrick
    I created a clean WindowsFormsApplication solution, added a ToolStrip to the main form, and placed one button on it. I've also added an OpenFileDialog, so the Click event of the ToolStripButton looks like this:

        private void toolStripButton1_Click(object sender, EventArgs e)
        {
            openFileDialog1.ShowDialog();
        }

    I didn't change any other properties or events. The funny thing is that when I double-click the ToolStripButton (the second click must be quite fast, before the dialog opens), then cancel both dialogs (or choose a file, it doesn't really matter) and then click in the client area of the main form, a NullReferenceException crashes the application (error details attached at the end of the post). Please note that the Click event is implemented while DoubleClick is not. What's even stranger is that when the OpenFileDialog is replaced by any user-implemented form, the ToolStripButton is blocked from being clicked twice. I'm using VS2008 with .NET 3.5. I didn't change many options in VS (only font size, workspace folder and line numbering). Does anyone know how to solve this? It is 100% reproducible on my machine; is it on others too? One solution I can think of is disabling the button before calling OpenFileDialog.ShowDialog() and then enabling it again afterwards (but that's not nice). Any other ideas? And now the promised error details:

        System.NullReferenceException was unhandled
        Message="Object reference not set to an instance of an object."
        Source="System.Windows.Forms"
        StackTrace:
            at System.Windows.Forms.NativeWindow.WindowClass.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
            at System.Windows.Forms.UnsafeNativeMethods.PeekMessage(MSG& msg, HandleRef hwnd, Int32 msgMin, Int32 msgMax, Int32 remove)
            at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
            at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
            at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
            at System.Windows.Forms.Application.Run(Form mainForm)
            at WindowsFormsApplication1.Program.Main() in C:\Users\Marchewek\Desktop\Workspaces\VisualStudio\WindowsFormsApplication1\Program.cs:line 20
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
        InnerException:
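
    For what it's worth, the poster's own disable/enable idea can be written so the button always comes back, and deferring the dialog out of the click handler is another hedged workaround (a sketch, not a confirmed fix for this particular crash):

        private void toolStripButton1_Click(object sender, EventArgs e)
        {
            // Variant 1: make the second click physically impossible.
            toolStripButton1.Enabled = false;
            try
            {
                openFileDialog1.ShowDialog();
            }
            finally
            {
                toolStripButton1.Enabled = true;  // restore even if the dialog throws
            }
        }

        private void toolStripButton2_Click(object sender, EventArgs e)
        {
            // Variant 2: let the ToolStrip finish processing the click
            // (including its mouse-capture bookkeeping) before the modal
            // dialog takes over the message loop.
            BeginInvoke(new Action(() => openFileDialog1.ShowDialog()));
        }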


  • Usage of Assert.Inconclusive

    - by Johannes Rudolph
    Hi, I'm wondering how someone should use Assert.Inconclusive(). I'm using it when my unit test is about to fail for a reason other than the one it actually tests for. E.g. I have a method on a class that calculates the sum of an array of ints. On the same class there is also a method to calculate the average of the elements; it is implemented by calling Sum() and dividing by the length of the array. Writing a unit test for Sum() is simple. However, when I write a test for Average() and Sum() fails, Average() is likely to fail too. The failure of Average() is not explicit about the reason it failed; it failed for a reason other than what it should test for. That's why I first check whether Sum() returns the correct result, and otherwise call Assert.Inconclusive(). Is this considered good practice? What is Assert.Inconclusive() intended for? Or should I rather solve the previous example by means of an isolation framework?
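
    A minimal sketch of the guard described above (MSTest; the Calculator class and its methods are made-up names for illustration):

        [TestMethod]
        public void Average_ReturnsArithmeticMean()
        {
            var calc = new Calculator();  // hypothetical class under test
            int[] data = { 1, 2, 3 };

            // If the prerequisite method is broken, this test cannot say
            // anything meaningful about Average(), so bail out as inconclusive.
            if (calc.Sum(data) != 6)
                Assert.Inconclusive("Precondition failed: Sum() is incorrect.");

            Assert.AreEqual(2, calc.Average(data));
        }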


  • Is sending a hashed password over the wire a security hole?

    - by Ubiquitous Che
    I've come across a system that is in use by a company that we are considering partnering with on a medium-sized (for us, not them) project. They have a web service that we will need to integrate with. My current understanding of proper username/password management is that the username may be stored as plaintext in the database. Every user should have a unique pseudo-random salt, which may also be stored in plaintext. The text of their password must be concatenated with the salt, and this combined string may then be hashed and stored in the database in an nvarchar field. So long as passwords are submitted to the website (or web service) in plaintext, everything should be just lovely. Feel free to rip into my understanding as summarized above if I'm wrong. Anyway, back to the subject at hand. The web service run by this potential partner doesn't accept username and password, which I had anticipated. Instead, it accepts two string fields named 'Username' and 'PasswordHash'. The 'PasswordHash' value that I have been given does indeed look like a hash, and not just a value for a mis-named password field. This is raising a red flag for me. I'm not sure why, but I feel uncomfortable sending a hashed password over the wire for some reason. Off the top of my head I can't think of a reason why this would be a bad thing... Technically, the hash is available in the database anyway. But it's making me nervous, and I'm not sure if there's a reason for this or if I'm just being paranoid.
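
    The discomfort can be made concrete. If the server-side check is essentially the comparison below (a simplification I'm assuming, not the partner's actual code), then the hash itself becomes the credential: anyone who captures or steals it can authenticate without ever knowing the password, the classic pass-the-hash problem.

        // Simplified server-side check under the assumed scheme: the value
        // sent over the wire is compared directly against the stored hash,
        // so possessing the hash is as good as knowing the password.
        bool Authenticate(string username, string passwordHash)
        {
            string storedHash = LookupHashFromDb(username);  // hypothetical helper
            return storedHash == passwordHash;
        }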


  • Scheduler with Asp Mvc

    - by Samuel
    Hi, I want to use a scheduler like the Telerik Scheduler in my MVC project. The problem is that the Scheduler is an ASP.NET WebForms control, so I had to create a WebForm page in my MVC project to host it. When I display the page, the control's layout renders fine, but when I try to interact with it (click to change the date, switch from day view to week view), the control doesn't change. I know that postback doesn't exist in MVC views, but does it work in a WebForm page inside an MVC project? If it doesn't, that would explain why the control doesn't respond when I interact with it: the Scheduler is 100% databound, so when I change the date the postback carries none of the changed data, and the control can't update its layout. Have you any ideas about postbacks from WebForm pages in an MVC project? What kind of design could I adopt? (Two different projects: one for the Scheduler in WebForms, and another for the rest of my website in MVC?) Is there another scheduler control that is easier to use? Any tips and tricks for when you need both WebForms controls and MVC in one MVC project? Thank you very much.
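
    Postbacks do generally work from a plain .aspx page inside an ASP.NET MVC project, provided routing leaves the page alone. A hedged sketch of the usual arrangement in Global.asax (route patterns assumed, not taken from the post):

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Let requests for WebForm pages (e.g. the Scheduler host page)
            // bypass MVC routing so their postbacks reach the .aspx handler.
            routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" });
        }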


  • PHP Object Access Syntax Question with the $

    - by ImperialLion
    I've been having trouble searching for this answer because I am not quite sure how to phrase it. I am new to PHP and still getting my feet on the ground. I was writing a page with a class in it that had a property called name. When I originally wrote the page there was no class, so I just had a variable called $name. When I went to encapsulate it in a class, I accidentally changed it to $myClass->$name. It took me a while to realize that the syntax I needed was $myClass->name. The reason it took so long was that the error I kept getting was "Attempt to access a null property", or something along those lines. The error led me to believe it was a data population error. My question is: does $myClass->$name have a valid meaning? In other words, is there a time you would use this, and a reason why it doesn't create a syntax error? If so, what is the semantic meaning of that code, and when would I use it? If it's not valid, is there a reason that it doesn't create a syntax error?
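
    It does have a valid meaning, which is why there is no syntax error: $myClass->$name is a variable property access, where the current value of $name is used as the property name. With $name unset, PHP ends up looking for a property with a null name, hence the runtime notice. A quick illustration (class and values are made up):

        <?php
        class MyClass {
            public $name  = 'Alice';
            public $email = '[email protected]';
        }

        $myClass = new MyClass();

        $prop = 'email';
        echo $myClass->$prop;  // variable property: reads $myClass->email
        echo $myClass->name;   // plain property access

        // Here $undefined has no value, so $myClass->$undefined tries to
        // read a property whose name is null - the notice the poster saw.
        echo $myClass->$undefined;
        ?>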


  • Macbook Pro - Randomly sleeps and won't wake up

    - by James
    All, I have a MacBook Pro 13" (mid 2009) that has had a long-standing issue which seems to be getting worse. Occasionally, I will go to wake the computer with the keyboard and can't wake it. The HDD spins up and the light on the front of the computer stops blinking, but as soon as it seems like the display should light up, the HDD stops and the light begins blinking again. More rarely, the computer will suddenly sleep while I am using it and then enter the same sleep loop. The only way to resume working on the computer is to wait. Doing a hard restart just puts it right back into the 'sleep loop'. Here is an excerpt from kernel.log showing the laptop's apparent narcolepsy:

        Jun 5 22:20:40 james-hales-macbook-pro kernel[0]: Wake reason: OHC1
        Jun 5 22:20:40 james-hales-macbook-pro kernel[0]: Previous Sleep Cause: 5
        Jun 5 22:20:40 james-hales-macbook-pro kernel[0]: The USB device Apple Internal Keyboard / Trackpad (Port 6 of Hub at 0x4000000) may have caused a wake by issuing a remote wakeup (2)
        Jun 5 22:20:40 james-hales-macbook-pro kernel[0]: HID tickle 31 ms
        Jun 5 22:20:41 james-hales-macbook-pro kernel[0]: 00000000 00000020 NVEthernet::setLinkStatus - not Active
        Jun 5 22:20:45 james-hales-macbook-pro kernel[0]: MacAuthEvent en1 Auth result for: 20:4e:7f:48:c0:ef MAC AUTH succeeded
        Jun 5 22:20:45 james-hales-macbook-pro kernel[0]: wlEvent: en1 en1 Link UP
        Jun 5 22:20:45 james-hales-macbook-pro kernel[0]: AirPort: Link Up on en1
        Jun 5 22:20:45 james-hales-macbook-pro kernel[0]: en1: BSSID changed to 20:4e:7f:48:c0:ef
        Jun 5 22:20:46 james-hales-macbook-pro kernel[0]: AirPort: RSN handshake complete on en1
        Jun 5 22:20:48 james-hales-macbook-pro kernel[0]: 00000000 00000020 NVEthernet::setLinkStatus - not Active
        Jun 5 22:20:54 james-hales-macbook-pro kernel[0]:
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: Wake reason: OHC1
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: Previous Sleep Cause: 5
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: The USB device Apple Internal Keyboard / Trackpad (Port 6 of Hub at 0x4000000) may have caused a wake by issuing a remote wakeup (2)
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: wlEvent: en1 en1 Link DOWN
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: AirPort: Link Down on en1. Reason 4 (Disassociated due to inactivity).
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: HID tickle 26 ms
        Jun 5 22:20:55 james-hales-macbook-pro kernel[0]: 00000000 00000020 NVEthernet::setLinkStatus - not Active
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: MacAuthEvent en1 Auth result for: 20:4e:7f:48:c0:ef MAC AUTH succeeded
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: wlEvent: en1 en1 Link UP
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: AirPort: Link Up on en1
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: en1: BSSID changed to 20:4e:7f:48:c0:ef
        Jun 5 22:20:58 james-hales-macbook-pro kernel[0]: AirPort: RSN handshake complete on en1
        Jun 5 22:21:02 james-hales-macbook-pro kernel[0]: 00000000 00000020 NVEthernet::setLinkStatus - not Active
        Jun 5 22:21:08 james-hales-macbook-pro kernel[0]:

    I have tried resetting the SMC and reinstalling Lion (short of erasing and installing) to no avail. The Genius Bar insisted that the problem would be resolved by reinstalling Lion (which they did, but it didn't fix anything; they keep insisting...). Please don't say "logic board." Thoughts?


  • ASP.NET MVC 2 RTM Unit Tests not compiling

    - by nmarun
    I found something weird this time when it came to the ASP.NET MVC 2 release. Only a handful of people 'made noise' about it... at least on the asp.net blog site; usually there's a big 'WOOHAA... <something> is released' kind of thing. Hmm... but here's the reason I'm writing this post. I'm not sure how many of you read the release notes before downloading the version... I did, I did, I did. There's a 'Known issues' section in the document, and I'm quoting the text as-is from that section:

        Unit test project does not contain reference to ASP.NET MVC 2 project: If the Solution Explorer window is hidden in Visual Studio, when you create a new ASP.NET MVC 2 Web application project and you select the option Yes, create a unit test project in the Create Unit Test Project dialog box, the unit test project is created but does not have a reference to the associated ASP.NET MVC 2 project. When you build the solution, Visual Studio will display compilation errors and the unit tests will not run. There are two workarounds. The first workaround is to make sure that the Solution Explorer is displayed when you create a new ASP.NET MVC 2 Web application project. If you prefer to keep Solution Explorer hidden, the second workaround is to manually add a project reference from the unit test project to the ASP.NET MVC 2 project.

    This definitely looks like a bug to me; see below for a visual. At the top right corner you'll see that the Solution Explorer is set to auto-hide, and there's no reference to the TestMvc2 project, which is why we get compilation errors without even writing a single line of code. So thanks to <VeryBigFont>ME</VeryBigFont> and <VerySmallFont>Microsoft</VerySmallFont>, we've shown the world how to resolve a major issue and to live in Peace with the rest of humanity!
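
    The second workaround boils down to adding a project reference from the test project back to the MVC project, either via Add Reference > Projects in the IDE or by hand in the test project's .csproj with something like this (the path and GUID below are placeholders):

        <ItemGroup>
          <ProjectReference Include="..\TestMvc2\TestMvc2.csproj">
            <Project>{00000000-0000-0000-0000-000000000000}</Project>
            <Name>TestMvc2</Name>
          </ProjectReference>
        </ItemGroup>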


  • MS Expression Web 4 SuperPreview – Big Disappointment

    - by smehaffie
    I just downloaded Expression 4 and expected to see some improvements in the Web 4 SuperPreview application. The one main function I was expecting in this release is the ability to enter data and click on links, so that all the pages of a site can be assessed. There are many use cases where this functionality is needed, and quite a few people were vocal about it when MS first released the application:

    1) Where you have to log in to a site to access either all of the content or some of the content on the site.
    2) Where you have to enter data in a certain order and cannot go to the next page until the previous page's data is filled out (payment process, storefront, etc.).
    3) Where you just want to make sure things are displayed correctly based on data entered (validation messages, etc.).
    4) Where you need to make sure the links go to the right page in all the different browsers. I have seen scenarios where links worked fine in all but one browser, or where for some reason the text showed on screen but was not a clickable link.

    IMO this application is a great idea, but until MS fixes the above issues and adds the functionality described, SuperPreview is worthless unless you need to test a totally static site that requires no user input at all to get access to the content. There is no reason this feature should not have been in this release, and it should have been a priority to make sure it was. Let me know how you feel about the new version of the Web 4 SuperPreview application. Did MS really miss the target by not adding this functionality, or am I making it out to be a bigger deal than it really is? If you are actively using SuperPreview, please post how you are using it and the type of sites you are using it on.


  • NPS EAP authentication failing after Windows Update

    - by sqlreader
    I have a Windows 2008 Std server running NPS. After applying the latest round of updates (including Root Certificates for April 2012, KB931125; see http://support.microsoft.com/kb/933430/), EAP authentication is failing because the message is reported as malformed. Sample error (Security/Event ID 6273), truncated for brevity:

        Authentication Details:
            Proxy Policy Name:          Use Windows authentication for all users
            Network Policy Name:        Wireless Access
            Authentication Provider:    Windows
            Authentication Server:      nps-host.corp.contoso.com
            Authentication Type:        PEAP
            EAP Type:                   -
            Account Session Identifier: -
            Reason Code:                266
            Reason:                     The message received was unexpected or badly formatted.

    The NPS policy (Wireless Access) is configured accordingly (under Constraints/Authentication methods):

        EAP Types:
            Microsoft: Protected EAP (PEAP) - with a valid certificate from ADCS
            Microsoft: Secured password (EAP-MSCHAP v2)
        Less secure authentication methods:
            Microsoft Encrypted Authentication version 2 (MS-CHAP-v2)
                User can change password after it has expired
            Microsoft Encrypted Authentication (MS-CHAP)
                User can change password after it has expired

    We've tested a different RADIUS server without the aforementioned patch, and also removed EAP as an authentication type, and experienced success. Has anyone else experienced this issue?


  • Running emacs in GNU Screen overrides .emacs settings for [home] key binding in FreeBSD 8.2

    - by javanix
    If I use the following .emacs file, I am able to go to the beginning/end of the current line using the Home/End keys, as I would expect:

        (keyboard-translate ?\C-h ?\C-?)
        (add-to-list 'load-path "/home/sam/programs/go/go/misc/emacs/" t)
        (require 'go-mode-load)
        (global-set-key [kp-home] 'beginning-of-line) ; [Home]
        (global-set-key [home] 'beginning-of-line)    ; [Home]
        (global-set-key [kp-end] 'end-of-line)        ; [End]
        (global-set-key [end] 'end-of-line)           ; [End]

    However, if I open up a screen session it does not behave like this (the [home] key still brings me to the beginning of the buffer for some reason). Here is my .screenrc file, if anyone can spot anything funky in there:

        term xterm
        defutf8 on
        defflow off
        startup_message off

        # terminfo and termcap for nice 256 color terminal
        # allow bold colors - necessary for some reason
        attrcolor b ".I"

        # tell screen how to set colors. AB = background, AF = foreground
        termcapinfo xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'

        # use bash as the default login shell
        defshell -bash
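
    A plausible explanation (an assumption, not verified on the poster's setup): inside screen the terminal type changes, so Home can arrive as a raw escape sequence (commonly \e[1~) that terminal Emacs never decodes into [home], and the key falls through to the default beginning-of-buffer binding. One way to test this is to teach Emacs the sequence directly:

        ;; Only relevant in a terminal frame; inside screen, Home/End often
        ;; arrive as \e[1~ / \e[4~ rather than the xterm sequences.
        (unless (display-graphic-p)
          (define-key input-decode-map "\e[1~" [home])
          (define-key input-decode-map "\e[4~" [end]))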


  • Why is this PHP loop rendering every row twice?

    - by Christopher
    I'm working on a real Franken-site here, not of my own design. There's a rudimentary CMS, and one of the pages shows customer records from a MySQL DB. For some reason it has no problems picking up the data from the DB (there are no duplicate records), but it renders each row twice. The page PHP is viewable at http://christopher.pastebin.com/DQkjjG3s (I attempted to include it in this post but it was horribly mangled; I think it's important to have it all in context). I'm not the world's best PHP expert, but I think I can see an error in a for loop when there is one... and everything looks OK to me. You'll notice that the customer name is clickable; clicking takes you to another page where you can view their full info as held in the DB, and for both copies of a row the customer ID is identical, while manually checking the DB shows there are no duplicate entries. The code is definitely rendering each row twice, but for what reason I have no idea. All pointers/advice appreciated.
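
    Without the pastebin source this is guesswork, but one classic way to get every row exactly twice, with no duplicates in the table itself, is a JOIN that matches two rows in a related table. A sketch of the pattern (table names are hypothetical) and the usual quick check:

        -- If each customer matches two rows here (say, two phone numbers),
        -- every customer appears twice in the result set:
        SELECT c.id, c.name
        FROM customers c
        JOIN contacts ct ON ct.customer_id = c.id;

        -- Quick check: run the page's query directly in the MySQL client.
        -- If it already returns 2N rows, the bug is the query, not the loop;
        -- SELECT DISTINCT or a tighter JOIN condition is the usual fix.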


  • User Profile cannot be loaded - Windows 7

    - by Ryan
    After uninstalling an HP Vector mouse driver and rebooting, when Windows tries to auto-log me in I get an error message saying the following: "The User Profile Service failed the logon. User profile cannot be loaded." Since it is the only account on this PC, I cannot log into another account instead. I rebooted the machine several times before going into Safe Mode with Networking. For some reason, I cannot create a new account while in Safe Mode (I think it has to do with UAC; nothing involving UAC is clickable). Thus, I am stuck: I cannot get into my account, nor can I create a new one to copy files over to. Any ideas? Thanks in advance! Ryan. EDIT: System Restore was, for some reason, turned off. Thus, I cannot restore to a working point.
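
    One widely documented cause of this exact message is a corrupted entry under the ProfileList registry key, where the profile's SID key has been renamed with a .bak suffix (Microsoft KB947215 covers the repair; whether it applies here is an assumption). From Safe Mode you can check quickly:

        rem List profile entries; the classic symptom is the account's SID
        rem appearing twice, once with and once without a ".bak" suffix.
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"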


  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the web server. Google recommends:

        Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: "What is the reason (or reasons) why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are under 860 bytes." Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain, and (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Does the packet-size argument really settle the 860-byte limit? And why would high-traffic sites push this down to the 150-byte limit? Just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
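
    For comparison, on a self-hosted origin this threshold is a one-line knob; with nginx, for instance, Google's lower bound would map to something like the following (nginx shown purely as an illustration, since the poster's traffic goes through Akamai):

        # Compress responses, but skip anything below 150 bytes,
        # where gzip overhead can exceed the savings.
        gzip            on;
        gzip_min_length 150;
        gzip_types      text/plain text/css application/json application/javascript;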


  • Ubuntu One file sync error: SSL Handshake

    - by Jay Ó Broin
    Ubuntu One repeatedly tries to sync my files but keeps disconnecting before anything is uploaded. Here are some of the messages from syncdaemon.log:

        2012-01-08 12:12:34,068 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection started to host fs-2.ubuntuone.com, port 443.
        2012-01-08 12:12:34,256 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection made.
        2012-01-08 12:12:34,257 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection made.
        2012-01-08 12:13:08,832 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection lost, reason: [Failure instance: Traceback (failure with no frames): <class 'OpenSSL.SSL.Error'>: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]].
        2012-01-08 12:13:08,833 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' failed with the error: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]
        2012-01-08 12:13:08,844 - ubuntuone.SyncDaemon.ActionQueue - WARNING - Connection lost: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]
        2012-01-08 12:13:38,550 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'WAITING' (queues WORKING connection 'With User With Network')>; queue: 1378; hash: 0) ----
        2012-01-08 12:15:08,870 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection started to host fs-2.ubuntuone.com, port 443.
        2012-01-08 12:15:09,033 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection made.
        2012-01-08 12:15:09,034 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection made.
        2012-01-08 12:15:33,676 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection lost, reason: [Failure instance: Traceback (failure with no frames): <class 'OpenSSL.SSL.Error'>: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]].
        2012-01-08 12:15:33,677 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' failed with the error: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]
        2012-01-08 12:15:33,692 - ubuntuone.SyncDaemon.ActionQueue - WARNING - Connection lost: [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]
        2012-01-08 12:15:38,551 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'WAITING' (queues WORKING connection 'With User With Network')>; queue: 1378; hash: 0) ----

    I'm using Ubuntu 11.10.


  • Charging by the hour/project

    - by thesam18888
    This is related to a question I asked earlier: How to end a relationship with a client without pissing them off? What are your obligations when charging by the hour vs. charging by the project? If you agree to take on a project, give a rough estimate that it might take 10 days, and charge £X per hour, are you obligated to work for free after those 10 days are up and you have still not managed to complete the project due to unanticipated issues? What if you have delivered the project but bugs are found: should you fix those bugs for free once the 10 days are up, or should you charge your client? Also, for the above project, what should happen if you start on the project but after the 10 days have to give up, for whatever reason? I realise that this does nothing to build your reputation and relationship with the client, but are you obligated to pay back the money paid to you, or do you just deliver the half/nearly completed source code and help them find someone else to complete it? I am asking the above questions because I am very new to freelancing and would like to know how to deal with these situations if they ever crop up. Thanks!


  • Cyrus on CentOS with sasl / pam / ldap

    - by Oscar
    SASL/PAM/LDAP is driving me crazy... that's what I read a lot when googling for problems in this area, and what I experience myself :-S I'm trying to get Cyrus imap working for virtual hosting on CentOS with this authorisation backend, and I really don't know what's happening. In saslauthd I configured the LDAP search filter to use, but it looks like PAM completely ignores it. Here's what I do for testing (I've done more tests, all with similar results):

        [root@testserv ~]# imtest -u [email protected] -a [email protected]
        WARNING: no hostname supplied, assuming localhost
        S: * OK [CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS] testserv. Cyrus IMAP4 v2.3.7-Invoca-RPM-2.3.7-7.el5_6.4 server ready
        C: C01 CAPABILITY
        S: * CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS ACL RIGHTS=kxte QUOTA MAILBOX-REFERRALS NAMESPACE UIDPLUS NO_ATOMIC_RENAME UNSELECT CHILDREN MULTIAPPEND BINARY SORT SORT=MODSEQ THREAD=ORDEREDSUBJECT THREAD=REFERENCES ANNOTATEMORE CATENATE CONDSTORE IDLE LISTEXT LIST-SUBSCRIBED X-NETSCAPE URLAUTH
        S: C01 OK Completed
        Please enter your password:
        C: L01 LOGIN [email protected] {6}
        S: + go ahead
        C: <omitted>
        S: L01 NO Login failed: authentication failure
        Authentication failed. generic failure
        Security strength factor: 0
        C: Q01 LOGOUT
        * BYE LOGOUT received
        Q01 OK Completed
        Connection closed.

    The LDAP entry does exist (and so does the mailbox in Cyrus):

        [root@testserv ~]# ldapsearch -WxD cn=Manager,o=mydomain,c=com [email protected]
        Enter LDAP Password:
        # extended LDIF
        #
        # LDAPv3
        # base <> with scope subtree
        # filter: [email protected]
        # requesting: ALL
        #

        # myuser, accounts, testserv.mydomain.com, mydomain, com
        dn: uid=myuser,ou=accounts,dc=testserv.mydomain.com,o=mydomain,c=com
        objectClass: top
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: shadowAccount
        uidNumber: 16
        uid: myuser
        gidNumber: 5
        givenName: My
        sn: Name
        mail: [email protected]
        cn: My Name
        userPassword:: dYN5ebB0fXhNRn1pZllhRnJX7Uk=
        shadowLastChange: 15176
        homeDirectory: /dev/null

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    This is what I get in /var/log/messages:

        Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied
        Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error]

    ...and in /var/adm/auth.log:

        Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied
        Aug 2 04:00:11 testserv cyrus/imap[12514]: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb
        Aug 2 04:00:19 testserv saslauthd[5926]: DEBUG: auth_pam: pam_authenticate failed: User not known to the underlying authentication module
        Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error]

    (AFAIK I can ignore the auxprop msg.) And in /var/log/slapd.log:

        Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 ACCEPT from IP=127.0.0.1:51403 (IP=0.0.0.0:389)
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 BIND dn="" method=128
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 RESULT tag=97 err=0 text=
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SRCH base="o=mydomain,c=com" scope=2 deref=0 filter="([email protected])"
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text=
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=2 UNBIND
        Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 closed

    These are the settings in /etc/imapd.conf:

        sasl_mech_list: PLAIN LOGIN
        sasl_pwcheck_method: saslauthd
        ## sasl_auxprop_plugin: sasldb
        sasl_auto_transition: no

    ...and my SASL config:

        [root@testserv ~]# cat /etc/sysconfig/saslauthd
        # Directory in which to place saslauthd's listening socket, pid file, and so
        # on. This directory must already exist.
        SOCKETDIR=/var/run/saslauthd

        # Mechanism to use when checking passwords. Run "saslauthd -v" to get a list
        # of which mechanism your installation was compiled with the ablity to use.
        MECH=pam

        # Additional flags to pass to saslauthd on the command line. See saslauthd(8)
        # for the list of accepted flags.
        FLAGS="-c -r -O /etc/saslauthd.conf"

        [root@testserv ~]# cat /etc/saslauthd.conf
        ldap_servers: ldap://127.0.0.1/
        ldap_search_base: dc=%d,o=mydomain,c=com
        ldap_auth_method: bind
        #ldap_filter: (|(uid=%u)((&(mail=%u@%d)(accountStatus=active)))
        ldap_filter: (&(mail=%u@%d)(accountStatus=active))
        ldap_debug: 1
        ldap_version: 3

    The accountStatus=active attribute is not in LDAP yet, but that doesn't make a difference, since I don't see it in the filter at all... that's not the reason for the failure. The weird thing is, I do get an error when I rename or remove /etc/saslauthd.conf, but when the file exists it seems happily ignored... The filter in slapd.log seems to be taken from /etc/ldap.conf instead. Apart from some timers, that only contains:

        host 127.0.0.1
        base o=mydomain,c=com
        pam_login_attribute mail

    Commenting out the pam_login_attribute line results in this filter in slapd.log: filter="([email protected])". /etc/pam.d/imap looks like this:

        auth     required pam_ldap.so debug
        account  required pam_ldap.so debug
        #auth    sufficient pam_unix.so likeauth nullok
        #auth    sufficient pam_ldap.so use_first_pass
        #auth    required pam_deny.so
        #account sufficient pam_unix.so
        #account sufficient pam_ldap.so

    The commented-out entries are there because I don't have the cyrus admin user in LDAP; that's a Linux user. That works fine when uncommented, but I still need to play around with that a little, and first I want to get imap working. Finally, nsswitch:

        [root@testserv ~]# cat /etc/nsswitch.conf
        #
        # /etc/nsswitch.conf
        #
        # An example Name Service Switch config file. This file should be
        # sorted with the most-used services at the beginning.
        #
        # The entry '[NOTFOUND=return]' means that the search for an
        # entry should stop if the search in the previous entry turned
        # up nothing. Note that if the search failed due to some other reason
        # (like no NIS server responding) then the search continues with the
        # next entry.
        #
        # Legal entries are:
        #
        #   nisplus or nis+     Use NIS+ (NIS version 3)
        #   nis or yp           Use NIS (NIS version 2), also called YP
        #   dns                 Use DNS (Domain Name Service)
        #   files               Use the local files
        #   db                  Use the local database (.db) files
        #   compat              Use NIS on compat mode
        #   hesiod              Use Hesiod for user lookups
        #   [NOTFOUND=return]   Stop searching if not found so far
        #
        # To use db, put the "db" in front of "files" for entries you want to be
        # looked up first in the databases
        #
        # Example:
        #passwd:    db files nisplus nis
        #shadow:    db files nisplus nis
        #group:     db files nisplus nis

        passwd:     compat ldap
        group:      compat ldap
        shadow:     compat ldap

        hosts:      files dns
        bootparams: nisplus [NOTFOUND=return] files
        ethers:     files
        netmasks:   files
        networks:   files
        protocols:  files
        rpc:        files
        services:   files
        netgroup:   nisplus
        publickey:  nisplus
        automount:  files nisplus
        aliases:    files nisplus

    Any info on where to start looking will be greatly appreciated! Thanks in advance.
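
    One standard way to narrow this down (a general debugging step, not taken from the thread): test saslauthd directly, bypassing Cyrus, so you can tell whether the failure is in the imapd-to-saslauthd hop or in saslauthd's PAM/LDAP lookup:

        # Talks straight to the saslauthd socket; -s imap selects the PAM
        # service name, so /etc/pam.d/imap is exercised as well.
        testsaslauthd -u [email protected] -p 'secret' -s imap

        # If this fails too, the problem is below Cyrus (PAM or LDAP);
        # if it succeeds, look at the imapd.conf/SASL glue instead.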


  • Ubuntu whois package and request limits

    - by Sam Hammamy
    I'm writing a django app with a form that accepts an IP and does a whois lookup on the discovered domain names. I've found the Ubuntu package whois, which I plan to call from a Python subprocess, reading the stdout into a StringIO and then parsing it for things like Registrar, Name Servers, etc. My question: it seems that there are many paid whois services, which suggests there must be a reason why people don't just use this Ubuntu package. I'm wondering if there's a limit on the number of requests a single IP can make to the package's whois server? I will probably be making 250 domain lookups per IP, or maybe more. Also, I've found that some domains aren't searchable:

        qmul.ac.uk is searchable
        kat.ph is not searchable
        ahram.org.eg is not searchable

    Any particular reason for that?
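
    Worth noting: the Ubuntu whois package is only a thin client; any rate limiting (and whether a TLD answers at all) is the policy of each registry's whois server, which is part of why the paid APIs exist. A minimal sketch of the subprocess approach described in the post (the field-name parsing is illustrative only, since output formats vary wildly between registries):

        import subprocess

        def whois_lookup(domain):
            # Shells out to the Ubuntu 'whois' client and returns raw text.
            proc = subprocess.Popen(['whois', domain],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, _ = proc.communicate()
            return out.decode('utf-8', errors='replace')

        text = whois_lookup('qmul.ac.uk')
        for line in text.splitlines():
            # Hypothetical field names; each registry formats these differently.
            if line.lower().startswith(('registrar:', 'name server')):
                print(line.strip())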


  • Cable Internet connectivity Problem?

    - by LightHeaded
    I just got cable Internet from Time Warner/EarthLink. I tried everything the Time Warner and EarthLink techs told me to do, but nothing seems to work: I still can't connect to the Internet. They think the problem is the IPv4 address, since it begins with 169. I did everything they told me to, 8 times, but they both give me the runaround and tell me it's the other's fault. How can I fix this once and for all? I have no router; the cable modem connects directly to my machine. Windows Vista, cable broadband.
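
    For context: an address starting with 169.254 is Windows' APIPA fallback, which means the PC never got a DHCP lease from the modem. The standard first checks, in case they weren't among the steps already tried: power-cycle the modem with the PC attached, then from an elevated command prompt:

        ipconfig /release
        ipconfig /renew

        rem If the address still starts with 169.254, no DHCP answer is
        rem arriving - check the Ethernet cable and NIC before blaming the ISP.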


  • How to use supervisord to run a PHP script as a daemon?

    - by Alasdair
    I need to have 8 copies of the same PHP script running continuously in the background on a server (as daemons), and each one needs to be restarted automatically if it exits for any reason. I've been advised to use supervisord for this, but I don't understand its documentation at all; it seems very complicated to me. I also want the 8 copies to be started initially at 2-minute intervals (2 minutes between each launch), but after that all 8 copies of the PHP script should continue running on the server forever (and restart if any exit for any reason). Could someone please explain how to do this with supervisord, or suggest any other easy way of doing it? I'm on CentOS 6. Thank you!
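
    A minimal supervisord sketch for this case (program name and paths are placeholders). numprocs spawns the 8 copies and autorestart brings any exited copy back; note that supervisord has no built-in launch stagger, so the 2-minute spacing would have to be handled inside the script (e.g. an initial sleep based on the process number) or with separate program blocks:

        [program:worker]
        ; command and paths below are placeholders
        command=/usr/bin/php /var/www/worker.php
        process_name=%(program_name)s_%(process_num)02d
        numprocs=8
        autostart=true
        ; restart whenever a copy exits, whatever the exit code
        autorestart=true
        ; a copy must stay up this long to count as started (not a stagger)
        startsecs=5
        stdout_logfile=/var/log/worker_%(process_num)02d.log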


  • Daylight Saving Time Visualized

    - by Jason Fitzpatrick
    When you map out the Daylight Saving Time-adjusted sunrise and sunset times over the course of the year, an interesting pattern emerges. Chart designer Germanium writes:

        I tried to come up with the reason for the daylight saving time change by just looking at the data for sunset and sunrise times. The figure represents sunset and sunrise times throughout the year. It shows that the daylight saving time change, marked by the lines (DLS), is keeping the sunrise time pretty much constant throughout the whole year, while making the sunset time change a lot. The spread of sunrise times as measured by the standard deviation is 42 minutes, which means that the sunrise time changes within that range the whole year, while the standard deviation for the sunset times is 1:30 hours. Whatever the argument for doing this is, it's pretty clear that the reason is to keep the sunrise time constant.

    You can read more about the controversial history of Daylight Saving Time here. Daylight Saving Time Explained [via Cool Infographics]


  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low-level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self-contained HTTP output. An ASP.NET handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline, writing the content into the Response.OutputStream and separately sending the HTTP headers in the Response.Headers collection. The headers turned out to be the problem, specifically HTTP cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: basically the HTTP response from the backend app returns a full set of HTTP headers plus the content. The ASP.NET handler reads the headers one at a time and then dumps them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the HTTP handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically, I have a couple of headers including a Set-Cookie header and some output that gets written into the Response object:

        using System.Web;

        namespace wwThreads
        {
            public class Handler : IHttpHandler
            {
                /* NOTE:
                 *
                 * Run as a web.config set handler (see entry below)
                 *
                 * Best way is to look at the HTTP headers in Fiddler
                 * or Chrome/FireBug/IE tools and look for the
                 * WWHTREADSID cookie in the outgoing Response headers
                 * ( If the cookie is not there you see the problem! )
                 */
                public void ProcessRequest(HttpContext context)
                {
                    HttpRequest request = context.Request;
                    HttpResponse response = context.Response;

                    // If ClearHeaders is used the Set-Cookie header gets removed!
                    // If commented, the header is sent...
                    response.ClearHeaders();
                    response.ClearContent();

                    // Demonstrate that other headers make it
                    response.AppendHeader("RequestId", "asdasdasd");

                    // This cookie gets removed when ClearHeaders above is called.
                    // When ClearHeaders is omitted above, the cookie renders.
                    response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

                    // *** This always works, even when the explicit
                    //     Set-Cookie above fails and ClearHeaders is called
                    //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

                    response.Write(@"Output was created.<hr/>
                        Check output with Fiddler or HTTP Proxy to see whether cookie was sent.");
                }

                public bool IsReusable
                {
                    get { return false; }
                }
            }
        }

    In order to see the problem behavior, this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with:

        <add name=".ck_handler"
             path="handler.ck"
             verb="*"
             type="wwThreads.Handler"
             preCondition="integratedMode" />

    Note: Oddly enough, this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller.

    What's the problem exactly?

    The code above simulates the more complex code in my live handler, which picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader. One of the headers in my app can be one or more Set-Cookie headers, and I found that the Set-Cookie headers were not making it into the Response headers output. Here's the Chrome HTTP inspector trace: notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders(), the cookie header shows up just fine. As you might expect, it took a while to track this down. At first I thought my backend was not sending the headers, but after closer checks I found that the headers were indeed set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    which in my live code is more dynamic (i.e. AppendHeader(token[0], token[1])) as it parses through the headers.

    Bizzaro Land: Response.ClearHeaders() causes the cookie to get stripped

    Now, here is where it really gets bizarre. The problem occurs only if: Response.ClearHeaders() was called before headers are added, and it only occurs in HTTP handlers declared in web.config. Clearly this is an edge of an edge case, but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above, if you remove the call to ClearHeaders(), the cookie gets set! Add it back in and the cookie is not there. If I run the above code in an ASHX handler, it works. If I paste the same code (with a Response.End()) into an ASPX page or MVC controller, it all works. Only in the HttpHandler configured through web.config does it fail! Cue the Twilight Zone music.

    Workarounds

    As is often the case, the fix for this, once you know the problem, is not too difficult. The difficulty lies in tracking down inconsistencies like this. Luckily there are a few simple workarounds for the cookie issue.

    Don't use AppendHeader for cookies

    The easiest and most obvious solution to this problem is simply not to use Response.AppendHeader() to set cookies. Duh! Under normal circumstances in application-level code there's rarely a reason to write out a cookie like this:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    but rather create the cookie using the Response.Cookies collection:

        response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

    Unfortunately, in my case, where I dynamically read headers from the original output and then dynamically write header key/value pairs back programmatically into the Response.Headers collection, I don't actually look at each header specifically, so in my case the cookie is just another header. My first thought was to simply trap for the Set-Cookie header and then parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options, this is not exactly trivial; plus I don't really want to fuck around with cookie values, which can be notoriously brittle.

    Don't use Response.ClearHeaders()

    The real mystery in all this is why calling Response.ClearHeaders() causes a cookie value later written with Response.AppendHeader() to fail. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on, but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low-level interface directly into IIS, so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that. In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent calls were mainly for safety, and after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to clear them out manually:

        private void RemoveHeaders(HttpResponse response)
        {
            List<string> headers = new List<string>();
            foreach (string header in response.Headers)
            {
                headers.Add(header);
            }
            foreach (string header in headers)
            {
                response.Headers.Remove(header);
            }
            response.Cookies.Clear();
        }

    Now I can replace the call to Response.ClearHeaders() and I don't get the funky side effects from Response.ClearHeaders().

    Summary

    I realize this is a total edge case, as this occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher-level ASP.NET frameworks or even in ASHX handlers - only in web.config-defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds should you run into this, although I bet when you do run into it, it'll likely take a bit of time to find the problem, or even this post in a search, because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution, I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, an extreme edge case, but I'm writing this up here as documentation for myself and possibly some others who might have run into this.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ASP.NET, IIS7


  • InvalidProgramException Running Unit Test (Bug Closed)

    - by Anthony Trudeau
    In a previous post I reported an InvalidProgramException that occurs in a certain circumstance with unit tests involving accessors on a private generic method. It turns out that Bug #635093, reported through Microsoft Connect, will not be fixed. The reason cited is that private accessors have been discontinued. And why have private accessors been discontinued? Because they don't have time, according to the blog post titled "Generation of Private Accessors (Publicize) and Code Generation for Visual Studio 2010". In my opinion, it's a piss-poor decision to discontinue support for a feature that they're still using within automatically generated unit tests against private classes and methods. But I think what is worse is the lack of guidance in the aforementioned blog post. Their advice? Use PrivateObject to help, but develop your own framework. At the end of the day, what Microsoft is saying is: "I know you spent a lot of money for this product. I know that you don't have time to develop a framework to deal with this. We don't have time, and that is all that's important."
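
    For anyone landing here, the PrivateObject route looks roughly like this (MSTest; the class under test and its method name are placeholders). It reflects over the instance at run time, which is why it keeps working where the generated accessor classes broke:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestMethod]
        public void InvokesPrivateMethod()
        {
            var target = new MyService();             // hypothetical class under test
            var accessor = new PrivateObject(target);

            // Invokes the private method by name via reflection;
            // no generated accessor class is involved.
            object result = accessor.Invoke("Parse", "42");

            Assert.AreEqual(42, (int)result);
        }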


  • MSSQL instance shuts down

    - by citronas
    I'm currently developing a new ASP.NET project hosted on Windows Server 2008 R2 with an MSSQL 2008 Express database. I have three SQL instances running (for different purposes), each of which currently contains a single database. These instances tend to shut down after a few days, for no apparent reason. There may be little or no traffic to them, because there can be several days in a row when I can't develop. It has now happened several times that one or two of the three instances just shut down, so that I can't access the database without manually starting the instance again. I can't seem to find an event log entry for the shutdown, most likely because I only just enabled logging (why is the default setting off?). So the questions are:

    * Why does a SQL instance shut down? (Is there such a thing as a "shut down instance after 3 days of inactivity" setting?)
    * How can I make sure the instances are running 24/7?
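
    Two hedged things to check (assumptions, since the logs aren't available): make sure each instance's Windows service is set to start automatically rather than manually, and look at AUTO_CLOSE, which defaults to ON for databases created on Express editions and releases the database after the last connection closes, something that can look a lot like the instance "shutting down":

        -- See which databases have AUTO_CLOSE enabled:
        SELECT name, is_auto_close_on FROM sys.databases;

        -- Disable it per database (the name is a placeholder):
        ALTER DATABASE MyAppDb SET AUTO_CLOSE OFF;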

