Search Results

Search found 18954 results on 759 pages for 'connection reset'.


  • Why does my Jabber bot only work if I'm debugging my Perl script?

    - by TheGNUGuy
    I am trying to write a Jabber bot from scratch, and my script is acting funny. I was originally developing the bot on a remote CentOS box, but I have switched to a local Win7 machine. Right now I'm using ActiveState Perl and Eclipse with the Perl plugin to run and debug the script. The odd behavior depends on how I launch it: if I run the script under the debugger it works fine, meaning I can send messages to the bot and it can send messages to me. However, when I just execute the script normally, the bot sends the successful-connection message, then disconnects from my Jabber server and the script ends. I'm a novice when it comes to Perl and I can't figure out what I'm doing wrong. My guess is it has something to do with the subroutines and sending the bot's presence. (I know for sure it involves sending the bot's presence, because if the presence code is removed the script behaves as expected, except that the bot doesn't appear to be online.) If anyone can help me with this that would be great. I originally had everything in one file but separated it into several while trying to track down the problem. Here are the pastebin links to my source code: jabberBot.pl: http://pastebin.com/cVifv0mm chatRoutine.pm: http://pastebin.com/JXmMT7av trimSpaces.pm: http://pastebin.com/SkeuWtu1 Thanks again for any help!
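    The symptom (fine under the debugger, exits immediately when run normally) usually means the script never enters a receive/process loop after sending its presence, so it simply falls off the end and disconnects. Without seeing the pastebin code this is only a guess; here is a hedged outline of the usual Net::Jabber shape, with host, credentials and callback names as placeholders:

        use strict;
        use warnings;
        use Net::Jabber;

        my $client = Net::Jabber::Client->new();
        $client->SetCallBacks(message => \&handle_message);   # your chat routine goes here

        $client->Connect(hostname => 'your.jabber.server')    # placeholder host
            or die "Cannot connect: $!";
        my @auth = $client->AuthSend(username => 'bot', password => 'secret', resource => 'bot');
        die "Auth failed: $auth[1]" unless $auth[0] eq 'ok';

        $client->PresenceSend();                               # appear online

        # Keep processing incoming stanzas forever; without a loop like this the
        # script ends right after PresenceSend and the server sees a disconnect.
        while (defined $client->Process(1)) { }

        sub handle_message {
            my ($sid, $msg) = @_;
            $client->MessageSend(to => $msg->GetFrom(), body => 'pong');
        }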

  • How to store/model users, Facebook users, LinkedIn users, etc. with ActiveRecord?

    - by crankharder
    My app has three kinds of users: "normal" users, who come through a typical signup page; Facebook (FB) users, who come from Facebook Connect; and "FB-normal" users, who can log in with both email/password and FB Connect. Further, there's a slew of other OpenID-ish login methods (I don't think OpenID itself will be acceptable, since it doesn't link up the accounts and allow the third-party-specific features: posting to Twitter, adding a FB post, etc.). So, how do I model this? Right now we have a User class with #facebook_user? defined -- but it gets messy with the "FB-normal" users, plus all the validations become very tricky and hard to interpret. Also, there are methods like #deliver_password_reset! which make no sense in the context of Facebook-only users. (This is lame.) I've thought about STI (User::Facebook, User::Normal, User::FBNormal, etc.). This makes validations super slick, but it doesn't scale to other connection types and all the permutations between them... User::FacebookLinkedInNormal (wtf?). Doing this with a bunch of modules I think would suck a lot. Any other ideas?
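    A pattern that often avoids the STI permutation problem is to keep a single User and move each login method into an associated table, one row per provider. A minimal sketch; the model and column names here are illustrative, not from the question:

        class User < ActiveRecord::Base
          has_many :authentications, :dependent => :destroy

          # Password-related rules only apply when the user actually has a password.
          validates_presence_of :email, :if => :password_required?

          def password_required?
            authentications.empty? || !password.blank?
          end

          def facebook_authentication
            authentications.find_by_provider('facebook')
          end
        end

        class Authentication < ActiveRecord::Base
          # columns: user_id, provider ('facebook', 'linkedin', ...), uid, token
          belongs_to :user
          validates_uniqueness_of :uid, :scope => :provider
        end

    With this shape, #deliver_password_reset! and the password validations only concern users who actually have a password, and adding a new provider is a new row rather than a new class.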

  • Strange Sql Server 2005 behavior

    - by Justin C
    Background: I have a site built in ASP.NET with SQL Server 2005 as its database. The site is the only site on a Windows Server 2003 box sitting in my client's server room. The client is a local school district, so for data-security reasons there is no remote desktop access and no remote SQL Server connection; if I have to service the database I have to be at the terminal. I do have FTP access to update ASP code. Problem: I was contacted yesterday about an issue with the system. When I looked into it, it seems a bug that I had solved nearly a year ago had returned. I have a stored procedure that used to take an int as a parameter, but a year ago we changed the structure of the system and updated the stored procedure to take an nvarchar(10). The stored procedure somehow changed back to taking an int instead of an nvarchar. There is an external hard drive connected to the server that copies data periodically and has the ability to restore the server in case of failure. I would have assumed that somehow an older version of the database had been restored, but data that I know was inserted 7 days and 1 day before the bug occurred is still in the database. Question: Is there any way that the structure of a SQL Server 2005 database can revert to a previous version, or be restored to a previous version, without touching the actual data? No one else should have access to the server, so I'm going a little insane trying to figure out how this even happened. Any ideas?
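    One way to narrow this down on SQL Server 2005 is to check when the procedure object was last altered and compare that with the backup and deployment history. A small diagnostic sketch; the procedure and schema names are placeholders:

        -- When was the stored procedure created or last altered?
        SELECT name, create_date, modify_date
        FROM sys.procedures
        WHERE name = 'MyProcedure';   -- replace with the real procedure name

        -- What definition is the server actually running right now?
        SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.MyProcedure'));

    If modify_date is recent, something re-created the procedure (for example an old deployment script being re-run), which fits the symptoms better than a restore that somehow preserved newer data.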

  • MAPI find the contacts and calendar folder

    - by Rogier21
    In my Outlook I have 1 Exchange connection and 2 Personal Folders. I want to fetch ALL items from the calendars and contacts, so I use:

        /* Create Outlook application */
        Outlook.Application oApp = new Outlook.Application();
        Outlook.NameSpace oNS = oApp.GetNamespace("mapi");
        oNS.Logon(Missing.Value, Missing.Value, true, true);

        /* Loop through all the folders */
        foreach (Outlook.MAPIFolder oFolder in oNS.Folders)
        {
            if (oFolder.Name == "Public Folders") { break; }

            /* Get calendar items */
            //Outlook.MAPIFolder oCalendar = oNS.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar);
            Outlook.MAPIFolder oCalendar = oFolder.Folders[5];
            Outlook.Items oCalendarItems = oCalendar.Items;

            //Outlook.MAPIFolder oContacts = oNS.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderContacts);
            Outlook.MAPIFolder oContacts = oFolder.Folders[7];
            Outlook.Items oContactItems = oContacts.Items;
        }

    But this does not work: the calendar is not always at index 5 in oFolder.Folders; sometimes it's a different value. I cannot find the items by name with oFolder.Folders["Calendar"], because in Dutch the folder will be named "Agenda". Usually I use Outlook.MAPIFolder oCalendar = oNS.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar); but then I only get the default calendar. How can I get the other calendars?
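    Since both the index and the localized name vary, one language-independent approach is to walk the folder tree and pick folders by their default item type instead. A hedged sketch, not taken from the question's code:

        // using Outlook = Microsoft.Office.Interop.Outlook;
        // Recursively collect every folder whose default item type is Appointment
        // (calendars) or Contact, regardless of its localized name or position.
        static void CollectFolders(Outlook.Folders folders,
                                   List<Outlook.MAPIFolder> calendars,
                                   List<Outlook.MAPIFolder> contacts)
        {
            foreach (Outlook.MAPIFolder folder in folders)
            {
                if (folder.DefaultItemType == Outlook.OlItemType.olAppointmentItem)
                    calendars.Add(folder);
                else if (folder.DefaultItemType == Outlook.OlItemType.olContactItem)
                    contacts.Add(folder);

                CollectFolders(folder.Folders, calendars, contacts);   // recurse into subfolders
            }
        }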

  • WCF with SSL- not finding localhost

    - by SteveCav
    Hi guys, I'm trying to get WCF to use SSL with ANYTHING for FIVE DAYS now. I've gone through countless walkthroughs, generated more certificates than a mail order diploma company, even tried hot fixes. After working with MS dev tools since VB1, I am now considering flipping burgers as a career option. WCF, as far as I can see, is a complete lemon. Anyway, to get to my actual question: If I run through this walkthrough: http://msdn.microsoft.com/en-us/library/ff648840.aspx I get to step 11 (adding the service reference) and get "There was an error downloading metadata from the address. Please verify that you have entered a valid address". Details of the error gives: There was an error downloading 'https://localhost/SSL6/Service.svc'. Unable to connect to the remote server No connection could be made because the target machine actively refused it 127.0.0.1:443 I'm using VS2008 on Windows 7 with IIS7. I followed the walkthrough exactly (apart from step 5 which was different on IIS7- I went into "SSL Settings" for the VD), so it shows my config (yes I've used httpsGetEnabled and mexHttpsBinding). Anyone care to save my sanity and job?

  • Using AJAX to display error message Ruby on Rails

    - by bgadoci
    I have built a blog using Ruby on Rails. New to both. I am implementing AJAX pretty effectively until I get to the error handling portion. I allow for comments on posts and do this by rendering a comment partial and remote form in the /views/posts/show.html.erb page. Upon successful save of a comment the show page is updated using views/comments/create.js.rjs and displays a flash notice. I am simply trying to flash a notice when it doesn't save. Searched around and worked this a bit on my own. Can't get it to fly. Here is my code:

    /views/posts/show.html.erb

        <div id="comments">
          <%= render :partial => @post.comments %>
          <div id="notice"><%= flash[:notice] %></div>
        </div>
        <% remote_form_for [@post, Comment.new] do |f| %>
          <p>
            <%= f.label :body, "New Comment" %><br/>
            <%= f.text_area(:body, :class => "textarea") %>
          </p>
          <p>
            <%= f.label :name, "Name" %><br/>
            <%= f.text_field(:name, :class => "textfield") %>
          </p>
          <p>
            <%= f.label :email, "Email" %><br/>
            <%= f.text_field(:email, :class => "textfield") %>
          </p>
          <p><%= f.submit "Add Comment" %></p>
        <% end %>

    /views/comments/_comment.html.erb

        <% div_for comment do %>
          <div id="comment-wrapper">
            <% if admin? %>
              <div id="comment-destroy"><%= link_to_remote "X", :url => [@post, comment], :method => :delete %></div>
            <% end %>
            <%= h(comment.body) %><br/><br/>
            <div class="small">Posted <%= time_ago_in_words(comment.created_at) %> ago by <%= h(comment.name) %>
              <% if admin? %> | <%= h(comment.email) %> <% end %></div>
          </div>
        <% end %>

    /views/comments/create.js.rjs

        page.insert_html :bottom, :comments, :partial => @comment
        page[@comment].visual_effect :highlight
        page[:new_comment].reset
        page.replace_html :notice, flash[:notice]
        flash.discard

    CommentsController#create

        def create
          @post = Post.find(params[:post_id])
          @comment = @post.comments.create!(params[:comment])
          respond_to do |format|
            if @comment.save
              flash[:notice] = "Thanks for adding this comment"
              format.html { redirect_to @post }
              format.js
            else
              flash[:notice] = "Make sure you include your name and a valid email address"
              format.html { redirect_to @post }
              format.js
            end
          end
        end
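    One thing worth noting in the controller above: comments.create! raises on a failed validation, so the else branch can never run. A hedged sketch of the usual build + save shape, with an inline RJS response for the failure case (the message text is taken from the question; everything else about the views stays as-is):

        # Sketch only: build, then save, so validation failures reach the else branch.
        def create
          @post    = Post.find(params[:post_id])
          @comment = @post.comments.build(params[:comment])

          respond_to do |format|
            if @comment.save
              flash[:notice] = "Thanks for adding this comment"
              format.html { redirect_to @post }
              format.js                                  # renders create.js.rjs as before
            else
              message = "Make sure you include your name and a valid email address"
              format.html { flash[:notice] = message; redirect_to @post }
              format.js do
                render :update do |page|                 # inline RJS for the failure case
                  page.replace_html :notice, message
                end
              end
            end
          end
        end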

  • Sample twitter application

    - by Jack
    I have installed WAMP and running PHP scripts on localhost. I have enabled cURL. Here is my code. <?php function updateTwitter($status) { // Twitter login information $username = 'xxxxx'; $password = 'xxxxx'; // The url of the update function $url = 'http://twitter.com/statuses/update.xml'; // Arguments we are posting to Twitter $postargs = 'status='.urlencode($status); // Will store the response we get from Twitter $responseInfo=array(); // Initialize CURL $ch = curl_init($url); // Tell CURL we are doing a POST curl_setopt($ch, CURLOPT_PROXY,"localhost:80"); curl_setopt ($ch, CURLOPT_POST, true); // Give CURL the arguments in the POST curl_setopt ($ch, CURLOPT_POSTFIELDS, $postargs); // Set the username and password in the CURL call curl_setopt($ch, CURLOPT_USERPWD, $username.':'.$password); // Set some cur flags (not too important) curl_setopt($ch, CURLOPT_VERBOSE, 1); curl_setopt($ch, CURLOPT_NOBODY, 0); curl_setopt($ch, CURLOPT_HEADER, 0); curl_setopt($ch, CURLOPT_FOLLOWLOCATION,1); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // execute the CURL call $response = curl_exec($ch); if($response === false) { echo 'Curl error: ' . curl_error($ch); } else { echo 'Operation completed without any errors<br/>'; } // Get information about the response $responseInfo=curl_getinfo($ch); // Close the CURL connection curl_close($ch); // Make sure we received a response from Twitter if(intval($responseInfo['http_code'])==200){ // Display the response from Twitter echo $response; }else{ // Something went wrong echo "Error: " . $responseInfo['http_code']; } curl_close($ch); } updateTwitter("Just finished a sweet tutorial on http://brandontreb.com"); ?> Here's my output Operation completed without any errors Error: 404 Here's what error 404 means 404 Not Found: The URI requested is invalid or the resource requested, such as a user, does not exists. Please help.
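    The 404 here may have less to do with Twitter than with the proxy line: CURLOPT_PROXY routes the request through the local WAMP Apache on port 80, which is not a proxy and answers with its own 404. A minimal sketch of the same call without the proxy; the endpoint and basic-auth scheme are as in the question, though Twitter has since retired them in favour of OAuth:

        $ch = curl_init('http://twitter.com/statuses/update.xml');
        // No CURLOPT_PROXY: talk to twitter.com directly unless you really have a proxy.
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'status=' . urlencode($status));
        curl_setopt($ch, CURLOPT_USERPWD, $username . ':' . $password);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

        $response = curl_exec($ch);
        $info     = curl_getinfo($ch);
        curl_close($ch);          // close once; the original closes the handle twice

        if ($info['http_code'] == 200) {
            echo $response;
        } else {
            echo 'Error: ' . $info['http_code'];
        }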

  • IIS6, ASP.NET MVC 1 and random slowdowns

    - by Mr Snuffle
    I've recently deployed a MVC application to an IIS6 web server. One strange behaviour I've been having is the load times will randomly blow up to 30sec+ and then return to normal. Our tests have shown this occurring on multiple connections at the same time. Once the wait has passed, the site become responsive again. It's completely random when this will occur, but will probably happen about once every 15 minutes or so. My first thought was the application was being restarted by the web server for some reason, but I determined this wasn't the case because the process recycling is set very infrequently, and I placed some logging in the application startup. It's also nothing to do with the database connection. This slowdown happens simply by moving between static pages too. I've watched the database with a SQL profiler, and nothing is hitting it when these slowdowns occur. Finally, I've placed entry and exit logging on my controller actions, the slowdown always happens outside of the controller. The entry and exit time for a controller action is always appropriately fast. Does anyone have any ideas of what could be causing this? I've tried running it locally on IIS7 and I haven't had the issue. I can only think it's something to do with our hosting provider.

  • PHP returns invalid MySQL resource

    - by DeadMG
    $LDATE = '#' . $_REQUEST['LDateDay'] . '/' . $_REQUEST['LDateMonth'] . '/' . $_REQUEST['LDateYear'] . '#';
    $RDATE = '#' . $_REQUEST['RDateDay'] . '/' . $_REQUEST['RDateMonth'] . '/' . $_REQUEST['RDateYear'] . '#';

    include("../../sql.php");
    $myconn2 = mysql_connect(/*removed*/, $username, $password);
    mysql_select_db(/*removed*/, $myconn2);

    $LSQLRequest = "SELECT * FROM flight WHERE DepartureDate = ".$LDATE;
    $LFlights = mysql_query($LSQLRequest, $myconn2);
    $RSQLRequest = "SELECT * FROM flight WHERE DepartureDate = ".$RDATE;
    $RFlights = mysql_query($RSQLRequest, $myconn2);

    Assuming that all the $_REQUESTs are valid numerical values for their appropriate fields in the day/month/year field, how can LFlights and RFlights be invalid? When I polled the whole database I got hundreds of results so I know that the database and connection data is fine, and the field DepartureDate exists too.
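    The #...# delimiters are Access/VBA date-literal syntax; MySQL doesn't recognize them, so mysql_query() returns false and the "invalid resource" only surfaces later when the result is used. A hedged sketch of building a MySQL-style date literal instead (the column is assumed to be a DATE), with a basic error check:

        // MySQL expects quoted 'YYYY-MM-DD' literals, not #d/m/y#.
        $LDATE = sprintf("'%04d-%02d-%02d'",
                         (int)$_REQUEST['LDateYear'],
                         (int)$_REQUEST['LDateMonth'],
                         (int)$_REQUEST['LDateDay']);

        $LSQLRequest = "SELECT * FROM flight WHERE DepartureDate = " . $LDATE;
        $LFlights = mysql_query($LSQLRequest, $myconn2);

        if ($LFlights === false) {
            // Surface the real reason instead of an "invalid resource" warning later on.
            die('Query failed: ' . mysql_error($myconn2));
        }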

  • Why can't I make an http request to the ASP.NET development server on localhost?

    - by Chris Farmer
    I have an ASP.NET project (VS2008 on Windows 7 with either webforms, MVC1, or MVC2 -- all the same result for me) which is just the File-New hello world web project. It's using the default ASP.NET development server, and when I start the server with F5, the browser never connects and I get a timeout. I tried to debug this by telnetting to the development server's port while it was running, and I got the same result: C:\Users\farmercs>telnet localhost 54752 Connecting To localhost...Could not open connection to the host, on port 54752: Connect failed I can see in the system tray that the server thinks it's running, and a netstat -s -n command shows that there is indeed an active TCP listener on that port. This worked in the not-too-distant past, and I could work on web projects using the development server. One thing that has changed since then was that I installed the Microsoft Loopback Adapter to accommodate a local development Oracle installation. I'm not sure this is the problem, but it seems a likely culprit. So, what could be blocking me from connecting? And if it's the loopback, then what is a good way for me to retain my ability to connect to my development Oracle server while still being able to use the ASP.NET development server?

  • How does OpenStack Swift handle concurrent RESTful API requests?

    - by Chen Xie
    I installed a Swift service and wanted to see how well it handles concurrent requests. So I created a massive number of threads in Java and sent requests via the RESTful API. Not surprisingly, when the number of requests climbed, the program started throwing exceptions:

        Caused by: java.net.ConnectException: Connection timed out: connect
            at java.net.DualStackPlainSocketImpl.connect0(Native Method)
            at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
            at java.net.Socket.connect(Socket.java:579)
            at java.net.Socket.connect(Socket.java:528)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
            at sun.net.www.http.HttpClient.(HttpClient.java:203)

    Can anyone tell me how that timeout happened? I am curious how Swift handles those requests. Does it queue them, so that when there are too many requests in the queue a request waits too long and just gets kicked out of the queue? If so, does that mean it uses an asynchronous mechanism to handle requests? Thanks.
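    Whatever Swift does server-side, a client that opens one socket per thread will usually start seeing connect timeouts on its own once the proxy's listen backlog and the OS limits fill up. A hedged load-test sketch that bounds client-side concurrency with a fixed thread pool (the URL is a placeholder):

        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class SwiftLoadTest {
            public static void main(String[] args) throws Exception {
                // Bound concurrency: 50 workers reuse threads instead of thousands of raw threads.
                ExecutorService pool = Executors.newFixedThreadPool(50);
                for (int i = 0; i < 10000; i++) {
                    pool.submit(() -> {
                        try {
                            URL url = new URL("http://swift-proxy:8080/v1/AUTH_test/container/object"); // placeholder
                            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                            conn.setConnectTimeout(10000);
                            conn.setRequestMethod("GET");
                            System.out.println(conn.getResponseCode());
                            conn.disconnect();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
                pool.shutdown();
            }
        }

    When the server side is saturated it refuses or delays new TCP connections, which shows up on the client as exactly this kind of connect timeout rather than an HTTP error code.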

  • ODP.NET Procedure Compilation

    - by Bobcat1506
    When I try to execute a create procedure using ODP.NET I get back ORA-24344: success with compilation error. However, when I run the same statement in SQL Developer it compiles successfully. Does anyone know what I need to change to get my procedure to compile? Is it a character set issue? I am using Oracle 10g Express, .NET 3.5 SP 1, and ODP.NET 2.111.7.20 (version from Oracle.DataAccess.dll) [TestMethod] public void OdpNet_CreateProcedure() { ConnectionStringSettings settings = ConfigurationManager.ConnectionStrings["ODP.NET"]; using (var con = new OracleConnection(settings.ConnectionString)) { con.InfoMessage += new OracleInfoMessageEventHandler(con_InfoMessage); con.Open(); var cmd = new OracleCommand(); cmd.Connection = con; cmd.CommandText = @" CREATE OR REPLACE PROCEDURE TABLE1_GET ( P_CURSOR OUT SYS_REFCURSOR ) IS BEGIN OPEN P_CURSOR FOR SELECT * FROM TABLE1; END;"; cmd.ExecuteNonQuery(); // ORA-24344: success with compilation error cmd.CommandText = @"ALTER PROCEDURE TABLE1_GET COMPILE"; cmd.ExecuteNonQuery(); // ORA-24344: success with compilation error } } void con_InfoMessage(object sender, OracleInfoMessageEventArgs eventArgs) { System.Diagnostics.Debug.WriteLine(eventArgs.Message); }
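    ORA-24344 means Oracle accepted the DDL but the PL/SQL body compiled with errors; the specific reason (often simply that TABLE1 doesn't exist in the schema ODP.NET is logging into) is recorded in the USER_ERRORS view. A follow-up query, sketched against the same open connection as above:

        // Ask Oracle why the procedure failed to compile.
        var errCmd = new OracleCommand(
            "SELECT line, position, text FROM user_errors WHERE name = 'TABLE1_GET' ORDER BY sequence",
            con);
        using (var reader = errCmd.ExecuteReader())
        {
            while (reader.Read())
            {
                System.Diagnostics.Debug.WriteLine(
                    "Line " + reader["LINE"] + ", col " + reader["POSITION"] + ": " + reader["TEXT"]);
            }
        }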

  • How to: Simulate keystroke input in a shell to an app running on an embedded target

    - by fzkl
    I am writing an automation script that runs on an embedded linux target. A part of the script involves running an app on the target and obtaining some data from the stdout. Stdout here is the ssh terminal connection I have to the target. However, this data is available on the stdout only if certain keys are pressed and the key press has to be done on the keyboard connected to the embedded target and not on the host system from which I have ssh'd into the target. Is there any way to simulate this? Edit: Elaborating on what I need - I have an OpenGL app that I run on the embedded linux (works like regular linux) target. This displays some graphics on the embedded system's display device. Pressing f on the keyboard connected to the target outputs the fps data onto the ssh terminal from which I control the target. Since I am automating the process of running this OpenGL app and obtaining the fps scores, I can't expect a keyboard to be connected to the target let alone expect a user to input a keystroke on the embedded target keyboard. How do I go about this? Thanks.

  • Active Directory Membership Provider - how to expand on this?

    - by Jaxidian
    I'm working on getting an MVC app up and running via AD Membership Provider and I'm having some issues figuring this out. I have a base configuration setup and working when I login as [email protected] + password. <connectionStrings> <add name="MyConnString" connectionString="LDAP://domaincontroller/OU=Product Users,DC=my,DC=domain,DC=com" /> </connectionStrings> <membership defaultProvider="MyProvider"> <providers> <clear /> <add name="MyProvider" connectionStringName="MyConnString" connectionUsername="my.domain.com\service_account" connectionPassword="biguglypassword" type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </providers> </membership> However, I'd LIKE to do some other things and I'm not sure how to go about them. Login without typing the domain (i.e. the "@my.domain.com"). I realize that this could only work if I limit myself to just one domain - that's fine. Organize users in up to N different OUs within a single OU. As you can tell from my current connection string, I'm authenticating users in my Product Users OU. I would LIKE to create OUs for various companies within this OU and put the users into those OUs. How can I authenticate across all of these different OUs? I'm trying to figure out how the Active Directory Membership Provider ties in with the Profile and Role providers. Are there AD versions of those too or am I stuck with SQL, home-grown, or finding something somebody else has coded up? Many thanks!!
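    For point 1, the provider can be told to authenticate on the sAMAccountName attribute instead of the UPN, which lets users type just their account name without @my.domain.com. A hedged sketch of the relevant attribute, with everything else as in the question's config:

        <membership defaultProvider="MyProvider">
          <providers>
            <clear />
            <!-- attributeMapUsername="sAMAccountName" authenticates on the plain account
                 name, so users can log in without typing @my.domain.com. -->
            <add name="MyProvider"
                 connectionStringName="MyConnString"
                 connectionUsername="my.domain.com\service_account"
                 connectionPassword="biguglypassword"
                 attributeMapUsername="sAMAccountName"
                 type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
          </providers>
        </membership>

    For point 2, the provider searches the subtree beneath the container named in the connection string, so pointing it at the parent "Product Users" OU (as the question already does) should also find users placed in per-company child OUs.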

  • How to change internal buffer size of DataInputStream

    - by Gaks
    I'm using this kind of code for my TCP/IP connection: sock = new Socket(host, port); sock.setKeepAlive(true); din = new DataInputStream(sock.getInputStream()); dout = new DataOutputStream(sock.getOutputStream()); Then, in separate thread I'm checking din.available() bytes to see if there are some incoming packets to read. The problem is, that if a packet bigger than 2048 bytes arrives, the din.available() returns 2048 anyway. Just like there was a 2048 internal buffer. I can't read those 2048 bytes when I know it's not the full packet my application is waiting for. If I don't read it however - it'll all stuck at 2048 bytes and never receive more. Can I enlarge the buffer size of DataInputStream somehow? Socket receive buffer is 16384 as returned by sock.getReceiveBufferSize() so it's not the socket limiting me to 2048 bytes. If there is no way to increase the DataInputStream buffer size - I guess the only way is to declare my own buffer and read everything from DataInputStream to that buffer? Regards
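    Whatever is capping available() at 2048, available() is only documented as a lower-bound hint and isn't a reliable way to delimit application packets; the usual fix is to frame messages and block on readFully rather than polling available(). A hedged sketch, assuming the sender writes a 4-byte length prefix with writeInt:

        // Optional: wrap the socket stream in a larger buffer.
        DataInputStream din = new DataInputStream(
                new BufferedInputStream(sock.getInputStream(), 64 * 1024));

        int length = din.readInt();          // blocks until the length prefix arrives
        byte[] packet = new byte[length];
        din.readFully(packet);               // blocks until the whole packet has been read

    If the protocol defines some other framing, read by that instead; the key point is not to treat available() as "bytes remaining in this packet".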

  • At which line in the following code should I commit my unit of work?

    - by Pure.Krome
    I have the following code which is in a transaction. I'm not sure where/when I should be commiting my unit of work. On purpose, I've not mentioned what type of Respoistory i'm using - eg. Linq-To-Sql, Entity Framework 4, NHibernate, etc. If someone knows where, can they please explain WHY they have said, where? (i'm trying to understand the pattern through example(s), as opposed to just getting my code to work). Here's what i've got :- using ( TransactionScope transactionScope = new TransactionScope ( TransactionScopeOption.RequiresNew, new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted } ) ) { _logEntryRepository.InsertOrUpdate(logEntry); //_unitOfWork.Commit(); // Here, commit #1 ? // Now, if this log entry was a NewConnection or an LostConnection, // then we need to make sure we update the ConnectedClients. if (logEntry.EventType == EventType.NewConnection) { _connectedClientRepository.Insert( new ConnectedClient { LogEntryId = logEntry.LogEntryId }); //_unitOfWork.Commit(); // Here, commit #2 ? } // A (PB) BanKick does _NOT_ register a lost connection, // so we need to make sure we handle those scenario's as a LostConnection. if (logEntry.EventType == EventType.LostConnection || logEntry.EventType == EventType.BanKick) { _connectedClientRepository.Delete( logEntry.ClientName, logEntry.ClientIpAndPort); //_unitOfWork.Commit(); // Here, commit #3 ? } _unitOfWork.Commit(); // Here, commit #4 ? transactionScope.Complete(); }

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves -- things like networking, storage and monitoring. As part of the big move, to replace our leased direct-attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and NFS. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system, and recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline       Completed: read failure  90%  6576  3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline       Completed: read failure  90%  6087  3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline       Completed: read failure  10%  5901  656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline       Completed: read failure  90%  5818  651637856

    So I went to check the Cacti graphs for the disks in the array. They show that, yes, disk 7 is slipping away just like syslog says it is. But they also show that disk 8's SMART Read Errors are fluctuating, and there are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate with the high IO wait times! My interpretation is that disk 8 is experiencing an odd hardware fault that results in intermittent long operation times, and that somehow this fault condition on the disk is locking up the entire array. Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The Question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
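    To confirm the disk-8 theory from the command line, SMART data can be pulled per-port through the 3ware controller; the port numbers below follow the 3ware_disk_NN naming in the log and may need adjusting:

        # Query SMART data for port 8 behind the 3ware controller exposed as /dev/twa0
        smartctl -a -d 3ware,8 /dev/twa0

        # Kick off a long self-test on that port and check the result later
        smartctl -t long -d 3ware,8 /dev/twa0

        # 3ware's own CLI view of the controller, units and per-port status
        tw_cli /c0 show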

  • Java getInputStream 400 errors

    - by Bill Szerdy
    When I contact a web service using a Java HttpUrlConnection it just returns a 400 Bad Request (IOException). How do I get the XML information that the server is returning; it does not appear to be in the getErrorStream of the connection nor is it in any of the exception information. When I run the following PHP code against a web service: <?php $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "https://www.myclientaddress.com/here/" ); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1 ); curl_setopt($ch, CURLOPT_POST,1 ); curl_setopt($ch, CURLOPT_POSTFIELDS,"username=ted&password=scheckler&type=consumer&id=123456789&zip=12345"); $result=curl_exec ($ch); echo $result; ?> it returns the following: <?xml version="1.0" encoding="utf-8"?> <response> <status>failure</status> <errors> <error name="RES_ZIP">Zip code is not valid.</error> <error name="ZIP">Invalid zip code for residence address.</error> </errors> </response> so I know the information exists

  • Created files on Archos 5 invisible on Windows XP

    - by user352042
    I am fairly new to Android and this is my first post so I apologise in advance if I am breaking protocol or posting to the wrong board. Please feel free to move this post to somewhere more appropriate if required. I am developing for the 160 Gb Archos 5 Internet tablet. Not ideal as a development platform I know, but customer requirements mean we have no choice. It is running Android 1.6. I have updated the device firmware to the most recent available. Updating the version of Android is not an option at this point. Part of my app's requirement is to write information out to .txt files on the external storage directory so that these can be copied over the USB connection to a Windows XP PC using the Mobile media device (MTP) mode. I have followed all instructions I have come across carefully, eg I check that the storage is available using the technique described at http://developer.android.com/guide/topics/data/data-storage.html#filesExternal. However, althoug the files are created succesfully on the device (I can browse them and open them using the device's File Explorer - they are fine), when I connect the device to a Windows XP computer none of the directories or files I created appear and the size of their parent files suggest they do not exist. I have tried running over the ADB, checked logcat, tried a (signed) release version and even written a second test application which just creates a folder (this behaves the same, ie it creates the folder but this is not visible in Windows Explorer) - nothing anywhere gives me any suggestion as to what the problem might be. If anyone has heard of this before or has any ideas as to what else | could try to fix it please get in touch! We do not have any other devices to test on at the moment, although I hope to remedy this soon, customer permitting.

  • Qt cross thread call

    - by QLatvia
    I have a Qt/C++ application with the usual GUI thread and a network thread. The network thread uses an external library which has its own select()-based event loop, so the network thread isn't using Qt's event system. At the moment the network thread just emit()s signals when various events occur, such as a successful connection. I think this works okay, as the signals/slots mechanism posts the signals correctly for the GUI thread. Now I need the network thread to be able to call the GUI thread to ask questions. For example, the network thread may require the GUI thread to put up a dialog to request a password. Does anyone know a suitable mechanism for doing this? My current best idea is to have the network thread wait on a QWaitCondition after emitting an object (emit passwordRequestedEvent(passwordRequest);). The passwordRequest object would have a handle on the particular QWaitCondition, and so could signal it when a decision has been made. Is this sort of thing sensible, or is there another option?
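    Qt has a built-in blocking variant of a queued call that does essentially what the QWaitCondition idea does by hand: the worker thread blocks until the slot has run in the GUI thread and the return value is filled in. A hedged sketch; the object and slot names are illustrative, and the slot must live on a GUI-thread object:

        // Called from the network thread. MainWindow::askPassword must be a slot (or
        // Q_INVOKABLE) returning QString on an object that lives in the GUI thread.
        // Never use BlockingQueuedConnection from the receiver's own thread (deadlock).
        QString password;
        QMetaObject::invokeMethod(mainWindow,
                                  "askPassword",
                                  Qt::BlockingQueuedConnection,   // blocks the caller until the slot returns
                                  Q_RETURN_ARG(QString, password),
                                  Q_ARG(QString, QString("Password for example.org")));
        // 'password' now holds whatever the dialog returned.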

  • Using Reachability for Internet *or* local WiFi?

    - by randallmeadows
    I've searched SO for the answer to this question, and it's not really addressed, at least not to a point where I can make it work. I was originally only checking for Internet reachability, using: self.wwanReach = [Reachability reachabilityWithHostName:@"www.apple.com"]; [wwanReach startNotifer]; I now need to support a local WiFi connection (in the absence of reaching the Internet in general), and when I found +reachabilityForLocalWiFi, I also noticed there was +reachabilityForInternetConnection. I figured I could use these, instead of hard-coding "www.apple.com" in there, but alas, when I use self.wwanReach = [Reachability reachabilityForInternetConnection]; [wwanReach startNotifer]; self.wifiReach = [Reachability reachabilityForLocalWiFi]; [wifiReach startNotifer]; the reachability callback that I've set up "never" gets called, for values of "never" up to 10, 12, 15 minutes or so (which was as long as my patience lasted. (User's patience will be much less, I'm sure.) Switching back to +reachabilityWithHostName: works within seconds. I also tried each "pair" individually, in case there was an issue with two notifiers in progress simultaneously, but that made no difference. So: what is the appropriate way to determine reachability to either the Internet/WWAN or a local Wifi network (either one, or both)? [This particular use case is an iPhone or iPad connecting to a Mac mini computer-to-computer network; I'm sure other situations apply.]

  • How to perform usort() on an array of objects when the comparator is defined as a class method?

    - by user558724
    class Contact{ public $name; public $bgcolor; public $lgcolor; public $email; public $element; public function __construct($name, $bgcolor, $lgcolor, $email, $element) { $this->name = $name; $this->bgcolor = $bgcolor; $this->lgcolor = $lgcolor; $this->email = $email; $this->element = $element; } public static function sortByName(Contact $p1, Contact $p2) { return strcmp($p1->name, $p2->name); } } class ContactList implements Iterator, ArrayAccess { protected $_label; protected $_contacts = array(); public function __construct($label) { $this->_label = $label; } public function getLabel() { return $this->_label; } public function addContact(Contact $contact) { $this->_contacts[] = $contact; } public function current() { return current($this->_contacts); } public function key() { return key($this->_contacts); } public function next() { return next($this->_contacts); } public function rewind() { return reset($this->_contacts); } public function valid() { return current($this->_contacts); } public function offsetGet($offset) { return $this->_contacts[$offset]; } public function offsetSet($offset, $data) { if (!$data instanceof Contact) throw new InvalidArgumentException('Only Contact objects allowed in a ContactList'); if ($offset == '') { $this->_contacts[] = $data; } else { $this->_contacts[$offset] = $data; } } public function offsetUnset($offset) { unset($this->_contacts[$offset]); } public function offsetExists($offset) { return isset($this->_contacts[$offset]); } public function sort($attribute = 'name') { $sortFct = 'sortBy' . ucfirst(strtolower($attribute)); if (!in_array($sortFct, get_class_methods('Contact'))) { throw new Exception('contact->sort(): Can\'t sort by ' . $attribute); } usort($this->contact, 'ContactList::' . $sortFct); } } public function Sort($property, $asc=true) { // this is where sorting logic takes place $_pd = $this->_contact->getProperty($property); if ($_pd == null) { user_error('Property '.$property.' does not exist in class '.$this->_contact->getName(), E_WARNING); return; } // set sortDescriptor ContactList::$sortProperty = $_pd; // and apply sorting usort($this->_array, array('ContactList', ($asc?'USortAsc':'USortDesc'))); } function getItems(){ return $this->_array; } class SortableItem extends ContactList { static public $sortProperty; static function USortAsc($a, $b) { /*@var $_pd ReflectionProperty*/ /* $_pd = self::$sortProperty; if ($_pd !== null) { if ($_pd->getValue($a) === $_pd->getValue($b)) return 0; else return (($_pd->getValue($a) < $_pd->getValue($b))?-1:1); } return 0; } static function USortDesc($a, $b) { return -(self::USortAsc($a,$b)); } } This approach keeps giving me PHP Warnings: usort() [function.usort]: of all kinds which I can provide later as needed to comment out those methods and definitions in order to test and fix some minor bugs of our program. **$billy parameters are already defined. $all -> addContact($billy); // --> ended up adding each contact manually above $all->Sort('name',true); $items = $all->getItems(); foreach($items as $contact) { echo $contact->__toString(); } $all->sort(); The reason for using usort is to re-arrange the order alphabetically by name but somehow is either stating that the function comparison needs to be an array or another errors which obviously I have seemed to pass. Any help would be greatly appreciated, thanks in advance.
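    Stripped down to just the sorting mechanics, a usort() call over an array of objects with a static comparator looks like the sketch below; the warnings in the question typically come from passing something that isn't an array (in ContactList::sort() that means passing $this->_contacts, not $this->contact) or a callback string that doesn't resolve to an existing method.

        <?php
        class Contact {
            public $name;
            public function __construct($name) { $this->name = $name; }

            // Static comparator usable as a usort() callback.
            public static function sortByName(Contact $a, Contact $b) {
                return strcmp($a->name, $b->name);
            }
        }

        $contacts = array(new Contact('Zoe'), new Contact('Billy'), new Contact('Mara'));

        // Either callback form works in PHP 5.2+:
        usort($contacts, array('Contact', 'sortByName'));
        // usort($contacts, 'Contact::sortByName');

        foreach ($contacts as $c) {
            echo $c->name, "\n";   // Billy, Mara, Zoe
        }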

  • Python: two loops at once

    - by Stephan Meijer
    I've got a problem: I am new to Python and I want to run multiple loops. I want to run a WebSocket client (Autobahn) and I also want to run a loop which shows the files that are edited in a specific folder (pyinotify, or else Watchdog). Both run forever. Great. Is there a way to run them at once and send a message via the WebSocket connection while the file watcher is running -- with callbacks, multithreading, multiprocessing, or just separate files?

        factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
        factory.protocol = self.webSocket
        connectWS(factory)
        reactor.run()

    If we run this, it works. But if we run this:

        factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
        factory.protocol = self.webSocket
        connectWS(factory)
        reactor.run()

        # WebSocket client running now, running the file watcher
        wm = pyinotify.WatchManager()
        mask = pyinotify.IN_DELETE | pyinotify.IN_CREATE  # watched events

        class EventHandler(pyinotify.ProcessEvent):
            def process_IN_CREATE(self, event):
                print "Creating:", event.pathname

            def process_IN_DELETE(self, event):
                print "Removing:", event.pathname

        handler = EventHandler()
        notifier = pyinotify.Notifier(wm, handler)
        wdd = wm.add_watch('/tmp', mask, rec=True)
        notifier.loop()

    This creates two loops, but since we already have a loop, the code after reactor.run() will not run at all. For your information: this project is going to be a sync client. Thanks a lot!
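    Since Autobahn already runs on Twisted, one option is to let the reactor own both jobs: Twisted ships an inotify wrapper (twisted.internet.inotify) that fires callbacks inside the same event loop, so no second loop or thread is needed. A minimal sketch; the ws:// URL and watched path are taken from the question, while the callback body and protocol wiring are illustrative:

        from twisted.internet import reactor, inotify
        from twisted.python.filepath import FilePath
        from autobahn.websocket import WebSocketClientFactory, connectWS


        def on_fs_event(ignored, path, mask):
            # Runs inside the reactor loop; safe to talk to the WebSocket protocol here.
            print "event", ', '.join(inotify.humanReadableMask(mask)), "on", path.path


        notifier = inotify.INotify()
        notifier.startReading()
        notifier.watch(FilePath("/tmp"), callbacks=[on_fs_event])

        factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
        # factory.protocol = YourClientProtocol   (as in your existing code)
        connectWS(factory)

        reactor.run()   # one loop drives both the WebSocket client and the file watcher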

  • SQL 2 INNER JOINS with 3 tables

    - by Jelmer Holtes
    I've a question about a SQL query.. I'm building a prototype webshop in ASP.NET Visual Studio. Now I'm looking for a solution to view my products. I've build a database in MS Access, it consists of multiple tables. The tables which are important for my question are: Product Productfoto Foto Below you'll see the relations between the tables For me it is important to get three datatypes: Product title, price and image. The product title, and the price are in the Product table. The images are in the Foto table. Because a product can have more than one picture, there is a N - M relation between them. So I've to split it up, I did it in the Productfoto table. So the connection between them is: product.artikelnummer -> productfoto.artikelnummer productfoto.foto_id -> foto.foto_id Then I can read the filename (in the database: foto.bestandnaam) I've created the first inner join, and tested it in Access, this works: SELECT titel, prijs, foto_id FROM Product INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikelnummer But I need another INNER JOIN, how could I create that? I guess something like this (this one will give me an error) SELECT titel, prijs, bestandnaam FROM Product (( INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikkelnummer ) INNER JOIN foto ON productfoto.foto_id = foto.foto_id) Can anyone help me?

  • How to detect how many times a user has clicked a button

    - by Jerry
    Hello guys. Just want to know if there is a way to detect how many times a user has clicked a button by using Jquery. My main application has a button that can add input fields depend on the users. He/She can adds as many input fields as they need. When they submit the form, The add page will add the data to my database. My current idea is to create a hidden input field and set the value to zero. Every time a user clicks the button, jquery would update the attribute of the hidden input field value. Then the "add page" can detect the loop time. See the example below. I just want to know if there are better practices to do this. Thanks for the helps. main page <form method='post' action='add.php'> //omit <input type="hidden" id="add" name="add" value="0"/> <input type="button" id="addMatch" value="Add a match"/> //omit </form> jquery $(document).ready(function(){ var a =0; $("#addMatch").live('click', function(){ $('#table').append("<input name='match"+a+"Name' />") //the input field will append //as many as the user wants. a++; $('#add').attr('name', 'a'); //pass the a value to hidden input field return false; }); Add Page $a=$_POST['a']; // for($k=0;$k<$a;$k++){ //get all matchName input field $matchName=$_POST['match'.$k.'Name']; //insert the match $updateQuery=mysql_query("INSERT INTO game (team) values('$matchName')",$connection); if(!$updateQuery){ DIE('mysql Error:'+mysql_error()); }
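    A counter variable is fine; the part missing in the snippet above is writing the count into the hidden field's value (the .attr('name', 'a') call just renames the field to the literal string 'a'). A hedged sketch:

        $(document).ready(function () {
            var count = 0;

            $('#addMatch').live('click', function () {
                $('#table').append("<input name='match" + count + "Name' />");
                count++;
                $('#add').val(count);   // store how many inputs exist in the hidden field
                return false;
            });
        });

    On the PHP side the count then arrives as $_POST['add'] and can drive the for loop directly; reading $_POST['a'] isn't needed.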
