Search Results

Search found 7128 results on 286 pages for 'httpcontext cache'.


  • How to send web browser a loading page, then some time later a results page

    - by Kurt W. Leucht
    I've wasted at least a half day of my company's time searching the Internet for an answer, and I'm getting wrapped around the axle here. I can't figure out the difference between all the different technology choices (long polling, Ajax streaming, Comet, XMPP, etc.), and I can't get a simple hello-world example working on my PC. I am running Apache 2.2 and ActivePerl 5.10.0. JavaScript is completely acceptable for this solution.

    All I want to do is write a simple Perl CGI script that, when accessed, immediately returns some HTML telling the user to wait (or maybe sends an animated GIF). Then, without any user intervention (no mouse clicks or anything), I want the CGI script to replace the wait message or the animated GIF some time later with the actual results of the query. I know this is simple stuff and websites do it all the time using JavaScript, but I can't find a single working example that I can cut and paste onto my machine that will work in Perl.

    Here is my simple hello-world example, compiled from various Internet sources, but it doesn't seem to work. When I refresh this Perl CGI script in my web browser it prints nothing for 5 seconds, then it prints the PLEASE BE PATIENT web page, but never the results page. So the Ajax XMLHttpRequest stuff obviously isn't working right. What am I doing wrong?

        #!C:\Perl\bin\perl.exe
        use CGI;
        use CGI::Carp qw/fatalsToBrowser warningsToBrowser/;

        sub Create_HTML {
          my $html = <<EOHTML;
        <html>
        <head>
        <meta http-equiv="pragma" content="no-cache" />
        <meta http-equiv="expires" content="-1" />
        <script type="text/javascript" >
        var xmlhttp=false;
        /*@cc_on @*/
        /*@if (@_jscript_version >= 5)
        // JScript gives us Conditional compilation, we can cope with old IE versions.
        // and security blocked creation of the objects.
        try {
          xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
        } catch (e) {
          try {
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
          } catch (E) {
            xmlhttp = false;
          }
        }
        @end @*/
        if (!xmlhttp && typeof XMLHttpRequest!='undefined') {
          try {
            xmlhttp = new XMLHttpRequest();
          } catch (e) {
            xmlhttp=false;
          }
        }
        if (!xmlhttp && window.createRequest) {
          try {
            xmlhttp = window.createRequest();
          } catch (e) {
            xmlhttp=false;
          }
        }
        </script>
        <title>Ajax Streaming Connection Demo</title>
        </head>
        <body>
        Some header text.
        <p>
        <div id="response">PLEASE BE PATIENT</div>
        <p>
        Some footer text.
        </body>
        </html>
        EOHTML
          return $html;
        }

        my $cgi = new CGI;
        print $cgi->header;
        print Create_HTML();
        sleep(5);
        print "<script type=\"text/javascript\">\n";
        print "\$('response').innerHTML = 'Here are your results!';\n";
        print "</script>\n";
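    Two things in the script above are worth flagging. First, the trailing <script> block uses $('response'), a helper that only exists if a library such as Prototype or jQuery is loaded, which this page never does; the plain-DOM equivalent is document.getElementById('response'). Second, everything is written into one buffered response, which is why the browser sits silent for five seconds. A more conventional shape is to return the "please wait" page immediately and let JavaScript poll a second CGI script for the finished result. The sketch below illustrates that pattern; the file name results.cgi and the two-second delay are made-up choices, not part of the original question.

        #!C:\Perl\bin\perl.exe
        # results.cgi -- hypothetical second endpoint that returns the result.
        use CGI;
        my $cgi = new CGI;
        print $cgi->header(-type => 'text/plain', -expires => '-1d');
        # A real script would check whether the long-running work is done
        # (e.g. look for an output file) and print a "not ready" marker if not.
        print "Here are your results!";

    And in the HTML returned by the first script, replace the sleep(5)/print tail with a small poller that reuses the xmlhttp object created in the header:

        <script type="text/javascript">
        function poll() {
          xmlhttp.onreadystatechange = function () {
            if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
              document.getElementById('response').innerHTML = xmlhttp.responseText;
            }
          };
          xmlhttp.open('GET', 'results.cgi', true);
          xmlhttp.send(null);
        }
        setTimeout(poll, 2000); // ask for the result two seconds after page load
        </script>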

    Read the article

  • How to resolve "HTTP/1.1 403 Forbidden" errors from iCal/CalDAV server after upgrade to Snow Leopard Server?

    - by morgant
    We recently upgraded our Open Directory Master & Replica to Mac OS X 10.6.4 Snow Leopard Server. We had a mismatched server FQDN & LDAP Search Base/Kerberos Realm, so we exported all users & groups, created the new Open Directory Master w/matching FQDN & Search Base/Realm, reimported users & groups, and re-bound all servers & workstations to the new OD Master. At the same time as all of this, we upgraded our iCal/CalDAV server to Mac OS X 10.6.4 Snow Leopard Server. Ever since doing so, we've seen the following issues with our iCal/CalDAV server and iCal clients on both Mac OS X 10.5 Leopard & Mac OS X 10.6:

    If a user attempts to move or delete an event (single or repeating) that was created prior to the upgrade to 10.6 Server, they get the following error: Access to "blah" in "blah" in account "blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation.

    New users added to the directory get the following error when attempting to add their account in iCal's preferences: The user "blah" has no configured principals. Confirm with your network administrator that your account has at least one CalDAV principal configured.

    Interestingly, we've since discovered that users seem to be able to delete individual events from an old repeating event without error, but that's a massive amount of work to get rid of a repeating event. I will note that we have not yet added an SRV record in DNS as instructed on page 19 of iCal_Server_Admin_v10.6.pdf.

    Further investigation: in this particular case, a user is attempting to decline repeating events created prior to the upgrade to Snow Leopard Server. Granting the user full write access with sudo calendarserver_manage_principals --add-write-proxy users:user1 users:user2 (which did work) doesn't allow deletion of the events. We still get the usual error: Access to "blah blah" in "blah blah" in account "blah blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation.
The error that shows up in /var/log/caldavd/error.log on the iCal Server when attempting to delete one of the events is: 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.extensions#info] PUT /calendars/__uids__/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/calendar/XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.ics HTTP/1.1 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.scheduling.implicit#error] Cannot change ORGANIZER: UID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX And the error in /var/log/system.log on the client is: Mar 17 15:14:30 192-168-21-169-dhcp iCal[33509]: CalDAV CalDAVWriteEntityQueueableOperation failed: status 'HTTP/1.1 403 Forbidden' request:\n\nBEGIN:VCALENDAR^M\nVERSION:2.0^M\nPRODID:-//Apple Inc.//iCal 3.0//EN^M\nCALSCALE:GREGORIAN^M\nBEGIN:VTIMEZONE^M\nTZID:US/Eastern^M\nBEGIN:DAYLIGHT^M\nTZOFFSETFROM:-0500^M\nTZOFFSETTO:-0400^M\nDTSTART:20070311T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU^M\nTZNAME:EDT^M\nEND:DAYLIGHT^M\nBEGIN:STANDARD^M\nTZOFFSETFROM:-0400^M\nTZOFFSETTO:-0500^M\nDTSTART:20071104T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU^M\nTZNAME:EST^M\nEND:STANDARD^M\nEND:VTIMEZONE^M\nBEGIN:VEVENT^M\nSEQUENCE:5^M\nDTSTART;TZID=US/Eastern:20090117T094500^M\nDTSTAMP:20081227T143043Z^M\nSUMMARY:blah blah^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT:urn:uuid^M\n :XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:user@d^M\n omain.tld^M\nEXDATE;TZID=US/Eastern:20110319T094500^M\nDTEND;TZID=US/Eastern:20090117T183000^M\nRRULE:FREQ=WEEKLY;INTERVAL=1^M\nTRANSP:OPAQUE^M\nUID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nORGANIZER;CN="First Last":mailto:[email protected]^M\nX-WR-ITIPSTATUSML:UNCLEAN^M\nCREATED:20110317T191348Z^M\nEND:VEVENT^M\nEND:VCALENDAR^M\n\n\n... response:\nHTTP/1.1 403 Forbidden^M\nDate: Thu, 17 Mar 2011 19:14:30 GMT^M\nDav: 1, access-control, calendar-access, calendar-schedule, calendar-auto-schedule, calendar-availability, inbox-availability, calendar-proxy, calendarserver-private-events, calendarserver-private-comments, calendarserver-principal-property-search^M\nContent-Type: text/xml^M\nContent-Length: 134^M\nServer: Twisted/8.2.0 TwistedWeb/8.2.0 TwistedCalDAV/2.5 (iCal Server v12.56.21)^M\n^M\n<?xml version='1.0' encoding='UTF-8'?><error xmlns='DAV:'>^M\n <valid-attendee-change xmlns='urn:ietf:params:xml:ns:caldav'/>^M\n</error> One thing I have noticed, and I'm not sure if this has any real effect is that in many of these pre-Snow Leopard Server migration events, the ORGANIZER is specified like the following: ORGANIZER;CN=First Last:mailto:[email protected] But newer ones are more like one of the two following: ORGANIZER;CN=First Last;[email protected];SCHEDULE-STATUS=1 ORGANIZER;CN=First Last;[email protected]:urn:uuid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX iCal_Server_Admin_v10.6.pdf notes that the ".db.sqlite" files are completely disposable, they're merely a performance cache and are re-built on the fly, so are safe to delete. I did delete the one for the organizer's calendars and it took longer to process the attempted event delete while it rebuilt the database, but still errored out in the end. FWIW the error is thrown by this code: https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/scheduling/implicit.py Any further suggestions? I see lots of questions about this in my Google searches, but not solutions and this is a widespread problem on our iCal Server. 
Now, we mostly try to get users to ignore them (hence the amount of time this question has been open), but every now and then I dig in deeper trying to find the culprit and/or solution.

    Read the article

  • "Row not found or changed" Problem

    - by winston schröder
    Hi there, I'm working on a SQL CE database and keep running into the "Row not found or changed" exception. The exception only occurs when I try to update. On the first run after the insert, it shows a MemberChangeConflict which says that my Created_at column has the same value in all three places (current, original, database); on a second attempt it doesn't appear anymore.

    The DataContext is instantiated on startup and freed on exit of my local(!) application. I use a sqlmetal-generated mapping and code file. In the map I added some associations and set the timestamp column's UpdateCheck property to Always, while all others are set to Never. The timestamp column is marked as isVersion="true", the Id column as primary key. Since I don't dispose the DataContext, I expected to be using an implicit transaction when I run the SubmitChanges method within a TransactionScope. Can anyone tell me how I can update the timestamp within the code?

    I know about the problems one has to deal with around disposing the DataContext, and I decided not to dispose it since I use a single-user local DB cache file. (I already tried a version where I disposed the DataContext after every usage, but that version had a really bad performance and error rate, so I chose the other variant.)

        LibDB.Client.Vehicles tmp = null;
        try
        {
            tmp = e.Parameter as LibDB.Client.Vehicles;
            if (tmp == null) return;
            if (!this._dc.Vehicles.Contains(tmp))
            {
                this._dc.Vehicles.Attach(tmp);
            }
            this.ShowChangesReport(this._dc.GetChangeSet());
            using (TransactionScope ts = new TransactionScope())
            {
                try
                {
                    this._dc.SubmitChanges();
                    ts.Complete();
                }
                catch (ChangeConflictException cce)
                {
                    Console.WriteLine("Optimistic concurrency error.");
                    Console.WriteLine(cce.Message);
                    Console.ReadLine();
                    foreach (ObjectChangeConflict occ in this._dc.ChangeConflicts)
                    {
                        MetaTable metatable = this._dc.Mapping.GetTable(occ.Object.GetType());
                        LibDB.Client.Vehicles entityInConflict = (LibDB.Client.Vehicles)occ.Object;
                        Console.WriteLine("Table name: {0}", metatable.TableName);
                        Console.Write("Vin: ");
                        Console.WriteLine(entityInConflict.Vin);
                        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
                        {
                            object currVal = mcc.CurrentValue;
                            object origVal = mcc.OriginalValue;
                            object databaseVal = mcc.DatabaseValue;
                            MemberInfo mi = mcc.Member;
                            Console.WriteLine("Member: {0}", mi.Name);
                            Console.WriteLine("current value: {0}", currVal);
                            Console.WriteLine("original value: {0}", origVal);
                            Console.WriteLine("database value: {0}", databaseVal);
                        }
                        throw cce;
                    }
                }
                catch (Exception ex)
                {
                    this.ShowChangeConflicts(this._dc.ChangeConflicts);
                    Console.WriteLine(ex.Message);
                }
            }
            this.ShowChangesReport(this._dc.GetChangeSet());
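    Because the timestamp column is mapped with isVersion="true", LINQ to SQL treats it as a server-generated row version: it is refreshed automatically after a successful SubmitChanges, and you don't write it yourself. What you can do in code is resolve the conflict by re-syncing the cached original values (including the row version) with the database and resubmitting. A minimal sketch, reusing the _dc field from the question; RefreshMode.KeepChanges keeps the pending edits while accepting the database's current row version:

        try
        {
            this._dc.SubmitChanges(ConflictMode.ContinueOnConflict);
        }
        catch (ChangeConflictException)
        {
            foreach (ObjectChangeConflict occ in this._dc.ChangeConflicts)
            {
                // Re-reads the row, updating the original values (among them
                // the timestamp) so the next update passes the version check.
                occ.Resolve(RefreshMode.KeepChanges);
            }
            this._dc.SubmitChanges();
        }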

    Read the article

  • Saving a .xls file with fwrite

    - by kielie
    Hi guys, I have to create a script that takes a MySQL table, exports it into .XLS format, and then saves that file into a specified folder on the web host. I got it working, but now I can't seem to get it to automatically save the file to the location without prompting the user. It needs to run every day at a specified time, so it can save the previous day's leads into a .XLS file on the web host. Here is the code:

        <?php
        // DB TABLE Exporter
        //
        // How to use:
        //
        // Place this file in a safe place, edit the info just below here
        // browse to the file, enjoy!

        // CHANGE THIS STUFF FOR WHAT YOU NEED TO DO
        $dbhost = "-";
        $dbuser = "-";
        $dbpass = "-";
        $dbname = "-";
        $dbtable = "-";
        // END CHANGING STUFF

        $cdate = date("Y-m-d"); // get current date

        // first thing that we are going to do is make some functions for writing out
        // and excel file. These functions do some hex writing and to be honest I got
        // them from some where else but hey it works so I am not going to question it
        // just reuse

        // This one makes the beginning of the xls file
        function xlsBOF() {
            echo pack("ssssss", 0x809, 0x8, 0x0, 0x10, 0x0, 0x0);
            return;
        }

        // This one makes the end of the xls file
        function xlsEOF() {
            echo pack("ss", 0x0A, 0x00);
            return;
        }

        // this will write text in the cell you specify
        function xlsWriteLabel($Row, $Col, $Value ) {
            $L = strlen($Value);
            echo pack("ssssss", 0x204, 8 + $L, $Row, $Col, 0x0, $L);
            echo $Value;
            return;
        }

        // make the connection an DB query
        $dbc = mysql_connect( $dbhost , $dbuser , $dbpass ) or die( mysql_error() );
        mysql_select_db( $dbname );
        $q = "SELECT * FROM ".$dbtable." WHERE date ='$cdate'";
        $qr = mysql_query( $q ) or die( mysql_error() );

        // Ok now we are going to send some headers so that this
        // thing that we are going make comes out of browser
        // as an xls file.
        header("Pragma: public");
        header("Expires: 0");
        header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
        header("Content-Type: application/force-download");
        header("Content-Type: application/octet-stream");
        header("Content-Type: application/download");

        //this line is important its makes the file name
        header("Content-Disposition: attachment;filename=export_".$dbtable.".xls ");
        header("Content-Transfer-Encoding: binary ");

        // start the file
        xlsBOF();

        // these will be used for keeping things in order.
        $col = 0;
        $row = 0;

        // This tells us that we are on the first row
        $first = true;

        while( $qrow = mysql_fetch_assoc( $qr ) ) {
            // Ok we are on the first row
            // lets make some headers of sorts
            if( $first ) {
                foreach( $qrow as $k => $v ) {
                    // take the key and make label
                    // make it upper case and replace _ with ' '
                    xlsWriteLabel( $row, $col, strtoupper( ereg_replace( "_" , " " , $k ) ) );
                    $col++;
                }
                // prepare for the first real data row
                $col = 0;
                $row++;
                $first = false;
            }
            // go through the data
            foreach( $qrow as $k => $v ) {
                // write it out
                xlsWriteLabel( $row, $col, $v );
                $col++;
            }
            // reset col and goto next row
            $col = 0;
            $row++;
        }

        xlsEOF();
        exit();
        ?>

    I tried using fwrite to accomplish this, but it didn't seem to go very well; I removed the header information too, but nothing worked. Here is the original code, as I found it. Any help would be greatly appreciated. :-) Thanks in advance. :-)
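    Since the script already produces the complete .xls byte stream through echo, one way to save it on the server without any browser prompt is to capture that output with PHP's output buffering and write it to disk, then run the script from cron instead of a browser (dropping the header() calls entirely). A sketch under those assumptions; the exports directory is a placeholder:

        // Sketch: buffer the generated spreadsheet and write it to disk.
        // Assumes the xlsBOF()/xlsWriteLabel()/xlsEOF() helpers and the
        // query loop from the script above.
        ob_start();                       // capture everything echoed below
        xlsBOF();
        // ... same header-and-data row loop as above ...
        xlsEOF();
        $data = ob_get_clean();           // the finished .xls byte stream
        $path = "/path/to/exports/export_".$dbtable."_".$cdate.".xls";
        file_put_contents($path, $data);  // no headers, no browser involved

    A crontab entry along the lines of 0 1 * * * /usr/bin/php /path/to/export.php would then produce each day's file automatically.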

    Read the article

  • CISCO 2911 Router configuration

    - by bala
    Cisco 2911 router configuration support is required, please. I have Exchange Server 2010 configured and working without any errors. The problem is in the Cisco router configuration: when the Exchange server sends email out, the receiving side sees our WAN IP, not the public IP. I have configured rDNS lookups with our MX record IP addresses that match the FQDN, but all our outgoing emails are rejected because the source address does not match the public IP. Receiving mail is not a problem; all mail comes through. I am sure I am missing something in the router configuration that should present the public IP. Can anyone help me solve this issue? Note: I've got 1 WAN IP & 8 public IPs from the ISP. Find the running configuration below.

        Building configuration...

        Current configuration : 2734 bytes
        !
        ! Last configuration change at 06:32:13 UTC Tue Apr 3 2012
        ! NVRAM config last updated at 06:32:14 UTC Tue Apr 3 2012
        ! NVRAM config last updated at 06:32:14 UTC Tue Apr 3 2012
        version 15.1
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname BSBG-LL
        !
        boot-start-marker
        boot-end-marker
        !
        enable secret 5 $x$xHrxxxxx5ox0
        enable password 7 xx23xx5FxxE1xx044
        !
        no aaa new-model
        !
        no ipv6 cef
        ip source-route
        ip cef
        !
        ip flow-cache timeout active 1
        ip domain name yourdomain.com
        ip name-server 213.42.20.20
        ip name-server 195.229.241.222
        multilink bundle-name authenticated
        !
        crypto pki token default removal timeout 0
        !
        license udi pid CISCO2911/K9
        !
        username bsbg
        !
        interface Embedded-Service-Engine0/0
         no ip address
         shutdown
        !
        interface GigabitEthernet0/0
         ip address 192.168.0.9 255.255.255.0
         ip flow ingress
         ip nat inside
         ip virtual-reassembly in
         duplex auto
         speed 100
         no cdp enable
        !
        interface GigabitEthernet0/1
         ip address 213.42.xx.x2 255.255.255.252
         ip nat outside
         ip virtual-reassembly in
         duplex auto
         speed auto
         no cdp enable
        !
        interface GigabitEthernet0/2
         no ip address
         shutdown
         duplex auto
         speed auto
        !
        ip forward-protocol nd
        !
        no ip http server
        no ip http secure-server
        !
        ip nat inside source list 120 interface GigabitEthernet0/1 overload
        ip nat inside source static tcp 192.168.0.4 25 94.56.89.100 25 extendable
        ip nat inside source static tcp 192.168.0.4 53 94.56.89.100 53 extendable
        ip nat inside source static udp 192.168.0.4 53 94.56.89.100 53 extendable
        ip nat inside source static tcp 192.168.0.4 110 94.56.89.100 110 extendable
        ip nat inside source static tcp 192.168.0.4 443 94.56.89.100 443 extendable
        ip nat inside source static tcp 192.168.0.4 587 94.56.89.100 587 extendable
        ip nat inside source static tcp 192.168.0.4 995 94.56.89.100 995 extendable
        ip nat inside source static tcp 192.168.0.4 3389 94.56.89.100 3389 extendable
        ip nat inside source static tcp 192.168.0.4 443 94.56.89.101 443 extendable
        ip nat inside source static tcp 192.168.0.12 80 94.56.89.102 80 extendable
        ip nat inside source static tcp 192.168.0.12 443 94.56.89.102 443 extendable
        ip nat inside source static tcp 192.168.0.12 3389 94.56.89.102 3389 extendable
        ip route 0.0.0.0 0.0.0.0 213.42.69.41
        !
        access-list 120 permit ip 192.168.0.0 0.0.0.255 any
        !
        control-plane
        !
        line con 0
         exec-timeout 5 0
        line aux 0
        line 2
         no activation-character
         no exec
         transport preferred none
         transport input all
         transport output pad telnet rlogin lapb-ta mop udptn v120 ssh
         stopbits 1
        line vty 0 4
         password 7 xx64xxD530D26086Dxx
         login
         transport input all
        !
        scheduler allocate 20000 1000
        end
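    What usually produces this symptom with a configuration like the one above: outbound SMTP from 192.168.0.4 matches the list-120 overload (PAT) rule and so leaves stamped with GigabitEthernet0/1's WAN address, while the static entries only translate connections arriving at 94.56.89.100's ports. One common fix, sketched below and hedged accordingly, is to exclude the Exchange host from the overload ACL and give it a one-to-one static NAT so it always sources from 94.56.89.100. Numbered ACLs can't be reordered in place, so the list is rebuilt; the one-to-one entry would take the place of the per-port statics for 192.168.0.4, which should be removed first. Test in a maintenance window.

        ! Rebuild ACL 120 so the Exchange host no longer matches the PAT rule
        no access-list 120
        access-list 120 deny   ip host 192.168.0.4 any
        access-list 120 permit ip 192.168.0.0 0.0.0.255 any
        ! One-to-one static NAT: .4 now sends and receives as 94.56.89.100
        ip nat inside source static 192.168.0.4 94.56.89.100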

    Read the article

  • Ruby Gem LoadError mysql2/mysql2 required

    - by Kalli Dalli
    Im trying to setup my rails server on OSX 10.8 but I can't get my rails server to run. - Currently Im using a Zend Server with mysql 5.1. - I also have istalled brew and brew mysql. - And I used: gem install mysql2 -- --srcdir=/usr/local/mysql/include --with-opt-include=/usr/local/mysql/include the server worked already but now, I always get this loadError below. This is what my Gemfile says: ralphs-macbook-pro:admin-mockup zero$ bundle install Using rake (10.0.2) Using i18n (0.6.1) Using multi_json (1.3.7) Using activesupport (3.2.7) Using builder (3.0.4) Using activemodel (3.2.7) Using erubis (2.7.0) Using journey (1.0.4) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.2) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.7) Using mime-types (1.19) Using polyglot (0.3.3) Using treetop (1.4.12) Using mail (2.4.4) Using actionmailer (3.2.7) Using arel (3.0.2) Using tzinfo (0.3.35) Using activerecord (3.2.7) Using activeresource (3.2.7) Using annotate (2.5.0) Using coffee-script-source (1.4.0) Using execjs (1.4.0) Using coffee-script (2.2.0) Using rack-ssl (1.3.2) Using json (1.7.5) Using rdoc (3.12) Using thor (0.16.0) Using railties (3.2.7) Using coffee-rails (3.2.2) Using columnize (0.3.6) Using debugger-ruby_core_source (1.1.5) Using debugger-linecache (1.1.2) Using debugger (1.2.2) Using formtastic (2.2.1) Using haml (3.1.7) Using haml-rails (0.3.5) Using hirb (0.7.0) Using hpricot (0.8.6) Using jquery-rails (2.1.4) Using kgio (2.7.4) Using mysql2 (0.3.11) Using php_serialize (1.2) Using polyamorous (0.5.0) Using rabl (0.7.8) Using railroady (1.1.0) Using bundler (1.2.3) Using rails (3.2.7) Using raindrops (0.10.0) Using randumb (0.3.0) Using sass (3.2.3) Using sass-rails (3.2.5) Using squeel (1.0.13) Using uglifier (1.3.0) Using unicorn (4.4.0) Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed. And after starting rails s /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/mysql2-0.3.11/lib/mysql2.rb:9:in `require': cannot load such file -- mysql2/mysql2 (LoadError) from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/mysql2-0.3.11/lib/mysql2.rb:9:in `<top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:68:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:68:in `block (2 levels) in require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:66:in `each' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:66:in `block in require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:55:in `each' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:55:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler.rb:128:in `require' from /Users/zero/GitHub/admin-mockup/config/application.rb:7:in `<top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:53:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:53:in `block in <top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:50:in `tap' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:50:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' Thx for any help!
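    A common way out of this particular LoadError is to make sure the gem's native extension is built against one consistent MySQL install, and that Bundler's copy is the one that got those build flags; "cannot load such file -- mysql2/mysql2" usually means the compiled extension inside the gem is missing or was linked against a library that can't be loaded. A hedged sketch, assuming the mysql.com package under /usr/local/mysql:

        gem uninstall mysql2
        bundle config build.mysql2 --with-mysql-config=/usr/local/mysql/bin/mysql_config
        bundle install
        # If the extension builds but the MySQL client library can't be found
        # at runtime, pointing the loader at it is a common workaround on OS X:
        export DYLD_LIBRARY_PATH=/usr/local/mysql/lib:$DYLD_LIBRARY_PATH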

    Read the article

  • Getting timing consistency in Linux

    - by Jim Hunziker
    I can't seem to get a simple program (with lots of memory access) to achieve consistent timing in Linux. I'm using a 2.6 kernel, and the program is being run on a dual-core processor with realtime priority. I'm trying to disable cache effects by declaring the memory arrays as volatile. Below are the results and the program. What are some possible sources of the outliers?

    Results:

        Number of trials: 100
        Range: 0.021732s to 0.085596s
        Average Time: 0.058094s
        Standard Deviation: 0.006944s
        Extreme Outliers (2 SDs away from mean): 7
        Average Time, excluding extreme outliers: 0.059273s

    Program:

        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>
        #include <sched.h>
        #include <sys/time.h>

        #define NUM_POINTS 5000000
        #define REPS 100

        unsigned long long getTimestamp() {
            unsigned long long usecCount;
            struct timeval timeVal;
            gettimeofday(&timeVal, 0);
            usecCount = timeVal.tv_sec * (unsigned long long) 1000000;
            usecCount += timeVal.tv_usec;
            return (usecCount);
        }

        double convertTimestampToSecs(unsigned long long timestamp) {
            return (timestamp / (double) 1000000);
        }

        int main(int argc, char* argv[]) {
            unsigned long long start, stop;
            double times[REPS];
            double sum = 0;
            double scale, avg, newavg, median;
            double stddev = 0;
            double maxval = -1.0, minval = 1000000.0;
            int i, j, freq, count;
            int outliers = 0;
            struct sched_param sparam;

            sched_getparam(getpid(), &sparam);
            sparam.sched_priority = sched_get_priority_max(SCHED_FIFO);
            sched_setscheduler(getpid(), SCHED_FIFO, &sparam);

            volatile float* data;
            volatile float* results;
            data = calloc(NUM_POINTS, sizeof(float));
            results = calloc(NUM_POINTS, sizeof(float));

            for (i = 0; i < REPS; ++i) {
                start = getTimestamp();
                for (j = 0; j < NUM_POINTS; ++j) {
                    results[j] = data[j];
                }
                stop = getTimestamp();
                times[i] = convertTimestampToSecs(stop-start);
            }

            free(data);
            free(results);

            for (i = 0; i < REPS; i++) {
                sum += times[i];
                if (times[i] > maxval) maxval = times[i];
                if (times[i] < minval) minval = times[i];
            }
            avg = sum/REPS;

            for (i = 0; i < REPS; i++)
                stddev += (times[i] - avg)*(times[i] - avg);
            stddev /= REPS;
            stddev = sqrt(stddev);

            for (i = 0; i < REPS; i++) {
                if (times[i] > avg + 2*stddev || times[i] < avg - 2*stddev) {
                    sum -= times[i];
                    outliers++;
                }
            }
            newavg = sum/(REPS-outliers);

            printf("Number of trials: %d\n", REPS);
            printf("Range: %fs to %fs\n", minval, maxval);
            printf("Average Time: %fs\n", avg);
            printf("Standard Deviation: %fs\n", stddev);
            printf("Extreme Outliers (2 SDs away from mean): %d\n", outliers);
            printf("Average Time, excluding extreme outliers: %fs\n", newavg);

            return 0;
        }
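    One frequent source of outliers in a loop like this is demand paging: calloc() maps pages lazily, so early repetitions pay minor page faults (and possibly copy-on-write breaks from the shared zero page) that later ones don't, and SCHED_FIFO priority does nothing to prevent that. A hedged fragment that could go right after the two calloc() calls in main(), pinning the pages and touching them once before the timed loops; it assumes root (or CAP_IPC_LOCK) and is a diagnostic sketch, not a guaranteed fix:

        #include <string.h>
        #include <sys/mman.h>

        /* Pin current and future pages into RAM so the timed loops measure
           memory traffic rather than page-fault handling. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");
        /* Fault every page in once before the first timed repetition. */
        memset((void *)data, 0, NUM_POINTS * sizeof(float));
        memset((void *)results, 0, NUM_POINTS * sizeof(float));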

    Read the article

  • Java Refuses to Start - Could not reserve enough space for object heap

    - by Randyaa
    Background: We have a pool of approximately 20 Linux blades. Some are running SuSE, some are running RedHat. All share NAS space, which contains the following 3 folders:
    /NAS/app/java - a symlink that points to an installation of a Java JDK, currently version 1.5.0_10
    /NAS/app/lib - a symlink that points to a version of our application
    /NAS/data - directory where our output is written
    All our machines have 2 processors (hyperthreaded) with 4 GB of physical memory and 4 GB of swap space. We limit the number of 'jobs' each machine can process at a given time to 6 (this number likely needs to change, but it does not enter into the current problem, so please ignore it for the time being). Some of our jobs set a max heap size of 512 MB, some others reserve a max heap size of 2048 MB. Again, we realize we could go over our available memory if 6 jobs started on the same machine with the heap size set to 2048, but to our knowledge this has not yet occurred.

    The Problem: Once in a while a job will fail immediately with the following message:
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.
    We used to chalk this up to too many jobs running at the same time on the same machine. The problem happened infrequently enough (maybe once a month) that we'd just restart it and everything would be fine. The problem has recently gotten much worse: all of our jobs which request a max heap size of 2048m fail immediately almost every time and need to be restarted several times before completing. We've gone out to individual machines and tried executing them manually, with the same result.

    Debugging: It turns out that the problem only exists for our SuSE boxes. The reason it has been happening more frequently is because we've been adding more machines, and the new ones are SuSE. 'cat /proc/version' on the SuSE boxes gives us:
        Linux version 2.6.5-7.244-bigsmp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 12 18:32:25 UTC 2005
    'cat /proc/version' on the RedHat boxes gives us:
        Linux version 2.4.21-32.0.1.ELsmp ([email protected]) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 SMP Tue May 17 17:52:23 EDT 2005
    'uname -a' gives us the following on BOTH types of machines:
        UTC 2005 i686 i686 i386 GNU/Linux
    No jobs are running on the machine, and no other processes are utilizing much memory; all of the processes currently running might be using 100 MB total. 'top' currently shows the following:
        Mem:  4146528k total, 3536360k used,  610168k free,  132136k buffers
        Swap: 4194288k total,       0k used, 4194288k free, 3283908k cached
    'vmstat' currently shows the following:
        procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0      0 610292 132136 3283908    0    0     0     2   26   15  0  0 100  0
    If we kick off a job with the following command line (max heap of 1850 MB) it starts fine:
        java/bin/java -Xmx1850M -cp helloworld.jar HelloWorld
        Hello World
    If we bump the max heap size up to 1875 MB it fails:
        java/bin/java -Xmx1875M -cp helloworld.jar HelloWorld
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.
    It's quite clear that the memory currently in use is for buffering/caching, and that's why so little is displayed as 'free'. What isn't clear is why there is a magical 1850 MB line where anything higher means Java can't start. Any explanations would be greatly appreciated.
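    On a 32-bit JVM the heap must be reserved as one contiguous block of virtual address space, and where kernel mappings and shared libraries land differs between kernels (the SuSE boxes run a 2.6 bigsmp kernel, the RedHat boxes 2.4), so a hard ceiling near 1850m on one distro and not the other is plausible; a 64-bit JVM commonly sidesteps the limit entirely. A quick hedged probe of the actual ceiling on a given box, with arbitrary step sizes:

        #!/bin/sh
        # Probe the largest -Xmx this box's JVM will accept. Illustrative only.
        for m in 1700 1750 1800 1850 1875 1900 2048; do
            if java -Xmx${m}M -version >/dev/null 2>&1; then
                echo "${m}M: ok"
            else
                echo "${m}M: failed to reserve heap"
            fi
        done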

    Read the article

  • How to restart RoR services after server has been rebooted

    - by Alan DeLonga
    Update I have been searching around to see what services would possibly need to be restarted in my project after reboot. One of them was thinking sphinx, which I finally got to the point where it logs: [Fri Nov 16 19:34:29.820 2012] [29623] accepting connections But I still cant run searchd or searchd --stop because there was no generated sphinx.conf file in the etc/sphinxsearch for more info refer to this open thread on thinking_sphinx after reboot I then turned to looking into restarting unicorn or thin based on some insight I got. The issue is when I check my gems I see one for thin AND unicorn. But when I try to start either one of them they have no file residing in etc/init.d/ where the nginx and sphinxsearch files reside... Would rebooting totally erase the files for an app server like thin or unicorn? We are hosted on Rackspace running ruby 1.9.2p290 rails (3.2.8, 3.2.7, 3.2.0) nginx/1.1.19 notice that there are gems for unicorn and thin but there is no unicorn.rb or thin.rb in my config folder for my app... I am still super lost if any one can give me some insight on some steps to take to figure this out I would really appreciate it. Anything would help, thanks for reading. thin 1.4.1 unicorn 4.3.1 When I run unicorn I get the same issue as referenced here : > /usr/local/bin/unicorn start /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:610:in `parse_rackup_file': rackup file (start) not readable (ArgumentError) from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:76:in `reload' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:67:in `initialize' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `new' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `initialize' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `new' from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>' from /usr/local/bin/unicorn:19:in `load' from /usr/local/bin/unicorn:19:in `<main>' When I run thin it just opens a command line prompt... 
/usr/local/bin/thin start >> Using rack adapter Other gems: * LOCAL GEMS * actionmailer (3.2.8, 3.2.7, 3.2.0) actionpack (3.2.8, 3.2.7, 3.2.0) activemodel (3.2.8, 3.2.7, 3.2.0) activerecord (3.2.8, 3.2.7, 3.2.0) activeresource (3.2.8, 3.2.7, 3.2.0) activesupport (3.2.8, 3.2.7, 3.2.0) arel (3.0.2) builder (3.0.0) bundler (1.1.5) carmen (1.0.0.beta2) carmen-rails (1.0.0.beta3) cocaine (0.2.1) coffee-rails (3.2.2) coffee-script (2.2.0) coffee-script-source (1.3.3) daemons (1.1.9) erubis (2.7.0) eventmachine (0.12.10) execjs (1.4.0) faraday (0.8.4) faraday_middleware (0.8.8) foursquare2 (1.8.2) geokit (1.6.5) hashie (1.2.0) hike (1.2.1) httparty (0.8.3) httpauth (0.1) i18n (0.6.0) journey (1.0.4) jquery-rails (2.0.2) json (1.7.4, 1.7.3) jwt (0.1.5) kgio (2.7.4) lastfm (1.8.0) libv8 (3.3.10.4 x86_64-linux) mail (2.4.4) mime-types (1.19, 1.18) minitest (1.6.0) multi_json (1.3.6) multi_xml (0.5.1) multipart-post (1.1.5) mysql2 (0.3.11) oauth2 (0.8.0) paperclip (3.1.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-ssl (1.3.2) rack-test (0.6.1) rails (3.2.8, 3.2.7, 3.2.0) railties (3.2.8, 3.2.7, 3.2.0) raindrops (0.10.0, 0.9.0) rake (0.9.2.2, 0.8.7) rdoc (3.12, 2.5.8) riddle (1.5.3) sass (3.2.0, 3.1.19) sass-rails (3.2.5) sprockets (2.1.3) sqlite3 (1.3.6) sqlite3-ruby (1.3.3) therubyracer (0.10.2, 0.10.1) thin (1.4.1) thinking-sphinx (2.0.10) thor (0.16.0, 0.15.4, 0.14.6) tilt (1.3.3) treetop (1.4.10) tzinfo (0.3.33) uglifier (1.2.7, 1.2.4) unicorn (4.3.1) xml-simple (1.1.1) I am working on a project that was built by another group. I made some modifications to a constants file in the config folder (changing some values for arrays that populated some drop down fields), but the app had to be rebooted before those changes would be recognized. The hosting is through Rackspace, we rebooted through the option on their site. I contacted them and checked the status of our server, the port is open and operational. The problem is the app is not running when you go to the address for the site. Then when I put in the ip address of the server it just says "Welcome to Nginx". But in a log files I see: [Thu Nov 15 02:34:37.945 2012] [15916] caught SIGTERM, shutting down [Thu Nov 15 02:34:37.996 2012] [15916] shutdown complete I am not very versed in server side set up. I have also never worked on a Rails project that had to have specific services started before the application will start. Any insight as to how to figure out what services need to be restarted and how to go about restarting them would be greatly appreciated. I feel kind of dead in the water at this point... Thanks, Alan
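    A couple of details in the output above suggest next steps. A reboot doesn't delete gems or config files, but nothing will start unicorn, thin, or searchd again unless an init script or deploy hook does; the absence of etc/init.d entries just means they were started some other way. Also, unicorn's "rackup file (start) not readable" error appears because unicorn has no start subcommand: it treats the first bare argument as a rackup file. A hedged sketch of bringing things up manually from the app root (the exact config files depend on what the previous group left behind, so check ls config/ and rake -T first):

        cd /path/to/app
        # unicorn: no "start" verb; it defaults to ./config.ru
        bundle exec unicorn -E production -D          # -D daemonizes
        # or, if the app was deployed under thin:
        bundle exec thin start -e production -d -p 3000
        # thinking-sphinx: regenerate sphinx.conf, rebuild the index, start searchd
        bundle exec rake ts:config ts:index ts:start RAILS_ENV=production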

    Read the article

  • KVM Virtual guest Paused on Reboot

    - by David Hamilton
    I'm running REHL 6 and just installed a Ubuntu Server Guest via KVM set to start at boot. This works correctly and the guest loads, but it loads "paused" and requires that I manually un-pause it. Can someone give me a hint as to how I can I get the Guest OS to actually become active on boot? Here is the libvert dump as requested...Also tried libvert auto-start --- no effect. <domain type='kvm' id='1'> <name>MailServer</name> <uuid>a61dae75-1f5c-d536-718f-3c615d9b4868</uuid> <memory>4194304</memory> <currentMemory>4194304</currentMemory> <vcpu>4</vcpu> <os> <type arch='x86_64' machine='rhel6.0.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/home/MailServer/MailServer-1.img'/> <target dev='hda' bus='ide'/> <alias name='ide0-0-0'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> <alias name='ide0-1-0'/> <address type='drive' controller='0' bus='1' unit='0'/> </disk> <controller type='ide' index='0'> <alias name='ide0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='bridge'> <mac address='52:54:00:cd:f9:9f'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/1'/> <target port='0'/> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/1'> <source path='/dev/pts/1'/> <target port='0'/> <alias name='serial0'/> </console> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='5900' autoport='yes'/> <sound model='ac97'> <alias name='sound0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='cirrus' vram='9216' heads='1'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </memballoon> </devices> <seclabel type='dynamic' model='selinux'> <label>system_u:system_r:svirt_t:s0:c211,c271</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c211,c271</imagelabel> </seclabel></domain>
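    If the goal is simply to have the guest running (not paused) after the host boots, libvirt's own autostart flag plus an explicit resume is the usual first check; nothing in the XML dump above marks the domain for autostart, since that flag is stored separately from the domain definition. A short hedged sketch:

        # Mark the guest for automatic start when libvirtd comes up
        virsh autostart MailServer
        # Un-pause it if it is currently in the "paused" state
        virsh resume MailServer
        # Confirm the state
        virsh list --all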

    Read the article

  • add an image in listview

    - by danish
    Hi, I would like to add more images in my ListView, as the code below only displays images 1 and 2, alternating in each row. What I want to do is display a different image for each row. Here is my code below; thanks for any help. I am not good at Java, so please change the code where necessary and I can then refer to it.

        public class starters extends ListActivity {

            private static class EfficientAdapter extends BaseAdapter {
                private LayoutInflater mInflater;
                private Bitmap mIcon1;
                private Bitmap mIcon2;
                private Bitmap mIcon3;
                private Bitmap mIcon4;
                private Bitmap mIcon5;
                private Bitmap mIcon6;
                private Bitmap mIcon7;
                private Bitmap mIcon8;
                private Bitmap mIcon9;
                private Bitmap mIcon10;

                public EfficientAdapter(Context context) {
                    // Cache the LayoutInflater to avoid asking for a new one each time.
                    mInflater = LayoutInflater.from(context);
                    // Icons bound to the rows.
                    mIcon1 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters1);
                    mIcon2 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters2);
                    mIcon3 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters3);
                    mIcon4 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters4);
                    mIcon5 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters5);
                    mIcon6 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters6);
                    mIcon7 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters7);
                    mIcon8 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters8);
                    mIcon9 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters9);
                    mIcon10 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters10);
                }

                public int getCount() {
                    return DATA.length;
                }

                public Object getItem(int position) {
                    return position;
                }

                public long getItemId(int position) {
                    return position;
                }

                public View getView(int position, View convertView, ViewGroup parent) {
                    // A ViewHolder keeps references to children views to avoid unnecessary
                    // calls to findViewById() on each row.
                    ViewHolder holder;
                    // When convertView is not null, we can reuse it directly; there is no
                    // need to reinflate it. We only inflate a new View when the convertView
                    // supplied by ListView is null.
                    if (convertView == null) {
                        convertView = mInflater.inflate(R.layout.starters, null);
                        // Creates a ViewHolder and stores references to the children views
                        // we want to bind data to.
                        holder = new ViewHolder();
                        holder.text = (TextView) convertView.findViewById(R.id.text01);
                        holder.text = (TextView) convertView.findViewById(R.id.secondLine);
                        holder.icon = (ImageView) convertView.findViewById(R.id.icon01);
                        convertView.setTag(holder);
                    } else {
                        // Get the ViewHolder back to get fast access to the TextView
                        // and the ImageView.
                        holder = (ViewHolder) convertView.getTag();
                    }
                    // Bind the data efficiently with the holder.
                    holder.text.setText(DATA[position]);
                    holder.icon.setImageBitmap((position & 1) == 1 ? mIcon1 : mIcon2);
                    return convertView;
                }

                static class ViewHolder {
                    TextView text;
                    ImageView icon;
                }
            }

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setListAdapter(new EfficientAdapter(this));
            }

            private static final String[] DATA = {
                "Original nachos",
                "Toasted chicken and cheese quesadillas",
                "Chicken, lime and coriander nachos",
                "Spicy bean and cheese quesadillas",
                "Tuna and corn quesadillas",
                "Cheesy bean and sweetcorn nachos",
                "Crispy chicken, avocado and lime salad",
                "Beef and baby corn tostada",
                "Spicy mexican rice with chicken and prawns",
                "Chilli potato boats" };
        }
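    The ternary (position & 1) == 1 ? mIcon1 : mIcon2 is why only two images ever appear: it can only choose between mIcon1 and mIcon2. Since there is one bitmap per row here, one simple restructuring is to keep the icons in an array and index it by row position. The names mirror the question's; treat this as a sketch rather than a drop-in replacement.

        // Sketch: hold the row icons in an array instead of ten fields.
        private Bitmap[] mIcons;

        public EfficientAdapter(Context context) {
            mInflater = LayoutInflater.from(context);
            int[] ids = { R.drawable.starters1, R.drawable.starters2,
                          R.drawable.starters3, R.drawable.starters4,
                          R.drawable.starters5, R.drawable.starters6,
                          R.drawable.starters7, R.drawable.starters8,
                          R.drawable.starters9, R.drawable.starters10 };
            mIcons = new Bitmap[ids.length];
            for (int i = 0; i < ids.length; i++) {
                mIcons[i] = BitmapFactory.decodeResource(context.getResources(), ids[i]);
            }
        }

        // ...and in getView(), pick the icon belonging to this row:
        holder.icon.setImageBitmap(mIcons[position % mIcons.length]);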

    Read the article

  • PHP MINISERVER DOWNLOAD RESUME-ERROR! Resource id # 4

    - by snikolov
        $httpsock = @socket_create_listen("9090");
        if (!$httpsock) {
            print "Socket creation failed!\n";
            exit;
        }

        while (1) {
            $client = socket_accept($httpsock);
            $input = trim(socket_read($client, 4096));
            $input = explode(" ", $input);
            $range = $input[12];
            $input = $input[1];
            $fileinfo = pathinfo($input);
            switch ($fileinfo['extension']) {
                default:
                    $mime = "text/html";
            }

            if ($input == "/") {
                $input = "index.html";
            }
            $input = ".$input";

            if (file_exists($input) && is_readable($input)) {
                echo "Serving $input\n";
                $contents = file_get_contents($input);
                $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents";
            } else {
                //$contents = "The file you requested doesn't exist. Sorry!";
                //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents";
                if (isset($range)) {
                    list($a, $range) = explode("=", $range);
                    str_replace($range, "-", $range);
                    $size2 = $size - 1;
                    $new_length = $size - $range;
                    $output = "HTTP/1.1 206 Partial Content\r\n";
                    $output .= "Content-Length: $new_length\r\n";
                    $output .= "Content-Range: bytes $range$size2/$size\r\n";
                } else {
                    $size2 = $size - 1;
                    $output .= "Content-Length: $new_length\r\n";
                }
                $chunksize = 1 * (1024 * 1024);
                $bytes_send = 0;
                $file = "a.mp3";
                $filesize = filesize($file);
                if ($file = fopen($file, 'r')) {
                    if (isset($range))
                        $output = 'HTTP/1.0 200 OK\r\n';
                    $output .= "Content-type: application/octet-stream\r\n";
                    $output .= "Content-Length: $filesize\r\n";
                    $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n';
                    $output .= "Accept-Ranges: bytes\r\n";
                    $output .= "Cache-Control: private\n\n";
                    fseek($file, $range);
                    $download_rate = 1000;
                    while (!feof($file) and (connection_status() == 0)) {
                        $var_stat = fread($file, round($download_rate * 1024));
                        $output .= $var_stat; //echo($buffer); // is also possible
                        flush();
                        sleep(1); //// decrease download speed
                    }
                    fclose($file);
                }
                /**
                $filename = "dada";
                $file = fopen($filename, 'r');
                $filesize = filesize($filename);
                $buffer = fread($file, $filesize);
                $send = array("Output"=$buffer,"filesize"=$filesize,"filename"=$filename);
                $file = $send['filename'];
                */
                //@ob_end_clean();
                // $output .= "Content-Transfer-Encoding: binary";
                //$output .= "Connection: Keep-Alive\r\n";
            }
            socket_write($client, $output);
            socket_close($client);
        }
        socket_close($httpsock);

    Hey guys, I have created a mini webserver downloader: it can serve file downloads from your server. However, I am unable to resume a download; when I download the file I get "Resource id # 4" and the download cannot be resumed. I would also like to know how I can monitor and record the client's transfer, i.e. how much bandwidth each client has downloaded. Perl has something like this, but it's hardcore. If possible, kindly provide me with some pointers. Thank you :)
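    Two concrete observations on the code above, then a hedged sketch. The "Resource id # 4" output appears because $file is reused for both the file name and the fopen() handle: after if ($file = fopen($file, 'r')), the Content-Disposition header concatenates a resource handle into the string. And the Range header is never parsed into a usable byte offset, so fseek() receives garbage. A minimal shape for a resumable response, plus a per-client byte counter, might look like this; it is an illustration, not a drop-in patch:

        // Sketch: serve "Range: bytes=START-" requests for one local file.
        $path = 'a.mp3';                      // file name stays in its own variable
        $size = filesize($path);
        $start = 0;
        if (isset($range) && preg_match('/bytes=(\d+)-/', $range, $m)) {
            $start = (int)$m[1];              // resume offset requested by client
        }
        $length = $size - $start;
        $headers  = $start > 0 ? "HTTP/1.1 206 Partial Content\r\n"
                               : "HTTP/1.1 200 OK\r\n";
        $headers .= "Content-Type: application/octet-stream\r\n";
        $headers .= "Content-Length: $length\r\n";
        $headers .= "Accept-Ranges: bytes\r\n";
        if ($start > 0) {
            $end = $size - 1;
            $headers .= "Content-Range: bytes $start-$end/$size\r\n";
        }
        $headers .= "\r\n";
        socket_write($client, $headers);
        $fh = fopen($path, 'rb');             // handle in a separate variable
        fseek($fh, $start);
        $sent = 0;                            // running total = client bandwidth
        while (!feof($fh)) {
            $chunk = fread($fh, 8192);
            socket_write($client, $chunk);
            $sent += strlen($chunk);
        }
        fclose($fh);
        // $sent can now be logged per client for bandwidth accounting.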

    Read the article

  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    On a system I'm running Ubuntu 10.04. My raid-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!): dimmer@paimon:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdc1[2] sdb1[1] 1953513408 blocks [2/1] [_U] [====>................] recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec unused devices: <none> Eventhough I have set the kernel variables to reasonably quick values: dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min 1000000 dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max 100000000 I am using 2 2.0TB Western Digital Hard Disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned. dimmer@paimon:/sys$ sudo parted /dev/sdb GNU Parted 2.2 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00M (scsi) Disk /dev/sdb: 2000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 (parted) unit s (parted) p Number Start End Size File system Name Flags 1 2048s 3907028991s 3907026944s ext4 (parted) q dimmer@paimon:/sys$ sudo parted /dev/sdc GNU Parted 2.2 Using /dev/sdc Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: ATA WDC WD20EARS-00J (scsi) Disk /dev/sdc: 2000GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 2000GB 2000GB ext4 I am beginning to think that I have a hardware problem, otherwise I can't imagine why the mdadm restore should be so slow. I have done a benchmark on /dev/sdc using Ubuntu's disk utility GUI app, and the results looked normal so I know that sdc has the capability to write faster than this. I also had the same problem on a similar WD drive that I RMAd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks. As requested, output of top sorted by cpu usage (notice there is ~0 cpu usage). iowait is also zero which seems strange: top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30 Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers Swap: 1526132k total, 0k used, 1526132k free, 535416k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 45 root 20 0 0 0 0 S 0 0.0 2:17.02 scsi_eh_0 1 root 20 0 2808 1752 1204 S 0 0.1 0:00.46 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.17 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/1 ... 
dmesg errors, definitely looking like hardware: [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [202884.007015] ata5.00: failed command: FLUSH CACHE EXT [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 [202884.013730] res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout) [202884.033667] ata5.00: status: { DRDY } [202884.040329] ata5: hard resetting link [202889.400050] ata5: link is slow to respond, please be patient (ready=0) [202894.048087] ata5: COMRESET failed (errno=-16) [202894.054663] ata5: hard resetting link [202899.412049] ata5: link is slow to respond, please be patient (ready=0) [202904.060107] ata5: COMRESET failed (errno=-16) [202904.066646] ata5: hard resetting link [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [202905.849178] ata5.00: configured for UDMA/133 [202905.849188] ata5: EH complete [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [203899.007096] ata5.00: failed command: IDENTIFY DEVICE [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [203899.013843] res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout) [203899.041232] ata5.00: status: { DRDY } [203899.048133] ata5: hard resetting link [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [203899.826062] ata5.00: configured for UDMA/133 [203899.826079] ata5: EH complete [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204375.007421] ata5.00: failed command: IDENTIFY DEVICE [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204375.014800] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204375.044374] ata5.00: status: { DRDY } [204375.051842] ata5: hard resetting link [204380.408049] ata5: link is slow to respond, please be patient (ready=0) [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204384.449938] ata5.00: configured for UDMA/133 [204384.449955] ata5: EH complete [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [204395.988140] ata5.00: failed command: IDENTIFY DEVICE [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in [204395.988149] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout) [204395.988151] ata5.00: status: { DRDY } [204395.988156] ata5: hard resetting link [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [204399.330487] ata5.00: configured for UDMA/133 [204399.330503] ata5: EH complete
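    Given the repeated COMRESET failures and "link is slow to respond" resets on ata5 in the dmesg output above, this reads less like an mdadm tuning problem and more like a drive, cable, or controller port that keeps stalling, with each ~30-second reset cycle collapsing the average rebuild speed. Some hedged first checks:

        # SMART health, attributes and error log for the suspect drive
        sudo smartctl -a /dev/sdc
        # Long self-test, runs in the background on the drive itself
        sudo smartctl -t long /dev/sdc
        # Watch for further link resets while the rebuild runs
        dmesg | grep -i 'ata5'
        # Confirm the md rebuild speed floor/ceiling actually took effect
        cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

    Swapping the SATA cable or moving the drive to another port is a cheap way to separate "disk" from "controller" if SMART looks clean.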

    Read the article

  • Packets marked by iptables only sent to the correct routing table sometimes

    - by cookiecaper
    I am trying to route packets generated by a specific user out over a VPN. I have this configuration: $ sudo iptables -S -t nat -P PREROUTING ACCEPT -P OUTPUT ACCEPT -P POSTROUTING ACCEPT -A POSTROUTING -o tun0 -j MASQUERADE $ sudo iptables -S -t mangle -P PREROUTING ACCEPT -P INPUT ACCEPT -P FORWARD ACCEPT -P OUTPUT ACCEPT -P POSTROUTING ACCEPT -A OUTPUT -m owner --uid-owner guy -j MARK --set-xmark 0xb/0xffffffff $ sudo ip rule show 0: from all lookup local 32765: from all fwmark 0xb lookup 11 32766: from all lookup main 32767: from all lookup default $ sudo ip route show table 11 10.8.0.5 dev tun0 proto kernel scope link src 10.8.0.6 10.8.0.6 dev tun0 scope link 10.8.0.1 via 10.8.0.5 dev tun0 0.0.0.0/1 via 10.8.0.5 dev tun0 $ sudo iptables -S -t raw -P PREROUTING ACCEPT -P OUTPUT ACCEPT -A OUTPUT -m owner --uid-owner guy -j TRACE -A OUTPUT -p tcp -m tcp --dport 80 -j TRACE It seems that some sites work fine and use the VPN, but others don't and fall back to the normal interface. This is bad. This is a packet trace that used VPN: Oct 27 00:24:28 agent kernel: [612979.976052] TRACE: raw:OUTPUT:rule:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976105] TRACE: raw:OUTPUT:policy:3 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976164] TRACE: mangle:OUTPUT:rule:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 Oct 27 00:24:28 agent kernel: [612979.976210] TRACE: mangle:OUTPUT:policy:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976269] TRACE: nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976320] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976367] TRACE: mangle:POSTROUTING:policy:1 IN= OUT=tun0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:24:28 agent kernel: [612979.976414] TRACE: nat:POSTROUTING:rule:1 IN= OUT=tun0 SRC=XXX.YYY.ZZZ.AAA DST=23.1.17.194 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=14494 DF PROTO=TCP SPT=57502 DPT=80 SEQ=2294732931 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT 
(020405B40402080A03A6E01D0000000001030307) UID=999 GID=999 MARK=0xb and this is one that didn't: Oct 27 00:22:41 agent kernel: [612873.662559] TRACE: raw:OUTPUT:rule:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662609] TRACE: raw:OUTPUT:policy:3 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662664] TRACE: mangle:OUTPUT:rule:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 Oct 27 00:22:41 agent kernel: [612873.662709] TRACE: mangle:OUTPUT:policy:2 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662761] TRACE: nat:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662808] TRACE: filter:OUTPUT:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb Oct 27 00:22:41 agent kernel: [612873.662855] TRACE: mangle:POSTROUTING:policy:1 IN= OUT=eth0 SRC=XXX.YYY.ZZZ.AAA DST=209.68.27.16 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=40425 DF PROTO=TCP SPT=45305 DPT=80 SEQ=604973951 ACK=0 WINDOW=5840 RES=0x00 SYN URGP=0 OPT (020405B40402080A03A6B6960000000001030307) UID=999 GID=999 MARK=0xb I have already tried "ip route flush cache", to no avail. I do not know why the first packet goes through the correct routing table, and the second doesn't. Both are marked. Once again, I do not want ALL packets system-wide to go through the VPN, I only want packets from a specific user (UID=999) to go through the VPN. I am testing ipchicken.com and walmart.com via links, from the same user, same shell. walmart.com appears to use the VPN; ipchicken.com does not. Any help appreciated. Will send 0.5 bitcoins to answerer who makes this fixed.
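    One detail in the routing output above would produce exactly this split behaviour: table 11 contains 0.0.0.0/1 via 10.8.0.5 but no matching 128.0.0.0/1 route (OpenVPN's redirect-gateway def1 normally installs the pair). 23.1.17.194 falls inside 0.0.0.0/1 and goes out tun0; 209.68.27.16 falls in the upper half of the address space, matches nothing in table 11, and rule processing falls through to the main table and eth0. If that is the cause, a hedged fix is to cover the whole address space in table 11:

        # Add the missing upper half...
        sudo ip route add 128.0.0.0/1 via 10.8.0.5 dev tun0 table 11
        # ...or use a single default route instead of the /1 pair:
        # sudo ip route add default via 10.8.0.5 dev tun0 table 11
        sudo ip route flush cache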

    Read the article

  • How do I make solr/jetty find the installed slf4j jars in Ubuntu 12.04?

    - by J. Pablo Fernández
    I'm running Ubuntu 12.04's packaged Jetty in which I installed solr 4.3.1 (by copying the war file to /var/lib/jetty/webapps. When I start Jetty, I get this error: failed SolrRequestFilter: org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. The package libslf4j-java is installed, and the jars are in /usr/share/java: /usr/share/java/log4j-over-slf4j.jar /usr/share/java/slf4j-api.jar /usr/share/java/slf4j-jcl.jar /usr/share/java/slf4j-jdk14.jar /usr/share/java/slf4j-log4j12.jar /usr/share/java/slf4j-migrator.jar /usr/share/java/slf4j-nop.jar /usr/share/java/slf4j-simple.jar but somehow, Jetty and/or Solr are not finding them. How do I make them find them? or how do I install some other jars where jetty/solr would find them? The full error is: 88 [main] INFO org.mortbay.log - jetty-6.1.24 443 [main] INFO org.mortbay.log - Deploy /etc/jetty/contexts/javadoc.xml -> org.mortbay.jetty.handler.ContextHandler@cec0c5{/javadoc,file:/usr/share/jetty/javadoc} 522 [main] INFO org.mortbay.log - Extract file:/var/lib/jetty/webapps/solr.war to /var/cache/jetty/data/Jetty__8080_solr.war__solr__zdafkg/webapp 1501 [main] WARN org.mortbay.log - failed SolrRequestFilter: org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. For other containers, the corresponding directory should be used. For more information, see: http://wiki.apache.org/solr/SolrLogging 1501 [main] ERROR org.mortbay.log - Failed startup of context org.mortbay.jetty.webapp.WebAppContext@5329c5{/solr,file:/var/lib/jetty/webapps/solr.war} org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. For other containers, the corresponding directory should be used. 
For more information, see: http://wiki.apache.org/solr/SolrLogging at org.apache.solr.servlet.SolrDispatchFilter.<init>(SolrDispatchFilter.java:105) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:532) at java.lang.Class.newInstance0(Class.java:374) at java.lang.Class.newInstance(Class.java:327) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:92) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:662) at org.mortbay.jetty.servlet.Context.startContext(Context.java:140) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) Caused by: java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory at org.apache.solr.servlet.SolrDispatchFilter.<init>(SolrDispatchFilter.java:103) ... 36 more Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:392) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363) ... 
37 more 1505 [main] WARN org.mortbay.log - failed org.mortbay.jetty.webapp.WebAppContext@5329c5{/solr,file:/var/lib/jetty/webapps/solr.war}: java.lang.NoClassDefFoundError: org/slf4j/Logger 1579 [main] WARN org.mortbay.log - failed ContextHandlerCollection@19d0a1: java.lang.NoClassDefFoundError: org/slf4j/Logger 1582 [main] INFO org.mortbay.log - Opened /var/log/jetty/2013_06_27.request.log 1582 [main] WARN org.mortbay.log - failed HandlerCollection@cbf30e: java.lang.NoClassDefFoundError: org/slf4j/Logger 1582 [main] ERROR org.mortbay.log - Error starting handlers java.lang.NoClassDefFoundError: org/slf4j/Logger at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2454) at java.lang.Class.getMethod0(Class.java:2697) at java.lang.Class.getMethod(Class.java:1622) at org.mortbay.log.Log.unwind(Log.java:228) at org.mortbay.log.Log.warn(Log.java:197) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:475) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:392) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363) ... 29 more
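Given the error's own hint, one way out is to put the SLF4j jars (plus a log4j binding) on Jetty's server classpath. A minimal sketch, assuming the Ubuntu jetty package keeps its home in /usr/share/jetty and reads extra jars from lib/ext; adjust the paths if your layout differs:

    sudo mkdir -p /usr/share/jetty/lib/ext
    # slf4j-api plus exactly one binding; log4j-1.2.jar comes from the liblog4j1.2-java package
    sudo ln -s /usr/share/java/slf4j-api.jar /usr/share/jetty/lib/ext/
    sudo ln -s /usr/share/java/slf4j-log4j12.jar /usr/share/jetty/lib/ext/
    sudo ln -s /usr/share/java/log4j-1.2.jar /usr/share/jetty/lib/ext/
    sudo service jetty restart

Putting the jars on the server classpath (rather than inside the war) matters here because SolrDispatchFilter references org.slf4j.LoggerFactory before the webapp's own libraries are usable, which is exactly the NoClassDefFoundError in the trace above.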

    Read the article

  • RackSpace Cloud Strips $_SESSION if URL Has Certain File Extensions

    - by macinjosh
The Situation
I am creating a video training site for a client on the RackSpace Cloud using the traditional LAMP stack (RackSpace's cloud has both Windows and LAMP stacks). The videos and other media files I'm serving on this site need to be protected, as my client charges money for access to them. There is no DRM or funny business like that; essentially, we store the files outside of the web root and use PHP to authenticate users before they are able to access the files, using mod_rewrite to run the request through PHP. So let's say the user requests a file at this URL: http://www.example.com/uploads/preview_image/29.jpg I am using mod_rewrite to rewrite that URL to: http://www.example.com/files.php?path=%2Fuploads%2Fpreview_image%2F29.jpg Here is a simplified version of the files.php script:
<?php
// Sets up the environment and sets $logged_in
// This part requires $_SESSION
require_once('../../includes/user_config.php');

if (!$logged_in) {
    // Redirect non-authenticated users
    header('Location: login.php');
}

// This user is authenticated, continue
$content_type = "image/jpeg";

// getAbsolutePathForRequestedResource() takes
// a Query Parameter called path and uses DB
// lookups and some string manipulation to get
// an absolute path. This part doesn't have
// any bearing on the problem at hand
$file_path = getAbsolutePathForRequestedResource($_GET['path']);

// At this point $file_path looks something like
// this: "/path/to/a/place/outside/the/webroot"
if (file_exists($file_path) && !is_dir($file_path)) {
    header("Content-Type: $content_type");
    header('Content-Length: ' . filesize($file_path));
    echo file_get_contents($file_path);
} else {
    header('HTTP/1.0 404 Not Found');
    header('Status: 404 Not Found');
    echo '404 Not Found';
}
exit();
?>
The Problem
Let me start by saying this works perfectly for me. On local test machines it works like a charm. However, once deployed to the cloud, it stops working. After some debugging it turns out that if a request to the cloud has certain file extensions like .JPG, .PNG, or .SWF (i.e. extensions of typically static media files), the request is routed to a cache system called Varnish. The end result of this routing is that by the time this whole process makes it to my PHP script, the session is not present. If I change the extension in the URL to .PHP, or if I even add a query parameter, Varnish is bypassed and the PHP script can get the session. No problem, right? I'll just add a meaningless query parameter to my requests! Here is the rub: the media files I am serving through this system are being requested through compiled SWF files that I have zero control over. They are generated by third-party software and I have no hope of adding or changing the URLs that they request. What other options do I have? Update: I should note that I have verified this behavior with RackSpace support and they have said there is nothing they can do about it.
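For reference, the rewrite described above would look something like the following. This is a hypothetical rule, since the actual .htaccess was not shown in the post:

    RewriteEngine On
    # Route requests for protected media through the authenticating PHP script
    RewriteRule ^uploads/(.+)$ /files.php?path=/uploads/$1 [QSA,L]

Note that appending a query string inside this rule would not bypass Varnish, since the rewrite runs on the web server, behind the cache; only the URL the client actually requests is seen by Varnish.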

    Read the article

  • ASP.NET MVC - Form Post always redirect when I just want to bind json result to a div

    - by Saxman
Hi all, I'm having a little problem with a JSON result. I'm submitting a contact form, and after the submission I just want to return some JSON data (indicating success or failure and displaying a message) back to the view, without causing a redirect, but it keeps redirecting me to the action. Here is the code: HTML
<div id="contactForm2" class="grid_6">
    <div id="contactFormContainer">
        @using (Html.BeginForm(MVC.Home.ActionNames.ContactForm, MVC.Home.Name, FormMethod.Post, new { id = "contactForm" }))
        {
            <p>
                <input type="text" tabindex="1" size="22" value="" id="contactName" class="text_input required" name="contactName" />
                <label for="contactName">
                    <strong class="leftSpace">Your Name (required)</strong></label></p>
            <p>
                <input type="text" tabindex="2" size="22" value="" id="contactEmail" class="text_input required" name="contactEmail" />
                <label for="contactEmail">
                    <strong class="leftSpace">Email (required)</strong></label></p>
            <p>
                <input type="text" tabindex="2" size="22" value="" id="contactPhone" class="text_input" name="contactPhone" />
                <label for="contactPhone">
                    <strong class="leftSpace">Phone</strong></label></p>
            <p>
                <label>
                    <strong class="leftSpace n">Message (required)</strong></label>
                <textarea tabindex="4" rows="4" cols="56" id="contactMessage" class="text_area required" name="contactMessage"></textarea></p>
            <p>
                <input type="submit" value="Send" tabindex="5" id="contactSubmit" class="button submit" name="contactSubmit" /></p>
        }
    </div>
    <div id="contactFormStatus">
    </div>
</div>
Controller
[HttpPost]
public virtual JsonResult ContactForm(FormCollection formCollection)
{
    var name = formCollection["contactName"];
    var email = formCollection["contactEmail"];
    var phone = formCollection["contactPhone"];
    var message = formCollection["contactMessage"];
    if (string.IsNullOrEmpty(name) || string.IsNullOrEmpty(email) || string.IsNullOrEmpty(message))
    {
        return Json(new { success = false, message = "Please complete all the required fields so that I can get back to you. Thanks." });
    }
    // Insert contact form data here...
    return Json(new { success = true, message = "Your inquery has been sent. Thank you." });
}
JavaScript
$(document).ready(function () {
    $('#contactSubmit').live('click', function () {
        var form = $('#contactForm');
        var formData = form.serialize();
        $.post('/Home/ContactForm', formData,
            function (result) {
                var status = $('#contactFormStatus');
                if (result.success) {
                    $('#contactForm')[0].reset;
                }
                status.append(result.message);
            }, 'json'
        );
        return false;
    });
});
I've also tried this JavaScript, but also got a redirect:
$(document).ready(function () {
    $('#contactSubmit').live('click', function () {
        var form = $('#contactForm');
        var formData = form.serialize();
        $.ajax({
            type: 'POST',
            url: '/Home/ContactForm',
            data: formData,
            success: function (result) {
                SubmitContactResult(result);
            },
            cache: false
        });
    });

    function SubmitContactResult(result) {
        var status = $('#contactFormStatus');
        if (result.success) {
            $('#contactForm')[0].reset;
        }
        status.append(result.message);
    }
});
Any idea what's going on with my code? Thank you very much.
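Two observations, not from the original post: the second script never returns false or calls preventDefault(), which by itself would let the browser perform its normal post; and reset without parentheses is a no-op property access, not a call. A sketch of a shape that cancels the default post up front, so the page cannot navigate even if the handler later throws (the selectors come from the markup above):

    $(document).ready(function () {
        $('#contactForm').submit(function (e) {
            e.preventDefault(); // cancel the normal post before anything else
            $.post('/Home/ContactForm', $(this).serialize(), function (result) {
                var status = $('#contactFormStatus');
                if (result.success) {
                    $('#contactForm')[0].reset(); // note: reset(), with parentheses
                }
                status.html(result.message);
            }, 'json');
        });
    });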

    Read the article

  • ERROR: Failed to build gem native extension (mysql2 on rails 3.2.3)

    - by Ryan Arneson
I'm trying to install the mysql2 gem with Rails 3.2.3 and it's failing:
? bundle install
Fetching gem metadata from https://rubygems.org/.........
Using rake (0.9.2.2)
Using i18n (0.6.0)
Using multi_json (1.2.0)
Using activesupport (3.2.3)
Using builder (3.0.0)
Using activemodel (3.2.3)
Using erubis (2.7.0)
Using journey (1.0.3)
Using rack (1.4.1)
Using rack-cache (1.2)
Using rack-test (0.6.1)
Using hike (1.2.1)
Using tilt (1.3.3)
Using sprockets (2.1.2)
Using actionpack (3.2.3)
Using mime-types (1.18)
Using polyglot (0.3.3)
Using treetop (1.4.10)
Using mail (2.4.4)
Using actionmailer (3.2.3)
Using arel (3.0.2)
Using tzinfo (0.3.32)
Using activerecord (3.2.3)
Using activeresource (3.2.3)
Using bundler (1.1.3)
Using coffee-script-source (1.2.0)
Using execjs (1.3.0)
Using coffee-script (2.2.0)
Using rack-ssl (1.3.2)
Using json (1.6.6)
Using rdoc (3.12)
Using thor (0.14.6)
Using railties (3.2.3)
Using coffee-rails (3.2.2)
Using jquery-rails (2.0.2)
Installing mysql2 (0.3.11) with native extensions
Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
/Users/rarneson/.rvm/rubies/ruby-1.9.3-p125/bin/ruby extconf.rb
checking for rb_thread_blocking_region()... yes
checking for rb_wait_for_single_fd()... yes
checking for mysql_query() in -lmysqlclient... no
checking for main() in -lm... yes
checking for mysql_query() in -lmysqlclient... no
checking for main() in -lz... yes
checking for mysql_query() in -lmysqlclient... no
checking for main() in -lsocket... no
checking for mysql_query() in -lmysqlclient... no
checking for main() in -lnsl... no
checking for mysql_query() in -lmysqlclient... no
checking for main() in -lmygcc... no
checking for mysql_query() in -lmysqlclient... no
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.
Provided configuration options: --with-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/Users/rarneson/.rvm/rubies/ruby-1.9.3-p125/bin/ruby --with-mysql-config --without-mysql-config --with-mysql-dir --without-mysql-dir --with-mysql-include --without-mysql-include=${mysql-dir}/include --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib --with-mysqlclientlib --without-mysqlclientlib --with-mlib --without-mlib --with-mysqlclientlib --without-mysqlclientlib --with-zlib --without-zlib --with-mysqlclientlib --without-mysqlclientlib --with-socketlib --without-socketlib --with-mysqlclientlib --without-mysqlclientlib --with-nsllib --without-nsllib --with-mysqlclientlib --without-mysqlclientlib --with-mygcclib --without-mygcclib --with-mysqlclientlib --without-mysqlclientlib
Gem files will remain installed in /Users/rarneson/.rvm/gems/ruby-1.9.3-p125/gems/mysql2-0.3.11 for inspection. Results logged to /Users/rarneson/.rvm/gems/ruby-1.9.3-p125/gems/mysql2-0.3.11/ext/mysql2/gem_make.out
An error occured while installing mysql2 (0.3.11), and Bundler cannot continue. Make sure that `gem install mysql2 -v '0.3.11'` succeeds before bundling.
I'm running bundle install and this is in my Gemfile: gem 'mysql2', '~> 0.3.11' I've currently got MySQL running through MAMP. I'm not sure if this is a bad idea and I should run a vanilla MySQL, but it seems my current problem is just getting the gem installed.
I've seen quite a few of these problems here on Stack Overflow, but they all seem a bit different or have really complicated solutions. Is there something I'm missing? Something simple? Something stupid? I can provide additional info from the out file if necessary. I've read that some people use SQLite for dev and test and then MySQL in prod, but that sounds like a pretty horrible idea.
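The mkmf log above shows every mysql_query() check failing, which usually means extconf can't find the MySQL client library. With MySQL running from MAMP, one possible fix is to point the build at MAMP's mysql_config. This is a sketch assuming MAMP's default path (and note MAMP ships without the C headers, so a vanilla MySQL, e.g. from Homebrew, may turn out simpler):

    # Hypothetical path; verify mysql_config actually exists there first
    gem install mysql2 -v '0.3.11' -- --with-mysql-config=/Applications/MAMP/Library/bin/mysql_config

    # Or record it once so every `bundle install` picks it up
    bundle config build.mysql2 --with-mysql-config=/Applications/MAMP/Library/bin/mysql_config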

    Read the article

  • Choosing a type for search results in C#

    - by Chris M
I have a result set that will never exceed 500; the results that come back from the web-service are assigned to a search results object. The data from the web-service is about 2MB; the bit I want to use is about a third of each record, so this allows me to cache and quickly manipulate it. I want to be able to sort and filter the results with the least amount of overhead and as fast as possible, so I used the VCSKICKS timing class to measure their performance:

Type         Create (avg)   Sort (avg)   Create (total, 10,000 runs)   Sort (total, 10,000 runs)
HashSet      0.1579         0.0003       1579                          3
IList        0.0633         0.0002       633                           2
IQueryable   0.0072         0.0432       72                            432

Measured in seconds using http://www.vcskicks.com/algorithm-performance.php

I created the HashSet through a foreach loop over the web-service response (adding to the HashSet). The List & IQueryable were created using LINQ.
Question: I can understand why HashSet takes longer to create (the foreach loop vs LINQ); but why would IQueryable take longer to sort than the other two? And finally, is there a better way to assign the HashSet? Thanks.
Actual Program
public class Program
{
    private static AuthenticationHeader _authHeader;
    private static OPSoapClient _opSession;
    private static AccommodationSearchResponse _searchResults;
    private static HashSet<SearchResults> _myHash;
    private static IList<SearchResults> _myList;
    private static IQueryable<SearchResults> _myIQuery;

    static void Main(string[] args)
    {
        #region Setup WebService
        _authHeader = new AuthenticationHeader { UserName = "xx", Password = "xx" };
        _opSession = new OPSoapClient();

        #region Setup Search Results
        _searchResults = _opSession.SearchCR(_authHeader, "ENG", "GBP", "GBR");
        #endregion Setup Search Results
        #endregion Setup WebService

        // HASHSET
        SpeedTester hashTest = new SpeedTester(TestHashSet);
        hashTest.RunTest();
        Console.WriteLine("- Hash Test \nAverage Running Time: {0}; Total Time: {1}", hashTest.AverageRunningTime, hashTest.TotalRunningTime);
        SpeedTester hashSortTest = new SpeedTester(TestSortingHashSet);
        hashSortTest.RunTest();
        Console.WriteLine("- Hash Sort Test \nAverage Running Time: {0}; Total Time: {1}", hashSortTest.AverageRunningTime, hashSortTest.TotalRunningTime);

        // ILIST
        SpeedTester listTest = new SpeedTester(TestList);
        listTest.RunTest();
        Console.WriteLine("- List Test \nAverage Running Time: {0}; Total Time: {1}", listTest.AverageRunningTime, listTest.TotalRunningTime);
        SpeedTester listSortTest = new SpeedTester(TestSortingList);
        listSortTest.RunTest();
        Console.WriteLine("- List Sort Test \nAverage Running Time: {0}; Total Time: {1}", listSortTest.AverageRunningTime, listSortTest.TotalRunningTime);

        // IQUERIABLE
        SpeedTester iqueryTest = new SpeedTester(TestIQueriable);
        iqueryTest.RunTest();
        Console.WriteLine("- iquery Test \nAverage Running Time: {0}; Total Time: {1}", iqueryTest.AverageRunningTime, iqueryTest.TotalRunningTime);
        SpeedTester iquerySortTest = new SpeedTester(TestSortableIQueriable);
        iquerySortTest.RunTest();
        Console.WriteLine("- iquery Sort Test \nAverage Running Time: {0}; Total Time: {1}", iquerySortTest.AverageRunningTime, iquerySortTest.TotalRunningTime);
    }

    static void TestHashSet()
    {
        var test = _searchResults.Items;
        _myHash = new HashSet<SearchResults>();
        foreach (var x in test)
        {
            _myHash.Add(new SearchResults { Ref = x.Ref, Price = x.StandardPrice });
        }
    }

    static void TestSortingHashSet()
    {
        var sorted = _myHash.OrderBy(s => s.Price);
    }

    static void TestList()
    {
        var test = _searchResults.Items;
        _myList = (from x in test
                   select new SearchResults { Ref = x.Ref, Price = x.StandardPrice }).ToList();
    }

    static void TestSortingList()
    {
        var sorted = _myList.OrderBy(s => s.Price);
    }

    static void TestIQueriable()
    {
        var test = _searchResults.Items;
        _myIQuery = (from x in test
                     select new SearchResults { Ref = x.Ref, Price = x.StandardPrice }).AsQueryable();
    }

    static void TestSortableIQueriable()
    {
        var sorted = _myIQuery.OrderBy(s => s.Price);
    }
}
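One caveat about the benchmark itself (an observation not in the original post): OrderBy is deferred for every one of these types, so the "Sort" timings above mostly measure query construction rather than an actual sort; the IQueryable case looks slower only because building and preparing its expression tree costs more up front. A sketch of forcing execution so the numbers compare like for like:

    static void TestSortingHashSet()
    {
        // Materialize the ordered sequence; without ToList() the sort never runs.
        var sorted = _myHash.OrderBy(s => s.Price).ToList();
    }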

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
I have a system with a download.php page. The page takes an id, loads a file based on the matching DB record, and then serves it up. I've noticed a couple instances where files are requested multiple times in short time spans (20ms), times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior. For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58! I'm suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different from above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

Request time   Complete time
04:32:43       04:33:36
04:32:50       04:33:36
04:32:51       04:33:38
04:33:05       04:33:38
04:33:34       04:33:42
04:33:05       04:33:42

So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing the evidence of some server-level overload playing out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php . It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if this press+125.php is a rabbit hole, but there is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.
///DOWNLOAD.php
$file = new files();
$file->comparison_filter("id", "=", $id); //sql to load
if ($file->load()) {
    $file->serve();
}

//FILES
function serve() {
    if ($this->is_loaded) {
        if (file_exists($this->get_value("filename"))) {
            if ($this->get_value("content_type") != "") {
                header("Content-Type: " . $this->get_value("content_type"));
            }
            header("Content-Length: " . filesize($this->get_value("filename")));
            if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                header("Cache-Control: private");
                header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
            }
            set_time_limit(0);
            @readfile($this->get_value("filename"));
            exit;
        }
    }
}
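A cheap first step, offered as a diagnostic sketch rather than the site's actual code: log the Range and User-Agent headers at the top of download.php, since PDF browser plugins and download managers commonly turn one human click into several overlapping ranged requests, which would look exactly like the pattern above:

    // Hypothetical logging added at the top of download.php
    error_log(sprintf(
        "download id=%s range=%s ua=%s",
        isset($_GET['id']) ? $_GET['id'] : '-',
        isset($_SERVER['HTTP_RANGE']) ? $_SERVER['HTTP_RANGE'] : '-',
        isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-'
    ));

If the duplicates carry Range headers or a non-browser User-Agent, the multiple requests are client behavior, not a bug in the script.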

    Read the article

• PHP mini-server download resume error! Resource id #4

    - by snikolov
<?php
$httpsock = @socket_create_listen("9090");
if (!$httpsock) {
    print "Socket creation failed!\n";
    exit;
}

while (1) {
    $client = socket_accept($httpsock);
    $input = trim(socket_read($client, 4096));
    $input = explode(" ", $input);
    $range = $input[12];
    $input = $input[1];
    $fileinfo = pathinfo($input);
    switch ($fileinfo['extension']) {
        default:
            $mime = "text/html";
    }

    if ($input == "/") {
        $input = "index.html";
    }
    $input = ".$input";

    if (file_exists($input) && is_readable($input)) {
        echo "Serving $input\n";
        $contents = file_get_contents($input);
        $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents";
    } else {
        //$contents = "The file you requested doesn't exist. Sorry!";
        //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents";
        if (isset($range)) {
            list($a, $range) = explode("=", $range);
            str_replace($range, "-", $range);
            $size2 = $size - 1;
            $new_length = $size - $range;
            $output = "HTTP/1.1 206 Partial Content\r\n";
            $output .= "Content-Length: $new_length\r\n";
            $output .= "Content-Range: bytes $range$size2/$size\r\n";
        } else {
            $size2 = $size - 1;
            $output .= "Content-Length: $new_length\r\n";
        }
        $chunksize = 1 * (1024 * 1024);
        $bytes_send = 0;
        $file = "a.mp3";
        $filesize = filesize($file);
        if ($file = fopen($file, 'r')) {
            if (isset($range))
                $output = 'HTTP/1.0 200 OK\r\n';
            $output .= "Content-type: application/octet-stream\r\n";
            $output .= "Content-Length: $filesize\r\n";
            $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n';
            $output .= "Accept-Ranges: bytes\r\n";
            $output .= "Cache-Control: private\n\n";
            fseek($file, $range);
            $download_rate = 1000;
            while (!feof($file) and (connection_status() == 0)) {
                $var_stat = fread($file, round($download_rate * 1024));
                $output .= $var_stat; //echo($buffer); // is also possible
                flush();
                sleep(1); //// decrease download speed
            }
            fclose($file);
        }
        /**
        $filename = "dada";
        $file = fopen($filename, 'r');
        $filesize = filesize($filename);
        $buffer = fread($file, $filesize);
        $send = array("Output"=>$buffer,"filesize"=>$filesize,"filename"=>$filename);
        $file = $send['filename'];
        */
        //@ob_end_clean();
        // $output .= "Content-Transfer-Encoding: binary";
        //$output .= "Connection: Keep-Alive\r\n";
    }
    socket_write($client, $output);
    socket_close($client);
}
socket_close($httpsock);
Hey guys, I have created a mini webserver downloader. It can download files from your server. However, when I download a file I am unable to resume it; instead of the file data I get Resource id #4. I would also like to know how I can monitor and record the client output and how much bandwidth each client has downloaded. Perl has something like this, but it's hardcore; if possible, kindly provide me with some pointers. Thank you :)
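On the resume side, one bug stands out in the code above: str_replace() returns the modified string rather than changing its argument, so its result is discarded and $range never becomes a clean byte offset before it reaches fseek(). A sketch of parsing the header explicitly ($rawRange is a hypothetical variable holding the raw Range header value):

    // A sketch, assuming $rawRange holds e.g. "bytes=1024-"
    $start = 0;
    if (isset($rawRange) && preg_match('/bytes=(\d+)-/', $rawRange, $m)) {
        $start = (int)$m[1];              // first byte the client asked for
    }
    fseek($file, $start);                  // resume from that offset

    // bandwidth accounting: tally what each client actually receives
    $bytes_sent = 0;
    // inside the read loop: $bytes_sent += strlen($var_stat);

The "Resource id #4" text itself comes from interpolating a file handle into a string: after $file = fopen($file, 'r'), the Content-Disposition header embeds the handle, not the filename, so keeping the name in a separate variable would also be needed.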

    Read the article

  • Why is processing a sorted array faster than an unsorted array?

    - by GManNickG
Here is a piece of code that shows some very peculiar performance. For some strange reason, sorting the data miraculously speeds up the code by almost 6x:
#include <algorithm>
#include <ctime>
#include <iostream>

int main()
{
    // generate data
    const unsigned arraySize = 32768;
    int data[arraySize];
    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! with this, the next loop runs faster
    std::sort(data, data + arraySize);

    // test
    clock_t start = clock();
    long long sum = 0;
    for (unsigned i = 0; i < 100000; ++i)
    {
        // primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }
    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds. With the sorted data, the code runs in 1.93 seconds. Initially I thought this might be just a language or compiler anomaly. So I tried it in Java...
import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];
        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! with this, the next loop runs faster
        Arrays.sort(data);

        // test
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 100000; ++i)
        {
            // primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }
        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}
with a similar but less extreme result. My first thought was that sorting brings the data into the cache, but my next thought was how silly that is, because the array was just generated. What is going on? Why is a sorted array faster than an unsorted array? The code is summing up some independent terms; the order should not matter.
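One way to test whether the branch itself is responsible (a sketch, not taken from the post): replace the if with branch-free arithmetic. If the sorted and unsorted timings converge, the data-dependent branch, and its misprediction on random data, was the cost. The signed right shift is implementation-defined in C++, though mainstream compilers shift arithmetically:

    // Branch-free equivalent of: if (data[c] >= 128) sum += data[c];
    for (unsigned c = 0; c < arraySize; ++c)
    {
        int t = (data[c] - 128) >> 31; // 0 when data[c] >= 128, -1 otherwise
        sum += ~t & data[c];
    }

With sorted data the branch is almost perfectly predictable (a long run of not-taken followed by a long run of taken), which is why the sorted version runs so much faster in the original code.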

    Read the article

  • CentOS - Configuring Puppet to play nice with SELinux

    - by Mike Purcell
I am running into an issue every time I attempt to start the puppetmasterd service, for which I receive the following error message:
root@service1 ~ # -> /etc/init.d/puppetmaster start
Starting puppetmaster: Could not prepare for execution: Got 1 failure(s) while initializing: change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /etc/puppet/ssl [FAILED]
Apparently there was a known issue with this scenario, as outlined in this bug report; however, the bug report states the issue has been resolved in selinux-policy-3.9.16-29.fc15, but the latest CentOS default upstream version is 3.7.19-155.el6_3.4. So I am trying to figure out the best solution. I can either create a local security policy to allow puppetmasterd the access it needs, or keep researching and install a newer version of selinux-policy outside of the default upstream channel. Anyone have any recommendations? Please don't recommend disabling SELinux...
----- Update -----
Here is the puppet.conf:
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet

# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet

# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl

[master]
certname=puppetmaster.ownij.lan
dns_alt_names=puppetmaster.ownij.lan

[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuratiion. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt

# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig server=puppetmaster.ownij.lan And here are the denials per the audit log: type=AVC msg=audit(1349751364.985:666): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349751364.985:666): arch=c000003e syscall=4 success=no exit=-13 a0=1391420 a1=7fffef09ed10 a2=7fffef09ed10 a3=120c500 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) type=AVC msg=audit(1349751365.302:667): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349751365.302:667): arch=c000003e syscall=4 success=no exit=-13 a0=1d18530 a1=7fffef0d04d0 a2=7fffef0d04d0 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) type=AVC msg=audit(1349751365.465:668): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349751365.465:668): arch=c000003e syscall=4 success=no exit=-13 a0=1af3930 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) type=AVC msg=audit(1349751365.467:669): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349751365.467:669): arch=c000003e syscall=4 success=no exit=-13 a0=1b17aa0 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) type=AVC msg=audit(1349751366.401:670): avc: denied { write } for pid=15093 comm="puppetmasterd" name="puppet" dev=dm-0 ino=132035 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:puppet_etc_t:s0 tclass=dir type=SYSCALL msg=audit(1349751366.401:670): arch=c000003e syscall=83 success=no exit=-13 a0=2d7a400 a1=1f9 a2=2d7a40f a3=7fffef0a6df0 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null) And the audit log if I pass through audit2allow: root@service1 ~ # -> fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -m puppetmasterd module puppetmasterd 1.0; require { type home_root_t; type puppetmaster_t; type puppet_etc_t; type puppet_var_run_t; type httpd_sys_content_t; class lnk_file { relabelfrom relabelto }; class file { relabelfrom read getattr open }; class dir { write read search getattr setattr }; } #============= puppetmaster_t ============== allow puppetmaster_t home_root_t:dir { search getattr }; allow puppetmaster_t httpd_sys_content_t:dir read; allow 
puppetmaster_t httpd_sys_content_t:file { read getattr open }; #!!!! The source type 'puppetmaster_t' can write to a 'dir' of the following types: # puppet_log_t, puppet_var_lib_t, puppet_var_run_t, puppetmaster_tmp_t allow puppetmaster_t puppet_etc_t:dir { write setattr }; allow puppetmaster_t puppet_etc_t:lnk_file { relabelfrom relabelto }; allow puppetmaster_t puppet_var_run_t:file relabelfrom;
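Building on the audit2allow output above, the usual next step is to compile and load it as a local policy module rather than patching the distro policy. A sketch (-M, unlike the -m used above, produces a loadable .pp package):

    fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -M puppetmasterd
    semodule -i puppetmasterd.pp

This keeps SELinux enforcing while granting puppetmasterd exactly the accesses the audit log recorded, and the module survives selinux-policy package updates.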

    Read the article

  • jQuery hold form submit until "continue" button pressed

    - by Seán McCabe
I am trying to submit a form, which I previously had working, but I have now modified it to include a modal jQuery UI box so that it won't submit until the user presses "continue". I've had various problems with this, including getting the form to hold until that button is pressed. I think I have found a solution to that, but implementing it I am getting a SyntaxError whose source I can't find. With the help of Kevin B I managed to find that the form was submitting, but that the returned JSON response wasn't quite formatted right. Now the response is that the form isn't being submitted, so that problem is still occurring. So I updated the code with the provided feedback; now I need to find out why the form isn't submitting. I know it's something to do with the second function not recognising that the submit button has been pressed, so I need to know how to submit that form data without the form needing to be submitted again. Below is the new code:
function submitData() {
    $("#submitProvData").submit(function(event) {
        event.preventDefault();
        var gTotal, sTotal, dfd;
        var dfd = new $.Deferred();
        $('html,body').animate({ scrollTop: 0 }, 'fast');
        $("#submitProvData input").css("border", "1px solid #aaaaaa");
        $("#submitProvData input[readonly='readonly']").css("border", "none");
        sTotal = $('#summaryTotal').val();
        gTotal = $('#gptotal').val();
        if (gTotal !== 'sTotal') {
            $("#newsupinvbox").append('<div id="newsupinvdiagbox" title="Warning - Totals do not match" class="hidden"><p>Press "Continue", to submit the invoice flagged for attention.</p> <br /><p class="italic">or</p><br /> <p>Press "Correct" to correct the discrepancy.</p></div>') //CREATE DIV
            //SET
            $("#newsupinvdiagbox").dialog({
                resizable: false,
                autoOpen: false,
                modal: true,
                draggable: false,
                width: 380,
                height: 240,
                closeOnEscape: false,
                position: ['center', 20],
                buttons: {
                    'Continue': function() {
                        $(this).dialog('close');
                        reData();
                    }, // end continue button
                    'Correct': function() {
                        $(this).dialog('close');
                        return false;
                    } //end cancel button
                } //end buttons
            }); //end dialog
            $('#newsupinvdiagbox').dialog('open');
        }
        return false;
    });
}

function reData() {
    console.log('submitted');
    $("#submitProvData").submit(function(resubmit) {
        console.log('form submit');
        var formData;
        formData = new FormData($(this)[0]);
        $.ajax({
            type: "POST",
            url: "functions/invoicing_upload_provider.php",
            data: formData,
            async: false,
            success: function(result) {
                $.each($.parseJSON(result), function(item, value) {
                    if (item == 'Success') {
                        $('#newsupinv_window_message_success_mes').html('The provider invoice was uploaded successfully.');
                        $('#newsupinv_window_message_success').fadeIn(300, function() {
                            reset();
                        }).delay(2500).fadeOut(700);
                    }
                    else if (item == 'Error') {
                        $('#newsupinv_window_message_error_mes').html(value);
                        $('#newsupinv_window_message_error').fadeIn(300).delay(3000).fadeOut(700);
                    }
                    else if (item == 'Warning') {
                        $('#newsupinv_window_message_warning_mes').html(value);
                        $('#newsupinv_window_message_warning').fadeIn(300, function() {
                            reset();
                        }).delay(2500).fadeOut(700);
                    }
                });
            },
            error: function() {
                $('#newsupinv_window_message_error_mes').html("An error occured, the form was not submitted");
                $('#newsupinv_window_message_error').fadeIn(300);
                $('#newsupinv_window_message_error').delay(3000).fadeOut(700);
            },
            cache: false,
            contentType: false,
            processData: false
        });
    });
}
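One likely culprit, offered as a sketch rather than a confirmed fix: reData() only binds another submit handler to the form, and since nothing re-submits the form afterwards, the $.ajax() call inside it never runs. Posting the data directly from reData() sidesteps the second submit entirely (the URL and options are taken from the code above):

    function reData() {
        var formData = new FormData($("#submitProvData")[0]);
        $.ajax({
            type: "POST",
            url: "functions/invoicing_upload_provider.php",
            data: formData,
            contentType: false, // required when sending FormData
            processData: false,
            success: function (result) {
                // ...same Success/Error/Warning handling as above...
            }
        });
    }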

    Read the article

  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
I am bringing to ServerFault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64bit) server with an md software raid-1 array with 2 x Samsung 840 Pro SSDs (512GB). Problems: Serious write speed problems:
root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync
146+1 records in
146+1 records out
307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s

real 0m23.680s
user 0m0.000s
sys 0m0.932s
When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100), going up from ~1. When doing the above I've also noticed very weird iostat results:
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 1589.50 0.00 54.00 0.00 13148.00 243.48 0.60 11.17 0.46 2.50
sdb 0.00 1627.50 0.00 16.50 0.00 9524.00 577.21 144.25 1439.33 60.61 100.00
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md2 0.00 0.00 0.00 1602.00 0.00 12816.00 8.00 0.00 0.00 0.00 0.00
md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
And it keeps it this way until it actually writes the file to the device (out from swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first. For some reason the wear is different between the 2 members of the array:
root [~]# smartctl --attributes /dev/sda | grep -i wear
177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180
root [~]# smartctl --attributes /dev/sdb | grep -i wear
177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005
The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one, as proven by the first iteration of iostat (the averages since reboot):
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 10.44 51.06 790.39 125.41 8803.98 1633.11 11.40 0.33 0.37 0.06 5.64
sdb 9.53 58.35 322.37 118.11 4835.59 1633.11 14.69 0.33 0.76 0.29 12.97
md1 0.00 0.00 1.88 1.33 15.07 10.68 8.00 0.00 0.00 0.00 0.00
md2 0.00 0.00 1109.02 173.12 10881.59 1620.39 9.75 0.00 0.00 0.00 0.00
md0 0.00 0.00 0.41 0.01 3.10 0.02 7.42 0.00 0.00 0.00 0.00
What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840Ps after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues but found nothing. I have checked the alignment and I believe they are aligned correctly (1MB boundary, listing below). I have checked /proc/mdstat and the array is Optimal (second listing below).
root [~]# fdisk -ul /dev/sda
Disk /dev/sda: 512.1 GB, 512110190592 bytes
255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00026d59

Device Boot Start End Blocks Id System
/dev/sda1 2048 4196351 2097152 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2 * 4196352 4605951 204800 fd Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sda3 4605952 814106623 404750336 fd Linux raid autodetect

root [~]# fdisk -ul /dev/sdb
Disk /dev/sdb: 512.1 GB, 512110190592 bytes
255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003dede

Device Boot Start End Blocks Id System
/dev/sdb1 2048 4196351 2097152 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2 * 4196352 4605951 204800 fd Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sdb3 4605952 814106623 404750336 fd Linux raid autodetect

/proc/mdstat:
root # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
204736 blocks super 1.0 [2/2] [UU]
md2 : active raid1 sdb3[1] sda3[0]
404750144 blocks super 1.0 [2/2] [UU]
md1 : active raid1 sdb1[1] sda1[0]
2096064 blocks super 1.1 [2/2] [UU]
unused devices: <none>

Running a read test with hdparm:
root [~]# hdparm -t /dev/sda
/dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec
root [~]# hdparm -t /dev/sdb
/dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec
But look what happens if I add --direct:
root [~]# hdparm --direct -t /dev/sda
/dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec
root [~]# hdparm --direct -t /dev/sdb
/dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec
Both tests increase, but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test, with dd this time, and it confirms the hdparm test:
root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s
root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10
10+0 records in
10+0 records out
10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s
So sda is 3 times faster than sdb. Or maybe sdb is also doing something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And, as he thought would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
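On the question of whether sdb is doing more than sda, two stock tools can show it directly while the slow dd write is re-run (a sketch; iostat ships with the sysstat package, blktrace with the blktrace package):

    # live per-device view while reproducing the slow write
    iostat -x 1 /dev/sda /dev/sdb

    # request-level trace of the busy member (Ctrl-C to stop)
    blktrace -d /dev/sdb -o - | blkparse -i -

blkparse prints every queued and completed request with its origin, so it would distinguish extra traffic hitting sdb (e.g. retries or background activity) from the same requests simply completing more slowly.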

    Read the article

< Previous Page | 270 271 272 273 274 275 276 277 278 279 280 281  | Next Page >