Search Results

Search found 59554 results on 2383 pages for 'distributed data mining'.

  • Zend and jQuery (Ajax Post)

    - by Zend_Newbie_Dev
    I'm using Zend Framework, and I would like to save POST data, sent with a jQuery AJAX post, without refreshing the page.

        // submit.js
        $(function() {
            $('#buttonSaveDetails').click(function () {
                var details = $('textarea#details').val();
                var id = $('#task_id').val();
                $.ajax({
                    type: 'POST',
                    url: 'http://localhost/myproject/public/module/save',
                    async: false,
                    data: 'id=' + id + '&details=' + details,
                    success: function(responseText) {
                        //alert(responseText)
                        console.log(responseText);
                    }
                });
            });
        });

    In my controller, I just don't know how to retrieve the POST data from the AJAX call:

        public function saveAction() {
            $data = $this->_request->getPost();
            echo $id = $data['id'];
            echo $details = $data['details']; // this won't work
        }

    Thanks in advance.
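
    A minimal sketch of one way to handle this in a Zend Framework 1 action; the getPost() and helper calls are standard ZF1 API, while the echoed JSON shape is just an assumption for illustration:

        public function saveAction() {
            // An AJAX endpoint usually should not render a layout or view script
            $this->_helper->layout->disableLayout();
            $this->_helper->viewRenderer->setNoRender(true);

            // Read individual POST fields (the second argument is a default)
            $id      = $this->getRequest()->getPost('id', 0);
            $details = $this->getRequest()->getPost('details', '');

            // ... persist $id / $details here ...

            echo Zend_Json::encode(array('id' => $id, 'details' => $details));
        }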

  • R - Subsetting XTS via Time and Which

    - by user2844947
    Currently, I have an xts object, called Data, which contains a Date and two value columns which are numeric. I would like a single number as output: the mean of Value1 over the period starting at each point where Value2 < mean(Value2) and running 14 data points forward (weeks, in this particular data set). To get the dates where Value2 < mean(Value2), I wrote the code below:

        Data[which(Data$Value2 < mean(Data$Value2)), "Date"]

    However, I am not sure how to get the mean of Value1 over the 14 points following each of the dates returned by the code above.
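
    A minimal sketch of one way to get there, assuming row positions (rather than calendar arithmetic) define the 14-point window and that a window is simply truncated at the end of the series:

        # row indices where Value2 is below its mean
        idx <- which(Data$Value2 < mean(Data$Value2))

        # mean of Value1 over each forward window of 14 points, then averaged
        window.means <- sapply(idx, function(i) {
            end <- min(i + 13, nrow(Data))
            mean(as.numeric(Data$Value1[i:end]))
        })
        result <- mean(window.means)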

  • Question about in-place sort

    - by davit-datuashvili
    For example, we have the following array:

        char data[] = new char[]{'A','S','O','R','T','I','N','G','E','X','A','M','P','L','E'};

    and an index array:

        int a[] = new int[]{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14};

        void insitu(char data[], int a[], int N) {
            for (int i = 0; i < N; i++) {
                char v = data[i];
                int j, k;
                for (k = i; a[k] != i; k = a[j], a[j] = j) {
                    j = k;
                    data[k] = data[a[k]];
                }
                data[k] = v;
                a[k] = k;
            }
        }

    I have a question: what is the initial value of j? When I run this code it asks me to initialize j. What should I do?
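
    For what it's worth, a compilable sketch of the same routine: Java's definite-assignment analysis cannot prove the inner loop body runs before the update clause reads j, so j needs an explicit initial value. Any value works, because whenever the update clause executes, the body has already assigned j:

        static void insitu(char[] data, int[] a, int N) {
            for (int i = 0; i < N; i++) {
                char v = data[i];
                int j = 0; // placeholder; always overwritten before it is read
                int k;
                for (k = i; a[k] != i; k = a[j], a[j] = j) {
                    j = k;
                    data[k] = data[a[k]];
                }
                data[k] = v;
                a[k] = k;
            }
        }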

  • How to prevent negative numbers in MySQL

    - by Jerry
    Hello guys. I have a value which starts at 0 in my database. My PHP adds 1 or -1 to it depending on the user's input. My problem is that if the value is 0 and a user tries to subtract 1, it becomes 4294967295, the maximum value of an unsigned INT. Is there any way to make the value stay at 0 even when the user asks for -1? Thanks for the reply. My SQL command is like below:

        UPDATE board SET score = score - 1 WHERE team = 'TeamA';
        -- this yields 4294967295 if the score is 0
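
    A minimal sketch of one way to clamp at zero (table and column names taken from the question): make the zero case match no rows at all, so the unsigned column never underflows:

        UPDATE board
        SET score = score - 1
        WHERE team = 'TeamA' AND score > 0;

    When score is already 0 the WHERE clause simply matches nothing, and the value stays at 0.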

  • iPhone view architecture question

    - by Joe
    So I have (for instance) three views:

        A: the root view
        B: a view functionally identical to the root
        C: a data entry view which collects a few pieces of info

    What I'm trying to do is reuse C to supply the data it collects to either A or B. It should supply the data to whichever of the two pushed it. The data collected for A is similar to, but functionally distinct from, what C collects for B. Right now, I'm passing data from C to A or B via a singleton class. What I'm trying to avoid is having two instances of C, one each to supply data to A and B (because, in actuality, the program will have 5 total views like C). Does the question make sense?
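
    One common alternative to the singleton is a delegate protocol on C, so a single C instance can hand its data back to whichever view pushed it. A minimal sketch; every name here (DataEntryViewController, DataEntryDelegate, entryController:didCollect:) is invented for illustration:

        @protocol DataEntryDelegate <NSObject>
        - (void)entryController:(id)controller didCollect:(NSDictionary *)data;
        @end

        @interface DataEntryViewController : UIViewController
        @property (nonatomic, assign) id <DataEntryDelegate> delegate;
        @end

        // A or B sets itself as the delegate before pushing the shared C instance;
        // when C finishes collecting, it calls:
        //     [self.delegate entryController:self didCollect:collectedData];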

  • Export CSV from MySQL

    - by ss888
    Hi, I'm having a bit of trouble exporting a CSV file, created from one of my MySQL tables, using PHP. The code I'm using prints the correct data, but I can't see how to download this data as a CSV file, providing a download link to the created file. I thought the browser was supposed to automatically offer the file for download, but it doesn't. (Could it be because the code below is called using AJAX?) Any help greatly appreciated - code below, S.

        include('../config/config.php'); // db connection settings

        $query  = "SELECT * FROM isregistered";
        $export = mysql_query($query) or die("Sql error : " . mysql_error());
        $fields = mysql_num_fields($export);

        $header = '';
        for ($i = 0; $i < $fields; $i++) {
            $header .= mysql_field_name($export, $i) . "\t";
        }

        $data = '';
        while ($row = mysql_fetch_row($export)) {
            $line = '';
            foreach ($row as $value) {
                if ((!isset($value)) || ($value == "")) {
                    $value = "\t";
                } else {
                    $value = str_replace('"', '""', $value);
                    $value = '"' . $value . '"' . "\t";
                }
                $line .= $value;
            }
            $data .= trim($line) . "\n";
        }
        $data = str_replace("\r", "", $data);
        if ($data == "") {
            $data = "\n(0) Records Found!\n";
        }

        //header("Content-type: application/octet-stream"); // have tried all of these at some time
        //header("Content-type: text/x-csv");
        header("Content-type: text/csv");
        //header("Content-type: application/csv");
        header("Content-Disposition: attachment; filename=export.csv");
        //header("Content-Disposition: attachment; filename=export.xls");
        header("Pragma: no-cache");
        header("Expires: 0");

        echo '<a href="">Download Exported Data</a>'; // want my link to go in here...
        print "$header\n$data";
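
    For what it's worth, a file download can't be triggered from the body of an AJAX response, and echoing an HTML link into the same output would corrupt the CSV anyway. The usual split, sketched (export.php is an assumed name for the script above, minus the echoed link):

        // in the page the user sees: a plain link, no AJAX involved
        echo '<a href="export.php">Download Exported Data</a>';

        // export.php then sends only the CSV headers followed by $header and $data;
        // clicking the link triggers the browser's save dialog.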

  • Toggling between states on 3 containers in jQuery

    - by Saif Bechan
    Hello, I have a PHP/MySQL/AJAX auction application. There is a section that can be in one of 3 states:

        State one: the user can place a bid (the bid button is shown).
        State two: the user has to wait before placing a new bid (the number of seconds is shown).
        State three: the user has an auto bidding system enabled (the maximum price and amount are shown).

    I get these values through jQuery AJAX, and I want to present the user with the right section. The code I have is as follows:

        if (data[i].bidwait >= 0 && data[i].amount == '') {
            // In this section the user has no autobiddings, and can place a bidding
            $(productContainer + ' .bid-wrapper').removeClass('none');
            $(productContainer + ' .bidwait-wrapper').addClass('none');
            $(productContainer + ' .autobid-wrapper').addClass('none');
        } else if (data[i].bidwait < 0 && data[i].amount == '') {
            // In this section the user has to wait the amount of sec to place a new bid
            $(productContainer + ' .bid-wrapper').addClass('none');
            $(productContainer + ' .bidwait-wrapper').removeClass('none');
            $(productContainer + ' .autobid-wrapper').addClass('none');
            $(productContainer + ' .bidwait-text').text(data[i].bidwait * -1);
        } else {
            // In the last section the user has the auto bidder enabled, so that is shown
            $(productContainer + ' .bid-wrapper').addClass('none');
            $(productContainer + ' .bidwait-wrapper').addClass('none');
            $(productContainer + ' .autobid-wrapper').removeClass('none');
            $(productContainer + ' .amount').html(data[i].amount == 0 ? '&#8734;' : data[i].amount);
            $(productContainer + ' .maxprice').html(data[i].maxprice == 0 ? '&#8734;' : data[i].maxprice);
        }

    This looks like an awful lot of code for something so small. I was wondering if there is an easier way to accomplish such a thing. Speed is a huge issue for me because this has to run every second in the user's browser. If there is no other option I am just going to remove this feature and go with a non-AJAX approach. If you have been, thank you for reading!
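
    A minimal sketch of one way to shrink this, using jQuery's toggleClass(className, state) overload (available since jQuery 1.3); the selectors and data fields are the ones from the question:

        var noAutobid = data[i].amount == '';
        var canBid    = noAutobid && data[i].bidwait >= 0;
        var mustWait  = noAutobid && data[i].bidwait < 0;

        // second argument: true adds the class (hides), false removes it (shows)
        $(productContainer + ' .bid-wrapper').toggleClass('none', !canBid);
        $(productContainer + ' .bidwait-wrapper').toggleClass('none', !mustWait);
        $(productContainer + ' .autobid-wrapper').toggleClass('none', noAutobid);

    The per-state .text()/.html() updates still need a small if/else, but the show/hide bookkeeping collapses to three lines.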

  • django - where to clean extra whitespace from form field inputs?

    - by Westerley
    I've just discovered that Django doesn't automatically strip extra whitespace from form field inputs, and I think I understand the rationale ('frameworks shouldn't be altering user input'). I think I know how to remove the excess whitespace using Python's re:

        # data = re.sub('\A\s+|\s+\Z', '', data)
        data = data.strip()
        data = re.sub('\s+', ' ', data)

    The question is: where should I do this? Presumably it should happen in one of the form's clean stages, but which one? Ideally, I would like to clean all my fields of extra whitespace. If it should be done in the clean_field() methods, that would mean writing a lot of clean_field() methods that all do basically the same thing, which seems like a lot of repetition. If not in the form's cleaning stages, then perhaps in the model that the form is based on? Thanks for your help! W.
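
    A minimal sketch of one way to avoid per-field methods: a form mixin that normalizes every string value during clean(). The class name is invented, and this assumes a Python 2-era Django form:

        import re

        class StripWhitespaceMixin(object):
            # Collapse runs of whitespace in every string input.
            def clean(self):
                cleaned_data = super(StripWhitespaceMixin, self).clean()
                for name, value in cleaned_data.items():
                    if isinstance(value, basestring):
                        cleaned_data[name] = re.sub(r'\s+', ' ', value.strip())
                return cleaned_data

        # usage:
        # class ContactForm(StripWhitespaceMixin, forms.Form): ...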

  • In SQL Server 2000, how to delete the specified rows in a table that does not have a primary key?

    - by Yousui
    Hi, let's say we have a table with some data in it:

        IF OBJECT_ID('dbo.table1') IS NOT NULL
        BEGIN
            DROP TABLE dbo.table1;
        END

        CREATE TABLE table1
        (
            data INT
        );

        ---------------------------------------------------------------------
        -- Generating testing data
        ---------------------------------------------------------------------
        INSERT INTO dbo.table1(data)
        SELECT 100
        UNION ALL SELECT 200
        UNION ALL SELECT NULL
        UNION ALL SELECT 400
        UNION ALL SELECT 400
        UNION ALL SELECT 500
        UNION ALL SELECT NULL;

    How do I delete the 2nd, 5th, and 6th records in the table? The order is defined by the following query:

        SELECT data FROM dbo.table1 ORDER BY data DESC;

    Note, this is in a SQL Server 2000 environment. Thanks.
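
    Without a key or ROW_NUMBER() (which only arrived in SQL Server 2005), one workable sketch is to materialize the ordering into a temp table with an IDENTITY column and rebuild the data. One caveat to verify on your build: SQL Server 2000 does not strictly guarantee that IDENTITY values follow the ORDER BY in a SELECT ... INTO, so inspect the temp table before deleting:

        SELECT IDENTITY(INT, 1, 1) AS rn, data
        INTO #ordered
        FROM dbo.table1
        ORDER BY data DESC;

        DELETE FROM dbo.table1;

        INSERT INTO dbo.table1 (data)
        SELECT data
        FROM #ordered
        WHERE rn NOT IN (2, 5, 6);

        DROP TABLE #ordered;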

  • SSIS - user variable used in derived column transform is not available - in some cases

    - by soo
    Unfortunately I don't have a repro for my issue, but I thought I would try to describe it in case it sounds familiar to someone. I am using SSIS 2005, SP2. In my package:

        - There is a package-scope user variable - let's call it user_var.
        - The first step in the control flow is an Execute SQL task which runs a stored procedure. All that SP does is insert a record in a SQL table (with an identity column) and then go back and get the max ID value. The Execute SQL task saves this output into user_var.
        - The control flow then has a Data Flow Task - it goes and gets some source data, has a derived column which sets a column called run_id to user_var, and saves the data to a SQL destination.

    In most cases (this template is used for many packages, running every day) this all works great. All of the destination records created get set with a correct run_id. However, in some cases, there is a set of the destination data that does not get run_id equal to user_var, but instead gets a value of 0 (0 is the default value for user_var). I have 2 instances where this has happened, but I can't make it happen. In both cases, it was just less than 10,000 records that have run_id = 0. Since SSIS writes data out in 10,000 record blocks, this really makes me think that, for the first set of data written out, user_var was not yet set; then, after that first block, run_id is set to a correct value for the rest of the data. But control passed on to my data flow from the Execute SQL task - it would have seemed reasonable to me that it wouldn't go on until the SP has completed and user_var is set. Maybe it just runs the SP, but doesn't wait for it to complete? In both cases where this has happened there seemed to be a few packages hitting the table to get a new user_var at about the same time. And in both cases lots of data was written (40 million rows, 60 million rows) - my thinking is that that means the writes were happening for a while. Sorry to be both long-winded AND vague. A winning combination! Does this sound familiar to anyone? Thanks.

  • jQuery/JSON/PHP failing

    - by user730936
    I am trying to call a PHP script that accepts JSON data, writes it to a file, and returns a simple text response, using a jQuery AJAX call.

    jQuery code:

        $("input.callphp").click(function() {
            var url_file = "myurl";
            $.ajax({
                type     : "POST",
                url      : url_file,
                data     : {'puzzle': 'Reset!'},
                success  : function(data) { alert("Success: " + data); },
                error    : function(data) { alert("Error: " + data); },
                dataType : 'text'
            });
        });

    PHP code:

        <?php
        $thefile = "new.json";       /* Our filename as defined earlier */
        $towrite = $_POST["puzzle"]; /* What we'll write to the file */
        $openedfile = fopen($thefile, "w");
        fwrite($openedfile, $towrite);
        fclose($openedfile);
        echo "<br> <br>" . $towrite;
        ?>

    However, the call is never a success and always triggers the error handler with the alert "Error: [object Object]".
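
    For reference, the first argument of $.ajax's error callback is the jqXHR object, which is why alert() prints "[object Object]". A sketch of a more informative handler, using the standard error signature:

        error: function(xhr, textStatus, errorThrown) {
            alert("Error: " + textStatus + " / HTTP " + xhr.status + "\n" + xhr.responseText);
        }

    Seeing the actual status code and response body usually points straight at the cause (a 404 on url_file, a PHP error page, and so on).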

  • Unable to return/process JSON in JQuery $.get()

    - by shyam
    I have a problem returning/processing JSON data when calling the $.get() function. Here's the code.

    jQuery:

        $.get("script.php?", function(data) {
            if (data.status) {
                alert('ok');
            } else {
                alert(data.error);
            }
        }, 'json');

    PHP:

        if ($r) {
            $ret = array("status" => true);
        } else {
            $ret = array("status" => false, "error" => $error);
        }
        echo json_encode($ret);

    But the response is always treated as a string in the jQuery callback: data.status and data.error are undefined.
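
    One likely fix, sketched: have PHP declare the response as JSON so it is parsed as such, and make sure nothing is output before it:

        header('Content-Type: application/json');
        echo json_encode($ret);

    Stray output ahead of the JSON (whitespace, a BOM, a PHP notice) breaks parsing the same way, so checking the raw response body in the browser is also worthwhile.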

  • C++ - Implementing my own stream

    - by HardCoder1986
    Hello! My problem can be described the following way:

        - I have some data which is actually an array and could be represented as char* data with some size.
        - I also have some legacy code (a function) that takes an abstract std::istream object as a param and uses that stream to retrieve the data it operates on.

    So, my question is the following: what would be an easy way to map my data to some std::istream object so that I can pass it to my function? I thought about creating a std::stringstream object from my data, but that means copying and (as I assume) isn't the best solution. Any ideas how this could be done so that my std::istream operates on the data directly? Thank you.
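
    A minimal sketch of the usual zero-copy approach: a tiny streambuf whose get area points straight at the existing buffer (which must outlive the stream):

        #include <istream>
        #include <streambuf>

        struct membuf : std::streambuf {
            membuf(char* data, std::size_t size) {
                setg(data, data, data + size); // begin, current, end of the get area
            }
        };

        // usage:
        // membuf buf(data, size);
        // std::istream in(&buf);
        // legacy_function(in);

    No bytes are copied; the stream reads directly out of data.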

  • Merging DataTable(s) Column by Column.

    - by Omky
    Hello all, I want to merge two or more DataTables column by column. I am developing a C# Windows application. My use case is below: I have an empty data grid in my application. The user drags one column from the available-columns list box into the data grid, and the grid starts displaying data for that column. When the user drags another column into the grid, it should show the data of both columns. This repeats until the user feels they have dropped all the necessary columns. What is the best way to do this? Are there any performance hits with a large number of rows, typically 1 million? Please help. Thanks, Omky
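
    A minimal sketch of one way to do the column-by-column merge, assuming each incoming table shares a key column (here called "Id", which is an assumption) so rows can be lined up:

        using System.Data;

        static void AddColumnData(DataTable grid, DataTable source)
        {
            // Merge matches rows on the primary key and adds any new columns.
            grid.PrimaryKey   = new DataColumn[] { grid.Columns["Id"] };
            source.PrimaryKey = new DataColumn[] { source.Columns["Id"] };
            grid.Merge(source, false, MissingSchemaAction.Add);
        }

    With around a million rows the merge itself is roughly linear, but rebinding the grid on every drop is the usual bottleneck; a virtual-mode grid or suspending layout during the merge is worth measuring.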

  • Byte manipulation in PHP

    - by Michael Angstadt
    In PHP, if you have a variable holding binary data, how do you get specific bytes from the data? For example, if I have some data that is 30 bytes long, how do I get the first 8 bytes? Right now, I'm treating it like a string, using the substr() function:

        $data = // ...
        $first8Bytes = substr($data, 0, 8);

    Is it safe to use substr() with binary data? Or are there other functions that I should be using? Thanks.
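
    For what it's worth, PHP strings are plain byte arrays, so substr() is binary-safe here. A short sketch of getting at the bytes as integers as well, using the standard unpack() 'C' (unsigned char) format:

        $first8Bytes = substr($data, 0, 8);  // raw bytes, binary-safe

        $bytes = unpack('C8', $first8Bytes); // array(1 => byte1, ..., 8 => byte8)
        $first = ord($data[0]);              // a single byte as an integer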

  • Trying to work through a list in sections

    - by user1714887
    I have a list of lists, sorted by the second value of each inner list (the group). I now need to iterate through this to work on one group at a time. The data is [name, group, data1, data2, data3, data4]. I wasn't sure if I need a while loop, some other sort of loop, or maybe groupby, but I've never used that. Any help would be appreciated.

        for i in range(int(max_group)):
            x1 = []
            x2 = []
            x3 = []
            x4 = []
            if data[i][1] == i + 1:
                x1.append(data[2])
                x2.append(data[3])
                x3.append(data[4])
                x4.append(data[5])
            print x1
            print 'next'  # these are just to test where we're at
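
    A minimal sketch using itertools.groupby, which fits here because the list is already sorted by the group field (Python 2 print syntax, to match the question):

        from itertools import groupby
        from operator import itemgetter

        # each row looks like [name, group, data1, data2, data3, data4]
        for group, rows in groupby(data, key=itemgetter(1)):
            rows = list(rows)
            x1 = [row[2] for row in rows]
            x2 = [row[3] for row in rows]
            x3 = [row[4] for row in rows]
            x4 = [row[5] for row in rows]
            print x1
            print 'next'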

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

        // Add the Data Flow Task
        package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = package.Executables[0] as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add OLE-DB source component - ** This is where we need the creation name **
        IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "OLEDBSource";
        componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

    So as you can see, the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

    CreationName Reference

        ADO NET Destination: Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        ADO NET Source: Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Aggregate: DTSTransform.Aggregate.2
        Audit: DTSTransform.Lineage.2
        Cache Transform: DTSTransform.Cache.1
        Character Map: DTSTransform.CharacterMap.2
        Checksum: Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Conditional Split: DTSTransform.ConditionalSplit.2
        Copy Column: DTSTransform.CopyMap.2
        Data Conversion: DTSTransform.DataConvert.2
        Data Mining Model Training: MSMDPP.PXPipelineProcessDM.2
        Data Mining Query: MSMDPP.PXPipelineDMQuery.2
        DataReader Destination: Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Derived Column: DTSTransform.DerivedColumn.2
        Dimension Processing: MSMDPP.PXPipelineProcessDimension.2
        Excel Destination: DTSAdapter.ExcelDestination.2
        Excel Source: DTSAdapter.ExcelSource.2
        Export Column: TxFileExtractor.Extractor.2
        Flat File Destination: DTSAdapter.FlatFileDestination.2
        Flat File Source: DTSAdapter.FlatFileSource.2
        Fuzzy Grouping: DTSTransform.GroupDups.2
        Fuzzy Lookup: DTSTransform.BestMatch.2
        Import Column: TxFileInserter.Inserter.2
        Lookup: DTSTransform.Lookup.2
        Merge: DTSTransform.Merge.2
        Merge Join: DTSTransform.MergeJoin.2
        Multicast: DTSTransform.Multicast.2
        OLE DB Command: DTSTransform.OLEDBCommand.2
        OLE DB Destination: DTSAdapter.OLEDBDestination.2
        OLE DB Source: DTSAdapter.OLEDBSource.2
        Partition Processing: MSMDPP.PXPipelineProcessPartition.2
        Percentage Sampling: DTSTransform.PctSampling.2
        Performance Counters Source: DataCollectorTransform.TxPerfCounters.1
        Pivot: DTSTransform.Pivot.2
        Raw File Destination: DTSAdapter.RawDestination.2
        Raw File Source: DTSAdapter.RawSource.2
        Recordset Destination: DTSAdapter.RecordsetDestination.2
        RegexClean: Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
        Row Count: DTSTransform.RowCount.2
        Row Count Plus: Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Row Number: Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Row Sampling: DTSTransform.RowSampling.2
        Script Component: Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Slowly Changing Dimension: DTSTransform.SCD.2
        Sort: DTSTransform.Sort.2
        SQL Server Compact Destination: Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        SQL Server Destination: DTSAdapter.SQLServerDestination.2
        Term Extraction: DTSTransform.TermExtraction.2
        Term Lookup: DTSTransform.TermLookup.2
        Trash Destination: Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
        TxTopQueries: DataCollectorTransform.TxTopQueries.1
        Union All: DTSTransform.UnionAll.2
        Unpivot: DTSTransform.UnPivot.2
        XML Source: Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

    Here is a simple console program that can be used to enumerate the pipeline components installed on your machine; it dumps out a list of all components like the one above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

        using System;
        using System.Diagnostics;
        using Microsoft.SqlServer.Dts.Runtime;

        public class Program
        {
            static void Main(string[] args)
            {
                Application application = new Application();
                PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
                foreach (PipelineComponentInfo componentInfo in componentInfos)
                {
                    Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
                }
                Console.Read();
            }
        }

  • Why is iTunes starting and stopping play randomly, and how do I stop it?

    - by Chris R
    Since yesterday morning my copy of iTunes has been starting and stopping randomly. If iTunes is not running, then it opens and sometimes begins playing; other times it sits idle. Eventually, after a random interval, it will begin playing a song, then stop, and so on... Needless to say, it's driving me mad. (Mac OS X 10.6.3, on a new-ish (< 1 year old) 24" iMac.)

    I've made five changes to my system that may or may not be connected to this:

        1. My office phone was replaced with a Linksys IP Phone, which necessitated a change to my networking; where previously my Mac was connected directly to the office network port, now it is connected through the phone.
        2. My network connection now uses auto link detection in lieu of forcing 100Mbit.
        3. I unpaired my bluetooth headset.
        4. I removed the USB audio device associated with another headset.
        5. I upgraded to Safari 5. I don't use it as a primary browser, but it's often open to run web apps that I'm developing.

    All of these things happened in pretty close proximity to each other, so one or more of them may be the culprit. One other thing that may or may not be related: for some reason my built-in microphone is no longer picking up audio. It seems like this might be connected to the iTunes issue, because it happened around the same time.

    In terms of things that I've tried in order to solve this, I'm at a bit of a loss. I followed the instructions at http://developer.apple.com/mac/library/technotes/tn2004/tn2124.html#SECLAUNCHDLOGGING to enable detailed launchd logging to see if I could track down which process was asking iTunes to open (when it's not already open), but I wasn't able to make heads or tails of the output. I'm not even sure if I'm looking in the right place, to be honest; it actually acts like something is activating the application with AppleScript, but I have no processes running that are doing that, as far as I know. I'm running a few apps that have iTunes integration: Adium, iChat with Chax, and Quicksilver. None of these have been changed lately, so I consider them low risks of causing this, but it's not impossible. Moreover, I'm not using any of those features intentionally. This is a snippet of launchd debug logging from around the time it just launched:

        10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0
        10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK
        10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) Dispatching kevent callback.
        10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) EVFILT_PROC event for job:
        10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x1004076f0 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK
        10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) fork()ed
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Conceived
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Created PID 22197 anonymously by PPID 26
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505
        10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center
        10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505
        10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1
        10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.libinfo_v1
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505
        10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.membership_v1
        10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.membership_v1
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505
        10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.CoreServices.coreservicesd
        10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.CoreServices.coreservicesd
        10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_EXIT
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Dispatching kevent callback.
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) EVFILT_PROC event for job:
        10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100401720 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_EXIT
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Reaping
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Total rusage: utime 0.000000 stime 0.000000 maxrss 0 ixrss 0 idrss 0 isrss 0 minflt 0 majflt 0 nswap 0 inblock 0 oublock 0 msgsnd 0 msgrcv 0 nsignals 0 nvcsw 0 nivcsw 0
        10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Removed
        10-06-09 9:14:30 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:30 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR|EV_EOF|EV_ONESHOT fflags = NOTE_REAP
        10-06-09 9:14:32 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:32 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0
        10-06-09 9:14:33 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:33 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 143 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Dispatching kevent callback.
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) EVFILT_PROC event for job:
        10-06-09 9:14:33 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10041e9a0 data = 0x0 ident = 143 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) fork()ed
        10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.distributed_notifications.2
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.distributed_notifications.2
        10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center
        10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.libinfo_v1
        10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.membership_v1
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.membership_v1
        10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.CoreServices.coreservicesd
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.CoreServices.coreservicesd
        10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.SystemConfiguration.configd
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.SystemConfiguration.configd
        10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.audio.coreaudiod
        10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.audio.coreaudiod
        10-06-09 9:14:34 AM com.apple.launchd[1] System: Looking up service com.apple.system.logger
        10-06-09 9:14:34 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.logger
        10-06-09 9:14:35 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:35 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0
        10-06-09 9:14:35 AM com.apple.launchd[1] System: Looking up service com.apple.DiskArbitration.diskarbitrationd
        10-06-09 9:14:35 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.DiskArbitration.diskarbitrationd
        10-06-09 9:14:35 AM com.apple.launchd[1] System: Looking up service com.apple.system.logger
        10-06-09 9:14:35 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.logger
        10-06-09 9:14:36 AM com.apple.launchd[1] System: Looking up service com.apple.FSEvents
        10-06-09 9:14:36 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.FSEvents
        10-06-09 9:14:36 AM com.apple.launchd[1] System: Looking up service com.apple.SystemConfiguration.configd
        10-06-09 9:14:36 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.SystemConfiguration.configd
        10-06-09 9:14:38 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:38 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0
        10-06-09 9:14:39 AM com.apple.launchd[1] Dispatching kevent...
        10-06-09 9:14:39 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK
        10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) Dispatching kevent callback.
        10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) EVFILT_PROC event for job:
        10-06-09 9:14:39 AM com.apple.launchd[1] KEVENT[0]: udata = 0x1004076f0 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK
        10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) fork()ed
        10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Conceived
        10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Created PID 22211 anonymously by PPID 26
        10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Looking up per user launchd for UID: 0
        10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Per user launchd job found for UID: 505
        10-06-09 9:14:39 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center
        10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center
        10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Looking up per user launchd for UID: 0
        10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Per user launchd job found for UID: 505
        10-06-09 9:14:39 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on Nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The below is called after the above; on Apache it requires about 100ms to execute and repeats itself, showing progress for the data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run from nginx, the progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change async to true in the above, one executes every interval, but none of them complete until that very first AJAX request finishes.

    Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;
        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log logs/access.log main;

            # Sendfile copies data between one FD and other from within the kernel.
            # More efficient than read() + write(), since that requires transferring data to and from the user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size Limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, freqently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration, allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                # error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP Server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?
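
    One classic culprit worth ruling out (an assumption, not a diagnosis): PHP's default file-based session handler locks the session for the whole request, so a long-running request blocks every later request from the same session - which looks exactly like "async is ignored". If the long request doesn't need to write session data after startup, releasing the lock early lets the status polls through:

        <?php
        session_start();
        // ... read whatever session state is needed ...
        session_write_close(); // release the session file lock
        // ... long-running work continues; /process-status requests can now proceed ...

    Differences in how PHP is deployed on the two servers (mod_php under Apache vs a shared FastCGI pool under nginx) can make this surface on only one of them.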

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and an evolving database, I'm trying to create a script which will remove all of the tables and sequences for a user. I don't want to recreate the user, as this would require more permissions than allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from sqlplus.

    drop.sql:

        create or replace procedure drop_all_cdi_tables is
            cur integer;
        begin
            cur := dbms_sql.OPEN_CURSOR();
            for t in (select table_name from user_tables) loop
                execute immediate 'drop table ' || t.table_name || ' cascade constraints';
            end loop;
            dbms_sql.close_cursor(cur);

            cur := dbms_sql.OPEN_CURSOR();
            for t in (select sequence_name from user_sequences) loop
                execute immediate 'drop sequence ' || t.sequence_name;
            end loop;
            dbms_sql.close_cursor(cur);
        end;
        /
        execute drop_all_cdi_tables;
        /
        drop procedure drop_all_cdi_tables;
        /

    Unfortunately, dropping the procedure causes a problem. There seems to be a race condition, and the procedure is dropped before it executes. E.g.:

        SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010
        Copyright (c) 1982, 2008, Oracle. All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        Procedure created.

        PL/SQL procedure successfully completed.

        Procedure created.

        Procedure dropped.

        drop procedure drop_all_user_tables
        *
        ERROR at line 1:
        ORA-04043: object DROP_ALL_USER_TABLES does not exist

        SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Any ideas on how to get this working?
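
    A guess at what's going on, sketched: in SQL*Plus a bare / re-runs whatever is currently in the statement buffer. So the / after execute re-creates the procedure (note the second "Procedure created." in the output), and the final / re-runs the drop statement, which then fails with ORA-04043. The execute and drop lines already end in semicolons and need no slash of their own:

        create or replace procedure drop_all_cdi_tables is
            ...
        end;
        /
        execute drop_all_cdi_tables;
        drop procedure drop_all_cdi_tables;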

  • Allowing Google to bypass CAPTCHA verification - sensible or not?

    - by edanfalls
    My web site has a database lookup; filling out a CAPTCHA gives you 5 minutes of lookup time. There is also some custom code to detect any automated scripts. I do this as I don't want someone data mining my site. The problem is that Google does not see the lookup results when it crawls my site. If someone is searching for a string that is present in the result of a lookup, I would like them to find this page by Googling it. The obvious solution to me is to use the PHP variable $_SERVER['HTTP_USER_AGENT'] to bypass the CAPTCHA and custom security code for the Google bots. My question is whether this is sensible or not. People could then use Google's cache to view the lookup results without having to fill out the CAPTCHA, but would Google's own script detection methods prevent them from data mining these pages? Or would there be some way for people to make $_SERVER['HTTP_USER_AGENT'] appear as Google to bypass the security measures? Thanks in advance.
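
    For reference, the User-Agent header is trivial to spoof, so it can't safely gate the bypass on its own. Google's published way to verify Googlebot is a reverse-then-forward DNS check; a sketch using standard PHP functions:

        function is_googlebot($ip) {
            $host = gethostbyaddr($ip); // e.g. crawl-66-249-66-1.googlebot.com
            if (!preg_match('/\.(googlebot|google)\.com$/i', $host)) {
                return false;
            }
            return gethostbyname($host) === $ip; // forward-confirm the name
        }

        if (is_googlebot($_SERVER['REMOTE_ADDR'])) {
            // skip the CAPTCHA and rate limiting
        }

    The cached-copy leak is a separate issue: a "noarchive" robots meta tag stops Google from serving its cached version of the page.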

  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more than the manufacturers would like to admit. Google did a study indicating that certain raw data attributes reported in a drive's S.M.A.R.T. status can have a strong correlation with the future failure of the drive:

        We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever.

    Seagate seems to be trying to obscure this information about their drives by claiming that only their software can accurately determine the status of their drives; and by the way, their software will not tell you the raw data values for the S.M.A.R.T. attributes. Western Digital has made no such claim to my knowledge, but their status reporting tool does not appear to report raw data values either.

    I've been using HDtune and smartctl from smartmontools in order to gather the raw data values for each attribute. I've found that indeed, I am comparing apples to oranges when it comes to certain attributes. I've found, for example, that most Seagate drives will report many millions of read errors, while Western Digital drives show 0 read errors 99% of the time. I've also found that Seagate will report many millions of seek errors while Western Digital always seems to report 0.

    Now for my question: how do I normalize this data? Is Seagate producing millions of errors while Western Digital is producing none? Wikipedia's article on S.M.A.R.T. status says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the read error count from the ECC Recovered count, you'll probably end up with 0. This seems to be equivalent to Western Digital's reported "Read Error" count. That would mean Western Digital only reports read errors that it cannot correct, while Seagate counts up all read errors and tells you how many of those it was able to fix. I had a Seagate drive where the ECC Recovered count was less than the read error count, and I noticed that many of my files were becoming corrupt. This is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information.

    Here is the SMART status of my Western Digital drive, just so you can see what I'm talking about:

        james@ubuntu:~$ sudo smartctl -a /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD1001FALS-00E3A0
        Serial Number:    WD-WCATR0258512
        Firmware Version: 05.01D05
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Thu Jun 10 19:52:28 2010 PDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time            0x0027 179   175   021    Pre-fail Always  -           4033
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           270
          5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0032 098   098   000    Old_age  Always  -           1468
         10 Spin_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
         11 Calibration_Retry_Count 0x0032 100   100   000    Old_age  Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           262
        192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           46
        193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           223
        194 Temperature_Celsius     0x0022 105   102   000    Old_age  Always  -           42
        196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0
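
    For reference, the raw attribute table alone (and the drive's own error log, which is often more telling than the counters) can be pulled with standard smartmontools flags:

        sudo smartctl -A /dev/sda        # vendor attribute table only
        sudo smartctl -l error /dev/sda  # the drive's logged errors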

  • Proliant server will not accept new hard disks in RAID 1+0?

    - by Leigh
    I have an HP ProLiant DL380 G5 with two logical drives configured in RAID. One logical drive is RAID 1+0 with two 72 GB 10k SAS 1-port drives (spare no. 376597-001). One hard disk failed and I ordered a replacement. The configuration utility showed an error and would not rebuild the RAID. I presumed a faulty replacement disk and ordered another replacement. In the meantime I put the original failed disk back in the server and this started rebuilding; it currently shows OK status, although in the log I can see hardware errors. The new disk has come and again I have the same problem of the server not accepting the hard disk. I have updated the P400 controller with the latest firmware (7.24), but still no luck. The only difference I can see is that the original drive has firmware 0103 (same as the drive still in the RAID) while the new one has HPD2. Any advice would be appreciated. Thanks in advance.

    Logs from the server:

        ctrl all show config

        Smart Array P400 in Slot 1 (sn: PAFGK0P9VWO0UQ)

            array A (SAS, Unused Space: 0 MB)
                logicaldrive 1 (68.5 GB, RAID 1, Interim Recovery Mode)
                physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 73.5 GB, OK)
                physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 72 GB, Failed)

            array B (SAS, Unused Space: 0 MB)
                logicaldrive 2 (558.7 GB, RAID 5, OK)
                physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 300 GB, OK)
                physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 300 GB, OK)
                physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 300 GB, OK)

        ctrl all show config detail

        Smart Array P400 in Slot 1
            Bus Interface: PCI
            Slot: 1
            Serial Number: PAFGK0P9VWO0UQ
            Cache Serial Number: PA82C0J9VWL8I7
            RAID 6 (ADG) Status: Disabled
            Controller Status: OK
            Hardware Revision: E
            Firmware Version: 7.24
            Rebuild Priority: Medium
            Expand Priority: Medium
            Surface Scan Delay: 15 secs
            Surface Scan Mode: Idle
            Wait for Cache Room: Disabled
            Surface Analysis Inconsistency Notification: Disabled
            Post Prompt Timeout: 0 secs
            Cache Board Present: True
            Cache Status: OK
            Cache Status Details: A cache error was detected. Run more information.
            Cache Ratio: 100% Read / 0% Write
            Drive Write Cache: Disabled
            Total Cache Size: 256 MB
            Total Cache Memory Available: 208 MB
            No-Battery Write Cache: Disabled
            Battery/Capacitor Count: 0
            SATA NCQ Supported: True

            Array: A
                Interface Type: SAS
                Unused Space: 0 MB
                Status: Failed Physical Drive
                Array Type: Data
                One of the drives on this array have failed or has

                Logical Drive: 1
                    Size: 68.5 GB
                    Fault Tolerance: RAID 1
                    Heads: 255
                    Sectors Per Track: 32
                    Cylinders: 17594
                    Strip Size: 128 KB
                    Full Stripe Size: 128 KB
                    Status: Interim Recovery Mode
                    Caching: Enabled
                    Unique Identifier: 600508B10010503956574F305551
                    Disk Name: \\.\PhysicalDrive0
                    Mount Points: C:\ 68.5 GB
                    Logical Drive Label: A0100539PAFGK0P9VWO0UQ0E93
                    Mirror Group 0: physicaldrive 2I:1:2 (port 2I:box 1:bay 2, S
                    Mirror Group 1: physicaldrive 2I:1:1 (port 2I:box 1:bay 1, S
                    Drive Type: Data

                physicaldrive 2I:1:1
                    Port: 2I
                    Box: 1
                    Bay: 1
                    Status: OK
                    Drive Type: Data
                    Drive Interface Type: SAS
                    Size: 73.5 GB
                    Rotational Speed: 10000
                    Firmware Revision: 0103
                    Serial Number: B379P8C006RK
                    Model: HP DG072A9B7
                    PHY Count: 2
                    PHY Transfer Rate: Unknown, Unknown

                physicaldrive 2I:1:2
                    Port: 2I
                    Box: 1
                    Bay: 2
                    Status: Failed
                    Drive Type: Data
                    Drive Interface Type: SAS
                    Size: 72 GB
                    Rotational Speed: 15000
                    Firmware Revision: HPD9
                    Serial Number: D5A1PCA04SL01244
                    Model: HP EH0072FARUA
                    PHY Count: 2
                    PHY Transfer Rate: Unknown, Unknown

            Array: B
                Interface Type: SAS
                Unused Space: 0 MB
                Status: OK
                Array Type: Data

                Logical Drive: 2
                    Size: 558.7 GB
                    Fault Tolerance: RAID 5
                    Heads: 255
                    Sectors Per Track: 32
                    Cylinders: 65535
                    Strip Size: 64 KB
                    Full Stripe Size: 128 KB
                    Status: OK
                    Caching: Enabled
                    Parity Initialization Status: Initialization Co
                    Unique Identifier: 600508B10010503956574F305551
                    Disk Name: \\.\PhysicalDrive1
                    Mount Points: E:\ 558.7 GB
                    Logical Drive Label: AF14FD12PAFGK0P9VWO0UQD007
                    Drive Type: Data

                physicaldrive 1I:1:5
                    Port: 1I
                    Box: 1
                    Bay: 5
                    Status: OK
                    Drive Type: Data
                    Drive Interface Type: SAS
                    Size: 300 GB
                    Rotational Speed: 10000
                    Firmware Revision: HPD4
                    Serial Number: 3SE07QH300009923X1X3
                    Model: HP DG0300BALVP
                    Current Temperature (C): 32
                    Maximum Temperature (C): 45
                    PHY Count: 2
                    PHY Transfer Rate: Unknown, Unknown

                physicaldrive 2I:1:3
                    Port: 2I
                    Box: 1
                    Bay: 3
                    Status: OK
                    Drive Type: Data
                    Drive Interface Type: SAS
                    Size: 300 GB
                    Rotational Speed: 10000
                    Firmware Revision: HPD4
                    Serial Number: 3SE0AHVH00009924P8F3
                    Model: HP DG0300BALVP
                    Current Temperature (C): 34
                    Maximum Temperature (C): 47
                    PHY Count: 2
                    PHY Transfer Rate: Unknown, Unknown

                physicaldrive 2I:1:4
                    Port: 2I
                    Box: 1
                    Bay: 4
                    Status: OK
                    Drive Type: Data
                    Drive Interface Type: SAS
                    Size: 300 GB
                    Rotational Speed: 10000
                    Firmware Revision: HPD4
                    Serial Number: 3SE08NAK00009924KWD6
                    Model: HP DG0300BALVP
                    Current Temperature (C): 35
                    Maximum Temperature (C): 47
                    PHY Count: 2
                    PHY Transfer Rate: Unknown, Unknown

  • How do I allow mysqld to use more than 24.9% of my cpu?

    - by Joseph Yancey
    I have a Web server running on RHEL that is running Apache and MySQL. It has a quad-core 3.2 GHz Xeon CPU and 8 GB of RAM. Most of the time, we don't have any issues at all. Our web application is very database intensive. When our usage gets pretty heavy, MySQL will peg out at using 24.9% of the CPU; most of the time, it stays below 5%. I have speculated that it is only using one core of the CPU and pegging out that core, but top shows me in the CPU column that mysqld changes cores even while the usage stays at 24.9%. When it does this, MySQL gets painfully slow as it queues up queries. Is there some magic configuration that will tell MySQL to use more CPU when it needs to? Also, any other advice on my configuration would be helpful. We run two applications on this server: one that runs InnoDB but doesn't get much usage (it has been replaced by the other app), and one that runs MyISAM and gets lots of use. Overall, our whole MySQL data directory is something like 13 GB, if that matters at all. Here is my config:

        [root@ProductionLinux root]# cat /etc/my.cnf
        [mysqld]
        server-id = 71
        log-bin = /var/log/mysql/mysql-bin.log
        binlog-do-db = oldapplication
        binlog-do-db = newapplication
        binlog-do-db = support
        thread_cache_size = 30
        key_buffer_size = 256M
        table_cache = 256
        sort_buffer_size = 4M
        read_buffer_size = 1M
        skip-name-resolve
        innodb_data_home_dir = /usr/local/mysql/data/
        innodb_data_file_path = InnoDB:100M:autoextend
        set-variable = innodb_buffer_pool_size=70M
        set-variable = innodb_additional_mem_pool_size=10M
        set-variable = max_connections=500
        innodb_log_group_home_dir = /usr/local/mysql/data
        innodb_log_arch_dir = /usr/local/mysql/data
        set-variable = innodb_log_file_size=20M
        set-variable = innodb_log_buffer_size=8M
        innodb_flush_log_at_trx_commit = 1
        log-queries-not-using-indexes
        log-error = /var/log/mysql/mysql-error.log

        mysql> show variables;
        +---------------------------------+-----------------------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+-----------------------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/ | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 5 | | datadir | /usr/local/mysql/data/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | YES | | have_bdb | NO | | have_blackhole_engine | NO | | have_compress | YES | | have_crypt | YES |
        | have_csv | NO |
        | have_example_engine | NO |
        | have_federated_engine | NO |
        | have_geometry | YES |
        | have_innodb | YES |
        | have_isam | NO |
        | have_ndbcluster | NO |
        | have_openssl | NO |
        | have_query_cache | YES |
        | have_raid | NO |
        | have_rtree_keys | YES |
        | have_symlink | YES |
        | init_connect | |
        | init_file | |
        | init_slave | |
        | innodb_additional_mem_pool_size | 10485760 |
        | innodb_autoextend_increment | 8 |
        | innodb_buffer_pool_awe_mem_mb | 0 |
        | innodb_buffer_pool_size | 73400320 |
        | innodb_checksums | ON |
        | innodb_commit_concurrency | 0 |
        | innodb_concurrency_tickets | 500 |
        | innodb_data_file_path | InnoDB:100M:autoextend |
        | innodb_data_home_dir | /usr/local/mysql/data/ |
        | innodb_doublewrite | ON |
        | innodb_fast_shutdown | 1 |
        | innodb_file_io_threads | 4 |
        | innodb_file_per_table | OFF |
        | innodb_flush_log_at_trx_commit | 1 |
        | innodb_flush_method | |
        | innodb_force_recovery | 0 |
        | innodb_lock_wait_timeout | 50 |
        | innodb_locks_unsafe_for_binlog | OFF |
        | innodb_log_arch_dir | /usr/local/mysql/data |
        | innodb_log_archive | OFF |
        | innodb_log_buffer_size | 8388608 |
        | innodb_log_file_size | 20971520 |
        | innodb_log_files_in_group | 2 |
        | innodb_log_group_home_dir | /usr/local/mysql/data |
        | innodb_max_dirty_pages_pct | 90 |
        | innodb_max_purge_lag | 0 |
        | innodb_mirrored_log_groups | 1 |
        | innodb_open_files | 300 |
        | innodb_support_xa | ON |
        | innodb_sync_spin_loops | 20 |
        | innodb_table_locks | ON |
        | innodb_thread_concurrency | 20 |
        | innodb_thread_sleep_delay | 10000 |
        | interactive_timeout | 28800 |
        | join_buffer_size | 131072 |
        | key_buffer_size | 268435456 |
        | key_cache_age_threshold | 300 |
        | key_cache_block_size | 1024 |
        | key_cache_division_limit | 100 |
        | language | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/share/mysql/english/ |
        | large_files_support | ON |
        | large_page_size | 0 |
        | large_pages | OFF |
        | license | GPL |
        | local_infile | ON |
        | locked_in_memory | OFF |
        | log | OFF |
        | log_bin | ON |
        | log_bin_trust_function_creators | OFF |
        | log_error | /var/log/mysql/mysql-error.log |
        | log_slave_updates | OFF |
        | log_slow_queries | OFF |
        | log_warnings | 1 |
        | long_query_time | 10 |
        | low_priority_updates | OFF |
        | lower_case_file_system | OFF |
        | lower_case_table_names | 0 |
        | max_allowed_packet | 1048576 |
        | max_binlog_cache_size | 18446744073709551615 |
        | max_binlog_size | 1073741824 |
        | max_connect_errors | 10 |
        | max_connections | 500 |
        | max_delayed_threads | 20 |
        | max_error_count | 64 |
        | max_heap_table_size | 16777216 |
        | max_insert_delayed_threads | 20 |
        | max_join_size | 18446744073709551615 |
        | max_length_for_sort_data | 1024 |
        | max_relay_log_size | 0 |
        | max_seeks_for_key | 18446744073709551615 |
        | max_sort_length | 1024 |
        | max_sp_recursion_depth | 0 |
        | max_tmp_tables | 32 |
        | max_user_connections | 0 |
        | max_write_lock_count | 18446744073709551615 |
        | multi_range_count | 256 |
        | myisam_data_pointer_size | 6 |
        | myisam_max_sort_file_size | 9223372036854775807 |
        | myisam_recover_options | OFF |
        | myisam_repair_threads | 1 |
        | myisam_sort_buffer_size | 8388608 |
        | myisam_stats_method | nulls_unequal |
        | net_buffer_length | 16384 |
        | net_read_timeout | 30 |
        | net_retry_count | 10 |
        | net_write_timeout | 60 |
        | new | OFF |
        | old_passwords | OFF |
        | open_files_limit | 2510 |
        | optimizer_prune_level | 1 |
        | optimizer_search_depth | 62 |
        | pid_file | /usr/local/mysql/data/ProductionLinux.pid |
        | port | 3306 |
        | preload_buffer_size | 32768 |
        | protocol_version | 10 |
        | query_alloc_block_size | 8192 |
        | query_cache_limit | 1048576 |
        | query_cache_min_res_unit | 4096 |
        | query_cache_size | 0 |
        | query_cache_type | ON |
        | query_cache_wlock_invalidate | OFF |
        | query_prealloc_size | 8192 |
        | range_alloc_block_size | 2048 |
        | read_buffer_size | 1044480 |
        | read_only | OFF |
        | read_rnd_buffer_size | 262144 |
        | relay_log_purge | ON |
        | relay_log_space_limit | 0 |
        | rpl_recovery_rank | 0 |
        | secure_auth | OFF |
        | server_id | 71 |
        | skip_external_locking | ON |
        | skip_networking | OFF |
        | skip_show_database | OFF |
        | slave_compressed_protocol | OFF |
        | slave_load_tmpdir | /tmp/ |
        | slave_net_timeout | 3600 |
        | slave_skip_errors | OFF |
        | slave_transaction_retries | 10 |
        | slow_launch_time | 2 |
        | socket | /tmp/mysql.sock |
        | sort_buffer_size | 4194296 |
        | sql_mode | |
        | sql_notes | ON |
        | sql_warnings | ON |
        | storage_engine | MyISAM |
        | sync_binlog | 0 |
        | sync_frm | ON |
        | sync_replication | 0 |
        | sync_replication_slave_id | 0 |
        | sync_replication_timeout | 10 |
        | system_time_zone | CST |
        | table_cache | 256 |
        | table_lock_wait_timeout | 50 |
        | table_type | MyISAM |
        | thread_cache_size | 30 |
        | thread_stack | 262144 |
        | time_format | %H:%i:%s |
        | time_zone | SYSTEM |
        | timed_mutexes | OFF |
        | tmp_table_size | 33554432 |
        | tmpdir | |
        | transaction_alloc_block_size | 8192 |
        | transaction_prealloc_size | 4096 |
        | tx_isolation | REPEATABLE-READ |
        | updatable_views_with_limit | YES |
        | version | 5.0.18-standard-log |
        | version_comment | MySQL Community Edition - Standard (GPL) |
        | version_compile_machine | x86_64 |
        | version_compile_os | unknown-linux-gnu |
        | wait_timeout | 28800 |
        +---------------------------------+-----------------------------------------------------------------------------+
        210 rows in set (0.00 sec)
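
    For what it's worth, here is roughly how I check whether queries are stacking up behind MyISAM table locks when the CPU pegs. This is only a sketch, run from a shell on the server itself; the root credentials are placeholders for whatever account you use:

        # A single query only ever runs on one core in MySQL 5.0, so one pegged
        # core usually means one long-running statement, or a queue of statements
        # waiting behind a MyISAM table lock -- not a core limit in the config.
        mysql -u root -p -e "SHOW FULL PROCESSLIST;"

        # If Table_locks_waited climbs quickly relative to Table_locks_immediate
        # while the server is pegged, the MyISAM app is serializing on table locks.
        mysql -u root -p -e "SHOW STATUS LIKE 'Table_locks%';"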

    Read the article

  • Hard drive after PCB swap strange stuff

    - by ramyy
    I've done a PCB swap on my HDD (model WD6400AAKS-00A7B2). The original PCB part number matches the new one in the first three letter groups, though the cache size differs (16 MB original, 8 MB new). The hardware store that did the swap told me it was difficult and that they had to do a firmware adaptation; I can see that the firmware version no longer matches the original (01.03B01 original, 05.04E05 new). Still, the serial number and model of the drive are correct, the drive appears normal in the BIOS, all the partitions show up, and everything looks normal. I have encountered three things, though:

    1. After the swap I left the drive unpowered for two to three weeks, to avoid corrupting the data or anything else the new PCB might cause, until I could buy a new drive and back up the data. Once I got the new drive and powered the old one manually (I have a laptop, so I use a normal desktop power supply and a USB/SATA adapter), I heard the motor start and then a ticking, as if the motor was somehow struggling to spin up; the motor sound and the ticking kept alternating. The same thing happened when I tried powering it again. On the third attempt it started normally and I could see everything. I took the chance and copied all the data over to the new drive. When I was done (after more than 25 hours of continuous operation) I powered the drive off; it has powered up normally every time since, but I am very suspicious now. What could be the problem here? And why did this start now, when the drive powered up normally right after the swap?

    2. I found size differences in some files, including movies, songs, .iso images for games, and programs: "size" is the same, but "size on disk" is slightly larger on the new drive. The files I tried (among those with size differences) worked fine, but it still makes you suspicious about the integrity of the copied data; one cannot test every file in about 580 GB of data by hand (a checksum comparison, sketched below, is one way around that). If I copy those files onto the same partition they live on on the old drive, they come out the same size as when copied to the new drive, so the allocation unit size is not the issue. I also took a sector-by-sector image of a partition (including empty sectors); when I explore it, the file sizes equal the originals on the old drive, but as soon as I copy them anywhere their "size on disk" increases, i.e. becomes equal to what I get when copying from the old drive itself. Why the size difference, and can I trust the integrity of the copied data?

    3. When I connect my new external USB HDD, the partitions of the old HDD unmount and then mount again. Connected are: USB mouse + old HDD, then the external HDD. Why does that happen?

    For context: I compared the SMART reports from right after the swap and from after the copying; no error readings or reallocated sectors were reported (http://www.image-share.com/ijpg-1939-219.html). I later ran both WD Data Lifeguard tests and the drive passed. I'm worried about this drive, since I must be sure the data is fine and safe on the new one; I will treat the old drive as a backup of the new one, since you can't trust anything any more. I apologise for the length of the post, but I couldn't leave out any of the details; this hard drive contains data that is very important to me, and I have to handle the situation with great care.
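
    Since I can't test every file by hand, this is roughly how I plan to verify the copy. It is only a sketch, run from a Linux live environment; /mnt/old and /mnt/new are placeholder mount points for the two drives (on Windows, certutil -hashfile does the same per-file job):

        # Compare content checksums rather than "size on disk": size on disk
        # reflects the filesystem's cluster/allocation bookkeeping, not the
        # bytes in the file, so checksums are the real integrity test.
        # Spot-check one suspect file first (placeholder paths):
        md5sum /mnt/old/games/example.iso /mnt/new/games/example.iso

        # Hash the whole old tree, then verify the copy against the list
        # (this can take many hours for ~580 GB); only mismatches are printed:
        (cd /mnt/old && find . -type f -exec md5sum {} + > /tmp/old.md5)
        (cd /mnt/new && md5sum -c /tmp/old.md5 | grep -v ': OK$')

    If the two trees agree, the size-on-disk difference is just allocation overhead and the copied data itself is intact.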

    Read the article
