Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • Statically initialize anonymous union in C++

    - by wpfwannabe
    I am trying to statically initialize the following structure in Visual Studio 2010: struct Data { int x; union { char ch; const Data* data; }; }; The following fails with error C2440: 'initializing' : cannot convert from 'Data *' to 'char'. static Data d1; static Data d = {1, &d1}; I have found references to some ways this can be initialized properly, but none of them work in VS2010. Any ideas?
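
    A hedged sketch (my own workaround, not from the original post): with aggregate initialization the braces can only reach the first member of an anonymous union, which is why {1, &d1} tries to convert a Data* into ch. If reordering the union members is acceptable, the original initializers compile as-is:

    ```cpp
    // Sketch only: the pointer is listed first so brace initialization can
    // reach it; 'ch' simply becomes the second, uninitialized alternative.
    struct Data {
        int x;
        union {
            const Data* data;  // first union member: the one the braces initialize
            char ch;
        };
    };

    static Data d1;              // zero-initialized (static storage duration)
    static Data d = { 1, &d1 };  // x = 1, data = &d1
    ```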

    Read the article

  • Porting VS2005 project to VS2008

    - by lucavb
    I need to port a VS2005 project (.NET 2.0) to VS2008 (.NET 3.5), or possibly to VS2010 (.NET 4, not yet decided). The project is composed of: resources and configuration files (VS project files such as .settings, .vbproj, .myapp, .config, .xconfig, .Designer.vb); a lot of VB code; .xsc, .xsd, .xss and .xsx files; a lot of Crystal Reports for VS2005; graphical resources. The application pulls data from several SQL Server 2005 database instances in order to generate reports. What is the best way to approach this migration? Is there a built-in migration tool? If so, what is the best practice for using it? Which kinds of files will be ported automatically to the new VS version? Thanks in advance for any information.

    Read the article

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file: Data[] data = { /* my data set */ }; The data set is about 90kb, or roughly 13k elements. I was wondering if there's any downside to doing this, as opposed to loading it in from an external file? I'm not entirely sure how C# works internally, so I just wanted to be aware of any performance issues I might encounter with this method.
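
    A minimal sketch (illustrative names, not from the original project) of the two approaches being compared: a hard-coded static array versus loading the pre-computed set from an external file once at startup. Either way the data ends up as the same array in memory, so the trade-off is mostly start-up cost, assembly size, and how easily the data can change without recompiling:

    ```csharp
    using System;
    using System.IO;
    using System.Linq;

    public static class PrecomputedData
    {
        // Baked into the assembly: the initializer runs once, on first use of the type.
        public static readonly int[] Inline = { 1, 2, 3 /* ... ~13k elements ... */ };

        // Loaded from a file shipped next to the executable: keeps the .cs file small.
        public static readonly int[] FromFile =
            File.ReadAllLines("data.txt").Select(int.Parse).ToArray();
    }
    ```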

    Read the article

  • How to select the previous element of a given class

    - by JCN
    I have a UL list: <ul> <li data-sel='foo'></li> <li></li> <li data-sel='foo'></li> <li class='selected'></li> <li data-sel='foo'></li> </ul> I can access the first previous element of li.selected that does not have the attribute data-sel='foo' by using the :not selector: var element = $('.selected').prev("li:not([data-sel='foo'])"); But how can I access the first previous element of li.selected that does have the attribute data-sel='foo'?
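
    A minimal sketch (not from the original post): .prevAll() returns the preceding siblings nearest-first, so filtering on the attribute and taking the first match gives the closest previous <li> that does carry data-sel='foo':

    ```javascript
    var element = $('.selected').prevAll("li[data-sel='foo']").first();
    ```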

    Read the article

  • Linked List Inserting in sorted format

    - by user2738718
    package practise; public class Node { public int data; public Node next; public Node (int data, Node next) { this.data = data; this.next = next; } public int size (Node list) { int count = 0; while(list != null){ list = list.next; count++; } return count; } public static Node insert(Node head, int value) { Node T; if (head == null || head.data <= value) { T = new Node(value,head); return T; } else { head.next = insert(head.next, value); return head; } } } This works fine for all data values less than the first element (the head); anything greater than that doesn't get added to the list. Please explain in simple terms. Thanks.
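
    A minimal sketch of an ascending insert (my reading of the problem, not the poster's final code): the new node goes in front only when its value is smaller than or equal to the current node's, and the caller must keep the returned head, i.e. head = Node.insert(head, value):

    ```java
    public static Node insert(Node head, int value) {
        if (head == null || value <= head.data) {
            return new Node(value, head);      // insert before the current node
        }
        head.next = insert(head.next, value);  // otherwise keep walking the list
        return head;
    }
    ```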

    Read the article

  • Zend and Jquery (Ajax Post)

    - by Zend_Newbie_Dev
    I'm using Zend Framework, and I would like to get POST data using a jQuery AJAX post to save without refreshing the page. //submit.js $(function() { $('#buttonSaveDetails').click(function (){ var details = $('textarea#details').val(); var id = $('#task_id').val(); $.ajax({ type: 'POST', url: 'http://localhost/myproject/public/module/save', async: false, data: 'id=' + id + '&details=' + details, success: function(responseText) { //alert(responseText) console.log(responseText); } }); }); }); In my controller, I just don't know how to retrieve the POST data from the AJAX call. public function saveAction() { $data = $this->_request->getPost(); echo $id = $data['id']; echo $details = $data['details']; //this won't work; } Thanks in advance.
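
    A Zend Framework 1 style sketch (assuming the standard MVC setup; the class name is illustrative, not from the post): fetch the POST fields from the request object and suppress view rendering so only the AJAX response body is returned:

    ```php
    <?php
    class ModuleController extends Zend_Controller_Action   // illustrative name
    {
        public function saveAction()
        {
            // Render nothing; we only echo the AJAX response body.
            $this->_helper->viewRenderer->setNoRender(true);

            $id      = $this->getRequest()->getPost('id');
            $details = $this->getRequest()->getPost('details');

            // ... persist $id / $details here ...

            $this->getResponse()->setHeader('Content-Type', 'application/json');
            echo json_encode(array('id' => $id, 'details' => $details));
        }
    }
    ```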

    Read the article

  • Question about in-place sort

    - by davit-datuashvili
    For example, we have the following array char data[]=new char[]{'A','S','O','R','T','I','N','G','E','X','A','M','P','L','E'}; and index array int a[]=new int[]{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14}: void insitu(char data[],int a[],int N){ for (int i=0;i<N;i++) { char v=data[i]; int j, k; for (k=i;a[k]!=i;k=a[j],a[j]=j) { j=k; data[k]=data[a[k]]; } data[k]=v; a[k]=k; } } My question is: what is the initial value of j? When I run this code, the compiler asks me to initialize j. What should I do?
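
    A minimal sketch of the same in-situ permutation (based on the classic Sedgewick formulation): j needs no special starting value because it is computed from a[k] before it is ever read, so it can simply be declared where it is assigned:

    ```java
    static void insitu(char[] data, int[] a, int N) {
        for (int i = 0; i < N; i++) {
            char v = data[i];
            int k = i;
            while (a[k] != i) {          // follow the cycle that starts at i
                int j = a[k];            // j is just the next index in the cycle
                data[k] = data[j];
                a[k] = k;
                k = j;
            }
            data[k] = v;                 // close the cycle
            a[k] = k;
        }
    }
    ```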

    Read the article

  • R - Subsetting XTS via Time and Which

    - by user2844947
    Currently, I have an xts object, called Data, which contains a Date and two value columns which are numbers. I would like to get a single number as output: the mean of Value1 over the time period starting at a point where Value2 < mean(Value2) and going forward 14 data points (weeks, in this particular data set). In order to get the dates where Value2 < mean(Value2), I wrote the code below: Data[which(Data$Value2 < mean(Data$Value2)),"Date"] However, I am not sure how to get the mean of Value1 in the period going 14 days forward from each of the resultant dates from the above code.
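
    A minimal sketch (column names as in the question; treating the 14-point window as the current row plus the next 13 is my assumption): take the row positions where Value2 is below its mean, then average Value1 over each such window:

    ```r
    library(xts)

    idx <- which(Data$Value2 < mean(Data$Value2))

    fwd_means <- sapply(idx, function(i) {
      end <- min(i + 13, nrow(Data))      # clip the window at the last row
      mean(Data$Value1[i:end])
    })
    ```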

    Read the article

  • How to prevent negative number in Mysql

    - by Jerry
    Hello guys. I have data which starts from 0 in my database. My PHP will add 1 or -1 to the data depending on the user's input. My problem is that if the data is 0 and a user tries to subtract 1, the data becomes 4294967295, which is the maximum value of an unsigned INT. Are there any ways to make the data stay at 0 even when the user asks for -1? Thanks for the reply. My SQL command is like below: update board set score=score-1 where team='TeamA' // this would generate 4294967295 if the score is 0
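
    A minimal sketch of two ways to clamp the score at zero (table and column names from the question): the first simply refuses to decrement rows that are already at 0, the second computes the new value in signed arithmetic so it can never wrap:

    ```sql
    -- Only decrement rows that still have something to lose.
    UPDATE board
    SET score = score - 1
    WHERE team = 'TeamA' AND score > 0;

    -- Or: always run the update, but never let the result drop below zero.
    UPDATE board
    SET score = GREATEST(CAST(score AS SIGNED) - 1, 0)
    WHERE team = 'TeamA';
    ```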

    Read the article

  • iPhone view architecture question

    - by Joe
    So I have (for instance) three views: A: root view; B: a view functionally identical to the root; C: a data entry view which collects a few pieces of info. What I'm trying to do is reuse C to supply the data it collects to either A or B. It should supply the data to whichever of the two it is pushed onto. The data it collects for A is similar to, but functionally distinct from, what it collects for B. Right now, I'm passing data from C to A or B via a singleton class. What I'm trying to avoid is having two instances of C, one supplying data to A and one to B (because, in actuality, the program will have 5 total views like C). Does the question make sense?
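
    A minimal sketch (Swift, illustrative names; an alternative to the singleton rather than anything from the post): give C a delegate protocol so one C class can hand its data back to whichever controller pushed it, whether that is A, B, or any of the other C-like views:

    ```swift
    import UIKit

    protocol DataEntryDelegate: AnyObject {
        func dataEntry(_ controller: DataEntryViewController,
                       didCollect info: [String: String])
    }

    class DataEntryViewController: UIViewController {
        weak var delegate: DataEntryDelegate?

        func finish(with info: [String: String]) {
            // Hand the collected values back to whoever pushed us.
            delegate?.dataEntry(self, didCollect: info)
        }
    }

    // In A or B, before pushing C:
    //   let c = DataEntryViewController()
    //   c.delegate = self
    //   navigationController?.pushViewController(c, animated: true)
    ```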

    Read the article

  • Export CSV from Mysql

    - by ss888
    Hi, I'm having a bit of trouble exporting a csv file that is created from one of my mysql tables using php. The code I'm using prints the correct data, but I can't see how to download this data in a csv file, providing a download link to the created file. I thought the browser was supposed to automatically provide the file for download, but it doesn't. (Could it be because the below code is called using ajax?) Any help greatly appreciated - code below, S. include('../config/config.php'); //db connection settings $query = "SELECT * FROM isregistered"; $export = mysql_query ($query ) or die ( "Sql error : " . mysql_error( ) ); $fields = mysql_num_fields ( $export ); for ( $i = 0; $i < $fields; $i++ ) { $header .= mysql_field_name( $export , $i ) . "\t"; } while( $row = mysql_fetch_row( $export ) ) { $line = ''; foreach( $row as $value ) { if ( ( !isset( $value ) ) || ( $value == "" ) ) { $value = "\t"; } else { $value = str_replace( '"' , '""' , $value ); $value = '"' . $value . '"' . "\t"; } $line .= $value; } $data .= trim( $line ) . "\n"; } $data = str_replace( "\r" , "" , $data ); if ( $data == "" ) { $data = "\n(0) Records Found!\n"; } //header("Content-type: application/octet-stream"); //have tried all of these at sometime //header("Content-type: text/x-csv"); header("Content-type: text/csv"); //header("Content-type: application/csv"); header("Content-Disposition: attachment; filename=export.csv"); //header("Content-Disposition: attachment; filename=export.xls"); header("Pragma: no-cache"); header("Expires: 0"); echo '<a href="">Download Exported Data</a>'; //want my link to go in here... print "$header\n$data";
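
    A minimal sketch (not the original script) of the usual pattern: point a plain link at the export script so the browser, not AJAX, requests it; send the CSV headers before any output; write rows with fputcsv(); and echo no HTML in the same response:

    ```php
    <?php
    // export.php -- linked from the page as <a href="export.php">Download</a>
    include '../config/config.php';               // db connection settings

    header('Content-Type: text/csv');
    header('Content-Disposition: attachment; filename=export.csv');
    header('Pragma: no-cache');
    header('Expires: 0');

    $result = mysql_query('SELECT * FROM isregistered') or die(mysql_error());
    $out    = fopen('php://output', 'w');

    // Header row: the column names.
    $columns = array();
    for ($i = 0; $i < mysql_num_fields($result); $i++) {
        $columns[] = mysql_field_name($result, $i);
    }
    fputcsv($out, $columns);

    // Data rows.
    while ($row = mysql_fetch_row($result)) {
        fputcsv($out, $row);
    }
    fclose($out);
    ```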

    Read the article

  • Toggling between states on 3 containers in jQuery

    - by Saif Bechan
    Hello, I have a PHP/MySQL/AJAX auction application. There is a section that can have 3 states. State one: the user can place a bid (the Bid button is shown). State two: the user has to wait before they can place a new bid (the number of seconds is shown). State three: the user has an auto-bidding system enabled (the maximum price and amount are shown). I get these values through jQuery AJAX, and I want to present the right section to the user. The code I have is as follows: if(data[i].bidwait >= 0 && data[i].amount == ''){ // In this section the user has no autobiddings, and can place a bidding $(productContainer+' .bid-wrapper').removeClass('none'); $(productContainer+' .bidwait-wrapper').addClass('none'); $(productContainer+' .autobid-wrapper').addClass('none'); }else if(data[i].bidwait < 0 && data[i].amount == ''){ // In this section the user has to wait the amount of sec to place a new bid $(productContainer+' .bid-wrapper').addClass('none'); $(productContainer+' .bidwait-wrapper').removeClass('none'); $(productContainer+' .autobid-wrapper').addClass('none'); $(productContainer+' .bidwait-text').text(data[i].bidwait * -1); }else{ // In the last section the user has the auto bidder enabled, so that is shown $(productContainer+' .bid-wrapper').addClass('none'); $(productContainer+' .bidwait-wrapper').addClass('none'); $(productContainer+' .autobid-wrapper').removeClass('none'); $(productContainer+' .amount').html(data[i].amount == 0 ? '&#8734;' : data[i].amount); $(productContainer+' .maxprice').html(data[i].maxprice == 0 ? '&#8734;' : data[i].maxprice); } This looks like an awful lot of code for something so small. I was wondering if there is an easier way to accomplish such a thing. Speed is a huge issue for me because this has to run every second in the user's browser. If there is no other option, I am just going to remove this feature and go with a non-AJAX approach. If you have been, thank you for reading!
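
    A minimal sketch (not the original code, same class names and comparisons): derive a single state value and let toggleClass() flip the three wrappers in one pass, so each branch no longer repeats the addClass/removeClass boilerplate:

    ```javascript
    var state = data[i].amount != '' ? 'autobid'
              : data[i].bidwait >= 0 ? 'bid'
              : 'bidwait';

    $(productContainer + ' .bid-wrapper').toggleClass('none', state !== 'bid');
    $(productContainer + ' .bidwait-wrapper').toggleClass('none', state !== 'bidwait');
    $(productContainer + ' .autobid-wrapper').toggleClass('none', state !== 'autobid');

    if (state === 'bidwait') {
        $(productContainer + ' .bidwait-text').text(data[i].bidwait * -1);
    } else if (state === 'autobid') {
        $(productContainer + ' .amount').html(data[i].amount == 0 ? '&#8734;' : data[i].amount);
        $(productContainer + ' .maxprice').html(data[i].maxprice == 0 ? '&#8734;' : data[i].maxprice);
    }
    ```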

    Read the article

  • SSIS - user variable used in derived column transform is not available - in some cases

    - by soo
    Unfortunately I don't have a repro for my issue, but I thought I would try to describe it in case it sounds familiar to someone... I am using SSIS 2005, SP2. My package has a package-scope user variable - let's call it user_var. The first step in the control flow is an Execute SQL task which runs a stored procedure. All that SP does is insert a record into a SQL table (with an identity column) and then go back and get the max ID value. The Execute SQL task saves this output into user_var. The control flow then has a Data Flow Task - it goes and gets some source data, has a derived column which sets a column called run_id to user_var, and saves the data to a SQL destination. In most cases (this template is used for many packages, running every day) this all works great. All of the destination records created get set with a correct run_id. However, in some cases, there is a set of the destination data that does not get run_id equal to user_var, but instead gets a value of 0 (0 is the default value for user_var). I have 2 instances where this has happened, but I can't make it happen. In both cases, it was just less than 10,000 records that have run_id = 0. Since SSIS writes data out in 10,000-record blocks, this really makes me think that, for the first block of data written out, user_var was not yet set. Then, after that first block, for the rest of the data, run_id is set to a correct value. But control passed on to my data flow from the Execute SQL task - it would have seemed reasonable to me that it wouldn't go on until the SP has completed and user_var is set. Maybe it just runs the SP, but doesn't wait for it to complete? In both cases where this has happened there seemed to be a few packages hitting the table to get a new user_var at about the same time. And in both cases lots of data was written (40 million rows, 60 million rows) - my thinking is that that means the writes were happening for a while. Sorry to be both long-winded AND vague. A winning combination! Does this sound familiar to anyone? Thanks.
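
    A minimal sketch (illustrative object names, not the actual stored procedure) of the run-id pattern described above, returning the new identity as a single clean result set that the Execute SQL task can map to user_var:

    ```sql
    CREATE PROCEDURE dbo.GetNewRunId
    AS
    BEGIN
        SET NOCOUNT ON;   -- keep row-count messages out of the result

        INSERT INTO dbo.RunLog (CreatedAt) VALUES (GETDATE());

        -- SCOPE_IDENTITY() returns the identity just inserted in this scope,
        -- so concurrent packages cannot pick up each other's MAX(id).
        SELECT CAST(SCOPE_IDENTITY() AS int) AS run_id;
    END
    ```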

    Read the article

  • django - where to clean extra whitespace from form field inputs?

    - by Westerley
    I've just discovered that Django doesn't automatically strip out extra whitespace from form field inputs, and I think I understand the rationale ('frameworks shouldn't be altering user input'). I think I know how to remove the excess whitespace using python's re: #data = re.sub('\A\s+|\s+\Z', '', data) data = data.strip() data = re.sub('\s+', ' ', data) The question is where should I do this? Presumably this should happen in one of the form's clean stages, but which one? Ideally, I would like to clean all my fields of extra whitespace. If it should be done in the clean_field() method, that would mean I would have to have a lot of clean_field() methods that basically do the same thing, which seems like a lot of repetition. If not the form's cleaning stages, then perhaps in the model that the form is based on? Thanks for your help! W.
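
    A minimal sketch (not from the original post): one place this can live is an overridden clean() on the form, which strips and collapses whitespace for every string value instead of requiring a clean_<field>() method per field:

    ```python
    import re

    from django import forms


    class WhitespaceCleaningForm(forms.Form):
        """Base form that normalizes whitespace in every string field."""

        def clean(self):
            cleaned = super(WhitespaceCleaningForm, self).clean()
            for name, value in list(cleaned.items()):
                if isinstance(value, str):
                    cleaned[name] = re.sub(r'\s+', ' ', value).strip()
            return cleaned
    ```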

    Read the article

  • In SQL Server 2000, how to delete the specified rows in a table that does not have a primary key?

    - by Yousui
    Hi, Let's say we have a table with some data in it. IF OBJECT_ID('dbo.table1') IS NOT NULL BEGIN DROP TABLE dbo.table1; END CREATE TABLE table1 ( DATA INT ); --------------------------------------------------------------------- -- Generating testing data --------------------------------------------------------------------- INSERT INTO dbo.table1(data) SELECT 100 UNION ALL SELECT 200 UNION ALL SELECT NULL UNION ALL SELECT 400 UNION ALL SELECT 400 UNION ALL SELECT 500 UNION ALL SELECT NULL; How can I delete the 2nd, 5th, and 6th records in the table? The order is defined by the following query. SELECT data FROM dbo.table1 ORDER BY data DESC; Note, this is in a SQL Server 2000 environment. Thanks.
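
    A minimal sketch of one SQL Server 2000 approach (there is no ROW_NUMBER() in that version): number the rows in a temp table with an IDENTITY column, then rebuild table1 without positions 2, 5 and 6:

    ```sql
    CREATE TABLE #ordered (rn INT IDENTITY(1, 1), data INT);

    -- INSERT ... SELECT ... ORDER BY assigns the identity values in sort order.
    INSERT INTO #ordered (data)
    SELECT data FROM dbo.table1 ORDER BY data DESC;

    DELETE FROM dbo.table1;

    INSERT INTO dbo.table1 (data)
    SELECT data FROM #ordered WHERE rn NOT IN (2, 5, 6);

    DROP TABLE #ordered;
    ```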

    Read the article

  • jQuery/JSON/PHP failing

    - by user730936
    I am trying to call a PHP script that accepts JSON data, writes it into a file, and returns a simple text response, using a jQuery/AJAX call. jQuery code: $("input.callphp").click(function() { var url_file = "myurl"; $.ajax({type : "POST", url : url_file, data : {'puzzle': 'Reset!'}, success : function(data){ alert("Success: " + data); }, error : function (data) { alert("Error: " + data); }, dataType : 'text' }); }); PHP code: <?php $thefile = "new.json"; /* Our filename as defined earlier */ $towrite = $_POST["puzzle"]; /* What we'll write to the file */ $openedfile = fopen($thefile, "w"); fwrite($openedfile, $towrite); fclose($openedfile); echo "<br> <br>".$towrite; ?> However, the call never succeeds and always gives an error with the alert "Error: [object Object]".
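
    A minimal sketch (same request, not a fix by itself): the first argument of the error callback is the jqXHR object, so logging its status and responseText shows what actually came back instead of alerting "[object Object]":

    ```javascript
    $.ajax({
        type: 'POST',
        url: url_file,
        data: { puzzle: 'Reset!' },
        dataType: 'text',
        success: function (data) {
            alert('Success: ' + data);
        },
        error: function (jqXHR, textStatus, errorThrown) {
            console.log(jqXHR.status, jqXHR.responseText, textStatus, errorThrown);
        }
    });
    ```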

    Read the article

  • How to version SQL Server schema using VS 2005?

    - by Mike
    I am new to C# programming and am coming to it most recently from working with Ruby on Rails. In RoR, I am used to being able to write schema migrations for the database. I would like to be able to do something similar for my C#/SQLServer projects. Does such a tool exist for the VS 2005 toolset? Would it be wise to use RoR migrations with SQL Server directly outside of VS 2005? In other words, I would handle all schema versioning using ActiveRecord:Migration from Rails but nothing else. If I do handle migrations outside of C# and VS 2005 with another tool, is RoR ActiveRecord:Migration the best thing to use or is there something which is a better fit?

    Read the article

  • Unable to return/process JSON in JQuery $.get()

    - by shyam
    I have a problem returning/processing JSON data when calling the $.get() function. Here's the code: jQuery: $.get("script.php?", function(data) { if (data.status) { alert('ok'); } else { alert(data.error); } },'json'); PHP: if ($r) { $ret = array("status"=>true); } else { $ret = array("status"=>false,"error"=>$error); } echo json_encode( $ret ); So that is the code, but the response is always treated as a string in jQuery; data.status and data.error are undefined.
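
    A minimal sketch (not the original script): sending an explicit JSON content-type and making sure nothing else is echoed lets jQuery's 'json' dataType parse the body into an object instead of leaving it as a string ($r and $error are the variables from the question):

    ```php
    <?php
    header('Content-Type: application/json');

    if ($r) {
        $ret = array('status' => true);
    } else {
        $ret = array('status' => false, 'error' => $error);
    }

    echo json_encode($ret);
    exit;   // nothing after the JSON body
    ```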

    Read the article

  • C++ - Implementing my own stream

    - by HardCoder1986
    Hello! My problem can be described the following way: I have some data which actually is an array and could be represented as char* data with some size I also have some legacy code (function) that takes some abstract std::istream object as a param and uses that stream to retrieve data to operate. So, my question is the following - what would be the easy way to map my data to some std::istream object so that I can pass it to my function? I thought about creating a std::stringstream object from my data, but that means copying and (as I assume) isn't the best solution. Any ideas how this could be done so that my std::istream operates on the data directly? Thank you.
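
    A minimal sketch (not from the original post) of the no-copy route: derive a small std::streambuf that points setg() at the existing buffer, and construct an std::istream over it:

    ```cpp
    #include <cstddef>
    #include <istream>
    #include <streambuf>

    // Wraps an existing char buffer without copying it.
    struct membuf : std::streambuf {
        membuf(char* data, std::size_t size) {
            setg(data, data, data + size);   // begin, current, end
        }
    };

    // Usage:
    //   membuf buf(data, size);
    //   std::istream in(&buf);
    //   legacy_function(in);   // reads straight from 'data'
    ```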

    Read the article

  • Merging DataTable(s) Column by Column.

    - by Omky
    Hello all, I want to merge two or more DataTables column by column. I am developing a C# Windows application. My use case is below: I have an empty data grid in my application. The user will drag and drop one column from the available-columns list box into the data grid, and the grid will start displaying data for that column. Then the user drags another column into the data grid, and the grid should now show the data for both columns. This repeats until the user feels that they have dropped all the necessary columns. What is the best way to do this? Are there any performance hits with a large number of rows, typically 1 million? Please help. Thanks, Omky
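
    A minimal sketch (illustrative names, and it assumes the tables share a key column, which the question does not state): keep one DataTable bound to the grid and, on each drop, add the dragged column and copy its values across by key:

    ```csharp
    using System.Collections.Generic;
    using System.Data;

    static class GridColumns
    {
        public static void AddColumn(DataTable gridTable, DataTable source,
                                     string keyColumn, string newColumn)
        {
            gridTable.Columns.Add(newColumn, source.Columns[newColumn].DataType);

            // Index the source rows by key once: O(n) instead of O(n^2) lookups.
            var byKey = new Dictionary<object, DataRow>();
            foreach (DataRow row in source.Rows)
                byKey[row[keyColumn]] = row;

            foreach (DataRow row in gridTable.Rows)
            {
                DataRow match;
                if (byKey.TryGetValue(row[keyColumn], out match))
                    row[newColumn] = match[newColumn];
            }
        }
    }
    ```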

    Read the article

  • Byte manipulation in PHP

    - by Michael Angstadt
    In PHP, if you have a variable with binary data, how do you get specific bytes from the data? For example, if I have some data that is 30 bytes long, how do I get the first 8 bytes? Right now, I'm treating it like a string, using the substr() function: $data = //... $first8Bytes = substr($data, 0, 8); Is it safe to use substr with binary data? Or are there other functions that I should be using? Thanks.
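
    A minimal sketch (not from the original post): PHP strings are binary-safe byte arrays, so substr() is fine for slicing raw bytes; ord() and unpack() are the usual tools when the bytes need to be read as numbers:

    ```php
    <?php
    $data = file_get_contents('blob.bin');             // illustrative source of raw bytes

    $first8Bytes = substr($data, 0, 8);                // raw 8-byte slice, no conversion
    $firstByte   = ord($data[0]);                      // one byte as an integer 0-255
    $ints        = unpack('Nhigh/Nlow', $first8Bytes); // two big-endian uint32 values
    ```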

    Read the article

  • trying to work through a list in sections

    - by user1714887
    I have a list of lists sorted by the second value of each inner list (the group). I now need to iterate through this to work on one "group" at a time. The data is [name, group, data1, data2, data3, data4]. I wasn't sure if I need a while loop or some other sort of loop, or maybe groupby, but I've never used that. Any help would be appreciated. for i in range (int(max_group)): x1 = [] x2 = [] x3 = [] x4 = [] if data[i][1] == i+1: x1.append(data[2]) x2.append(data[3]) x3.append(data[4]) x4.append(data[5]) print x1 print 'next' # these are just to test where we're at
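
    A minimal sketch (not the original loop): since the rows are already sorted by the group field, itertools.groupby walks them one group at a time, keyed on index 1 of [name, group, data1, data2, data3, data4]:

    ```python
    from itertools import groupby
    from operator import itemgetter

    for group, rows in groupby(data, key=itemgetter(1)):
        rows = list(rows)
        x1 = [row[2] for row in rows]
        x2 = [row[3] for row in rows]
        x3 = [row[4] for row in rows]
        x4 = [row[5] for row in rows]
        print(group, x1)   # just to see where we are, as in the original
    ```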

    Read the article

  • Why is iTunes starting and stopping play randomly, and how do I stop it?

    - by Chris R
    Since yesterday morning my copy of iTunes has been starting and stopping randomly. If iTunes is not running, then it opens and sometimes begins playing, other times sits idle. Eventually, after a random interval it will begin playing a song, and then stop, and so on... Needless to say, it's driving me mad. (Mac OSX, 10.6.3, on a new-ish (< 1 year old) 24" iMac) I've made five changes to my system that may or may not be connected to this: My office phone was replaced with a Linksys IP Phone, which necessitated a change to my networking; where previously my Mac was connected directly to the office network port, now it is connected through the phone. My network connection now uses auto link detection in lieu of forcing 100Mbit I unpaired my bluetooth headset. I removed the USB audio device associated with another headset. I upgraded to Safari 5. I don't use it as a primary browser, but it's often open to run web apps that I'm developing. All of these things happened in pretty close proximity to each other, so one or more of them may be the culprit. One other thing that may or may not be related; for some reason my built-in microphone is no longer picking up audio. It seems like this might be connected to the iTunes issue, because it happened around the same time. In terms of things that I've tried in order to solve this, I'm at a bit of a loss. I followed the instructions at http://developer.apple.com/mac/library/technotes/tn2004/tn2124.html#SECLAUNCHDLOGGING to enable detailed launchd logging to see if I could track down which process was asking iTunes to open (when it's not already open) but I wasn't able to make heads or tails of the output. I'm not even sure if I'm looking in the right place, to be honest; it actually acts like something is activating the application with AppleScript, but I have no processes running that are doing that, as far as I know. I'm running a few apps that have iTunes integration: Adium, iChat with Chax, Quicksilver. None of these have been changed lately, so I consider them low risks of causing this, but it's not impossible. Moreover, I'm not using any of those features intentionally. This is a snippet of launchd debug logging from around the time it just launched: 10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) Dispatching kevent callback. 
10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) EVFILT_PROC event for job: 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x1004076f0 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) fork()ed 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Conceived 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Created PID 22197 anonymously by PPID 26 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.CoreServices.coreservicesd 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.CoreServices.coreservicesd 10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_EXIT 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Dispatching kevent callback. 
10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) EVFILT_PROC event for job: 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100401720 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_EXIT 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Reaping 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Total rusage: utime 0.000000 stime 0.000000 maxrss 0 ixrss 0 idrss 0 isrss 0 minflt 0 majflt 0 nswap 0 inblock 0 oublock 0 msgsnd 0 msgrcv 0 nsignals 0 nvcsw 0 nivcsw 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Removed 10-06-09 9:14:30 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:30 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR|EV_EOF|EV_ONESHOT fflags = NOTE_REAP 10-06-09 9:14:32 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:32 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:33 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:33 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 143 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Dispatching kevent callback. 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) EVFILT_PROC event for job: 10-06-09 9:14:33 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10041e9a0 data = 0x0 ident = 143 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) fork()ed 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.distributed_notifications.2 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.distributed_notifications.2 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.CoreServices.coreservicesd 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.CoreServices.coreservicesd 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.SystemConfiguration.configd 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.SystemConfiguration.configd 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.audio.coreaudiod 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: 
com.apple.audio.coreaudiod 10-06-09 9:14:34 AM com.apple.launchd[1] System: Looking up service com.apple.system.logger 10-06-09 9:14:34 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.logger 10-06-09 9:14:35 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:35 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:35 AM com.apple.launchd[1] System: Looking up service com.apple.DiskArbitration.diskarbitrationd 10-06-09 9:14:35 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.DiskArbitration.diskarbitrationd 10-06-09 9:14:35 AM com.apple.launchd[1] System: Looking up service com.apple.system.logger 10-06-09 9:14:35 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.logger 10-06-09 9:14:36 AM com.apple.launchd[1] System: Looking up service com.apple.FSEvents 10-06-09 9:14:36 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.FSEvents 10-06-09 9:14:36 AM com.apple.launchd[1] System: Looking up service com.apple.SystemConfiguration.configd 10-06-09 9:14:36 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.SystemConfiguration.configd 10-06-09 9:14:38 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:38 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:39 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:39 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) Dispatching kevent callback. 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) EVFILT_PROC event for job: 10-06-09 9:14:39 AM com.apple.launchd[1] KEVENT[0]: udata = 0x1004076f0 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) fork()ed 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Conceived 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Created PID 22211 anonymously by PPID 26 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Looking up per user launchd for UID: 0 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Per user launchd job found for UID: 505 10-06-09 9:14:39 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Looking up per user launchd for UID: 0 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Per user launchd job found for UID: 505 10-06-09 9:14:39 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */} error: function (data) { /* handle errors */} }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since the requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, freqently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?
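
    A speculative sketch (an assumption about the setup, not a diagnosis from the post): if the long-running handler and the /process-status handler both use PHP sessions, the per-session file lock serializes their requests regardless of the async flag; releasing the lock early in the slow handler lets the polling requests respond while it runs:

    ```php
    <?php
    session_start();
    $process = isset($_GET['process']) ? $_GET['process'] : null;  // illustrative parameter

    // Read whatever session data is needed, then release the lock so other
    // requests from the same browser (the status polls) are not blocked.
    session_write_close();

    // ... the long-running work continues here ...
    ```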

    Read the article

  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more than the manufacturers would like to admit. Google did a study that indicates that certain raw data attributes that the S.M.A.R.T. status of hard drives reports can have a strong correlation with the future failure of the drive. We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. Seagate seems like it is trying to obscure this information about their drives by claiming that only their software can accurately determine the status of their drives, and by the way, their software will not tell you the raw data values for the S.M.A.R.T. attributes. Western Digital has made no such claim to my knowledge, but their status reporting tool does not appear to report raw data values either. I've been using HD Tune and smartctl from smartmontools in order to gather the raw data values for each attribute. I've found that indeed... I am comparing apples to oranges when it comes to certain attributes. I've found, for example, that most Seagate drives will report that they have many millions of read errors while Western Digital 99% of the time shows 0 for read errors. I've also found that Seagate will report many millions of seek errors while Western Digital always seems to report 0. Now for my question: how do I normalize this data? Is Seagate producing millions of errors while Western Digital is producing none? Wikipedia's article on S.M.A.R.T. status says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the Read error count from the ECC Recovered count, you'll probably end up with 0. This seems to be equivalent to Western Digital's reported "Read Error" count. This means that Western Digital only reports read errors that it cannot correct, while Seagate counts up all read errors and tells you how many of those it was able to fix. I had a Seagate drive where the ECC Recovered count was less than the Read error count, and I noticed that many of my files were becoming corrupt. This is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information.
Here is the smart status of my western digital drive just so you can see what I'm talking about: james@ubuntu:~$ sudo smartctl -a /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: WDC WD1001FALS-00E3A0 Serial Number: WD-WCATR0258512 Firmware Version: 05.01D05 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Jun 10 19:52:28 2010 PDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 179 175 021 Pre-fail Always - 4033 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 270 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 1468 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 262 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 223 194 Temperature_Celsius 0x0022 105 102 000 Old_age Always - 42 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0

    Read the article
