Search Results

Search found 18954 results on 759 pages for 'connection reset'.


  • What is the best strategy for populating a TableView from a service?

    - by alrutherford
    I have an application which has a potentially long-running background process. I want this process to populate a TableView as result objects are generated. The result objects are added to an ObservableList and have properties which are bound to the columns in the usual fashion for JavaFX. As an example, consider the following sample code.

    Main Application

        import java.util.LinkedList;

        import javafx.application.Application;
        import javafx.collections.FXCollections;
        import javafx.collections.ObservableList;
        import javafx.event.ActionEvent;
        import javafx.event.EventHandler;
        import javafx.geometry.Insets;
        import javafx.scene.Group;
        import javafx.scene.Scene;
        import javafx.scene.control.Button;
        import javafx.scene.control.TableColumn;
        import javafx.scene.control.TableView;
        import javafx.scene.control.cell.PropertyValueFactory;
        import javafx.scene.layout.VBox;
        import javafx.stage.Stage;

        public class DataViewTest extends Application {

            private TableView<ServiceResult> dataTable = new TableView<ServiceResult>();
            private ObservableList<ServiceResult> observableList;
            private ResultService resultService;

            public static void main(String[] args) {
                launch(args);
            }

            @Override
            public void start(Stage stage) {
                observableList = FXCollections.observableArrayList(new LinkedList<ServiceResult>());
                resultService = new ResultService(observableList);

                Button refreshBtn = new Button("Update");
                refreshBtn.setOnAction(new EventHandler<ActionEvent>() {
                    @Override
                    public void handle(ActionEvent arg0) {
                        observableList.clear();
                        resultService.reset();
                        resultService.start();
                    }
                });

                TableColumn<ServiceResult, String> nameCol = new TableColumn<ServiceResult, String>("Value");
                nameCol.setCellValueFactory(new PropertyValueFactory<ServiceResult, String>("value"));
                nameCol.setPrefWidth(200);
                dataTable.getColumns().setAll(nameCol);
                // productTable.getItems().addAll(products);
                dataTable.setItems(observableList);

                Scene scene = new Scene(new Group());
                stage.setTitle("Table View Sample");
                stage.setWidth(300);
                stage.setHeight(500);

                final VBox vbox = new VBox();
                vbox.setSpacing(5);
                vbox.setPadding(new Insets(10, 0, 0, 10));
                vbox.getChildren().addAll(refreshBtn, dataTable);

                ((Group) scene.getRoot()).getChildren().addAll(vbox);
                stage.setScene(scene);
                stage.show();
            }
        }

    Service

        import javafx.collections.ObservableList;
        import javafx.concurrent.Service;
        import javafx.concurrent.Task;

        public class ResultService extends Service<Void> {

            public static final int ITEM_COUNT = 100;

            private ObservableList<ServiceResult> observableList;

            /**
             * Construct service.
             */
            public ResultService(ObservableList<ServiceResult> observableList) {
                this.observableList = observableList;
            }

            @Override
            protected Task<Void> createTask() {
                return new Task<Void>() {
                    @Override
                    protected Void call() throws Exception {
                        process();
                        return null;
                    }
                };
            }

            public void process() {
                for (int i = 0; i < ITEM_COUNT; i++) {
                    observableList.add(new ServiceResult(i));
                }
            }
        }

    Data

        import javafx.beans.property.IntegerProperty;
        import javafx.beans.property.SimpleIntegerProperty;

        public class ServiceResult {

            private IntegerProperty valueProperty;

            /**
             * Construct property object.
             */
            public ServiceResult(int value) {
                valueProperty = new SimpleIntegerProperty();
                setValue(value);
            }

            public int getValue() {
                return valueProperty.get();
            }

            public void setValue(int value) {
                this.valueProperty.set(value);
            }

            public IntegerProperty valueProperty() {
                return valueProperty;
            }
        }

    Both the service and the TableView share a reference to the observable list. Is this good practice in JavaFX, and if not, what is the correct strategy? If you hit the 'Update' button the list will not always refresh to the full ITEM_COUNT length.
I believe this is because the observableList.clear() is interfering with the update which is running in the background thread. Can anyone shed some light on this?
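    My working theory is that the list may only be mutated on the FX Application Thread, so the background Task should hand each add over with Platform.runLater. A minimal sketch of that variant (SafeResultService is just an illustrative name, not an existing class):

        import javafx.application.Platform;
        import javafx.collections.ObservableList;
        import javafx.concurrent.Service;
        import javafx.concurrent.Task;

        // Sketch: the background thread never touches the bound list directly.
        public class SafeResultService extends Service<Void> {

            public static final int ITEM_COUNT = 100;

            private final ObservableList<ServiceResult> observableList;

            public SafeResultService(ObservableList<ServiceResult> observableList) {
                this.observableList = observableList;
            }

            @Override
            protected Task<Void> createTask() {
                return new Task<Void>() {
                    @Override
                    protected Void call() throws Exception {
                        for (int i = 0; i < ITEM_COUNT; i++) {
                            final ServiceResult result = new ServiceResult(i);
                            // Queue the mutation onto the FX Application Thread,
                            // where the clear() from the button handler also runs.
                            Platform.runLater(new Runnable() {
                                @Override
                                public void run() {
                                    observableList.add(result);
                                }
                            });
                        }
                        return null;
                    }
                };
            }
        }

    With both clear() and add() serialized on the FX thread, the race between the button handler and the running task should disappear.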


  • Webkit browser jQuery transformations/transitions not working with jSplitSlider

    - by user3689793
    I am helping to build a site and i'm having an issue with the functionality of an add-in called jsplitslider when running it in chrome. Right now, when I navigate between the slides, the div's get stuck on top of each other and never clear the webkit transformations/animations: <div class="sl-content-slice" style="transition: all 800ms ease-in-out; -webkit-transition: all 800ms ease-in-out;"> I think the problem is due to timing of the functions, but I can't seem to figure out where I would need to add a setTimeout(). I only think this because I exhausted a lot of the other options like display: inline-block, notransitions css, etc. I'm desperate to figure out how to make this work in chrome. It works in FF and IE(surprisingly enough). I'm not great at webcoding, so any help will be appreciated! The code on the site isn't minimized. Here is the jQuery where I think the problem lies: var cssStyle = config.orientation === 'horizontal' ? { marginTop : -this.size.height / 2 } : { marginLeft : -this.size.width / 2 }, // default slide's slices style resetStyle = { 'transform' : 'translate(0%,0%) rotate(0deg) scale(1)', opacity : 1 }, // slice1 style slice1Style = config.orientation === 'horizontal' ? { 'transform' : 'translateY(-' + this.options.translateFactor + '%) rotate(' + config.slice1angle + 'deg) scale(' + config.slice1scale + ')' } : { 'transform' : 'translateX(-' + this.options.translateFactor + '%) rotate(' + config.slice1angle + 'deg) scale(' + config.slice1scale + ')' }, // slice2 style slice2Style = config.orientation === 'horizontal' ? { 'transform' : 'translateY(' + this.options.translateFactor + '%) rotate(' + config.slice2angle + 'deg) scale(' + config.slice2scale + ')' } : { 'transform' : 'translateX(' + this.options.translateFactor + '%) rotate(' + config.slice2angle + 'deg) scale(' + config.slice2scale + ')' }; if( this.options.optOpacity ) { slice1Style.opacity = 0; slice2Style.opacity = 0; } // we are adding the classes sl-trans-elems and sl-trans-back-elems to the slide that is either coming "next" // or going "prev" according to the direction. // the idea is to make it more interesting by giving some animations to the respective slide's elements //( dir === 'next' ) ? 
$nextSlide.addClass( 'sl-trans-elems' ) : $currentSlide.addClass( 'sl-trans-back-elems' ); $currentSlide.removeClass( 'sl-trans-elems' ); var transitionProp = { 'transition' : 'all ' + this.options.speed + 'ms ease-in-out' }; // add the 2 slices and animate them $movingSlide.css( 'z-index', this.slidesCount ) .find( 'div.sl-content-wrapper' ) .wrap( $( '<div class="sl-content-slice" />' ).css( transitionProp ) ) .parent() .cond( dir === 'prev', function() { var slice = this; this.css( slice1Style ); setTimeout( function() { slice.css( resetStyle ); }, 150 ); }, function() { var slice = this; setTimeout( function() { slice.css( slice1Style ); }, 150 ); } ) .clone() .appendTo( $movingSlide ) .cond( dir === 'prev', function() { var slice = this; this.css( slice2Style ); setTimeout( function() { $currentSlide.addClass( 'sl-trans-back-elems' ); if( self.support ) { slice.css( resetStyle ).on( self.transEndEventName, function() { self._onEndNavigate( slice, $currentSlide, dir ); } ); } else { self._onEndNavigate( slice, $currentSlide, dir ); } }, 150 ); }, function() { var slice = this; setTimeout( function() { $nextSlide.addClass( 'sl-trans-elems' ); if( self.support ) { slice.css( slice2Style ).on( self.transEndEventName, function() { self._onEndNavigate( slice, $currentSlide, dir ); } ); } else { self._onEndNavigate( slice, $currentSlide, dir ); } }, 150 ); } ) .find( 'div.sl-content-wrapper' ) .css( cssStyle ); $nextSlide.show(); }, _validateValues : function( config ) { // OK, so we are restricting the angles and scale values here. // This is to avoid the slices wrong sides to be shown. // you can adjust these values as you wish but make sure you also ajust the // paddings of the slides and also the options.translateFactor value and scale data attrs if( config.slice1angle > this.options.maxAngle || config.slice1angle < -this.options.maxAngle ) { config.slice1angle = this.options.maxAngle; } if( config.slice2angle > this.options.maxAngle || config.slice2angle < -this.options.maxAngle ) { config.slice2angle = this.options.maxAngle; } if( config.slice1scale > this.options.maxScale || config.slice1scale <= 0 ) { config.slice1scale = this.options.maxScale; } if( config.slice2scale > this.options.maxScale || config.slice2scale <= 0 ) { config.slice2scale = this.options.maxScale; } if( config.orientation !== 'vertical' && config.orientation !== 'horizontal' ) { config.orientation = 'horizontal' } }, _onEndNavigate : function( $slice, $oldSlide, dir ) { // reset previous slide's style after next slide is shown var $slide = $slice.parent(), removeClasses = 'sl-trans-elems sl-trans-back-elems'; // remove second slide's slice $slice.remove(); // unwrap.. $slide.css( 'z-index', 10 ) .find( 'div.sl-content-wrapper' ) .unwrap(); // hide previous current slide $oldSlide.hide().removeClass( removeClasses ); $slide.removeClass( removeClasses ); // now we can navigate again.. this.isAnimating = false; this.options.onAfterChange( $slide, this.current ); }, Sorry if I missed any conventions when posting, this is my first S.O. post. Thanks in advance for any help.
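    My current guess is that the fixed 150ms setTimeout() races WebKit's style recalculation: if the start styles have not been flushed when the end styles land, the transition never fires and the slices stay stacked. A pattern I would try (applyWithFlush is my own hypothetical helper, not part of jSplitSlider):

        // Sketch: commit startStyle, force a synchronous reflow so WebKit
        // registers it, then apply endStyle on the next animation frame.
        function applyWithFlush( $el, startStyle, endStyle ) {
            $el.css( startStyle );
            // Reading offsetWidth forces layout, flushing the start styles.
            void $el[ 0 ].offsetWidth;
            window.requestAnimationFrame( function() {
                $el.css( endStyle );
            } );
        }

    The idea is to replace each "css then setTimeout 150ms" pair with a deterministic flush-then-animate step instead of a guessed delay.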


  • Different behavior for REF CURSOR between Oracle 10g and 11g when unique index present?

    - by wweicker
    Description I have an Oracle stored procedure that has been running for 7 or so years both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It has worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table then completes. In 10g the cursor will contain the expected results but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1 - all running on Windows). The specific code is much more complicated but to explain the issue in somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean (NUMBER(1) where 0 is false and 1 is true) column indicating whether that data has been processed yet. The view is limited to only show rows in that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc). So at the time when the cursor is opened, the view shows one or more rows, then after the cursor is opened an update statement runs to flip the flag in the header table, a commit is issued, then the procedure completes. On 10g, the cursor opens, it contains the row, then the update statement flips the flag and running the procedure a second time would yield no data. On 11g, the cursor never contains the row, it's as if the cursor does not open until after the update statement runs. I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions and whether the issue can be resolved without code changes. Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g the issue is reproducible 100% of the time regardless of whether I'm running the real world code against the actual objects or the following simple example. Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show the problem exists even when querying directly against the table. 
Simple Example CREATE TABLE tbl1 ( col1 VARCHAR2(10), col2 NUMBER(1) ); INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0); /* View is no longer required to demonstrate the problem CREATE OR REPLACE VIEW vw1 (col1, col2) AS SELECT col1, col2 FROM tbl1 WHERE col2 = 0; */ CREATE OR REPLACE PACKAGE pkg1 AS TYPE refWEB_CURSOR IS REF CURSOR; PROCEDURE proc1 (crs OUT refWEB_CURSOR); END pkg1; CREATE OR REPLACE PACKAGE BODY pkg1 IS PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS BEGIN OPEN crs FOR SELECT col1 FROM tbl1 WHERE col1 = 'TEST1' AND col2 = 0; UPDATE tbl1 SET col2 = 1 WHERE col1 = 'TEST1'; COMMIT; END proc1; END pkg1; Anonymous Block Demo DECLARE crs1 pkg1.refWEB_CURSOR; TYPE rectype1 IS RECORD ( col1 vw1.col1%TYPE ); rec1 rectype1; BEGIN pkg1.proc1 ( crs1 ); DBMS_OUTPUT.PUT_LINE('begin first test'); LOOP FETCH crs1 INTO rec1; EXIT WHEN crs1%NOTFOUND; DBMS_OUTPUT.PUT_LINE(rec1.col1); END LOOP; DBMS_OUTPUT.PUT_LINE('end first test'); END; /* After creating this index, the problem is seen */ CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1); /* Reset data to initial values */ TRUNCATE TABLE tbl1; INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0); DECLARE crs1 pkg1.refWEB_CURSOR; TYPE rectype1 IS RECORD ( col1 vw1.col1%TYPE ); rec1 rectype1; BEGIN pkg1.proc1 ( crs1 ); DBMS_OUTPUT.PUT_LINE('begin second test'); LOOP FETCH crs1 INTO rec1; EXIT WHEN crs1%NOTFOUND; DBMS_OUTPUT.PUT_LINE(rec1.col1); END LOOP; DBMS_OUTPUT.PUT_LINE('end second test'); END; Example of what the output on 10g would be:   begin first test   TEST1   end first test   begin second test   TEST1   end second test Example of what the output on 11g would be:   begin first test   TEST1   end first test   begin second test   end second test Clarification I can't remove the COMMIT because in the real world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure it will issue an implicit COMMIT when disconnecting from the database anyways. So if I remove the COMMIT in the procedure then yes, the anonymous block demo would work but the real world scenario would not because the COMMIT would still happen. Question Why is 11g behaving differently? Is there anything I can do other than re-write the code?
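    A workaround I am considering (sketch only; t_col1_tab is a schema-level collection type I would have to create) is to materialize the rows before the UPDATE and COMMIT, so the cursor no longer depends on read consistency at fetch time:

        CREATE TYPE t_col1_tab IS TABLE OF VARCHAR2(10);
        /

        CREATE OR REPLACE PACKAGE BODY pkg1 IS
            PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS
                v_rows t_col1_tab;
            BEGIN
                -- Fetch the result set up front, before any DML runs.
                SELECT col1 BULK COLLECT INTO v_rows
                  FROM tbl1
                 WHERE col1 = 'TEST1' AND col2 = 0;

                UPDATE tbl1 SET col2 = 1 WHERE col1 = 'TEST1';
                COMMIT;

                -- Open the cursor over the already-materialized collection.
                OPEN crs FOR
                    SELECT column_value AS col1 FROM TABLE(v_rows);
            END proc1;
        END pkg1;

    This keeps the COMMIT in place for the web data provider while decoupling the cursor from the flag update, though it does not answer why 10g and 11g behave differently.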


  • list images from directory by function

    - by osc2nuke
    i'm using this function: function getmyimages($qid){ $imgdir = 'modules/Projects/uploaded_project_images/'. $qid .''; // the directory, where your images are stored $allowed_types = array('png','jpg','jpeg','gif'); // list of filetypes you want to show $dimg = opendir($imgdir); while($imgfile = readdir($dimg)) { if(in_array(strtolower(substr($imgfile,-3)),$allowed_types)) { $a_img[] = $imgfile; sort($a_img); reset ($a_img); } } $totimg = count($a_img); // total image number for($x=0; $x < $totimg; $x++) { $size = getimagesize($imgdir.'/'.$a_img[$x]); // do whatever $halfwidth = ceil($size[0]/2); $halfheight = ceil($size[1]/2); $mytest = 'name: '.$a_img[$x].' width: '.$size[0].' height: '.$size[1].'<br /><a href="'. $imgdir .'/'.$a_img[$x].'">'. $a_img[$x]. '</a>'; } return $mytest; } And i call this function between a while row as: $sql_select = $db->sql_query('SELECT * from '.$prefix.'_projects WHERE topic=\''.$cid.'\''); OpenTable(); while ($row2 = $db->sql_fetchrow($sql_select)){ $qid = $row2['qid']; $project_query = $db->sql_query('SELECT p.uid, p.uname, p.subject, p.story, p.storyext, p.date, p.topic, p.pdate, p.materials, p.bidoptions, p.projectduration, pd.id_duration, pm.material_id, pbo.bidid, pc.cid FROM ' . $prefix . '_projects p, ' . $prefix . '_projects_duration pd, ' . $prefix . '_project_materials pm, ' . $prefix . '_project_bid_options pbo, ' . $prefix . '_project_categories pc WHERE p.topic=\''.$cid.'\' and p.qid=\''.$qid.'\' and p.bidoptions=pbo.bidid and p.materials=pm.material_id and p.projectduration=pd.id_duration'); while ($project_row = $db->sql_fetchrow($project_query)) { //$qid = $project_row['qid']; $uid = $project_row['uid']; $uname = $project_row['uname']; $subject = $project_row['subject']; $story = $project_row['story']; $storyext = $project_row['storyext']; $date = $project_row['date']; $topic = $project_row['topic']; $pdate = $project_row['pdate']; $materials = $project_row['materials']; $bidoptions = $project_row['bidoptions']; $projectduration = $project_row['projectduration']; //Get the topic name $topic_query = $db->sql_query('SELECT cid,title from '.$prefix.'_project_categories WHERE cid =\''.$cid.'\''); while ($topic_row = $db->sql_fetchrow($topic_query)) { $topic_id = $topic_row['cid']; $topic_title = $topic_row['title']; } //Get the material text $material_query = $db->sql_query('SELECT material_id,material_name from '.$prefix.'_project_materials WHERE material_id =\''.$materials.'\''); while ($material_row = $db->sql_fetchrow($material_query)) { $material_id = $material_row['material_id']; $material_name = $material_row['material_name']; } //Get the bid methode $bid_query = $db->sql_query('SELECT bidid,bidname from '.$prefix.'_project_bid_options WHERE bidid =\''.$bidoptions.'\''); while ($bid_row = $db->sql_fetchrow($bid_query)) { $bidid = $bid_row['bidid']; $bidname = $bid_row['bidname']; } //Get the project duration $duration_query = $db->sql_query('SELECT id_duration,duration_value,duration_alias from '.$prefix.'_projects_duration WHERE id_duration =\''.$projectduration.'\''); while ($duration_row = $db->sql_fetchrow($duration_query)) { $id_duration = $duration_row['id_duration']; $duration_value = $duration_row['duration_value']; $duration_alias = $duration_row['duration_alias']; } } echo '<br/><b>id</b>--->' .$qid. '<br/><b>uid</b>--->' .$uid. '<br/><b>username</b>--->' .$uname. '<br/><b>subject</b>--->'.$subject. '<br/><b>story1</b>--->'.$story. '<br/><b>story2</b>--->'.$storyext. '<br/><b>postdate</b>--->'.$date. 
'<br/><b>categorie</b>--->'.$topic_title . '<br/><b>project start</b>--->'.$pdate. '<br/><b>materials</b>--->'.$material_name. '<br/><b>bid methode</b>--->'.$bidname. '<br/><b>project duration</b>--->'.$duration_alias.'<br /><br /><br/><b>image url</b>--->'.getmyimages($qid).'<br /><br />'; } CloseTable();
    The result outputs only the "last" file from the directory. If I do an echo instead of return $mytest; it reads the whole directory but ruins the output.
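    The reassignment is the likely culprit: $mytest is overwritten on every pass of the for loop, so only the last file survives to the return. A sketch of the same function accumulating with .= instead:

        function getmyimages($qid) {
            $imgdir = 'modules/Projects/uploaded_project_images/' . $qid;
            $allowed_types = array('png', 'jpg', 'jpeg', 'gif'); // note: substr(-3) can never match 'jpeg'
            $a_img = array();
            $dimg = opendir($imgdir);
            while (($imgfile = readdir($dimg)) !== false) {
                if (in_array(strtolower(substr($imgfile, -3)), $allowed_types)) {
                    $a_img[] = $imgfile;
                }
            }
            closedir($dimg);
            sort($a_img);
            $mytest = '';
            foreach ($a_img as $img) {
                $size = getimagesize($imgdir . '/' . $img);
                // Append (.=) instead of overwriting, so every file survives the loop.
                $mytest .= 'name: ' . $img . ' width: ' . $size[0] . ' height: ' . $size[1] .
                           '<br /><a href="' . $imgdir . '/' . $img . '">' . $img . '</a><br />';
            }
            return $mytest;
        }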


  • Redrawing content of UIWebView

    - by btate
    I have a bunch of webviews with static html content that I'm putting in a scroll view as pages. That works fine, but having 20 something full screen subviews of the scroll view is causing some lag. I solved that by only placing 5 at a time in there. The current view, and the two next and two previous. The problem now is that any web view that is not a subview of the scroll view is not drawing correctly based on the device orientation. The frame of the webview is printing out correct, but the actual content is drawing in portrait mode. So there is essentially a strip of blank space to the right of the content. How do I go about re rendering the content without reloading the page? Here's the relevant code: - (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation) fromInterfaceOrientation{ [self resizeSubViews]; } - (void) setupWebViews{ if(_webViews == nil) _webViews = [[NSMutableArray alloc] init]; // This is where the navigation would come into play as far as loading up available web views [_webViews removeAllObjects]; // for loop here to create web views for (int i = 0; i < 20; i++) { //CDCWebViewController *webView = [[[MyInternalWebView alloc] initWithFrame:_webViewWrapper.frame] retain]; MyInternalWebView *page = [[[MyInternalWebView alloc] init] retain]; [page loadRequest:[NSURLRequest requestWithURL:_url]]; [page setCdcIWVdelegate:self]; [page setTestIndex:i]; [page setPageIndex:i]; [_webViews addObject:page]; [self loadScrollViewWithPage:i]; } [self clearUnusedWebViews:_pageControl.currentPage]; } - (void) setupWebViewWrapper{ // a page is the width of the scroll view _webViewWrapper.pagingEnabled = YES; _pageControl = [[[UIPageControl alloc] init] retain]; _webViewWrapper.showsHorizontalScrollIndicator = NO; _webViewWrapper.showsVerticalScrollIndicator = NO; _webViewWrapper.scrollsToTop = NO; _webViewWrapper.delegate = self; _pageControl.numberOfPages = [_webViews count]; _pageControl.currentPage = 0; } - (void) resizeSubViews{ // The frame is set in IB _webViewWrapper.contentSize = CGSizeMake(_webViewWrapper.frame.size.width * [_webViews count], _webViewWrapper.frame.size.height); // Move the content offset. _webViewWrapper.contentOffset = CGPointMake(_webViewWrapper.frame.size.width * _pageControl.currentPage, _webViewWrapper.contentOffset.y); for (MyInternalWebView *subview in _webViewWrapper.subviews) { // Reset the frame height and width here? 
CGRect frame = _webViewWrapper.frame; frame.origin.x = frame.size.width * subview.pageIndex; frame.origin.y = 0; [subview setFrame:frame]; } } //***************************************************** //* //* ScrollView Functions //* //***************************************************** - (void)loadScrollViewWithPage:(int)page { // Make sure we're not out of bounds if (page < 0) return; if (page >= [_webViews count]) return; MyInternalWebView *webView = [_webViews objectAtIndex:page]; // Add the preloaded webview to the scrollview if it's not there already if (nil == [webView superview]) { CGRect frame = _webViewWrapper.frame; //NSLog(@"width = %f", frame.size.width); frame.origin.x = frame.size.width * page; frame.origin.y = 0; //NSLog(@"setting frame for page %d %@", page, NSStringFromCGRect(frame)); [webView setFrame:frame]; [_webViewWrapper addSubview:[_webViews objectAtIndex:page]]; // Now that the new one is loaded, clear what doesn't need to be here [self clearUnusedWebViews:page]; } } - (void) clearUnusedWebViews: (NSInteger) page{ for (int i = 0; i < [_webViews count]; i++) { if ((page - i) <= 2 && i - page <= 2) { continue; } [[_webViews objectAtIndex:i] removeFromSuperview]; } } - (void)scrollViewDidScroll:(UIScrollView *)sender { // Switch the indicator when more than 50% of the previous/next page is visible CGFloat pageWidth = _webViewWrapper.frame.size.width; NSInteger page = floor((_webViewWrapper.contentOffset.x - pageWidth / 2) / pageWidth) + 1; _pageControl.currentPage = page; // load the visible page and the page on either side of it (to avoid flashes when the user starts scrolling) [self loadScrollViewWithPage:page - 2]; [self loadScrollViewWithPage:page - 1]; [self loadScrollViewWithPage:page]; [self loadScrollViewWithPage:page + 1]; [self loadScrollViewWithPage:page + 2]; }
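    One idea worth testing, sketched below: resize every cached web view on rotation, not just those currently attached to the scroll view, since the detached views are exactly the ones that come back laid out for the old orientation. (I am not certain setNeedsLayout alone makes UIWebView re-render its content; it may still need the frame change to trigger it.)

        // Sketch: walk the whole cache, not just _webViewWrapper.subviews.
        - (void) resizeSubViews {
            _webViewWrapper.contentSize = CGSizeMake(_webViewWrapper.frame.size.width * [_webViews count],
                                                     _webViewWrapper.frame.size.height);
            _webViewWrapper.contentOffset = CGPointMake(_webViewWrapper.frame.size.width * _pageControl.currentPage,
                                                        _webViewWrapper.contentOffset.y);
            for (MyInternalWebView *webView in _webViews) {
                CGRect frame = _webViewWrapper.frame;
                frame.origin.x = frame.size.width * webView.pageIndex;
                frame.origin.y = 0;
                [webView setFrame:frame];
                // Ask the view to lay out again for the new bounds.
                [webView setNeedsLayout];
            }
        }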


  • PHP - javascript validation radio button

    - by user1806136
    I have a form with 3 sets of radio buttons. I want a simple JavaScript validation alert to appear when the user clicks submit while one of the fields is null. How can I do that using JavaScript? My code so far is:

        <?php
        session_start();
        $Load=$_SESSION['login_user'];
        include('../connect.php');
        if (isset($_POST['submit'])) {
            $v1 = intval($_POST['v1']);
            $v2 = intval($_POST['v2']);
            $v3 = intval($_POST['v3']);
            $total = $v1 + $v2 + $v3 ;
            mysql_query("INSERT into Form1 (P1,P2,P3,TOTAL) values('$v1','$v2','$v3','$total')") or die(mysql_error());
            header("Location: mark.php");
        }

        <center><form method="post" action="mark.php" >
        <tr>
        <th > School Evaluation <font size="4" > </font></th>
        <tr>
        <th > Criteria <font size="4" > </font></th>
        <th> 4<font size="4" > </font></th>
        <th> 3<font size="4" > </font></th>
        <th> 2<font size="4" > </font></th>
        <th> 1<font size="4" > </font></th>
        </tr>
        <tr>
        <th> Your attendance<font size="4" > </font></th>
        <td> <input type="radio" name ="v1" value = "4" onclick="updateTotal();"/></td>
        <td> <input type="radio" name ="v1" value = "3" onclick="updateTotal();" /></td>
        <td> <input type="radio" name ="v1" value = "2" onclick="updateTotal();" /></td>
        <td> <input type="radio" name ="v1" value = "1" onclick="updateTotal();" /></td>
        </tr>
        <tr>
        <th > Your grades <font size="4" > </font></th>
        <td> <input type="radio" name ="v2" value = "4" onclick="updateTotal();" /></td>
        <td> <input type="radio" name ="v2" value = "3" onclick="updateTotal();" /></td>
        <td> <input type="radio" name ="v2" value = "2" onclick="updateTotal();" /></td>
        <td> <input type="radio" name ="v2" value = "1" onclick="updateTotal();" /></td>
        </tr>
        <tr>
        <th >Your self-control <font size="4" > </font></th>
        <td> <input type="radio" name ="v3" value = "4" onclick="updateTotal();" /></td>
        <td> <input type="radio" name ="v3" value = "3" onclick="updateTotal();" /></td>
        <td> <input type="radio" name ="v3" value = "2" onclick="updateTotal();" /></td>
        <td> <input type="radio" name ="v3" value = "1" onclick="updateTotal();" /></td>
        </tr>
        </tr>
        </table>

    I have put:

        <br>
        <td><input type="submit" name="submit" value="Submit" onClick="return validation(form);">
        <input type="reset" name="clear" value="clear" style="width: 70px"></td>
        </form>

    I have tried a lot of codes but no alert appears!
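    A minimal validation function matching the onClick="return validation(form);" already on the submit button might look like this (it assumes the three radio groups are named v1, v2 and v3, as in the markup above):

        <script type="text/javascript">
        function validation(form) {
            var groups = ["v1", "v2", "v3"];
            for (var g = 0; g < groups.length; g++) {
                var radios = form[groups[g]];
                var checked = false;
                for (var i = 0; i < radios.length; i++) {
                    if (radios[i].checked) { checked = true; break; }
                }
                if (!checked) {
                    alert("Please answer every question before submitting.");
                    return false; // returning false cancels the submit
                }
            }
            return true;
        }
        </script>

    Returning false from the onClick handler is what stops the form from posting, so the alert only appears while a group is left empty.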


  • files get uploaded just before they get cancelled

    - by user1763986
    Got a little situation here where I am trying to cancel a file's upload. What I have done is stated that if the user clicks on the "Cancel" button, then it will simply remove the iframe so that it does not go to the page where it uploads the files into the server and inserts data into the database. Now this works fine if the user clicks on the "Cancel" button in quickish time the problem I have realised though is that if the user clicks on the "Cancel" button very late, it sometimes doesn't remove the iframe in time meaning that the file has just been uploaded just before the user has clicked on the "Cancel" button. So my question is that is there a way that if the file does somehow get uploaded before the user clicks on the "Cancel" button, that it deletes the data in the database and removes the file from the server? Below is the image upload form: <form action="imageupload.php" method="post" enctype="multipart/form-data" target="upload_target_image" onsubmit="return imageClickHandler(this);" class="imageuploadform" > <p class="imagef1_upload_process" align="center"> Loading...<br/> <img src="Images/loader.gif" /> </p> <p class="imagef1_upload_form" align="center"> <br/> <span class="imagemsg"></span> <label>Image File: <input name="fileImage" type="file" class="fileImage" /></label><br/> <br/> <label class="imagelbl"><input type="submit" name="submitImageBtn" class="sbtnimage" value="Upload" /></label> </p> <p class="imagef1_cancel" align="center"> <input type="reset" name="imageCancel" class="imageCancel" value="Cancel" /> </p> <iframe class="upload_target_image" name="upload_target_image" src="#" style="width:0px;height:0px;border:0px;solid;#fff;"></iframe> </form> Below is the jquery function which controls the "Cancel" button: $(imageuploadform).find(".imageCancel").on("click", function(event) { $('.upload_target_image').get(0).contentwindow $("iframe[name='upload_target_image']").attr("src", "javascript:'<html></html>'"); return stopImageUpload(2); }); Below is the php code where it uploads the files and inserts the data into the database. The form above posts to this php page "imageupload.php": <body> <?php include('connect.php'); session_start(); $result = 0; //uploads file move_uploaded_file($_FILES["fileImage"]["tmp_name"], "ImageFiles/" . 
$_FILES["fileImage"]["name"]); $result = 1; //set up the INSERT SQL query command to insert the name of the image file into the "Image" Table $imagesql = "INSERT INTO Image (ImageFile) VALUES (?)"; //prepare the above SQL statement if (!$insert = $mysqli->prepare($imagesql)) { // Handle errors with prepare operation here } //bind the parameters (these are the values that will be inserted) $insert->bind_param("s",$img); //Assign the variable of the name of the file uploaded $img = 'ImageFiles/'.$_FILES['fileImage']['name']; //execute INSERT query $insert->execute(); if ($insert->errno) { // Handle query error here } //close INSERT query $insert->close(); //Retrieve the ImageId of the last uploded file $lastID = $mysqli->insert_id; //Insert into Image_Question Table (be using last retrieved Image id in order to do this) $imagequestionsql = "INSERT INTO Image_Question (ImageId, SessionId, QuestionId) VALUES (?, ?, ?)"; //prepare the above SQL statement if (!$insertimagequestion = $mysqli->prepare($imagequestionsql)) { // Handle errors with prepare operation here echo "Prepare statement err imagequestion"; } //Retrieve the question number $qnum = (int)$_POST['numimage']; //bind the parameters (these are the values that will be inserted) $insertimagequestion->bind_param("isi",$lastID, 'Exam', $qnum); //execute INSERT query $insertimagequestion->execute(); if ($insertimagequestion->errno) { // Handle query error here } //close INSERT query $insertimagequestion->close(); ?> <!--Javascript which will output the message depending on the status of the upload (successful, failed or cancelled)--> <script> window.top.stopImageUpload(<?php echo $result; ?>, '<?php echo $_FILES['fileImage']['name'] ?>'); </script> </body> UPDATE: Below is the php code "cancelimage.php" where I want to delete the cancelled file from the server and delete the record from the database. It is set up but not finished, can somebody finish it off so I can retrieve the name of the file and it's id using $_SESSION? <?php // connect to the database include('connect.php'); /* check connection */ if (mysqli_connect_errno()) { printf("Connect failed: %s\n", mysqli_connect_error()); die(); } //remove file from server unlink("ImageFiles/...."); //need to retrieve file name here where the ... line is //DELETE query statement where it will delete cancelled file from both Image and Image Question Table $imagedeletesql = " DELETE img, img_q FROM Image AS img LEFT JOIN Image_Question AS img_q ON img_q.ImageId = img.ImageId WHERE img.ImageFile = ?"; //prepare delete query if (!$delete = $mysqli->prepare($imagedeletesql)) { // Handle errors with prepare operation here } //Dont pass data directly to bind_param store it in a variable $delete->bind_param("s",$img); //execute DELETE query $delete->execute(); if ($delete->errno) { // Handle query error here } //close query $delete->close(); ?> Can you please provide an sample code in your answer to make it easier for me. Thank you


  • Cisco 800 series won't forward port

    - by sam
    Hello ServerFault, I am trying to forward port 444 from my cisco router to my Web Server (192.168.0.2). As far as I can tell, my port forwarding is configured correctly, yet no traffic will pass through on port 444. Here is my config: ! version 12.3 service config no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug uptime service timestamps log uptime service password-encryption no service dhcp ! hostname QUESTMOUNT ! logging buffered 16386 informational logging rate-limit 100 except warnings no logging console no logging monitor enable secret 5 -removed- ! username administrator secret 5 -removed- username manager secret 5 -removed- clock timezone NZST 12 clock summer-time NZDT recurring 1 Sun Oct 2:00 3 Sun Mar 3:00 aaa new-model ! ! aaa authentication login default local aaa authentication login userlist local aaa authentication ppp default local aaa authorization network grouplist local aaa session-id common ip subnet-zero no ip source-route no ip domain lookup ip domain name quest.local ! ! no ip bootp server ip inspect name firewall tcp ip inspect name firewall udp ip inspect name firewall cuseeme ip inspect name firewall h323 ip inspect name firewall rcmd ip inspect name firewall realaudio ip inspect name firewall streamworks ip inspect name firewall vdolive ip inspect name firewall sqlnet ip inspect name firewall tftp ip inspect name firewall ftp ip inspect name firewall icmp ip inspect name firewall sip ip inspect name firewall fragment maximum 256 timeout 1 ip inspect name firewall netshow ip inspect name firewall rtsp ip inspect name firewall skinny ip inspect name firewall http ip audit notify log ip audit po max-events 100 ip audit name intrusion info list 3 action alarm ip audit name intrusion attack list 3 action alarm drop reset no ftp-server write-enable ! ! ! ! crypto isakmp policy 1 authentication pre-share ! crypto isakmp policy 2 encr 3des authentication pre-share group 2 ! crypto isakmp client configuration group staff key 0 qS;,sc:q<skro1^, domain quest.local pool vpnclients acl 106 ! ! crypto ipsec transform-set tr-null-sha esp-null esp-sha-hmac crypto ipsec transform-set tr-des-md5 esp-des esp-md5-hmac crypto ipsec transform-set tr-des-sha esp-des esp-sha-hmac crypto ipsec transform-set tr-3des-sha esp-3des esp-sha-hmac ! crypto dynamic-map vpnusers 1 description Client to Site VPN Users set transform-set tr-des-md5 ! ! crypto map cm-cryptomap client authentication list userlist crypto map cm-cryptomap isakmp authorization list grouplist crypto map cm-cryptomap client configuration address respond crypto map cm-cryptomap 65000 ipsec-isakmp dynamic vpnusers ! ! ! ! interface Ethernet0 ip address 192.168.0.254 255.255.255.0 ip access-group 102 in ip nat inside hold-queue 100 out ! interface ATM0 no ip address no atm ilmi-keepalive dsl operating-mode auto ! interface ATM0.1 point-to-point pvc 0/100 encapsulation aal5mux ppp dialer dialer pool-member 1 ! ! interface Dialer0 bandwidth 640 ip address negotiated ip access-group 101 in no ip redirects no ip unreachables ip nat outside ip inspect firewall out ip audit intrusion in encapsulation ppp no ip route-cache no ip mroute-cache dialer pool 1 dialer-group 1 no cdp enable ppp pap sent-username -removed- password 7 -removed- ppp ipcp dns request crypto map cm-cryptomap ! 
ip local pool vpnclients 192.168.99.1 192.168.99.254 ip nat inside source list 105 interface Dialer0 overload ip nat inside source static tcp 192.168.0.2 444 interface Dialer0 444 ip nat inside source static tcp 192.168.0.51 9000 interface Dialer0 9000 ip nat inside source static udp 192.168.0.2 1433 interface Dialer0 1433 ip nat inside source static tcp 192.168.0.2 1433 interface Dialer0 1433 ip nat inside source static tcp 192.168.0.2 25 interface Dialer0 25 ip classless ip route 0.0.0.0 0.0.0.0 Dialer0 ip http server no ip http secure-server ! ip access-list logging interval 10 logging 192.168.0.2 access-list 1 remark The local LAN. access-list 1 permit 192.168.0.0 0.0.0.255 access-list 2 permit 192.168.0.0 access-list 2 remark Where management can be done from. access-list 2 permit 192.168.0.0 0.0.0.255 access-list 3 remark Traffic not to check for intrustion detection. access-list 3 deny 192.168.99.0 0.0.0.255 access-list 3 permit any access-list 101 remark Traffic allowed to enter the router from the Internet access-list 101 permit ip 192.168.99.0 0.0.0.255 192.168.0.0 0.0.0.255 access-list 101 deny ip 0.0.0.0 0.255.255.255 any access-list 101 deny ip 10.0.0.0 0.255.255.255 any access-list 101 deny ip 127.0.0.0 0.255.255.255 any access-list 101 deny ip 169.254.0.0 0.0.255.255 any access-list 101 deny ip 172.16.0.0 0.15.255.255 any access-list 101 deny ip 192.0.2.0 0.0.0.255 any access-list 101 deny ip 192.168.0.0 0.0.255.255 any access-list 101 deny ip 198.18.0.0 0.1.255.255 any access-list 101 deny ip 224.0.0.0 0.15.255.255 any access-list 101 deny ip any host 255.255.255.255 access-list 101 permit tcp 67.228.209.128 0.0.0.15 any eq 1433 access-list 101 permit tcp host 120.136.2.22 any eq 1433 access-list 101 permit tcp host 123.100.90.58 any eq 1433 access-list 101 permit udp 67.228.209.128 0.0.0.15 any eq 1433 access-list 101 permit udp host 120.136.2.22 any eq 1433 access-list 101 permit udp host 123.100.90.58 any eq 1433 access-list 101 permit tcp any any eq 444 access-list 101 permit tcp any any eq 9000 access-list 101 permit tcp any any eq smtp access-list 101 permit udp any any eq non500-isakmp access-list 101 permit udp any any eq isakmp access-list 101 permit esp any any access-list 101 permit tcp any any eq 1723 access-list 101 permit gre any any access-list 101 permit tcp any any eq 22 access-list 101 permit tcp any any eq telnet access-list 102 remark Traffic allowed to enter the router from the Ethernet access-list 102 permit ip any host 192.168.0.254 access-list 102 deny ip any host 192.168.0.255 access-list 102 deny udp any any eq tftp log access-list 102 permit ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255 access-list 102 deny ip any 0.0.0.0 0.255.255.255 log access-list 102 deny ip any 10.0.0.0 0.255.255.255 log access-list 102 deny ip any 127.0.0.0 0.255.255.255 log access-list 102 deny ip any 169.254.0.0 0.0.255.255 log access-list 102 deny ip any 172.16.0.0 0.15.255.255 log access-list 102 deny ip any 192.0.2.0 0.0.0.255 log access-list 102 deny ip any 192.168.0.0 0.0.255.255 log access-list 102 deny ip any 198.18.0.0 0.1.255.255 log access-list 102 deny udp any any eq 135 log access-list 102 deny tcp any any eq 135 log access-list 102 deny udp any any eq netbios-ns log access-list 102 deny udp any any eq netbios-dgm log access-list 102 deny tcp any any eq 445 log access-list 102 permit ip 192.168.0.0 0.0.0.255 any access-list 102 permit ip any host 255.255.255.255 access-list 102 deny ip any any log access-list 105 remark Traffic to NAT access-list 105 deny ip 
192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255 access-list 105 permit ip 192.168.0.0 0.0.0.255 any access-list 106 remark User to Site VPN Clients access-list 106 permit ip 192.168.0.0 0.0.0.255 any dialer-list 1 protocol ip permit ! line con 0 no modem enable line aux 0 line vty 0 4 access-class 2 in transport input telnet ssh transport output none ! scheduler max-task-time 5000 ! end any ideas? :)
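    In case it helps: these standard IOS commands should show whether traffic on 444 ever reaches the NAT rule or is dropped by access-list 101 (the counters increment per matching packet):

        ! Watch the static translation while a client hits port 444:
        show ip nat translations | include 444
        ! Check hit counts on the "eq 444" entry:
        show access-list 101
        ! Last resort, very noisy - turn off with "undebug all":
        debug ip nat detailed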


  • Varnish default.vcl grace period

    - by Vladimir
    These are my settings for a grace period (/etc/varnish/default.vcl) sub vcl_recv { .... set req.grace = 360000s; ... } sub vcl_fetch { ... set beresp.grace = 360000s; ... } I tested Varnish using localhost and nodejs as a server. I started localhost, the site was up. Then I disconnected server and the site got disconnected in less than 2 min. It says: Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1890127100 Varnish cache server Could you tell me what could be the problem? sub vcl_fetch { if (beresp.ttl < 120s) { ##std.log("Adjusting TTL"); set beresp.ttl = 36000s; ##120s; } # Do not cache the object if the backend application does not want us to. if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") { return(hit_for_pass); } # Do not cache the object if the status is not in the 200s if (beresp.status >= 300) { # Remove the Set-Cookie header #remove beresp.http.Set-Cookie; return(hit_for_pass); } # # Everything below here should be cached # # Remove the Set-Cookie header ####remove beresp.http.Set-Cookie; # Set the grace time ## set beresp.grace = 1s; //change this to minutes in case of app shutdown set beresp.grace = 360000s; ## 10 hour - reduce if it has negative impact # Static assets - browser caches tpiphem for a long time. if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") { /* Remove Expires from backend, it's not long enough */ unset beresp.http.expires; /* Set the clients TTL on this object */ set beresp.http.cache-control = "public, max-age=31536000"; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } else { set beresp.http.Cache-Control = "private, max-age=0, must-revalidate"; set beresp.http.Pragma = "no-cache"; } if (req.url ~ "\.(css|js|min|)\??\d*$") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ##do not duplicate these settings if (req.url ~ ".css") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".js") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".min") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ## If the request to the backend returns a code other than 200, restart the loop ## If the number of restarts reaches the value of the parameter max_restarts, ## the request will be error'ed. max_restarts defaults to 4. This prevents ## an eternal loop in the event that, e.g., the object does not exist at all. 
if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) { return(restart); } if (beresp.status == 302) { return(deliver); } # Never cache posts if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(hit_for_pass); } ##check this setting to ensure that it does not cause issues for browsers with no gzip if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } if (beresp.http.Set-Cookie) { return(deliver); } ##if (req.url == "/index.html") { set beresp.do_esi = true; ##} ## check if this is needed or should be used # return(deliver); the object return(deliver); } sub vcl_recv { ##avoid leeching of images call hot_link; set req.grace = 360000s; ##2m ## if one backend is down - use another if (req.restarts == 0) { set req.backend = cache_director; ##we can specify individual VMs } else if (req.restarts == 1) { set req.backend = cache_director; } ## post calls should not be cached - add cookie for these requests if using micro-caching # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); ## return(pass) goes to backend - not cache } # Don't cache the result of a redirect if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") { return(pass); } # Don't cache the result of a redirect (asking for logon) if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") { return(pass); } # Never cache posts - ensure that we do not use these strings in our URLs' that need to be cached if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(pass); } ## if (req.http.Authorization || req.http.Cookie) { if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } # Handle compression correctly. Different browsers send different # "Accept-Encoding" headers, even though they mostly all support the same # compression mechanisms. By consolidating these compression headers into # a consistent format, we can reduce the size of the cache and get more hits. # @see: http:// varnish.projects.linpro.no/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { # If the browser supports it, we'll use gzip. set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { # Next, try deflate if it is supported. set req.http.Accept-Encoding = "deflate"; } else { # Unknown algorithm. Remove it and send unencoded. unset req.http.Accept-Encoding; } } # lookup graphics, css, js & ico files in the cache if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") { return(lookup); } ##added on 0918 - check if it causes issues with user specific content if (req.request == "GET" && req.http.cookie) { return(lookup); } # Pipe requests that are non-RFC2616 or CONNECT which is weird. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { ##closing connection and calling pipe return(pipe); } ##purge content via localhost only if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } ## do we need this? ## return(lookup); }
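    From what I have read, grace only helps once Varnish knows the backend is sick, which requires a health probe; without one, Varnish keeps trying the dead backend until the connect timeout and then returns the 503. A sketch of the Varnish 3 pattern (probe values are guesses to be tuned):

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
            .connect_timeout = 6s;
            .first_byte_timeout = 6s;
            # Health probe so Varnish can mark the backend sick when node dies.
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 1s;
                .window = 5;
                .threshold = 3;
            }
        }

        sub vcl_recv {
            if (req.backend.healthy) {
                # Backend up: a short grace keeps content reasonably fresh.
                set req.grace = 30s;
            } else {
                # Backend down: allow very stale objects to be served.
                set req.grace = 360000s;
            }
        }

    beresp.grace still has to be at least as long as the req.grace you want to use, which the 360000s in vcl_fetch above already covers.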


  • Why did my flash drive become "read only" and (how) can I fix it?

    - by Bob
    I have a brand new flash drive (one week old) that has become marked as read only, by Windows, Kubuntu and a bootable partitioner. Why did this happen? Is it fixable? If it is, how can I fix this? The problem Firstly, this drive is new. It's certainly not been used enough to die from normal wear and tear, though I would not discount defective components. The drive itself has somehow become locked in a read only state. Windows' Disk management: Diskpart: Generic Flash Disk USB Device Disk ID: 33FA33FA Type : USB Status : Online Path : 0 Target : 0 LUN ID : 0 Location Path : UNAVAILABLE Current Read-only State : Yes Read-only : No Boot Disk : No Pagefile Disk : No Hibernation File Disk : No Crashdump Disk : No Clustered Disk : No What really confuses me is Current Read-only State : Yes and Read-only : No. Attempted solutions So far, I've tried: Formatting it in Windows (in Disk management, the format options are greyed out when right clicking). DiskPart Clean (CLEAN - Clear the configuration information, or all information, off the disk.): DISKPART> clean DiskPart has encountered an error: The media is write protected. See the System Event Log for more information. There was nothing in the event log. Windows command line format >format G: Insert new disk for drive G: and press ENTER when ready... The type of the file system is FAT32. Verifying 7740M Cannot format. This volume is write protected. Windows chkdsk: see below for details Kubuntu fsck (through VirtualBox USB passthrough): see below for details Acronis True Image to format, to convert to GPT, to destroy and rebuild MBR, basically anything: failed (could not write to MBR) Details (and a nice story) Background This was a brand new, generic, 8GB flash drive I wanted to create a multiboot flash drive with. It came formatted as FAT32, though oddly a little larger than most 8 GIGAbyte flash drives I've come across. Approximately 127MB was listed as "used" by Windows. I never discovered why. The end usable space was about what I normally expect from a 8GB drive (approx 7.4 GIBIbytes). I had thrown quite a few Linux distros on, along with a copy of Hiren's. They would all boot perfectly. They were put on with YUMI. When I tried to put the Knoppix DVD on, YUMI added an odd video option to its boot comman which caused Knoppix to boot with a black screen on X. ttys 1 through 6 still worked as text only interfaces. A few days later, I took some time to take that odd video option off, making the boot command match the one that comes with Knoppix. On the attempt to boot, Knoppix reported some form of LZMA corruption. Leading up to the current issue I was thinking the Knoppix files may have been corrupted somehow, so I tried reloading it. The drive was nearly full (45MB free), so I deleted a generic ISO that also was not booting. That went fine. I then went through YUMI to 'uninstall' Knoppix, i.e. delete files and remove from the menus. The files went first, then the menus were cleared successfully. However, the free space was stuck at about 700MB, same as it was before removing Knoppix. In the old Knoppix folder, there was a 0 byte file named KNOPPIX that could not be deleted. I tried reinserting the drive to delete this file - without safely removing, if that made a difference (hey, first time for everything). Running the standard Windows chkdsk scan without /r or /f reported errors found. Running with /r just got it stuck. 
I decided to give fsck a shot, so I loaded up my Kubuntu VM and attached the drive to it with VirtualBox's USB 2.0 passthrough. I umounted it (/dev/sda1) and ran a fsck. There are differences between boot sector and its backup. I chose No action. It told me FATs differ and asked me to select either the first or second FAT. Whichever I selected, I got a notice of Free cluster summary wrong. If I chose Correct, it gave a list of incorrect file names. To try to fix something, at least, I ran it with the -p option. Halfway through fixing the files, the VM froze - I ended its process about ten minutes later. Cause? My next attempt was to use YUMI, again, to rebuild the whole drive. I used YUMI's built in reformat (to FAT32) option and installed a Kubuntu ISO (700MB). The format was successful, however, the extract and copy of Kubuntu (which YUMI uses a 7zip binary for) froze at about 60% done. After waiting for about fifteen minutes (longer than the 3.5GB Knoppix ISO took last time), I pulled the drive out. The drive at this point was already formatted, SYSLINUX already installed, just waiting on the unpacking of an ISO and the modifying of the boot menus. Plugging it back in, it came up as normal - however, any write action would fail. Disk management reported it as read only. On reconnect, it would come up as normal but a write operation would cause it to go read only again. After a few attempts, it started coming up as read only on insertion. Attempts to fix This is when I ran through the attempts listed above, to try and reformat it in case of a faulty format. However the inability to do so even on a bootable disk indicated something more serious is wrong. chkdsk now reports nothing is wrong, and fsck still reports MBR inconsistencies, but now always chooses first FAT automatically after telling me FATs differ. It still does the same Free cluster summary wrong afterwards. I cannot run with -p anymore because it is now marked as read only. It also managed to corrupt my VM's disk somehow on the first attempt (yes, I'm sure I chose sda, which is mapped to a 7.4GB drive - I triple checked). Thank god for snapshots? I'm just about out of ideas. To my inexperienced mind it looks like something in the drive's firmware set it to read only "permanently" somehow - is there any way to reset this? I don't particularly care about keeping data, considering I've reformatted it twice. Also, fixes that keep me in Windows are better; it reduces the risk of me accidentally nuking my main hard drive. Update 1: I pulled apart the drive out of curiosity. As you can see, there are no obvious write protect switches. There is an IC on the other side, ALCOR branded labelled AU6989HL, if that matters. If there appears to be no way to fix this, I'll probably pull out the (glued down) card and put it in a card reader to check if it's the card or the controller that died. Update 2: I've pulled the card off, Windows detects the drive as a card reader now. The contacts on the card don't appear to be used, and there are several rows of holes on the card itself. Putting it into the card reader only detects about 30MB total, RAW. It's probably either the reader incorrectly reporting the card as faulty (as if a real SD card's write protect was switched on) or a bad contact somewhere. If nothing else, I have a spare 8GB Micro SD card now... as soon as I figure out how to format it as 8GB.
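    For completeness, diskpart also has an attribute-clearing command; as I understand it this only clears the software read-only flag (the "Read-only : No" line), so if the controller has latched itself read-only it will not help, but it rules that flag out:

        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> attributes disk clear readonly
        DISKPART> attributes disk

    (Select the flash drive's number as shown by "list disk", then re-check both "Read-only" and "Current Read-only State" in the attributes output.)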


  • Server randomly freezes

    - by PsySkeletor
    Im facing a very strange issue, my debian squeeze freezes up always at night (Berlin, time). Here is what i get from a time and after doing this a few times, it becomes frozen and must be hard-reset. From /var/log/messages Dec 11 01:36:11 srv156 kernel: [125983.204251] CPU 1: Dec 11 01:36:11 srv156 kernel: [125983.204251] Modules linked in: xt_multiport nf_conntrack_ipv4 nf_defrag_ipv4 xt_recent xt_state nf_conntrack xt_tcpudp iptable_filter ip_tables x_tables hwmon_vid snd_hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm drm_kms_helper snd k10temp i2c_piix4 soundcore snd_page_alloc edac_core parport_pc drm i2c_algo_bit i2c_core shpchp pci_hotplug pcspkr edac_mce_amd parport wmi evdev processor button ext3 jbd mbcache raid1 md_mod sd_mod crc_t10dif ata_generic ahci ohci_hcd pata_atiixp e100 mii libata xhci floppy ehci_hcd thermal thermal_sys usbcore scsi_mod nls_base [last unloaded: i2c_dev] Dec 11 01:36:11 srv156 kernel: [125983.204251] Pid: 758, comm: flush-9:0 Tainted: G B 2.6.32-5-amd64 #1 GA-78LMT-USB3 Dec 11 01:36:11 srv156 kernel: [125983.204251] RIP: 0010:[<ffffffff810b3506>] [<ffffffff810b3506>] find_get_pages_tag+0x66/0xdd Dec 11 01:36:11 srv156 kernel: [125983.204251] RSP: 0018:ffff8804235e7b30 EFLAGS: 00000286 Dec 11 01:36:11 srv156 kernel: [125983.204251] RAX: ffffffffffffffff RBX: ffff8804235e7c00 RCX: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204251] RDX: 0000000000040000 RSI: ffffea000496b2a8 RDI: ffffea000496b2a0 Dec 11 01:36:11 srv156 kernel: [125983.204251] RBP: ffffffff8101166e R08: ffff8804235e7af0 R09: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204251] R10: 0000000000000000 R11: 0000000000040000 R12: ffff8804235e7c08 Dec 11 01:36:11 srv156 kernel: [125983.204251] R13: 0000000d22678a20 R14: ffff8804235e7af0 R15: 00000000091b9060 Dec 11 01:36:11 srv156 kernel: [125983.204251] FS: 0000000000000000(0000) GS:ffff880010440000(0000) knlGS:000000007ebf7b70 Dec 11 01:36:11 srv156 kernel: [125983.204522] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b Dec 11 01:36:11 srv156 kernel: [125983.204522] CR2: 00000000dec86000 CR3: 0000000001001000 CR4: 00000000000006e0 Dec 11 01:36:11 srv156 kernel: [125983.204522] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204522] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Dec 11 01:36:11 srv156 kernel: [125983.204522] Call Trace: Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810bb792>] ? pagevec_lookup_tag+0x1a/0x21 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810ba330>] ? write_cache_pages+0x162/0x327 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810b9d48>] ? __writepage+0x0/0x25 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff8110758a>] ? writeback_single_inode+0xe7/0x2da Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108290>] ? writeback_inodes_wb+0x424/0x4ff Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108497>] ? wb_writeback+0x12c/0x1ab Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff8110870d>] ? wb_do_writeback+0x14f/0x165 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108754>] ? bdi_writeback_task+0x31/0xaa Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c8664>] ? bdi_start_fn+0x0/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c86d4>] ? bdi_start_fn+0x70/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c8664>] ? 
bdi_start_fn+0x0/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81064ac1>] ? kthread+0x79/0x81 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81011baa>] ? child_rip+0xa/0x20 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81064a48>] ? kthread+0x0/0x81 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81011ba0>] ? child_rip+0x0/0x20 From /var/log/syslog Dec 10 21:20:29 srv156 kernel: [110625.162930] BUG: Bad page map in process java pte:14fa4f067 pmd:424b54067 Dec 10 21:20:29 srv156 kernel: [110625.162937] page:ffffea000496c148 flags:0200000000000878 count:2 mapcount:-1 mapping:ffff88014f8d7de8 index:2f4 Dec 10 21:20:29 srv156 kernel: [110625.162946] addr:0000000009096000 vm_flags:00100077 anon_vma:ffff880422410d40 mapping:(null) index:9096 Dec 10 21:20:29 srv156 kernel: [110625.162955] Pid: 21356, comm: java Tainted: G B 2.6.32-5-amd64 #1 Dec 10 21:20:29 srv156 kernel: [110625.162961] Call Trace: Dec 10 21:20:29 srv156 kernel: [110625.162966] [<ffffffff810ca4bf>] ? print_bad_pte+0x232/0x24a Dec 10 21:20:29 srv156 kernel: [110625.162973] [<ffffffff810cb56f>] ? unmap_vmas+0x62d/0x931 Dec 10 21:20:29 srv156 kernel: [110625.162980] [<ffffffff810cfc74>] ? exit_mmap+0xc4/0x148 Dec 10 21:20:29 srv156 kernel: [110625.162986] [<ffffffff8104bbc1>] ? mmput+0x3c/0xdf Dec 10 21:20:29 srv156 kernel: [110625.162992] [<ffffffff8104f81e>] ? exit_mm+0x102/0x10d Dec 10 21:20:29 srv156 kernel: [110625.162998] [<ffffffff81051243>] ? do_exit+0x1f8/0x6c9 Dec 10 21:20:29 srv156 kernel: [110625.163004] [<ffffffff81071abb>] ? futex_wake+0xd6/0xe7 Dec 10 21:20:29 srv156 kernel: [110625.163010] [<ffffffff8105178a>] ? do_group_exit+0x76/0x9d Dec 10 21:20:29 srv156 kernel: [110625.163016] [<ffffffff8105df9f>] ? get_signal_to_deliver+0x310/0x339 Dec 10 21:20:29 srv156 kernel: [110625.163023] [<ffffffff81010037>] ? do_notify_resume+0x87/0x73f Dec 10 21:20:29 srv156 kernel: [110625.163029] [<ffffffff810cc664>] ? handle_mm_fault+0x7aa/0x80f Dec 10 21:20:29 srv156 kernel: [110625.163036] [<ffffffff81073f14>] ? compat_sys_futex+0x10d/0x12b Dec 10 21:20:29 srv156 kernel: [110625.163043] [<ffffffff812fb546>] ? do_page_fault+0x2e0/0x2fc Dec 10 21:20:29 srv156 kernel: [110625.163049] [<ffffffff81010e0e>] ? int_signal+0x12/0x17 Dec 10 21:20:29 srv156 kernel: [110625.163114] BUG: Bad page state in process java pfn:14fa0c Dec 10 21:20:29 srv156 kernel: [110625.163120] page:ffffea000496b2a0 flags:020000000002001c count:0 mapcount:-1 mapping:ffff88039dc0db30 index:11e3 Dec 10 21:20:29 srv156 kernel: [110625.164563] Pid: 21356, comm: java Tainted: G B 2.6.32-5-amd64 #1 Dec 10 21:20:29 srv156 kernel: [110625.164570] Call Trace: Dec 10 21:20:29 srv156 kernel: [110625.164578] [<ffffffff810b71a9>] ? bad_page+0x116/0x129 Dec 10 21:20:29 srv156 kernel: [110625.164586] [<ffffffff810b7692>] ? free_pages_check+0x38/0x57 Dec 10 21:20:29 srv156 kernel: [110625.164595] [<ffffffff810b89cf>] ? free_hot_cold_page+0x46/0x190 Dec 10 21:20:29 srv156 kernel: [110625.164603] [<ffffffff810b8b82>] ? __pagevec_free+0x69/0x7f Dec 10 21:20:29 srv156 kernel: [110625.164611] [<ffffffff810bba3f>] ? release_pages+0x137/0x18d Dec 10 21:20:29 srv156 kernel: [110625.164620] [<ffffffff810d8559>] ? free_pages_and_swap_cache+0x57/0x73 Dec 10 21:20:29 srv156 kernel: [110625.164629] [<ffffffff810cb5ed>] ? unmap_vmas+0x6ab/0x931 Dec 10 21:20:29 srv156 kernel: [110625.164637] [<ffffffff810cfc74>] ? exit_mmap+0xc4/0x148 Dec 10 21:20:29 srv156 kernel: [110625.164644] [<ffffffff8104bbc1>] ? 
mmput+0x3c/0xdf Dec 10 21:20:29 srv156 kernel: [110625.164652] [<ffffffff8104f81e>] ? exit_mm+0x102/0x10d Dec 10 21:20:29 srv156 kernel: [110625.164660] [<ffffffff81051243>] ? do_exit+0x1f8/0x6c9 Dec 10 21:20:29 srv156 kernel: [110625.164667] [<ffffffff81071abb>] ? futex_wake+0xd6/0xe7 Dec 10 21:20:29 srv156 kernel: [110625.164675] [<ffffffff8105178a>] ? do_group_exit+0x76/0x9d Dec 10 21:20:29 srv156 kernel: [110625.164683] [<ffffffff8105df9f>] ? get_signal_to_deliver+0x310/0x339 Dec 10 21:20:29 srv156 kernel: [110625.164692] [<ffffffff81010037>] ? do_notify_resume+0x87/0x73f Dec 10 21:20:29 srv156 kernel: [110625.164700] [<ffffffff810cc664>] ? handle_mm_fault+0x7aa/0x80f I've only just found this last piece of log, which is why I'm posting it now. It seems the Java process did something and began to slowly eat all the resources of the server; I don't know whether this could be the root cause. I'm using Debian Squeeze. uname -a Linux srv156 2.6.32-5-amd64 #1 SMP Sun Sep 23 11:00:33 UTC 2012 x86_64 GNU/Linux I would really appreciate your help; I don't know what else to try.

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: an Intel® Core™ i7-2600 with 16 GB DDR3 RAM, running varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and a CDN. My problem is that beyond 55 hits per second (according to blitz.io), varnish starts giving out timeouts. CPU usage at this point is hardly 1%, and free memory stays above 10 GB at all times. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts; but beyond that, CPU usage goes to 100% and it stops responding. Can you help me optimize this to handle more? As I understand it, nginx plays no part here, so I haven't included its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time.
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" I'm not sure where to start: should I optimize php-fpm first and then move on to varnish, or is php-fpm already at its maximum, meaning I should be looking for the problem in varnish?

    Read the article

  • Windows Server 2008 R2 network adapter stops working, requires hard reboot

    - by Geoff Dalgas
    TL;DR version: Turns out this was a Windows Server 2008 R2 kernel networking bug. After siccing Microsoft support on it, we (eventually) got an unpublished kernel hotfix from Microsoft to address it. If you, too, are experiencing mysterious low-level network driver failures requiring a reboot/bluescreen cycle, you might want that hotfix (or maybe Service Pack 1 whenever it is released, too.) We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two linux instances to provide a failover. Each server has its own public IP, plus a single IP which is shared between the two using a virtual interface (eth1:1) at IP: 69.59.196.211 The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the windows servers behind them and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our windows servers behind our linux gateways. HAProxy will detect the server is offline, which we can verify by remoting to the failed server and attempting to ping the gateway: Pinging 69.59.196.211 with 32 bytes of data: Reply from 69.59.196.220: Destination host unreachable. Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211): Interface: 69.59.196.220 --- 0xa Internet Address Physical Address Type 69.59.196.161 00-26-88-63-c7-80 dynamic 69.59.196.210 00-15-5d-0a-3e-0e dynamic 69.59.196.212 00-21-5e-4d-45-c9 dynamic 69.59.196.213 00-15-5d-00-b2-0d dynamic 69.59.196.215 00-21-5e-4d-61-1a dynamic 69.59.196.217 00-21-5e-4d-2c-e8 dynamic 69.59.196.219 00-21-5e-4d-38-e5 dynamic 69.59.196.221 00-15-5d-00-b2-0d dynamic 69.59.196.222 00-15-5d-0a-3e-09 dynamic 69.59.196.223 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 225.0.0.1 01-00-5e-00-00-01 static On our linux gateway instances arp -a shows: peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue? THINGS WE HAVE TRIED I added a static arp entry for testing on one of the linux gateways, which still didn't help.
root@haproxy2:~# arp -a peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1 root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d root@haproxy2:~# ping 69.59.196.220 PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data. --- 69.59.196.220 ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 6006ms Rebooting the windows web server solves this issue temporarily with no other changes to the network but our experience shows this issue will come back. Swapping network cards and switches I noticed the link light on the port of the switch for the failed windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in windows and the server locked up and required a hard reset after clicking apply. This windows server has two physical network interfaces so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand, no change) Changing network hardware driver versions We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2. Replacing network cables As a last ditch effort we remembered another change that occurred was the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green of lengths 1ft - 3ft for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred. Disable checksum offload, remove TProxy We also tried disabling TCP/IP checksum offload in the driver, no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps. Switch Virtualization providers On the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMWare Server. No change. Switch host model We've reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model: http://en.wikipedia.org/wiki/Host_model http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx We did that, and.. we'll see.

    Read the article

  • Set up tunnel to HE.net and now only ipv6.google.com works, but other sites ping fine.

    - by AndrejaKo
    I'm setting up IPv6 using my router which is running OpenWRT, version Backfire 10.03.1-rc4. I made a tunnel using Hurricane Electric's tunnel broker and set it up on the router and I'm using RADVD to hand out IPv6 addresses. My problem is that on computers on the network, I can only access ipv6.google.com using a browser, but other sites seem to be loading forever and won't open in any browser. I can ping and traceroute to them fine, but can't open them with a browser. I can open any site normally with a browser from the router. Stopping firewall service on the router doesn't help, so it's probably not a firewall issue. All AAAA records resolve fine, so it's probably not a DNS issue. Computers on the network get their IPv6 addresses fine, so it's probably not a radvd issue. Similar setup worked fine for SixXs, but I'm having problems with my PoP there, so I decided to move to HE. Here are some traceroutes: From a client computer: Tracing route to ipv6.he.net [2001:470:0:64::2] over a maximum of 30 hops: 1 <1 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 62 ms 63 ms 62 ms andrejako-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 60 ms 60 ms 63 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 63 ms 68 ms 68 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 84 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 146 ms 147 ms 151 ms 10gigabitethernet4-4.core1.nyc4.he.net [2001:470:0:128::1] 7 200 ms 198 ms 202 ms 10gigabitethernet5-3.core1.lax1.he.net [2001:470:0:10e::1] 8 219 ms * 210 ms 10gigabitethernet2-2.core1.fmt2.he.net [2001:470:0:18d::1] 9 221 ms 338 ms 209 ms gige-g4-18.core1.fmt1.he.net [2001:470:0:2d::1] 10 206 ms 210 ms 207 ms ipv6.he.net [2001:470:0:64::2] Trace complete. and another from a cliet computer Tracing route to whatismyipv6.com [2001:4870:a24f:2::90] over a maximum of 30 hops: 1 7 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 69 ms 70 ms 63 ms AndrejaKo-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 57 ms 65 ms 58 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 73 ms 74 ms 75 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 71 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 141 ms 149 ms 148 ms 10gigabitethernet2-3.core1.nyc4.he.net [2001:470:0:3e::1] 7 141 ms 147 ms 143 ms 10gigabitethernet1-2.core1.nyc1.he.net [2001:470:0:37::2] 8 144 ms 145 ms 142 ms 2001:504:1::a500:4323:1 9 226 ms 225 ms 218 ms 2001:4870:a240::2 10 220 ms 224 ms 219 ms 2001:4870:a240::2 11 219 ms 218 ms 220 ms 2001:4870:a24f::2 12 221 ms 222 ms 220 ms www.whatismyipv6.com [2001:4870:a24f:2::90] Trace complete. 
Here's some firewall info on the router: root@OpenWrt:/# iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 syn_flood tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 input_rule all -- 0.0.0.0/0 0.0.0.0/0 input all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) target prot opt source destination zone_wan_MSSFIX all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED forwarding_rule all -- 0.0.0.0/0 0.0.0.0/0 forward all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 output_rule all -- 0.0.0.0/0 0.0.0.0/0 output all -- 0.0.0.0/0 0.0.0.0/0 Chain forward (1 references) target prot opt source destination zone_lan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_lan (1 references) target prot opt source destination Chain forwarding_rule (1 references) target prot opt source destination nat_reflection_fwd all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_wan (1 references) target prot opt source destination Chain input (1 references) target prot opt source destination zone_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 Chain input_lan (1 references) target prot opt source destination Chain input_rule (1 references) target prot opt source destination Chain input_wan (1 references) target prot opt source destination Chain nat_reflection_fwd (1 references) target prot opt source destination ACCEPT tcp -- 192.168.1.0/24 192.168.1.2 tcp dpt:80 Chain output (1 references) target prot opt source destination zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain output_rule (1 references) target prot opt source destination Chain reject (7 references) target prot opt source destination REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 reject-with tcp-reset REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain syn_flood (1 references) target prot opt source destination RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 25/sec burst 50 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan (1 references) target prot opt source destination input_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_MSSFIX (0 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_lan_REJECT (1 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_forward (1 references) target prot opt source destination zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 forwarding_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan (2 references) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT 41 -- 0.0.0.0/0 0.0.0.0/0 input_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all 
-- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_MSSFIX (1 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_wan_REJECT (2 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_forward (2 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 192.168.1.2 forwarding_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Here's some routing info: root@OpenWrt:/# ip -f inet6 route 2001:470:1f0a:de5::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 2001:470:1f0b:de5::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.2 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 default dev 6in4-henet metric 1024 mtu 1280 advmss 1220 hoplimit 0 I have computers running Windows 7 SP1 and openSUSE 11.3, and all of them have the same problem. I also made a thread about this on HE's forum, but it seems the people there are out of ideas about what to do.

    Read the article

  • How to set up linux watchdog daemon with Intel 6300esb

    - by ACiD GRiM
    I've been searching for this on Google for some time now, and I have yet to find proper documentation on how to connect the kernel driver for my 6300esb watchdog timer to /dev/watchdog and ensure that the watchdog daemon is keeping it alive. I am using RHEL-compatible Scientific Linux 6.3 in a KVM virtual machine, by the way. Below is everything I've tried so far: dmesg|grep 6300 i6300ESB timer: Intel 6300ESB WatchDog Timer Driver v0.04 i6300ESB timer: initialized (0xffffc900008b8000). heartbeat=30 sec (nowayout=0) | ll /dev/watchdog crw-rw----. 1 root root 10, 130 Sep 22 22:25 /dev/watchdog | /etc/watchdog.conf #ping = 172.31.14.1 #ping = 172.26.1.255 #interface = eth0 file = /var/log/messages #change = 1407 # Uncomment to enable test. Setting one of these values to '0' disables it. # These values will hopefully never reboot your machine during normal use # (if your machine is really hung, the loadavg will go much higher than 25) max-load-1 = 24 max-load-5 = 18 max-load-15 = 12 # Note that this is the number of pages! # To get the real size, check how large the pagesize is on your machine. #min-memory = 1 #repair-binary = /usr/sbin/repair #test-binary = #test-timeout = watchdog-device = /dev/watchdog # Defaults compiled into the binary #temperature-device = #max-temperature = 120 # Defaults compiled into the binary #admin = root interval = 10 #logtick = 1 # This greatly decreases the chance that watchdog won't be scheduled before # your machine is really loaded realtime = yes priority = 1 # Check if syslogd is still running by enabling the following line #pidfile = /var/run/syslogd.pid Now maybe I'm not testing it correctly, but I would expect stopping the watchdog service to cause /dev/watchdog to time out after 30 seconds and the host to reboot; however, this does not happen. Also, here is my config for the KVM VM: <!-- WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: virsh edit sl6template or other application using the libvirt API.
--> <domain type='kvm'> <name>sl6template</name> <uuid>960d0ac2-2e6a-5efa-87a3-6bb779e15b6a</uuid> <memory unit='KiB'>262144</memory> <currentMemory unit='KiB'>262144</currentMemory> <vcpu placement='static'>1</vcpu> <os> <type arch='x86_64' machine='rhel6.3.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>Westmere</model> <vendor>Intel</vendor> <feature policy='require' name='tm2'/> <feature policy='require' name='est'/> <feature policy='require' name='vmx'/> <feature policy='require' name='ds'/> <feature policy='require' name='smx'/> <feature policy='require' name='ss'/> <feature policy='require' name='vme'/> <feature policy='require' name='dtes64'/> <feature policy='require' name='rdtscp'/> <feature policy='require' name='ht'/> <feature policy='require' name='dca'/> <feature policy='require' name='pbe'/> <feature policy='require' name='tm'/> <feature policy='require' name='pdcm'/> <feature policy='require' name='pdpe1gb'/> <feature policy='require' name='ds_cpl'/> <feature policy='require' name='pclmuldq'/> <feature policy='require' name='xtpr'/> <feature policy='require' name='acpi'/> <feature policy='require' name='monitor'/> <feature policy='require' name='aes'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/mnt/data/vms/sl6template.img'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <controller type='usb' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:44:57:f6'/> <source bridge='br0.2'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <interface type='bridge'> <mac address='52:54:00:88:0f:42'/> <source bridge='br1'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <watchdog model='i6300esb' action='reset'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </watchdog> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </memballoon> </devices> </domain> Any help is appreciated, as most of what I've found so far consists of patches to KVM, generic softdog documentation, or IPMI watchdog answers.

    Read the article

  • My server suddenly crashes every 2 days or so. The programmer has no idea; please help find the cause. Here is the top

    - by Alex
    Every couple of days my server suddenly crashes, and I must request a hardware reset at the data center to get it running again. Today I came back to my shell, saw the server was dead, and found "top" still running on it; see below for the "top" output right before the crash. I opened /var/log/messages and scrolled to the reboot time, and saw nothing: no errors prior to the hard reboot. (I checked /etc/syslog.conf and I see "*.info;mail.none;authpriv.none;cron.none /var/log/messages"; isn't this good enough to log all problems?) Usually when I look at top, swap is never used up like this! I also don't know why mysqld is at 323% CPU (the server only runs Drupal and it's never slow or overloaded). Solver is my application. I don't know what 'sh' and 'dovecot' are doing there. This has been driving me crazy for the last month; please help me solve this mystery and stop my downtimes. top - 01:10:06 up 6 days, 5 min, 3 users, load average: 34.87, 18.68, 9.03 Tasks: 500 total, 19 running, 481 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 96.6%sy, 0.0%ni, 1.7%id, 1.8%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 8165600k total, 8139764k used, 25836k free, 428k buffers Swap: 2104496k total, 2104496k used, 0k free, 8236k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4421 mysql 15 0 571m 105m 976 S 323.5 1.3 9:08.00 mysqld 564 root 20 -5 0 0 0 R 99.5 0.0 2:49.16 kswapd1 25767 apache 19 0 399m 8060 888 D 79.3 0.1 0:06.64 httpd 25781 apache 19 0 398m 5648 492 R 79.0 0.1 0:08.21 httpd 25961 apache 25 0 398m 5700 560 R 76.7 0.1 0:17.81 httpd 25980 apache 25 0 10816 668 520 R 75.0 0.0 0:46.95 sh 563 root 20 -5 0 0 0 D 71.4 0.0 3:12.37 kswapd0 25766 apache 25 0 399m 7256 756 R 69.7 0.1 0:39.83 httpd 25911 apache 25 0 398m 5612 480 R 58.8 0.1 0:17.63 httpd 25782 apache 25 0 440m 38m 648 R 55.2 0.5 0:18.94 httpd 25966 apache 25 0 398m 5640 556 R 55.2 0.1 0:48.84 httpd 4588 root 25 0 74860 596 476 R 53.9 0.0 0:37.90 crond 25939 apache 25 0 2776 172 84 R 48.9 0.0 0:59.46 solver 4575 root 25 0 397m 6004 1144 R 48.6 0.1 1:00.43 httpd 25962 apache 25 0 398m 5628 492 R 47.9 0.1 0:14.58 httpd 25824 apache 25 0 440m 39m 680 D 47.3 0.5 0:57.85 httpd 25968 apache 25 0 398m 5612 528 R 46.6 0.1 0:42.73 httpd 4477 root 25 0 6084 396 280 R 46.3 0.0 0:59.53 dovecot 25982 root 25 0 397m 5108 240 R 45.9 0.1 0:18.01 httpd 25943 apache 25 0 2916 172 8 R 44.0 0.0 0:53.54 solver 30687 apache 25 0 468m 63m 1124 D 42.3 0.8 0:45.02 httpd 25978 apache 25 0 398m 5688 600 R 23.8 0.1 0:40.99 httpd 25983 root 25 0 397m 5272 384 D 14.9 0.1 0:18.99 httpd 935 root 10 -5 0 0 0 D 14.2 0.0 1:54.60 kjournald 25986 root 25 0 397m 5308 420 D 8.9 0.1 0:04.75 httpd 4011 haldaemo 25 0 31568 1476 716 S 5.6 0.0 0:24.36 hald 25956 apache 23 0 398m 5872 644 S 5.6 0.1 0:13.85 httpd 18336 root 18 0 13004 1332 724 R 0.3 0.0 1:46.66 top 1 root 18 0 10372 212 180 S 0.0 0.0 0:05.99 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.95 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.15 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.06 ksoftirqd/1 Here is a normal top, when the server is working fine: top - 01:50:41 up 21 min, 1 user, load average: 2.98, 2.70, 1.68 Tasks: 271 total, 2 running, 269 sleeping, 0 stopped, 0 zombie Cpu(s): 15.0%us, 1.1%sy, 0.0%ni, 81.4%id, 2.4%wa, 0.1%hi, 0.0%si, 0.0%st Mem: 8165600k total, 2035856k used, 6129744k free, 60840k buffers Swap: 2104496k total, 0k used, 2104496k free, 283744k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2204 apache 17 0
466m 83m 19m S 25.9 1.0 0:22.16 httpd 11347 apache 15 0 466m 83m 19m S 25.9 1.0 0:26.10 httpd 18204 apache 18 0 481m 97m 19m D 25.2 1.2 0:13.99 httpd 4644 apache 18 0 481m 100m 19m D 24.6 1.3 1:17.12 httpd 4727 apache 17 0 481m 99m 19m S 24.3 1.2 1:10.77 httpd 4777 apache 17 0 482m 102m 21m S 23.6 1.3 1:38.27 httpd 8924 apache 15 0 483m 99m 19m S 22.3 1.3 1:13.41 httpd 9390 apache 18 0 483m 99m 19m S 18.9 1.2 1:05.35 httpd 4728 apache 16 0 481m 101m 19m S 14.3 1.3 1:12.50 httpd 4648 apache 15 0 481m 107m 27m S 12.6 1.4 1:18.62 httpd 24955 apache 15 0 467m 82m 19m S 3.3 1.0 0:21.80 httpd 4722 apache 15 0 503m 118m 19m R 1.7 1.5 1:17.79 httpd 4647 apache 15 0 484m 105m 20m S 1.3 1.3 1:40.73 httpd 4643 apache 16 0 481m 100m 20m S 0.7 1.3 1:11.80 httpd 1561 root 15 0 12900 1264 828 R 0.3 0.0 0:00.54 top 4434 mysql 15 0 496m 55m 4812 S 0.3 0.7 0:06.69 mysqld 4646 apache 15 0 481m 100m 19m S 0.3 1.3 1:25.51 httpd 1 root 18 0 10372 692 580 S 0.0 0.0 0:02.09 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.03 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.03 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.02 migration/5 18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/6 21 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/6 22 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/6 23 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/7

    Read the article

  • Why is Git telling me "Your branch is ahead of 'origin/master' by 11 commits." and how do I get it to go away?

    - by spilth
    I'm a Git newbie. I recently moved a Rails project from Subversion to Git. I followed the tutorial here: http://www.simplisticcomplexity.com/2008/03/05/cleanly-migrate-your-subversion-repository-to-a-git-repository/ I am also using unfuddle.com to store my code. I make changes on my Mac laptop on the train to/from work and then push them to unfuddle when I have a network connection, using the following command: git push unfuddle master I use Capistrano for deployments and pull code from the unfuddle repository using the master branch. Lately I've noticed the following message when I run "git status" on my laptop: # On branch master # Your branch is ahead of 'origin/master' by 11 commits. # nothing to commit (working directory clean) And I'm confused as to why. I thought my laptop was the origin... but I don't know whether the fact that I originally pulled from Subversion, or that I push to Unfuddle, is what's causing the message to show up. How can I: Find out where Git thinks 'origin/master' is? If it's somewhere else, how do I turn my laptop into the 'origin/master'? Get this message to go away. It makes me think Git is unhappy about something. My Mac is running Git version 1.6.0.1. When I run git remote show origin as suggested by dbr, I get the following: ~/Projects/GeekFor/geekfor 10:47 AM $ git remote show origin fatal: '/Users/brian/Projects/GeekFor/gf/.git': unable to chdir or not a git archive fatal: The remote end hung up unexpectedly When I run git remote -v as suggested by Aristotle Pagaltzis, I get the following: ~/Projects/GeekFor/geekfor 10:33 AM $ git remote -v origin /Users/brian/Projects/GeekFor/gf/.git unfuddle [email protected]:spilth/geekfor.git Now, interestingly, I'm working on my project in the geekfor directory, but it says my origin is my local machine in the gf directory. I believe gf was the temporary directory I used when converting my project from Subversion to Git, and probably where I pushed to unfuddle from. Then I believe I checked out a fresh copy from unfuddle to the geekfor directory. So it looks like I should follow dbr's advice and do: git remote rm origin git remote add origin [email protected]:spilth/geekfor.git

    Read the article

  • Usage of IcmpSendEcho2 with an asynchronous callback

    - by Ben Voigt
    I've been reading the MSDN documentation for IcmpSendEcho2, and it raises more questions than it answers. I'm familiar with asynchronous callbacks from other Win32 APIs such as ReadFileEx: I provide a buffer which I guarantee will be reserved for the driver's use until the operation completes with any result other than IO_PENDING; I get my callback in case of either success or failure (and call GetCompletionStatus to find out which). Timeouts are my responsibility, and I can call CancelIo to abort processing, but the buffer is still reserved until the driver cancels the operation and calls my completion routine with a status of CANCELLED. And there's an OVERLAPPED structure which uniquely identifies the request through all of this. IcmpSendEcho2 doesn't use an OVERLAPPED context structure for asynchronous requests. And the documentation is excessively minimalist about what happens if the ping times out or fails (failure would be lack of a network connection, a missing ARP entry for local peers, an ICMP destination unreachable response from an intervening router for remote peers, etc). Does anyone know whether the callback occurs on timeout and/or failure? And especially, if no response comes, can I reuse the buffer for another call to IcmpSendEcho2, or is it forever reserved in case a reply comes in late? I want to use this function from a Win32 service, which means I have to get the error-handling cases right and I can't just leak buffers (or, if the API does leak buffers, I have to use a helper process so I have a way to abandon requests). There's also an ugly incompatibility in the way the callback is made. It looks like the first parameter is consistent between the two signatures, so I should be able to use the newer PIO_APC_ROUTINE as long as I only use the second parameter if an OS version check returns Vista or newer? Although MSDN says "don't do a Windows version check", it seems like I need to, because the set of versions with the new argument isn't the same as the set of versions where the function exists in iphlpapi.dll. Pointers to additional documentation or working code which uses this function and an APC would be much appreciated. Please also let me know if this is completely the wrong approach -- i.e. if either using raw sockets or some combination of IcmpCreateFile+WriteFileEx+ReadFileEx would be more robust.
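    Since the callback-signature change tracks the OS version rather than anything queryable from iphlpapi.dll itself, a runtime version gate is hard to avoid. Below is a minimal sketch of such a gate, shown in C# for brevity; the helper name is hypothetical, and it assumes (per the question's reading of MSDN) that Vista (NT 6.0) and later expect the PIO_APC_ROUTINE form while earlier systems expect the legacy single-argument form.

    using System;

    static class IcmpApcSignature
    {
        // Hypothetical helper: decide at runtime which APC callback
        // signature to marshal for IcmpSendEcho2. Assumption: NT 6.0+
        // (Vista) takes the two-argument PIO_APC_ROUTINE; older systems
        // take the legacy form.
        public static bool UseNewApcRoutine()
        {
            OperatingSystem os = Environment.OSVersion;
            return os.Platform == PlatformID.Win32NT
                && os.Version >= new Version(6, 0);
        }
    }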

    Read the article

  • How to convert a DataSet object into an ObjectContext (Entity Framework) object on the fly?

    - by Marcel
    Hi all, I have an existing SQL Server database where I store data from large, specific log files (often 100 MB and more), one per database. After some analysis, the database is deleted again. From the database, I have created both an Entity Framework Model and a DataSet Model via the Visual Studio designers. The DataSet is only for bulk importing data with SqlBulkCopy, after a quite complicated parsing process. All queries are then done using the Entity Framework Model, whose CreateQuery method is exposed via an interface like this public IQueryable<TTarget> GetResults<TTarget>() where TTarget : EntityObject, new() { return this.Context.CreateQuery<TTarget>(typeof(TTarget).Name); } Now, sometimes my files are very small, and in such a case I would like to omit the import into the database and instead have an in-memory representation of the data, accessible as Entities. The idea is to create the DataSet, but instead of bulk importing, to directly transfer it into an ObjectContext which is accessible via the interface. Does this make sense? Now here's what I have done for this conversion so far: I traverse all tables in the DataSet, convert the single rows into entities of the corresponding type, and add them to an instantiated object of my typed Entity context class, like so MyEntities context = new MyEntities(); //create new in-memory context ///.... //get the item in the navigations table MyDataSet.NavigationResultRow dataRow = ds.NavigationResult.First(); //here, a foreach would be necessary in a true-world scenario NavigationResult entity = new NavigationResult { Direction = dataRow.Direction, ///... NavigationResultID = dataRow.NavigationResultID }; //convert to entities context.AddToNavigationResult(entity); //add to entities ///.... Very tedious work, as I would need to create a converter for each of my entity types and iterate over each table in the DataSet I have. And beware if I ever change my database model.... Also, I have found out that I can only instantiate MyEntities if I provide a valid connection string to a SQL Server database. Since I do not want to actually write to my fully fledged database each time, this hinders my intentions. I intend to have only some in-memory proxy database. Can I do this more simply? Is there some automated way of doing such a conversion, like generating an ObjectContext out of a DataSet object? P.S: I have seen a few questions about unit testing that seem somewhat related, but not quite exact.
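    One way to cut down the per-entity converters: since the DataSet tables and the entity types share column and property names, a small reflection helper can do the copying generically. A minimal sketch, assuming every mapped column has a same-named writable property on the entity (complex mappings and relationships would still need hand-written code):

    using System;
    using System.Data;

    static class DataRowConverter
    {
        // Copies each column of a DataRow onto a same-named property of a
        // new TEntity. Columns with no matching property, and DBNull
        // values, are skipped. A sketch, not a replacement for real
        // Entity Framework mapping.
        public static TEntity ToEntity<TEntity>(DataRow row) where TEntity : new()
        {
            TEntity entity = new TEntity();
            foreach (DataColumn column in row.Table.Columns)
            {
                var property = typeof(TEntity).GetProperty(column.ColumnName);
                if (property != null && property.CanWrite && row[column] != DBNull.Value)
                {
                    property.SetValue(entity, row[column], null);
                }
            }
            return entity;
        }
    }

    With that, the hand-written block above reduces to something like context.AddToNavigationResult(DataRowConverter.ToEntity<NavigationResult>(dataRow)); the in-memory-versus-SQL-Server question remains, though, since an ObjectContext still wants a real connection string.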

    Read the article

  • Can't connect to local IP address on OSX

    - by Alex Worden
    I'm trying to connect to a webserver that's running on my Mac (OS X 10.6). I'm able to connect to it locally using http://127.0.0.1:8888/myapp, but when I attempt to connect to it using my machine's local IP address (http://192.168.1.15:8888/myapp; IP shown below) from the same machine (or another on the network), I cannot connect. I can ping the LAN IP address. I've tried adding port forwarding for port 8888 on my router, but it didn't help. I've checked, and the OS X firewall is disabled. Can anyone suggest what else could be blocking the connection? Here's what I get when I run ifconfig: ~ :ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280 stf0: flags=0<> mtu 1280 en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:1f:5b:e8:16:4d media: autoselect status: inactive supported media: autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control> none en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 inet6 fe80::21e:c2ff:febf:4809%en1 prefixlen 64 scopeid 0x5 inet 192.168.1.15 netmask 0xffffff00 broadcast 192.168.1.255 ether 00:1e:c2:bf:48:09 media: autoselect status: active supported media: autoselect fw0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 4078 lladdr 00:1f:5b:ff:fe:2b:b3:3c media: autoselect <full-duplex> status: inactive supported media: autoselect <full-duplex> en5: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500 ether 00:1e:c2:8e:0f:45 media: autoselect status: inactive supported media: none autoselect 10baseT/UTP <half-duplex> en2: flags=8922<BROADCAST,SMART,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:1c:42:00:00:00 media: autoselect status: inactive supported media: autoselect en3: flags=8922<BROADCAST,SMART,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:1c:42:00:00:01 media: autoselect status: inactive supported media: autoselect

    Read the article

  • C# remote web request certificate error

    - by Ben
    Hi. I am currently integrating a payment gateway into an application. Following a successful transaction, the remote payment gateway posts details of the transaction back to my application (ITN). It posts to an HttpHandler that is used to read and validate the data. Part of the validation performed is a POST made by the handler to a validation service provided by the payment gateway. This effectively posts some of the original form values received back to the payment gateway to ensure they are valid. The URL that I am posting back to is "https://sandbox.payfast.co.za/eng/query/validate" and the code I am using: /// <summary> /// Posts the data back to the payment processor to validate the data received /// </summary> public static bool ValidateITNRequestData(NameValueCollection formVariables) { bool isValid = true; try { using (WebClient client = new WebClient()) { string validateUrl = (UseSandBox) ? SandboxValidateUrl : LiveValidateUrl; byte[] responseArray = client.UploadValues(validateUrl, "POST", formVariables); // get the response and replace the line breaks with spaces string result = Encoding.ASCII.GetString(responseArray); result = result.Replace("\r\n", " ").Replace("\r", "").Replace("\n", " "); if (result == null || !result.StartsWith("VALID")) { isValid = false; LogManager.InsertLog(LogTypeEnum.OrderError, "PayFast ITN validation failed", "The validation response was not valid."); } } } catch (Exception ex) { LogManager.InsertLog(LogTypeEnum.Unknown, "Unable to validate ITN data. Unknown exception", ex); isValid = false; } return isValid; } However, on calling WebClient.UploadValues the following exception is raised: System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure. at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken message, AsyncProtocolRequest asyncRequest, Exception exception) at For the sake of brevity I haven't listed the full call stack (I can if anyone thinks it will help). The remote certificate does appear to be valid. To get around the problem, I did try adding a new RemoteCertificateValidationCallback that always returned true, but just ended up getting the following exception: System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessPermission.Demand() at System.Net.ServicePointManager.set_ServerCertificateValidationCallback(RemoteCertificateValidationCallback value) at NopSolutions.NopCommerce.Payment.Methods.PayFast.PayFastPaymentProcessor.ValidateITNRequestData(NameValueCollection formVariables) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.SecurityPermission The Zone of the assembly that failed was: MyComputer So I am not sure this will work in medium trust. Any help would be much appreciated. Thanks, Ben
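    For reference, here is roughly how the validation callback is usually wired up. Note that merely assigning ServicePointManager.ServerCertificateValidationCallback performs the SecurityPermission demand shown in the second exception, so under medium trust this whole approach is unavailable. A sketch for full trust only; the host check is an assumption based on the sandbox URL in the question, and it avoids blindly trusting every certificate:

    using System.Net;
    using System.Net.Security;

    static class PayFastCertificatePolicy
    {
        // Full trust only: setting this property demands SecurityPermission,
        // which is exactly what fails under medium trust.
        public static void Install()
        {
            ServicePointManager.ServerCertificateValidationCallback +=
                (sender, certificate, chain, sslPolicyErrors) =>
                {
                    // Accept anything that already passed normal validation.
                    if (sslPolicyErrors == SslPolicyErrors.None)
                        return true;

                    // Otherwise, only relax validation for the PayFast host.
                    var request = sender as HttpWebRequest;
                    return request != null
                        && request.RequestUri.Host.EndsWith("payfast.co.za");
                };
        }
    }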

    Read the article

  • How to send email from an EC2 instance using GoDaddy's SMTP server?

    - by Matt Greer
    SMTP is a whole new ballgame for me, but I am reading up on it. I am attempting to send email from my EC2 instance using GoDaddy's SMTP server. My domain name is registered through GoDaddy and I have 2 email accounts with them. I can successfully send the email from my dev box, no problem. My web.config: <system.net> <mailSettings> <smtp from="[email protected]" deliveryMethod="Network"> <network host="smtpout.secureserver.net" clientDomain="mydomain.com" port="25" userName="[email protected]" password="mypassword" defaultCredentials="false" /> </smtp> </mailSettings> </system.net> In my ASP.NET app: MailMessage mailMessage = new MailMessage("[email protected]", recipientEmail, emailSubject, body); mailMessage.IsBodyHtml = false; SmtpClient mailClient = new SmtpClient(); mailClient.Send(mailMessage); Very typical, simple use of System.Net.Mail.SmtpClient. The mail client is picking up the settings from my web.config as expected. From the EC2 instance, the same setup yields: System.Net.Mail.SmtpException: Failure sending mail. ---> System.IO.IOException: Unable to read data from the transport connection: net_io_connectionclosed. at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine) at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine) at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller) at System.Net.Mail.SmtpConnection.GetConnection(ServicePoint servicePoint) at System.Net.Mail.SmtpClient.Send(MailMessage message) --- End of inner exception stack trace --- I have searched high and low and not found anyone else attempting this. All the GoDaddy SMTP cases I have found involve people hosted by GoDaddy using their relay server. Some more info: My EC2 instance is Windows Server 2008 with IIS 7. The app is running in .NET 4. I can successfully use Gmail's SMTP server on the EC2 instance by using their port, setting SmtpClient.EnableSsl to true, and sending the mail through a Gmail account. But we want to send the email from an account on our domain. I have port 25 open on both the Windows firewall and Amazon's security-group-based firewall. I have played with Wireshark and noticed my SMTP-related traffic was talking to ports in the 5,000s, so out of desperation I opened them all up, to no avail (then closed them back down). As far as I know, my EC2 instance's IP address is not blacklisted by GoDaddy. I have a feeling I'm just missing something fundamental. I also have a feeling someone is going to recommend I use AuthSmtp or something similar; I'll agree, and will have wasted the past 6 hours :)
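    One variable worth isolating: EC2 heavily throttles (and can block) outbound port 25, so a web.config that works on a dev box can stall on an instance. Ports 80 and 3535 are sometimes cited as alternates for smtpout.secureserver.net; treat the port list, and the placeholder credentials below, as assumptions to verify with GoDaddy. A quick probe that bypasses web.config, as a sketch:

    using System.Net;
    using System.Net.Mail;

    static class SmtpPortProbe
    {
        // Tries the GoDaddy relay on an alternate port. The address and
        // password are placeholders; the port choice is an assumption,
        // not documented fact.
        public static void Send(string recipient)
        {
            var client = new SmtpClient("smtpout.secureserver.net", 3535)
            {
                Credentials = new NetworkCredential("user@mydomain.com", "mypassword")
            };
            client.Send("user@mydomain.com", recipient, "EC2 SMTP probe",
                        "Testing outbound SMTP on an alternate port.");
        }
    }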

    Read the article

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC netwo

    - by SocialAddict
    I'm having problems with my ASP.NET web forms system. It worked on our test server, but now that we are putting it live, one of the servers is within a DMZ and the SQL Server is outside of that (still on our network, although on a different subnet). I have opened up the firewall completely between these two boxes to see if that was the issue, and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try to use a "TransactionScope". We can access the data for retrieval; it's just transactions that break. We have also used msdtc ping to test the connection, and with the amendments on the firewall it pings successfully, but the same error occurs! How do I resolve this error? Any help would be great, as we have a system due to go live today. Panic :) Edit: I have created a more straightforward test page with a transaction, as below, and this works fine. Could a nested transaction cause this kind of error, and if so, why would it only cause an issue when using a live box in a DMZ with a firewall? AuditRepository auditRepository = new AuditRepository(); try { using (TransactionScope scope = new TransactionScope()) { auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1); auditRepository.Save(); auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1); auditRepository.Save(); scope.Complete(); } } catch (Exception ex) { Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace); } This is the error stack we get when the problem occurs: at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken) at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx) at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx) at System.Transactions.EnlistableStates.Promote(InternalTransaction tx) at System.Transactions.Transaction.Promote() at System.Transactions.TransactionInterop.ConvertToOletxTransaction(Transaction transaction) at System.Transactions.TransactionInterop.GetExportCookie(Transaction transaction, Byte[] whereabouts) at System.Data.SqlClient.SqlInternalConnection.GetTransactionCookie(Transaction transaction, Byte[] whereAbouts) at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx) at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx) at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction) at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at System.Data.Linq.SqlClient.SqlConnectionManager.UseConnection(IConnectionUser user) at System.Data.Linq.SqlClient.SqlProvider.get_IsSqlCe() at System.Data.Linq.SqlClient.SqlProvider.InitializeProviderMode() at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) at System.Data.Linq.ChangeDirector.StandardChangeDirector.DynamicInsert(TrackedObject item) at System.Data.Linq.ChangeDirector.StandardChangeDirector.Insert(TrackedObject item) at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at
System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges() at RegBook.classes.DbBase.Save() at RegBook.usercontrols.BookingProcess.confirmBookingButton_Click(Object sender, EventArgs e)
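    One detail that fits the symptoms: each repository call above opens and closes its own connection, and as soon as a second connection enlists in the same TransactionScope, the transaction is promoted to a distributed one, pulling MSDTC (and therefore the DMZ firewall rules) into play. That would also explain why a simple single-save test page works. A sketch of the single-connection shape that stays local on SQL Server 2005 and later; the table name and connection string are placeholders:

    using System.Data.SqlClient;
    using System.Transactions;

    static class LocalTransactionSketch
    {
        public static void SaveTwoAudits(string connectionString)
        {
            using (var scope = new TransactionScope())
            using (var connection = new SqlConnection(connectionString))
            {
                // One connection for the whole scope: on SQL Server 2005+
                // the transaction stays local and MSDTC is never involved.
                connection.Open();

                using (var cmd = new SqlCommand(
                    "INSERT INTO Audit (Message) VALUES (@m)", connection))
                {
                    cmd.Parameters.AddWithValue("@m", "TEST-TRANSACTIONS#1");
                    cmd.ExecuteNonQuery();

                    cmd.Parameters["@m"].Value = "TEST-TRANSACTIONS#2";
                    cmd.ExecuteNonQuery();
                }

                scope.Complete(); // commits without touching MSDTC
            }
        }
    }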

    Read the article

  • Entity Framework SaveChanges error details

    - by Marek Karbarz
    When saving changes with SaveChanges on a data context, is there a way to determine which entity causes an error? For example, sometimes I'll forget to assign a date to a non-nullable date field and get an "Invalid Date Range" error, but I get no information about which entity or which field caused it (I can usually track it down by painstakingly going through all my objects, but it's very time consuming). The stack trace is pretty useless, as it only shows me an error at the SaveChanges call, without any additional information as to where exactly it happened. Note that I'm not looking to solve any particular problem I have now; I would just like to know, in general, if there's a way to tell which entity/field is causing a problem. A quick sample stack trace as an example: in this case, the error happened because the CreatedOn date was not set on an IAComment entity; however, it's impossible to tell that from this error/stack trace. [SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.] System.Data.SqlTypes.SqlDateTime.FromTimeSpan(TimeSpan value) +2127345 System.Data.SqlTypes.SqlDateTime.FromDateTime(DateTime value) +232 System.Data.SqlClient.MetaType.FromDateTime(DateTime dateTime, Byte cb) +46 System.Data.SqlClient.TdsParser.WriteValue(Object value, MetaType type, Byte scale, Int32 actualLength, Int32 encodingByteSize, Int32 offset, TdsParserStateObject stateObj) +4997789 System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc) +6248 System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +987 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +32 System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +141 System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +12 System.Data.Common.DbCommand.ExecuteReader(CommandBehavior behavior) +10 System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues) +8084396 System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) +267 [UpdateException: An error occurred while updating the entries. See the inner exception for details.]
System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) +389 System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache) +163 System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options) +609 IADAL.IAController.Save(IAHeader head) in C:\Projects\IA\IADAL\IAController.cs:61 IA.IAForm.saveForm(Boolean validate) in C:\Projects\IA\IA\IAForm.aspx.cs:198 IA.IAForm.advance_Click(Object sender, EventArgs e) in C:\Projects\IA\IA\IAForm.aspx.cs:287 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +118 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +112 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +5019
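    The UpdateException visible in the stack above is actually the most useful hook here: its StateEntries property names the entries being written when the failure occurred, which usually narrows the problem to one entity. A minimal sketch for an Entity Framework ObjectContext (the "context" parameter stands in for the data context from the question):

    using System;
    using System.Data;
    using System.Data.Objects;

    static class SaveDiagnostics
    {
        public static void SaveWithDiagnostics(ObjectContext context)
        {
            try
            {
                context.SaveChanges();
            }
            catch (UpdateException ex)
            {
                // StateEntries lists exactly the entries being persisted
                // when the update failed; Entity is null for pure
                // relationship entries, hence the check.
                foreach (ObjectStateEntry entry in ex.StateEntries)
                {
                    if (entry.Entity != null)
                    {
                        Console.WriteLine("Failing entity: {0} ({1})",
                            entry.Entity.GetType().Name, entry.State);
                    }
                }
                throw;
            }
        }
    }

    Identifying the failing field still takes a second step (for example, scanning the reported entity's non-nullable DateTime properties for default values), but at least the search is reduced to one object.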

    Read the article

  • NullReferenceException when calling InsertOnSubmit in Linq to Sql.

    - by Charlie
    I'm trying to insert a new object into my database using LINQ to SQL, but get a NullReferenceException when I call InsertOnSubmit() in the code snippet below. I'm passing in a derived class called FileUploadAudit, and all properties on the object are set. public void Save(Audit audit) { try { using (ULNDataClassesDataContext dataContext = this.Connection.GetContext()) { if (audit.AuditID > 0) { throw new RepositoryException(RepositoryExceptionCode.EntityAlreadyExists, string.Format("An audit entry with ID {0} already exists and cannot be updated.", audit.AuditID)); } dataContext.Audits.InsertOnSubmit(audit); dataContext.SubmitChanges(); } } catch (Exception ex) { if (ObjectFactory.GetInstance<IExceptionHandler>().HandleException(ex)) { throw; } } } Here's the stack trace: at System.Data.Linq.Table`1.InsertOnSubmit(TEntity entity) at XXXX.XXXX.Repository.AuditRepository.Save(Audit audit) C:\XXXX\AuditRepository.cs:line 25 I've extended the Audit class like this: public partial class Audit { public Audit(string message, ULNComponent component) : this() { this.Message = message; this.DateTimeRecorded = DateTime.Now; this.SetComponent(component); this.ServerName = Environment.MachineName; } public bool IsError { get; set; } public void SetComponent(ULNComponent component) { this.Component = Enum.GetName(typeof(ULNComponent), component); } } And the derived FileUploadAudit looks like this: public class FileUploadAudit : Audit { public FileUploadAudit(string message, ULNComponent component, Guid fileGuid, string originalFilename, string physicalFilename, HttpPostedFileBase postedFile) : base(message, component) { this.FileGuid = fileGuid; this.OriginalFilename = originalFilename; this.PhysicalFileName = physicalFilename; this.PostedFile = postedFile; this.ValidationErrors = new List<string>(); } public Guid FileGuid { get; set; } public string OriginalFilename { get; set; } public string PhysicalFileName { get; set; } public HttpPostedFileBase PostedFile { get; set; } public IList<string> ValidationErrors { get; set; } } Any ideas what the problem is? The closest question I could find to mine is here, but my partial Audit class calls the parameterless constructor in the generated code, and I still get the problem. UPDATE: This problem only occurs when I pass in the derived FileUploadAudit class; the Audit class works fine. The Audit class is generated as a LINQ to SQL class, and no properties in the derived class are mapped to database fields.
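    A hedged guess at the mechanics: Table<Audit>.InsertOnSubmit looks up mapping metadata for the runtime type of its argument, and FileUploadAudit is not part of any mapped inheritance hierarchy, so the lookup can come back empty and the insert path dereferences the missing metadata. One workaround sketch is to persist only the mapped base portion; the property names below are taken from the code above, and everything else is illustrative:

    static class AuditMapper
    {
        // Copies the column-mapped members of the unmapped subclass onto a
        // plain Audit that LINQ to SQL does know how to insert. The
        // upload-specific properties (FileGuid, PostedFile, ...) stay in
        // memory only.
        public static Audit ToPersistableAudit(FileUploadAudit source)
        {
            return new Audit
            {
                Message = source.Message,
                DateTimeRecorded = source.DateTimeRecorded,
                Component = source.Component,
                ServerName = source.ServerName
            };
        }
    }

    The call site then becomes dataContext.Audits.InsertOnSubmit(AuditMapper.ToPersistableAudit(fileUploadAudit)); the alternative design would be to map the inheritance hierarchy properly with a discriminator column so LINQ to SQL recognizes the subclass.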

    Read the article

< Previous Page | 715 716 717 718 719 720 721 722 723 724 725 726  | Next Page >