Search Results


  • How can I scale movement physics functions to frames per second (in a game engine)?

    - by Richard
    I am working on a game in JavaScript (HTML5 Canvas). I implemented a simple algorithm that lets an object follow another object with basic physics mixed in (a force vector drives the object in the right direction, and the velocity builds up momentum but is reduced by a constant drag factor). At the moment, I have set it up as a rectangle following the mouse (x, y) coordinates. Here's the code:

        // rectangle x, y position
        var x = 400;      // starting x position
        var y = 250;      // starting y position
        var FPS = 60;     // frames per second of the screen

        // physics variables:
        var velX = 0;     // initial velocity at 0 (not moving)
        var velY = 0;     // not moving
        var drag = 0.92;  // drag force reduces velocity by 8% per frame
        var force = 0.35; // overall force applied to move the rectangle
        var angle = 0;    // angle in which to move

        // called every frame (at 60 frames per second):
        function update(){
            // calculate distance between mouse and rectangle
            var dx = mouseX - x;
            var dy = mouseY - y;

            // calculate angle between mouse and rectangle
            var angle = Math.atan(dy/dx);
            if(dx < 0) angle += Math.PI;
            else if(dy < 0) angle += 2*Math.PI;

            // calculate the force (on or off, depending on user input)
            var curForce;
            if(keys[32])          // SPACE bar
                curForce = force; // if pressed, use 0.35 as force
            else
                curForce = 0;     // otherwise, force is 0

            // increment velocity by the force, and scale by drag for x and y
            velX += curForce * Math.cos(angle);
            velX *= drag;
            velY += curForce * Math.sin(angle);
            velY *= drag;

            // update x and y by their velocities
            x += velX;
            y += velY;
        }

    And that works fine at 60 frames per second. Now, the tricky part: my question is, if I change this to a different framerate (say, 30 FPS), how can I modify the force and drag values to keep the movement constant? That is, right now my rectangle (whose position is dictated by the x and y variables) moves at a maximum speed of about 4 pixels per frame and accelerates to its max speed in about 1 second. But if I change the framerate, it moves slower (e.g. at 30 FPS it accelerates to only 2 pixels per frame). So, how can I create an equation that takes FPS (frames per second) as input and spits out correct "drag" and "force" values that will behave the same way in real time? I know it's a heavy question, but perhaps somebody with game design experience, or knowledge of programming physics, can help. Thank you for your efforts. jsFiddle: http://jsfiddle.net/BadDB
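
    One way to approach this (a sketch of mine, not from the question or its answers): treat the per-frame drag as a fixed per-second decay and keep the terminal speed in pixels per real-time second constant. In the loop above, the steady-state per-frame velocity is force*drag/(1-drag), so the on-screen terminal speed is FPS*force*drag/(1-drag) pixels per second; matching both the decay and that terminal speed gives per-frame values for any frame rate:

        // Sketch: derive frame-rate-independent drag/force from the 60 FPS constants.
        var BASE_FPS = 60;     // frame rate the original constants were tuned for
        var baseDrag = 0.92;   // per-frame drag at 60 FPS
        var baseForce = 0.35;  // per-frame force at 60 FPS

        function scalePhysics(fps) {
            // velocity decays by baseDrag^60 per real second at 60 FPS;
            // pick dragF so that dragF^fps equals the same per-second factor
            var dragF = Math.pow(baseDrag, BASE_FPS / fps);

            // keep the terminal speed in pixels *per second* unchanged:
            // fps * forceF * dragF / (1 - dragF) == 60 * force * drag / (1 - drag)
            var pxPerSec = BASE_FPS * baseForce * baseDrag / (1 - baseDrag);
            var forceF = pxPerSec * (1 - dragF) / (fps * dragF);

            return { drag: dragF, force: forceF };
        }

        // e.g. at 30 FPS: drag = 0.92^2 = 0.8464, force scaled up accordingly
        var p = scalePhysics(30);

    This matches the terminal velocity exactly and the acceleration curve approximately; for exact behavior at any frame rate, the usual alternative is to rewrite the update loop itself in terms of an elapsed-time delta.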

    Read the article

  • How to use Python and BeautifulSoup to print the timestamp/last updated time for each row?

    - by cesalo
    How can I use Python and BeautifulSoup to print the timestamp/last updated time for each row? Thanks a lot! Can I add to each printed row a) the date/time and b) the last updated time?

    a) date/time - the time at which the Python code is executed
    b) last updated time - taken from the page's HTML

    HTML structure: one td containing two tables; each table has a few "tr", and within each "tr" are a few "td" data cells.

    HTML:

        <td>
          <table width="100%" border="4" cellspacing="0" bordercolor="white" align="center">
            <tbody>
              <tr>
                <td colspan="2" class="verd_black11">Last Updated: 18/08/2014 10:19</td>
              </tr>
              <tr>
                <td colspan="3" class="verd_black11">All data delayed at least 15 minutes</td>
              </tr>
            </tbody>
          </table>
          <table width="100%" border="4" cellspacing="0" bordercolor="white" align="center">
            <tbody id="tbody">
              <tr id="tr0" class="tableHdrB1" align="center">
                <td align="centre">C Aug-14 - 15000</td>
                <td align="right"> - </td>
                <td align="right">5</td>
                <td align="right">9,904</td>
              </tr>
            </tbody>
          </table>
        </td>

    Code:

        import urllib2
        from bs4 import BeautifulSoup

        contenturl = "HTML:"
        soup = BeautifulSoup(urllib2.urlopen(contenturl).read())
        table = soup.find('tbody', attrs={'id': 'tbody'})
        rows = table.findAll('tr')
        for tr in rows:
            cols = tr.findAll('td')
            for td in cols:
                t = td.find(text=True)
                if t:
                    text = t + ';'
                    print text,
            print

    Output from the above code:

        C Aug-14 - 15000 ; - ; 5 ; 9,904

    Expected output:

        C Aug-14 - 15000 ; - ; 5 ; 9,904 ; 18/08/2014 ; 13:48:00 ; 18/08/2014 ; 10:19
                                           (execute python code)  (last updated time)
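
    A sketch of one way to do this (my code, not from the question; it assumes Python 2, the page structure shown above, and that the first td with class verd_black11 holds the "Last Updated" stamp):

        import datetime
        import urllib2
        from bs4 import BeautifulSoup

        soup = BeautifulSoup(urllib2.urlopen("HTML:").read())  # "HTML:" = the redacted URL

        # b) "Last Updated: 18/08/2014 10:19" from the first table; keep date/time only
        updated = soup.find('td', {'class': 'verd_black11'}).get_text()
        updated = updated.replace('Last Updated:', '').strip()

        # a) the time at which this script runs
        now = datetime.datetime.now().strftime('%d/%m/%Y ; %H:%M:%S')

        table = soup.find('tbody', attrs={'id': 'tbody'})
        for tr in table.findAll('tr'):
            cells = [td.find(text=True) for td in tr.findAll('td')]
            cells = [c.strip() for c in cells if c]
            # row data ; execution date/time ; page's last-updated stamp
            print ' ; '.join(cells) + ' ; ' + now + ' ; ' + updated.replace(' ', ' ; ')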

    Read the article

  • converting timestamp to nanoseconds

    - by kuki
    I have a certain value of date and time, say 28-3-2012 (date) - 10:36:45 (time). I wish to convert this whole timestamp to nanoseconds, with nanosecond precision. The user would input the time and date as shown, but internally I have to be accurate up to nanoseconds, and convert the whole thing to nanoseconds to form a unique key assigned to a specific object created at that particular time. Could someone please help me with this?
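
    A minimal sketch of the conversion (my code, not from the question; it assumes "nanoseconds" means nanoseconds since the Unix epoch, and pads the sub-second part with a counter so that two objects created in the same second still get distinct keys):

        import datetime
        import itertools

        _seq = itertools.count()  # disambiguates keys created within the same second

        def timestamp_to_nanos(date_str, time_str):
            # parse e.g. '28-3-2012' and '10:36:45'
            dt = datetime.datetime.strptime(date_str + ' ' + time_str,
                                            '%d-%m-%Y %H:%M:%S')
            epoch = datetime.datetime(1970, 1, 1)
            seconds = int((dt - epoch).total_seconds())
            return seconds * 10**9 + next(_seq)  # unique, time-ordered key

        key = timestamp_to_nanos('28-3-2012', '10:36:45')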

    Read the article

  • Get data from table cell

    - by mannetje88
    Hello, I have a table and I want the data from a cell to be printed into an input field when I click that specific table cell, using the "onclick" event. I was thinking about document.getElementById or something like that. Greets
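
    A small sketch of the idea (my code; the "myTable" and "myInput" ids are made up for illustration). A single onclick handler on the table catches clicks on every cell, so you don't have to wire up each td individually:

        var table = document.getElementById("myTable");
        var input = document.getElementById("myInput");

        table.onclick = function (e) {
            e = e || window.event;                // older IE fallback
            var cell = e.target || e.srcElement;  // the element actually clicked
            if (cell.tagName === "TD") {
                input.value = cell.innerHTML;     // copy the cell's content into the input
            }
        };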

    Read the article

  • building a hex value from integers

    - by StillLearningToCode
    I am trying to generate a hex color value from integer input, and I'm not sure I'm using the concat method correctly. When I output the string theColor, I only get "0x". Any ideas?

        public String generateColor(String redVal, String blueVal, String greenVal, String alphaVal){
            String theColor = "0x";
            theColor.concat(alphaVal);
            theColor.concat(redVal);
            theColor.concat(greenVal);
            theColor.concat(blueVal);
            return theColor;
        }
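
    For context (my note, not part of the question): Java strings are immutable, so String.concat() returns a new string instead of modifying the receiver, and each return value above is being discarded. A corrected sketch:

        public String generateColor(String redVal, String blueVal, String greenVal, String alphaVal) {
            // String.concat() returns a new String; collect the pieces in a
            // StringBuilder (or reassign: theColor = theColor.concat(...)) instead.
            StringBuilder theColor = new StringBuilder("0x");
            theColor.append(alphaVal);
            theColor.append(redVal);
            theColor.append(greenVal);
            theColor.append(blueVal);
            return theColor.toString();
        }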

    Read the article

  • Do spaces in your URL (%20) have a negative impact on SEO?

    - by Kevin
    All the articles I Googled on this subject date back to 2004-2005. Basically, I am structuring pre-canned searches based on categories the client will input, for example content/(term name)/index.htm. Does it matter if I use the raw term with a space, which is converted to %20 in the URL, or should I convert the spaces to '-' in the link and strip them back out before querying for results? I already have it working, but does anyone know if %20 definitely has a negative impact on SEO and ranking?
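
    For illustration only (my sketch, not from the question): the usual approach is to slugify the term once when building the URL and map it back when querying:

        import re

        def slugify(term):
            # lower-case; collapse spaces and other non-alphanumerics to single hyphens
            return re.sub(r'[^a-z0-9]+', '-', term.lower()).strip('-')

        slugify("Blue Widgets")  # -> 'blue-widgets', i.e. content/blue-widgets/index.htm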

    Read the article

  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
    Paging in ASP.NET has been relatively easy, with stock controls supporting basic paging functionality. However, recently I built an MVC application, and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out, the task of creating a semi-generic Pager control for MVC was fairly easy. Now I'm back to working in Web Forms and thought to myself that the way I created the pager in MVC would actually also work in ASP.NET - in fact quite a bit easier, since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager provides easier reuse in various pages and a more consistent pager display regardless of what kind of 'control' the pager is associated with.

    Why a Pager Control?

    At first blush it might sound silly to create a new pager control - after all, Web Forms has pretty decent paging support, doesn't it? Well, sort of. Yes, the GridView control has automatic paging built in, and the ListView control has the related DataPager control. The built-in ASP.NET paging has several issues though:

    Postback and JavaScript requirements: If you look at paging links in ASP.NET, they are always postback links with javascript:__doPostBack() calls that go back to the server. While that works fine and actually has some benefit - like the fact that paging saves changes to the page and posts them back - it's not very SEO friendly. Basically, if you use JavaScript-based navigation, no search engine will follow the paging links, which effectively cuts off list content after the first page. The DataPager control does support GET-based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer).

    DataSource controls required for efficient data paging retrieval: The only way you can get paging to work efficiently - where only the few records you display on the page are queried for and retrieved from the database - is to use a DataSource control, and only the Linq and Entity DataSource controls support this natively. While you can retrieve this data yourself manually, there's no way to just assign the page number and render the pager based on this custom subset. Other than that, default paging requires a full resultset for ASP.NET to filter the data and display only a subset, which can be very resource intensive and wasteful if you're dealing with largish resultsets (although I'm a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn't fit an ObjectDataSource, you're SOL. That's a real shame too, because with LINQ-based querying it's real easy to retrieve a subset of data that is just the data you want to display, but the native pager functionality doesn't support just setting properties to display that subset, AFAIK.

    DataPager is not free standing: The DataPager control is the closest thing to a decent pager implementation that ASP.NET has, but alas it's not a free-standing component - it works off a related control, and the only one it effectively supports from the stock ASP.NET controls is the ListView control. This means you can't use the same data pager formatting for a grid and a list view or vice versa, and you're always tied to the control.

    Paging events: In order to handle paging you have to deal with paging events. The events fire at specific times in the page pipeline, and because of this you often have to handle data binding in a way that works around the paging events - or else end up double-binding your data sources based on paging. Yuk.

    Styling: The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layouts, and it renders somewhat cleaner, but it too is not exactly easy to get a decent display out of.

    Not a generic solution: The problem with the ASP.NET controls, too, is that they're not generic. GridView and DataGrid use their own internal paging, ListView can use a DataPager, and if you want to manually create a data layout - well, you're on your own. IOW, depending on what you use, you likely have very different-looking paging experiences.

    So, I figured I've struggled with this once too many times and finally sat down and built a Pager control.

    The Pager Control

    My goal was to create a totally free-standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all - you should just be able to set properties and display a pager. The Pager control I ended up with has the following features:

    - Completely free-standing Pager control - no control or data dependencies
    - Complete manual control - the Pager can render without any data dependency
    - Easy to use: only need to set PageSize, ActivePage and TotalItems
    - Supports optional filtering of IQueryable for efficient queries and pager rendering
    - Supports optional full-set filtering of IEnumerable<T> and DataTable
    - Page links are plain HTTP GET href links
    - The control automatically picks up page links on the URL and assigns them (automatic page detection, no page-index-changed events to hook up)
    - Full CSS styling support

    On the downside, there's no templating support for the control, so the layout of the pager is relatively fixed. All elements however are stylable, and there are options to control the text and layout, such as whether to display first and last pages and the previous/next buttons. To give you an idea what the pager looks like, the original post shows two differently styled examples (all via CSS). The markup for these two pagers looks like this:

        <ww:Pager runat="server" id="ItemPager"
                  PageSize="5"
                  PageLinkCssClass="gridpagerbutton"
                  SelectedPageCssClass="gridpagerbutton-selected"
                  PagesTextCssClass="gridpagertext"
                  CssClass="gridpager"
                  RenderContainerDiv="true"
                  ContainerDivCssClass="gridpagercontainer"
                  MaxPagesToDisplay="6"
                  PagesText="Item Pages:"
                  NextText="next"
                  PreviousText="previous" />

        <ww:Pager runat="server" id="ItemPager2"
                  PageSize="5"
                  RenderContainerDiv="true"
                  MaxPagesToDisplay="6" />

    The latter example uses default style settings, so there's not much to set. The first example, on the other hand, explicitly assigns custom styles and overrides a few of the formatting options.

    Styling

    The styling is based on a number of CSS classes, of which the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style.

    The default styling shown for the red-outlined pager looks like this:

        .pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; }
        .pager { float: right; font-size: 10pt; text-align: left; }
        .pagerbutton, .pagerbutton-selected, .pagertext {
            display: block;
            float: left;
            text-align: center;
            border: solid 2px maroon;
            min-width: 18px;
            margin-left: 3px;
            text-decoration: none;
            padding: 4px;
        }
        .pagerbutton-selected {
            font-size: 130%;
            font-weight: bold;
            color: maroon;
            border-width: 0px;
            background: khaki;
        }
        .pagerbutton-first { margin-right: 12px; }
        .pagerbutton-last, .pagerbutton-prev { margin-left: 12px; }
        .pagertext { border: none; margin-left: 30px; font-weight: bold; }
        .pagerbutton a { text-decoration: none; }
        .pagerbutton:hover { background-color: maroon; color: cornsilk; }
        .pagerbutton-prev {
            background-image: url(images/prev.png);
            background-position: 2px center;
            background-repeat: no-repeat;
            width: 35px;
            padding-left: 20px;
        }
        .pagerbutton-next {
            background-image: url(images/next.png);
            background-position: 40px center;
            background-repeat: no-repeat;
            width: 35px;
            padding-right: 20px;
            margin-right: 0px;
        }

    Yup, that's a lot of styling settings, although not all of them are required. The key ones are pagerbutton, pager and the selected-page style. The others (which are implicitly applied by the control based on the pagerbutton style) are for custom markup of the 'special' buttons. In my apps I tend to have two kinds of pages: those associated with typical 'grid' displays that show purely tabular data, and those that have a looser, list-like layout. The two pagers shown above represent these two views, and the pager and gridpager styles in my standard style sheet reflect these two styles.

    Configuring the Pager with Code

    Finally, let's look at what it takes to hook up the pager. As mentioned in the highlights, the Pager control is completely independent of other controls, so if you just want to display a pager on its own, it's as simple as dropping the control and assigning PageSize, ActivePage and either TotalPages or TotalItems. So for this markup:

        <ww:Pager runat="server" id="ItemPagerManual"
                  PageSize="5"
                  MaxPagesToDisplay="6" />

    I can use code as simple as:

        ItemPagerManual.PageSize = 3;
        ItemPagerManual.ActivePage = 4;
        ItemPagerManual.TotalItems = 20;

    Note that ActivePage is not required - it will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or manually assign, as I did above. A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a UserControl here because there are different views the user can choose from, each view being a different user control. This incidentally also highlights one of the nice features of the pager: because the pager is independent of the control, I can put the pager on the host page instead of into each of the user controls. IOW, there's only one Pager control, but there are potentially many user controls/ListViews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays:

        protected void Page_Load(object sender, EventArgs e)
        {
            Category = Request.Params["Category"] ?? string.Empty;

            IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);

            // Update the pager and filter the list down to the active page
            ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList);

            // Render user control with a list view
            Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx");
            ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList;
            phItemList.Controls.Add(ulItemList);  // placeholder
        }

    The code uses a business object to retrieve items by category as an IQueryable, which means the result is only an expression tree that hasn't executed SQL yet and can be further filtered. I then pass this IQueryable to the FilterIQueryable() helper method of the control, which does two main things:

    - Filters the IQueryable to retrieve only the data displayed on the active page
    - Sets the TotalItems property and calculates TotalPages on the Pager

    and that's it! When the Pager renders, it uses those values, plus the PageSize and ActivePage properties, to render the pager. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but these versions just filter the data by removing rows/items from the entire, already retrieved data.

    Output Generated and Paging Links

    The output generated creates pager links as plain href links. Here's what the output looks like:

        <div id="ItemPager" class="pagercontainer">
            <div class="pager">
                <span class="pagertext">Pages: </span>
                <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton">1</a>
                <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton">2</a>
                <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton">3</a>
                <span class="pagerbutton-selected">4</span>
                <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton">5</a>
                <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton">6</a>
                <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last">20</a>&nbsp;
                <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev">Prev</a>&nbsp;
                <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next">Next</a>
            </div>
            <br clear="all" />
        </div>

    The links point back to the current page and simply append a Page= value to the query string. When the page gets reloaded with the new page number, the pager automatically detects the page number and assigns the ActivePage property, which results in the appropriate page being displayed. The code shown in the previous section is all that's needed to handle paging. Note that HTTP GET based paging is different from the postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page - no page preservation at this time. The advantage of not using postback paging is that the URLs generated are plain HTML links that a search engine can follow, where __doPostBack() links are not.

    Pager with a Grid

    The pager also works in combination with grid controls, so it's easy to bypass the grid control's paging features if desired. In the following example I use a DataGrid control and bind it to a DataTable result, which is also filterable by the Pager control.

    The very basic, plain-vanilla ASP.NET grid markup looks like this:

        <div style="width: 600px; margin: 0 auto; padding: 20px;">
            <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems"
                          CssClass="blackborder" style="width: 600px;">
                <AlternatingItemStyle CssClass="gridalternate" />
                <HeaderStyle CssClass="gridheader" />
            </asp:DataGrid>

            <ww:Pager runat="server" ID="Pager"
                      CssClass="gridpager"
                      ContainerDivCssClass="gridpagercontainer"
                      PageLinkCssClass="gridpagerbutton"
                      SelectedPageCssClass="gridpagerbutton-selected"
                      PageSize="8"
                      RenderContainerDiv="true"
                      MaxPagesToDisplay="6" />
        </div>

    and renders using a custom set of CSS styles (screenshot in the original post). The code behind for this is also very simple:

        protected void Page_Load(object sender, EventArgs e)
        {
            string category = Request.Params["category"] ?? "";

            busItem itemRep = WebStoreFactory.GetItem();
            var items = itemRep.GetItemsByCategory(category)
                               .Select(itm => new { Sku = itm.Sku, Description = itm.Description });

            // run query into a DataTable for demonstration
            DataTable dt = itemRep.Converter.ToDataTable(items, "TItems");

            // Remove all items not on the current page
            dt = Pager.FilterDataTable(dt, 0);

            // bind and display
            gdItems.DataSource = dt;
            gdItems.DataBind();
        }

    A little contrived, I suppose, since the list could have been bound directly from the list of elements, but this demonstrates that you can also bind against a DataTable if your business layer returns those. Unfortunately there's no way to filter a DataReader, as it's a one-way, forward-only reader, and the reader is required by the DataSource to perform the binding. However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely as a second query); a sketch of that pattern follows below.
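
    For illustration, here is what that DataReader pattern might look like (my sketch, not from the article; the Items table, column names and connection string are hypothetical - the SQL pages with ROW_NUMBER(), and a second COUNT query feeds the pager's TotalItems):

        // requires using System.Data.SqlClient;
        int page = Pager.ActivePage < 1 ? 1 : Pager.ActivePage;
        int skip = (page - 1) * Pager.PageSize;

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // total count drives the pager's TotalItems/TotalPages
            var countCmd = new SqlCommand("select count(*) from Items where Category = @cat", conn);
            countCmd.Parameters.AddWithValue("@cat", category);
            Pager.TotalItems = (int)countCmd.ExecuteScalar();

            // retrieve only the active page (SQL Server 2005+ ROW_NUMBER() windowing)
            var cmd = new SqlCommand(
                @"select Sku, Description from
                    (select Sku, Description,
                            row_number() over (order by Sku) as RowNum
                     from Items where Category = @cat) x
                  where RowNum > @skip and RowNum <= @skip + @take", conn);
            cmd.Parameters.AddWithValue("@cat", category);
            cmd.Parameters.AddWithValue("@skip", skip);
            cmd.Parameters.AddWithValue("@take", Pager.PageSize);

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                gdItems.DataSource = reader;  // bind the already-filtered reader
                gdItems.DataBind();
            }
        }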

    Control Creation

    The control itself is a pretty brute-force ASP.NET control. Nothing clever about it other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code in the West Wind Web Toolkit's repository (note there are a few dependencies). To give you an idea how the control works, here is the Render() method:

        /// <summary>
        /// Overridden to handle custom pager rendering for runtime and design time
        /// </summary>
        /// <param name="writer"></param>
        protected override void Render(HtmlTextWriter writer)
        {
            base.Render(writer);

            if (TotalPages == 0 && TotalItems > 0)
                TotalPages = CalculateTotalPagesFromTotalItems();

            if (DesignMode)
                TotalPages = 10;

            // don't render pager if there's only one page
            if (TotalPages < 2)
                return;

            if (RenderContainerDiv)
            {
                if (!string.IsNullOrEmpty(ContainerDivCssClass))
                    writer.AddAttribute("class", ContainerDivCssClass);
                writer.RenderBeginTag("div");
            }

            // main pager wrapper
            writer.WriteBeginTag("div");
            writer.AddAttribute("id", this.ClientID);
            if (!string.IsNullOrEmpty(CssClass))
                writer.WriteAttribute("class", this.CssClass);
            writer.Write(HtmlTextWriter.TagRightChar + "\r\n");

            // Pages Text
            writer.WriteBeginTag("span");
            if (!string.IsNullOrEmpty(PagesTextCssClass))
                writer.WriteAttribute("class", PagesTextCssClass);
            writer.Write(HtmlTextWriter.TagRightChar);
            writer.Write(this.PagesText);
            writer.WriteEndTag("span");

            // if the base url is empty use the current URL
            FixupBaseUrl();

            // set _startPage and _endPage
            ConfigurePagesToRender();

            // write out first page link
            if (ShowFirstAndLastPageLinks && _startPage != 1)
            {
                writer.WriteBeginTag("a");
                string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString());
                writer.WriteAttribute("href", pageUrl);
                if (!string.IsNullOrEmpty(PageLinkCssClass))
                    writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first");
                writer.Write(HtmlTextWriter.SelfClosingTagEnd);
                writer.Write("1");
                writer.WriteEndTag("a");
                writer.Write("&nbsp;");
            }

            // write out all the page links
            for (int i = _startPage; i < _endPage + 1; i++)
            {
                if (i == ActivePage)
                {
                    writer.WriteBeginTag("span");
                    if (!string.IsNullOrEmpty(SelectedPageCssClass))
                        writer.WriteAttribute("class", SelectedPageCssClass);
                    writer.Write(HtmlTextWriter.TagRightChar);
                    writer.Write(i.ToString());
                    writer.WriteEndTag("span");
                }
                else
                {
                    writer.WriteBeginTag("a");
                    string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&');
                    writer.WriteAttribute("href", pageUrl);
                    if (!string.IsNullOrEmpty(PageLinkCssClass))
                        writer.WriteAttribute("class", PageLinkCssClass);
                    writer.Write(HtmlTextWriter.SelfClosingTagEnd);
                    writer.Write(i.ToString());
                    writer.WriteEndTag("a");
                }
                writer.Write("\r\n");
            }

            // write out last page link
            if (ShowFirstAndLastPageLinks && _endPage < TotalPages)
            {
                writer.WriteBeginTag("a");
                string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString());
                writer.WriteAttribute("href", pageUrl);
                if (!string.IsNullOrEmpty(PageLinkCssClass))
                    writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last");
                writer.Write(HtmlTextWriter.SelfClosingTagEnd);
                writer.Write(TotalPages.ToString());
                writer.WriteEndTag("a");
            }

            // Previous link
            if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1)
            {
                writer.Write("&nbsp;");
                writer.WriteBeginTag("a");
                string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString());
                writer.WriteAttribute("href", pageUrl);
                if (!string.IsNullOrEmpty(PageLinkCssClass))
                    writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev");
                writer.Write(HtmlTextWriter.SelfClosingTagEnd);
                writer.Write(PreviousText);
                writer.WriteEndTag("a");
            }

            // Next link
            if (ShowPreviousNextLinks && !string.IsNullOrEmpty(NextText) && ActivePage < TotalPages)
            {
                writer.Write("&nbsp;");
                writer.WriteBeginTag("a");
                string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString());
                writer.WriteAttribute("href", pageUrl);
                if (!string.IsNullOrEmpty(PageLinkCssClass))
                    writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next");
                writer.Write(HtmlTextWriter.SelfClosingTagEnd);
                writer.Write(NextText);
                writer.WriteEndTag("a");
            }

            writer.WriteEndTag("div");

            if (RenderContainerDiv)
            {
                if (RenderContainerDivBreak)
                    writer.Write("<br clear=\"all\" />\r\n");
                writer.WriteEndTag("div");
            }
        }

    As I said, pretty much brute-force rendering based on the control's property settings, of which there are quite a few (the original post shows the pager in the designer). Unfortunately, the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied to the special buttons. Not a big deal, since VS does at least respect the spacing (the floated elements overlay). Then again, I'm not using the designer anyway :-}.

    Filtering Data

    What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice - it violates separation of concerns - it's very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having them exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional - if you have a business layer that can provide filtered page queries for you, you can use that instead and assign the TotalItems property manually.

    There are three filter method types available - for IQueryable, IEnumerable and DataTable - which tend to be the most common use cases in my apps, old and new. The IQueryable version is pretty simple, as it can rely on .Skip() and .Take() with LINQ:

        /// <summary>
        /// Queries the database for the ActivePage applied manually
        /// or from the Request["page"] variable. This routine
        /// figures out and sets TotalPages, ActivePage and
        /// returns a filtered subset IQueryable that contains
        /// only the items from the ActivePage.
        /// </summary>
        /// <param name="query"></param>
        /// <param name="activePage">
        /// The page you want to display. Sets the ActivePage property when passed.
        /// Pass 0 or smaller to use the ActivePage setting.
        /// </param>
        /// <returns></returns>
        public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage)
            where T : class, new()
        {
            ActivePage = activePage < 1 ? ActivePage : activePage;
            if (ActivePage < 1)
                ActivePage = 1;

            TotalItems = query.Count();

            if (TotalItems <= PageSize)
            {
                ActivePage = 1;
                TotalPages = 1;
                return query;
            }

            int skip = ActivePage - 1;
            if (skip > 0)
                query = query.Skip(skip * PageSize);

            _TotalPages = CalculateTotalPagesFromTotalItems();

            return query.Take(PageSize);
        }

    The IEnumerable<T> version simply converts the IEnumerable to an IQueryable and calls back into this method for filtering. The DataTable version requires a little more work, manually parsing and filtering records (I didn't want to add the Linq DataSetExtensions assembly just for this):

        /// <summary>
        /// Filters a data table for an ActivePage.
        ///
        /// Note: Modifies the data set permanently by removing DataRows
        /// </summary>
        /// <param name="dt">Full result DataTable</param>
        /// <param name="activePage">Page to display. 0 to use ActivePage property</param>
        /// <returns></returns>
        public DataTable FilterDataTable(DataTable dt, int activePage)
        {
            ActivePage = activePage < 1 ? ActivePage : activePage;
            if (ActivePage < 1)
                ActivePage = 1;

            TotalItems = dt.Rows.Count;

            if (TotalItems <= PageSize)
            {
                ActivePage = 1;
                TotalPages = 1;
                return dt;
            }

            int skip = ActivePage - 1;
            if (skip > 0)
            {
                for (int i = 0; i < skip * PageSize; i++)
                    dt.Rows.RemoveAt(0);
            }

            while (dt.Rows.Count > PageSize)
                dt.Rows.RemoveAt(PageSize);

            return dt;
        }

    Using the Pager Control

    The pager as it stands is a first cut I built a couple of weeks ago and have been tweaking since as part of an internal project I'm working on. I've replaced a bunch of pagers on various older pages with this pager without any issues, and now have what feels like a more consistent user interface, where paging looks and feels the same across different controls. As a bonus, I'm only loading the data from the database that I need to display a single page. With the preset class tags applied, adding a pager is now as easy as dropping the control and adding the style sheet, so the styling stays consistent - no fuss, no muss. Schweet. Hopefully some of you may find this as useful as I have, or at least as a baseline to build on top of...

    Resources

    - The Pager is part of the West Wind Web & Ajax Toolkit
    - Pager.cs source code (some toolkit dependencies)
    - Westwind.css base stylesheet with .pager and .gridpager styles
    - Pager example page

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green - it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days now.

    In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in. To make this clear (at least I hope it becomes clear...) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'doing real-world TDD in .NET, with massive use of third-party add-ins'.

    This is because I feel that there is a more general statement about test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about the latter, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious... So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

    Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

    The three parts of a software component

    Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with three parts (the original post shows them in a diagram): the requirement, its executable formulation, and the production code itself.

    First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
    At the C# micro-level, the best way I found to formulate requirements is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language; for C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article...

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here...

    The requirement

    As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

        namespace Calculator
        {
            /// <summary>
            /// Defines a very simple calculator component for demo purposes.
            /// </summary>
            public interface ICalculator
            {
                /// <summary>
                /// Gets the result of the last successful operation.
                /// </summary>
                /// <value>The last result.</value>
                /// <remarks>
                /// Will be <see langword="null" /> before the first successful operation.
                /// </remarks>
                double? LastResult { get; }

            } // interface ICalculator

        } // namespace Calculator

    So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

    - Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.
    - In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.
    - I normally use an IoC container or some sort of self-written provider-like model in my architecture.
      In either case, I need my components defined via service interfaces anyway. I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

    The 'Red' (pt. 1)

    First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template) and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which looks like the following:

        namespace Calculator.Test
        {
            [TestFixture]
            public class CalculatorTest
            {
                private readonly ServiceContainer container = new ServiceContainer();

                [Test]
                public void CalculatorLastResultIsInitiallyNull()
                {
                    ICalculator calculator = container.GetService<ICalculator>();

                    Assert.IsNull(calculator.LastResult);
                }

            } // class CalculatorTest

        } // namespace Calculator.Test

    This is basically the executable formulation of (part of) what the interface definition states.

    Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be...) A compiler error tells me: your code is incorrect - but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

    Back to our Calculator. When I execute the above test with R#, the Gallio plugin's output tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my 'work' was six mouse clicks long; the only thing left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                public double? LastResult
                {
                    get
                    {
                        throw new NotImplementedException();
                    }
                }
            }
        }

    Back to the test fixture, we have to put our IoC container to work:

        [TestFixture]
        public class CalculatorTest
        {
            #region Fields

            private readonly ServiceContainer container = new ServiceContainer();

            #endregion // Fields

            #region Setup/TearDown

            [FixtureSetUp]
            public void FixtureSetUp()
            {
                container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
            }

            ...

    Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more...

    The 'Red' (pt. 2)

    Now the execution of the above test gives a different result (screenshot in the original post): this time, the test outcome tells me that the method under test is called.
    And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test still is worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part...

    The 'Green'

    Making the test green is quite trivial. Just make LastResult an automatic property:

        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            public double? LastResult { get; private set; }
        }

    One more round...

    Now on to something slightly more demanding (cough...). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

        } // interface ICalculator

    A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking...)

    Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

        } // class Calculator

    As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

        [Test]
        [Row(-0.5, 2)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

    As you can see, I'm using a data-driven unit test method here, mainly for these two reasons:

    - Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one.
    - From the test report, you can see that the argument values are explicitly printed out.
      This can be a valuable documentation feature even when everything is green: one can quickly review exactly what values were tested - the complete Gallio HTML report (as produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

    Back to our Calculator development: the test result at this moment tells us that we're red again, because there is not yet an implementation... Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

        // in CalculatorTest:

        [Test]
        [Row(-0.5, 2)]
        [Row(295, -123)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

        // in Calculator:

        public double Add(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            throw new NotImplementedException();
        }

    So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

        [Test]
        [Row(1, 1, 2)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    Again, I'm regularly using row-based test methods for these kinds of unit tests. The above pattern proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing:

    - In the course of refining a method, it's very likely that additional test cases will come up. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software) that results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test.
    - Remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirement has to be proven.)
    So your test method might look something like this in the end:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        [Row(0, 999999999, 999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, double.MaxValue)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    And this will produce a clean HTML report with Gallio (shown in the original post). Not bad for the amount of work we invested in it, huh? There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review...

    The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new... (a sketch of what such a test could look like follows below).
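
    Purely for illustration - this sketch is mine, not the article's (the article skips it deliberately) - a LastResult test for Add() in the same row-based idiom would presumably mirror the TestSubtractGivesExpectedLastResult method shown further down:

        // My sketch, not from the article: same Defined-Input/Expected-Output idiom,
        // asserting that Add() stores its result in the LastResult property.
        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        [Row(0, 0, 0)]
        public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, calculator.LastResult);
        }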

    And finally: Refactor (for the right reasons)

    To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

        // CalculatorTest.cs:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 0)]
        [Row(0, 999999999, -999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, -double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
        public void TestSubtract(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Subtract(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 0)]
        [Row(0, 999999999, -999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, -double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
        public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            calculator.Subtract(operand1, operand2);

            Assert.AreEqual(expectedResult, calculator.LastResult);
        }

        ...

        // ICalculator.cs:

        /// <summary>
        /// Subtracts the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the subtraction.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Subtract(double operand1, double operand2);

        ...

        // Calculator.cs:

        public double Subtract(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            return (this.LastResult = operand1 - operand2).Value;
        }

    Obviously, the argument validation stuff produced during the red-green part of this cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of lines of production code, we do an Extract Method refactoring. One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                #region ICalculator

                public double? LastResult { get; private set; }

                public double Add(double operand1, double operand2)
                {
                    ThrowIfOneOperandIsInvalid(operand1, operand2);

                    return (this.LastResult = operand1 + operand2).Value;
                }

                public double Subtract(double operand1, double operand2)
                {
                    ThrowIfOneOperandIsInvalid(operand1, operand2);

                    return (this.LastResult = operand1 - operand2).Value;
                }

                #endregion // ICalculator

                #region Implementation (Helper)

                private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
                {
                    if (operand1 < 0.0)
                    {
                        throw new ArgumentException("Value must not be negative.", "operand1");
                    }

                    if (operand2 < 0.0)
                    {
                        throw new ArgumentException("Value must not be negative.", "operand2");
                    }
                }

                #endregion // Implementation (Helper)

            } // class Calculator

        } // namespace Calculator

    But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

        "STOP! Hold on a second... before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know... I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though... if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add?"

    Derick immediately answers his own question:

        "So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it."

    I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
    So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure...

    Conclusions

    The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

    - Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first...)
    - It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction...) The test code serves 'only' to make the production code work. But it's the number of delivered features that solely counts at the end of the day - no matter how much test code you wrote or how good it is.

    With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way...). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable; in other words, roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. The entire software development profession is, historically seen, very young - only at the very beginning - and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary...)

    Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days...). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - to 'save' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have quite often seen and heard exactly that...). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows...

    The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the so-manufactured 'product' is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development...

    But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end; it's just that you have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy this kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be "good enough" in my practical experience. But of course, I'm open to suggestions here… My rationale for now is: if a test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method – the method simply isn't operational yet. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' must occur only for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion – Fail By Assertion, Not By Anything Else!
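To make that last rule concrete, here is a minimal sketch of such a test in NUnit style (the article doesn't say which test framework it uses, and the test name is illustrative; it also assumes the Calculator class above is visible to the test assembly). All plumbing that could throw is kept out of the assert phase, so a CI 'red' can only mean an unfulfilled assertion:

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPositiveOperands_ReturnsSum()
    {
        // Arrange: construction problems would surface here,
        // not as a misleading assertion failure.
        var calculator = new Calculator();

        // Act
        double result = calculator.Add(2.0, 3.0);

        // Assert: the only way this test goes red in CI
        // is an unfulfilled assertion.
        Assert.AreEqual(5.0, result);
    }
}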

    Read the article

  • Nagios Creating lots of zombie process

    - by pradeepchhetri
In my monitoring box I have lots of zombie processes created by Nagios, and they get removed quickly as well. I am using active checks to monitor my servers. I captured the defunct processes using the following command:

$ top -d 0.25 -b -n 20 > topout.txt

This collects the output of top 20 times with a 0.25 s delay. I then grepped topout.txt for the defunct processes:

$ cat topout.txt | grep defunct

I get the following output:

8957 nagios 20 0 0 0 0 Z 6.0 0.0 0:00.02 nagios <defunct>
8951 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
8954 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
8945 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
8946 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
8980 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9000 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.00 nagios <defunct>
9024 nagios 20 0 0 0 0 Z 7.0 0.0 0:00.02 nagios <defunct>
9025 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.01 nagios <defunct>
9040 nagios 20 0 0 0 0 Z 3.1 0.0 0:00.01 nagios <defunct>
9086 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9087 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9123 nagios 20 0 0 0 0 Z 6.1 0.0 0:00.02 nagios <defunct>
9126 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
9131 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
9091 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.05 nagios <defunct>
9111 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9119 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9118 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9151 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct>
9153 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct>
9150 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9164 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct>
9171 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct>
9154 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9156 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9163 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9167 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9178 nagios 20 0 0 0 0 Z 3.8 0.0 0:00.02 nagios <defunct>
9174 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9179 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9182 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>

Can somebody help me find out the reason for these zombie processes and how I can prevent them?
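For reference, a short sketch of how one might check whether these zombies are short-lived children of the Nagios check scheduler (usually harmless, waiting to be reaped between check cycles) or an actual leak. Command names are standard procps/coreutils; the 5-second interval is an arbitrary example:

#!/bin/sh
# List current zombies with their parent PID and parent command, so you
# can see who is failing to reap them (for active checks the parent is
# normally the main nagios process).
ps -eo pid,ppid,state,user,comm | awk '$3 == "Z"' | while read pid ppid state user comm; do
    printf 'zombie %s (pid %s), parent: %s\n' "$comm" "$pid" "$(ps -p "$ppid" -o comm=)"
done

# Count zombies over time; a steadily growing count would indicate a
# real reaping problem, while a small fluctuating number is normal.
while sleep 5; do
    date +'%T'
    ps -eo state | grep -c '^Z'
done

Zombies that vanish within a second or two are just forked check children waiting for Nagios to wait() on them; only a monotonically growing count is worth chasing (e.g. a plugin that forks grandchildren and exits).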

    Read the article

  • SSH X11 not working

    - by azat
I have a home and a work computer; the home computer has a static IP address. If I ssh from my work computer to my home computer, the ssh connection works but X11 applications are not displayed.

In my /etc/ssh/sshd_config at home:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes

At work I have tried the following commands:
xhost + home HOME_IP
ssh -X home
ssh -X HOME_IP
ssh -Y home
ssh -Y HOME_IP

My /etc/ssh/ssh_config at work:
Host *
ForwardX11 yes
ForwardX11Trusted yes

My ~/.ssh/config at work:
Host home
HostName HOME_IP
User azat
PreferredAuthentications password
ForwardX11 yes

My ~/.Xauthority at work:
-rw------- 1 azat azat 269 Jun 7 11:25 .Xauthority

My ~/.Xauthority at home:
-rw------- 1 azat azat 246 Jun 7 19:03 .Xauthority

But it doesn't work. After I make an ssh connection to home:

$ echo $DISPLAY
localhost:10.0
$ kate
X11 connection rejected because of wrong authentication.
(the message repeats 8 times)
kate: cannot connect to X server localhost:10.0

I use iptables at home, but I've allowed port 22. According to what I've read, that's all I need.

UPD. With -vvv:
...
debug2: callback start
debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
debug1: Requesting X11 forwarding with authentication spoofing.
debug2: channel 1: request x11-req confirm 1
debug2: client_session2_setup: id 1
debug2: fd 3 setting TCP_NODELAY
debug2: channel 1: request pty-req confirm 1
...

When I try to launch kate:
debug1: client_input_channel_open: ctype x11 rchan 2 win 65536 max 16384
debug1: client_request_x11: request from 127.0.0.1 55486
debug2: fd 8 setting O_NONBLOCK
debug3: fd 8 is O_NONBLOCK
debug1: channel 2: new [x11]
debug1: confirm x11
debug2: X11 connection uses different authentication protocol.
X11 connection rejected because of wrong authentication.
debug2: X11 rejected 2 i0/o0
debug2: channel 2: read failed
debug2: channel 2: close_read
debug2: channel 2: input open -> drain
debug2: channel 2: ibuf empty
debug2: channel 2: send eof
debug2: channel 2: input drain -> closed
debug2: channel 2: write failed
debug2: channel 2: close_write
debug2: channel 2: output open -> closed
debug2: X11 closed 2 i3/o3
debug2: channel 2: send close
debug2: channel 2: rcvd close
debug2: channel 2: is dead
debug2: channel 2: garbage collecting
debug1: channel 2: free: x11, nchannels 3
debug3: channel 2: status: The following connections are open:
#1 client-session (t4 r0 i0/0 o0/0 fd 5/6 cc -1)
#2 x11 (t7 r2 i3/0 o3/0 fd 8/8 cc -1)
# The same as above repeats about 7 times
kate: cannot connect to X server localhost:10.0

UPD2 (answers to questions from the comments):

Q: Please provide your Linux distribution & version number. Are you using a default GNOME or KDE environment for X or something else you customized yourself?
azat:~$ kded4 -version
Qt: 4.7.4
KDE Development Platform: 4.6.5 (4.6.5)
KDE Daemon: $Id$

Q: Are you invoking ssh directly on a command line from a terminal window? What terminal are you using (xterm, gnome-terminal, or other)? How did you start the terminal running in the X environment – from a menu, a hotkey, or something else?
From the terminal emulator `yakuake`. I manually press `Ctrl + N` and type the commands.

Q: Can you run xeyes from the same terminal window where the ssh -X fails?
`xeyes` is not installed, but `kate` or another KDE app runs.

Q: Are you invoking the ssh command as the same user that you're logged into the X session as?
Yes, the same user.

UPD3: I also downloaded the ssh sources and added debug2() calls to find out why it reports that the authentication protocol is different. It sees some cookies, and one of them is empty; the other is MIT-MAGIC-COOKIE-1.
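The 'wrong authentication' message plus the empty cookie found in UPD3 usually mean the MIT-MAGIC-COOKIE the ssh client read for the local display doesn't match what the X server expects. A minimal sketch of how one might verify and repair this (paths are the usual defaults; adjust the display number to whatever $DISPLAY shows on the work machine):

# On the work machine (where X runs): does xauth actually have a cookie
# for the local display? ssh runs `xauth list :0` to fetch it.
xauth list "$DISPLAY"

# If the list is empty, generate a cookie for the display
# (xauth generate requires a running, reachable X server):
xauth generate "$DISPLAY" . trusted

# On the home machine: remove a possibly stale authority file; sshd will
# recreate the proxy cookie for localhost:10.0 on the next connection.
rm ~/.Xauthority
exit   # then reconnect from work with: ssh -Y home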

    Read the article

  • How to bridge Debian guest VM to VPN via Cisco AnyConnect Client running on Windows Vista host

    - by bgoodr
I am running Cisco AnyConnect VPN Client version 2.5.3054 on a laptop running Windows Vista Home Premium (version 6.0.6002) Service Pack 2. I am running VMware Player version 4.0.2 build-591240. The guest operating system running under VMware Player is Debian 6.0.2.1 i386. The laptop is on a wireless connection, and I can browse the web from Windows Vista using Firefox just fine. I am able to boot into the Debian VM, open a browser and access websites on the WAN from within the VM just fine.

I can ping real Linux hosts on my LAN via:

ping <lan_system>.local

where <lan_system> is the hostname returned by uname -a on that system on my LAN. From a DOS CMD shell, I am able to ping hosts that exist on the remote network served by the Cisco AnyConnect Client's VPN (and without the .local suffix applied as above):

ping <remote_system>

However, from within the Debian VM, I expect to also be able to ping those same remote hosts (<remote_system>) that are tunnelled over the VPN set up by the Cisco AnyConnect Client. Let's say that I can ping a <remote_system> called flubber from a DOS CMD prompt just fine. When I execute the Linux ping command from inside the Debian VM via:

ping flubber

it returns immediately with this output:

ping: unknown host flubber

For reference, since I suspect it will be useful, here is the output of the route print command from the DOS CMD prompt:

===========================================================================
Interface List
30 ...00 05 9a 3c 7a 00 ...... Cisco AnyConnect VPN Virtual Miniport Adapter for Windows
11 ...00 1b 9e c4 de e5 ...... Atheros AR5007EG Wireless Network Adapter
26 ...00 50 56 c0 00 01 ...... VMware Virtual Ethernet Adapter for VMnet1
28 ...00 50 56 c0 00 08 ...... VMware Virtual Ethernet Adapter for VMnet8
 1 ........................... Software Loopback Interface 1
12 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
13 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #3
32 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #4
27 ...00 00 00 00 00 00 00 e0  isatap.{E5292CF6-4FBB-4320-806D-A6B366769255}
17 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #2
20 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #8
22 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #10
24 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #11
25 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #12
29 ...00 00 00 00 00 00 00 e0  isatap.{C3852986-5053-4E2E-BE60-52EA2FCF5899}
41 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #14
===========================================================================

At the top window border of the VM, clicking Virtual Machine, then Virtual Machine Settings, then Network Adapter, I have these options set:

[X] Bridged: Connected directly to the physical network
[X] Replicate physical network connection state
[ ] NAT: used to share the host's IP address
[ ] Host-only: A private network shared with the host
[ ] LAN segment:
[ ] <LAN Segments...>
<Advanced>

I've toyed with the other options such as NAT and Host-only, but that had no effect. Is there some way to allow the VM to access those <remote_system>'s?
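Worth noting: `ping: unknown host flubber` is a name-resolution failure, not (yet) a routing one, and AnyConnect typically only routes traffic originating on the Windows host into the tunnel, so a bridged guest bypasses it entirely. A sketch of how one might test this from the Debian guest – FLUBBER_IP and VPN_DNS_IP are placeholders you would read off the Windows side with `ipconfig /all` while the VPN is up, and whether NAT traffic is actually picked up by AnyConnect depends on the client's tunnel policy:

# 1. Take DNS out of the picture: ping the remote host by IP.
ping -c 3 FLUBBER_IP

# 2. If that fails too, the guest's packets never enter the tunnel.
#    Switching the VM from Bridged to NAT makes guest traffic leave
#    with the host's own IP, which AnyConnect can then route.

# 3. Once IP connectivity works, point the guest at the VPN's DNS
#    server so names like "flubber" resolve:
echo 'nameserver VPN_DNS_IP' >> /etc/resolv.conf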

    Read the article

  • Locating Rogue Perl Script

    - by Gary Garside
I've been trying to track down the location of a Perl script which is causing havoc on a server which I control. I'm also trying to find out exactly how this script was installed on the server – my best guess is through a WordPress exploit. The server is a basic web setup running Ubuntu 9.04, Apache and MySQL. I use iptables as a firewall, the box hosts around 20 sites, and the load never really creeps above 0.7. From what I can see, the script is making outbound connections to other servers (most likely trying to brute-force entry).

Here is a top dump of one of the processes:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22569 www-data 20 0 22784 3216 780 R 100 0.2 47:00.60 perl

The command the process claims to be running is /usr/sbin/sshd. I've tried to find an exact file name but I'm having no luck... I ran lsof -p PID and here is the output:

COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
perl 22569 www-data cwd DIR 8,6 4096 2 /
perl 22569 www-data rtd DIR 8,6 4096 2 /
perl 22569 www-data txt REG 8,6 10336 162220 /usr/bin/perl
perl 22569 www-data mem REG 8,6 26936 170219 /usr/lib/perl/5.10.0/auto/Socket/Socket.so
perl 22569 www-data mem REG 8,6 22808 170214 /usr/lib/perl/5.10.0/auto/IO/IO.so
perl 22569 www-data mem REG 8,6 39112 145112 /lib/libcrypt-2.9.so
perl 22569 www-data mem REG 8,6 1502512 145124 /lib/libc-2.9.so
perl 22569 www-data mem REG 8,6 130151 145113 /lib/libpthread-2.9.so
perl 22569 www-data mem REG 8,6 542928 145122 /lib/libm-2.9.so
perl 22569 www-data mem REG 8,6 14608 145125 /lib/libdl-2.9.so
perl 22569 www-data mem REG 8,6 1503704 162222 /usr/lib/libperl.so.5.10.0
perl 22569 www-data mem REG 8,6 135680 145116 /lib/ld-2.9.so
perl 22569 www-data 0r FIFO 0,6 157216 pipe
perl 22569 www-data 1w FIFO 0,6 197642 pipe
perl 22569 www-data 2w FIFO 0,6 197642 pipe
perl 22569 www-data 3w FIFO 0,6 197642 pipe
perl 22569 www-data 4u IPv4 383991 TCP outsidesoftware.com:56869->server12.34.56.78.live-servers.net:www (ESTABLISHED)

My gut feeling is that outsidesoftware.com is also under attack, or possibly being used as a tunnel. I've managed to find a number of rogue files in /tmp and /var/tmp; here is a brief excerpt from one of them:

#!/usr/bin/perl
# this spreader is coded by xdh
# xdh@xxxxxxxxxxx
# only for testing...
my @nickname = ("vn");
my $nick = $nickname[rand scalar @nickname];
my $ircname = $nickname[rand scalar @nickname];
#system("kill -9 `ps ax |grep httpdse |grep -v grep|awk '{print $1;}'`");
my $processo = '/usr/sbin/sshd';

The full file contents can be viewed here: http://pastebin.com/yenFRrGP

I'm trying to achieve a couple of things here... Firstly, I need to stop these processes from running, either by disabling outbound SSH or via iptables rules etc... These scripts have been running for around 36 hours now, and my main concern is to stop them running and respawning by themselves. Secondly, I need to trace where and how these scripts were installed. If anybody has any advice on what to look for in access logs or anything else, I would be really grateful. Thanks in advance
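Since the script sets its process name to /usr/sbin/sshd, /proc is the most reliable way to see what is really running while a process is still alive. A sketch of the usual steps (PID 22569 taken from the top output above; the 2-day find window is an arbitrary example):

# Where was the process started, and what is it really executing?
ls -l /proc/22569/cwd /proc/22569/exe
cat /proc/22569/cmdline | tr '\0' ' '; echo

# Recover the script even if it was deleted from disk:
ls -l /proc/22569/fd

# Block outbound traffic for the web user while investigating, then
# kill everything that user is running and look for droppings:
iptables -A OUTPUT -m owner --uid-owner www-data -j DROP
pkill -9 -u www-data perl
find /tmp /var/tmp /dev/shm -mtime -2 -ls

# Correlate file timestamps with web logs to find the entry point:
grep -i 'wp-' /var/log/apache2/access.log* | grep POST | less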

    Read the article

  • Why can't I convert FLV to MP4 format using FFmpeg when MP3 works?

    - by hugemeow
In fact I have succeeded in converting FLV to MP3:

D:\tmp\ffmpeg-20121005-git-d9dfe9a-win64-static\ffmpeg-20121005-git-d9dfe9a-win64-static\bin>ffmpeg.exe -i a.flv -acodec mp3 a.mp3
ffmpeg version N-45080-gd9dfe9a Copyright (c) 2000-2012 the FFmpeg developers
built on Oct 5 2012 16:49:01 with gcc 4.7.1 (GCC)
configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
libavutil      51. 73.102 / 51. 73.102
libavcodec     54. 63.100 / 54. 63.100
libavformat    54. 29.105 / 54. 29.105
libavdevice    54.  3.100 / 54.  3.100
libavfilter     3. 19.102 /  3. 19.102
libswscale      2.  1.101 /  2.  1.101
libswresample   0. 16.100 /  0. 16.100
libpostproc    52.  1.100 / 52.  1.100
Input #0, flv, from 'a.flv':
Metadata:
metadatacreator : iku
hasKeyframes : true
hasVideo : true
hasAudio : true
hasMetadata : true
canSeekToEnd : false
datasize : 16906383
videosize : 14558526
audiosize : 2270465
lasttimestamp : 530
lastkeyframetimestamp: 529
lastkeyframelocation: 16893721
Duration: 00:08:49.73, start: 0.000000, bitrate: 255 kb/s
Stream #0:0: Video: h264 (Main), yuv420p, 448x336 [SAR 1:1 DAR 4:3], 218 kb/s, 15 tbr, 1k tbn, 30 tbc
Stream #0:1: Audio: aac, 44100 Hz, stereo, s16, 32 kb/s
File 'a.mp3' already exists. Overwrite ? [y/N] y
Output #0, mp3, to 'a.mp3':
Metadata: [same as input, plus]
TSSE : Lavf54.29.105
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16
Stream mapping:
Stream #0:1 -> #0:0 (aac -> libmp3lame)
Press [q] to stop, [?] for help
size= 8279kB time=00:08:49.78 bitrate= 128.0kbits/s
video:0kB audio:8278kB subtitle:0 global headers:0kB muxing overhead 0.006842%

But I failed to convert FLV to MP4. Why is the encoder 'mp4' unknown? What's more, how can I find out which codecs are supported by my FFmpeg build?

D:\tmp\ffmpeg-20121005-git-d9dfe9a-win64-static\ffmpeg-20121005-git-d9dfe9a-win64-static\bin>ffmpeg.exe -i a.flv -acodec mp4 aa.mp4
ffmpeg version N-45080-gd9dfe9a Copyright (c) 2000-2012 the FFmpeg developers
[same build banner, configuration and library versions as above]
Input #0, flv, from 'a.flv':
[same metadata, duration and streams as above]
Unknown encoder 'mp4'
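The error is expected: -acodec selects an audio *codec*, and there is no codec named 'mp4' – MP4 is a *container*. Since the FLV already holds H.264 video and AAC audio, both of which MP4 can carry, a plain remux is enough, and the supported codec list is built into the binary. A sketch (option spellings as in FFmpeg builds of that era; the libvo_aacenc encoder is enabled per the configuration line above):

# List every codec this ffmpeg build supports (D=decode, E=encode):
ffmpeg -codecs

# Remux FLV -> MP4 without re-encoding (h264 + aac fit in MP4 as-is):
ffmpeg -i a.flv -c copy a.mp4

# Or, if re-encoding the audio is really wanted, name a real encoder:
ffmpeg -i a.flv -c:v copy -c:a libvo_aacenc a.mp4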

    Read the article

  • Error installing pkgconfig via macports

    - by Greg K
I installed MacPorts 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs, as I want to bind-mount some directories in my OS X file system (like you can with mount --bind on Linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to install just pkgconfig. Here's the debug output from sudo port install pkgconfig:

$ sudo port -d install pkgconfig
DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
DEBUG: OS Platform: darwin
DEBUG: OS Version: 10.3.0
DEBUG: Mac OS X Version: 10.6
DEBUG: System Arch: i386
DEBUG: setting option os.universal_supported to yes
DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
DEBUG: adding the default universal variant
DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
DEBUG: Requested variant darwin is not provided by port pkgconfig.
DEBUG: Requested variant i386 is not provided by port pkgconfig.
DEBUG: Requested variant macosx is not provided by port pkgconfig.
---> Computing dependencies for pkgconfig
DEBUG: Executing org.macports.main (pkgconfig)
DEBUG: Skipping completed org.macports.fetch (pkgconfig)
DEBUG: Skipping completed org.macports.checksum (pkgconfig)
DEBUG: Skipping completed org.macports.extract (pkgconfig)
DEBUG: Skipping completed org.macports.patch (pkgconfig)
---> Configuring pkgconfig
DEBUG: Using compiler 'Mac OS X gcc 4.2'
DEBUG: Executing org.macports.configure (pkgconfig)
DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig'
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking whether make sets $(MAKE)... no
checking whether to enable maintainer-specific portions of Makefiles... no
checking build system type... i386-apple-darwin10.3.0
checking host system type... i386-apple-darwin10.3.0
checking for style of include used by make... none
checking for gcc... /usr/bin/gcc-4.2
checking for C compiler default output file name...
configure: error: C compiler cannot create executables
See `config.log' for more details.
Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
while executing "$procedure $targetname"
Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
Error: Status 1 encountered during processing.

I have only recently installed Xcode 3.2.2 (prior to installing MacPorts). Am I right in thinking this is the issue here:

configure: error: C compiler cannot create executables
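A quick way to confirm whether the toolchain (rather than the port) is broken is to compile a trivial program with the exact compiler configure picked. A minimal sketch; the config.log path follows the build directory shown in the debug output above:

# Does gcc-4.2 work at all outside MacPorts?
cat > /tmp/t.c <<'EOF'
int main(void) { return 0; }
EOF
/usr/bin/gcc-4.2 -o /tmp/t /tmp/t.c && echo "compiler OK"

# If that fails, the Xcode install is incomplete (on 10.6, the Xcode
# installer must include the UNIX development / 10.6 SDK components);
# the exact failure reason is recorded in:
less /opt/local/var/macports/build/*pkgconfig/work/pkg-config-0.23/config.log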

    Read the article

  • Linux pptp client stops working after several hours

    - by Aron Rotteveel
Here's the situation. Setup:

1 Windows Server 2008 machine acting as a Domain Controller and RRAS server
1 CentOS machine in a datacentre located elsewhere
PPTP client running on the CentOS machine, connected to the DC via PPTP

When I connect to the DC, everything works fine. I have set up a static IP for the dialup connection in my RRAS server so that the CentOS machine is automatically assigned the IP 192.168.1.240. Inside the VPN, it is now possible to access this machine on the local IP address. Perfect.

However, after several hours it simply seems to stop working (i.e. I cannot ping to or from this machine on the local network). The strange thing is, however:

The DC shows the VPN client as still being connected
The CentOS machine shows the network interface as being up
There are no entries in my /var/log/messages that indicate a problem

Output from ifconfig:

ppp0 Link encap:Point-to-Point Protocol
inet addr:192.168.1.240 P-t-P:192.168.1.160 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1396 Metric:1
RX packets:43 errors:0 dropped:0 overruns:0 frame:0
TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:3
RX bytes:4511 (4.4 KiB) TX bytes:15071 (14.7 KiB)

Output from route -n:

192.168.1.160 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ppp0

I have the following in my ip-up.local:

route add -net 192.168.1.0 netmask 255.255.255.0 dev ppp0

The situation can easily be fixed by issuing killall pppd and re-connecting, but I obviously do not want to do this every X hours or so. I have tried running pppd with both the debug and the kdebug flags but cannot find the cause of this problem. Currently, my ppp0 network interface seems to be running, and the last log lines mentioning it are:

Feb 19 14:10:40 graviton pppd[10934]: local IP address 192.168.1.240
Feb 19 14:10:40 graviton pppd[10934]: remote IP address 192.168.1.160
Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up started (pid 10952)
Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up finished (pid 10952), status = 0x0
Feb 19 14:11:27 graviton pptp[10935]: anon log[decaps_gre:pptp_gre.c:414]: buffering packet 190 (expecting 189, lost or reordered)
Feb 19 14:11:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received.
Feb 19 14:11:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply'
Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received.
Feb 19 14:12:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply'
Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
Feb 19 14:13:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
Feb 19 14:14:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
Feb 19 14:15:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
Feb 19 14:16:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:679]: no more Echo Reply/Request packets will be reported.

I have enabled the persist option. The network interface is still up, but it is still impossible to send data through the VPN. Any help is appreciated.
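The log shows GRE echo traffic going quiet while the PPTP control connection stays up – typical of a NAT/GRE mapping timing out somewhere en route. Until the root cause is found, a watchdog that tests the tunnel and bounces pppd only when it is actually dead is a common workaround. A hedged sketch (the peer IP 192.168.1.160 comes from the route output above; the cron interval and the peers-file name 'home' are placeholders for however the connection is normally started):

#!/bin/sh
# /usr/local/sbin/pptp-watchdog.sh -- run from cron, e.g.:
# */5 * * * * /usr/local/sbin/pptp-watchdog.sh
PEER=192.168.1.160

# Three pings through ppp0; if any reply comes back the tunnel is alive.
if ! ping -c 3 -W 2 -I ppp0 "$PEER" >/dev/null 2>&1; then
    logger -t pptp-watchdog "tunnel dead, restarting pppd"
    killall pppd
    sleep 5
    pppd call home   # assumes /etc/ppp/peers/home exists
fi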

    Read the article

  • Huawei b153 limit of devices

    - by bdecaf
I set up my home network to run entirely through this 3G wifi router. The problem is that it only allows 5 devices to connect. That's not much, especially when a wifi printer and gaming consoles keep hogging those slots. On the other hand, I don't see the point in blocking those devices: they aren't (or shouldn't be) doing anything online, they just talk internally within my network. The documentation I can find is surprisingly unhelpful and focuses on how to plug the device into a power socket. So what are my options?

Notes: I have already been able to get a shell on the device using ssh. It's running some BusyBox build. But I have failed to find how and where this limit is enforced/created.

Notes 2: Specifically, my device is a 3WebCube – unfortunately not explicitly marked with the Huawei model number.

Successes so far

After enabling ssh in the options, I can log in:

ssh -T [email protected]
[email protected]'s password:
-------------------------------
-----Welcome to ATP Cli------
-------------------------------

Unfortunately, because of -T the Tab key does not work for autocompletion, all input is echoed back, and there is no history via the arrow keys.

ATP interface – this custom interface is not very useful:

ATP>help
Welcome to ATP command line tool.
If any question, please input "?" at the end of command.
ATP>?
cls debug help save ? exit
ATP>save?
Command failed.
ATP>save ?
ATP>debug ?
display set trace ?

Shell – BUT undocumented. I found on an auto-translated Chinese website that all you need to do is input sh:

ATP>sh
BusyBox vv1.9.1 (2011-03-27 11:59:11 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

Built-in commands:

# help
. : alias bg break cd chdir command continue eval exec exit export false fg getopts hash help jobs kill let local pwd read readonly return set shift source times trap true type ulimit umask unalias unset wait

It shows the standard Unix structure:

# ls /
var tmp proc linuxrc init etc bin usr sbin mnt lib html dev

In /bin:

# ls /bin
zebra strace ppps ln echo cat wscd startbsp pppc klog ebtables busybox wlancmd sshd ping kill dns brctl web sntp netstat iwpriv dhcps auth usbdiagd sms mount iwcontrol dhcpc atserver upnp sleep mknod iptables date atcmd upg siproxd mkdir ipcheck cp at umount sh mini_upnpd ip console ash test_at rm mic igmpproxy cms telnetd ripd ls ethcmd cmgr swapdev ps log equipcmd cli

In /sbin:

# ls /sbin
vconfig reboot insmod ifconfig arp route poweroff init halt

Using tftp: after installing tftp on my desktop, I was able to send files with

tftp -s -l curcfg.xml 192.168.1.103

and to download onto the Huawei with

tftp -g -r curcfg.xml 192.168.1.103

I think I'll need that, because I don't see any editor installed.

Readout so far (still playing around to see where I'd find interesting info). For confirmation of the hardware:

# cat /var/log/modem_hardware_name
^HWVER:"WL1B153M001"#
# cat /var/log/modem_software_name
1096.11.03.02.107
# cat /var/log/product_name
B153
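On Huawei builds like this, the station limit usually lives in the XML configuration the web UI edits (the curcfg.xml transferred above), somewhere in a WLAN section. A speculative sketch of how one might hunt for it from the BusyBox shell – parameter names like MaxAssoc or MaxStaNum are guesses, not confirmed for the B153:

# Locate the running config and search it for a plausible limit knob:
find / -name 'curcfg.xml' 2>/dev/null
grep -i 'assoc\|maxsta\|station' /var/curcfg.xml 2>/dev/null

# The wireless userland tool is present in /bin; its private ioctls
# sometimes expose an association cap:
iwpriv 2>/dev/null

# If a cap shows up in the XML, edit curcfg.xml off-box (tftp it out,
# change the value, tftp it back) and reboot -- entirely at your own
# risk on vendor firmware.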

    Read the article

  • Error on installing SVN extension with pecl

    - by thedp
Hello, I'm trying to install the following PHP extension: http://php.net/manual/en/book.svn.php

But when I do pecl install svn-beta I receive an error message that it can't locate the svn_client.h file. I searched the net but couldn't find any useful reference to this error. Thank you for your help.

Installation result:

root@myUbuntu:/home/thedp# pecl install svn-beta
downloading svn-0.5.1.tgz ...
Starting to download svn-0.5.1.tgz (23,563 bytes)
.....done: 23,563 bytes
4 source files, building
running: phpize
Configuring for:
PHP Api Version: 20041225
Zend Module Api No: 20060613
Zend Extension Api No: 220060519
1. Please provide the prefix of Subversion installation : autodetect
1-1, 'all', 'abort', or Enter to continue:
1. Please provide the prefix of the APR installation used with Subversion : autodetect
1-1, 'all', 'abort', or Enter to continue:
building in /var/tmp/pear-build-root/svn-0.5.1
running: /tmp/pear/temp/svn/configure --with-svn --with-svn-apr
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for a sed that does not truncate output... /bin/sed
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc and cc understand -c and -o together... yes
checking for system library directory... lib
checking if compiler supports -R... no
checking if compiler supports -Wl,-rpath,... yes
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking target system type... i686-pc-linux-gnu
checking for PHP prefix... /usr
checking for PHP includes... -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
checking for PHP extension directory... /usr/lib/php5/20060613+lfs
checking for PHP installed headers prefix... /usr/include/php5
checking for re2c... no
configure: WARNING: You will need re2c 0.12.0 or later if you want to regenerate PHP parsers.
checking for gawk... no
checking for nawk... nawk
checking if nawk is broken... no
checking for svn support... yes, shared
checking for specifying the location of apr for svn... yes, shared
checking for svn includes... configure: error: failed to find svn_client.h
ERROR: `/tmp/pear/temp/svn/configure --with-svn --with-svn-apr' failed
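configure is looking for the Subversion C headers, which ship in the development packages, not with the svn binary itself. A sketch for a Debian/Ubuntu-style system (package names as in those distros; the prefix prompts can then be answered with /usr):

# Install the Subversion and APR development headers:
apt-get install libsvn-dev libapr1-dev

# Verify the header configure wants is now present:
ls /usr/include/subversion-1/svn_client.h

# Re-run the build; answer the prefix prompts with /usr:
pecl install svn-beta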

    Read the article

  • Useful Command-line Commands on Windows

    - by Sung Meister
The aim of this Wiki is to promote using a command to open commonly used applications without having to go through many mouse clicks – thus saving time when monitoring and troubleshooting Windows machines. Answer entries need to specify: application name, commands, screenshot (optional), shortcut to commands.

&& - Command chaining
%SYSTEMROOT%\System32\rcimlby.exe -LaunchRA - Remote Assistance (Windows XP)
appwiz.cpl - Programs and Features (formerly known as "Add or Remove Programs")
appwiz.cpl @,2 - Turn Windows Features On and Off (Add/Remove Windows Components pane)
arp - Displays and modifies the IP-to-physical address translation tables used by the Address Resolution Protocol (ARP)
at - Schedule tasks either locally or remotely without using Scheduled Tasks
bootsect.exe - Updates the master boot code for hard disk partitions to switch between BOOTMGR and NTLDR
cacls - Change Access Control List (ACL) permissions on a directory, its subcontents, or files
calc - Calculator
chkdsk - Check/fix the disk surface for physical errors or bad sectors
cipher - Displays or alters the encryption of directories [files] on NTFS partitions
cleanmgr.exe - Disk Cleanup
clip - Redirects output of command line tools to the Windows clipboard
cls - Clear the command line screen
cmd /k - Run command with command extensions enabled
color - Sets the default console foreground and background colors
command.com - Default operating system shell
compmgmt.msc - Computer Management
control.exe /name Microsoft.NetworkAndSharingCenter - Network and Sharing Center
control keyboard - Keyboard Properties
control mouse (or main.cpl) - Mouse Properties
control sysdm.cpl,@0,3 - Advanced tab of the System Properties dialog
control userpasswords2 - Opens the classic User Accounts dialog
desk.cpl - Opens the display properties
devmgmt.msc - Device Manager
diskmgmt.msc - Disk Management
diskpart - Disk management from the command line
dsa.msc - Opens Active Directory Users and Computers
dsquery - Finds any objects in the directory according to criteria
dxdiag - DirectX Diagnostic Tool
eventvwr - Windows Event Log (Event Viewer)
explorer . - Open Explorer with the current folder selected
explorer /e, . - Open Explorer, with folder tree, with current folder selected
F7 - View command history
find - Searches for a text string in a file or files
findstr - Find a string in a file
firewall.cpl - Opens the Windows Firewall settings
fsmgmt.msc - Shared Folders
fsutil - Perform tasks related to FAT and NTFS file systems
ftp - Transfers files to and from a computer running an FTP server service
getmac - Shows the MAC address(es) of your network adapter(s)
gpedit.msc - Group Policy Editor
gpresult - Displays the Resultant Set of Policy (RSoP) information for a target user and computer
httpcfg.exe - HTTP Configuration Utility
iisreset - Restart IIS
InetMgr.exe - Internet Information Services (IIS) Manager 7
InetMgr6.exe - Internet Information Services (IIS) Manager 6
intl.cpl - Regional and Language Options
ipconfig - Internet protocol configuration
lusrmgr.msc - Local Users and Groups administrator
msconfig - System Configuration
notepad - Notepad? ;)
mmsys.cpl - Sound/recording/playback properties
mode - Configure system devices
more - Displays one screen of output at a time
mrt - Microsoft Windows Malicious Software Removal Tool
mstsc.exe - Remote Desktop Connection
nbtstat - Displays protocol statistics and current TCP/IP connections using NBT
ncpa.cpl - Network Connections
netsh - Display or modify the network configuration of a computer that is currently running
netstat - Network statistics
net statistics - Check computer uptime
net stop - Stops a running service
net use - Connects a computer to or disconnects a computer from a shared resource, or displays information about computer connections
odbcad32.exe - ODBC Data Source Administrator
pathping - A traceroute that collects detailed packet loss stats
perfmon - Opens Reliability and Performance Monitor
ping - Determine whether a remote computer is accessible over the network
powercfg.cpl - Power management control panel applet
quser - Display information about user sessions on a terminal server
qwinsta - See disconnected remote desktop sessions
reg.exe - Console registry tool for Windows
regedit - Registry Editor
rasdial - Connects to a VPN or a dialup network
robocopy - Backup/restore/copy large amounts of files reliably
rsop.msc - Resultant Set of Policy (shows the combined effect of all group policies active on the current system/login)
runas - Run specific tools and programs with different permissions than the user's current logon provides
sc - Manage anything you want to do with services
schtasks - Enables an administrator to create, delete, query, change, run and end scheduled tasks on a local or remote system
secpol.msc - Local Security Settings
services.msc - Services control panel
set - Displays, sets, or removes cmd.exe environment variables
set DIRCMD - Preset dir parameters in cmd.exe
start - Starts a separate window to run a specified program or command
start . - Opens the current directory in Windows Explorer
shutdown.exe - Shutdown or reboot a local/remote machine
subst.exe - Associates a path with a drive letter, including local drives
systeminfo - Displays comprehensive information about the system
taskkill - Terminate tasks by process ID (PID) or image name
tasklist.exe - List processes on a local or remote machine
taskmgr.exe - Task Manager
telephon.cpl - Telephone and Modem properties
timedate.cpl - Date and Time
title - Change the title of the CMD window you have open
tracert - Trace route
wmic - Windows Management Instrumentation command line
winver.exe - Find Windows version
wscui.cpl - Windows Security Center
wuauclt.exe - Windows Update AutoUpdate client
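As a small illustration of how a few of these compose (cmd.exe syntax; the host name SERVER01 and the Spooler service are placeholders):

:: Chain commands with && (the second runs only if the first succeeds),
:: and pipe tool output straight to the clipboard with clip:
ipconfig /all | clip

:: Quick health check of a remote box: reachability, then local uptime:
ping -n 2 SERVER01 && net statistics workstation

:: Restart a service and open the services console to verify:
net stop Spooler && net start Spooler && services.msc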

    Read the article

  • Increase samba space on open suse 12.1

    - by Kapil Sharma
I know Linux basics but I'm not an expert. The IT guy left the job here and there is some time before the new hire, so sorry if the question is very basic. We have a local testing server based on openSUSE 12.1, which also acts as a shared drive between the dev/mgmt teams here, using Samba for that. Now we are running out of space on Samba, even though the server's 2*1TB hard disks are supposedly nearly 90% free. My question is: what is limiting Samba, and how can I increase its limit? We need at least around 500 GB as a shared drive, but currently it's just 25 GB. I don't need a step-by-step answer; a link to any helpful article would be sufficient. Probably I'm putting the wrong keywords into Google, so I'm not getting any helpful link.

EDIT: Output of the commands requested in the first comment. All commands were run as root.

df -h (df -ht gives an error):

Filesystem Size Used Avail Use% Mounted on
rootfs 30G 5.1G 23G 19% /
devtmpfs 2.0G 36K 2.0G 1% /dev
tmpfs 2.0G 1.1M 2.0G 1% /dev/shm
tmpfs 2.0G 676K 2.0G 1% /run
/dev/sda2 30G 5.1G 23G 19% /
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
tmpfs 2.0G 676K 2.0G 1% /var/run
tmpfs 2.0G 0 2.0G 0% /media
tmpfs 2.0G 676K 2.0G 1% /var/lock
/dev/sda3 36G 31G 3.3G 91% /home

fdisk -l /dev/[hmsv]d*:

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2d4a2d49

Device Boot Start End Blocks Id System
/dev/sda1 2048 16771071 8384512 82 Linux swap / Solaris
/dev/sda2 * 16771072 79681535 31455232 83 Linux
/dev/sda3 79681536 156301311 38309888 83 Linux

Disk /dev/sda1: 8585 MB, 8585740288 bytes
255 heads, 63 sectors/track, 1043 cylinders, total 16769024 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sda1 doesn't contain a valid partition table

Disk /dev/sda2: 32.2 GB, 32210157568 bytes
255 heads, 63 sectors/track, 3915 cylinders, total 62910464 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System

Disk /dev/sda3: 39.2 GB, 39229325312 bytes
255 heads, 63 sectors/track, 4769 cylinders, total 76619776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sda3 doesn't contain a valid partition table

vgs:
No volume groups found

lvs:
No volume groups found

Output of vi /etc/samba/smb.conf:

# smb.conf is the main Samba configuration file. You find a full commented
# version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the
# samba-doc package is installed.
# Date: 2011-11-02
[global]
workgroup = WORKGROUP
passdb backend = tdbsam
printing = cups
printcap name = cups
printcap cache time = 750
cups options = raw
map to guest = Bad User
include = /etc/samba/dhcp.conf
logon path = \\%L\profiles\.msprofile
logon home = \\%L\%U\.9xprofile
logon drive = P:
usershare allow guests = Yes
[homes]
comment = Home Directories
valid users = %S, %D%w%S
browseable = No
read only = No
inherit acls = Yes
[profiles]
comment = Network Profiles Service
path = %H
read only = No
store dos attributes = Yes
create mask = 0600
directory mask = 0700
[users]
comment = All users
path = /home
read only = No
inherit acls = Yes
veto files = /aquota.user/groups/shares/
[groups]
comment = All groups
path = /home/groups
read only = No
inherit acls = Yes
[printers]
comment = All Printers
path = /var/tmp
printable = Yes
create mask = 0600
browseable = No
[print$]
comment = Printer Drivers
path = /var/lib/samba/drivers
write list = @ntadmin root
force group = ntadmin
create mask = 0664
directory mask = 0775
[allusers]
comment = All Users
path = /home/shares/allusers
valid users = @users
force group = users
create mask = 0660
directory mask = 0771
writable = yes
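The df output above answers the question: nothing in Samba is imposing a quota – the shares all live under /home, which is /dev/sda3 (36G, 91% used), and fdisk sees a single 80 GB disk, so if the 2*1TB drives really exist they are not visible to the kernel (check cabling/BIOS and fdisk -l again). A sketch of moving the big share onto the roomier root filesystem (paths are examples):

# Create a new home for the share on the root fs (23G free):
mkdir -p /srv/samba/allusers

# Preserve data, ownership and permissions while copying:
cp -a /home/shares/allusers/. /srv/samba/allusers/

# Point the [allusers] share at the new location in smb.conf:
#   path = /srv/samba/allusers
# then reload Samba (openSUSE init script; newer releases use
# 'systemctl restart smb'):
rcsmb restart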

    Read the article

  • Cannot connect to MySQL over TCP locally - Connection Timeout - Ubuntu 9.04

    - by gav
I am running Ubuntu and am ultimately trying to connect Tomcat to my MySQL database using JDBC. It has worked previously, but after a reboot the instance now fails to connect.

Both Tomcat 6 and MySQL 5.0.75 are on the same machine
Connection string: jdbc:mysql:///localhost:3306
I can connect to MySQL on the command line using the mysql command
The my.cnf file is pretty standard (available on request) and has bind address: 127.0.0.1
I cannot telnet to the MySQL port despite netstat saying MySQL is listening
I have one iptables rule to forward 80 -> 8080 and no firewall I'm aware of

I'm pretty new to this and I'm not sure what else to test. I don't know whether I should be looking in /etc/interfaces, and if I did, what to look for. It's weird because it used to work, but after a reboot it's down, so I must have changed something... :). I realise a timeout indicates the server is not responding, and I assume it's because the request isn't actually getting through. I installed MySQL via apt-get and Tomcat manually.

mysqld processes:

root@88:/var/log/mysql# ps -ef | grep mysqld
root 21753 1 0 May27 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe
mysql 21792 21753 0 May27 ? 00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
root 21793 21753 0 May27 ? 00:00:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
root 21888 13676 0 11:23 pts/1 00:00:00 grep mysqld

Netstat:

root@88:/var/log/mysql# netstat -lnp | grep mysql
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 21792/mysqld
unix 2 [ ACC ] STREAM LISTENING 1926205077 21792/mysqld /var/run/mysqld/mysqld.sock

Toy connection class:

root@88:~# cat TestConnect/TestConnection.java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class TestConnection {
    public static void main(String args[]) throws Exception {
        Connection con = null;
        try {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            System.out.println("Got driver");
            con = DriverManager.getConnection(
                "jdbc:mysql:///localhost:3306", "uname", "pass");
            System.out.println("Got connection");
            if (!con.isClosed())
                System.out.println("Successfully connected to " +
                    "MySQL server using TCP/IP...");
        } finally {
            if (con != null)
                con.close();
        }
    }
}

Toy connection class output (note: this is the same error I get from Tomcat):

root@88:~/TestConnect# java -cp mysql-connector-java-5.1.12-bin.jar:. TestConnection
Got driver
Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 1 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at TestConnection.main(TestConnection.java:14)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181)
... 12 more
Caused by: java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
... 13 more

Telnet output:

root@88:~/TestConnect# telnet localhost 3306
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
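Two things stand out here. First, the JDBC URL is malformed: jdbc:mysql:///localhost:3306 puts the host in the database position; the form is jdbc:mysql://host:port/database. A corrected sketch (the schema name "mydb" is a placeholder for an actual database on the server):

// Corrected URL: host and port before the slash, database after it.
con = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/mydb", "uname", "pass");

Second, telnet to 127.0.0.1:3306 timing out while mysqld listens on 0.0.0.0:3306 points at packet filtering on the loopback path – a missing loopback ACCEPT is a classic cause of exactly this symptom after a firewall change survives a reboot:

# Accept everything on the loopback interface:
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -L -n -v   # verify the counters increase while retesting telnet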

    Read the article

  • How Do I Enable My Ubuntu Server To Host Various SSL-Enabled Websites?

    - by Andy Ibanez
I have looked around for a few hours now, but I can't get this to work. The main problem I'm having is that only one of the two sites works. I have my website, which will mostly be used for an app. It's called atajosapp.com, and it will have three main sites:

www.atajosapp.com <- homepage for the app
auth.atajosapp.com <- login endpoint for my API (needs SSL)
api.atajosapp.com <- main endpoint for my API (needs SSL)

If you attempt to access api.atajosapp.com, it works. It will throw a 403 error and JSON output, but that's fully intentional. If you try to access auth.atajosapp.com, however, the site simply doesn't load. Chrome complains with:

The webpage at https://auth.atajosapp.com/ might be temporarily down or it may have moved permanently to a new web address.
Error code: ERR_TUNNEL_CONNECTION_FAILED

But the website IS there. If you try to access www.atajosapp.com or any other HTTP site, it connects fine. It just doesn't seem to like dealing with more than one HTTPS website.

The VirtualHost for api.atajosapp.com looks like this:

<VirtualHost *:443>
DocumentRoot /var/www/api.atajosapp.com
ServerName api.atajosapp.com
SSLEngine on
SSLCertificateFile /certificates/STAR_atajosapp_com.crt
SSLCertificateKeyFile /certificates/star_atajosapp_com.key
SSLCertificateChainFile /certificates/PositiveSSLCA2.crt
</VirtualHost>

auth.atajosapp.com looks very similar:

<VirtualHost *:443>
DocumentRoot /var/www/auth.atajosapp.com
ServerName auth.atajosapp.com
SSLEngine on
SSLCertificateFile /certificates/STAR_atajosapp_com.crt
SSLCertificateKeyFile /certificates/star_atajosapp_com.key
SSLCertificateChainFile /certificates/PositiveSSLCA2.crt
</VirtualHost>

Now I have found many websites that talk about possible solutions. At first, I was getting a message like this:

_default_ VirtualHost overlap on port 443, the first has precedence

But after googling for hours, I managed to solve it by editing both apache2.conf and ports.conf. This is the last thing I added to ports.conf:

<IfModule mod_ssl.c>
NameVirtualHost *:443
# SSL name based virtual hosts are not yet supported, therefore no
# NameVirtualHost statement here
NameVirtualHost *:443
Listen 443
</IfModule>

Still, right now only api.atajosapp.com and www.atajosapp.com are working; I still can't access auth.atajosapp.com. When I check the error log, I see this:

Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)

I don't know what else to do to make both sites work fine with this. I purchased a wildcard SSL certificate from Comodo that supposedly secures *.atajosapp.com, so after hours of trying and googling, I don't know what's wrong anymore. Any help will be really appreciated.

EDIT: I just ran the apachectl -t -D DUMP_VHOSTS command and this is the output.
Can't make much sense of it...:

root@atajosapp:/# apachectl -t -D DUMP_VHOSTS
apache2: Could not reliably determine the server's fully qualified domain name, using atajosapp.com for ServerName
[Thu Nov 07 02:01:24 2013] [warn] NameVirtualHost *:443 has no VirtualHosts
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:443 is a NameVirtualHost
default server api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1)
port 443 namevhost api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1)
port 443 namevhost auth.atajosapp.com (/etc/apache2/sites-enabled/auth.atajosapp.com:1)
*:80 is a NameVirtualHost
default server atajosapp.com (/etc/apache2/sites-enabled/000-default:1)
port 80 namevhost atajosapp.com (/etc/apache2/sites-enabled/000-default:1)
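The DUMP_VHOSTS output shows Apache already knows both :443 vhosts, so the remaining suspects are the duplicated NameVirtualHost *:443 in ports.conf (declared twice, which is what triggers the 'has no VirtualHosts' warning) and whatever sits between the client and the server: ERR_TUNNEL_CONNECTION_FAILED in Chrome usually means an HTTP proxy's CONNECT failed, not that Apache refused. A sketch of how one might verify what the server itself serves for each name:

# Ask for each vhost explicitly via SNI and compare certificates/headers:
openssl s_client -connect MY.WAN.IP.ADDR:443 -servername auth.atajosapp.com </dev/null | head -n 20
openssl s_client -connect MY.WAN.IP.ADDR:443 -servername api.atajosapp.com  </dev/null | head -n 20

# Remove the duplicate so NameVirtualHost *:443 is declared exactly
# once, then validate and reload:
apachectl configtest && apachectl graceful

If both s_client calls answer correctly, the fault is client-side (a proxy or filtering device), so also test from a proxyless network.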

    Read the article

  • Port forwarding DD-WRT

    - by Pawel
Hi, I'm running a service locally on port 81 (192.168.1.101). I would like to access the server from outside at MY.WAN.IP.ADDR:81. Everything works fine on my local network; however, I can't access it from outside. Below are the iptables rules on the router. I am using DD-WRT on an Asus RT-N16 (everything is set up through standard port range forwarding in DD-WRT). It might be something obvious, but I don't have any experience with routing. Any help will be really appreciated. Thanks.

# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 1285 packets, 148K bytes)
pkts bytes target prot opt in out source destination
3 252 DNAT icmp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR to:192.168.1.1
5 300 DNAT tcp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR tcp dpt:81 to:192.168.1.101
0 0 DNAT udp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR udp dpt:81 to:192.168.1.101
298 39375 TRIGGER 0 -- * * 0.0.0.0/0 MY.WAN.IP.ADDR TRIGGER type:dnat match:0 relate:0

Chain POSTROUTING (policy ACCEPT 7 packets, 433 bytes)
pkts bytes target prot opt in out source destination
747 91318 SNAT 0 -- * vlan2 0.0.0.0/0 0.0.0.0/0 to:MY.WAN.IP.ADDR
0 0 RETURN 0 -- * br0 0.0.0.0/0 0.0.0.0/0 PKTTYPE = broadcast

Chain OUTPUT (policy ACCEPT 86 packets, 5673 bytes)
pkts bytes target prot opt in out source destination

# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP tcp -- anywhere anywhere tcp dpt:webcache
DROP tcp -- anywhere anywhere tcp dpt:www
DROP tcp -- anywhere anywhere tcp dpt:https
DROP tcp -- anywhere anywhere tcp dpt:69
DROP tcp -- anywhere anywhere tcp dpt:ssh
DROP tcp -- anywhere anywhere tcp dpt:ssh
DROP tcp -- anywhere anywhere tcp dpt:telnet
DROP tcp -- anywhere anywhere tcp dpt:telnet

Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT 0 -- anywhere anywhere
TCPMSS tcp -- anywhere anywhere tcp flags:SYN,RST/SYN TCPMSS clamp to PMTU
lan2wan 0 -- anywhere anywhere
ACCEPT 0 -- anywhere anywhere state RELATED,ESTABLISHED
logaccept tcp -- anywhere pawel-ubuntu tcp dpt:81
logaccept udp -- anywhere pawel-ubuntu udp dpt:81
TRIGGER 0 -- anywhere anywhere TRIGGER type:in match:0 relate:0
trigger_out 0 -- anywhere anywhere
logaccept 0 -- anywhere anywhere state NEW

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain advgrp_1 (0 references)
target prot opt source destination
Chain advgrp_10 (0 references)
target prot opt source destination
Chain advgrp_2 (0 references)
target prot opt source destination
Chain advgrp_3 (0 references)
target prot opt source destination
Chain advgrp_4 (0 references)
target prot opt source destination
Chain advgrp_5 (0 references)
target prot opt source destination
Chain advgrp_6 (0 references)
target prot opt source destination
Chain advgrp_7 (0 references)
target prot opt source destination
Chain advgrp_8 (0 references)
target prot opt source destination
Chain advgrp_9 (0 references)
target prot opt source destination
Chain grp_1 (0 references)
target prot opt source destination
Chain grp_10 (0 references)
target prot opt source destination
Chain grp_2 (0 references)
target prot opt source destination
Chain grp_3 (0 references)
target prot opt source destination
Chain grp_4 (0 references)
target prot opt source destination
Chain grp_5 (0 references)
target prot opt source destination
Chain grp_6 (0 references)
target prot opt source destination
Chain grp_7 (0 references)
target prot opt source destination
Chain grp_8 (0 references)
target prot opt source destination
Chain grp_9 (0 references)
target prot opt source destination
Chain lan2wan (1 references)
target prot opt source destination
Chain logaccept (3 references)
target prot opt source destination
ACCEPT 0 -- anywhere anywhere
Chain logdrop (0 references)
target prot opt source destination
DROP 0 -- anywhere anywhere
Chain logreject (0 references)
target prot opt source destination
REJECT tcp -- anywhere anywhere tcp reject-with tcp-reset
Chain trigger_out (1 references)
target prot opt source destination

# iptables -vnL FORWARD
Chain FORWARD (policy ACCEPT 130 packets, 5327 bytes)
pkts bytes target prot opt in out source destination
15 900 ACCEPT 0 -- br0 br0 0.0.0.0/0 0.0.0.0/0
390 20708 TCPMSS tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU
182K 130M lan2wan 0 -- * * 0.0.0.0/0 0.0.0.0/0
179K 129M ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
0 0 logaccept tcp -- * * 0.0.0.0/0 192.168.1.101 tcp dpt:81
0 0 logaccept udp -- * * 0.0.0.0/0 192.168.1.101 udp dpt:81
0 0 TRIGGER 0 -- vlan2 br0 0.0.0.0/0 0.0.0.0/0 TRIGGER type:in match:0 relate:0
2612 768K trigger_out 0 -- br0 * 0.0.0.0/0 0.0.0.0/0
2482 762K logaccept 0 -- br0 * 0.0.0.0/0 0.0.0.0/0 state NEW
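The DNAT rule has seen a handful of hits while the FORWARD port-81 counters are still at 0, so forwarded packets are either matching an earlier FORWARD rule or never arriving at all. Two common culprits: testing from inside the LAN (which needs NAT loopback/hairpin NAT on DD-WRT) or the ISP filtering the port. A sketch of how one might narrow it down from the router shell (the interface name vlan2 comes from the POSTROUTING rule above; tcpdump is not present on every DD-WRT build and may need to be added via ipkg/opkg):

# Watch the WAN interface for incoming attempts on port 81 while a
# genuinely external client (e.g. a phone on 3G) browses to
# MY.WAN.IP.ADDR:81:
tcpdump -i vlan2 -n tcp port 81

# If packets appear here but the counters below stay at 0, inspect the
# rule order; if nothing appears at all, the traffic dies upstream:
iptables -t nat -vnL PREROUTING
iptables -vnL FORWARD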

    Read the article

< Previous Page | 680 681 682 683 684 685 686 687 688 689 690 691  | Next Page >