Search Results

Search found 4167 results on 167 pages for 'rman 11gr2 duplicate'.


  • Master Page: Dynamically Adding Rows in ASP Table on Button Click event

    - by Vincent Maverick Durano
    In my previous post here, I wrote an example that demonstrates how to generate table rows dynamically in an ASP Table on click of a Button control. Based on some comments on my previous example and in the forums, readers wanted to implement it within a master page. Unfortunately the code in my previous example doesn't work in a master page, for the following main reasons: (1) The Table is dynamically added within the Form tag, so the TextBox controls will not be generated correctly in the page. (2) The data will not be retained across postbacks, because the SetPreviousData() method looks for the Table element within the Page and not within the MasterPage. (3) The Request.Form key must be set correctly, since all controls within the master page are prefixed with the naming container ID to prevent duplicate IDs in the final rendered HTML. For example, the TextBox control with the ID TextBoxRow will be rendered with the ID ctl00$MainBody$TextBoxRow. In order for the previous example to work within a master page, we will have to correct those three issues, and this post will guide you through them. Suppose we have this content page declaration below:

        <asp:Content ID="Content1" ContentPlaceHolderID="MainHead" Runat="Server">
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainBody" Runat="Server">
            <asp:PlaceHolder ID="PlaceHolder1" runat="server">
                <asp:Button ID="BTNAdd" runat="server" Text="Add New Row" OnClick="BTNAdd_Click" />
            </asp:PlaceHolder>
        </asp:Content>

    As you may notice, I've added a PlaceHolder control within the MainBody ContentPlaceHolder. This is because we are going to generate the Table in the PlaceHolder instead of generating it within the Form element. Now that issue #1 is corrected, let's proceed to the code-behind part.
    Here are the full code blocks below:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class DynamicControlDemo : System.Web.UI.Page
        {
            private int numOfRows = 1;

            protected void Page_Load(object sender, EventArgs e)
            {
                // Generate the rows on initial load
                if (!Page.IsPostBack)
                {
                    GenerateTable(numOfRows);
                }
            }

            protected void BTNAdd_Click(object sender, EventArgs e)
            {
                if (ViewState["RowsCount"] != null)
                {
                    numOfRows = Convert.ToInt32(ViewState["RowsCount"].ToString());
                    GenerateTable(numOfRows);
                }
            }

            private void SetPreviousData(int rowsCount, int colsCount)
            {
                Table table = (Table)this.Page.Master.FindControl("MainBody").FindControl("Table1"); // ****
                if (table != null)
                {
                    for (int i = 0; i < rowsCount; i++)
                    {
                        for (int j = 0; j < colsCount; j++)
                        {
                            // Extract the dynamic controls from the Table
                            TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j);
                            // Use the Request object to get the previous data of the dynamic TextBox
                            tb.Text = Request.Form["ctl00$MainBody$TextBoxRow_" + i + "Col_" + j]; // *****
                        }
                    }
                }
            }

            private void GenerateTable(int rowsCount)
            {
                // Create the Table and add it to the PlaceHolder
                Table table = new Table();
                table.ID = "Table1";
                PlaceHolder1.Controls.Add(table); // ******

                // The number of columns to be generated
                const int colsCount = 3; // You can change the value of 3 based on your requirements

                // Now iterate through the table and add your controls
                for (int i = 0; i < rowsCount; i++)
                {
                    TableRow row = new TableRow();
                    for (int j = 0; j < colsCount; j++)
                    {
                        TableCell cell = new TableCell();
                        TextBox tb = new TextBox();
                        // Set a unique ID for each TextBox added
                        tb.ID = "TextBoxRow_" + i + "Col_" + j;
                        // Add the control to the TableCell
                        cell.Controls.Add(tb);
                        // Add the TableCell to the TableRow
                        row.Cells.Add(cell);
                    }
                    // And finally, add the TableRow to the Table
                    table.Rows.Add(row);
                }

                // Set previous data on postbacks
                SetPreviousData(rowsCount, colsCount);

                // Store the current rows count in ViewState
                rowsCount++;
                ViewState["RowsCount"] = rowsCount;
            }
        }

    As you may have observed, the code is pretty much similar to the previous example except for the lines marked with asterisks above. That's it! I hope someone finds this post useful! Technorati Tags: Dynamic Controls,ASP.NET,C#,Master Page
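
    A side note on the SetPreviousData() method above: the hard-coded ctl00$MainBody$ prefix will silently break if the ContentPlaceHolder or naming-container layout ever changes. As a hedged variation (an editorial sketch, not part of the original post), you can let ASP.NET supply the prefix instead: once a control has been added to the control tree, its UniqueID property is exactly the key under which its value appears in Request.Form.

        private void SetPreviousData(int rowsCount, int colsCount)
        {
            Table table = (Table)this.Page.Master.FindControl("MainBody").FindControl("Table1");
            if (table == null) return;

            for (int i = 0; i < rowsCount; i++)
            {
                for (int j = 0; j < colsCount; j++)
                {
                    TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j);
                    if (tb != null)
                    {
                        // UniqueID already carries every naming-container prefix
                        // (e.g. "ctl00$MainBody$TextBoxRow_0Col_0"), so nothing
                        // needs to be hard-coded here.
                        tb.Text = Request.Form[tb.UniqueID];
                    }
                }
            }
        }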

    Read the article

  • For Programmers familiar with ACM API? Drawing Initials [closed]

    - by user71992
    Possible Duplicate: For Programmers familiar with ACM API? Drawing Initials
    I came across an exercise (in the book "The Art and Science of Java" by Eric Roberts) that requires using only the GArc and GLine classes to create a lettering library which draws your initials on the canvas. This should be made independent of the GLabel class. I'd like to know the correct approach to use in solving this problem. I'm not sure what I have so far is good enough (I'm thinking it's too long). The question requires that I use a good top-down approach. Here's my code so far:

        // Passes letters to GLetter objects and draws them on the canvas
        package artScienceJavaExercises.chapter8;

        import acm.program.*;
        //import acm.graphics.*;

        public class DrawInitials extends GraphicsProgram {
            public void init() {
                resize(400, 400);
            }

            public void run() {
                //String let = readLine("Letter?: ");
                letter = new GLetter("l");
                add(letter, (getWidth() - letter.getWidth() * 2) / 2, (getHeight() - letter.getHeight()) / 2);
                add(new GLetter("o"), (letter.getX() + letter.getWidth()), letter.getY());
            }

            private GLetter letter;
        }

        // GLetter class
        package artScienceJavaExercises.chapter8;

        import acm.graphics.*;
        import java.awt.*;

        public class GLetter extends GCompound {
            private static final int ONE_THIRD = 30;
            private static final int ROW_2_HEIGHT = 40;
            private GArc[] arc = new GArc[4];
            private GLine[] line = new GLine[24];

            public GLetter(String s) {
                line[0] = new GLine(0, 0, ONE_THIRD, 0);
                line[1] = new GLine(ONE_THIRD, 0, ONE_THIRD*2, 0);
                line[2] = new GLine(ONE_THIRD*2, 0, ONE_THIRD*3, 0);
                line[3] = new GLine(0, 0, 0, ONE_THIRD);
                line[4] = new GLine(ONE_THIRD, 0, ONE_THIRD, ONE_THIRD);
                line[5] = new GLine(ONE_THIRD*2, 0, ONE_THIRD*2, ONE_THIRD);
                line[6] = new GLine(ONE_THIRD*3, 0, ONE_THIRD*3, ONE_THIRD);
                line[7] = new GLine(0, ONE_THIRD, ONE_THIRD*2, ONE_THIRD);
                line[8] = new GLine(ONE_THIRD, ONE_THIRD, ONE_THIRD*2, ONE_THIRD);
                line[9] = new GLine(ONE_THIRD*2, ONE_THIRD, ONE_THIRD*3, ONE_THIRD);
                line[10] = new GLine(0, ONE_THIRD, 0, ONE_THIRD+ROW_2_HEIGHT);
                line[11] = new GLine(ONE_THIRD, ONE_THIRD, ONE_THIRD, ONE_THIRD+ROW_2_HEIGHT);
                line[12] = new GLine(ONE_THIRD*2, ONE_THIRD, ONE_THIRD*2, ONE_THIRD+ROW_2_HEIGHT);
                line[13] = new GLine(ONE_THIRD*3, ONE_THIRD, ONE_THIRD*3, ONE_THIRD+ROW_2_HEIGHT);
                line[14] = new GLine(0, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD, ONE_THIRD+ROW_2_HEIGHT);
                line[15] = new GLine(ONE_THIRD, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD*2, ONE_THIRD+ROW_2_HEIGHT);
                line[16] = new GLine(ONE_THIRD*2, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD*3, ONE_THIRD+ROW_2_HEIGHT);
                line[17] = new GLine(0, ONE_THIRD+ROW_2_HEIGHT, 0, ONE_THIRD*2+ROW_2_HEIGHT);
                line[18] = new GLine(ONE_THIRD, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD, ONE_THIRD*2+ROW_2_HEIGHT);
                line[19] = new GLine(ONE_THIRD*2, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD*2, ONE_THIRD*2+ROW_2_HEIGHT);
                line[20] = new GLine(ONE_THIRD*3, ONE_THIRD+ROW_2_HEIGHT, ONE_THIRD*3, ONE_THIRD*2+ROW_2_HEIGHT);
                line[21] = new GLine(0, ONE_THIRD*2+ROW_2_HEIGHT, ONE_THIRD, ONE_THIRD*2+ROW_2_HEIGHT);
                line[22] = new GLine(ONE_THIRD, ONE_THIRD*2+ROW_2_HEIGHT, ONE_THIRD*2, ONE_THIRD*2+ROW_2_HEIGHT);
                line[23] = new GLine(ONE_THIRD*2, ONE_THIRD*2+ROW_2_HEIGHT, ONE_THIRD*3, ONE_THIRD*2+ROW_2_HEIGHT);
                for (int i = 0; i < line.length; i++) {
                    add(line[i]);
                    line[i].setColor(Color.BLACK);
                    line[i].setVisible(false);
                }
                arc[0] = new GArc(getWidth(), getHeight(), 106.699, 49.341);
                arc[1] = new GArc(getWidth(), getHeight(), 23.96, 49.341);
                arc[2] = new GArc(getWidth(), getHeight(), -23.96, -49.341);
                arc[3] = new GArc(0, 0, getWidth(), getHeight(), -106.699, -49.341);
                for (int i = 0; i < arc.length; i++) {
                    add(arc[i], 0, 0);
                    arc[i].setColor(Color.BLACK);
                    arc[i].setVisible(false);
                }
                paintLetter(s);
            }

            private void paintLetter(String s) {
                if (s.equalsIgnoreCase("l")) {
                    turnOn(line[3]);
                    turnOn(line[10]);
                    turnOn(line[17]);
                    turnOn(line[21]);
                    turnOn(line[22]);
                    turnOn(line[23]);
                } else if (s.equalsIgnoreCase("o")) {
                    for (int i = 0; i < 4; ++i) {
                        turnOn(arc[i]);
                    }
                    turnOn(line[1]);
                    turnOn(line[10]);
                    turnOn(line[13]);
                    turnOn(line[22]);
                }
            }

            private void turnOn(GObject g) {
                g.setVisible(true);
            }
        }

    I created a class (GLetter.java) with arrays of GArc and GLine objects. They are positioned so that turning certain GLines and/or GArcs on or off (changing visibility) creates a pattern for a letter. The GLetter class uses if/else statements to determine which pattern to create - this makes me feel my code is too long. There is another class (DrawInitials.java) that runs as a GraphicsProgram and allows the user to pass certain letters as arguments to the GLetter object. I've used 'L' and 'O' as examples. However, I posted this because I'm not sure I'm using the right approach. That's why I need your help. I feel MY CODE IS TOO LONG! The code above is not the complete project...it only draws letters 'L' and 'O' for now.
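
    One way to attack the "my code is too long" worry is to make paintLetter() data-driven: keep each letter's segment indices in a lookup table, so that adding a letter means adding data rather than another if/else branch. A minimal sketch of the idea follows (in C#, as an illustration of the technique rather than of the ACM API; the indices are the ones the question uses for 'L' and 'O'):

        using System;
        using System.Collections.Generic;

        class SegmentFont
        {
            // Which entries of the line[] array to switch on for each letter.
            // Adding a new letter is now a one-line data change.
            static readonly Dictionary<char, int[]> LineSegments = new Dictionary<char, int[]>
            {
                { 'L', new[] { 3, 10, 17, 21, 22, 23 } },
                { 'O', new[] { 1, 10, 13, 22 } }, // plus all four arcs, kept in a second table
            };

            // The Action<int> delegate stands in for "turnOn(line[i])" so this
            // sketch stays independent of any particular graphics library.
            static void PaintLetter(char c, Action<int> turnOnLine)
            {
                int[] indices;
                if (LineSegments.TryGetValue(char.ToUpperInvariant(c), out indices))
                {
                    foreach (int i in indices)
                        turnOnLine(i);
                }
            }
        }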

    Read the article

  • Scripting Part 1

    - by rbishop
    Dynamic Scripting is a large topic, so let me get a couple of things out of the way first. If you aren't familiar with JavaScript, I can suggest CodeAcademy's JavaScript series. There are also many other websites and books that cover JavaScript from every possible angle. The second thing we need to deal with is JavaScript as a programming language versus a JavaScript environment running in a web browser. Many books, tutorials, and websites completely blur these two together, but they are in fact completely separate. What does this really mean in relation to DRM? Since DRM isn't a web browser, there are no document, window, history, screen, or location objects. There are no events like mousedown or click. Trying to call alert('hello!') in DRM will just cause an error. Those concepts are all related to an HTML document (web page) and are part of the Browser Object Model or Document Object Model. DRM has its own object model that exposes DRM-related objects. In practice, feel free to use those sorts of tutorials or practice within your browser; many of the concepts are directly translatable to writing scripts in DRM. Just don't try to call document.getElementById in your property definition!
    I think learning by example tends to work the best, so let's try getting a list of all the unique property values for a given node and its children.

        var uniqueValues = {};
        var childEnumerator = node.GetChildEnumerator();
        while (childEnumerator.MoveNext()) {
            var propValue = childEnumerator.GetCurrent().PropValue("Custom.testpropstr1");
            print(propValue);
            if (propValue != null && propValue != '' && !uniqueValues[propValue])
                uniqueValues[propValue] = true;
        }
        var result = '';
        for (var value in uniqueValues) {
            result += "Found value " + value + ",";
        }
        return result;

    Now let's break this down piece by piece.

        var uniqueValues = {};

    This declares a variable and initializes it as a new empty Object. You could also have written var uniqueValues = new Object(); Why use an object here? JavaScript objects can also function as a list of keys, and we'll use that later to store each property value as a key on the object.

        var childEnumerator = node.GetChildEnumerator();
        while (childEnumerator.MoveNext()) {

    This gets an enumerator for the node's children. The enumerator allows us to loop through the children one by one. If we wanted to get a filtered list of children, we would instead use ChildrenWith(). When we reach the end of the child list, the enumerator will return false for MoveNext() and that will stop the loop.

        var propValue = childEnumerator.GetCurrent().PropValue("Custom.testpropstr1");
        print(propValue);
        if (propValue != null && propValue != '' && !uniqueValues[propValue])
            uniqueValues[propValue] = true;
        }

    This gets the node the enumerator is currently pointing at, then calls PropValue() on it to get the value of a property. We then make sure the prop value isn't null or the empty string, then we make sure the value doesn't already exist as a key. Assuming it doesn't, we add it as a key with a value (true in this case, because it makes checking for an existing value faster when the value exists). A quick word on the print() function. When viewing the prop grid, running an export, or performing normal DRM operations it does nothing. If you have a lot of print() calls with complicated arguments it can slow your script down slightly, but otherwise it has no effect. But when using the script editor, all the output of print() will be shown in the Warnings area. This gives you an extremely useful debugging tool to see what exactly a script is doing.

        var result = '';
        for (var value in uniqueValues) {
            result += "Found value " + value + ",";
        }
        return result;

    Now we build a string by looping through all the keys in uniqueValues and adding each value to our string. The last step is to simply return the result. Hopefully this small example demonstrates some of the core Dynamic Scripting concepts. Next time, we can try checking for node references in other hierarchies to see if they are using duplicate property values.

    Read the article

  • How to Apply a Business Card Template to a Contact and Customize it in Outlook 2013

    - by Lori Kaufman
    If you want to add a business card template to an existing contact in Outlook, you can do so without having to enter all of the information again. We will also show you how to customize the layout and format of the text on the card. Microsoft provides a couple of business card templates you can use. We will use their Blue Sky template as an example. To open the archive file for the template you downloaded, double-click on the .cab file. NOTE: You can also use a tool like 7-Zip to open the archive. A new Extract tab becomes available under Compressed Folder Tools and the files in the archive are listed. Select the .vcf file in the list of files. This automatically activates the Extract tab. Click Extract To and select a location, or select Choose location if the desired location is not on the drop-down menu. Select a folder in which you want to save the .vcf file on the Copy Items dialog box and click Copy. NOTE: Use the Make New Folder button to create a new folder for the location, if desired. Double-click on the .vcf file that you copied out of the .cab archive file. By default, .vcf files are associated with Outlook, so when you double-click on a .vcf file, it automatically opens in a Contact window in Outlook. Change the Full Name to match the existing contact to which you want to apply this template. Delete the other contact info from the template. If you want to add any additional information not in the existing contact, enter it. Click Save & Close to save the contact with the new template. The Duplicate Contact Detected dialog box displays. To update the existing contact, select the Update information of selected Contact option. Click Update. NOTE: If you want to create a new contact from this template, select the Add new contact option. With the Contacts folder open (the People link on the Navigation Bar), click Business Card in the Current View section of the Home tab. You may notice that not all the fields from your contact display on the business card you just updated. Double-click on the contact to update the contact and the business card. On the Contact window, right-click on the image of the business card and select Edit Business Card from the popup menu. The Edit Business Card dialog box displays. You can change the design of the card, including changing the background color or image. The Fields box allows you to specify which fields display on the business card and in what order. Notice, in our example, that Company is listed below the Full Name, but no text displays on the business card below the name. That’s because we did not enter any information for Company in the Contact. We have information in Job Title. So, we select Company and click Remove to remove that field. Now, we want to add Job Title. First, select the field below which you want to add the new field. We select Full Name to add the Job Title below that. Then, we click Add and select Organization | Job Title from the popup menu to insert the Job Title. To make the Job Title white like the name, we select Job Title in the list of Fields and click the Font Color button in the Edit section. On the Color dialog box, select the color you want to use for the text in the selected field. Click OK. You can also make text bold, italic, or underlined. We chose to make the Job Title bold and the Full Name bold and italic. We also need to remove the Business Phone because this contact only has a mobile phone number. So, we add a Mobile Phone from the Phone submenu.
Then, we need to remove enough blank lines so the Mobile Phone is visible on the card. We also added a website and email address and removed more blank lines so they are visible. You can also move text to the right side of the card or make it centered on the card. We also changed the color of the bottom three lines to blue. Click OK to accept your changes and close the dialog box. Your new business card design displays on the Contact window. Click Save & Close to save the changes you made to the business card for this contact and close the Contact window. The final design of the business card displays in the Business Card view on the People screen. If you have a signature that contains the business card for the contact you just updated, you will also need to update the signature by removing the business card and adding it again using the Business Card button in the Signature editor. You can also add the updated Business Card to a signature without the image or without the vCard (.vcf) file.     

    Read the article

  • How to get distinct values from the List<T> with LINQ

    - by Vincent Maverick Durano
    Recently I was working with data from a generic List<T>, and one of my objectives was to get the distinct values found in the List. Consider that we have this simple class that holds the following properties:

        public class Product
        {
            public string Make { get; set; }
            public string Model { get; set; }
        }

    Now in the page code-behind we will create a list of products by doing the following:

        private List<Product> GetProducts()
        {
            List<Product> products = new List<Product>();
            Product p = new Product();
            p.Make = "Samsung"; p.Model = "Galaxy S 1"; products.Add(p);
            p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy S 2"; products.Add(p);
            p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy Note"; products.Add(p);
            p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4"; products.Add(p);
            p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4s"; products.Add(p);
            p = new Product(); p.Make = "HTC"; p.Model = "Sensation"; products.Add(p);
            p = new Product(); p.Make = "HTC"; p.Model = "Desire"; products.Add(p);
            p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p);
            p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p);
            p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p);
            p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p);
            return products;
        }

    And then let's bind the products to the GridView:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                Gridview1.DataSource = GetProducts();
                Gridview1.DataBind();
            }
        }

    Running the code will display something like this in the page: Now what I want is to get the distinct row values from the list. So what I did was to use the LINQ Distinct operator, and unfortunately it doesn't work. In order for it to work, you must use the overload of the Distinct operator that takes an IEqualityComparer<T> to get the desired results. So I've added this IEqualityComparer<T> class to compare values:

        class ProductComparer : IEqualityComparer<Product>
        {
            public bool Equals(Product x, Product y)
            {
                if (Object.ReferenceEquals(x, y)) return true;
                if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;
                return x.Make == y.Make && x.Model == y.Model;
            }

            public int GetHashCode(Product product)
            {
                if (Object.ReferenceEquals(product, null)) return 0;
                int hashProductName = product.Make == null ? 0 : product.Make.GetHashCode();
                int hashProductCode = product.Model.GetHashCode();
                return hashProductName ^ hashProductCode;
            }
        }

    After that you can then bind the GridView like this:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                Gridview1.DataSource = GetProducts().Distinct(new ProductComparer());
                Gridview1.DataBind();
            }
        }

    Running the page will give you the desired output below: As you notice, it now eliminates the duplicate rows in the GridView. Now what if we only want to get the distinct values for a certain field? For example, I want to get the distinct "Make" values such as Samsung, Apple, HTC, Nokia and Sony Ericsson and populate a DropDownList control with them for filtering purposes. I was hoping that the Distinct operator had an overload that could compare values based on a property, like GetProducts().Distinct(o => o.PropertyToCompare). But unfortunately it doesn't provide that overload, so what I did as a workaround was to use the GroupBy, Select and First LINQ query operators to achieve what I want. Here's the code to get the distinct values of a certain field:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                DropDownList1.DataSource = GetProducts().GroupBy(o => o.Make).Select(o => o.First());
                DropDownList1.DataTextField = "Make";
                DropDownList1.DataValueField = "Model";
                DropDownList1.DataBind();
            }
        }

    Running the code will display the following output below: That's it! I hope someone finds this post useful!
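
    If you need this for several fields, the GroupBy/Select/First combination can be wrapped up as a reusable extension method. Below is a minimal sketch (an illustration, not from the original post; newer .NET versions ship a built-in Enumerable.DistinctBy that does the same thing):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class EnumerableExtensions
        {
            // Yields the first element seen for each distinct key, streaming the
            // results instead of materializing every group like GroupBy does.
            public static IEnumerable<TSource> DistinctBy<TSource, TKey>(
                this IEnumerable<TSource> source,
                Func<TSource, TKey> keySelector)
            {
                var seenKeys = new HashSet<TKey>();
                foreach (TSource element in source)
                {
                    // HashSet<T>.Add returns false when the key was already seen
                    if (seenKeys.Add(keySelector(element)))
                        yield return element;
                }
            }
        }

    With that in place, the DropDownList binding becomes GetProducts().DistinctBy(o => o.Make).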

    Read the article

  • c++ optimize array of ints

    - by a432511
    I have a 2D lookup table of int16_t.

        int16_t my_array[37][73] = {{**DATA HERE**}}

    I have a mixture of values that range from just above the range of int8_t to just below the range of int8_t, and some of the values repeat themselves. I am trying to reduce the size of this lookup table. What I have done so far is split each int16_t value into two int8_t values to visualize the wasted bytes.

        int8_t part_1 = original_value >> 4;
        int8_t part_2 = original_value & 0x0000FFFF;
        // If the upper 4 bits of the original_value were empty
        if(part_1 == 0) wasted_bytes_count++;

    I can easily remove the zero-value int8_t parts that are wasting a byte of space, and I can also remove the duplicate values, but my question is: how do I remove those values while retaining the ability to look up based on the two indices? I contemplated translating this into a 1D array and adding a number following each duplicated value that would represent the number of duplicates that were removed, but I am struggling with how I would then identify what is a lookup value and what is a duplicate count. Also, it is further complicated by stripping out the zero int8_t values that were wasted bytes. EDIT: This array is stored in ROM already. RAM is even more limited than ROM, which is why it lives in ROM. EDIT: I am going to post a bounty for this question as soon as I can. I need a complete answer of how to store the information AND retrieve it. It does not need to be a 2D array as long as I can get the same values. EDIT: Adding the actual array below:

        {150,145,140,135,130,125,120,115,110,105,100,95,90,85,80,75,70,65,60,55,50,45,40,35,30,25,20,15,10,5,0,-4,-9,-14,-19,-24,-29,-34,-39,-44,-49,-54,-59,-64,-69,-74,-79,-84,-89,-94,-99,104,109,114,119,124,129,134,139,144,149,154,159,164,169,174,179,175,170,165,160,155,150}, \
        {143,137,131,126,120,115,110,105,100,95,90,85,80,75,71,66,62,57,53,48,44,39,35,31,27,22,18,14,9,5,1,-3,-7,-11,-16,-20,-25,-29,-34,-38,-43,-47,-52,-57,-61,-66,-71,-76,-81,-86,-91,-96,101,107,112,117,123,128,134,140,146,151,157,163,169,175,178,172,166,160,154,148,143}, \
        {130,124,118,112,107,101,96,92,87,82,78,74,70,65,61,57,54,50,46,42,38,34,31,27,23,19,16,12,8,4,1,-2,-6,-10,-14,-18,-22,-26,-30,-34,-38,-43,-47,-51,-56,-61,-65,-70,-75,-79,-84,-89,-94,100,105,111,116,122,128,135,141,148,155,162,170,177,174,166,159,151,144,137,130}, \
        {111,104,99,94,89,85,81,77,73,70,66,63,60,56,53,50,46,43,40,36,33,30,26,23,20,16,13,10,6,3,0,-3,-6,-9,-13,-16,-20,-24,-28,-32,-36,-40,-44,-48,-52,-57,-61,-65,-70,-74,-79,-84,-88,-93,-98,103,109,115,121,128,135,143,152,162,172,176,165,154,144,134,125,118,111}, \
        {85,81,77,74,71,68,65,63,60,58,56,53,51,49,46,43,41,38,35,32,29,26,23,19,16,13,10,7,4,1,-1,-3,-6,-9,-13,-16,-19,-23,-26,-30,-34,-38,-42,-46,-50,-54,-58,-62,-66,-70,-74,-78,-83,-87,-91,-95,100,105,110,117,124,133,144,159,178,160,141,125,112,103,96,90,85}, \
        {62,60,58,57,55,54,52,51,50,48,47,46,44,42,41,39,36,34,31,28,25,22,19,16,13,10,7,4,2,0,-3,-5,-8,-10,-13,-16,-19,-22,-26,-29,-33,-37,-41,-45,-49,-53,-56,-60,-64,-67,-70,-74,-77,-80,-83,-86,-89,-91,-94,-97,101,105,111,130,109,84,77,74,71,68,66,64,62}, \
        {46,46,45,44,44,43,42,42,41,41,40,39,38,37,36,35,33,31,28,26,23,20,16,13,10,7,4,1,-1,-3,-5,-7,-9,-12,-14,-16,-19,-22,-26,-29,-33,-36,-40,-44,-48,-51,-55,-58,-61,-64,-66,-68,-71,-72,-74,-74,-75,-74,-72,-68,-61,-48,-25,2,22,33,40,43,45,46,47,46,46}, \
{36,36,36,36,36,35,35,35,35,34,34,34,34,33,32,31,30,28,26,23,20,17,14,10,6,3,0,-2,-4,-7,-9,-10,-12,-14,-15,-17,-20,-23,-26,-29,-32,-36,-40,-43,-47,-50,-53,-56,-58,-60,-62,-63,-64,-64,-63,-62,-59,-55,-49,-41,-30,-17,-4,6,15,22,27,31,33,34,35,36,36}, \ {30,30,30,30,30,30,30,29,29,29,29,29,29,29,29,28,27,26,24,21,18,15,11,7,3,0,-3,-6,-9,-11,-12,-14,-15,-16,-17,-19,-21,-23,-26,-29,-32,-35,-39,-42,-45,-48,-51,-53,-55,-56,-57,-57,-56,-55,-53,-49,-44,-38,-31,-23,-14,-6,0,7,13,17,21,24,26,27,29,29,30}, \ {25,25,26,26,26,25,25,25,25,25,25,25,25,26,25,25,24,23,21,19,16,12,8,4,0,-3,-7,-10,-13,-15,-16,-17,-18,-19,-20,-21,-22,-23,-25,-28,-31,-34,-37,-40,-43,-46,-48,-49,-50,-51,-51,-50,-48,-45,-42,-37,-32,-26,-19,-13,-7,-1,3,7,11,14,17,19,21,23,24,25,25}, \ {21,22,22,22,22,22,22,22,22,22,22,22,22,22,22,22,21,20,18,16,13,9,5,1,-3,-7,-11,-14,-17,-18,-20,-21,-21,-22,-22,-22,-23,-23,-25,-27,-29,-32,-35,-37,-40,-42,-44,-45,-45,-45,-44,-42,-40,-36,-32,-27,-22,-17,-12,-7,-3,0,3,7,9,12,14,16,18,19,20,21,21}, \ {18,19,19,19,19,19,19,19,19,19,19,19,19,19,19,19,18,17,16,14,10,7,2,-1,-6,-10,-14,-17,-19,-21,-22,-23,-24,-24,-24,-24,-23,-23,-23,-24,-26,-28,-30,-33,-35,-37,-38,-39,-39,-38,-36,-34,-31,-28,-24,-19,-15,-10,-6,-3,0,1,4,6,8,10,12,14,15,16,17,18,18}, \ {16,16,17,17,17,17,17,17,17,17,17,16,16,16,16,16,16,15,13,11,8,4,0,-4,-9,-13,-16,-19,-21,-23,-24,-25,-25,-25,-25,-24,-23,-21,-20,-20,-21,-22,-24,-26,-28,-30,-31,-32,-31,-30,-29,-27,-24,-21,-17,-13,-9,-6,-3,-1,0,2,4,5,7,9,10,12,13,14,15,16,16}, \ {14,14,14,15,15,15,15,15,15,15,14,14,14,14,14,14,13,12,11,9,5,2,-2,-6,-11,-15,-18,-21,-23,-24,-25,-25,-25,-25,-24,-22,-21,-18,-16,-15,-15,-15,-17,-19,-21,-22,-24,-24,-24,-23,-22,-20,-18,-15,-12,-9,-5,-3,-1,0,1,2,4,5,6,8,9,10,11,12,13,14,14}, \ {12,13,13,13,13,13,13,13,13,13,13,13,12,12,12,12,11,10,9,6,3,0,-4,-8,-12,-16,-19,-21,-23,-24,-24,-24,-24,-23,-22,-20,-17,-15,-12,-10,-9,-9,-10,-12,-13,-15,-17,-17,-18,-17,-16,-15,-13,-11,-8,-5,-3,-1,0,1,1,2,3,4,6,7,8,9,10,11,12,12,12}, \ {11,11,11,11,11,12,12,12,12,12,11,11,11,11,11,10,10,9,7,5,2,-1,-5,-9,-13,-17,-20,-22,-23,-23,-23,-23,-22,-20,-18,-16,-14,-11,-9,-6,-5,-4,-5,-6,-8,-9,-11,-12,-12,-12,-12,-11,-9,-8,-6,-3,-1,0,0,1,1,2,3,4,5,6,7,8,9,10,11,11,11}, \ {10,10,10,10,10,10,10,10,10,10,10,10,10,10,9,9,9,7,6,3,0,-3,-6,-10,-14,-17,-20,-21,-22,-22,-22,-21,-19,-17,-15,-13,-10,-8,-6,-4,-2,-2,-2,-2,-4,-5,-7,-8,-8,-9,-8,-8,-7,-5,-4,-2,0,0,1,1,1,2,2,3,4,5,6,7,8,9,10,10,10}, \ {9,9,9,9,9,9,9,10,10,9,9,9,9,9,9,8,8,6,5,2,0,-4,-7,-11,-15,-17,-19,-21,-21,-21,-20,-18,-16,-14,-12,-10,-8,-6,-4,-2,-1,0,0,0,-1,-2,-4,-5,-5,-6,-6,-5,-5,-4,-3,-1,0,0,1,1,1,1,2,3,3,5,6,7,8,8,9,9,9}, \ {9,9,9,9,9,9,9,9,9,9,9,9,8,8,8,8,7,5,4,1,-1,-5,-8,-12,-15,-17,-19,-20,-20,-19,-18,-16,-14,-11,-9,-7,-5,-4,-2,-1,0,0,1,1,0,0,-2,-3,-3,-4,-4,-4,-3,-3,-2,-1,0,0,0,0,0,1,1,2,3,4,5,6,7,8,8,9,9}, \ {9,9,9,8,8,8,9,9,9,9,9,8,8,8,8,7,6,5,3,0,-2,-5,-9,-12,-15,-17,-18,-19,-19,-18,-16,-14,-12,-9,-7,-5,-4,-2,-1,0,0,1,1,1,1,0,0,-1,-2,-2,-3,-3,-2,-2,-1,-1,0,0,0,0,0,0,0,1,2,3,4,5,6,7,8,8,9}, \ {8,8,8,8,8,8,9,9,9,9,9,9,8,8,8,7,6,4,2,0,-3,-6,-9,-12,-15,-17,-18,-18,-17,-16,-14,-12,-10,-8,-6,-4,-2,-1,0,0,1,2,2,2,2,1,0,0,-1,-1,-1,-2,-2,-1,-1,0,0,0,0,0,0,0,0,0,1,2,3,4,5,6,7,8,8}, \ {8,8,8,8,9,9,9,9,9,9,9,9,9,8,8,7,5,3,1,-1,-4,-7,-10,-13,-15,-16,-17,-17,-16,-15,-13,-11,-9,-6,-5,-3,-2,0,0,0,1,2,2,2,2,1,1,0,0,0,-1,-1,-1,-1,-1,0,0,0,0,-1,-1,-1,-1,-1,0,0,1,3,4,5,7,7,8}, \ 
{8,8,9,9,9,9,10,10,10,10,10,10,10,9,8,7,5,3,0,-2,-5,-8,-11,-13,-15,-16,-16,-16,-15,-13,-12,-10,-8,-6,-4,-2,-1,0,0,1,2,2,3,3,2,2,1,0,0,0,0,0,0,0,0,0,0,-1,-1,-2,-2,-2,-2,-2,-1,0,0,1,3,4,6,7,8}, \ {7,8,9,9,9,10,10,11,11,11,11,11,10,10,9,7,5,3,0,-2,-6,-9,-11,-13,-15,-16,-16,-15,-14,-13,-11,-9,-7,-5,-3,-2,0,0,1,1,2,3,3,3,3,2,2,1,1,0,0,0,0,0,0,0,-1,-1,-2,-3,-3,-4,-4,-4,-3,-2,-1,0,1,3,5,6,7}, \ {6,8,9,9,10,11,11,12,12,12,12,12,11,11,9,7,5,2,0,-3,-7,-10,-12,-14,-15,-16,-15,-15,-13,-12,-10,-8,-7,-5,-3,-1,0,0,1,2,2,3,3,4,3,3,3,2,2,1,1,1,0,0,0,0,-1,-2,-3,-4,-4,-5,-5,-5,-5,-4,-2,-1,0,2,3,5,6}, \ {6,7,8,10,11,12,12,13,13,14,14,13,13,11,10,8,5,2,0,-4,-8,-11,-13,-15,-16,-16,-16,-15,-13,-12,-10,-8,-6,-5,-3,-1,0,0,1,2,3,3,4,4,4,4,4,3,3,3,2,2,1,1,0,0,-1,-2,-3,-5,-6,-7,-7,-7,-6,-5,-4,-3,-1,0,2,4,6}, \ {5,7,8,10,11,12,13,14,15,15,15,14,14,12,11,8,5,2,-1,-5,-9,-12,-14,-16,-17,-17,-16,-15,-14,-12,-11,-9,-7,-5,-3,-1,0,0,1,2,3,4,4,5,5,5,5,5,5,4,4,3,3,2,1,0,-1,-2,-4,-6,-7,-8,-8,-8,-8,-7,-6,-4,-2,0,1,3,5}, \ {4,6,8,10,12,13,14,15,16,16,16,16,15,13,11,9,5,2,-2,-6,-10,-13,-16,-17,-18,-18,-17,-16,-15,-13,-11,-9,-7,-5,-4,-2,0,0,1,3,3,4,5,6,6,7,7,7,7,7,6,5,4,3,2,0,-1,-3,-5,-7,-8,-9,-10,-10,-10,-9,-7,-5,-4,-1,0,2,4}, \ {4,6,8,10,12,14,15,16,17,18,18,17,16,15,12,9,5,1,-3,-8,-12,-15,-18,-19,-20,-20,-19,-18,-16,-15,-13,-11,-8,-6,-4,-2,-1,0,1,3,4,5,6,7,8,9,9,9,9,9,9,8,7,5,3,1,-1,-3,-6,-8,-10,-11,-12,-12,-11,-10,-9,-7,-5,-2,0,1,4}, \ {4,6,8,11,13,15,16,18,19,19,19,19,18,16,13,10,5,0,-5,-10,-15,-18,-21,-22,-23,-22,-22,-20,-18,-17,-14,-12,-10,-8,-5,-3,-1,0,1,3,5,6,8,9,10,11,12,12,13,12,12,11,9,7,5,2,0,-3,-6,-9,-11,-12,-13,-13,-12,-11,-10,-8,-6,-3,-1,1,4}, \ {3,6,9,11,14,16,17,19,20,21,21,21,19,17,14,10,4,-1,-8,-14,-19,-22,-25,-26,-26,-26,-25,-23,-21,-19,-17,-14,-12,-9,-7,-4,-2,0,1,3,5,7,9,11,13,14,15,16,16,16,16,15,13,10,7,4,0,-3,-7,-10,-12,-14,-15,-14,-14,-12,-11,-9,-6,-4,-1,1,3}, \ {4,6,9,12,14,17,19,21,22,23,23,23,21,19,15,9,2,-5,-13,-20,-25,-28,-30,-31,-31,-30,-29,-27,-25,-22,-20,-17,-14,-11,-9,-6,-3,0,1,4,6,9,11,13,15,17,19,20,21,21,21,20,18,15,11,6,2,-2,-7,-11,-13,-15,-16,-16,-15,-13,-11,-9,-7,-4,-1,1,4}, \ {4,7,10,13,15,18,20,22,24,25,25,25,23,20,15,7,-2,-12,-22,-29,-34,-37,-38,-38,-37,-36,-34,-31,-29,-26,-23,-20,-17,-13,-10,-7,-4,-1,2,5,8,11,13,16,18,21,23,24,26,26,26,26,24,21,17,12,5,0,-6,-10,-14,-16,-16,-16,-15,-14,-12,-10,-7,-4,-1,1,4}, \ {4,7,10,13,16,19,22,24,26,27,27,26,24,19,11,-1,-15,-28,-37,-43,-46,-47,-47,-45,-44,-41,-39,-36,-32,-29,-26,-22,-19,-15,-11,-8,-4,-1,2,5,9,12,15,19,22,24,27,29,31,33,33,33,32,30,26,21,14,6,0,-6,-11,-14,-15,-16,-15,-14,-12,-9,-7,-4,-1,1,4}, \ {6,9,12,15,18,21,23,25,27,28,27,24,17,4,-14,-34,-49,-56,-60,-60,-60,-58,-56,-53,-50,-47,-43,-40,-36,-32,-28,-25,-21,-17,-13,-9,-5,-1,2,6,10,14,17,21,24,28,31,34,37,39,41,42,43,43,41,38,33,25,17,8,0,-4,-8,-10,-10,-10,-8,-7,-4,-2,0,3,6}, \ {22,24,26,28,30,32,33,31,23,-18,-81,-96,-99,-98,-95,-93,-89,-86,-82,-78,-74,-70,-66,-62,-57,-53,-49,-44,-40,-36,-32,-27,-23,-19,-14,-10,-6,-1,2,6,10,15,19,23,27,31,35,38,42,45,49,52,55,57,60,61,63,63,62,61,57,53,47,40,33,28,23,21,19,19,19,20,22}, \ {168,173,178,176,171,166,161,156,151,146,141,136,131,126,121,116,111,106,101,-96,-91,-86,-81,-76,-71,-66,-61,-56,-51,-46,-41,-36,-31,-26,-21,-16,-11,-6,-1,3,8,13,18,23,28,33,38,43,48,53,58,63,68,73,78,83,88,93,98,103,108,113,118,123,128,133,138,143,148,153,158,163,168}, \ Thanks for your time.

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable - and while they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head. “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months earlier. After filling out the form, I was later introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
    We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where the capture of and access to the right information could be significantly improved through automation. As you evaluate how content flows through your organization's processes, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information or, worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards Enterprise One, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • No More NCrunch For Me

    - by Steve Wilkes
    When I opened up Visual Studio this morning, I was greeted with this little popup: NCrunch is a Visual Studio add-in which runs your tests while you work so you know if and when you've broken anything, as well as providing coverage indicators in the IDE and coverage metrics on demand. It recently went commercial (which I thought was fair enough), and time is running out for the free version I've been using for the last couple of months. From my experiences using NCrunch I'm going to let it expire, and go about my business without it. Here's why. Before I start, let me say that I think NCrunch is a good product, which is to say it's had a positive impact on my programming. I've used it to help test-drive a library I'm making right from the start of the project, and especially at the beginning it was very useful to have it run all my tests whenever I made a change. The first problem is that while that was cool to start with, it’s recently become a bit of a chore. Problems Running Tests NCrunch has two 'engine modes' in which it can run tests for you - it can run all your tests when you make a change, or it can figure out which tests were impacted and only run those. Unfortunately, it became clear pretty early on that that second option (which is marked as 'experimental') wasn't really working for me, so I had to have it run everything. With a smallish number of tests and while I was adding new features that was great, but I've now got 445 tests (still not exactly loads) and am more in a 'clean and tidy' mode where I know that a change I'm making will probably only affect a particular subset of the tests. With that in mind it's a bit of a drag sitting there after I make a change and having to wait for NCrunch to run everything. I could disable it and manually run the tests I know are impacted, but then what's the point of having NCrunch? If the 'impacted only' engine mode worked well this problem would go away, but that's not what I found. Secondly, what's wrong with this picture? I've got 445 tests, and NCrunch has queued 455 tests to run. So it's queued duplicate tests - in this quickly-screenshotted case 10, but I've seen the total queue get up over 600. If I'm already itchy waiting for it to run all my tests against a change I know only affects a few, I'm even itchier waiting for it to run a lot of them twice. Problems With Code Coverage NCrunch marks each line of code with a dot to say if it's covered by tests - a black dot says the line isn't covered, a red dot says it's covered but at least one of the covering tests is failing, and a green dot means all the covering tests pass. It also calculates coverage statistics for you. Unfortunately, there's a couple of flaws in the coverage. Firstly, it doesn't support ExcludeFromCodeCoverage attributes. This feature has been requested and I expect will be included in a later release, but right now it doesn't. So this: ...is counted as a non-covered line, and drags your coverage statistics down. Hmph. As well as that, coverage of certain types of code is missed. This: ...is definitely covered. I am 100% absolutely certain it is, by several tests. NCrunch doesn't pick it up, down go my coverage statistics. I've had NCrunch find genuinely uncovered code which I've been able to remove, and that's great, but what's the coverage percentage on this project? Umm... I don't know. Conclusion None of these are major, tool-crippling problems, and I expect NCrunch to get much better in future releases. 
    The current version has some great features, like this: ...that's a line of code with a failing test covering it, and NCrunch can run that failing test and take me to that line exquisitely easily. That's awesome! I'd happily pay for a tool that can do that. But here's the thing: NCrunch (currently) costs $159 (about £100) for a personal licence and $289 (about £180) for a commercial one. I'm not sure which one I'd need, as my project is a personal one which I'm intending to open-source, but I'm a professional, self-employed developer. In any case, that seems like a lot of money for an imperfect tool. If it did everything it's advertised to do more or less perfectly I'd consider it, but it doesn't. So no more NCrunch for me.
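
    For reference, the coverage-exclusion attribute the post mentions lives in System.Diagnostics.CodeAnalysis. Here is a hedged sketch of the kind of member being described (an editorial illustration, not the author's original code or screenshots):

        using System.Diagnostics.CodeAnalysis;

        public class Customer
        {
            // Explicitly excluded from coverage analysis; the complaint above is
            // that NCrunch still counted members like this as uncovered lines.
            [ExcludeFromCodeCoverage]
            public override string ToString()
            {
                return "Customer";
            }
        }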

    Read the article

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than the internet was 12 years ago. Explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than 2 second mobile app startup. An app with a poor mobile experience results in not buying stuff, going to a competitor, not liking your company. Battery life. A bad mobile app is worse than no app at all because it turns people away from the brand, etc. Apps didn't exist 10 years ago; 72 billion dollars a year in 2013, 151 billion in 2017. Testing performance. Mobile is different than a regular app. Need to fix issues before customers discover them. ARO is a free and open source AT&T tool for identifying mobile app performance problems. Mobile data is different -- radio resource control state machine. Radio resource control -- radio from idle to continuous reception -- drains battery, sends data, packets coming through; after packets come through the radio is still on, which is tail time; after 10 seconds of no data coming through the radio goes off. For example, YouTube, e.g., 10 to 15 seconds after every connection, can be a huge drain on battery; app traffic triggers the RRC state. Goal. Balance fast network connectivity against battery usage. ARO is free and open source, tests any platform, and won awards. How do I test my app? pcap or tcpdump the network. Native collector: Android and iOS. A rooted Android device is needed. Test app on phone, background data, idle for ads and analytics. Graded against 25 best practices. See all the processes, all network traffic mapped to processes, stats about the trace; can look just at your app, exclude Facebook, etc. Many tests conducted, e.g., file download, HTML (wrapped applications, e.g., Cordova). Best Practices. Make stuff smaller. GZIP: smaller files download faster, best for files larger than 800 bytes. Minification -- remove tabs and commenting -- the browser doesn't need that; just give the processor what it needs, separate the wheat from the chaff. Images -- make images smaller: a 1024x1024 image for a checkmark, swish it, make it 33% smaller; ARO records the screen, probably could be 9 times smaller. Download less stuff. 17% of HTTP content on mobile is duplicate data because of caching; reloading from cache is 75% to 99% faster than downloading again, 75% possible savings, which means the app will start up faster because it is using the cache -- everyone wants the app starting up in 2 seconds. Make fewer HTTP requests. Inlining and combining CSS and JS when possible reduces the number of requests; sprite images that are used often. Fewer connections. Faster and uses less battery. For example: download an image every 60 secs, download an ad every 60 seconds, send analytics every 60 seconds -- instead of that, use a transaction manager and download everything at once, reducing the amount of time connected to the network by 40%. Also -- 80% of applications do NOT close connections when they are finished; e.g., download a picture, and 10 seconds later the radio turns off. If you do not explicitly close, eventually the server closes: 38% more tail time, and 40% less energy if you close the connection right away. Background data traffic is 27% of data and 55% of network time; this kills the battery. Look at redirection. Adds 200 to 600 ms on each connection; waterfall diagram of all the requests -- e.g., xyz.com redirects to www.xyz.com redirects to xyz.mobi to www.xyz.com. Waterfall visualization of packets; minimize redirects, but redirects are fine. HTML best practices.
    Order matters, and watch for hiding code (JS downloading blocks rendering, so always do CSS before JS or load JS asynchronously; CSS 'display:none' hides images from the user, but the browser downloads them anyway, which adds latency to the application). Some apps turn on GPS for no reason. Tell the network when down, but maybe some other app is using the radio at the same time. It's all about knowing best practices: everyone wins with ARO (carriers, e.g., AT&T, developers, customers). Faster apps, better battery usage, better network traffic, better app reviews, happier customers. The MBTA app was referenced as an example. ARO is free, open source, and can test all platforms.
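
    Two of the points above - ask for compressed responses, and close connections as soon as you are done - translate directly into code. Here is a hedged sketch in C# (the session itself was platform-neutral, so the URL and structure are illustrative only; the same ideas apply to HttpURLConnection on Android or NSURLSession on iOS):

        using System;
        using System.IO;
        using System.Net;

        class RadioFriendlyFetch
        {
            static string Download(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);

                // "Make stuff smaller": advertise gzip/deflate support and let
                // the framework decompress transparently (best for payloads
                // larger than roughly 800 bytes, per the session notes).
                request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

                // "Fewer connections": don't hold the connection open after the
                // transfer; on a mobile network an idle connection keeps the
                // radio in its high-power state for seconds of tail time.
                request.KeepAlive = false;

                // 'using' closes the response and its stream as soon as the body
                // has been read, instead of waiting for a server-side timeout.
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }

            static void Main()
            {
                // Hypothetical URL, purely for illustration.
                Console.WriteLine(Download("http://example.com/"));
            }
        }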

    Read the article

  • IIS logs show sc-win32-status=64 but only through some networks

    - by wweicker
    I have an ASP.NET application running on a client server (W2k3, IIS6, .NET 2.0). FWIW, this is a Test instance; it hasn't been moved into Production yet, so it is not running under SSL, load balancing, etc. When I access one of the pages on their server from our office, the page gets hit once. Inspecting the IIS logs (C:\WINDOWS\system32\LogFiles\W3SVC1) shows a GET for that page; then I push a button on the page and the log file shows a POST. This seems to be working fine so far. Now when I remote into the client's network and access the page from one of their local machines, the log file shows a GET, then I push the button on the page and the log shows two POSTs at the same second. The first one shows status (sc-status, sc-substatus, sc-win32-status) 200 0 64, the second shows 200 0 0. In the log file, both POSTs are identical. Basically the log looks like this (except I masked some of the data):

        #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
        2009-08-11 20:19:32 x.x.x.x GET /File.aspx - 80 - y.y.y.y Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.0;+WOW64;+Trident/4.0;+SLCC1;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.21022;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30618;+MDDR;+OfficeLiveConnector.1.4;+OfficeLivePatch.0.0) 200 0 0
        2009-08-11 20:19:45 x.x.x.x POST /File.aspx - 80 - y.y.y.y Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.0;+WOW64;+Trident/4.0;+SLCC1;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.21022;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30618;+MDDR;+OfficeLiveConnector.1.4;+OfficeLivePatch.0.0) 200 0 64
        2009-08-11 20:19:45 x.x.x.x POST /File.aspx - 80 - y.y.y.y Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.0;+WOW64;+Trident/4.0;+SLCC1;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.21022;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30618;+MDDR;+OfficeLiveConnector.1.4;+OfficeLivePatch.0.0) 200 0 0

    The problem is, the page is getting hit twice. The database performs an operation for the first request; then the second request detects that a duplicate operation is being performed and throws an error message. The users think their operation failed, but it actually succeeded. The error description of sc-win32-status 64 is: "The specified network name is no longer available." This leads me to believe, given that both POST requests show an HTTP status of 200, that the server is successful in serving the request, but the client is never notified and resubmits the request. How can I troubleshoot this? Any ideas what could be causing this behavior on their internal network only? I should mention, this is happening at two separate client sites, but does not happen at six of our other client sites, or in our office, or connecting to any of our eight clients over the web. What could be making this reproducible 100% of the time on their local network but 0% of the time anywhere else? Update: I found a very small number of the duplicated POST requests had sc-win32-status of 995 instead of 64 as originally reported. The error description of sc-win32-status=995 is: "The I/O operation has been aborted because of either a thread exit or an application request." This doesn't make any sense (considering I have full access to the code). I still don't understand how or why this issue is occurring, but the new error code leads me to believe it may not be a network issue after all, and I am now investigating the possibility of a random code bug.

    Read the article

  • HTTP 500 Internal Server Error on IIS 7.5 with MVC3

    - by Tor Haugen
    I am trying to install an MVC3 application on our production server with no luck. The application is from a 3rd party (compiled), and so debugging is not available to me. Besides, I strongly suspect the error occurs before any code in the site has a chance to execute. Our staging server is - as far as I can determine - set up exactly like the production server. Both run Windows Server 2008 Standard R2, and both also run a Sharepoint 2010 site (though this install doesn't touch that in any way). IIS is version 7.5, and .NET Framework 4.0 (required by the MVC app) is (recently) installed (by me, with a reboot after). The application is very small and simple and, as far as I can tell, sticks to fairly standard functionality - including forms authentication (i.e. it doesn't pull any dirty tricks). The error message shown in the browser is very general:

        HTTP Error 500.0 - Internal Server Error
        An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.

    The bit about 'An error message detailing the cause' being in the application event log seems to be just speculation - a pious hope that whatever code actually caused the error will log it. Nothing useful is to be found in the event log (only the very same message, logged by IIS).

        Module: AspNetInitClrHostFailureModule
        Notification: BeginRequest
        Handler: StaticFile
        Error Code: 0x80070002
        Requested URL: http://xxxxxx.xxxxxx.xx:80/
        Physical Path: C:\Xxxxxxx\Prod\WebClient
        Logon Method: Not yet determined
        Logon User: Not yet determined

    Using Failed Request Tracing, I have been able to track the error (as also indicated above) to the AspNetInitClrHostFailureModule:

        103. -NOTIFY_MODULE_START ModuleName AspNetInitClrHostFailureModule Notification 1 fIsPostNotification false Notification BEGIN_REQUEST
        104. -SET_RESPONSE_ERROR_DESCRIPTION ErrorDescription An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.
        105. -MODULE_SET_RESPONSE_ERROR_STATUS ModuleName AspNetInitClrHostFailureModule Notification 1 HttpStatus 500 HttpReason Internal Server Error HttpSubStatus 0 ErrorCode 2147942402 ConfigExceptionInfo Notification BEGIN_REQUEST ErrorCode The system cannot find the file specified. (0x80070002)

    So there you have it. Seemingly, the AspNetInitClrHostFailureModule fails to find some file. So some questions are: What is the AspNetInitClrHostFailureModule? It is not listed in the fairly exhausting list of modules configurable in IIS manager for the site. I have had no success googling it either. Maybe it's secret.. I access the root URL of the site. This is supposed to be redirected to /Account/LogOn by the FormsAuthenticationModule. Why then is the handler StaticFile? Is that a clue? I have tried removing the infamous system.webserver/modules/runAllManagedModulesForAllRequests attribute, and that makes the error go away (but MVC not actually working, of course). I am prepared to specify all necessary modules manually if that's what it takes, but if the AspNetInitClrHostFailureModule is actually needed, I will be just as stuck. Does anyone know, or can anyone direct me to someone who knows, exactly what modules a typical MVC3 application actually needs?
This question might well be a duplicate of this one, but he didn't get any useful answer, and also asked less specific questions. So I'll have my own go. Hoping for some help here :) Edit: I have now tried setting up a trivial MVC 3 project on the server. I created a new project using the MVC Application template, compiled it and deployed it to the server. It behaves in exactly the same way. The server simply cannot run MVC 3 projects.

    Read the article

  • 502 Bad Gateway - nginx

    - by ADH2
    I am randomly receiving 502 Bad Gateway error pages. I can reproduce the issue by modifying hosting plans in Plesk 11 while refreshing a page for a minute or two. When I do get the 502 page, all I have to do is refresh the browser and the page loads properly. I am using CentOS 6. This is from today's log (/var/log/nginx/error.log):

        2012/12/04 10:52:07 [error] 21272#0: *545 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 82.77.68.111, server: likeit-craiova.ro, request: "GET / HTTP/1.1", upstream: "http://195.254.135.113:7080/", host: "likeit-craiova.ro"

    This is the nginx config (/etc/nginx/nginx.conf):

        #user nginx;
        worker_processes 1;
        #error_log /var/log/nginx/error.log;
        #error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;
        #pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            #tcp_nodelay on;
            #gzip on;
            #gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            server_tokens off;
            include /etc/nginx/conf.d/*.conf;
        }

    The fastcgi config file (/etc/nginx/fastcgi.conf):

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param HTTPS $https if_not_empty;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    The fastcgi parameters config (/etc/nginx/fastcgi_params) is identical to the list above except that it omits the SCRIPT_FILENAME line.

    Also, I'm getting this on a shared hosting server, on one of the domains:

        Unable to generate the web server configuration file on the host because of the following errors:
        nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/nginx.conf:45
        nginx: [emerg] open() "/var/www/vhosts/partydayandnight.ro/statistics/logs/proxy_access_log" failed (24: Too many open files)
        nginx: configuration file /etc/nginx/nginx.conf test failed
        Please resolve the errors in web server configuration templates and generate the file again.

    Why is this appearing, and what trouble may it cause? What can I do to get these errors fixed? Thank you!

    Read the article

  • Squid w/ SquidGuard fails w/ "Too few redirector processes are running"

    - by DKNUCKLES
    I'm trying to implement a Squid proxy in a quick and easy fashion and I'm receiving some errors I have been unable to resolve. The box is a pre-made appliance, but it seems to fail on launch. The following is the cache.log output when I attempt to launch the squid service:

        2012/11/18 22:14:29| Starting Squid Cache version 3.0.STABLE20-20091201 for i686-pc-linux-gnu...
        2012/11/18 22:14:29| Process ID 12647
        2012/11/18 22:14:29| With 1024 file descriptors available
        2012/11/18 22:14:29| Performing DNS Tests...
        2012/11/18 22:14:29| Successful DNS name lookup tests...
        2012/11/18 22:14:29| DNS Socket created at 0.0.0.0, port 40513, FD 8
        2012/11/18 22:14:29| Adding nameserver 192.168.0.78 from /etc/resolv.conf
        2012/11/18 22:14:29| Adding nameserver 8.8.8.8 from /etc/resolv.conf
        2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'bin' processes
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'squid-auth.pl' processes
        2012/11/18 22:14:29| User-Agent logging is disabled.
        2012/11/18 22:14:29| Referer logging is disabled.
        2012/11/18 22:14:29| Unlinkd pipe opened on FD 23
        2012/11/18 22:14:29| Swap maxSize 10240000 + 8192 KB, estimated 788322 objects
        2012/11/18 22:14:29| Target number of buckets: 39416
        2012/11/18 22:14:29| Using 65536 Store buckets
        2012/11/18 22:14:29| Max Mem size: 8192 KB
        2012/11/18 22:14:29| Max Swap size: 10240000 KB
        2012/11/18 22:14:29| Version 1 of swap file with LFS support detected...
        2012/11/18 22:14:29| Rebuilding storage in /opt/squid3/var/cache (DIRTY)
        2012/11/18 22:14:29| Using Least Load store dir selection
        2012/11/18 22:14:29| Set Current Directory to /opt/squid3/var/cache
        2012/11/18 22:14:29| Loaded Icons.
        2012/11/18 22:14:29| Accepting HTTP connections at 10.0.0.6, port 3128, FD 25.
        2012/11/18 22:14:29| Accepting ICP messages at 0.0.0.0, port 3130, FD 26.
        2012/11/18 22:14:29| HTCP Disabled.
        2012/11/18 22:14:29| Ready to serve requests.
        2012/11/18 22:14:29| Done reading /opt/squid3/var/cache swaplog (0 entries)
        2012/11/18 22:14:29| Finished rebuilding storage from disk.
        2012/11/18 22:14:29| 0 Entries scanned
        2012/11/18 22:14:29| 0 Invalid entries.
        2012/11/18 22:14:29| 0 With invalid flags.
        2012/11/18 22:14:29| 0 Objects loaded.
        2012/11/18 22:14:29| 0 Objects expired.
        2012/11/18 22:14:29| 0 Objects cancelled.
        2012/11/18 22:14:29| 0 Duplicate URLs purged.
        2012/11/18 22:14:29| 0 Swapfile clashes avoided.
        2012/11/18 22:14:29| Took 0.02 seconds ( 0.00 objects/sec).
        2012/11/18 22:14:29| Beginning Validation Procedure
        2012/11/18 22:14:29| WARNING: redirector #1 (FD 9) exited
        2012/11/18 22:14:29| WARNING: redirector #2 (FD 10) exited
        2012/11/18 22:14:29| WARNING: redirector #3 (FD 11) exited
        2012/11/18 22:14:29| WARNING: redirector #4 (FD 12) exited
        2012/11/18 22:14:29| Too few redirector processes are running
        FATAL: The redirector helpers are crashing too rapidly, need help!
        Squid Cache (Version 3.0.STABLE20-20091201): Terminated abnormally.
        CPU Usage: 0.112 seconds = 0.032 user + 0.080 sys
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 0
        Memory usage for squid via mallinfo():
        total space in arena: 2944 KB
        Ordinary blocks: 2857 KB 6 blks
        Small blocks: 0 KB 0 blks
        Holding blocks: 1772 KB 8 blks
        Free Small blocks: 0 KB
        Free Ordinary blocks: 86 KB
        Total in use: 4629 KB 157%
        Total free: 86 KB 3%

    The "Permission denied" area is where I have been focusing my attention, with no luck. Here is what I've tried:

    - chmod'ing the /opt/squidguard/bin folder to 777
    - changing the user that SquidGuard runs under to root / nobody / www-data / squid3
    - changing ownership of the /opt/squidguard/bin folder to all of the names listed above, after assigning that user to run with Squid.

    Any help with this would be greatly appreciated.
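
    The repeated "ipcCreate: /opt/squidguard/bin: (13) Permission denied" lines, together with "Starting 5/5 'bin' processes", suggest Squid is trying to execute /opt/squidguard/bin itself, a directory, rather than the squidGuard binary inside it, which would also explain why the redirector helpers exit immediately. A hedged sketch of what the redirector lines in squid.conf would normally look like (the paths and child count are assumptions based on the log above):

        # squid.conf: point the rewriter at the binary, not its directory
        url_rewrite_program /opt/squidguard/bin/squidGuard -c /opt/squidguard/squidGuard.conf
        url_rewrite_children 5

    If the config already names the binary, running it by hand as the Squid user (e.g. su -s /bin/sh squid3 -c "/opt/squidguard/bin/squidGuard -c /opt/squidguard/squidGuard.conf") should reproduce whatever error the helper is dying with.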

    Read the article

  • Integrating HP Systems Insight Manager into an existing environment

    - by ewwhite
    I'm working with an environment that spans multiple data centers/sites and consists primarily of HP ProLiant servers (G5-G7) running Linux. The mix is 30% RHEL/CentOS; the rest are Gentoo :(. I also have a few dozen virtual machines running back-office and Windows servers on VMware ESX hosts. I run OpenNMS to pull SNMP data from the various server nodes and networking devices. While OpenNMS works wonderfully for up/down, thresholds and notifications, its native handling of traps is a little rough and the graphs are not particularly pretty, so I use Orca/RRD for performance trending and nice graphs.

    I'm tasked with inventorying the environment and wanted to come up with a clean way to organize server information. Since my environment is mostly HP, I've been playing with HP Systems Insight Manager as a way to extract server data and to deploy HP health/monitoring packages and firmware. The Gentoo systems eventually have to be converted to CentOS, so getting a quick assessment of what hardware is where would be great. Although I've read through a few hundred pages of HP manuals, I'm having a difficult time understanding how to get HP SIM to do what I want. My main problems are:

    - I have about 40 subnets to deal with, 98% connected with private lines to facilities across the globe. I don't want to initiate an HP SIM discovery only to pull back every piece of intermediate networking hardware and equipment from all of the locations; I'd like this to focus on the servers.
    - I have OpenNMS configured to accept traps and I don't want HP SIM to duplicate that effort. The built-in software deployment tool seems to want to overwrite the trapsink parameters for the systems it encounters during discovery.
    - I have about 10 administrative username/password combinations in use across this infrastructure. Is there a more efficient way to get HP SIM to do the discovery, or to break discovery into manageable chunks?
    - In terms of general workflow, do people typically install the HP Management Agents during the initial OS deployment (e.g. a kickstart post script) or afterwards from HP SIM?
    - Is HP SIM too thick/fat to be an inventory tool? I can't tell if it's meant to be used standalone or alongside other monitoring products.
    - Since the majority of the systems I'm trying to track are the ones running Gentoo (in order to plan the move to CentOS), is there any way for HP SIM to extract system model information from them (like dmidecode does)?
    - I have systems here where I may have an SSH key established, but not direct user or login access. Is there a way for me to import an SSH private/public key pair into HP SIM to reach out to the servers that can't accept standard credentials?
    - There are a handful of sites where I have inconsistent access or a double-NAT situation. I may be able to poke a server, but it may not be able to find its way back to the management system. Is there a workaround for this?
    - The certificate configuration for HP SIM seems complicated. What is the preferred setup for trust between systems?

    I'd also appreciate any notes or recommendations on using this product. Or, if there's a better way to do this, I'd like to know.
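
    For the interim hardware assessment of the Gentoo boxes, something far lighter than SIM may do, since dmidecode can be run over SSH with the existing keys. A rough sketch (the hostnames file, key path and CSV format are assumptions, not anything SIM-specific):

        #!/bin/sh
        # collect model and serial from each host listed in hosts.txt
        while read -r host; do
            printf '%s,' "$host"
            ssh -i ~/.ssh/inventory_key -o BatchMode=yes "root@$host" \
                'echo "$(dmidecode -s system-product-name),$(dmidecode -s system-serial-number)"'
        done < hosts.txt > inventory.csv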

    Read the article

  • Pecl install ssh2, make failed

    - by user28259
    Hi! I've been trying really hard for two hours to install ssh2 with pecl, but all I get is:

        /bin/sh /root/ssh2-0.11.0/libtool --mode=compile cc -I. -I/root/ssh2-0.11.0 -DPHP_ATOM_INC -I/root/ssh2-0.11.0/include -I/root/ssh2-0.11.0/main -I/root/ssh2-0.11.0 -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/ssh2-0.11.0/ssh2.c -o ssh2.lo
        mkdir .libs
        cc -I. -I/root/ssh2-0.11.0 -DPHP_ATOM_INC -I/root/ssh2-0.11.0/include -I/root/ssh2-0.11.0/main -I/root/ssh2-0.11.0 -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/ssh2-0.11.0/ssh2.c -fPIC -DPIC -o .libs/ssh2.o
        /root/ssh2-0.11.0/ssh2.c:52: error: duplicate 'static'
        /root/ssh2-0.11.0/ssh2.c: In function 'zif_ssh2_methods_negotiated':
        /root/ssh2-0.11.0/ssh2.c:503: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *'
        (the same warning/note pair repeats for ssh2.c lines 504, 508-511 and 516-519)
        /root/ssh2-0.11.0/ssh2.c: In function 'zif_ssh2_publickey_add':
        /root/ssh2-0.11.0/ssh2.c:1045: warning: passing argument 1 of '_efree' discards qualifiers from pointer target type
        /usr/include/php/Zend/zend_alloc.h:46: note: expected 'void *' but argument is of type 'const char *'
        /root/ssh2-0.11.0/ssh2.c: In function 'zif_ssh2_publickey_list':
        /root/ssh2-0.11.0/ssh2.c:1104: warning: passing argument 4 of 'add_assoc_stringl_ex' discards qualifiers from pointer target type
        /usr/include/php/Zend/zend_API.h:361: note: expected 'char *' but argument is of type 'const unsigned char *'
        (and the same pair again for ssh2.c line 1105)
        make: *** [ssh2.lo] Error 1

    I looked on Google a lot and found some patches, but they didn't work at all. If you think you can help, go ahead. Thanks!
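
    The build-stopping problem is only the "duplicate 'static'" error at ssh2.c line 52; the const-qualifier warnings are harmless. Newer PHP headers already emit static inside macros such as ZEND_BEGIN_ARG_INFO, so an extra static written in front of the macro in older extension sources triggers exactly this error. A possible workaround, assuming line 52 is such a declaration (worth eyeballing the file first):

        # drop the leading 'static' on line 52, then rebuild
        sed -i '52s/^static //' /root/ssh2-0.11.0/ssh2.c
        cd /root/ssh2-0.11.0 && make && make install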

    Read the article

  • How to export computers from Active Directory to XML using Powershell?

    - by CoDeRs
    I am trying to create a PowerShell script for Remote Desktop Connection Manager using the Active Directory module. My first thought was to get a list of computers in AD and parse them out into XML format similar to the OU structure that is in AD. I have no problem with that; the code below works, just not how I wanted. E.g., here is the array $OUs:

        Americas/Canada/Canada Computers/Desktops
        Americas/Canada/Canada Computers/Laptops
        Americas/Canada/Canada Computers/Virtual Computers
        Americas/USA/USA Computers/Laptops
        Computers
        Disabled Accounts
        Domain Controllers
        EMEA/UK/UK Computers/Desktops
        EMEA/UK/UK Computers/Laptops
        Outside Sales and Service/Laptops
        Servers

    I wanted to have the basic XML structured like this:

        Americas
            Canada
                Canada Computers
                    Desktops
                    Laptops
                    Virtual Computers
            USA
                USA Computers
                    Laptops
        Computers
        Disabled Accounts
        Domain Controllers
        EMEA
            UK
                UK Computers
                    Desktops
                    Laptops
        Outside Sales and Service
            Laptops
        Servers

    However, if you run the code below, it does not nest the next string in the array under the common ancestors; it restarts from the beginning each time, duplicating the parents:

        Americas
            Canada
                Canada Computers
                    Desktops
        Americas
            Canada
                Canada Computers
                    Laptops
        Americas
            Canada
                Canada Computers
                    Virtual Computers
        Americas
            USA
                USA Computers
                    Laptops

    RDCMGenerator.ps1:

        #Importing Microsoft`s PowerShell-module for administering ActiveDirectory
        Import-Module ActiveDirectory

        #Initial variables
        $OUs = @()
        $RDCMVer = "2.2"
        $userName = "domain\username"
        $password = "Hashed Password+"
        $Path = "$env:temp\test.xml"
        $allComputers = Get-ADComputer -LDAPFilter "(OperatingSystem=*)" -Properties Name,Description,CanonicalName | Sort-Object CanonicalName | select Name,Description,CanonicalName
        $allOUObjects = $allComputers | Foreach {"$($_.CanonicalName)"}

        Function Initialize-XML{
            ##<RDCMan schemaVersion="1">
            $xmlWriter.WriteStartElement('RDCMan'); $XmlWriter.WriteAttributeString('schemaVersion', '1')
            $xmlWriter.WriteElementString('version',$RDCMVer)
            $xmlWriter.WriteStartElement('file')
            $xmlWriter.WriteStartElement('properties')
            $xmlWriter.WriteElementString('name',$env:userdomain)
            $xmlWriter.WriteElementString('expanded','true')
            $xmlWriter.WriteElementString('comment','')
            $xmlWriter.WriteStartElement('logonCredentials'); $XmlWriter.WriteAttributeString('inherit', 'None')
            $xmlWriter.WriteElementString('userName',$userName)
            $xmlWriter.WriteElementString('domain',$env:userdomain)
            $xmlWriter.WriteStartElement('password'); $XmlWriter.WriteAttributeString('storeAsClearText', 'false'); $XmlWriter.WriteRaw($password); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('connectionSettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('gatewaySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('remoteDesktop'); $XmlWriter.WriteAttributeString('inherit', 'None')
            $xmlWriter.WriteElementString('size','1024 x 768')
            $xmlWriter.WriteElementString('sameSizeAsClientArea','True')
            $xmlWriter.WriteElementString('fullScreen','False')
            $xmlWriter.WriteElementString('colorDepth','32')
            $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('localResources'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('securitySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('displaySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteEndElement()
        }

        Function Create-Group ($groupName){
            #Start Group
            $xmlWriter.WriteStartElement('properties')
            $xmlWriter.WriteElementString('name',$groupName)
            $xmlWriter.WriteElementString('expanded','true')
            $xmlWriter.WriteElementString('comment','')
            $xmlWriter.WriteStartElement('logonCredentials'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('connectionSettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('gatewaySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('remoteDesktop'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('localResources'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('securitySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('displaySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteEndElement()
        }

        Function Create-Server ($computerName, $computerDescription) {
            #Start Server
            $xmlWriter.WriteStartElement('server')
            $xmlWriter.WriteElementString('name',$computerName)
            $xmlWriter.WriteElementString('displayName',$computerDescription)
            $xmlWriter.WriteElementString('comment','')
            $xmlWriter.WriteStartElement('logonCredentials'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('connectionSettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('gatewaySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('remoteDesktop'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('localResources'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('securitySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteStartElement('displaySettings'); $XmlWriter.WriteAttributeString('inherit', 'FromParent'); $xmlWriter.WriteEndElement()
            $xmlWriter.WriteEndElement()
            #Stop Server
        }

        Function Close-XML {
            $xmlWriter.WriteEndElement()
            $xmlWriter.WriteEndElement()
            # finalize the document:
            $xmlWriter.Flush()
            $xmlWriter.Close()
            notepad $path
        }

        #Strip out Domain and Computer Name from CanonicalName
        foreach($OU in $allOUObjects){
            $newSplit = $OU.split("/")
            $rebildOU = ""
            for($i=1; $i -le ($newSplit.count - 2); $i++){
                $rebildOU += $newSplit[$i] + "/"
            }
            $OUs += $rebildOU.substring(0,($rebildOU.length - 1))
        }

        #Remove Duplicate OU's
        $OUs = $OUs | select -uniq
        #$OUs

        # get an XMLTextWriter to create the XML
        $XmlWriter = New-Object System.XMl.XmlTextWriter($Path,$UTF8)

        # choose a pretty formatting:
        $xmlWriter.Formatting = 'Indented'
        $xmlWriter.Indentation = 1
        $XmlWriter.IndentChar = "`t"

        # write the header
        $xmlWriter.WriteStartDocument()
        # # 'encoding', 'utf-8' How?
        # # set XSL statements

        #Initialize Pre-Defined XML
        Initialize-XML

        #########################################################
        # Start Loop for each OU-Path that has a computer in it
        #########################################################
        foreach ($OU in $OUs){
            $totalGroupName = ""                       #Create / Reset Total OU-Path Completed
            $OU.split("/") | foreach {                 #Split the OU-Path into individual OU's
                $groupName = "$_"                      #Current OU
                $totalGroupName += $groupName + "/"    #Total OU-Path Completed
                $xmlWriter.WriteStartElement('group')  #Start new XML Group
                Create-Group $groupName               #Call function to create XML Group
                ################################################
                # Start Loop for each Computer in $allComputers
                ################################################
                foreach($computer in $allComputers){
                    $computerOU = $computer.CanonicalName          #Set the computer's OU-Path
                    $OUSplit = $computerOU.split("/")              #Split the OU-Path
                    $rebiltOU = ""                                 #Create / Reset the stripped OU-Path
                    for($i=1; $i -le ($OUSplit.count - 2); $i++){  #Strip out the domain and computer name
                        $rebiltOU += $OUSplit[$i] + "/"            #Rebuild the stripped OU-Path
                    }
                    if ($rebiltOU -eq $totalGroupName){            #Compare the current OU-Path with the computer's stripped OU-Path
                        $computerName = $computer.Name             #Set the computer name
                        $computerDescription = $computerName + " - " + $computer.Description  #Set the computer description
                        Create-Server $computerName $computerDescription  #Call function to create XML Server
                    }
                }
            }
            ###################################################
            # Start Loop to close out XML Groups created above
            ###################################################
            $totalGroupName.split("/") | foreach {
                if ($_ -ne "" ){
                    $xmlWriter.WriteEndElement()    #End Group
                }
            }
        }
        Close-XML
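
    The duplication happens because the outer loop opens and closes the full group chain for every OU path, instead of keeping common ancestors open across consecutive paths. An untested sketch of one way to fix it (assuming $OUs is sorted so sibling paths are adjacent, which the Sort-Object above should give you): compare each path with the previous one, close only the elements that differ, and open only the new ones.

        $prev = @()
        foreach ($OU in $OUs) {
            $parts = $OU.Split('/')
            # length of the common prefix with the previous path
            $i = 0
            while ($i -lt $prev.Count -and $i -lt $parts.Count -and $prev[$i] -eq $parts[$i]) { $i++ }
            # close the groups that are no longer on the current path
            for ($j = $prev.Count - 1; $j -ge $i; $j--) { $xmlWriter.WriteEndElement() }
            # open the groups that are new on the current path
            for ($j = $i; $j -lt $parts.Count; $j++) {
                $xmlWriter.WriteStartElement('group')
                Create-Group $parts[$j]
            }
            # ...emit Create-Server for computers whose stripped OU path equals $OU, as in the inner loop above...
            $prev = $parts
        }
        # close whatever is still open after the last path
        for ($j = $prev.Count - 1; $j -ge 0; $j--) { $xmlWriter.WriteEndElement() }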

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging must not lose any log messages if any single server goes offline, or if an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, though S3 is a good place as well. It should also be realtime, so that messages arrive in two availability zones within seconds of their generation. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around.

    I have already reviewed a few solutions, listed here:

    - Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in realtime. However, the logs of the two servers would need to be joined, deduplicating all the messages delivered to both. This could be done by adding a unique id to each message on the sending side and then writing some manual deduplication runs over the logfiles. I haven't found an easy solution to the duplication problem.
    - Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best in this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best candidate that would seem to handle all the cases is Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part for something that shouldn't be terribly difficult.
    - Logstash to ElasticSearch: I could run Logstash on all the servers, keep all the filtering and processing rules on the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should give me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure whether the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.
    - rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode, and log rotations mess things up unless I'm careful.
    - rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, RELP itself might also duplicate some messages, and this would only handle the things that log via syslog.

    None of these solutions seem terribly good, and they still have many unknowns, so I am asking for more information here from people who have set up centralized reliable logging as to what the best tools are to achieve that goal.
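
    For the syslog portion of the RELP option, the client-side rsyslog configuration the question is alluding to looks roughly like this (addresses, ports and queue names are placeholders; RELP gives application-level acknowledgements, and the disk-assisted queues cover collector outages):

        # load the RELP output module (rsyslog-relp package)
        module(load="omrelp")

        # forward everything to two collectors, one per AZ, each with its
        # own disk-assisted queue that survives restarts
        action(type="omrelp" target="10.0.1.10" port="2514"
               queue.type="LinkedList" queue.filename="relp_az1"
               queue.saveOnShutdown="on" action.resumeRetryCount="-1")
        action(type="omrelp" target="10.0.2.10" port="2514"
               queue.type="LinkedList" queue.filename="relp_az2"
               queue.saveOnShutdown="on" action.resumeRetryCount="-1")

    This does nothing about deduplicating the two collectors' copies, which, as the question notes, is the hard part of any fan-out design.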

    Read the article

  • Ubuntu 12 crashed and took down network

    - by Leopd
    We recently set up a new Ubuntu 12.04 LTS server on our network. It's not fully configured, so it's not doing much beyond sshd and a default apache2 install. But this evening it appears to have crashed: it wasn't responding to the network or the keyboard. The worst part is, it took down the entire network. My knowledge of the network stack below OSI layer 3 is very limited, so the rest confuses me.

    When this machine was physically connected to the network, no other machine could connect to the outside internet. While things were broken, running arp showed that our gateway's IP address (10.0.1.1) was listed as "invalid." Unplugging the server from the network fixed the problem, and plugging it back in broke it again. So the crashed server was advertising itself as owning the gateway's IP address? There's nothing at all in syslog during the time it was causing problems. Any ideas about how to figure out what went wrong, or what we can do to prevent it from happening again? I'm hesitant to even put the machine back on the network right now.

    Update: It crashed again, and I ran tcpdump -penn arp (thanks bahamat!) for several minutes and got this (timestamps and duplicate lines removed):

        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.191, length 46
        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.44 tell 10.0.2.191, length 46
        60:d8:19:d4:71:d6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.125, length 46
        d4:9a:20:04:e9:78 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.1.1 tell 192.168.1.100, length 28

    Update 2: When the network is functioning properly, arping -c4 10.0.1.1 returns this:

        ARPING 10.0.1.1
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=0 time=267.982 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=1 time=422.955 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=2 time=299.215 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=3 time=366.926 usec
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 4 packets received, 0% unanswered (0 extra)

    When the bad server is plugged in, arping -c4 10.0.1.1 returns:

        ARPING 10.0.1.1
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 0 packets received, 100% unanswered (0 extra)

    Context: 10.0.x.x is the main subnet; 10.0.1.1 is the main internet gateway; 10.0.1.44 is a printer; 10.0.2.* devices are all laptops / workstations. I have no idea what's using the 192.168.x.x subnet; your guesses are at least as good as mine. A VM on a workstation? A misconfigured WAP? Somebody re-sharing wifi? A machine that failed to DHCP? The offending Ubuntu server's MAC address ends in cd:80, so it isn't listed in the dump. It should DHCP to 10.0.3.3. Thanks for any help. This ARP stuff is all voodoo to me. Packets just go to IP addresses, right? ;)
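
    Until the root cause turns up, one stopgap worth considering on affected clients is pinning the gateway's ARP entry, so a bogus reply cannot displace it. A sketch, using the gateway MAC visible in the healthy arping output above (it must be applied per machine and will not survive a reboot unless made persistent):

        # pin 10.0.1.1 to its known-good hardware address
        sudo arp -s 10.0.1.1 c0:c1:c0:77:25:8e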

    Read the article

  • synchronization of file locations between two machines

    - by intuited
    Although similar threads have been asked on this site and its siblings before, I've not managed to glean the answer to this persistent question. Any help is much appreciated.

    The situation: I've got two laptops; both contain a ton of music. Sometimes I move these music files to different locations, or change the metadata in them, or convert them to a different format. I might do any of these things on either machine. I rarely do all of them at once; i.e., it's unlikely that I'll convert a file's format and move it to a different location all in one go. I'd like to be able to synchronize these changes without having to sift through everything that was renamed or moved.

    I'm familiar with rsync, but I find it inadequate, because:

    - although it can compute checksums, it doesn't have any way to store them, so if a file differs it can't figure out which side it changed on;
    - this also means it can't attempt to match a missing file to a new one with the same checksum (i.e. a move);
    - with checksumming on, it takes an epoch to sync a large repository, and it still doesn't use checksums intelligently: I'd like it to check the checksum only when the filesize and date are the same, but IIRC it checksums files even when the sizes differ;
    - it's not able to use file metadata as a means of file comparison (this is sort of a wishlist item, but it seems doable).

    I've also looked into rsnapshot, but its requirement to create a full backup is impractical in this situation. I don't need a backup; I just need a record of what file with each hash was where, and when. Unison seems like it might be able to do something vaguely along these lines, but I'm loath to spend hours wading through its details only to discover that it's sadly lacking. Plus, it's fun asking questions on here.

    What I'd like is a tool that does something along these lines:

    - keeps track of file checksums, or of actual renames, possibly using inotify to greatly reduce resource consumption/latency;
    - stores a database containing this info, along with other pertinencies like the file format and metadata, the actual inode, the filename history, etc.;
    - uses this info to provide more intelligent synchronization with a counterpart on the other side. For example: if a file has been converted from flac to ogg but kept the same base filename, or the same metadata, it should be able to send the new version over, and the other side should delete the original. (Probably it should actually sequester it somewhere in case they or you screwed up, but that's a detail.) And when the transaction is done, the state is logged so that the next time the two interact they can work out their differences.

    Maybe all this metadata stuff is a fancy pipe dream. I would actually be pretty happy if there were something out there that could just use checksums in an intelligent way. This would be sort of like having the intelligence of something like git, minus the need to duplicate data in an index/backup/etc. (and branching, and checkouts, and all the other great stuff RCSs do; basically, fast-forward commit pushes are all I want, with maybe the option to roll back). So is there something out there that can do this? If not, can someone suggest a good way to start making it?
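
    As a crude approximation of the checksum-database idea, a content manifest per machine already gets you move detection: identical hashes with different paths are renames, and hashes present on only one side are new or changed content. A sketch (paths are placeholders, and this ignores tag metadata entirely):

        # build a sorted hash->path manifest of the music tree on each laptop
        find ~/music -type f -print0 | xargs -0 sha1sum | sort > manifest.$(hostname)

        # after copying the manifests together: hashes present on only one
        # side are genuinely new/changed content (the manifests are sorted
        # by hash, so the cut output is already comm-compatible)
        comm -3 <(cut -d' ' -f1 manifest.laptopA) <(cut -d' ' -f1 manifest.laptopB)

        # for any "missing" file, grep its hash in the other manifest to see
        # whether the same content simply lives under a different path there
        grep "^$hash " manifest.laptopB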

    Read the article

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses; this will be a long-ish one.

    First, what I'm doing: I'm building a web-app interface for some particularly slow TCP devices. Opening a socket to them takes 200ms, and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent TCP socket, which cuts the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues I haven't been able to resolve after two days of searching, reading logs and modifying settings. I have somewhat narrowed it down, though.

    Setup: Ubuntu 13.04 x64 Server (fully updated) on Linode, PHP 5.5.0-6~raring+1 (fpm-fcgi), nginx/1.5.2.

    Relevant config. nginx:

        worker_processes 4;

    php-fpm (pool.d):

        pm = dynamic
        pm.max_children = 2
        pm.start_servers = 2
        pm.min_spare_servers = 2

    Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple of seconds to the script. The first takes a while to open the socket connection and returns with the data in about 500ms; the second returns data in 300ms (yay, it's re-using the socket); the third also succeeds in about 300ms; the fourth request = 502 Bad Gateway, same with the fifth. The sixth request once again returns data, except now it took 500ms again. The pattern repeats for several cycles, after which it settles in with every 4 requests producing 2x 502 Bad Gateways and 2x 500ms data responses. If I double all the fpm pool values and run 4x php-fpm processes, the cycle settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away, but then every request is 500ms.

    What I suspect is happening: the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left; as they error out, maybe they are restarted and become available on the next round-robin loop, but the socket dies with the process. I haven't yet checked the slowlog, but the nginx error log shows lots of this:

        *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:...

    All the suggestions on the internet regarding nginx/php-fpm/502 Bad Gateway relate to high load or fcgi_pass misconfiguration. That is not the case here. Increasing buffers/sizes, changing timeouts, switching from a unix socket to a TCP socket for fcgi_pass, upping connection limits on the system: none of that applies here. I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns.

    For the PHP script, I'm using stream_socket_client() with the STREAM_CLIENT_PERSISTENT flag, a while/stream_select() loop to detect socket data, and fread($sock, 4096) to grab the data. I don't call fclose(), obviously.

    If anyone has additional questions, or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it. Some useful links:

        Nginx + php-fpm - recv() error
        Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)
        Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts
        http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/
        http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket
        http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3
        http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive
        http://php.net/manual/en/install.fpm.configuration.php
        https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1
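
    For readers following along, a condensed sketch of the socket pattern the question describes (host, port and device protocol are placeholders; this is not the asker's exact code):

        <?php
        // reuse a TCP connection across requests within the same FPM worker
        $sock = stream_socket_client('tcp://10.0.0.50:4000', $errno, $errstr, 5,
            STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT);
        if ($sock === false) { die("connect failed: $errstr"); }

        fwrite($sock, "STATUS\r\n");   // whatever the slow device's protocol expects
        $data = '';
        $read = array($sock); $w = null; $e = null;
        while (stream_select($read, $w, $e, 0, 300000)) {   // poll in 300ms slices
            $chunk = fread($sock, 4096);
            if ($chunk === false || $chunk === '') { break; }
            $data .= $chunk;
            $read = array($sock);
        }
        // deliberately no fclose(): the socket must outlive this request

    Note that a persistent stream is per-process: each php-fpm child ends up holding its own copy, which is consistent with the failure pattern tracking the pool size above.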

    Read the article

  • iCloud stuff stops working while connected to OpenVPN

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is the Viscosity client on Mac OS X 10.8.2, and after some testing we can rule out the client as being part of the problem. Everything has been working fine except for Apple's iCloud stuff. Web surfing, email, FTP, NNTP, and Skype are all working as expected; it's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working: I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working. Connect again, iCloud stops working.

    Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over: Client -> work VPN -> my OpenVPN box -> Internet, or Client -> AirPort Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet. Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server.

    I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't appeared yet and seems to be stuck in their moderation queue. I tried the freenode IRC channels, but nobody is awake, so here I am. I have Googled extensively for this and can find nothing related. Help me get iCloud stuff working again! (I tried Server Fault; it was closed as off-topic. I'm trying here and the Unix site as well, since this is a more general audience that might know more about OpenVPN, based on the number of questions I see asked about it.)

    EDIT: I have also tried upgrading to Version: 2.3-beta1-debian0; the issue persists. I removed all iptables rules except the flushes, left this rule: iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source (server ip), and added iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT. Still, nothing works. I can see traffic in tcpdump on the server if I watch the tunnel:

        20:03:48.702835 IP nk11p01st-courier105-bz.push.apple.com.5223 > 10.9.8.6.60772: Flags [F.], seq 2635, ack 1218, win 76, options [nop,nop,TS val 914984811 ecr 745921298], length 0
        20:03:48.911244 IP 10.9.8.6.60772 > nk11p01st-courier105-bz.push.apple.com.5223: Flags [R], seq 3621143451, win 0, length 0

    But still, no push messages/notifications are ever delivered. :/

    EDIT: Further testing indicates that it might actually be the client after all.
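
    One server-side avenue the config doesn't rule out (speculative, but cheap to test): Apple's push service holds a single long-lived, mostly idle TCP connection to port 5223, and if the SNAT conntrack entry for that idle flow times out on the VPS, pushes are silently dropped until the client notices, which would fit the FIN/RST exchange in the capture above:

        # see how long the NAT table keeps idle established TCP flows around
        sysctl net.netfilter.nf_conntrack_tcp_timeout_established
        # raise it if the default has been trimmed down (value in seconds)
        sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=432000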

    Read the article

  • Configuring dhcp module in FreeRadius (3.0.2 - Centos 6.5)

    - by mixja
    I am using the REST module to authorise a DHCP request. I would like to send an explicit DHCP NAK if the authorisation fails; however, the DHCP module seems to return immediately when there is a failure and just ignores the DHCP request without any response. Here is my DHCP module configuration. If rest.authorize is successful, the if (ok) control block is hit, but if rest.authorize fails, the if (fail) block is never hit:

        dhcp DHCP-Discover {
            rest.authorize
            if (fail) {
                update reply {
                    DHCP-Message-Type = DHCP-Nak
                }
            }
            if (ok) {
                update reply {
                    DHCP-Message-Type = DHCP-Offer
                }
                update reply {
                    DHCP-Domain-Name-Server = x.x.x.x
                    DHCP-Domain-Name-Server = x.x.x.x
                    DHCP-Subnet-Mask = 255.255.255.0
                    DHCP-Router-Address = x.x.x.x
                    DHCP-IP-Address-Lease-Time = 3600
                    DHCP-DHCP-Server-Identifier = x.x.x.x
                }
                mac2ip
            }
        }

    Below is the output after a 401 Unauthorized is received. I want to achieve a temporary block on DHCP for a specified (small) period of time. However, FreeRADIUS ignores duplicate requests for the same DHCP transaction, meaning DHCP on the client is blocked until it begins a new transaction. If a DHCP NAK can be sent, the DHCP client will initiate a new transaction after each NAK (i.e. a new DHCP Discover), meaning FreeRADIUS will process each DHCP Discover from the client, and the block will be removed much closer to the desired block time.

        Tue Jun 3 03:00:57 2014 : Debug: (3) rest : Sending HTTP GET to "http://xxxxxx//api/v1/dhcp/80%3Aea%3A96%3A2a%3Ab6%3Aaa"
        Tue Jun 3 03:00:57 2014 : Debug: (3) rest : Processing response header
        Tue Jun 3 03:00:57 2014 : Debug: (3) rest : Status : 401 (Unauthorized)
        Tue Jun 3 03:00:57 2014 : Debug: (3) rest : Skipping attribute processing, no body data received
        Tue Jun 3 03:00:57 2014 : Debug: rlm_rest (rest): Released connection (4)
        Tue Jun 3 03:00:57 2014 : Debug: (3) modsingle[authorize]: returned from rest (rlm_rest) for request 3
        Tue Jun 3 03:00:57 2014 : Debug: (3) [rest.authorize] = fail
        Tue Jun 3 03:00:57 2014 : Debug: (3) } # dhcp DHCP-Discover = fail
        Tue Jun 3 03:00:57 2014 : Debug: (3) Finished request 3.
        Tue Jun 3 03:00:57 2014 : Debug: Waking up in 0.2 seconds.
        Tue Jun 3 03:00:58 2014 : Debug: Waking up in 4.6 seconds.
        Received DHCP-Discover of id 7b0fb2de from 172.19.0.9:67 to 172.19.0.12:67
        Tue Jun 3 03:00:59 2014 : Debug: (3) No reply. Ignoring retransmit.
        Tue Jun 3 03:00:59 2014 : Debug: Waking up in 2.9 seconds.
        Received DHCP-Discover of id 7b0fb2de from 172.19.0.9:67 to 172.19.0.12:67
        Tue Jun 3 03:01:02 2014 : Debug: (3) No reply. Ignoring retransmit.
        Tue Jun 3 03:01:02 2014 : Debug: Waking up in 0.4 seconds.
        Tue Jun 3 03:01:02 2014 : Debug: (2) Cleaning up request packet ID 2064626397 with timestamp +56
        Tue Jun 3 03:01:02 2014 : Debug: Waking up in 1999991.0 seconds.
        Received DHCP-Discover of id 7b0fb2de from 172.19.0.9:67 to 172.19.0.12:67
        Tue Jun 3 03:01:06 2014 : Debug: (3) No reply. Ignoring retransmit.
        Tue Jun 3 03:01:06 2014 : Debug: Waking up in 3999983.1 seconds.
        Received DHCP-Discover of id 7b0fb2de from 172.19.0.9:67 to 172.19.0.12:67
        Tue Jun 3 03:01:15 2014 : Debug: (3) No reply. Ignoring retransmit.
        Tue Jun 3 03:01:15 2014 : Debug: Waking up in 7999966.3 seconds.
        Received DHCP-Discover of id 7b0fb2de from 172.19.0.9:67 to 172.19.0.12:67
        Tue Jun 3 03:01:23 2014 : Debug: (3) No reply. Ignoring retransmit.
        Tue Jun 3 03:01:23 2014 : Debug: Waking up in 15999942.1 seconds.
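
    The "} # dhcp DHCP-Discover = fail" line shows the whole section returning fail as soon as the module does, before the if (fail) check ever runs. One thing that may be worth testing (a sketch based on FreeRADIUS 3 unlang's per-call actions; verify the syntax against man unlang for your build): overriding the action for fail so it becomes a priority rather than an immediate return, letting processing continue into the conditional:

        dhcp DHCP-Discover {
            rest.authorize {
                fail = 1
            }
            if (fail) {
                update reply {
                    DHCP-Message-Type = DHCP-Nak
                }
            }
            # ...the existing if (ok) block stays as-is...
        }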

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server has at least 2 IP addresses, one for the public interface and another for the private; the servers that have SSL websites on them have more. We also have virtual servers, configured similarly.

    Private network: the private range is currently just used for backups and monitoring. It's a gigabit port, and interface usage does not usually get very high. There are other technologies we're considering that would use this port:

    - iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network)
    - VPN to get access to the private range (something I'd rather avoid)
    - dedicated database servers
    - LDAP
    - centralized configuration (like Puppet)
    - centralized logging

    We don't have any private addresses in our DNS records (only public addresses). For our servers to use the correct IP address for the right interface (and not hard-code the IP address) probably requires setting up a private DNS server, so now we add 2 different DNS entries to 2 different systems.

    Public network: our public range has a variety of services, including web, email, and FTP. There is a hardware firewall between our network and the "public" network, and we have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, ssh, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs on a dedicated 20 Mbps link. There are a couple of legacy servers with Fast Ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports; the more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now).

    IPv6: I want to get an IPv6 prefix from our ISP so that at least every "server" has one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now; adding the public IPv6 addresses would make it three.

    Just use IPv6? I'm thinking about dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, I'd use the newly freed interfaces to create a trunk. This has the advantage that either the public or the private traffic can exceed 1 Gbps if needed. The traffic on each interface is already analyzed regularly to predict future bandwidth use, and in the rare instances where bandwidth unexpectedly peaks, QoS can ensure that traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing an entry for every private address: we may still have private DNS (or just LDAP), but it will be much more limited in scope, with fewer entries to duplicate.

    Summary: I'm trying to make this network as "simple" as possible while keeping it reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to be the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one interface (more trunked if needed):

    - Are there any technical disadvantages (limitations, buffers, scalability)?
    - Are there any other security considerations (aside from the firewalls mentioned above)?
    - Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet?
    - Is there typical software for setting up a Linux network that doesn't have IPv6 support yet (logging, LDAP, Puppet)?
    - Some other thing I didn't consider?
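
    For concreteness, since several of the questions come down to what dual-stack addressing looks like on a host, here is a minimal Debian/Ubuntu /etc/network/interfaces sketch (the addresses use documentation prefixes and are placeholders, not recommendations):

        # public IPv4 plus IPv6 on the same interface
        auto eth0
        iface eth0 inet static
            address 203.0.113.10
            netmask 255.255.255.0
            gateway 203.0.113.1

        iface eth0 inet6 static
            address 2001:db8:100::10
            netmask 64
            gateway 2001:db8:100::1

    Bonding the spare gigabit ports into a trunk later only changes the interface name (bond0 instead of eth0) plus the bonding options; the addressing stanzas stay the same.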

    Read the article

  • Find/Replace Paragraph End (^13) in Microsoft Word 2007 Merges Paragraphs

    - by Mike Blyth
    I need to replace a target at the beginning of lines with something else. Without wildcards, I can say to replace "^pTarget" with "^pReplacement". With wildcards enabled, I use replace "^13Target" with "^13Replacement". The replacement is successful, except that the paragraph is now merged with the previous one in a strange way: the end-paragraph mark is still in place and the paragraph begins on a new line, but

    - triple-clicking to select the paragraph selects both the changed paragraph and the one above;
    - in a macro, starting in the paragraph above and extending the selection to the end of paragraph causes both paragraphs to be selected;
    - inter-paragraph spacing disappears between the changed paragraph and the one above.

    In essence, the paragraph boundary has been removed, although the end-paragraph mark is still shown. To duplicate this problem, make a new document with Line 1, Line 2, Line 3 as separate paragraphs, then find and replace (with wildcards on) "^13" with "^13". If your result is the same as mine, you will see the problems listed above.

    I can work around this in the usual way of replacing ^p with something else first, e.g. "^p" = "$", then "$target" = "$replacement", but I'm curious about what's going on. (This is using Word 2007 on Windows 7.)

    I don't know Word XML, but the XML output seems to correspond with the above: replacing ^13 with ^13 moves the paragraphs together in almost the same way as replacing end-paragraph with end-line (^p = ^l). Here is the relevant XML of the original "Line 1, Line 2, Line 3" in separate paragraphs:

        <w:p w:rsidR="00BB3032" w:rsidRDefault="00027252">
          <w:r><w:t>Line 1</w:t></w:r>
        </w:p>
        <w:p w:rsidR="00027252" w:rsidRDefault="00027252">
          <w:r><w:t>Line 2</w:t></w:r>
        </w:p>
        <w:p w:rsidR="00027252" w:rsidRDefault="00027252">
          <w:r><w:t>Line 3</w:t></w:r>
        </w:p>

    Now after replacing ^13 with ^13 (note the single <w:p> with literal <w:cr/> runs inside it):

        <w:p w:rsidR="00027252" w:rsidRDefault="00027252">
          <w:r><w:t>Line 1</w:t></w:r>
          <w:r w:rsidR="00C57863"><w:cr/></w:r>
          <w:r><w:t>Line 2</w:t></w:r>
          <w:r w:rsidR="00C57863"><w:cr/></w:r>
          <w:r><w:t>Line 3</w:t></w:r>
          <w:r w:rsidR="00C57863"><w:cr/></w:r>
        </w:p>

    Now the original after replacing ^p with ^l (converting end-paragraph to end-line):

        <w:p w:rsidR="00027252" w:rsidRDefault="00027252">
          <w:r><w:t>Line 1</w:t></w:r>
          <w:r w:rsidR="00AC7B51"><w:br/></w:r>
          <w:r><w:t>Line 2</w:t></w:r>
          <w:r w:rsidR="00AC7B51"><w:br/></w:r>
          <w:r><w:t>Line 3</w:t></w:r>
          <w:r w:rsidR="00AC7B51"><w:br/></w:r>
        </w:p>

    Read the article

  • PHP, Apache and curl: Differences between Windows and Linux?

    - by beginner_
    I'm trying to run my PHP app on Ubuntu Server 11.10. This app works fine under Apache + PHP on Windows, and I have other applications that I can simply copy and paste between the two OSes and they work on both (those don't use cURL). However, this one uses the PHP library Tonic (RESTful web services) and makes use of the PHP cURL module. The issue is that I'm not getting an error message, which makes it impossible to find the problem.

    I (must) use NTLM authentication, and this is done with the AuthenNTLM Apache module:

        Order allow,deny
        Allow from all
        PerlAuthenHandler Apache2::AuthenNTLM
        AuthType ntlm
        AuthName "Protected Access"
        require valid-user
        PerlAddVar ntdomain "domainName server"
        PerlSetVar defaultdomain domainName
        PerlSetVar ntlmsemtimeout 2
        PerlSetVar ntlmdebug 1
        PerlSetVar splitdomainprefix 0

    All files that cURL needs to fetch override the AuthenNTLM authentication; since these files are only fetched by cURL from the same server, access can be limited to localhost:

        order deny,allow
        deny from all
        allow from 127.0.0.1
        Satisfy any

    Possible issues are:

    - NTLM auth isn't overridden for files requested through cURL (even though AllowOverride All is set)
    - curl works differently on Linux
    - other?

    This is the cURL call in question:

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_COOKIE, $strCookie);
        curl_setopt($ch, CURLOPT_URL, $baseUrl . $queryString);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $html = curl_exec($ch);
        curl_close($ch);

    The Apache log says:

        [error] Bad/Missing NTLM/Basic Authorization Header for /myApp/webservice/local/viewList.php

    But this directory should override NTLM authentication. Using curl from the command line on Windows to access the same resource, I get:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>406 Not Acceptable</title>
        </head><body>
        <h1>Not Acceptable</h1>
        <p>An appropriate representation of the requested resource /myApp/webservice/myResource could not be found on this server.</p>
        Available variants:
        <ul>
        <li><a href="myResource.php">myResource.php</a>, type application/x-httpd-php</li>
        </ul>
        <hr>
        <address>Apache/2.2.20 (Ubuntu) Server at localhost Port 80</address>
        </body></html>

    Note: this is a duplicate of http://stackoverflow.com/questions/9821979/php-curl-on-linux-what-is-the-difference-to-curl-on-windows, where it was suggested I post it here.

    EDIT: Please see "Ubuntu Server: Apache2 seems to attach .php to URI", where I discovered why it does not work; I still need help so the issue does not occur anymore.

    ANSWER: The issue is the default Apache configuration on Ubuntu:

        Options Indexes FollowSymLinks MultiViews

    MultiViews changes the request_uri from myResource to myResource.php. Solutions:

    - disable MultiViews in .htaccess: Options -MultiViews
    - remove MultiViews from the default config
    - rename the file, for example to myResourceClass

    I chose the last option because that should work regardless of configuration, and I only have 3 such files, so the change took about 30 seconds...
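
    For reference, a short sketch of the first two fixes (the directory path is an assumption); either form stops content negotiation from rewriting myResource to myResource.php:

        # .htaccess in the web service directory
        Options -MultiViews

        # or in the vhost / default config, drop MultiViews from the Options line
        <Directory /var/www/myApp>
            Options Indexes FollowSymLinks
        </Directory>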

    Read the article

< Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >