Search Results


  • EM12c: Using the LIST verb in emcli

    - by SubinDaniVarughese
    Many of us who use EM CLI to write scripts and automate our daily tasks should not miss out on the new list verb released with Oracle Enterprise Manager 12.1.0.3.0. The combination of list and Jython-based scripting support in EM CLI makes it easier to achieve automation for complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it's simply excellent!

    1. Multiple resources under a single verb:
    A resource can be a set of users or targets, etc. Using the list verb, you can retrieve information about a resource from the repository database. Here is an example which retrieves the list of administrators within EM.

    Standard mode:

        $ emcli list -resource="Administrators"

    Interactive mode (the output will be the same as standard mode):

        emcli> list(resource="Administrators")

    Script mode (the output will be the same as standard mode):

        $ emcli @myAdmin.py
        Enter password :  ******

    Contents of the myAdmin.py script:

        login()
        print list(resource="Administrators",jsonout=False).out()

    To get a list of all available resources, use:

        $ emcli list -help

    With every release of EM, more resources are being added to the list verb. If you have a resource which you feel would be valuable, go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

    2. Consistent formatting:
    It is possible to format the output of any resource consistently using these options:

    -columns
    This option is used to specify which columns should be shown in the output. Here is an example which shows the list of administrators and their account status:

        $ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"

    To get a list of columns in a resource, use:

        $ emcli list -resource="Administrators" -help

    You can also specify the width of each column. For example, here the column width of USER_TYPE is set to 20 and DEPARTMENT to 30:

        $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"

    This is useful if your terminal is too small or you need to fine-tune a list of specific columns for quick use or improved readability.

    -colsize
    This option is used to resize column widths. Here is the same example as above, but using -colsize to define the width of USER_TYPE as 20 and DEPARTMENT as 30:

        $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

    The existing standard EM CLI formatting options are also available in the list verb: -format="name:pretty" | -format="name:script" | -format="name:csv" | -noheader | -script. There are many uses depending on your needs. Have a look at the resources and the columns in each resource. Refer to the EM CLI book in the EM documentation for more information.
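    To tie the formatting options together in script mode, here is a minimal hedged sketch along the lines of the myAdmin.py example above. The script name is hypothetical, and the assumption that the -colsize verb option maps to a colsize= argument of the Jython list() function is mine; resource=, columns= and jsonout= are used exactly as in the examples above.

        # myAdminReport.py -- hypothetical helper script, run as: emcli @myAdminReport.py
        login()
        # Same Administrators resource as above, limited to three columns;
        # colsize= mirrors the -colsize verb option (an assumption, not confirmed by the docs)
        print list(resource="Administrators",
                   columns="USER_NAME,USER_TYPE,DEPARTMENT",
                   colsize="USER_TYPE:20,DEPARTMENT:30",
                   jsonout=False).out()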
    3. Search:
    Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource. This is similar to a SQL*Plus WHERE clause. The following operators are supported: =, !=, >, <, >=, <=, like, is (must be followed by null or not null).

    Here is an example which searches for all EM administrators in the Marketing department located in the USA:

        $ emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"

    Here is another example which shows all the named credentials created since a specific date (note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM):

        $ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"

    Some resources need a bind variable to be passed to produce output. A bind variable is created in the resource and then referenced in the command. For example, this command lists all the default preferred credentials for target type oracle_database:

        $ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"

    You can provide multiple bind variables. To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:

        $ emcli list -resource="PreferredCredentialsDefault" -help

    4. Secure access:
    When the list verb collects data, it displays only content to which the administrator currently logged into EM CLI has access. For example, consider this use case: AdminA has access only to TargetA. AdminA logs into EM CLI. Executing the list verb to get the list of all targets will show only TargetA.

    5. User-defined SQL:
    Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema. To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation; there is a chapter about Using Management Repository Views. It is always recommended to reference the documentation for the supported MGMT$ database views. Suppose you were to use an MGMT$ABC view that is not in that chapter: because the view is not documented and not supported, it might undergo a change in its structure, or in the data in it, during an upgrade. Using a supported view ensures that your scripts using -sql will continue working after an upgrade. Here's an example:

        $ emcli list -sql='select * from mgmt$target'

    6. JSON output support:
    JSON (JavaScript Object Notation) enables data to be displayed in a collection of name/value pairs; there is a lot of reading material about JSON online. As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment, and the developers requested that their lifecycle status be changed from Test to Production. This meant the admin had to go to the EM "All Targets" page, identify the set of 11.2 databases, and then go into each target database page and manually change the property to Production. Sounds easy, but this administrator had numerous targets, and the task is repeated for every release cycle. We told him there is an easier way to do this with a script, and that he can reuse the script whenever anyone wants to change a set of targets to a different lifecycle status.
    Here is a Jython script which uses list and JSON to change the LifeCycle Status property value of all 11.2 database targets. If you are new to scripting and Jython, I would suggest visiting the basic chapters in any Jython tutorial; understanding Jython is important to write the logic for your use case. If you are already writing scripts in Perl or shell, or know a programming language like Java, then you can easily understand the logic.

    Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

         1  from emcli import *
         2  search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
         3  if len(sys.argv) == 2:
         4     print login(username=sys.argv[0])
         5     l_prop_val_to_set = sys.argv[1]
         6     l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
         7     for target in l_targets.out()['data']:
         8        t_pn = 'LifeCycle Status'
         9        print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
        10        print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
        11  else:
        12     print "\n ERROR: Property value argument is missing"
        13     print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"

    You can download the script from here. I could not upload the file with a .py extension, so you need to rename the file to myScript.py before executing it using emcli.

    A line-by-line explanation for beginners:

    Line 1: Imports the emcli verbs as functions.
    Line 2: search_list is a variable to pass to the search option in the list verb. I am using escape characters for the single quotes. To pass more than one value for the same option in the list verb, define the values as above: comma-separated and surrounded by square brackets.
    Line 3: This is an "if" condition to ensure the user provides two arguments with the script; otherwise lines 12-13 print an error message and usage information.
    Line 4: Logging into EM. You can remove this if you have set up emcli with autologin. For more details about setup and autologin, please see the EM CLI book in the EM documentation.
    Line 5: l_prop_val_to_set is another variable. This is the property value to be set. Remember we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
    Line 6: Here the output of the list verb is stored in l_targets. In the list verb I am passing the resource as TargetProperties, search as the search_list variable, and only the three columns I need: TARGET_NAME, TARGET_TYPE and PROPERTY_NAME. I don't need the other columns for my task.
    Line 7: This is a for loop. The data in l_targets is available in JSON format. Using the for loop, each record will now be available in the target variable.
    Line 8: t_pn is the "LifeCycle Status" variable. If required, I could take this as an input as well and then use my script to change any target property. In this example, I just wanted to change the "LifeCycle Status".
    Line 9: This is a message informing the user that the script is setting the property value for the current target.
    Line 10: This line shows the set_target_property_value verb, which sets the value using the property_records option. Once it is set for a target, it moves to the next one. In my example, I am just showing three dbs, but the real use is when you have 20 or 50 targets.
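    As noted in the explanation of line 8, the property name itself could be taken as an argument, so the same script could change any target property. Here is a hedged sketch of that variant; the three-argument handling is my assumption and not part of the original script, while the verbs and the argument convention mirror the script above.

        # Hypothetical variant taking the property name as an argument:
        #   emcli @myScript2.py <username> <property name> <property value>
        from emcli import *
        search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE=\'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
        if len(sys.argv) == 3:
           print login(username=sys.argv[0])
           l_prop_name = sys.argv[1]          # e.g. 'LifeCycle Status'
           l_prop_val_to_set = sys.argv[2]    # e.g. 'Production'
           l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
           for target in l_targets.out()['data']:
              print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+l_prop_name+":"+l_prop_val_to_set)
        else:
           print "\n ERROR: Usage: emcli @myScript2.py <username> <property name> <property value>"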
    The original script is executed as:

        $ emcli @myScript.py subin Production

    The recommendation is to first test scripts before running them on a production system; we tested on a small set of targets and optimized the script for fewer lines of code and better messaging. For your quick reference, the resources available with the list verb in Enterprise Manager 12.1.0.4.0 can be shown with:

        $ emcli list -help

    Watch this space for more blog posts using the list verb and EM CLI scripting use cases. I hope you enjoyed reading this blog post and that it has helped you gain more information about the list verb. Happy Scripting!!

    Read the article

  • 12c - Utl_Call_Stack...

    - by noreply(at)blogger.com (Thomas Kyte)
    Over the next couple of months, I'll be writing about some cool new little features of Oracle Database 12c - things that might not make the front page of Oracle.com. I'm going to start with a new package - UTL_CALL_STACK.

    In the past, developers have had access to three functions to try to figure out "where the heck am I in my code": dbms_utility.format_call_stack, dbms_utility.format_error_backtrace and dbms_utility.format_error_stack. Now these routines, while useful, were of somewhat limited use. Let's look at the format_call_stack routine for a reason why. Here is a procedure that will just print out the current call stack for us:

        ops$tkyte%ORA12CR1> create or replace
          2  procedure Print_Call_Stack
          3  is
          4  begin
          5    DBMS_Output.Put_Line(DBMS_Utility.Format_Call_Stack());
          6  end;
          7  /
        Procedure created.

    Now, if we have a package - with nested functions and even duplicated function names:

        ops$tkyte%ORA12CR1> create or replace
          2  package body Pkg is
          3    procedure p
          4    is
          5      procedure q
          6      is
          7        procedure r
          8        is
          9          procedure p is
         10          begin
         11            Print_Call_Stack();
         12            raise program_error;
         13          end p;
         14        begin
         15          p();
         16        end r;
         17      begin
         18        r();
         19      end q;
         20    begin
         21      q();
         22    end p;
         23  end Pkg;
         24  /
        Package body created.

    When we execute the procedure PKG.P - we'll see as a result:

        ops$tkyte%ORA12CR1> exec pkg.p
        ----- PL/SQL Call Stack -----
          object      line  object
          handle    number  name
        0x6e891528         4  procedure OPS$TKYTE.PRINT_CALL_STACK
        0x6ec4a7c0        10  package body OPS$TKYTE.PKG
        0x6ec4a7c0        14  package body OPS$TKYTE.PKG
        0x6ec4a7c0        17  package body OPS$TKYTE.PKG
        0x6ec4a7c0        20  package body OPS$TKYTE.PKG
        0x76439070         1  anonymous block
        BEGIN pkg.p; END;

        *
        ERROR at line 1:
        ORA-06501: PL/SQL: program error
        ORA-06512: at "OPS$TKYTE.PKG", line 11
        ORA-06512: at "OPS$TKYTE.PKG", line 14
        ORA-06512: at "OPS$TKYTE.PKG", line 17
        ORA-06512: at "OPS$TKYTE.PKG", line 20
        ORA-06512: at line 1

    The first block above is the output from format_call_stack, whereas the trailing error message is what is returned to the client application (it would also be available to you via the format_error_backtrace API call). As you can see, it contains useful information, but to use it you would need to parse it - and that can be trickier than it seems. The format of those strings is not set in stone; they have changed over the years (I wrote the "who_am_i" and "who_called_me" functions by parsing these strings - trust me, they change over time!). Starting in 12c, we'll have structured access to the call stack and a series of API calls to interrogate this structure.
    I'm going to rewrite the print_call_stack function as follows:

        ops$tkyte%ORA12CR1> create or replace
          2  procedure Print_Call_Stack
          3  as
          4    Depth pls_integer := UTL_Call_Stack.Dynamic_Depth();
          5
          6    procedure headers
          7    is
          8    begin
          9        dbms_output.put_line( 'Lexical   Depth   Line    Name' );
         10        dbms_output.put_line( 'Depth             Number      ' );
         11        dbms_output.put_line( '-------   -----   ----    ----' );
         12    end headers;
         13    procedure print
         14    is
         15    begin
         16        headers;
         17        for j in reverse 1..Depth loop
         18          DBMS_Output.Put_Line(
         19            rpad( utl_call_stack.lexical_depth(j), 10 ) ||
         20            rpad( j, 7) ||
         21            rpad( To_Char(UTL_Call_Stack.Unit_Line(j), '99'), 9 ) ||
         22            UTL_Call_Stack.Concatenate_Subprogram
         23                       (UTL_Call_Stack.Subprogram(j)));
         24        end loop;
         25    end;
         26  begin
         27    print;
         28  end;
         29  /

    Here we are able to figure out what 'depth' we are in the code (utl_call_stack.dynamic_depth) and then walk up the stack using a loop. We will print out the lexical_depth, along with the line number within the unit we were executing, plus the unit name. And not just any unit name, but the fully qualified name, all of the way down to the subprogram name within a package. Not only that - but down to the subprogram name within a subprogram name within a subprogram name. For example - running the PKG.P procedure again results in:

        ops$tkyte%ORA12CR1> exec pkg.p
        Lexical   Depth   Line    Name
        Depth             Number
        -------   -----   ----    ----
        1         6       20      PKG.P
        2         5       17      PKG.P.Q
        3         4       14      PKG.P.Q.R
        4         3       10      PKG.P.Q.R.P
        0         2       26      PRINT_CALL_STACK
        1         1       17      PRINT_CALL_STACK.PRINT
        BEGIN pkg.p; END;

        *
        ERROR at line 1:
        ORA-06501: PL/SQL: program error
        ORA-06512: at "OPS$TKYTE.PKG", line 11
        ORA-06512: at "OPS$TKYTE.PKG", line 14
        ORA-06512: at "OPS$TKYTE.PKG", line 17
        ORA-06512: at "OPS$TKYTE.PKG", line 20
        ORA-06512: at line 1

    This time - we get much more than just a line number and a package name as we did previously with format_call_stack. We not only got the line number and package (unit) name - we got the names of the subprograms - we can see that P called Q called R called P as nested subprograms. Also note that we can see a 'truer' calling level with the lexical depth: we can see we "stepped" out of the package to call print_call_stack and that in turn called another nested subprogram. This new package will be a nice addition to everyone's error logging packages. Of course there are other functions in there to get owner names, the edition in effect when the code was executed and more. See UTL_CALL_STACK for all of the details.

    Read the article

  • New hire expectations... (Am I being unreasonable?)

    - by user295841
    I work for a very small custom software shop. We currently consist of me and my boss. My boss is an old FoxPro DOS developer and OOP makes him uncomfortable. He is planning on taking a back seat in the next few years to hopefully enjoy a "partial retirement". I will be taking over the day-to-day operations and we are now desperately looking for more help. We tried Monster.com, Dice.com, and others a few years ago when we started our search. We had no success. We have tried outsourcing overseas (total disaster), hiring kids right out of college (mostly a disaster, but that's where I came from), interns (good for them, not so good for us) and hiring laid-off "experienced" developers (there was a reason they were laid off). I have heard hiring practices discussed on podcasts, blogs, etc... and have tried a few. The "Fizz Buzz" test was a good one. One kid looked physically ill before he finally gave up. I think my problem is that I have grown so much as a developer since I started here that I now have a high standard. I hear/read very intelligent people's podcasts and blogs and I know that there are lots of people out there that can do the job. I don't want to settle for less than a "good" developer. Perhaps my expectations are unreasonable. I expect any good developer (entry level or experienced) to be billable (at least paying their own wage) in under one month. I expect any good developer to be able to be productive (at least dangerous) in any language or technology with only a few days of research/training. I expect any good developer to be able to take a project from initial customer request to completion with little or no help from others. Am I being unreasonable? What constitutes a valuable developer? What should be expected of an entry level developer? What should be expected of an experienced developer? I realize that everyone is different, but there has to be some sort of expectations standard, right? I have been giving the test project below to potential candidates to weed them out. Good idea? Too much? Too little? Please let me know what you think. Thanks.

    Project ID: T00001
    Description: Order Entry System
    Deadline: 1 Week

    Scope
    The scope of this project is to develop a fully functional order entry system. Screen/form design must be user friendly and promote efficient data entry and modification. User experience (navigation, screen/form layouts, look and feel...) is at the developer's discretion. The system may be developed using any technologies that conform to the technical and system requirements.

    Deliverables
    - Complete source code
    - Database setup instructions (scripts or restorable backup)
    - Application installation instructions (installer or installation procedure)
    - Any necessary documentation

    Technical Requirements
    - Server platform: Windows XP / Windows Server 2003 / SBS
    - Client platform: Windows XP
    - Web browser (if applicable): IE 8
    - Database: at developer's discretion (must be a relational SQL database)
    - Language: at developer's discretion
    - All data must be normalized. (+)
    - All data must maintain referential integrity. (++)
    - All data must be indexed for optimal performance.
    - The system must handle concurrency.

    System Requirements

    Customer Maintenance
    - Customer records must have a unique ID.
    - Customer data will include Name, Address, Phone, etc.
    - User must be able to perform all CRUD (Create, Read, Update, and Delete) operations on the Customer table.
    - User must be able to enter a specific Customer ID to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find a customer to edit.
    - Validation must be performed prior to database commit.
    - A customer record cannot be deleted if the customer has an order in the system. (++)

    Inventory Maintenance
    - Part records must have a unique ID.
    - Part data will include Description, Price, UOM (Unit of Measure), etc.
    - User must be able to perform all CRUD operations on the Part table.
    - User must be able to enter a specific Part ID to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find a part to edit.
    - Validation must be performed prior to database commit.
    - A part record cannot be deleted if the part has been used in an order. (++)

    Order Entry
    - Order records must have a unique auto-incrementing key (Order Number).
    - Order data must be split into a header/detail structure. (+)
    - An order can contain an infinite number of detail records.
    - Order header data will include Order Number, Customer ID (++), Order Date, Order Status (Open/Closed), etc.
    - Order detail data will include Part Number (++), Quantity, Price, etc.
    - User must be able to perform all CRUD operations on the order tables.
    - User must be able to enter a specific Order Number to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find an order to edit.
    - User must be able to print an order form from within the order entry form.
    - Validation must be performed prior to database commit.

    Reports
    - Customer Listing: all customers in the system.
    - Inventory Listing: all parts in the system.
    - Open Order Listing: all open orders in the system.
    - Customer Order Listing: all orders for a specific customer.
    - All reports must include sort and filter functions where applicable, e.g. Customer Listing by range of Customer IDs, Open Order Listing by date range.

    Read the article

  • CSS Menu loses focus when part of jquery hover()

    - by Steve Syfuhs
    I have the following html (viewable at www.communityftw.com):

        <table width="100%">
            <tr>
                <td style="text-align: left">
                    <!-- 2008.3.1314.35 -->
                    <span id="headerSearch1_sb_form_q_wrapper" class="RadInput_Default" style="white-space:nowrap;">
                        <input value="language..." type="text" size="20" id="headerSearch1_sb_form_q_text" name="headerSearch1_sb_form_q_text" class="riTextBox riEmpty sw_qboxTop" name="q" style="width:140px;" />
                        <input id="headerSearch1_sb_form_q" name="ctl00$headerSearch1$sb_form_q" class="rdfd_" style="visibility:hidden;margin:-18px 0 0 0;width:1px;height:1px;overflow:hidden;border:0;padding:0;" type="text" value="" />
                        <input id="headerSearch1_sb_form_q_ClientState" name="headerSearch1_sb_form_q_ClientState" type="hidden" />
                    </span>
                    <input type="submit" name="ctl00$headerSearch1$sb_form_go" value="" id="headerSearch1_sb_form_go" class="sw_qbtnTop" />
                </td>
                <td style="text-align: left">
                    <ul id="menu">
                        <li class="languageContainer">
                            <div>
                                <a href="#" id="languageField"><img src="/images/flags/ca.png" alt="Canada" /> Canada (English)</a>
                            </div>
                            <ul id="language">
                                <li><a href="#" id="A1"><img src="/images/flags/ca.png" alt="Canada" /> Canada (French)</a></li>
                                <li><a href="#" id="A2"><img src="/images/flags/us.png" alt="United States" /> United States</a></li>
                                <li><a href="#" id="A3"><img src="/images/flags/de.png" alt="Germany" /> Germany</a></li>
                                <li><a href="#" id="A4"><img src="/images/flags/fr.png" alt="France" /> France</a></li>
                                <li><a href="#" id="A5"><img src="/images/flags/ru.png" alt="Russia" /> Russia</a></li>
                                <li class="last"><img alt="" src="images/langLocDrop_r4_c1.png" /></li>
                            </ul>
                        </li>
                    </ul>
                </td>
            </tr>
        </table>

    Javascript/jquery:

        $('#slide').animate({ top: '-=34' }, 1000);
        $("#slide").hover(function () {
            $(this).animate({ top: '+=34' });
        }, function () {
            $(this).animate({ top: '-=34' });
        });

    CSS:

        #menu { margin: 0px; padding: 0px; list-style: none; display: inline-block; float: left; z-index: 1000; }
        #menu a { color: #dc2525; text-decoration: none; }
        #menu li { background: none repeat scroll 0 0; cursor: pointer; float: left; position: relative; }
        #menu li a:hover { color: orange; }
        #menu ul { padding: 0px; margin: 0px; display: block; display: inline; }
        #menu li ul { position: absolute; left: -15px; top: 0px; margin-top: 20px; width: 170px; line-height: 16px; background-image: url(/images/langLocDrop_r2_c1.png); display: none; }
        #menu li:hover ul { display: block; }
        #menu li ul li { display: block; margin: 5px 20px; padding: 5px 0px; border-top: dotted 1px #606060; list-style-type: none; }
        #menu li ul li:first-child { border-top: none; }
        #menu li ul li a { display: block; }
        #menu li ul li a:hover { color: orange; }
        .languageContainer div { display: inline; padding: 5px; }
        #languageField img { display: inline; vertical-align: middle; }
        #language img { display: inline; }
        #menu .last { background: transparent none repeat scroll 0% 0%; margin: 0px; padding: 0px; border: none; position: relative; border: none; height: 0px; }

    What I'm trying to do is have a menu mostly hidden at the top except when you mouse over it, and then have a submenu (just CSS driven) pop out when you mouse over the language. What is happening though is that when I move onto the language list, and I go past Germany (~50% down the list?), the hover() loses focus and closes the original menu, which closes the language menu. Any ideas what is causing the issue? Any ideas how to fix it? I have tried the hoverIntent() plugin as well, to no avail.

    Read the article

  • ASP.NET MVC 1.0 Dynamic Images Returned from Controller taking 154+ seconds to display in IE8 (fine in Firefox and other browsers)

    - by julian guppy
    I have a curious problem with IE, IIS 6.0 and dynamic PNG files, and I am baffled as to how to fix it.

    Snippet from the helper (this returns the URL to the view for requesting the images from my controller):

        string url = LinkBuilder.BuildUrlFromExpression(helper.ViewContext.RequestContext, helper.RouteCollection, c => c.FixHeight(ir.Filename, ir.AltText, "FFFFFF"));
        url = url.Replace("&", "&amp;");
        sb.Append(string.Format("<img id=\"TheImage\" src=\"{0}\" alt=\"\" />", url) + Environment.NewLine);

    This produces a piece of html as follows:

        <img id="TheImage" src="/ImgText/FixHeight?sFile=Images%2FUser%2FJulianGuppy%2FMediums%2Fconservatory.jpg&amp;backgroundColour=FFFFFF" alt="" />

    Snippet from the controller, ImgTextController:

        /// <summary>
        /// This function fixes the height of the image
        /// </summary>
        /// <param name="sFile"></param>
        /// <param name="alternateText"></param>
        /// <param name="backgroundColour"></param>
        /// <returns></returns>
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult FixHeight(string sFile, string alternateText, string backgroundColour)
        {
            #region File
            if (string.IsNullOrEmpty(sFile))
            {
                return new ImgTextResult();
            }

            // MVC specific change to prepend the new directory
            if (sFile.IndexOf("Content") == -1)
            {
                sFile = "~/Content/" + sFile;
            }

            // open the file
            System.Drawing.Image img;
            try
            {
                img = System.Drawing.Image.FromFile(Server.MapPath(sFile));
            }
            catch
            {
                img = null;
            }

            // did we fail?
            if (img == null)
            {
                return new ImgTextResult();
            }
            #endregion File

            #region Width
            // Sort out the width from the image passed to me
            Int32 nWidth = img.Width;
            #endregion Width

            #region Height
            Int32 nHeight = img.Height;
            #endregion Height

            // What is the ideal height? Given a width of 2100 this should be 1400.
            var nIdealHeight = (int)(nWidth / 1.40920096852);

            // So is the actual height of the image already greater than the ideal height?
            Int32 nSplit;
            if (nIdealHeight < nHeight)
            {
                // Yes, do nothing, well i need to return the image...
                nSplit = 0;
            }
            else
            {
                // rob wants to not show the white at the top or bottom, so if we were to crop the image how would we do it
                // 1. Calculate what the width should be if we dont adjust the height
                var newIdealWidth = (int)(nHeight * 1.40920096852);
                // 2. This newIdealWidth should be smaller than the existing width... so work out the split on that
                Int32 newSplit = (nWidth - newIdealWidth) / 2;
                // 3. Now recrop the image using 0-nHeight as the height (i.e. full height)
                //    but crop the sides so that its the correct aspect ratio
                var newRect = new Rectangle(newSplit, 0, newIdealWidth, nHeight);
                img = CropImage(img, newRect);
                nHeight = img.Height;
                nWidth = img.Width;
                nSplit = 0;
            }

            // No, so I want to place this image on a larger canvas, and we do this by creating a new image the size that we want
            System.Drawing.Image canvas = new Bitmap(nWidth, nIdealHeight, PixelFormat.Format24bppRgb);
            Graphics g = Graphics.FromImage(canvas);

            #region Color
            // Whilst we can set the background colour we shall default to white
            if (string.IsNullOrEmpty(backgroundColour))
            {
                backgroundColour = "FFFFFF";
            }
            Color bc = ColorTranslator.FromHtml("#" + backgroundColour);
            #endregion Color

            // Filling the background (which gives us our border)
            Brush backgroundBrush = new SolidBrush(bc);
            g.FillRectangle(backgroundBrush, -1, -1, nWidth + 1, nIdealHeight + 1);

            // draw the image at the position
            var rect = new Rectangle(0, nSplit, nWidth, nHeight);
            g.DrawImage(img, rect);

            return new ImgTextResult { Image = canvas, ImageFormat = ImageFormat.Png };
        }

    My ImgTextResult is a class that returns an ActionResult for me by embedding the image from a memory stream into the response output stream. Snippet from my ImageResults:

        /// <summary>
        /// Execute the result
        /// </summary>
        /// <param name="context"></param>
        public override void ExecuteResult(ControllerContext context)
        {
            // output
            context.HttpContext.Response.Clear();
            context.HttpContext.Response.ContentType = "image/png";
            try
            {
                var memStream = new MemoryStream();
                Image.Save(memStream, ImageFormat.Png);
                context.HttpContext.Response.BinaryWrite(memStream.ToArray());
                context.HttpContext.Response.Flush();
                context.HttpContext.Response.Close();
                memStream.Dispose();
                Image.Dispose();
            }
            catch (Exception ex)
            {
                string a = ex.Message;
            }
        }

    Now all of this works locally and lovely, and indeed all of this works on my production server, BUT only for Firefox, Safari, Chrome (and other browsers). IE has a fit and decides that it either won't display the image, or it displays the image after approximately 154 seconds of waiting. I have made sure my HTML is XHTML compliant, and I have made sure I am getting no routing errors or crashes in my event log on the server. Now obviously I have been a muppet and have done something wrong... but what I can't fathom is why in development all works fine, and in production all non-IE browsers also work fine, but IE 8 against the IIS 6.0 production server is having some kind of problem returning this PNG, and I don't have an error to trace... so what I am looking for is guidance as to how I can debug this problem.

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
    Selling Federal Enterprise Architecture A taxonomy of subject areas, from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available. "Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard to decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article. Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC. Newer guidance from OMB may be especially difficult to handle, where bottom-up input can't be accurately aligned, analyzed and reported via standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio"). Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short-term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor," Spires said. It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a. 
the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference ("within Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progress, i.e. in Stages 0 or 1. So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within". An EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape. Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet - as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the Taxonomy. Enterprise Architecture Sales Taxonomy Draft, Summary Version 1. Community 1.1. Budgeted Programs or Portfolios Communities of Purpose (CoPR) 1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans 1.1.2. Program/System Owners Facing Strategic Change 1.1.2.1. Mandated 1.1.2.2. Expected/Anticipated 1.1.3. Program Managers - Creating Employee Performance Plans 1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP) 1.2. Governance & Communications Communities of Practice (CoP) 1.2.1. Policy Owners 1.2.1.1. OCFO 1.2.1.1.1. Budget/Procurement Office 1.2.1.1.2. Strategic Planning 1.2.1.2. OCIO 1.2.1.2.1. IT Management 1.2.1.2.2. IT Operations 1.2.1.2.3. Information Assurance (Cyber Security) 1.2.1.2.4. IT Innovation 1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements) 1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council") 1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop) 1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended 1.2.2.3. 
External Oversight/Constraints 1.2.2.3.1. GAO/OIG & Legal 1.2.2.3.2. Industry Standards 1.2.2.3.3. Official public notification, response 1.2.3. Mission Constituents Participant & Analyst Community of Interest (CoI) 1.2.3.1. Mission Operators/Users 1.2.3.2. Public Constituents 1.2.3.3. Industry Advisory Groups, Stakeholders 1.2.3.4. Media 2. Benefit/Value (Note the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.) 2.1. Program Costs – EA enables sound decisions regarding... 2.1.1. Cost Avoidance – a TCO theme 2.1.2. Sequencing – alignment of capability delivery 2.1.3. Budget Instability – a Federal reality 2.2. Investment Capital – EA illuminates new investment resources via... 2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral 2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication 2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage 2.3. Contextual Knowledge – EA enables informed decisions by revealing... 2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context 2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT 2.3.3. Influence – the impact of politics and relationships can be examined 2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways 2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures 2.4. Mission Performance – EA enables beneficial decision results regarding... 2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization 2.4.2. IT Stability – towards 100%, real-time uptime 2.4.3. Agility – responding to rapid changes in mission 2.4.4. Outcomes –measures of mission success, KPIs – vs. only "Outputs" 2.4.5. Constraints – appropriate response to constraints 2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome 2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of... 2.5.1. Compliance – all the right boxes are checked 2.5.2. Dependencies –cross-agency, segment, government 2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively 2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles 2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues 2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof" 2.5.5.2. Anticipated – discovering the level of impact that matters 3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?) 3.1. Architecture Models – the visual tools to be created and used 3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels 3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models, with existing system models? 3.1.3. 
Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful? 3.2. Traceability – the maturity, status, completeness of the tools 3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it? 3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome? 3.3. Governance – what's the interaction, participation method; how are the tools used? 3.3.1. Contributions – how is the EA program informed, accept submissions, collect data? Who are the experts? 3.3.2. Review – how is the EA validated, against what criteria?  Taxonomy Usage Example:   1. To speak with: a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about... b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can... c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise... 2. ....the following Marketing & Communications (Sales) Plan can be constructed: a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems. --end example -- Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, internal and external search engines for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and therefore is more transparently useful across the community. 
In short, if sold effectively, the EA will perform and be recognized. EA won’t therefore be used only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes. The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • Google and Bing Map APIs Compared

    - by SGWellens
    At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly. It is permissible to hit golf balls there; you bring and shag your own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time. One of the guys brings Ginger, the amazing, incredible, wonder dog. Ginger is a Portuguese Pointer. She chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave. Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google maps and then did the same application with Bing maps. It is a good way to become familiar with the APIs. Images of the final two maps (Google and Bing) are shown on the demo pages linked below.

    To start with online mapping services, you need to visit the respective websites and get a developer's key. I pared the code down to the minimum to make it easier to compare the APIs. Google maps required this CSS (or it wouldn't work):

        <style type="text/css">
            html
            {
                height: 100%;
            }

            body
            {
                height: 100%;
                margin: 0;
                padding: 0;
            }
        </style>

    Here is how the map scripts are included. Google requires the developer key when loading the JavaScript; Bing requires it when the map object is created.

    Google:

        <script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=XXXXXXX&libraries=geometry&sensor=false"></script>

    Bing:

        <script type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"></script>

    Note: I use jQuery to manipulate the DOM elements, which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery.
    Here is how the maps are created.

    Common code (the same for both Google and Bing maps):

        <script type="text/javascript">
            var gTheMap;
            var gMarker1;
            var gMarker2;

            $(document).ready(DocLoaded);

            function DocLoaded()
            {
                // golf course coordinates
                var StartLat = 44.924254;
                var StartLng = -93.366859;

                // what element to display the map in
                var mapdiv = $("#map_div")[0];

    Google:

                // where on earth the map should display
                var StartPoint = new google.maps.LatLng(StartLat, StartLng);

                // create the map
                gTheMap = new google.maps.Map(mapdiv,
                    {
                        center: StartPoint,
                        zoom: 18,
                        mapTypeId: google.maps.MapTypeId.SATELLITE
                    });

                // place two markers
                marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));
                marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));

                DragEnd(null);
            }

    Bing:

                // where on earth the map should display
                var StartPoint = new Microsoft.Maps.Location(StartLat, StartLng);

                // create the map
                gTheMap = new Microsoft.Maps.Map(mapdiv,
                    {
                        credentials: 'Asbsa_hzfHl69XF3wxBd_WbW0dLNTRUH3ZHQG9qcV5EFRLuWEaOP1hjWdZ0A0P17',
                        center: StartPoint,
                        zoom: 18,
                        mapTypeId: Microsoft.Maps.MapTypeId.aerial
                    });

                // place two markers
                marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));
                marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));

                DragEnd(null);
            }

    Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it.
    Note: When creating the Bing map, use the developer key for the credentials property.

    I immediately place two markers/pins on the map, which is simpler than creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable, and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging.
    Here is the code to place a marker.

    Google:

        // ---- PlaceMarker ------------------------------------
        function PlaceMarker(location)
        {
            var marker = new google.maps.Marker(
                {
                    position: location,
                    map: gTheMap,
                    draggable: true
                });
            marker.addListener('dragend', DragEnd);
            return marker;
        }

    Bing:

        // ---- PlaceMarker ------------------------------------
        function PlaceMarker(location)
        {
            var marker = new Microsoft.Maps.Pushpin(location,
            {
                draggable : true
            });
            Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);
            gTheMap.entities.push(marker);
            return marker;
        }

    Here is the code that runs when the user stops dragging a marker.

    Google:

        // ---- DragEnd -------------------------------------------
        var gLine = null;

        function DragEnd(Event)
        {
            var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);
            var yards = meters * 1.0936133;
            $("#message").text(yards.toFixed(1) + ' yards');

            // draw a line connecting the points
            var Endpoints = [marker1.position, marker2.position];

            if (gLine == null)
            {
                gLine = new google.maps.Polyline({
                    path: Endpoints,
                    strokeColor: "#FFFF00",
                    strokeOpacity: 1.0,
                    strokeWeight: 2,
                    map: gTheMap
                });
            }
            else
                gLine.setPath(Endpoints);
        }

    Bing:

        // ---- DragEnd -------------------------------------------
        var gLine = null;

        function DragEnd(Args)
        {
            var Distance = CalculateDistance(marker1._location, marker2._location);

            $("#message").text(Distance.toFixed(1) + ' yards');

            // draw a line connecting the points
            var Endpoints = [marker1._location, marker2._location];

            if (gLine == null)
            {
                gLine = new Microsoft.Maps.Polyline(Endpoints,
                    {
                        strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0),  // aRGB
                        strokeThickness : 2
                    });

                gTheMap.entities.push(gLine);
            }
            else
                gLine.setLocations(Endpoints);
        }

    Note: I couldn't find a function to calculate the distance between points in the Bing API, so I wrote my own (CalculateDistance). If you want to see the source for it, you can pick it off the web page.
    Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green, and then by zooming in, I was able to see the 150 markers on the fairway and put the other pin/marker on one of them.

    Final notes: All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code. In one aerial view there are leaves on the trees; in the other, the trees are bare. I don't know which service has the newer data. Here are links to working pages: Bing Map Demo, Google Map Demo. I hope someone finds this useful.

    Steve Wellens
    CodeProject

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading
    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
            SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP
    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files
    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
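    To make Rules 1 through 3 concrete, here is a small illustrative sketch (plain Python, not an Oracle tool; the function names are mine) of the DOP arithmetic and the candidate location-file counts described above:

        # Illustrative arithmetic for Rules 1-3; a sketch, not an Oracle utility.

        def max_dop(instances, cpus_per_instance, cores_per_cpu):
            # e.g. 2 RAC instances x 2 CPUs x 32 cores = 128, as in the example above
            return instances * cpus_per_instance * cores_per_cpu

        def candidate_location_file_counts(dop, multiples=(1, 2, 4)):
            # Rule 3: a small multiple of the DOP keeps all PQ slaves busy
            return [dop * m for m in multiples]

        print(max_dop(2, 2, 32))                  # 128
        print(candidate_location_file_counts(8))  # [8, 16, 32], matching the DOP-of-8 example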
Determining the Number of HDFS Files

Let's start with the next rule and then explain it:

Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size.

In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few.

For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour.

If however the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

Next Steps

So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
They can be used together in a way that provides for more efficient OSCH loads and allows more flexibility in scheduling load operations across a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of the various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Google and Bing Map APIs Compared

    - by SGWellens
At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly. It is permissible to hit golf balls there—you bring and shag your own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time. One of the guys brings Ginger, the amazing, incredible wonder dog. Ginger is a Hungarian Vizsla (or Hungarian Pointer). She chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave. Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google Maps and then did the same application with Bing Maps. It is a good way to become familiar with the APIs. Here are images of the final two maps: Google:  Bing:

To start with online mapping services, you need to visit the respective websites and get a developer key. I pared the code down to the minimum to make it easier to compare the APIs. Google Maps required this CSS (or it wouldn't work):

<style type="text/css">
    html
    {
        height: 100%;
    }

    body
    {
        height: 100%;
        margin: 0;
        padding: 0;
    }
</style>

Here is how the map scripts are included. Google requires the developer key when loading the JavaScript; Bing requires it when the map object is created:

Google:
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=XXXXXXX&libraries=geometry&sensor=false"></script>

Bing:
<script type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"></script>

Note: I use jQuery to manipulate the DOM elements, which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery.
Here is how the maps are created:

Common Code (the same for both Google and Bing Maps):

<script type="text/javascript">
    var gTheMap;
    var marker1;
    var marker2;

    $(document).ready(DocLoaded);

    function DocLoaded()
    {
        // golf course coordinates
        var StartLat = 44.924254;
        var StartLng = -93.366859;

        // what element to display the map in
        var mapdiv = $("#map_div")[0];

Google:

        // where on earth the map should display
        var StartPoint = new google.maps.LatLng(StartLat, StartLng);

        // create the map
        gTheMap = new google.maps.Map(mapdiv,
            {
                center: StartPoint,
                zoom: 18,
                mapTypeId: google.maps.MapTypeId.SATELLITE
            });

        // place two markers
        marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));
        marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));

        DragEnd(null);
    }

Bing:

        // where on earth the map should display
        var StartPoint = new Microsoft.Maps.Location(StartLat, StartLng);

        // create the map
        gTheMap = new Microsoft.Maps.Map(mapdiv,
            {
                credentials: 'XXXXXXXXXXXXXXXXXXX',
                center: StartPoint,
                zoom: 18,
                mapTypeId: Microsoft.Maps.MapTypeId.aerial
            });

        // place two markers
        marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));
        marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));

        DragEnd(null);
    }

Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it.

Note: When creating the Bing map, use the developer key for the credentials property.

I immediately place two markers/pins on the map, which is simpler than creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable, and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging.
Here is the code to place a marker:

Google:

// ---- PlaceMarker ------------------------------------

function PlaceMarker(location)
{
    var marker = new google.maps.Marker(
        {
            position: location,
            map: gTheMap,
            draggable: true
        });
    marker.addListener('dragend', DragEnd);
    return marker;
}

Bing:

// ---- PlaceMarker ------------------------------------

function PlaceMarker(location)
{
    var marker = new Microsoft.Maps.Pushpin(location,
    {
        draggable: true
    });
    Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);
    gTheMap.entities.push(marker);
    return marker;
}

Here is the code that runs when the user stops dragging a marker:

Google:

// ---- DragEnd -------------------------------------------

var gLine = null;

function DragEnd(Event)
{
    var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);
    var yards = meters * 1.0936133;
    $("#message").text(yards.toFixed(1) + ' yards');

    // draw a line connecting the points
    var Endpoints = [marker1.position, marker2.position];

    if (gLine == null)
    {
        gLine = new google.maps.Polyline({
            path: Endpoints,
            strokeColor: "#FFFF00",
            strokeOpacity: 1.0,
            strokeWeight: 2,
            map: gTheMap
        });
    }
    else
        gLine.setPath(Endpoints);
}

Bing:

// ---- DragEnd -------------------------------------------

var gLine = null;

function DragEnd(Args)
{
    var Distance = CalculateDistance(marker1._location, marker2._location);

    $("#message").text(Distance.toFixed(1) + ' yards');

    // draw a line connecting the points
    var Endpoints = [marker1._location, marker2._location];

    if (gLine == null)
    {
        gLine = new Microsoft.Maps.Polyline(Endpoints,
            {
                strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0),  // aRGB
                strokeThickness: 2
            });

        gTheMap.entities.push(gLine);
    }
    else
        gLine.setLocations(Endpoints);
}

Note: I couldn't find a function to calculate the distance between points in the Bing API, so I wrote my own (CalculateDistance). If you want to see the source for it, you can pick it off the web page.

Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green, and then by zooming in, I was able to see the 150-yard markers on the fairway and put the other pin/marker on one of them.

Final Notes: All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code. In one aerial view there are leaves on the trees; in the other, the trees are bare. I don't know which service has the newer data. Here are links to working pages: Bing Map Demo Google Map Demo

I hope someone finds this useful. Steve Wellens   CodeProject
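The article links out for the CalculateDistance source rather than printing it. For reference, a haversine-based version would look roughly like the sketch below. This is not the author's exact code; it assumes Bing v7 Location objects exposing latitude and longitude properties, and uses the standard 6,371 km mean earth radius:

// Haversine distance between two Bing Location objects, returned in yards.
function CalculateDistance(loc1, loc2)
{
    var R = 6371000;                       // mean earth radius in meters
    var toRad = Math.PI / 180;
    var dLat = (loc2.latitude - loc1.latitude) * toRad;
    var dLon = (loc2.longitude - loc1.longitude) * toRad;
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(loc1.latitude * toRad) * Math.cos(loc2.latitude * toRad) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    var meters = R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    return meters * 1.0936133;             // meters to yards
}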

    Read the article

  • Google App Engine - SiteMap Creation for a social network

    - by spidee
Hi all. I am creating a social tool - I want to allow search engines to pick up "public" user profiles, like Twitter and Facebook do. I have seen all the protocol info at http://www.sitemaps.org and I understand this and how to build such a file - along with an index if I exceed the 50K limit. Where I am struggling is the concept of how I make this run. The sitemap for my general site pages is simple: I can use a tool or a script to create the file, host it, submit it, and I'm done. What I then need is a script that will create the sitemaps of user profiles. I assume this would be something like:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.socialsite.com/profile/spidee</loc>
    <lastmod>2010-5-12</lastmod>
    <changefreq>???</changefreq>
    <priority>???</priority>
  </url>
  <url>
    <loc>http://www.socialsite.com/profile/webbsterisback</loc>
    <lastmod>2010-5-12</lastmod>
    <changefreq>???</changefreq>
    <priority>???</priority>
  </url>
</urlset>

I've added some ??? as I don't know how I should set these values for my profiles, based on the following. When a new profile is created it must be added to a sitemap. If the profile is changed, or if "certain" properties are changed, I don't know whether I update the entry in the map or do something else (updating would be a nightmare!). Some users may change their profile. In terms of relevance to the search engine, the only way a Google or Yahoo search will find a user's profile (for my requirement) would be, for example, by [user name] and [location]. So once the entry for the profile has been added to the map file, the only reasons to have the search-bot re-index the profile would be if the user changed their user-name - which they can't - or their location, and/or set their settings so that their profile would be "hidden" from search engines.

I assume my map creation will need to be dynamic. From what I have said above, creating a new profile, and possibly editing certain properties, could mark it as needing adding/updating in the sitemap. Assuming I will have millions of profiles being added and edited, how can I manage this in a sensible manner? I know I need a script that can append URLs as each profile is created, and the script will probably be a TASK running at a set frequency - perhaps the profiles have a property like "indexed" and the TASK sets it to "true" when the profile is added to the map.

I don't see the best way to store the map. Do I store it in the datastore? i.e.:

model=sitemaps
properties:
  key_name=sitemap_xml_1 (and for my map index, sitemap_index_xml)
  mapxml=blobstore (the raw xml map or ror map)
  full=boolean (set true when url count reaches 50K) # might need this, as a shard will tell us

To make this work my thoughts are: memcache the current sitemap structure as "sitemap_xml" and keep a shard of the url count. When my task executes:

1. Build the xml structure for, say, the first 100 urls marked indexed==false (how many could you run at a time?)
2. Test whether the current memcached sitemap is full (shardcounter + 100 < 50K)
3a. If the map is near full, create a new map entry in the model as "sitemap_xml_2", update the map index file (also stored in my model as "sitemap_index"), and start a new shard - or reset the counter
3b. If the map is not full, grab it from memcache
4. Append the 100-url xml structure
5. Save / memcache the map

I can now add a handler using a url map/route like /sitemaps/*, get my * as the map name, and serve the maps from the blobstore/memcache on the fly.
Now my question is: does this work? Is this the right way, or at least a good way, to start? Will it handle making sure the search bots update when a user changes their profile - possibly by setting the change freq correctly? Do I need a more advanced system :( or have I re-invented the wheel? I hope this is all clear and makes some form of sense :-)
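For the storage-and-serve piece the question sketches, a minimal App Engine example might look like the following. This is Python with the old webapp/db APIs; the model name, property names, and route are illustrative assumptions, not code from the question:

# Store each sitemap shard in the datastore and serve it by key_name.
from google.appengine.ext import db, webapp

class SitemapShard(db.Model):
    # key_name is e.g. "sitemap_xml_1" or "sitemap_index_xml"
    mapxml = db.TextProperty()              # the raw <urlset> XML
    full = db.BooleanProperty(default=False)
    url_count = db.IntegerProperty(default=0)

class SitemapHandler(webapp.RequestHandler):
    def get(self, name):                    # routed from /sitemaps/(.*)
        shard = SitemapShard.get_by_key_name(name)
        if shard is None:
            self.error(404)
            return
        self.response.headers['Content-Type'] = 'application/xml'
        self.response.out.write(shard.mapxml)

A cron-driven task would then query for profiles with indexed == false in batches, append their <url> entries to the newest non-full shard, and flip the flag - which matches the counter-plus-append flow described above.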

    Read the article

  • PHP Include and accents (They show up as ?)

    - by user146780
    I'm using PHP include to include a PHP file that has HTML in it. some of the content has french accents and these show up as ? on the site. How can this be solved? Thanks Here is the PHP file I include: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html dir="ltr" xmlns="http://www.w3.org/1999/xhtml"> <head> <meta content="en-us" http-equiv="Content-Language" /> <title>Accueil</title> <meta content="text/html; charset=utf-8" http-equiv="Content-Type" /> <meta content="Changement créativité rêve buts être centré Plénitude personnel Développement transformation Modification nouveauté avancement bien-être Nouvelle vision ressentis L’énergie positive satisfaction l’acceptation Pardon" name="keywords" /> <link href="masterstyles.css" rel="stylesheet" type="text/css" /> <link href="menustyles.css" rel="stylesheet" type="text/css" /> <link href="menudropdown.css" rel="stylesheet" type="text/css" /> <td class="tbsyles" >&nbsp; <h3 class="bigorange"> ACTIVITÉS À VENIR…</h3> <p class="horizblue"> </p> <p class="bigblack"> <br /> Inscrivez-vous à nos conférences et formations <br /> <br /> </p> <h4 class="orange"> Example of some text that could be here<br /> </h4> <p class="horizblue"> &nbsp;</p> <h3 class="bigorange"> <br /> ABONNEZ-VOUS… </h3> <p class="nopadding"> À notre liste d’envoi </p> <form method="post" action="<?php echo $PHP_SELF;?>"> <?PHP function process_info(){ if(isset($_POST['email'])) { $email=$_POST["email"]; $email=strtolower($email); $action = "subc"; // check if email exists // check whether email is correct (basic checking) $test1=strpos($email, "@"); //value must be >1 $test2=strpos(substr($email,strpos($email,"@")), "."); //value must be >1 $test3=strlen($email); //value must be >6 $test4=substr_count ($email,"@"); //value must be 1 if ($test1<2 or $test2<2 or $test3<7 or $test4!=1) { print "<h6>Il a une erreur avec vôtre email</h6>"; print "<h6>Aucune informations ont été envoyer</h6>"; } else { print "<h5>vôtre address est enregistrer, Merci </h5>"; //If they wanted to subsribe, do it... $file = "emaillist-666XXX.txt"; // lets try to get the content of the file if (file_exists($file)){ // If the file is already in the server, its content is pasted to variable $file_content $file_content=file_get_contents($file); } else{ // If the file does not exists, lets try to create it // In case file can not be created (probably due to problems with directory permissions), // the users is informed (the first user will be the webmaster, who must solve the problem). $cf = fopen($file, "w") or die(""); fclose($cf); } // IF REQUEST HAS BEEN TO SUBSCRIBE FROM MAILING LIST, ADD EMAIL TO THE FILE if ($action=="subc"){ // check whether the email is already registered if(strpos($file_content,"<$email>")>0){die("");} // write the email to the list (append it to the file) $cf = fopen($file, "a"); fputs($cf, "\n$email"); // new email is written to the file in a new line fclose($cf); } } } } process_info(); ?> &nbsp;<p class="nopadding">Votre Courriel</p> <input name="email" type="text" class="style3" /> <input name="Submit" type="submit" value="OK" /></form> <p class="horizblue"></p> <h3 class="bigorange"> <br /> OUTILS GRATUIT… </h3> <p class="nopadding">Amusez-vous avec des outils intéressants</p> </td>
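The usual culprit here is an encoding mismatch: the page never declares UTF-8 in its HTTP headers, or the included file is not actually saved as UTF-8. A minimal sketch of both fixes, where the file name accueil.php is a placeholder for whatever file is being included:

<?php
// Declare the charset before any output so the browser doesn't guess.
header('Content-Type: text/html; charset=utf-8');

include 'accueil.php';   // works if the file really is saved as UTF-8

// If the file is stored as ISO-8859-1 instead, convert on read:
// echo mb_convert_encoding(file_get_contents('accueil.php'), 'UTF-8', 'ISO-8859-1');

Re-saving the included file as UTF-8 (without BOM) in the editor is often the whole fix.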

    Read the article

  • How to detect crashing tabed webbrowser and handle it?

    - by David Eaton
    I have a desktop application (forms) with a tab control, I assign a tab and a new custom webrowser control. I open up about 10 of these tabs. Each one visits about 100 - 500 different pages. The trouble is that if 1 webbrowser control has a problem it shuts down the entire program. I want to be able to close the offending webbrowser control and open a new one in it's place. Is there any event that I need to subscribe to catch a crashing or unresponsive webbrowser control ? I am using C# on windows 7 (Forms), .NET framework v4 =============================================================== UPDATE: 1 - The Tabbed WebBrowser Example Here is the code I have and How I use the webbrowser control in the most basic way. Create a new forms project and name it SimpleWeb Add a new class and name it myWeb.cs, here is the code to use. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Forms; using System.Security.Policy; namespace SimpleWeb { //inhert all of webbrowser class myWeb : WebBrowser { public myWeb() { //no javascript errors this.ScriptErrorsSuppressed = true; //Something we want set? AssignEvents(); } //keep near the top private void AssignEvents() { //assign WebBrowser events to our custom methods Navigated += myWeb_Navigated; DocumentCompleted += myWeb_DocumentCompleted; Navigating += myWeb_Navigating; NewWindow += myWeb_NewWindow; } #region Events //List of events:http://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser_events%28v=vs.100%29.aspx //Fired when a new windows opens private void myWeb_NewWindow(object sender, System.ComponentModel.CancelEventArgs e) { //cancel all popup windows e.Cancel = true; //beep to let you know canceled new window Console.Beep(9000, 200); } //Fired before page is navigated (not sure if its before or during?) private void myWeb_Navigating(object sender, System.Windows.Forms.WebBrowserNavigatingEventArgs args) { } //Fired after page is navigated (but not loaded) private void myWeb_Navigated(object sender, System.Windows.Forms.WebBrowserNavigatedEventArgs args) { } //Fired after page is loaded (Catch 22 - Iframes could be considered a page, can fire more than once. Ads are good examples) private void myWeb_DocumentCompleted(System.Object sender, System.Windows.Forms.WebBrowserDocumentCompletedEventArgs args) { } #endregion //Answer supplied by mo. (modified)? public void OpenUrl(string url) { try { //this.OpenUrl(url); this.Navigate(url); } catch (Exception ex) { MessageBox.Show("Your App Crashed! Because = " + ex.ToString()); //MyApplication.HandleException(ex); } } //Keep near the bottom private void RemoveEvents() { //Remove Events Navigated -= myWeb_Navigated; DocumentCompleted -= myWeb_DocumentCompleted; Navigating -= myWeb_Navigating; NewWindow -= myWeb_NewWindow; } } } On Form1 drag a standard tabControl and set the dock to fill, you can go into the tab collection and delete the pre-populated tabs if you like. Right Click on Form1 and Select "View Code" and replace it with this code. 
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using mshtml; namespace SimpleWeb { public partial class Form1 : Form { public Form1() { InitializeComponent(); //Load Up 10 Tabs for (int i = 0; i <= 10; i++) { newTab("Test_" + i, "http://wwww.yahoo.com"); } } private void newTab(string Title, String Url) { //Create a new Tab TabPage newTab = new TabPage(); newTab.Name = Title; newTab.Text = Title; //create webbrowser Instance myWeb newWeb = new myWeb(); //Add webbrowser to new tab newTab.Controls.Add(newWeb); newWeb.Dock = DockStyle.Fill; //Add New Tab to Tab Pages tabControl1.TabPages.Add(newTab); newWeb.OpenUrl(Url); } } } Save and Run the project. Using the answer below by mo. , you can surf the first url with no problem, but what about all the urls the user clicks on? How do we check those? I prefer not to add events to every single html element on a page, there has to be a way to run the new urls thru the function OpenUrl before it navigates without having an endless loop. Thanks.
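There is no single "WebBrowser crashed" event to subscribe to, but top-level exception handlers let the process survive a fault raised on the UI thread and give you a chance to recycle the offending tab. A sketch of the wiring, installed before Application.Run; the recycle logic itself is left as a comment because it depends on your tab bookkeeping:

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);

        // Exceptions thrown on the WinForms UI thread land here instead of
        // killing the process.
        Application.ThreadException += (sender, e) =>
        {
            // log e.Exception, dispose the myWeb control that threw,
            // and create a replacement tab in its place
        };

        // Exceptions from non-UI threads; these can be logged but not
        // always recovered from.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            // last-chance logging
        };

        Application.Run(new Form1());
    }
}

Note that this catches managed exceptions only; a hard native crash inside the underlying IE ActiveX control can still take the process down, which is why heavy scrapers often isolate browsers in separate processes.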

    Read the article

  • C++: Simplifying my program to convert numbers to from one base to another.

    - by Spin City
    Hello, I'm taking a beginner C++ course. I received an assignment telling me to write a program that converts an arbitrary number from any base between binary and hex to another base between binary and hex. I was asked to use separate functions to convert to and from base 10. It was to help us get used to using arrays. (We already covered passing by reference previously in class.) I already turned this in, but I'm pretty sure this wasn't how I was meant to do it: #include <iostream> #include <conio.h> #include <cstring> #include <cmath> using std::cout; using std::cin; using std::endl; int to_dec(char value[], int starting_base); char* from_dec(int value, int ending_base); int main() { char value[30]; int starting_base; int ending_base; cout << "This program converts from one base to another, so long as the bases are" << endl << "between 2 and 16." << endl << endl; input_numbers: cout << "Enter the number, then starting base, then ending base:" << endl; cin >> value >> starting_base >> ending_base; if (starting_base < 2 || starting_base > 16 || ending_base < 2 || ending_base > 16) { cout << "Invalid base(s). "; goto input_numbers; } for (int i=0; value[i]; i++) value[i] = toupper(value[i]); cout << "Base " << ending_base << ": " << from_dec(to_dec(value, starting_base), ending_base) << endl << "Press any key to exit."; getch(); return 0; } int to_dec(char value[], int starting_base) { char hex[16] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'}; long int return_value = 0; unsigned short int digit = 0; for (short int pos = strlen(value)-1; pos > -1; pos--) { for (int i=0; i<starting_base; i++) { if (hex[i] == value[pos]) { return_value+=i*pow((float)starting_base, digit++); break; } } } return return_value; } char* from_dec(int value, int ending_base) { char hex[16] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'}; char *return_value = (char *)malloc(30); unsigned short int digit = (int)ceil(log10((double)(value+1))/log10((double)ending_base)); return_value[digit] = 0; for (; value != 0; value/=ending_base) return_value[--digit] = hex[value%ending_base]; return return_value; } I'm pretty sure this is more advanced than it was meant to be. How do you think I was supposed to do it? I'm essentially looking for two kinds of answers: Examples of what a simple solution like the one my teacher probably expected would be. Suggestions on how to improve the code.
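For what the assignment is driving at, the standard library already does half the work: strtol accepts any base from 2 to 36, which collapses to_dec to one call, and from_dec is a short repeated-division loop. A simplified sketch (positive values only, std::string instead of malloc'd buffers):

#include <cstdlib>
#include <string>

// Convert a string in 'base' (2..16) to decimal; strtol does the parsing.
long to_dec(const char* value, int base)
{
    return std::strtol(value, NULL, base);
}

// Convert a non-negative decimal value to a string in 'base'.
std::string from_dec(long value, int base)
{
    const char digits[] = "0123456789ABCDEF";
    if (value == 0) return "0";
    std::string out;
    for (; value != 0; value /= base)
        out.insert(out.begin(), digits[value % base]);
    return out;
}

If the exercise forbids strtol, the from_dec loop read in reverse (multiply an accumulator by the base and add each digit's value) is the hand-rolled to_dec, with no pow or floating point needed.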

    Read the article

  • Parsing adobe Kuler RSS feed

    - by dezkev
    I have been trying to parse the below XML file (kuler rss feed). I have read the various posts on this site but am unable to piece them together. I specifically want to extract the child(or siblings) nodes of the element <kuler:themeItem>. However I am getting an exception : Namespace Manager or XsltContext needed. This query has a prefix, variable, or user-defined function. Pl help : C# 3.0 net framework 3.5 RSS feed snippet: <?xml version="1.0" encoding="UTF-8" ?> - <rss xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:kuler="http://kuler.adobe.com/kuler/API/rss/" xmlns:rss="http://blogs.law.harvard.edu/tech/rss" version="2.0"> - <channel> <title>kuler recent themes</title> <link>http://kuler.adobe.com/</link> <description>most recent themes published on kuler (1 to 20 of 332518)</description> <language>en-us</language> <pubDate>Wed, 07 Apr 2010 08:41:31 PST</pubDate> <lastBuildDate>Wed, 07 Apr 2010 08:41:31 PST</lastBuildDate> <docs>http://blogs.law.harvard.edu/tech/rss</docs> <generator>Kuler Services</generator> <managingEditor>[email protected]</managingEditor> <webMaster>[email protected]</webMaster> <recordCount>332518</recordCount> <startIndex>0</startIndex> <itemsPerPage>20</itemsPerPage> - <item> <title>Theme Title: Muted Graph</title> <link>http://kuler.adobe.com/index.cfm#themeID/856075</link> <guid>http://kuler.adobe.com/index.cfm#themeID/856075</guid> - <enclosure xmlns="http://www.solitude.dk/syndication/enclosures/"> <title>Muted Graph</title> - <link length="1" type="image/png"> <url>http://kuler-api.adobe.com/kuler/themeImages/theme_856075.png</url> </link> </enclosure> <description><img src="http://kuler-api.adobe.com/kuler/themeImages/theme_856075.png" /><br /> Artist: tischt<br /> ThemeID: 856075<br /> Posted: 04/07/2010<br /> Hex: F1E9B2, 3D3606, 2A3231, 4A0A07, 424431</description> - <kuler:themeItem> <kuler:themeID>856075</kuler:themeID> <kuler:themeTitle>Muted Graph</kuler:themeTitle> <kuler:themeImage>http://kuler-api.adobe.com/kuler/themeImages/theme_856075.png</kuler:themeImage> - <kuler:themeAuthor> <kuler:authorID>216099</kuler:authorID> <kuler:authorLabel>tischt</kuler:authorLabel> </kuler:themeAuthor> <kuler:themeTags /> <kuler:themeRating>0</kuler:themeRating> <kuler:themeDownloadCount>0</kuler:themeDownloadCount> <kuler:themeCreatedAt>20100407</kuler:themeCreatedAt> <kuler:themeEditedAt>20100407</kuler:themeEditedAt> - <kuler:themeSwatches> - <kuler:swatch> <kuler:swatchHexColor>F1E9B2</kuler:swatchHexColor> <kuler:swatchColorMode>rgb</kuler:swatchColorMode> <kuler:swatchChannel1>0.945098</kuler:swatchChannel1> <kuler:swatchChannel2>0.913725</kuler:swatchChannel2> <kuler:swatchChannel3>0.698039</kuler:swatchChannel3> <kuler:swatchChannel4>0.0</kuler:swatchChannel4> <kuler:swatchIndex>0</kuler:swatchIndex> </kuler:swatch> My Code so far: static void Main(string[] args) { const string feedUrl = "http://kuler-api.adobe.com/rss/get.cfm?listtype=recent&key=xxxx"; var doc = new XmlDocument(); var request = WebRequest.Create(feedUrl) as HttpWebRequest; using (var response = request.GetResponse() as HttpWebResponse) { var reader = new StreamReader(response.GetResponseStream()); doc.Load(reader); } XmlNodeList rsslist = doc.SelectNodes("//rss/channel/item/kuler:themeItem"); for (int i = 0; i < rsslist.Count; i++) { XmlNode rssdetail = rsslist.Item(i).SelectSingleNode("kuler:themeTitle"); string title = rssdetail.InnerText; Console.WriteLine(title); } } }
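The exception means exactly what it says: an XPath expression containing a prefix such as kuler: must be evaluated with an XmlNamespaceManager that maps the prefix to its URI. In this feed, rss, channel, and item carry no default namespace, so only the kuler prefix needs registering. A sketch of the fix against the code above:

// Map the prefix used in the XPath to the URI declared in the feed.
var ns = new XmlNamespaceManager(doc.NameTable);
ns.AddNamespace("kuler", "http://kuler.adobe.com/kuler/API/rss/");

// Pass the manager to every SelectNodes/SelectSingleNode call.
XmlNodeList rsslist = doc.SelectNodes("//rss/channel/item/kuler:themeItem", ns);

foreach (XmlNode themeItem in rsslist)
{
    XmlNode title = themeItem.SelectSingleNode("kuler:themeTitle", ns);
    if (title != null)
        Console.WriteLine(title.InnerText);
}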

    Read the article

  • PHP XPath - how to get value from result

    - by LeeTee
    I have the followng XML file from Ebay <GeteBayDetailsResponse xmlns="urn:ebay:apis:eBLBaseComponents"><Timestamp>2012-07-04T12:02:14.541Z</Timestamp><Ack>Success</Ack><Version>779</Version><Build>E779_INTL_BUNDLED_14986004_R1</Build><SiteDetails><Site>US</Site><SiteID>0</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Canada</Site><SiteID>2</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>UK</Site><SiteID>3</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Germany</Site><SiteID>77</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Australia</Site><SiteID>15</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>France</Site><SiteID>71</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>eBayMotors</Site><SiteID>100</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Italy</Site><SiteID>101</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Netherlands</Site><SiteID>146</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Spain</Site><SiteID>186</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>India</Site><SiteID>203</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>HongKong</Site><SiteID>201</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Singapore</Site><SiteID>216</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Malaysia</Site><SiteID>207</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Philippines</Site><SiteID>211</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>CanadaFrench</Site><SiteID>210</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Poland</Site><SiteID>212</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Belgium_Dutch</Site><SiteID>123</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Belgium_French</Site><SiteID>23</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Austria</Site><SiteID>16</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Switzerland</Site><SiteID>193</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Ireland</Site><SiteID>205</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></G
eteBayDetailsResponse> I need to get the value of the SiteID node where the Site node matches whatever variable I pass. The code I have is:

$xml_file = 'SiteDetails.xml';
$xmlDoc = new DomDocument();
$xmlDoc->load($xml_file);
$xpath = new DOMXpath($xmlDoc);
$siteIDList = $xpath->query("/GeteBayDetailsResponse/SiteDetails[Site=$site]/SiteID");
var_dump($siteIDList);
echo $siteIDList->SiteID;

I get the following result: object(DOMNodeList)#11 (0) { } Can anyone help? I want to get a value of 3.
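The query comes back empty because every element in that document lives in the urn:ebay:apis:eBLBaseComponents default namespace, so unprefixed names match nothing; the variable in the predicate also needs quotes. A sketch of the corrected lookup:

<?php
$xmlDoc = new DOMDocument();
$xmlDoc->load('SiteDetails.xml');

$xpath = new DOMXpath($xmlDoc);
// Bind an arbitrary prefix to the document's default namespace.
$xpath->registerNamespace('e', 'urn:ebay:apis:eBLBaseComponents');

$site = 'UK';
// Note the quotes around the variable inside the predicate.
$nodes = $xpath->query("/e:GeteBayDetailsResponse/e:SiteDetails[e:Site='$site']/e:SiteID");

if ($nodes->length > 0) {
    echo $nodes->item(0)->nodeValue;   // prints 3 for UK
}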

    Read the article

  • Sockets: Transport endpoint is not connected on send

    - by TheoretiCAL
    I'm trying to learn socket programming from http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html and am attempting to build a SOCK_STREAM client/server. My client: #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #include <string.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #define SERVERPORT "4951" // the port users will be connecting to int main(int argc, char *argv[]) { int sockfd; struct addrinfo hints, *servinfo, *p; int rv; int numbytes; if (argc != 3) { fprintf(stderr,"usage: talker hostname message\n"); exit(1); } memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_STREAM; if ((rv = getaddrinfo(argv[1], SERVERPORT, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return 1; } // loop through all the results and make a socket for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("talker: socket"); continue; if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) { close(sockfd); perror("client: connect"); continue; } } break; } if (p == NULL) { fprintf(stderr, "talker: failed to bind socket\n"); return 2; } if ((numbytes = send(sockfd, argv[2], strlen(argv[2]), 0) == -1)) { perror("talker: send"); exit(1); } freeaddrinfo(servinfo); printf("talker: sent %d bytes to %s\n", numbytes, argv[1]); close(sockfd); return 0; } Server: #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #include <string.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #define MYPORT "4951" // the port users will be connecting to #define MAXBUFLEN 100 static int backlog = 10; // get sockaddr, IPv4 or IPv6: void *get_in_addr(struct sockaddr *sa) { if (sa->sa_family == AF_INET) { return &(((struct sockaddr_in*)sa)->sin_addr); } return &(((struct sockaddr_in6*)sa)->sin6_addr); } int main(void) { int sockfd; struct addrinfo hints, *servinfo, *p; int rv; int numbytes; int new_fd; socklen_t addr_size; struct sockaddr_storage their_addr; char buf[MAXBUFLEN]; char s[INET6_ADDRSTRLEN]; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; // set to AF_INET to force IPv4 hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // use my IP if ((rv = getaddrinfo(NULL, MYPORT, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return 1; } // loop through all the results and bind to the first we can for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("listener: socket"); continue; } int yes=1; // lose the pesky "Address already in use" error message if (setsockopt(sockfd,SOL_SOCKET,SO_REUSEADDR,&yes,sizeof(int)) == -1) { perror("setsockopt"); exit(1); } if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) { close(sockfd); perror("listener: bind"); continue; } if (listen(sockfd,backlog) == -1){ close(sockfd); perror("listener:listen"); continue; } break; } if (p == NULL) { fprintf(stderr, "listener: failed to bind socket\n"); return 2; } freeaddrinfo(servinfo); printf("listener: waiting to recv..\n"); while(1){ addr_size = sizeof their_addr; if ((new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &addr_size))==-1){ perror("accept"); exit(1); } if ((numbytes = recv(new_fd, buf, MAXBUFLEN-1 , 0) == -1)) { perror("recv"); exit(1); } printf("listener: got 
packet from %s\n", inet_ntop(their_addr.ss_family, get_in_addr((struct sockaddr *)&their_addr), s, sizeof s)); printf("listener: packet is %d bytes long\n", numbytes); buf[numbytes] = '\0'; printf("listener: packet contains \"%s\"\n", buf); close(sockfd); } return 0; } Upon executing the client, I get " send: Transport endpoint is not connected" and I'm not sure where I went wrong. Thanks.
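The error is explained by the client's connect loop: the connect() call sits inside the socket()-failure branch, after a continue, so it is never reached and send() runs on an unconnected socket. There is also a parenthesis bug in the send check: (numbytes = send(...) == -1) assigns the result of the comparison to numbytes. A corrected sketch of both spots:

/* connect() must run when socket() succeeds, not inside its error branch */
for (p = servinfo; p != NULL; p = p->ai_next) {
    if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
        perror("talker: socket");
        continue;
    }
    if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
        close(sockfd);
        perror("talker: connect");
        continue;
    }
    break;    /* connected */
}

/* parentheses: assign first, then compare */
if ((numbytes = send(sockfd, argv[2], strlen(argv[2]), 0)) == -1) {
    perror("talker: send");
    exit(1);
}

The server's recv call has the same parenthesis problem, and inside the accept loop it closes sockfd (the listening socket) where new_fd is the one that should be closed.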

    Read the article

  • Trying to use json to populate areas of my website using mysql, php, and jquery.

    - by RyanPitts
    Ok, so this is my first attempt at doing anything with JSON. I have done a lot with PHP and MySql as well as jQuery and JavaScript...but nothing with JSON. I have some data in a MySql database. In the codes below i am using a PHP file to retrieve the data from the MySql database and using json_encode to format it to JSON. This file is being called by the JavaScript that runs when the page loads (well, it runs on document.ready actually). I then use jQuery to access the JSON keys and values to fill in areas of the page "dynamically". Here is the code snippets i am using (excuse my "noobness" on writing these snippets, still learning all this). This is my script that is on my HTML page test.php: <script type="text/javascript"> $(document).ready(function(){ $.getJSON("json_events.php",function(data){ $.each(data.events, function(i,events){ var tblRow = "<tr>" +"<td>" + events.id + "</td>" +"<td>" + events.customerId + "</td>" +"<td>" + events.filingName + "</td>" +"<td>" + events.title + "</td>" +"<td>" + events.details + "</td>" +"<td>" + events.dateEvent + "</td>" +"<td><a href='assets/customers/testchurch/events/" + events.image + "'>" + events.image + "</a></td>" +"<td>" + events.dateStart + "</td>" +"<td>" + events.dateEnd + "</td>" +"</tr>" $(tblRow).appendTo("#eventsdata tbody"); }); }); $.getJSON("json_events.php",function(data){ $.each(data.events, function(i,events){ $("#title").html("First event title: " + events.title + " ..."); }); }); }); </script> This is the code for the php file being called by the above JS: json_events.php <?php require_once('includes/configread.php'); $arrayEvents = array(); $resultsEvents = mysql_query("SELECT * FROM events"); while($objectEvents = mysql_fetch_object($resultsEvents)) { $arrayEvents[] = $objectEvents; } $json_object_events = json_encode($arrayEvents); $json_events = "{\"events\": " . $json_object_events . " }"; echo $json_events; require_once('includes/closeconnread.php'); ?> This is my JSON that is held in the variable $json_events from my php file json_events.php: { "events": [ { "id": "2", "customerId": "1004", "filingName": "testchurch", "title": "Kenya 2011 Training Meeting", "details": "This meeting will be taking place on Sunday, February 10th @ 6pm. Get ready for our annual Kenya trip in 2011. We have been blessed to be able to take this trip for the past 3 years. Now, it's your turn to bless others! Come to this training meeting to learn how to minister to the people in Kenya and also learn what we'll be doing there.", "dateEvent": "2011-02-10", "image": "kenya2011.jpg", "dateStart": "2010-09-04", "dateEnd": "2011-02-10" }, { "id": "6", "customerId": "1004", "filingName": "testchurch", "title": "New Series: The Journey", "details": "We will be starting our new series titled "The Journey". Come worship with us as we walk with Paul on his 2nd missionary journey.", "dateEvent": "2011-01-02", "image": "", "dateStart": "2010-09-06", "dateEnd": "2011-01-02" } ] } This is my HTML on test.php: <table id="eventsdata" border="1"> <thead> <tr> <th>id</th> <th>customerId</th> <th>filingName</th> <th>title</th> <th>details</th> <th>dateEvent</th> <th>image</th> <th>dateStart</th> <th>dateEnd</th> </tr> </thead> <tbody></tbody> </table> <div id="title"></div> I have two questions really... Question 1: Does this code look like it is written correctly at first glance? Question 2: I want to be able to select only the title from the first event in the JSON array. The code i am using now is selecting the second events' title by default it seems. 
How can I accomplish this?
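Since data.events is an ordinary array, the first event's title can be read directly; the loop in the current code overwrites #title on every pass, which is why the last event's title is what ends up displayed. A sketch:

// Read only the first event's title; no $.each needed.
$.getJSON("json_events.php", function (data) {
    if (data.events.length > 0) {
        $("#title").html("First event title: " + data.events[0].title + " ...");
    }
});

Alternatively, returning false from inside a $.each callback stops the iteration after the first element.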

    Read the article

  • Android application displays black screen after running

    - by frgnvola
    When I click "Run as an Android Application" on Eclipse, the following is displayed in the console [2014-06-05 20:07:18 - StudentConnect] Android Launch! [2014-06-05 20:07:18 - StudentConnect] adb is running normally. [2014-06-05 20:07:18 - StudentConnect] Performing sandhu.student.connect.SplashActivity activity launch [2014-06-05 20:07:18 - StudentConnect] Using default Build Tools revision 19.0.0 [2014-06-05 20:07:18 - StudentConnect] Refreshing resource folders. [2014-06-05 20:07:18 - StudentConnect] Using default Build Tools revision 19.0.0 [2014-06-05 20:07:18 - StudentConnect] Starting incremental Pre Compiler: Checking resource changes. [2014-06-05 20:07:18 - StudentConnect] Nothing to pre compile! [2014-06-05 20:07:18 - StudentConnect] Starting incremental Package build: Checking resource changes. [2014-06-05 20:07:18 - StudentConnect] Using default Build Tools revision 19.0.0 [2014-06-05 20:07:18 - StudentConnect] Skipping over Post Compiler. [2014-06-05 20:07:20 - StudentConnect] Application already deployed. No need to reinstall. [2014-06-05 20:07:20 - StudentConnect] Starting activity sandhu.student.connect.SplashActivity on device 0f0898b2 [2014-06-05 20:07:21 - StudentConnect] ActivityManager: Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=sandhu.student.connect/.SplashActivity } [2014-06-05 20:07:21 - StudentConnect] ActivityManager: Warning: Activity not started, its current task has been brought to the front After deployed to my phone, it only displays a black screen. I recently implemented a splash screen, but it was working fine before; however I think it might have something to do with the problem. Here are my java and xml files: MainActivity.java package sandhu.student.connect; import android.app.Activity; import android.os.Bundle; import android.view.KeyEvent; import android.view.View; import android.webkit.WebSettings; import android.webkit.WebView; import android.webkit.WebViewClient; public class MainActivity extends Activity { public WebView student_zangle; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); WebView student_zangle = (WebView) findViewById(R.id.student_zangle); student_zangle.loadUrl("https://zangleweb01.clovisusd.k12.ca.us/studentconnect/"); student_zangle.setWebViewClient(new WebViewClient()); student_zangle.setScrollBarStyle(View.SCROLLBARS_INSIDE_OVERLAY); WebSettings settings = student_zangle.getSettings(); settings.setJavaScriptEnabled(true); settings.setBuiltInZoomControls(true); settings.setLoadWithOverviewMode(true); settings.setUseWideViewPort(true); } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { WebView student_zangle = (WebView) findViewById(R.id.student_zangle); if ((keyCode == KeyEvent.KEYCODE_BACK) && student_zangle.canGoBack()) { student_zangle.goBack(); return true; } else { finish(); } return super.onKeyDown(keyCode, event); } } activity_main.xml <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@drawable/blue" tools:context=".MainActivity" > <WebView android:id="@+id/student_zangle" android:layout_width="match_parent" android:layout_height="match_parent" /> </RelativeLayout> SplashActivity.java package sandhu.student.connect; import android.os.Bundle; import 
android.preference.PreferenceActivity; public class SplashActivity extends PreferenceActivity { @SuppressWarnings("deprecation") @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); addPreferencesFromResource(R.xml.prefs); } } splash_activity.xml <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@drawable/blue" android:orientation="vertical" > <ImageView android:id="@+id/imageView1" android:layout_width="250dp" android:layout_height="100dp" android:layout_alignParentTop="true" android:layout_centerHorizontal="true" android:layout_marginTop="145dp" android:contentDescription="@string/zangle_logo" android:src="@drawable/logo" /> </RelativeLayout> Also, here is a full copy of the logcat error output: 06-05 20:19:46.698: E/Watchdog(817): !@Sync 1952 06-05 20:20:09.971: E/memtrack(16438): Couldn't load memtrack module (No such file or directory) 06-05 20:20:09.971: E/android.os.Debug(16438): failed to load memtrack module: -2 06-05 20:20:11.012: E/memtrack(16451): Couldn't load memtrack module (No such file or directory) 06-05 20:20:11.012: E/android.os.Debug(16451): failed to load memtrack module: -2 06-05 20:20:11.202: E/EnterpriseContainerManager(817): ContainerPolicy Service is not yet ready!!! Please help me figure out what is wrong, or at least point me in the right direction. Thanks in advance.
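One likely culprit is visible in the code itself: SplashActivity extends PreferenceActivity and only loads R.xml.prefs; it never shows splash_activity.xml and never starts MainActivity, so the app can sit on an empty screen. A conventional splash sketch (the 2000 ms delay is an arbitrary choice, not from the question):

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;

public class SplashActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.splash_activity);   // actually show the splash layout

        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                startActivity(new Intent(SplashActivity.this, MainActivity.class));
                finish();   // keep the splash off the back stack
            }
        }, 2000);
    }
}

The "Activity not started, its current task has been brought to the front" warning just means the already-installed build was reused; uninstalling the app or forcing a reinstall ensures the new code is actually deployed.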

    Read the article

  • $.post is not working

    - by BEBO
    i am trying to post data to Mysql using jquery $.post and php page. my code is not running and nothing is added to the mysql table. I am not sure if the path i am creating is wrong but any help would be appreciated. Jquery location: f_js/tasks/TaskTest.js <script type="text/javascript"> $(document).ready(function(){ $("#AddTask").click(function(){ var acct = $('#acct').val(); var quicktask = $('#quicktask').val(); var user = $('#user').val(); $.post('addTask.php',{acct:acct,quicktask:quicktask,user:user}, function(data){ $('#result').fadeIn('slow').html(data); }); }); }); </script> addTask.php (runs the jqeury code) <?php include 'dbconnect.php'; include 'sessions.php'; $acct = $_POST['acct']; $task = $_POST['quicktask']; $taskstatus = 'Active'; //get task Creator $user = $_POST['user']; //query task creator from users table $allusers = mysql_query("SELECT * FROM users WHERE username = '$user'"); while ($rows = mysql_fetch_array($allusers)) { //get first and last name for task creator $taskOwner = $rows['user_firstname']; $taskOwnerLast = $rows['user_lastname']; $taskOwnerFull = $taskOwner." ".$taskOwnerLast; mysql_query("INSERT INTO tasks (taskresource, tasktitle, taskdetail, taskstatus, taskowner, taskOwnerFullName) VALUES ('$acct', '$task', '$task', '$taskstatus', '$user', '$taskOwnerFull' )"); echo "inserted"; } ?> Accountview.php finally the front page <html> <div class="input-cont "> <input type="text" class="form-control col-lg-12" placeholder="Add a quick Task..." name ="quicktask" id="quicktask"> </div> <div class="form-group"> <div class="pull-right chat-features"> <a href="javascript:;"> <i class="icon-camera"></i> </a> <a href="javascript:;"> <i class="icon-link"></i> </a> <input type="button" class="btn btn-danger" name="AddTask" id="AddTask" value="Add" /> <input type="hidden" name="acct" id="acct" value="<?php echo $_REQUEST['acctname']?>"/> <input type="hidden" name="user" id="user" value="<?php $username = $_SESSION['username']; echo $username?>"/> <div id="result">result</div> </div> </div> <!-- js placed at the end of the document so the pages load faster --> <script src="js/jquery.js"></script> <script src="f_js/tasks/TaskTest.js"></script> <!--common script for all pages--> <script src="js/common-scripts.js"></script> <script type="text/javascript" src="assets/gritter/js/jquery.gritter.js"></script> <script src="js/gritter.js" type="text/javascript"></script> <script> </html> Firebug reponse: Response Headers Connection Keep-Alive Content-Length 0 Content-Type text/html Date Fri, 08 Nov 2013 21:48:50 GMT Keep-Alive timeout=5, max=100 Server Apache/2.4.4 (Win32) OpenSSL/0.9.8y PHP/5.4.16 X-Powered-By PHP/5.4.16 refresh 5; URL=index.php Request Headers Accept */* Accept-Encoding gzip, deflate Accept-Language en-US,en;q=0.5 Content-Length 13 Content-Type application/x-www-form-urlencoded; charset=UTF-8 Cookie PHPSESSID=6gufl3guiiddreg8cdlc0htnc6 Host localhost Referer http://localhost/betahtml/AccountView.php?acctname=client%201 User-Agent Mozilla/5.0 (Windows NT 6.3; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 X-Requested-With XMLHttpRequest
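The captured response is the giveaway: Content-Length 0 plus a refresh: 5; URL=index.php header suggests sessions.php is bouncing the AJAX request toward the login page instead of letting addTask.php run, so nothing reaches MySQL. A debugging sketch using jQuery 1.5+ deferred callbacks to see what the server actually returns (not a fix by itself):

// Log exactly what addTask.php sends back.
$.post('addTask.php', { acct: acct, quicktask: quicktask, user: user })
    .done(function (data) {
        $('#result').fadeIn('slow').html(data.length ? data : '(empty response)');
    })
    .fail(function (xhr, status, error) {
        console.log('addTask.php failed:', status, error, xhr.responseText);
    });

If the response turns out to be the login page, the session check in sessions.php needs to accept (or specially handle) authenticated AJAX requests.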

    Read the article

  • Cannot determine ethernet address for proxy ARP on PPTP

    - by Linux Intel
    I installed a PPTP server on a CentOS 6 64-bit server.

        PPTP Server IP : 55.66.77.10
        PPTP Local IP  : 10.0.0.1
        Client1 IP     : 10.0.0.60 (CentOS 5 64-bit)
        Client2 IP     : 10.0.0.61 (CentOS 5 64-bit)

    The PPTP server can ping Client1, and Client1 can ping the PPTP server. The PPTP server can ping Client2, and Client2 can ping the PPTP server. The problem is that Client1 cannot ping Client2, and I also get this error in the PPTP server error log:

        Cannot determine ethernet address for proxy ARP

    Ping from Client2 to Client1:

        PING 10.0.0.60 (10.0.0.60) 56(84) bytes of data.
        --- 10.0.0.60 ping statistics ---
        6 packets transmitted, 0 received, 100% packet loss, time 5000ms

    route -n on the PPTP server:

        Destination  Gateway      Genmask          Flags Metric Ref Use Iface
        10.0.0.60    0.0.0.0      255.255.255.255  UH    0     0   0   ppp0
        10.0.0.61    0.0.0.0      255.255.255.255  UH    0     0   0   ppp1
        55.66.77.10  0.0.0.0      255.255.255.248  U     0     0   0   eth0
        10.0.0.0     0.0.0.0      255.0.0.0        U     0     0   0   eth0
        0.0.0.0      55.66.77.19  0.0.0.0          UG    0     0   0   eth0

    route -n on Client1:

        Destination  Gateway      Genmask          Flags Metric Ref Use Iface
        10.0.0.1     0.0.0.0      255.255.255.255  UH    0     0   0   ppp0
        55.66.77.10  70.14.13.19  255.255.255.255  UGH   0     0   0   eth0
        10.0.0.0     0.0.0.0      255.0.0.0        U     0     0   0   eth1
        0.0.0.0      70.14.13.19  0.0.0.0          UG    0     0   0   eth0

    route -n on Client2:

        Destination  Gateway       Genmask          Flags Metric Ref Use Iface
        10.0.0.1     0.0.0.0       255.255.255.255  UH    0     0   0   ppp0
        55.66.77.10  84.56.120.60  255.255.255.255  UGH   0     0   0   eth1
        10.0.0.0     0.0.0.0       255.0.0.0        U     0     0   0   eth0
        0.0.0.0      84.56.120.60  0.0.0.0          UG    0     0   0   eth1

    cat /etc/ppp/options.pptpd on the PPTP server:

        ###############################################################################
        # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
        #
        # Sample Poptop PPP options file /etc/ppp/options.pptpd
        # Options used by PPP when a connection arrives from a client.
        # This file is pointed to by /etc/pptpd.conf option keyword.
        # Changes are effective on the next connection. See "man pppd".
        #
        # You are expected to change this file to suit your system. As
        # packaged, it requires PPP 2.4.2 and the kernel MPPE module.
        ###############################################################################

        # Authentication

        # Name of the local system for authentication purposes
        # (must match the second field in /etc/ppp/chap-secrets entries)
        name pptpd

        # Strip the domain prefix from the username before authentication.
        # (applies if you use pppd with chapms-strip-domain patch)
        #chapms-strip-domain

        # Encryption
        # (There have been multiple versions of PPP with encryption support,
        # choose which of the following sections you will use.)

        # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
        # {{{
        refuse-pap
        refuse-chap
        refuse-mschap
        # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
        # Challenge Handshake Authentication Protocol, Version 2] authentication.
        require-mschap-v2
        # Require MPPE 128-bit encryption
        # (note that MPPE requires the use of MSCHAP-V2 during authentication)
        require-mppe-128
        # }}}

        # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
        # {{{
        #-chap
        #-chapms
        # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
        # Challenge Handshake Authentication Protocol, Version 2] authentication.
        #+chapms-v2
        # Require MPPE encryption
        # (note that MPPE requires the use of MSCHAP-V2 during authentication)
        #mppe-40     # enable either 40-bit or 128-bit, not both
        #mppe-128
        #mppe-stateless
        # }}}

        # Network and Routing

        # If pppd is acting as a server for Microsoft Windows clients, this
        # option allows pppd to supply one or two DNS (Domain Name Server)
        # addresses to the clients. The first instance of this option
        # specifies the primary DNS address; the second instance (if given)
        # specifies the secondary DNS address.
        #ms-dns 10.0.0.1
        #ms-dns 10.0.0.2

        # If pppd is acting as a server for Microsoft Windows or "Samba"
        # clients, this option allows pppd to supply one or two WINS (Windows
        # Internet Name Services) server addresses to the clients. The first
        # instance of this option specifies the primary WINS address; the
        # second instance (if given) specifies the secondary WINS address.
        #ms-wins 10.0.0.3
        #ms-wins 10.0.0.4

        # Add an entry to this system's ARP [Address Resolution Protocol]
        # table with the IP address of the peer and the Ethernet address of this
        # system. This will have the effect of making the peer appear to other
        # systems to be on the local ethernet.
        # (you do not need this if your PPTP server is responsible for routing
        # packets to the clients -- James Cameron)
        proxyarp

        # Normally pptpd passes the IP address to pppd, but if pptpd has been
        # given the delegate option in pptpd.conf or the --delegate command line
        # option, then pppd will use chap-secrets or radius to allocate the
        # client IP address. The default local IP address used at the server
        # end is often the same as the address of the server. To override this,
        # specify the local IP address here.
        # (you must not use this unless you have used the delegate option)
        #10.8.0.100

        # Logging

        # Enable connection debugging facilities.
        # (see your syslog configuration for where pppd sends to)
        debug

        # Print out all the option values which have been set.
        # (often requested by mailing list to verify options)
        #dump

        # Miscellaneous

        # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
        # access.
        lock

        # Disable BSD-Compress compression
        nobsdcomp

        # Disable Van Jacobson compression
        # (needed on some networks with Windows 9x/ME/XP clients, see posting to
        # poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
        # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
        novj
        novjccomp

        # turn off logging to stderr, since this may be redirected to pptpd,
        # which may trigger a loopback
        nologfd

        # put plugins here
        # (putting them higher up may cause them to send messages to the pty)

    cat /etc/ppp/options.pptp on Client1 and Client2:

        ###############################################################################
        # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $
        #
        # Sample PPTP PPP options file /etc/ppp/options.pptp
        # Options used by PPP when a connection is made by a PPTP client.
        # This file can be referred to by an /etc/ppp/peers file for the tunnel.
        # Changes are effective on the next connection. See "man pppd".
        #
        # You are expected to change this file to suit your system. As
        # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/
        # and the kernel MPPE module available from the CVS repository also on
        # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.
        ###############################################################################

        # Lock the port
        lock

        # Authentication
        # We don't need the tunnel server to authenticate itself
        noauth

        # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2
        # (you may need to remove these refusals if the server is not using MPPE)
        refuse-pap
        refuse-eap
        refuse-chap
        refuse-mschap

        # Compression
        # Turn off compression protocols we know won't be used
        nobsdcomp
        nodeflate

        # Encryption
        # (There have been multiple versions of PPP with encryption support,
        # choose which of the following sections you will use. Note that MPPE
        # requires the use of MSCHAP-V2 during authentication)
        #
        # Note that using PPTP with MPPE and MSCHAP-V2 should be considered
        # insecure:
        # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2
        # https://github.com/moxie0/chapcrack/blob/master/README.md
        # http://technet.microsoft.com/en-us/security/advisory/2743314

        # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackerras
        # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o
        # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module
        # is not allowed and PPTP-MPPE is not available.
        # {{{
        # Require MPPE 128-bit encryption
        #require-mppe-128
        # }}}

        # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec
        # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o
        # {{{
        # Require MPPE 128-bit encryption
        #mppe required,stateless
        # }}}

    iptables is stopped on the clients and the server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?
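    One plausible reading of the route tables above, offered as a sketch rather than a confirmed fix: each client only has a /32 host route for the server (10.0.0.1) via ppp0, so traffic for the other client falls through to the local 10.0.0.0/8 route on its Ethernet interface and never enters the tunnel, while the server's "Cannot determine ethernet address for proxy ARP" is a side effect of proxyarp trying to publish tunnel addresses on eth0. Commands to verify, assuming the interface names shown in the question:

        # on the server: confirm forwarding between the ppp links is allowed
        $ sysctl net.ipv4.ip_forward        # should print 1
        $ iptables -L FORWARD -n -v         # look for anything blocking ppp0 <-> ppp1
        # if the FORWARD chain is restrictive, explicitly permit tunnel-to-tunnel traffic
        $ iptables -A FORWARD -i ppp+ -o ppp+ -j ACCEPT

        # on Client1: steer traffic for Client2 into the tunnel instead of the local LAN
        $ ip route add 10.0.0.61/32 dev ppp0
        # on Client2: the mirror image
        $ ip route add 10.0.0.60/32 dev ppp0

    If the host routes fix the ping, the proxy ARP warning can be left alone; proxy ARP only matters when tunnel addresses must appear directly on a real Ethernet segment.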

    Read the article

  • l2tp / ipsec debian Openswan U2.6.38 does not connect

    - by locojay
    I am trying to get IPsec/L2TP running on a Debian server with an iPhone as a client, but I always get:

        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [RFC 3947] method set to=115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: ignoring Vendor ID payload [FRAGMENTATION 80000000]
        Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [Dead Peer Detection]
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: responding to Main Mode from unknown peer <clientip>
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: STATE_MAIN_R1: sent MR1, expecting MI2
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): both are NATed
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: STATE_MAIN_R2: sent MR2, expecting MI3
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: Main mode peer ID is ID_IPV4_ADDR: '10.2.210.176'
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: switched from "L2TP-PSK-noNAT" to "L2TP-PSK-noNAT"
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: deleting connection "L2TP-PSK-noNAT" instance with peer <clientip> {isakmp=#0/ipsec=#0}
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: new NAT mapping for #20, was <clientip>:43598, now <clientip>:49826
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024}
        Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: Dead Peer Detection (RFC 3706): enabled
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: the peer proposed: <public ip>/32:17/1701 -> 10.2.210.176/32:17/0
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: NAT-Traversal: received 2 NAT-OA. using first, ignoring others
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: responding to Quick Mode proposal {msgid:311d3282}
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: us: 171.138.2.13<171.138.2.13>:17/1701
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: them: <clientip>[10.2.210.176]:17/61719
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: Dead Peer Detection (RFC 3706): enabled
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
        Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x05e23c9a <0x216077a9 xfrm=AES_256-HMAC_SHA1 NATOA=10.2.210.176 NATD=<clientip>:49826 DPD=enabled}
        Dec 2 21:00:26 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received Delete SA(0x05e23c9a) payload: deleting IPSEC State #21
        Dec 2 21:00:26 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received and ignored informational message
        Dec 2 21:00:27 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received Delete SA payload: deleting ISAKMP State #20
        Dec 2 21:00:27 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip>: deleting connection "L2TP-PSK-noNAT" instance with peer <clientip> {isakmp=#0/ipsec=#0}
        Dec 2 21:00:27 vpn pluto[22711]: packet from <clientip>:49826: received and ignored informational message
        Dec 2 21:00:27 vpn pluto[22711]: ERROR: asynchronous network error report on eth0 (sport=4500) for message to <clientip> port 49826, complainant <clientip>: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]

    My setup looks like this:

        verizon fios actiontec -- DMZ -- ddwrt router -- debian xen instance

        actiontec : 192.168.1.1
        ddwrt : 171.138.2.1
        debian xen server : 171.138.2.13

    UDP 500, 4500 and 1701 are forwarded on the ddwrt to the debian xen instance, and VPN passthrough is enabled.

    /etc/ipsec.conf:

        config setup
            dumpdir=/var/run/pluto/
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:25.0.0.0/8,%v6:fd00::/8,%v6:fe80::/10,%v4:!171.138.2.0/24,%v4:!192.168.1.0/24
            protostack=netkey

        # Add connections here
        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            # we cannot rekey for %any, let client rekey
            rekey=no
            # Apple iOS doesn't send delete notify so we need dead peer detection
            # to detect vanishing clients
            dpddelay=30
            dpdtimeout=120
            dpdaction=clear
            # Set ikelifetime and keylife to same defaults windows has
            ikelifetime=8h
            keylife=1h
            # l2tp-over-ipsec is transport mode
            type=transport
            #
            left=171.138.2.13
            #
            # For updated Windows 2000/XP clients,
            # to support old clients as well, use leftprotoport=17/%any
            leftprotoport=17/1701
            #
            # The remote user.
            #
            right=%any
            # Using the magic port of "%any" means "any one single port". This is
            # a work around required for Apple OSX clients that use a randomly
            # high port.
            rightprotoport=17/%any

        # force all to be nat'ed. because of ios
        conn passthrough-for-non-l2tp
            type=passthrough
            left=171.138.2.13
            leftnexthop=171.138.2.1
            right=0.0.0.0
            rightsubnet=0.0.0.0/0
            auto=route

    /etc/xl2tp/xl2tp.conf:

        [global]
        ipsec saref = no
        listen-addr = 171.138.2.13
        ;port = 1701
        ;debug network = yes
        ;debug tunnel = yes
        ;debug network = yes
        ;debug packet = yes

        [lns default]
        ip range = 171.138.2.231-171.138.2.239
        local ip = 171.138.2.13
        assign ip = yes
        require chap = no
        refuse pap = no
        require authentication = no
        ;name = OpenswanVPN
        ppp debug = yes
        pppoptfile = /etc/ppp/options.xlt2tpd
        lenght bit = yes

    /etc/ppp/options.xl2tpd:

        ;require-mschap-v2
        pcp-accept-local
        ipcp-accept-local
        ipcp-accept-remote
        ;ms-dns 171.138.2.1
        ms-dns 192.168.1.1
        ms-dns 8.8.8.8
        name l2tpd
        noccp
        auth
        crtscts
        idle 1800
        mtu 1410
        mru 1410
        lock
        proxyarp
        connect-delay 5000
        debug
        dump
        logfd 2
        logfile /var/log/xl2tpd.log

    ipsec verify:

        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                   [OK]
        Linux Openswan U2.6.38/K3.0.0-1-amd64 (netkey)
        Checking for IPsec support in kernel              [OK]
         SAref kernel support                             [N/A]
         NETKEY: Testing XFRM related proc values         [OK] [OK] [OK]
        Checking that pluto is running                    [OK]
         Pluto listening for IKE on udp 500               [OK]
         Pluto listening for NAT-T on udp 4500            [OK]
        Two or more interfaces found, checking IP forwarding [FAILED]
        Checking NAT and MASQUERADEing                    [OK]
        Checking for 'ip' command                         [OK]
        Checking /bin/sh is not /bin/dash                 [WARNING]
        Checking for 'iptables' command                   [OK]
        Opportunistic Encryption Support                  [DISABLED]

    The FAILED can be ignored, I guess, since cat /proc/sys/net/ipv4/ip_forward returns 1. Any help would be much appreciated, as I don't have any idea why this is not working.
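    One detail worth checking in the files as quoted: xl2tp.conf points pppoptfile at /etc/ppp/options.xlt2tpd, while the options file shown is /etc/ppp/options.xl2tpd. If that is not just a transcription slip, xl2tpd is starting pppd with a different (possibly empty) options file. Beyond that, the log pattern, an IPsec SA established and then deleted by the client about 20 seconds later, usually means the L2TP daemon never answered on UDP 1701. A short diagnostic sketch, assuming the interface and address from the question:

        # is xl2tpd actually bound to 171.138.2.13:1701?
        $ netstat -unlp | grep 1701
        # do L2TP packets reach the host after the SA comes up?
        # (with the netkey stack, inbound packets usually appear decrypted here)
        $ tcpdump -ni eth0 udp port 1701

    Inbound L2TP requests with no replies would put the problem on the xl2tpd side (binding, config parsing, or the pppoptfile mismatch above) rather than in Openswan.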

    Read the article

  • Setup VPN issue on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem setting up a VPN server on my Ubuntu VPS. Here is the server environment:

        Ubuntu Server 12.04 x86_64
        xl2tpd 1.3.1+dfsg-1
        pppd 2.4.5-5ubuntu1
        openswan 1:2.6.38-1~precise1

    After installing the software and configuring it, ipsec verify reports:

        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                   [OK]
        Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey)
        Checking for IPsec support in kernel              [OK]
         SAref kernel support                             [N/A]
         NETKEY: Testing XFRM related proc values         [OK] [OK] [OK]
        Checking that pluto is running                    [OK]
         Pluto listening for IKE on udp 500               [OK]
         Pluto listening for NAT-T on udp 4500            [OK]
        Checking for 'ip' command                         [OK]
        Checking /bin/sh is not /bin/dash                 [WARNING]
        Checking for 'iptables' command                   [OK]
        Opportunistic Encryption Support                  [DISABLED]

    /var/log/auth.log message:

        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000]
        Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection]
        Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address]
        Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1
        Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52'
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT"
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0}
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024}
        Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb}
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
        Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled}
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5
        Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0}
        Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message

    xl2tpd -D message:

        xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs
        xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes
        xl2tpd[4289]: setsockopt recvref[30]: Protocol not available
        xl2tpd[4289]: This binary does not support kernel L2TP.
        xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289
        xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
        xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001
        xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002
        xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006
        xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701

    Then it just stops there, with no further response. I can't connect the VPN from my Mac client; the /var/log/system.log message:

        Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0
        Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501
        Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])...
        Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started
        Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting.
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode).
        Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode).
        Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me).
        Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established
        Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address]
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA).
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message).
        Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).

    Can anyone help? Thanks a million!
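    The Openswan log shows the familiar pattern: the IPsec SA comes up, then the Mac deletes it about 20 seconds later because no L2TP answer ever arrived. Since xl2tpd reports that it is listening, a sketch of the next checks (generic interface names; adjust to the VPS):

        # watch for L2TP packets arriving and, crucially, for replies leaving
        $ tcpdump -ni any udp port 1701
        # inspect the kernel IPsec policies that should match the return traffic
        $ ip xfrm policy

    If requests arrive but replies never leave, the return packets are probably not matching the transport-mode policy; in similar reports that traces back to iptables rules or to the leftprotoport/rightprotoport pair in ipsec.conf.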

    Read the article

  • Network Logon Issues with Group Policy and Network

    - by bobloki
    I am gravely in need of your help and assistance. We have a problem with logon and startup on our Windows 7 Enterprise systems. We have more than 3000 Windows desktops situated in roughly 20+ buildings around campus, and almost every computer on campus shows the problem I will describe. I have spent over one month poring over .etl files from Windows Performance Analyzer (a great product) and hundreds of thousands of event logs, and I come to you today humbled that I could not figure this out.

    The problem, simply put, is that our logon times are extremely long. An average first-time logon takes roughly 2-10 minutes depending on the software installed. All computers run Windows 7, the oldest being 5 years old. Startup times on various computers range from good (1-2 minutes) to very bad (5-60). Second-time logons range from 30 seconds to 4 minutes. We have a gigabit connection to each computer on the network, and 5 domain controllers which also double as our DNS servers.

    Initial testing led us to believe that this was a software problem, so I spent a few days testing machines, only to find inconsistent results in the .etl files from xperfview: each subset of computers on campus had a different subset of software issues, none seeming to interfere with logon, just startup. So I started looking at our Group Policy and located some very interesting event IDs:

        Group Policy 1129: The processing of Group Policy failed because of lack of network connectivity to a domain controller.

        Group Policy 1055: The processing of Group Policy failed. Windows could not resolve the computer name. This could be caused by one or more of the following: a) Name Resolution failure on the current domain controller. b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

        NETLOGON 5719: This computer was not able to set up a secure session with a domain controller in domain OURDOMAIN due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.

        E1kexpress 27: Intel(R) 82567LM-3 Gigabit Network Connection - Network link is disconnected.

        NetBT 4300: The driver could not be created.

        WMI 10: Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected.

    More or less, with timestamps it becomes apparent that the network may be the issue:

        1:25:57 - Group Policy is trying to discover the domain controller information
        1:25:57 - The network link has been disconnected
        1:25:58 - The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has successfully processed. If you do not see a success message for several hours, then contact your administrator.
        1:25:58 - Making LDAP calls to connect and bind to active directory. DC1.ourdomain.edu
        1:25:58 - Call failed after 0 milliseconds.
        1:25:58 - Forcing rediscovery of domain controller details.
        1:25:58 - Group policy failed to discover the domain controller in 1030 milliseconds
        1:25:58 - Periodic policy processing failed for computer OURDOMAIN\%name%$ in 1 seconds.
        1:25:59 - A network link has been established at 1Gbps at full duplex
        1:26:00 - The network link has been disconnected
        1:26:02 - NtpClient was unable to set a domain peer to use as a time source because of discovery error. NtpClient will try again in 3473457 minutes and DOUBLE THE REATTEMPT INTERVAL thereafter.
        1:26:05 - A network link has been established at 1Gbps at full duplex
        1:26:08 - Name resolution for the name %Name% timed out after none of the configured DNS servers responded.
        1:26:10 - The TCP/IP NetBIOS Helper service entered the running state.
        1:26:11 - The time provider NtpClient is currently receiving valid time data at dc4.ourdomain.edu
        1:26:14 - User Logon Notification for Customer Experience Improvement Program
        1:26:15 - Group Policy received the notification Logon from Winlogon for session 1.
        1:26:15 - Making LDAP calls to connect and bind to Active Directory. dc4.ourdomain.edu
        1:26:18 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu. The call completed in 2309 milliseconds.
        1:26:18 - Group Policy successfully discovered the Domain Controller in 2918 milliseconds.
        1:26:18 - Computer details: Computer role : 2  Network name : (Blank)
        1:26:18 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu. The call completed in 2309 milliseconds.
        1:26:18 - Group Policy successfully discovered the Domain Controller in 2918 milliseconds.
        1:26:19 - The WinHTTP Web Proxy Auto-Discovery Service service entered the running state.
        1:26:46 - The Network Connections service entered the running state.
        1:27:10 - Retrieved account information
        1:27:10 - The system call to get account information completed.
        1:27:10 - Starting policy processing due to network state change for computer OURDOMAIN\%name%$
        1:27:10 - Network state change detected
        1:27:10 - Making system call to get account information.
        1:27:11 - Making LDAP calls to connect and bind to Active Directory. dc4.ourdomain.edu
        1:27:13 - Computer details: Computer role : 2  Network name : ourdomain.edu (Now not blank)
        1:27:13 - Group Policy successfully discovered the Domain Controller in 2886 milliseconds.
        1:27:13 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu The call completed in 2371 milliseconds.
        1:27:15 - Estimated network bandwidth on one of the connections: 0 kbps.
        1:27:15 - Estimated network bandwidth on one of the connections: 8545 kbps.
        1:27:15 - A fast link was detected. The Estimated bandwidth is 8545 kbps. The slow link threshold is 500 kbps.
        1:27:17 - Powershell - Engine state is changed from Available to Stopped.
        1:27:20 - Completed Group Policy Local Users and Groups Extension Processing in 4539 milliseconds.
        1:27:25 - Completed Group Policy Scheduled Tasks Extension Processing in 5210 milliseconds.
        1:27:27 - Completed Group Policy Registry Extension Processing in 1529 milliseconds.
        1:27:27 - Completed policy processing due to network state change for computer OURDOMAIN\%name%$ in 16 seconds.
        1:27:27 - The Group Policy settings for the computer were processed successfully. There were no changes detected since the last successful processing of Group Policy.

    Any help would be appreciated. Please ask for any relevant information and it will be provided as soon as possible.
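    The timeline reads like a race between link negotiation and Group Policy: GP begins DC discovery at 1:25:57, the NIC does not hold a stable link until several seconds later, and everything after that is retry overhead. Two mitigations commonly paired with this signature, offered as a hedged suggestion rather than a diagnosis: enable fast link convergence (portfast) on the access switch ports, and make Windows wait for the network before foreground policy processing. The latter is the "Always wait for the network at computer startup and logon" policy; as a registry sketch for a single test machine (verify the key against gpedit before any wider rollout):

        > reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\CurrentVersion\Winlogon" /v SyncForegroundPolicy /t REG_DWORD /d 1 /f
        > gpupdate /force

    Waiting for the network trades fewer Group Policy retries against a deliberate pause at boot, so measure both paths with xperf before committing to it.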

    Read the article

  • Heartbeat won't successfully start up resources from a cold boot when a failed node is present

    - by Matthew
    I currently have two Ubuntu servers running Heartbeat and DRBD. The servers are directly connected with a 1000 Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. Now, let's say that one node is down and the remaining functional node is booting after having been shut down. The node that is still functioning won't start heartbeat and provide access to the DRBD resource from a cold boot; I have to manually restart heartbeat with sudo service heartbeat restart to get everything up and running. How can I get it to start cleanly from a cold boot when only one server is present?

    Here is the ha.cf:

        debug /var/log/ha-debug
        logfile /var/log/ha-log
        logfacility none
        keepalive 2
        deadtime 10
        warntime 7
        initdead 60
        ucast eth1 192.168.2.2
        ucast eth0 10.1.10.201
        node EMserver1
        node EMserver2
        respawn hacluster /usr/lib/heartbeat/ipfail
        ping 10.1.10.22 10.1.10.21 10.1.10.11
        auto_failback off

    Some material from the syslog:

        harc[4604]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/status status
        mach_down[4632]: 2012/11/27_13:54:49 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
        mach_down[4632]: 2012/11/27_13:54:49 info: mach_down takeover complete for node emserver2.
        Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: Initial resource acquisition complete (T_RESOURCES(us))
        Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: mach_down takeover complete.
        IPaddr[4679]: 2012/11/27_13:54:49 INFO: Resource is stopped
        Nov 27 13:54:49 EMserver1 heartbeat: [4605]: info: Local Resource acquisition completed.
        harc[4713]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
        ip-request-resp[4713]: 2012/11/27_13:54:49 received ip-request-resp IPaddr::10.1.10.254 OK yes
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        IPaddr[4759]: 2012/11/27_13:54:50 INFO: Resource is stopped
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 start
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated nic for 10.1.10.254: eth0
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated netmask for 10.1.10.254: 255.255.255.0
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: eval ifconfig eth0:0 10.1.10.254 netmask 255.255.255.0 broadcast 10.1.10.255
        IPaddr[4804]: 2012/11/27_13:54:50 INFO: Success
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/drbddisk r0 start
        Filesystem[4965]: 2012/11/27_13:54:50 INFO: Resource is stopped
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 start
        Filesystem[5039]: 2012/11/27_13:54:50 INFO: Running start for /dev/drbd1 on /shr
        Filesystem[5033]: 2012/11/27_13:54:51 INFO: Success
        ResourceManager[4732]: 2012/11/27_13:54:51 info: Running /etc/init.d/nfs-kernel-server start
        Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: Local Resource acquisition completed. (none)
        Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: local resource transition completed.
        Nov 27 13:57:46 EMserver1 heartbeat: [4586]: info: Heartbeat shutdown in progress. (4586)
        Nov 27 13:57:46 EMserver1 heartbeat: [5286]: info: Giving up all HA resources.
        ResourceManager[5301]: 2012/11/27_13:57:46 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/init.d/nfs-kernel-server stop
        ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop
        Filesystem[5372]: 2012/11/27_13:57:46 INFO: Running stop for /dev/drbd1 on /shr
        Filesystem[5372]: 2012/11/27_13:57:47 INFO: Trying to unmount /shr
        Filesystem[5372]: 2012/11/27_13:57:47 INFO: unmounted /shr successfully
        Filesystem[5366]: 2012/11/27_13:57:47 INFO: Success
        ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/drbddisk r0 stop
        ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop
        IPaddr[5509]: 2012/11/27_13:57:47 INFO: ifconfig eth0:0 down
        IPaddr[5497]: 2012/11/27_13:57:47 INFO: Success
        Nov 27 13:57:47 EMserver1 heartbeat: [5286]: info: All HA resources relinquished.
        Nov 27 13:57:48 EMserver1 heartbeat: [4586]: info: killing /usr/lib/heartbeat/ipfail process group 4603 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBFIFO process 4589 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4590 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4591 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4592 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4593 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4594 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4595 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4596 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4597 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4598 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4599 with signal 15
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4589 exited. 11 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4596 exited. 10 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4598 exited. 9 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4590 exited. 8 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4595 exited. 7 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4591 exited. 6 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4592 exited. 5 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4593 exited. 4 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4597 exited. 3 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4594 exited. 2 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4599 exited. 1 remaining
        Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: emserver1 Heartbeat shutdown complete.

    Here is some more from the log:

        ResourceManager[2576]: 2012/11/28_16:32:42 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        IPaddr[2602]: 2012/11/28_16:32:42 INFO: Running OK
        Filesystem[2653]: 2012/11/28_16:32:43 INFO: Running OK
        Nov 28 16:32:52 EMserver1 heartbeat: [1695]: WARN: node emserver2: is dead
        Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Dead node emserver2 gave up resources.
        Nov 28 16:32:52 EMserver1 ipfail: [1807]: info: Status update: Node emserver2 now has status dead
        Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Link emserver2:eth1 dead.
        Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: NS: We are still alive!
        Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: Link Status update: Link emserver2/eth1 now has status dead
        Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Asking other side for ping node count.
        Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Checking remote count of ping nodes.
        Nov 28 16:32:57 EMserver1 heartbeat: [1695]: info: Heartbeat shutdown in progress. (1695)
        Nov 28 16:32:57 EMserver1 heartbeat: [2734]: info: Giving up all HA resources.
        ResourceManager[2751]: 2012/11/28_16:32:57 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/init.d/nfs-kernel-server stop
        ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop
        Filesystem[2829]: 2012/11/28_16:32:57 INFO: Running stop for /dev/drbd1 on /shr
        Filesystem[2829]: 2012/11/28_16:32:57 INFO: Trying to unmount /shr
        Filesystem[2829]: 2012/11/28_16:32:58 INFO: unmounted /shr successfully
        Filesystem[2823]: 2012/11/28_16:32:58 INFO: Success
        ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/drbddisk r0 stop
        ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop
        IPaddr[2971]: 2012/11/28_16:32:58 INFO: ifconfig eth0:0 down
        IPaddr[2958]: 2012/11/28_16:32:58 INFO: Success
        Nov 28 16:32:58 EMserver1 heartbeat: [2734]: info: All HA resources relinquished.
        Nov 28 16:32:59 EMserver1 heartbeat: [1695]: info: killing /usr/lib/heartbeat/ipfail process group 1807 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBFIFO process 1777 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1778 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1779 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1780 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1781 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1782 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1783 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1784 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1785 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1786 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1787 with signal 15
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1778 exited. 11 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1779 exited. 10 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1780 exited. 9 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1781 exited. 8 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1782 exited. 7 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1783 exited. 6 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1784 exited. 5 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1785 exited. 4 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1786 exited. 3 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1787 exited. 2 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1777 exited. 1 remaining
        Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: emserver1 Heartbeat shutdown complete.

    If I restarted heartbeat at this point, the resources heartbeat controls would start up fine. Please help!
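    Both excerpts end with "Heartbeat shutdown in progress" shortly after the resources are acquired, which is what makes the cold boot look broken: the resources come up and are then relinquished. A sketch of what to inspect next, assuming the stock heartbeat tooling paths (the logs themselves reference /usr/share/heartbeat/; verify on your install):

        # after a cold boot, ask heartbeat what it believes about itself and the peer
        $ cl_status hbstatus
        $ cl_status nodestatus emserver2
        $ cl_status rscstatus
        # confirm heartbeat is scheduled to start at boot, after drbd
        $ ls /etc/rc2.d | egrep 'drbd|heartbeat'
        # as a stopgap while testing, a manual takeover mirrors what the restart does
        $ /usr/share/heartbeat/hb_takeover all

    If rscstatus reports no resources until the manual restart, the interesting question becomes what triggers the shutdown entries in the log: that points at init-script ordering or the ipfail/ping-node logic rather than at the resource agents themselves.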

    Read the article

  • setup L2TP on Ubuntu 10.10

    - by luca
    I'm following this guide to set up a VPN on my Ubuntu VPS: http://riobard.com/blog/2010-04-30-l2tp-over-ipsec-ubuntu/

    My config files are set up as in that guide; the openswan version is 2.6.26, I think. It doesn't work. I can show you my auth.log (on the VPS):

        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [RFC 3947] method set to=109
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] method set to=110
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [8f8d83826d246b6fc7a8a6a428c11de8]
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [439b59f8ba676c4c7737ae22eab8f582]
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [4d1e0e136deafa34c4f3ea9f02ec7285]
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [80d0bb3def54565ee84645d4c85ce3ee]
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [9909b64eed937c6573de52ace952fa6b]
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 110
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 110
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 110
        Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [Dead Peer Detection]
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: responding to Main Mode from unknown peer 93.36.127.12
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: STATE_MAIN_R1: sent MR1, expecting MI2
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: STATE_MAIN_R2: sent MR2, expecting MI3
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: Main mode peer ID is ID_IPV4_ADDR: '10.0.1.8'
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT"
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: deleting connection "L2TP-PSK-NAT" instance with peer 93.36.127.12 {isakmp=#0/ipsec=#0}
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: new NAT mapping for #7, was 93.36.127.12:500, now 93.36.127.12:36810
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024}
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000
        Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received and ignored informational message
        Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: the peer proposed: 69.147.233.173/32:17/1701 -> 10.0.1.8/32:17/0
        Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: responding to Quick Mode proposal {msgid:183463cf}
        Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: us: 69.147.233.173<69.147.233.173>[+S=C]:17/1701
        Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: them: 93.36.127.12[10.0.1.8,+S=C]:17/64111===10.0.1.8/32
        Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
        Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
        Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2
        Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x0b1cf725 <0x0b719671 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=93.36.127.12:36810 DPD=none}
        Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received Delete SA(0x0b1cf725) payload: deleting IPSEC State #8
        Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: netlink recvfrom() of response to our XFRM_MSG_DELPOLICY message for policy eroute_connection delete was too long: 100 > 36
        Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: netlink recvfrom() of response to our XFRM_MSG_DELPOLICY message for policy [email protected] was too long: 168 > 36
        Feb 18 06:11:28 maverick pluto[6909]: | raw_eroute result=0
        Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received and ignored informational message
        Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received Delete SA payload: deleting ISAKMP State #7
        Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12: deleting connection "L2TP-PSK-NAT" instance with peer 93.36.127.12 {isakmp=#0/ipsec=#0}
        Feb 18 06:11:28 maverick pluto[6909]: packet from 93.36.127.12:36810: received and ignored informational message

    and my system log on OS X (from where I'm connecting):

        Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: pppd 2.4.2 (Apple version 412.3) started by luca, uid 501
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: L2TP connecting to server '69.147.233.173' (69.147.233.173)...
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: IPSec connection started
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: Connecting.
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 2).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 3).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 4).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 5).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 6).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message).
        Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (ISAKMP-SA).
        Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
        Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
        Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
        Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode).
        Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: Connected.
        Feb 18 13:11:10 luca-ciorias-MacBook-Pro pppd[68656]: IPSec connection established
        Feb 18 13:11:30 luca-ciorias-MacBook-Pro pppd[68656]: L2TP cannot connect to the server
        Feb 18 13:11:30 luca-ciorias-MacBook-Pro configd[20]: SCNCController: Disconnecting. (Connection tried to negotiate for, 22 seconds).
        Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message).
        Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA).
        Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message).
        Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).
        Feb 18 13:11:31 luca-ciorias-MacBook-Pro racoon[68453]: Disconnecting. (Connection was up for, 20.157953 seconds).
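    This is the same signature as the other L2TP-over-IPsec questions above: Phase 1 and Phase 2 succeed, then "L2TP cannot connect to the server" after roughly 20 seconds, meaning nothing answered UDP 1701 inside the tunnel. A hedged first step is to take Openswan out of the picture and watch the L2TP daemon directly (the config path follows the guide being used; adjust to the actual install):

        # run the daemon in the foreground with an explicit config
        $ xl2tpd -D -c /etc/xl2tpd/xl2tpd.conf
        # in a second shell, watch for L2TP arriving and replies leaving
        $ tcpdump -ni any udp port 1701

    Daemon silence while tcpdump shows inbound packets points at the xl2tpd/pppd layer; no inbound packets at all points back at the IPsec policies or the provider's filtering.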

    Read the article
