Search Results

Search found 22874 results on 915 pages for 'hidden network'.


  • submit in html on sql query

    - by user1644661
    I need to run this SQL query, which gives me a list of Ids and Dates. I want to click each result and take the Id value with me to the next form. I wrote the code below, but I can see in the debugger that the hidden Id gets its value and still does not pass it to the next form. I think I have a problem with the submit(). Where should I put it? Thanks, anat

        function ShowAllCarts($user_email) {
            $connB = new ProductDAO();
            $connB->Connect();
            $pro_query = "SELECT * FROM Cart WHERE `Email`='$user_email';";
            $db_result = $connB->ExecSQL($pro_query);
            $html_result = '<div data-role="content"> <ul data-role="listview" data-theme="b"> ';
            $html_result .= '<form action="PreviouscartProduct.php" method="POST"/>';
            while ($row_array = $db_result->fetch_array(MYSQLI_ASSOC)) {
                $Id   = $row_array['Id'];
                $Date = $row_array['Date'];
                // $html_result .= "<li><a href='PreviouscartProduct.php'>Cart number: $Id from Date: $Date><input type='hidden' name='Id' value'<?=$Id?>'</input></a></li>'";
                $html_result .= '<a onclick="this.form.submit();"></a>';
            }
            $html_result .= ' </ul> </div>';
            $html_result .= '</form>';
            $connB->Disconnect();
            return $html_result;
        }

        // display all carts
        $func_result = ShowAllCarts($Email);
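    A minimal sketch of one way to restructure the loop (assuming the same ProductDAO wrapper as above): give each cart row its own small form, so the hidden Id travels with the submit the user actually clicks, instead of all rows sharing one form and one hidden field.

        // Sketch: one form per cart row, so each submit carries its own Id.
        while ($row_array = $db_result->fetch_array(MYSQLI_ASSOC)) {
            $Id   = htmlspecialchars($row_array['Id']);
            $Date = htmlspecialchars($row_array['Date']);
            $html_result .= "<li>"
                . "<form action='PreviouscartProduct.php' method='POST'>"
                . "<input type='hidden' name='Id' value='$Id' />"
                . "<button type='submit'>Cart number: $Id from Date: $Date</button>"
                . "</form>"
                . "</li>";
        }

    PreviouscartProduct.php can then read the value as $_POST['Id'].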


  • php Mail function; Is this way of using it safe?

    - by Camran
    I have a classifieds website, and inside each classified there is a small form. This form lets users tip their "friends":

        <form action="/bincgi/tip.php" method="post" name="tipForm" id="tipForm">
          Tip: <input name="email2" id="email2" type="text" size="30" />
          <input type="submit" value="Skicka Tips"/>
          <input type="hidden" value="<?php echo $ad_id;?>" name="ad_id2" id="ad_id2" />
          <input type="hidden" value="<?php echo $headline;?>" name="headline2" id="headline2" />
        </form>

    The form is then submitted to a tip.php page, and here is my question: is the code below safe, i.e. is it good enough, or do I need to add more sanitization and safety checks?

        $to = filter_var($_POST['email2'], FILTER_SANITIZE_EMAIL);
        $ad_id = $_POST['ad_id2'];
        $headline = $_POST['headline2'];
        $subject = 'You got a tip';
        $message = 'Hi. You got a tip: '.$headline.'.\n';
        $headers = 'From: [email protected]\r\n';
        mail($to, $subject, $message, $headers);

    I haven't tested the above yet.
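    For reference, a hardening sketch (not a complete treatment): the main risk with mail() is header injection through user-controlled values, so the recipient should be validated rather than just sanitized, and CR/LF characters should be stripped from anything that ends up near headers. Note also that '\n' and '\r\n' inside single quotes are literal backslash sequences in PHP; double quotes are needed for real line breaks.

        // Validate the recipient outright; FILTER_VALIDATE_EMAIL returns false on failure.
        $to = filter_var($_POST['email2'], FILTER_VALIDATE_EMAIL);
        if ($to === false) {
            die('Invalid email address');
        }

        // Strip line breaks so user input cannot inject extra mail headers.
        $ad_id    = preg_replace('/\r|\n/', '', $_POST['ad_id2']);
        $headline = preg_replace('/\r|\n/', '', $_POST['headline2']);

        $subject = 'You got a tip';
        $message = "Hi. You got a tip: $headline.\n";   // double quotes: \n is a real newline
        $headers = "From: [email protected]\r\n";       // same for \r\n
        mail($to, $subject, $message, $headers);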


  • jquery find element next to another

    - by Thiganofx
    Hi, I have the following HTML:

        <p>
          <input type="text" name="field2" />
          <input type="hidden" name="fieldh2"/>
          <button type="button" class="sendInfo">Send</button>
        </p>

    What I want is that when the user clicks the button, I send the contents of the text field via Ajax. This is what I'm trying to do, with no success:

        $(function() {
            $('button.sendInfo').live('click', function() {
                var id = $(this).parent().next('[type=text]').val();
                alert(id);
            });
        });

    I plan to set what the user types in the textbox on the hidden field, and the value received from the Ajax call on the normal textbox. But the problem is that I can't even get the value of the textbox that is in the same line as the button the user clicks. Can anyone help me? Thanks a lot.
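    A sketch of the likely fix: .next() walks to the following sibling of the <p> itself, so the traversal never reaches the inputs. Searching inside the button's own paragraph does:

        $(function() {
            // Keeping the question's live() binding (jQuery pre-1.9).
            $('button.sendInfo').live('click', function() {
                var $p = $(this).closest('p');             // the row containing this button
                var typed = $p.find('input[type=text]').val();
                $p.find('input[type=hidden]').val(typed);  // copy into the hidden field
                alert(typed);
            });
        });

    $(this).siblings('input[type=text]').val() would work just as well here, since all three elements share the same parent.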


  • Python - Submit Information on a Website to Extract Data from Resulting Page

    - by bloodstorm17
    So I am trying to figure out how to post to a website that uses a drop-down menu holding values like this (based on the page source):

        <td valign="top" align="right"><span class="emphasis">Select Item Option : </span></td>
        <td align="left">
          <span class="notranslate">
            <select name="ItemOption1">
              <option value="">Select Item Option</option>
              <option value="321_cba">Item Option 1</option>
              <option value="123_abcd">Item Option 2</option>
              ...

    There are two of these drop-down menus on top of each other. I want to be able to select an item from drop-down menu 1 and drop-down menu 2 and then submit the page. Based on the page source, the page is submitted with the following markup:

        <td colspan="2" align="center">
          <input type="submit" value="View Result" onclick="return check()">
        </td>
        </tr>
        </table>
        <input type="hidden" name="ItemOption1" value="">
        <input type="hidden" name="ItemOption2" value="">

    I have no idea how to select the items in the drop-down menus, submit the page, and capture the information on the resulting page into a text file. Can someone please help me with this?
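    A hedged sketch with the Python 3 standard library (the real form action URL must be taken from the page source; "http://example.com/result" below is a placeholder, and any hidden fields that the check() script fills in would need to be added to the dictionary):

        import urllib.parse
        import urllib.request

        # The submitted values are the value attributes of the chosen <option>s.
        data = urllib.parse.urlencode({
            'ItemOption1': '321_cba',   # choice from the first drop-down
            'ItemOption2': '123_abcd',  # choice from the second drop-down
        }).encode('ascii')

        # POST the selections and capture the resulting page into a text file.
        with urllib.request.urlopen('http://example.com/result', data) as response:
            html = response.read().decode('utf-8', errors='replace')

        with open('result.txt', 'w', encoding='utf-8') as f:
            f.write(html)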


  • HttpClient POST fails to submit the form

    - by Jayomat
    Hi, I'm writing an app to check bus timetables. For that I need to post some data to an HTML page, submit it, and parse the resulting page with htmlparser. Though it may have been asked a lot, can someone help me identify: 1) whether this page supports POST/GET (I think it does), 2) which fields I need to use, and 3) how to make the actual request? This is my code so far:

        String url = "http://busspur02.aseag.de/bs.exe?Cmd=RV&Karten=true&DatumT=30&DatumM=4&DatumJ=2010&ZeitH=&ZeitM=&Suchen=%28S%29uchen&GT0=&HT0=&GT1=&HT1=";
        String charset = "CP1252";
        System.out.println("startFrom: " + start_from);
        System.out.println("goTo: " + destination);

        List<NameValuePair> params = new ArrayList<NameValuePair>();
        params.add(new BasicNameValuePair("HTO", start_from));
        params.add(new BasicNameValuePair("HT1", destination));
        params.add(new BasicNameValuePair("GTO", "Aachen"));
        params.add(new BasicNameValuePair("GT1", "Aachen"));
        params.add(new BasicNameValuePair("DatumT", day));
        params.add(new BasicNameValuePair("DatumM", month));
        params.add(new BasicNameValuePair("DatumJ", year));
        params.add(new BasicNameValuePair("ZeitH", hour));
        params.add(new BasicNameValuePair("ZeitM", min));

        UrlEncodedFormEntity query = new UrlEncodedFormEntity(params, charset);
        HttpPost post = new HttpPost(url);
        post.setEntity(query);
        InputStream response = new DefaultHttpClient().execute(post).getEntity().getContent();

        String source = readText(response, "CP1252");
        Log.d(TAG_AVV, response.toString());
        System.out.println("STREAM " + source);

    One person gave me the hint to use Firebug to read what's going on on the page, but I don't really understand what to look for, or more precisely, how to use the information it provides. I also find it confusing that, for example, when I enter the data by hand the URL says "....HTO=Kaiserplatz&...", but in Firebug the same Kaiserplatz is connected to a different field, in this case: <td class="Start3"> Kaiserplatz </td>. The last line in my code prints the HTML page, but without having sent a request; it's printed as if there was no input at all. My app is almost done, I hope someone can help me finish it! Thanks in advance.

    EDIT: this is what the s.o.p returns, abridged (at some point there actually is some input, but only the destination?):

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        <title>Busspur online</title>
        <base href="http://busspur02.aseag.de">
        <meta name="description" content="Busspur im Internet">
        <meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=windows-1252">
        ...
        </head>
        <body>
        ...
        <H2>Busspur-Online <i>Verbindungsabfrage</i></H2>
        ...
        <!-- Eingabeformular -->
        <form name="Maske" action="/bs.exe" method="get">
        <input type="hidden" name="SID" value="3D3B9">
        <input type="hidden" name="ScreenX" value="">
        <input type="hidden" name="ScreenY" value="">
        <input type="hidden" class="hiddenForm" name="CMD" value="CR" />
        <input TYPE="Submit" name="Suchen" value="S" tabindex="20" style="visibility:hidden">
        <!-- 1.Zeile Startauswahl -->
        Start / Stadt/Gemeinde:
        <input type="text" name="GT0" value="" tabindex="1" />
        <select name="T0" id="efaT0">
        <option value="A" >Adresse
        <option value="H" selected="selected">Haltestelle
        <option value="Z" >Bes. Ziel
        </select>
        <input type="text" name="HT0" value="" tabindex="2" />
        <!-- 2.Zeile Ziel oder ViaAuswahl -->
        Ziel / Stadt/Gemeinde:
        <td class="Ziel3"> Aachen </td>
        ...
        <td class="Ziel3"> Karlsgraben </td>
        <!-- 3.Zeile Datum/Zeit/Intervall -->
        <select name="DatumT" tabindex="10" id="efaDatumT">
        <option >1</option> ... <option selected="selected">30</option> <option >31</option>
        </select>
        <select name="DatumM" tabindex="11" id="efaDatumM">
        <option >1</option> ... <option selected="selected">4</option> ... <option >12</option>
        </select>
        <select name="DatumJ" tabindex="12" id="efaDatumJ">
        <option >2009</option> <option selected="selected">2010</option> <option >2011</option>
        </select>
        <input type="radio" name="AbfAnk" value="Abf" checked />Abfahrten ab<br />
        <input type="radio" name="AbfAnk" value="Ank" />Ankünfte bis
        <select name="ZeitH" tabindex="14" id="efaZeitH">
        <option >0</option> ... <option selected="selected">15</option> ... <option >23</option>
        </select>
        <select name="ZeitM" tabindex="15" id="efaZeitM">
        <option >00</option> <option selected="selected">15</option> <option >30</option> <option >45</option>
        </select>
        <select name="Intervall" tabindex="13" id="efaIntervall">
        <option value="60" >1 h</option> <option value="120" >2 h</option> <option value="240" >4 h</option> <option value="480" >8 h</option> <option value="1800" >ganzer Tag</option>
        </select>
        <input TYPE="Submit" accesskey="s" class="SuchenBtn" name="Suchen" tabindex="20" VALUE="(S)uchen">
        <input TYPE="Submit" accesskey="o" name="Optionen" tabindex="22" VALUE="(O)ptionen">
        <input TYPE="Submit" accesskey="n" tabindex="26" name="Loeschen" VALUE="(N)eue Suche">
        </form>
        ...
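    The response above suggests answers to the first two questions: the form is submitted with method="get", and its field names use zeros (GT0, HT0), while the code posts GTO and HTO with a letter O, which may be why the reply looks as if no input was sent. Below is a hedged sketch of issuing the query as a GET built from the form's own field names (HttpClient 4.x assumed; the hidden SID input hints at a server-side session, so fetching the form once and reusing its SID value may also be required, which this sketch omits):

        import java.io.InputStream;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.client.utils.URLEncodedUtils;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.message.BasicNameValuePair;

        // Field names copied from the returned form: GT0/HT0/GT1/HT1 with zeros.
        List<NameValuePair> params = new ArrayList<NameValuePair>();
        params.add(new BasicNameValuePair("GT0", "Aachen"));
        params.add(new BasicNameValuePair("HT0", start_from));
        params.add(new BasicNameValuePair("GT1", "Aachen"));
        params.add(new BasicNameValuePair("HT1", destination));
        params.add(new BasicNameValuePair("DatumT", day));
        params.add(new BasicNameValuePair("DatumM", month));
        params.add(new BasicNameValuePair("DatumJ", year));
        params.add(new BasicNameValuePair("ZeitH", hour));
        params.add(new BasicNameValuePair("ZeitM", min));
        params.add(new BasicNameValuePair("Suchen", "(S)uchen"));  // the submit button

        // The form declares method="get", so encode the fields into the URL.
        String query = URLEncodedUtils.format(params, "CP1252");
        HttpGet get = new HttpGet("http://busspur02.aseag.de/bs.exe?" + query);
        InputStream response = new DefaultHttpClient().execute(get).getEntity().getContent();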


  • ant ftp doesn't download files in subdirectories

    - by Kristof Neirynck
    Hello, I'm trying to download files in subdirectories from an FTP server with Ant. The exact set of files is known; some of them are in subdirectories. Ant only seems to download the ones in the root directory. It does work if I download all files without listing them. The first ftp action should do the exact same thing as the second; instead it complains about "Hidden files" and seems to prefix the paths with "\\". Does anyone know what's wrong here? Is this a bug in commons-net?

    build.xml

        <?xml version="1.0" encoding="utf-8"?>
        <project name="example" default="example" basedir=".">
          <taskdef name="ftp" classname="org.apache.tools.ant.taskdefs.optional.net.FTP" />
          <target name="example">
            <!-- 2 files retrieved -->
            <ftp action="get" verbose="true" server="localhost" userid="example" password="example">
              <fileset dir="downloads" casesensitive="false"
                       includes="root1.txt,root2.txt,a/a.txt,a/b/ab.txt,c/c.txt" />
            </ftp>
            <!-- 5 files retrieved -->
            <ftp action="get" verbose="true" server="localhost" userid="example" password="example">
              <fileset dir="downloads" casesensitive="false" includes="**/*" />
            </ftp>
          </target>
        </project>

    run_ant.bat

        @ECHO OFF
        PUSHD %~dp0
        SET CLASSPATH=
        SET ANT_HOME=C:\apache-ant-1.8.0
        SET ant=%ANT_HOME%\bin\ant.bat
        SET antoptions=-nouserlib -noclasspath -d
        SET ftpjars=^
         -lib lib\jakarta-oro-2.0.8.jar ^
         -lib lib\commons-net-2.0.jar
        CALL %ant% %antoptions% %ftpjars% > output.txt
        POPD

    output.txt (abridged)

        Unable to locate tools.jar. Expected to find it in C:\PROGRA~1\Java\jre6\lib\tools.jar
        Apache Ant version 1.8.0 compiled on February 1 2010
        Trying the default build file: build.xml
        Buildfile: G:\ftp\build.xml
        ...
        example:
        [ftp] Opening FTP connection to localhost
        [ftp] connected
        [ftp] logging in to FTP server
        [ftp] login succeeded
        [ftp] getting files
        fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [root1.txt, root2.txt, a/a.txt, a/b/ab.txt] excludes: [] }
        will try to cd to A where a directory called a exists
        testing case sensitivity, attempting to cd to A
        remote system is case sensitive : false
        [ftp] Hidden file \\a\b\ assumed to not be a symlink.
        filelist map used in listing files
        ...
        [ftp] Hidden file \\a\b\ assumed to not be a symlink.
        ...
        [ftp] Hidden file \\a\a.txt assumed to not be a symlink.
        ...
        [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt
        [ftp] File G:\ftp\downloads\root1.txt copied from localhost
        [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt
        [ftp] File G:\ftp\downloads\root2.txt copied from localhost
        [ftp] 2 files retrieved
        [ftp] disconnecting
        [ftp] Opening FTP connection to localhost
        ...
        fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [**/*] excludes: [] }
        ...
        [ftp] transferring a\a.txt to G:\ftp\downloads\a\a.txt
        [ftp] transferring a\b\ab.txt to G:\ftp\downloads\a\b\ab.txt
        [ftp] transferring c\c.txt to G:\ftp\downloads\c\c.txt
        [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt
        [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt
        [ftp] 5 files retrieved
        [ftp] disconnecting
        BUILD SUCCESSFUL
        Total time: 0 seconds

    server log (abridged; repeated PWD and "CWD /" exchanges trimmed)

        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message...
        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example
        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS *******
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST
        ...
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory.
        ...
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory.
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD b
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory.
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a/b
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory.
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST
        ...
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory.
        ...
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory.
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST
        ...
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root1.txt
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root2.txt
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> QUIT

        (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example
        (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on
        ...
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD a
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD //a/
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD b
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD c
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST
        ...
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/a.txt
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/b/ab.txt
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR c/c.txt
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root1.txt
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root2.txt
        (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> QUIT
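    Until the listing behavior itself is fixed, a workaround sketch (assuming the directory layout shown in the logs): split the explicit file list into one <ftp> action per remote directory, using the task's remotedir attribute so that no include pattern contains a path separator.

        <!-- Workaround sketch: one <ftp> per remote directory, plain file names only. -->
        <ftp action="get" verbose="true" server="localhost" userid="example" password="example">
          <fileset dir="downloads" casesensitive="false" includes="root1.txt,root2.txt" />
        </ftp>
        <ftp action="get" verbose="true" server="localhost" remotedir="a" userid="example" password="example">
          <fileset dir="downloads/a" casesensitive="false" includes="a.txt" />
        </ftp>
        <ftp action="get" verbose="true" server="localhost" remotedir="a/b" userid="example" password="example">
          <fileset dir="downloads/a/b" casesensitive="false" includes="ab.txt" />
        </ftp>
        <ftp action="get" verbose="true" server="localhost" remotedir="c" userid="example" password="example">
          <fileset dir="downloads/c" casesensitive="false" includes="c.txt" />
        </ftp>

    This sidesteps the directory traversal that produces the "\\a\b\" paths in the log; it works around the symptom rather than fixing the underlying commons-net listing issue.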


  • New Product: Oracle Java ME Embedded 3.2 – Small, Smart, Connected

    - by terrencebarr
    The Internet of Things (IoT) is coming. And, with today's launch of the Oracle Java ME Embedded 3.2 product, Java is going to play an even greater role in it.

    Java in the Internet of Things

    By all accounts, intelligent embedded devices are penetrating the world around us: driving industrial processes, monitoring environmental conditions, providing better health care, analyzing and processing data, and much more. And these devices are becoming increasingly connected, adding another dimension of utility. Welcome to the Internet of Things.

    As I blogged yesterday, this is a huge opportunity for the Java technology and ecosystem. To enable and utilize these billions of devices effectively, you need a programming model, tools, and protocols which provide a feature-rich, consistent, scalable, manageable, and interoperable platform. Java technology is ideally suited to address these technical and business problems, enabling you to eliminate many of the typical challenges in designing embedded solutions. By using Java you can focus on building smarter, more valuable embedded solutions faster. To wit, Java technology already powers around 10 billion devices worldwide.

    Delivering on this vision and accelerating the growth of embedded Java solutions, Oracle is today announcing a brand-new product, Oracle Java Micro Edition (ME) Embedded 3.2, accompanied by an update release of the Java ME Software Development Kit (SDK) to version 3.2.

    What is Oracle Java ME Embedded 3.2?

    Oracle Java ME Embedded 3.2 is a complete Java runtime client, optimized for ARM architecture connected microcontrollers and other resource-constrained systems. The product provides dedicated embedded functionality and is targeted at low-power, limited-memory devices requiring support for a range of network services and I/O interfaces.

    What features and APIs are provided by Oracle Java ME Embedded 3.2?

    Oracle Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228). The runtime and virtual machine (VM) are highly optimized for embedded use. Also included in the product are the following optional JSRs and Oracle APIs:

    • File I/O APIs (JSR-75)
    • Wireless Messaging APIs (JSR-120)
    • Web Services (JSR-172)
    • Security and Trust Services subset (JSR-177)
    • Location APIs (JSR-179)
    • XML APIs (JSR-280)
    • Device Access API
    • Application Management System (AMS) API
    • AccessPoint API
    • Logging API

    Additional embedded features are:

    • Remote application management system
    • Support for continuous 24×7 operation
    • Application monitoring, auto-start, and system recovery
    • Application access to peripheral interfaces such as GPIO, I2C, SPIO, memory-mapped I/O
    • Application-level logging framework, including an option for remote logging
    • Headless on-device debugging: source-level Java application debugging over IP connection
    • Remote configuration of the Java VM

    What type of platforms are targeted by Oracle Java ME Embedded 3.2?

    The product is designed for embedded, always-on, resource-constrained, headless (no graphics/no UI), connected (wired or wireless) devices with a variety of peripheral I/O. The high-level system requirements are as follows:

    • System based on ARM architecture SOCs
    • Memory footprint (approximate) from 130 KB RAM/350 KB ROM (for a minimal, customized configuration) to 700 KB RAM/1500 KB ROM (for the full, standard configuration)
    • Very simple embedded kernel, or a more capable embedded OS/RTOS
    • At least one type of network connection (wired or wireless)

    The initial release of the product is delivered as a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).

    What types of applications can I develop with Oracle Java ME Embedded 3.2?

    The Oracle Java ME Embedded 3.2 product is a full-featured embedded Java runtime supporting applications based on the IMP-NG application model, which is derived from the well-known MIDP 2 application model. The runtime supports execution of multiple concurrent applications, remote application management, versatile connectivity, and a rich set of APIs and features relevant for embedded use cases, including the ability to interact with peripheral I/O directly from Java applications. This rich feature set, coupled with familiar and best-in-class software development tools, allows developers to quickly build and deploy sophisticated embedded solutions for a wide range of use cases. Target markets well supported by Oracle Java ME Embedded 3.2 include wireless modules for M2M, industrial and building control, smart grid infrastructure, home automation, and environmental sensors and tracking.

    What tools are available for embedded application development for Oracle Java ME Embedded 3.2?

    Along with the release of Oracle Java ME Embedded 3.2, Oracle is also making available an updated version of the Java ME Software Development Kit (SDK), together with plug-ins for the NetBeans and Eclipse IDEs, to deliver a complete development environment for embedded application development.

    OK, sounds great! Where can I find out more? And how do I get started?

    There is a complete set of information, data sheet, API documentation, "Getting Started Guide", FAQ, and download links available:

    • For an overview of Oracle Embeddable Java, see here.
    • For the Oracle Java ME Embedded 3.2 press release, see here.
    • For the Oracle Java ME Embedded 3.2 data sheet, see here.
    • For the Oracle Java ME Embedded 3.2 landing page, see here.
    • For the Oracle Java ME Embedded 3.2 documentation page, including a "Getting Started Guide" and FAQ, see here.
    • For the Oracle Java ME SDK 3.2 landing and download page, see here.
    • Finally, to ask more questions, please see the OTN "Java ME Embedded" forum.

    To get started, grab the "Getting Started Guide" and download the Java ME SDK 3.2, which includes the Oracle Java ME Embedded 3.2 device emulation.

    Can I learn more about Oracle Java ME Embedded 3.2 at JavaOne and/or Java Embedded @ JavaOne?

    Glad you asked! Both conferences, JavaOne and Java Embedded @ JavaOne, will feature a host of content and information around the new Oracle Java ME Embedded 3.2 product, from technical and business sessions to hands-on tutorials and demos. Stay tuned, I will post details shortly.

    Cheers,
    – Terrence

    Filed under: Mobile & Embedded. Tagged: "Oracle Java ME Embedded", Connected, embedded, Embedded Java, Java Embedded @ JavaOne, JavaOne, Smart

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and can be duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This makes it possible to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The Open-Source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory and CPU cycles, as well as the disk and network throughput (in MB/s or IOP/s), available to an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or the exclusive access to a network device. It's also possible to restrict a process group's access to, and visibility of, the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux Containers are based, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux follows in the second part of this article. Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy Linux Containers on Wikipedia - Lenz Grimmer Follow me on: Personal Blog | Facebook | Twitter | Linux Blog
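    As a practical footnote to the namespaces and cgroups discussion above, a few lines of Java are enough to observe this plumbing for yourself. This is just an observation sketch, assuming a Linux host with /proc mounted: run it once on the host and once inside a container, and compare what it prints.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class NamespaceProbe {
        public static void main(String[] args) throws IOException {
            // Each entry under /proc/self/ns is a magic symlink whose target,
            // e.g. "net:[4026531956]", identifies the namespace this process is in.
            try (DirectoryStream<Path> entries =
                    Files.newDirectoryStream(Paths.get("/proc/self/ns"))) {
                for (Path link : entries) {
                    System.out.println(link.getFileName() + " -> "
                            + Files.readSymbolicLink(link));
                }
            }
            // /proc/self/cgroup lists the control groups the process belongs to,
            // one line per subsystem (cpu, memory, blkio, ...).
            for (String line : Files.readAllLines(
                    Paths.get("/proc/self/cgroup"), StandardCharsets.UTF_8)) {
                System.out.println(line);
            }
        }
    }

    Two processes in the same container print identical namespace identifiers; a process in a different container (or on the host) prints different ones, which is exactly the isolation described above.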

    Read the article

  • Java ME SDK 3.2 is now live

    - by SungmoonCho
    Hi everyone, It has been a while since we released the last version. We have been very busy integrating new features and making lots of usability improvements into this new version. Datasheet is available here. Please visit the Java ME SDK 3.2 download page to get the latest and best version yet! Some of the new features in this version are described below. Embedded Application Support Oracle Java ME SDK 3.2 now supports the new Oracle® Java ME Embedded. This includes support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). You can test and debug applications either on the built-in device emulators or on your device. Memory Monitor The Memory Monitor shows memory use as an application runs. It displays a dynamic detailed listing of the memory usage per object in table form, and a graphical representation of the memory use over time. Eclipse IDE support Oracle Java ME SDK 3.2 now officially supports the Eclipse IDE. Once you install the Java ME SDK plugins in Eclipse, you can start developing, debugging, and profiling your mobile or embedded application. Skin Creator With the Custom Device Skin Creator, you can create your own skins. The appearance of the custom skins is generic, but the functionality can be tailored to your own specifications.  Here are the release highlights. Implementation and support for the new Oracle® Java Wireless Client 3.2 runtime and the Oracle® Java ME Embedded runtime. The AMS in the CLDC emulators has a new look and new functionality (Install Application, Manage Certificate Authorities and Output Console). Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). The IMP-NG platform is implemented as a subset of CLDC. Support includes: A new emulator for headless devices. Javadocs for the following Oracle APIs: Device Access API, Logging API, AMS API, and AccessPoint API. New demos for IMP-NG features can be run on the emulator or on a real device running the Oracle® Java ME Embedded runtime. New Custom Device Skin Creator. This tool provides a way to create and manage custom emulator skins. The skin appearance is generic, but the functionality, such as the JSRs supported or the device properties, are up to you. This utility is only supported in NetBeans. Eclipse plugin for CLDC/MIDP. For the first time Oracle Java ME SDK is available as an Eclipse plugin. The Eclipse version does not support CDC, the Memory Monitor, and the Custom Device Skin Creator in this release. All Java ME tools are implemented as NetBeans plugins. As of this version, the plugin integrates Java ME utilities into the standard NetBeans menus. The Tools > Java ME menu is the place to launch Java ME utilities, including the new Skin Creator. Profile > Java ME is the place to work with the Network Monitor and the Memory Monitor. Use the standard NetBeans tools for debugging. Profiling, Network monitoring, and Memory monitoring are integrated with the NetBeans profiling tools. New network monitoring protocols are supported in this release: WMA, SIP, Bluetooth and OBEX, SATSA APDU and JCRMI, and server sockets. Java ME SDK Update Center. Oracle Java ME SDK can be updated or extended by new components. The Update Center can download, install, and uninstall plugins specific to the Java ME SDK. A plugin consists of runtime components and skins. Bug fixes and enhancements. This version comes with a few known problems. All of them have workarounds, so I hope you don't get stuck on these issues when you are using the product. 
In an Eclipse debugging session, you sometimes cannot watch static variables, and the Variable view may fail to show data. In the source code, move the mouse over the required variable to inspect the variable value. A real device shown in the Device Selector is deleted from the Device Manager, yet it still appears. Kill the device manager in the system tray, and relaunch it. Then you will see the device removed from the list. On-device profiling does not work yet: CPU profiling, network monitoring, and memory monitoring do not work on the device, since the device runtime does not yet support them. Please do the profiling with your emulator first, and then test your application on the device. In the Device Selector, using Clean Database on a real external device causes a null pointer exception. External devices do not have a database recognized by the SDK, so you can disregard this exception message. Suspending the Emulator during a Memory Monitor session hangs the emulator. Do not use the Suspend option (F5) while the Memory Monitor is running. If the emulator is hung, open the Windows task manager and stop the emulator process (javaw). To switch to another application while the Memory Monitor is running, choose Application > AMS Home (F4), and select a different application. Please let us know how we can improve it even further by sending us your feedback. -Java ME SDK Team
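    One practical footnote: to give the Network Monitor mentioned above something to capture, a trivial IMlet that performs a single HTTP GET over the Generic Connection Framework is enough. A hedged sketch: the URL is a placeholder and error handling is deliberately minimal.

    import java.io.IOException;
    import java.io.InputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;
    import javax.microedition.midlet.MIDlet;

    public class NetworkMonitorDemo extends MIDlet {

        protected void startApp() {
            HttpConnection conn = null;
            InputStream in = null;
            try {
                // Any reachable URL will do; this one is only a placeholder.
                conn = (HttpConnection) Connector.open("http://www.example.com/");
                if (conn.getResponseCode() == HttpConnection.HTTP_OK) {
                    in = conn.openInputStream();
                    byte[] buf = new byte[256];
                    // Drain the response; we only want traffic for the monitor.
                    while (in.read(buf) != -1) { }
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    if (in != null) in.close();
                    if (conn != null) conn.close();
                } catch (IOException ignored) { }
            }
            notifyDestroyed();
        }

        protected void pauseApp() { }

        protected void destroyApp(boolean unconditional) { }
    }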

    Read the article

  • Bluetooth DUN Tethering fails

    - by tacone
    I have an HTC Desire HD, with Android Froyo (2.2) and PDANet installed. I am using Ubuntu 10.10. I cannot tether it over Bluetooth either with Network Manager or BlueMan. (note, I installed Blueman only after failing with NetWork manager, and I even tried the last version from the PPA). With both my device is discovered, paired, setup. But connecting always fail. Network manager says it cannot get the details of my device Blueman says Connection Refused (111) Here are some relevant entries from syslog. Mar 11 22:13:00 tacone-macbook bluetoothd[2242]: Bluetooth deamon 4.69 Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting SDP server Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting experimental netlink support Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Failed to find Bluetooth netlink family Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Failed to init netlink plugin Mar 11 22:13:00 tacone-macbook kernel: [ 158.284357] Bluetooth: L2CAP ver 2.14 Mar 11 22:13:00 tacone-macbook kernel: [ 158.284361] Bluetooth: L2CAP socket layer initialized Mar 11 22:13:00 tacone-macbook kernel: [ 158.446781] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 Mar 11 22:13:00 tacone-macbook kernel: [ 158.446784] Bluetooth: BNEP filters: protocol multicast Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: HCI dev 0 registered Mar 11 22:13:00 tacone-macbook kernel: [ 158.569481] Bluetooth: SCO (Voice Link) ver 0.6 Mar 11 22:13:00 tacone-macbook kernel: [ 158.569484] Bluetooth: SCO socket layer initialized Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: HCI dev 0 up Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Starting security manager 0 Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: ioctl(HCIUNBLOCKADDR): Invalid argument (22) Mar 11 22:13:00 tacone-macbook kernel: [ 158.818600] Bluetooth: RFCOMM TTY layer initialized Mar 11 22:13:00 tacone-macbook kernel: [ 158.818607] Bluetooth: RFCOMM socket layer initialized Mar 11 22:13:00 tacone-macbook kernel: [ 158.818610] Bluetooth: RFCOMM ver 1.11 Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: probe failed with driver input-headset for device /org/bluez/2242/hci0/dev_F8_DB_7F_AF_6B_EE Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: Adapter /org/bluez/2242/hci0 has been enabled Mar 11 22:13:00 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from ListDevices reply: org.freedesktop.DBus.Error.AccessDenied Mar 11 22:13:00 tacone-macbook NetworkManager[1247]: <warn> bluez error getting adapter properties: Rejected send message, 1 matched rules; type="method_call", sender=":1.4" (uid=0 pid=1247 comm="NetworkManager) interface="org.bluez.Adapter" member="GetProperties" error name="(unset)" requested_reply=0 destination="org.bluez" (uid=0 pid=2242 comm="/usr/sbin/bluetoothd)) Mar 11 22:13:00 tacone-macbook bluetoothd[2243]: return_link_keys (sba=00:23:6C:B5:03:6F, dba=00:23:6C:C0:F1:B0) Mar 11 22:13:00 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied Mar 11 22:15:02 tacone-macbook bluetoothd[2243]: Discovery session 0x2262d7c0 with :1.45 activated Mar 11 22:15:15 tacone-macbook bluetoothd[2243]: Stopping discovery Mar 11 22:15:15 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied Mar 11 22:15:16 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:15:16 tacone-macbook bluetoothd[2243]: io_capa_request (sba=00:23:6C:B5:03:6F, 
dba=F8:DB:7F:AF:6B:EE) Mar 11 22:15:17 tacone-macbook bluetoothd[2243]: io_capa_response (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:15:18 tacone-macbook bluetoothd[2243]: Stopping discovery Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: link_key_notify (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE, type=5) Mar 11 22:15:28 tacone-macbook kernel: [ 306.585725] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:15:28 tacone-macbook kernel: [ 306.630757] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: Authentication requested Mar 11 22:15:28 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:15:28 tacone-macbook kernel: [ 306.784829] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:15:28 tacone-macbook kernel: [ 306.857861] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:15:29 tacone-macbook bluetoothd[2243]: probe failed with driver input-headset for device /org/bluez/2242/hci0/dev_F8_DB_7F_AF_6B_EE Mar 11 22:15:29 tacone-macbook pulseaudio[1757]: bluetooth-util.c: Error from GetProperties reply: org.freedesktop.DBus.Error.AccessDenied Mar 11 22:15:29 tacone-macbook pulseaudio[1757]: last message repeated 8 times Mar 11 22:15:29 tacone-macbook bluetoothd[2243]: Stopping discovery Mar 11 22:15:30 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device Mar 11 22:15:30 tacone-macbook modem-manager: (rfcomm0) opening serial device... Mar 11 22:15:30 tacone-macbook modem-manager: (rfcomm0): probe requested by plugin 'Generic' Mar 11 22:15:43 tacone-macbook modem-manager: (rfcomm0) closing serial device... Mar 11 22:15:43 tacone-macbook modem-manager: (rfcomm0) opening serial device... Mar 11 22:15:49 tacone-macbook modem-manager: (rfcomm0) closing serial device... Mar 11 22:16:15 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device Mar 11 22:16:19 tacone-macbook kernel: [ 357.375108] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:16:24 tacone-macbook kernel: [ 362.169506] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook kernel: [ 362.215529] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook bluetoothd[2243]: link_key_request (sba=00:23:6C:B5:03:6F, dba=F8:DB:7F:AF:6B:EE) Mar 11 22:16:24 tacone-macbook kernel: [ 362.281559] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook kernel: [ 362.330588] l2cap_recv_acldata: Unexpected continuation frame (len 0) Mar 11 22:16:24 tacone-macbook modem-manager: (tty/rfcomm0): could not get port's parent device Any help ? PS: tethering via USB or WiFi is not an option, I need to do it over Bluetooth.

    Read the article

  • Last week I was presented with a Microsoft MVP award in Virtual Machines – time to thank all who helped

    - by Liam Westley
    MVP in Virtual Machines Last week, on 1st April, I received an e-mail from Microsoft letting me know that I had been presented with a 2010 Microsoft® MVP Award for outstanding contributions in Virtual Machine technical communities during the past year.   It was an honour to be nominated, and is a great reflection on the vibrancy of the UK user group community which made this possible. Virtualisation for developers, not just IT Pros I consider it a special honour as my expertise in virtualisation is as a software developer utilising virtual machines to aid my software development, rather than an IT Pro who manages data centre and network infrastructure.  I’ve been on a minor mission over the past few years to enthuse developers in a topic usually seen as only for network admins, but which can make their life a whole lot easier once understood properly. Continuous learning is fun In 1676, the scientist Isaac Newton, in a letter to Robert Hooke, used the phrase (http://www.phrases.org.uk/meanings/268025.html) ‘If I have seen a little further it is by standing on the shoulders of Giants’ I’m a nuclear physicist by education, so I am more than comfortable that any knowledge I have is based on the work of others.  Although far from a science, software development and IT are equally built upon the work of others. It’s one of the reasons I despise software patents. So in that sense this MVP award is a result of all the great minds that have provided virtualisation solutions for me to talk about.  I hope that I have always acknowledged those whose work I have used when blogging or giving presentations, and that I have executed my responsibility to share any knowledge gained as widely as possible. Thanks to all those who helped – a big thanks to the UK user group community I reckon this journey started in 2003 when I started attending a user group called the London .Net Users Group (http://www.dnug.org.uk) started by a nice chap called Ian Cooper. The great thing about Ian was that he always encouraged non-professional speakers to take the stage at the user group, and my first ever presentation was on 30th September 2003: SQL Server CE 2.0 and the .NET Compact Framework. In 2005 Ian Cooper was on the committee for the first DeveloperDeveloperDeveloper! day, the free community conference held at Microsoft’s UK HQ in Thames Valley Park in Reading.  He encouraged me to take part and so on 14th May 2005 I presented a talk previously given to the London .Net User Group on Simplifying access to multiple DB providers in .NET.  From that point on I definitely had the bug; presenting at DDD2, DDD3, grokking at DDD4 and SQLBits I and, after a break, DDD7, DDD Scotland and DDD8.  What definitely made me keen was the encouragement and infectious enthusiasm of some of the other DDD organisers: Craig Murphy, Barry Dorrans, Phil Winstanley and Colin Mackay. During the first few DDD events I met Dave McMahon and Richard Costall from NxtGenUG who made it easy to start presenting at their user groups.  Along the way I’ve met a load of great user group organisers: Guy Smith-Ferrier of the .Net Developer Network, Jimmy Skowronski of GL.Net and the double act of Ray Booysen and Gavin Osborn behind what was Vista Squad and is now Edge UG. Final thanks to those who suggested virtualisation as a topic ...  Final thanks have to go to the people who inspired me to create my Virtualisation for Developers talk.  
Toby Henderson (@holytshirt) ensured I took notice of Sun’s VirtualBox; thanks also to Peter Ibbotson for being a fine sounding board at the Kew Railway over quite a few Adnam’s Broadside, and to Guy Smith-Ferrier for allowing his user group to be the guinea pigs for the talk before it was seen at DDD7.  Thanks to all of you, I now know much more about virtualisation than I would have thought possible and it continues to be great fun. Conclusion If this was an academy award acceptance speech I would have been cut off after the first few paragraphs, so well done if you made it this far.  I’ll be doing my best to do justice to the MVP award and the UK community.  I’m fortunate in having a new employer who considers presenting at user groups a good thing, so don’t expect me to stop any time soon. If you’ve never seen me in action, then you can view the original DDD7 Virtualisation for Developers presentation (filmed by the Microsoft Channel 9 team) as part of the full DDD7 video list here, http://www.craigmurphy.com/blog/?p=1591.  Also, thanks to Craig Murphy’s fine video work, you can view my latest DDD8 presentation on Commercial Software Development here: http://vimeo.com/9216563 P.S. If I’ve missed anyone out, do feel free to lambast me in comments, it’s your duty.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Service Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle-supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud. 
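    Before moving on, one note on the HDInsight section above: because the service is 100% Hadoop-compatible, a stock Java MapReduce job runs unchanged. The sketch below is illustrative only; the wasb:// URIs pointing at Blob Storage are an assumption (account and container are placeholders, and earlier clusters used the asv:// scheme), and Job.getInstance is the Hadoop 2-style API.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        // Emits (word, 1) for every whitespace-separated token.
        public static class TokenMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            public void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (token.isEmpty()) continue;
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }

        // Sums the counts for each word.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            public void reduce(Text key, Iterable<IntWritable> counts, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable c : counts) sum += c.get();
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "wordcount");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            // Input and output live in Blob Storage (placeholders).
            FileInputFormat.addInputPath(job,
                    new Path("wasb://mycontainer@myaccount.blob.core.windows.net/input"));
            FileOutputFormat.setOutputPath(job,
                    new Path("wasb://mycontainer@myaccount.blob.core.windows.net/output"));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }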
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here are the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier).  Higher-level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application. Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
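    Circling back for a moment to the Web Sockets support described earlier: from the Java side, a standard JSR-356 client is all you need to talk to a socket-enabled Web Site. A minimal sketch, assuming a JSR-356 implementation such as Tyrus on the classpath; the host name and /echo path are placeholders for an endpoint your site would actually expose.

    import java.net.URI;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.ContainerProvider;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.WebSocketContainer;

    // Minimal JSR-356 client: connects to a (hypothetical) echo endpoint
    // hosted on a Windows Azure Web Site with Web Sockets toggled on.
    @ClientEndpoint
    public class EchoClient {

        @OnOpen
        public void onOpen(Session session) {
            try {
                session.getBasicRemote().sendText("hello from a websocket client");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        @OnMessage
        public void onMessage(String message) {
            System.out.println("server said: " + message);
        }

        public static void main(String[] args) throws Exception {
            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            // ws:// (or wss:// for TLS); host and path are placeholders.
            try (Session session = container.connectToServer(
                    EchoClient.class,
                    URI.create("ws://mysite.azurewebsites.net/echo"))) {
                Thread.sleep(5000); // keep the connection open briefly
            }
        }
    }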
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10AM Sunday morning the main SAN at one of my clients suffered a “partial” failure.  Partial means that the SAN was still online and functioning but the LUNs attached to our two main SQL Servers “failed”.  Failed means that SQL Server wouldn’t start and the MDF and LDF files mostly showed a zero file size.  But they were online and responding and most other LUNs were available.  I’m not sure how SANs know to fail at 1AM on a Saturday night but they seem to.  From a personal standpoint this worked out poorly: I was out with friends and after more than a few drinks.  From a work standpoint this was about the best time to fail you could imagine.  Everything was running well before Monday morning.  But it was a long, long Sunday.  I started tipsy, got tired and ended up hung over later in the day. Note to self: Try not to go out drinking right before the SAN fails. This caught us at an interesting time.  We’re in the process of migrating to an entirely new set of servers so some things were partially moved.  This made it difficult to follow our procedures as cleanly as we’d like.  The benefit was that we had much better documentation of everything on the server.  I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible.  Following a checklist is much easier than trying to remember at night under pressure in a hurry after a few drinks. I had a series of estimates on how long things would take.  They were accurate for any single server failure.  They weren’t accurate for a SAN failure that took two servers down.  This wasn’t bad but we should have communicated better. Don’t forget how many things are outside the database.  Logins, linked servers, DTS packages (yikes!), jobs, service broker, DTC (especially DTC), database triggers and any objects in the master database are all things you need backed up.  We’d done a decent job on this and didn’t find significant problems here.  That said this still took a lot of time.  There were many annoyances as a result of this.  Small settings like a login’s default database had a big impact on whether an application could run.  This is probably the single biggest area of concern when looking to recreate a server.  I’d encourage everyone to go through every single node of SSMS and look for user created objects or settings outside the database. Script out your logins with the proper SID and already encrypted passwords and keep it updated.  This makes life so much easier.  I used an approach based on KB246133 that worked well.  I’ll get my scripts posted over the next few days. The disaster can cause your DR process to fail in unexpected ways.  We have a job that scripts out all logins and role memberships and writes it to a file.  This runs on the DR server and pulls from the production server.  Upon opening the file I found that the contents were a “server not found” error.  Fortunately we had other copies and didn’t need to try and restore the master database.  This now runs on the production server and pushes the script to the DR site.  Soon we’ll get it pushed to our version control software. One of the biggest challenges is keeping your DR resources up to date.  Any server change (new linked server, new SQL Server Agent job, etc.) means that your DR plan (and scripts) is out of date.  It helps to automate the generation of these resources if possible. Take time now to test your database restore process.  We test ours quarterly.  
If you have a large database I’d also encourage you to invest in a compressed backup solution.  Restoring backups was the single largest consumer of time during our recovery. And yes, there’s a database mirroring solution planned in our new architecture. I didn’t have much involvement in things outside SQL Server but this caused many, many things to change in our environment.  Many applications today aren’t just executables or web sites.  They are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems and many other things.  These all needed a little bit of attention to make sure they were functioning properly. Profiler turned out to be a handy tool.  I started a trace for failed logins and kept that running.  That let me fix a number of problems before people were able to report them.  I also ran traces to capture exceptions.  This helped identify problems with linked servers. Overall the thing that gave me the most trouble was linked servers.  In order for a linked server to function properly you need to be pointed to the right server, have the proper login information, have the network routes available and have MSDTC configured properly.  We have a lot of linked servers and this created many failure points.  Some of the older linked servers used IP addresses and not DNS names.  This meant we had to go in and touch all those linked servers when the servers moved.
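    To make the login-scripting advice concrete, here is one way to generate CREATE LOGIN statements that preserve SIDs and already-hashed passwords, in the spirit of the KB246133 approach mentioned above. A hedged sketch, not the author's actual script: the server name and credentials are placeholders, it covers SQL logins only (not Windows logins, role memberships, or server permissions, which the post rightly says also need scripting), and it assumes the Microsoft JDBC driver is on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ScriptLogins {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the production instance.
            String url = "jdbc:sqlserver://prodserver;databaseName=master";
            try (Connection con = DriverManager.getConnection(url, "dr_user", "dr_password");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT name, sid, password_hash FROM sys.sql_logins " +
                     "WHERE name NOT LIKE '##%'")) {   // skip internal logins
                while (rs.next()) {
                    byte[] hash = rs.getBytes("password_hash");
                    if (hash == null) continue; // nothing to replay for this login
                    // HASHED + SID keeps passwords and database user mappings intact.
                    System.out.printf("CREATE LOGIN [%s] WITH PASSWORD = %s HASHED, SID = %s;%n",
                            rs.getString("name"), hex(hash), hex(rs.getBytes("sid")));
                }
            }
        }

        // Render a varbinary value as a 0x... literal.
        static String hex(byte[] bytes) {
            StringBuilder sb = new StringBuilder("0x");
            for (byte b : bytes) sb.append(String.format("%02X", b));
            return sb.toString();
        }
    }

    Running this on a schedule and pushing the output to the DR site (and to version control) keeps the login script from going stale between disasters.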

    Read the article

  • Know more about Enqueue Deadlock Detection

    - by Liu Maclean
    A question about enqueue deadlock detection came up in the ORACLE ALLSTAR discussion group the other day, and it is worth a closer look. By default, an enqueue (lock) deadlock is detected within about 3 seconds; in other words, the process that eventually raises the ORA-00060 deadlock error waits roughly 3s before the error is reported: SQL> select * from v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE 10.2.0.5.0 Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com PROCESS A: set timing on; update maclean1 set t1=t1+1; PROCESS B: update maclean2 set t1=t1+1; PROCESS A: update maclean2 set t1=t1+1; PROCESS B: update maclean1 set t1=t1+1; After about 3s, PROCESS A receives: ERROR at line 1: ORA-00060: deadlock detected while waiting for resource Elapsed: 00:00:03.02 Note that Process A waited almost exactly 3s before it was chosen as the deadlock victim. This interval is not hard-coded; it is controlled by a hidden parameter, which we can query and change: SQL> col name for a30 SQL> col value for a5 SQL> col DESCRIB for a50 SQL> set linesize 140 pagesize 1400 SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ 2 FROM SYS.x$ksppi x, SYS.x$ksppcv y 3 WHERE x.inst_id = USERENV ('Instance') 4 AND y.inst_id = USERENV ('Instance') 5 AND x.indx = y.indx 6 AND x.ksppinm='_enqueue_deadlock_scan_secs'; NAME VALUE DESCRIB ------------------------------ ----- -------------------------------------------------- _enqueue_deadlock_scan_secs 0 deadlock scan interval SQL> alter system set "_enqueue_deadlock_scan_secs"=18 scope=spfile; System altered. Elapsed: 00:00:00.01 SQL> startup force; ORACLE instance started. Total System Global Area 851443712 bytes Fixed Size 2100040 bytes Variable Size 738198712 bytes Database Buffers 104857600 bytes Redo Buffers 6287360 bytes Database mounted. Database opened. PROCESS A: SQL> set timing on; SQL> update maclean1 set t1=t1+1; 1 row updated. Elapsed: 00:00:00.06 Process B SQL> update maclean2 set t1=t1+1; 1 row updated. SQL> update maclean1 set t1=t1+1; Process A: SQL> SQL> alter session set events '10704 trace name context forever,level 10:10046 trace name context forever,level 8'; Session altered. SQL> update maclean2 set t1=t1+1; update maclean2 set t1=t1+1 * ERROR at line 1: ORA-00060: deadlock detected while waiting for resource  Elapsed: 00:00:18.05 ksqcmi: TX,90011,4a9 mode=6 timeout=21474836 WAIT #12: nam='enq: TX - row lock contention' ela= 2930070 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114759849120 WAIT #12: nam='enq: TX - row lock contention' ela= 2930636 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114762779801 WAIT #12: nam='enq: TX - row lock contention' ela= 2930439 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114765710430 *** 2012-06-12 09:58:43.089 WAIT #12: nam='enq: TX - row lock contention' ela= 2931698 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114768642192 WAIT #12: nam='enq: TX - row lock contention' ela= 2930428 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114771572755 WAIT #12: nam='enq: TX - row lock contention' ela= 2931408 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114774504207 DEADLOCK DETECTED ( ORA-00060 ) [Transaction Deadlock] The following deadlock is not an ORACLE error. It is a deadlock due to user error in the design of an application or from issuing incorrect ad-hoc SQL. 
The following information may aid in determining the deadlock:

As the 10046 trace shows, Process A waited on 'enq: TX - row lock contention' for 18s instead of the default 3s before ORA-00060 was raised, matching the hidden parameter "_enqueue_deadlock_scan_secs" we set (its default is 0). Let's experiment further:

SQL> alter system set "_enqueue_deadlock_scan_secs"=4 scope=spfile;
System altered.
Elapsed: 00:00:00.01

SQL> alter system set "_enqueue_deadlock_time_sec"=9 scope=spfile;
System altered.
Elapsed: 00:00:00.00

SQL> startup force;
ORACLE instance started.
Total System Global Area  851443712 bytes
Fixed Size                  2100040 bytes
Variable Size             738198712 bytes
Database Buffers          104857600 bytes
Redo Buffers                6287360 bytes
Database mounted.
Database opened.

SQL> set linesize 140 pagesize 1400
SQL> show parameter dead

NAME                                 TYPE     VALUE
------------------------------------ -------- ------------------------------
_enqueue_deadlock_scan_secs          integer  4
_enqueue_deadlock_time_sec           integer  9

SQL> set timing on
SQL> select * from maclean1 for update wait 8;
T1
----------
11
Elapsed: 00:00:00.01

PROCESS B:
SQL> select * from maclean2 for update wait 8;
T1
----------
3
SQL> select * from maclean1 for update wait 8;
select * from maclean1 for update wait 8

PROCESS A:
SQL> select * from maclean2 for update wait 8;
select * from maclean2 for update wait 8
*
ERROR at line 1:
ORA-30006: resource busy; acquire with WAIT timeout expired
Elapsed: 00:00:08.00

This is interesting. Here the select for update wait statements set an enqueue request timeout of 8s, and although "_enqueue_deadlock_scan_secs"=4 (deadlock scan interval) should have allowed the deadlock to be detected at around 4s, Process A never reported a deadlock. Instead it waited the full 8s and raised "ORA-30006: resource busy; acquire with WAIT timeout expired" rather than ORA-00060. Why did Process A skip deadlock detection? The reason is the other hidden parameter, "_enqueue_deadlock_time_sec" (requests with timeout <= this will not have deadlock detection): when the enqueue request timeout is below "_enqueue_deadlock_time_sec" (default 5, i.e. timeout < 5s), the server process performs no deadlock detection at all, on the reasoning that a short timeout will break the wait soon enough anyway; only when the timeout exceeds "_enqueue_deadlock_time_sec" does Oracle carry out deadlock detection for the request. Let's verify:

SQL> show parameter dead

NAME                                 TYPE     VALUE
------------------------------------ -------- ------------------------------
_enqueue_deadlock_scan_secs          integer  4
_enqueue_deadlock_time_sec           integer  9

Process A:
SQL> set timing on;
SQL> select * from maclean1 for update wait 10;
T1
----------
11

Process B:
SQL> select * from maclean2 for update wait 10;
T1
----------
3
SQL> select * from maclean1 for update wait 10;

PROCESS A:
SQL> select * from maclean2 for update wait 10;
select * from maclean2 for update wait 10
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:06.02

This time select for update wait 10 used a 10s timeout, which is greater than "_enqueue_deadlock_time_sec" (9s), so Process A did perform deadlock detection. Also note that the deadlock was reported after about 6s rather than at the 4s configured by "_enqueue_deadlock_scan_secs"; repeated tests suggest the effective detection time is "_enqueue_deadlock_scan_secs" rounded up to the next multiple of 3 seconds.

To summarize how enqueue lock deadlock detection behaves:

1. By default a deadlock is detected after about 3s. The interval is controlled by the hidden parameter _enqueue_deadlock_scan_secs (deadlock scan interval), whose default is 0. The effective detection time is _enqueue_deadlock_scan_secs rounded up to a multiple of 3: with _enqueue_deadlock_scan_secs=0 detection occurs at about 3s, with _enqueue_deadlock_scan_secs=4 at about 6s, and so on.
2. The hidden parameter _enqueue_deadlock_time_sec (requests with timeout <= this will not have deadlock detection) also matters: when the enqueue request timeout is less than _enqueue_deadlock_time_sec (default 5), the server process skips deadlock detection entirely; only when the timeout exceeds _enqueue_deadlock_time_sec does the _enqueue_deadlock_scan_secs-driven detection described above apply. Such request timeouts typically come from statements of the form select ... for update wait [TIMEOUT].

Note: neither hidden parameter exists in 10.2.0.1; _enqueue_deadlock_time_sec was introduced in patchset 10.2.0.3 and _enqueue_deadlock_scan_secs in patchset 10.2.0.5.

In RAC, the default deadlock detection interval is about 10s, controlled by the hidden parameter _lm_dd_interval (dd time interval in seconds), which has existed since release 8.0.6. The default has changed over the versions: roughly 60s in 10g, reduced to 10s in 11g. The reduction was possible because 11g reworked the LMD process's deadlock detection logic; in earlier releases RAC LMD deadlock detection could consume a noticeable amount of CPU, and in 11g Oracle's development team added controls on the CPU LMD spends on this work. To see the 11g LMD behavior for ourselves, we can enable UTS tracing on an 11g RAC system and watch the deadlock detection (DD) activity:

SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com

SQL> alter system set "_lm_dd_interval"=20 scope=spfile;
System altered.

SQL> startup force;
ORACLE instance started.
Total System Global Area 1570009088 bytes
Fixed Size                  2228704 bytes
Variable Size            1325403680 bytes
Database Buffers          234881024 bytes
Redo Buffers                7495680 bytes
Database mounted.
Database opened.

SQL> set linesize 140 pagesize 1400
SQL> show parameter lm_dd

NAME                                 TYPE     VALUE
------------------------------------ -------- ------------------------------
_lm_dd_interval                      integer  20

SQL> select count(*) from gv$instance;
COUNT(*)
----------
2

instance 1:
SQL> oradebug setorapid 12
Oracle pid: 12, Unix process pid: 8608, image: [email protected] (LMD0)

Enable UTS tracing on the LMD0 process so we can observe the RAC-level deadlock detection messages:

SQL> oradebug event 10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high;
Statement processed.
Elapsed: 00:00:00.00

SQL> update maclean1 set t1=t1+1;
1 row updated.

instance 2:
SQL> update maclean2 set t1=t1+1;
1 row updated.
SQL> update maclean1 set t1=t1+1; Instance 1: SQL> update maclean2 set t1=t1+1; update maclean2 set t1=t1+1 * ERROR at line 1: ORA-00060: deadlock detected while waiting for resource Elapsed: 00:00:20.51 LMD0???UTS TRACE 2012-06-12 22:27:00.929284 : [kjmpbmsg:process][type 22][msg 0x7fa620ac85a8][from 1][seq 8148.0][len 192] 2012-06-12 22:27:00.929346 : [kjmxmpm][type 22][seq 0.0][msg 0x7fa620ac85a8][from 1] *** 2012-06-12 22:27:00.929 * kjddind: received DDIND msg with subtype x6 * reqp->dd_master_inst_kjxmddi == 1 * kjddind: dump sgh: 2012-06-12 22:27:00.929346*: kjddind: req->timestamp [0.15], kjddt [0.13] 2012-06-12 22:27:00.929346*: >> DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1 2012-06-12 22:27:00.929346*: lock [0x95023930,829], op = [mast] 2012-06-12 22:27:00.929346*: reqp->timestamp [0.15], kjddt [0.13] 2012-06-12 22:27:00.929346*: kjddind: updated local timestamp [0.15] * kjddind: case KJX_DD_REMOTE 2012-06-12 22:27:00.929346*: ADD IO NODE WFG: 0 frame pointer 2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: POP: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: kjddopr[TX 0xe000c.0x32][ext 0x5,0x0]: blocking lock 0xbbb9a800, owner 2097154 of inst 2 2012-06-12 22:27:00.929346*: PUSH: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: PROCESS: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: POP: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: kjddopt: converting lock 0xbbce92f8 on 'TX' 0x80016.0x5d4,txid [2097154,34]of inst 2 2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1 2012-06-12 22:27:00.929855 : GSIPC:AMBUF: rcv buff 0x7fa620aa8cd8, pool rcvbuf, rqlen 1102 2012-06-12 22:27:00.929878 : GSIPC:GPBMSG: new bmsg 0x7fa620aa8d48 mb 0x7fa620aa8cd8 msg 0x7fa620aa8d68 mlen 192 dest x100 flushsz -1 2012-06-12 22:27:00.929878*: << DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 2->1,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1 2012-06-12 22:27:00.929878*: lock [0xbbce92f8,287], op = [mast] 2012-06-12 22:27:00.929878*: ADD IO NODE WFG: 0 frame pointer 2012-06-12 22:27:00.929923 : [kjmpbmsg:compl][msg 0x7fa620ac8588][typ p][nmsgs 1][qtime 0][ptime 0] 2012-06-12 22:27:00.929947 : GSIPC:PBAT: flush start. 
flag 0x79 end 0 inc 4.4 2012-06-12 22:27:00.929963 : GSIPC:PBAT: send bmsg 0x7fa620aa8d48 blen 224 dest 1.0 2012-06-12 22:27:00.929979 : GSIPC:SNDQ: enq msg 0x7fa620aa8d48, type 65521 seq 8325, inst 1, receiver 0, queued 1 012-06-12 22:27:00.929979 : GSIPC:SNDQ: enq msg 0x7fa620aa8d48, type 65521 seq 8325, inst 1, receiver 0, queued 1 2012-06-12 22:27:00.929996 : GSIPC:BSEND: flushing sndq 0xb491dd28, id 0, dcx 0xbc517770, inst 1, rcvr 0 qlen 0 1 2012-06-12 22:27:00.930014 : GSIPC:BSEND: no batch1 msg 0x7fa620aa8d48 type 65521 len 224 dest (1:0) 2012-06-12 22:27:00.930088 : kjbsentscn[0x0.3f72dc][to 1] 2012-06-12 22:27:00.930144 : GSIPC:SENDM: send msg 0x7fa620aa8d48 dest x10000 seq 8325 type 65521 tkts x1 mlen xe00110 2012-06-12 22:27:00.930531 : GSIPC:KSXPCB: msg 0x7fa620aa8d48 status 30, type 65521, dest 1, rcvr 0 WAIT #0: nam='ges remote message' ela= 1372 waittime=80 loop=0 p3=74 obj#=-1 tim=1339554420931640 2012-06-12 22:27:00.931728 : GSIPC:RCVD: ksxp msg 0x7fa620af6490 sndr 1 seq 0.8149 type 65521 tkts 1 2012-06-12 22:27:00.931746 : GSIPC:RCVD: watq msg 0x7fa620af6490 sndr 1, seq 8149, type 65521, tkts 1 2012-06-12 22:27:00.931763 : GSIPC:RCVD: seq update (0.8148)->(0.8149) tp -15 fg 0x4 from 1 pbattr 0x0 2012-06-12 22:27:00.931779 : GSIPC:TKT: collect msg 0x7fa620af6490 from 1 for rcvr 0, tickets 1 2012-06-12 22:27:00.931794 : kjbrcvdscn[0x0.3f72dc][from 1][idx 2012-06-12 22:27:00.931810 : kjbrcvdscn[no bscn dd_master_inst_kjxmddi == 1 * kjddind: dump sgh: NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1 BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2 BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2 NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1 2012-06-12 22:27:00.932058*: kjddind: req->timestamp [0.15], kjddt [0.15] 2012-06-12 22:27:00.932058*: >> DDmsg:KJX_DD_VALIDATE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1 2012-06-12 22:27:00.932058*: lock [(nil),0], op = [vald_dd] 2012-06-12 22:27:00.932058*: kjddind: updated local timestamp [0.15] * kjddind: case KJX_DD_VALIDATE *** 2012-06-12 22:27:00.932 * kjddvald called: kjxmddi stuff: * cont_lockp (nil) * dd_lockp 0x95023930 * dd_inst 1 * dd_master_inst 1 * sgh graph: NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1 BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2 BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2 NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1 POP WFG NODE: lock=(nil) * kjddvald: dump the PRQ: BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2 BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2 * kjddvald: KJDD_NXTONOD ->node_kjddsg.dinst_kjddnd =1 * kjddvald: ... which is not my node, my subgraph is validated but the cycle is not complete Global blockers dump start:--------------------------------- DUMP LOCAL BLOCKER/HOLDER: block level 5 res [0x80016][0x5d4],[TX][ext 0x2,0x0] ??dead lock!!! ???????11.2.0.3???? RAC LMD???????????”_lm_dd_interval”????????????20s?  ???????10g?_lm_dd_interval???60s,??????Processes?????????????????,????????????Server Process????????60s??????11g?????(??????LMD???????)???????,???????????10s??? Enqueue Deadlock Detection? ?11g??? 
In RAC, LMD's deadlock detection interval is governed by the hidden parameter "_lm_dd_interval", together with a family of related hidden parameters that can be listed as follows:

SQL> col name for a50
SQL> col describ for a60
SQL> col value for a20
SQL> set linesize 140 pagesize 1400
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3  WHERE x.inst_id = USERENV ('Instance')
  4  AND y.inst_id = USERENV ('Instance')
  5  AND x.indx = y.indx
  6  AND x.ksppinm like '_lm_dd%';

NAME                                               VALUE                DESCRIB
-------------------------------------------------- -------------------- ------------------------------------------------------------
_lm_dd_interval                                    20                   dd time interval in seconds
_lm_dd_scan_interval                               5                    dd scan interval in seconds
_lm_dd_search_cnt                                  3                    number of dd search per token get
_lm_dd_max_search_time                             180                  max dd search time per token
_lm_dd_maxdump                                     50                   max number of locks to be dumped during dd validation
_lm_dd_ignore_nodd                                 FALSE                if TRUE nodeadlockwait/nodeadlockblock options are ignored

6 rows selected.
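For quick reference, both the serial and the RAC deadlock-detection parameters can be listed in one pass with the same x$ksppi/x$ksppcv join used throughout this post. This is only a sketch, to be run as SYS; hidden parameters should not be changed without guidance from Oracle Support:

SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3  WHERE x.indx = y.indx
  4  AND (x.ksppinm like '_enqueue_deadlock%' or x.ksppinm like '_lm_dd%');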

    Read the article

  • SQLAuthority News – Why VoIP Service Providers Should Think About NuoDB’s Geo Distribution

    - by Pinal Dave
    You can always tell when someone’s showing off their cool, cutting edge comms technology. They tend to raise their voice a lot. Back in the day they’d announce their gadget leadership to the rest of the herd by shouting into their cellphone. Usually the message was no more urgent than “Hi, I’m on my cellphone!” Now the same types will loudly name-drop a different technology to the rest of the airport lounge. “I’m leveraging the wifi,” a fellow passenger bellowed, the other day, as we filtered through the departure gate. Nobody needed to know that, but the subtext was “look at me everybody”. You can tell the really advanced mobile user – they tend to whisper. Their handset has a microphone (how cool is that!) and they know how to use it. Sometimes these shouty public broadcasters aren’t even connected anyway because the database for their Voice over IP (VoIP) platform can’t cope. This will happen if they are using a traditional SQL model to try and cope with a phone network which has far flung offices and hundreds of mobile employees. That, like shouting into your phone, is just wrong on so many levels. What VoIP needs now is a single, logical database across multiple servers in different geographies. It needs to be updated in real-time and automatically scaled out during times of peak demand. A VoIP system should scale up to handle increased traffic, but just as importantly is must then go back down in the off peak hours. Try this with a MySQL database. It can’t scale easily enough, so it will keep your developers busy. They’ll have spent many hours trying to knit the different databases together. Traditional relational databases can possibly achieve this, at a price. Mind you, you could extend baked bean cans and string to every point on the network and that would be no less elegant. That’s not really following engineering principles though is it? Having said that, most telcos and VoIP systems use a separate, independent solution for each office location, which they link together – loosely.  The more office locations, the more complex and expensive the solution becomes and so the more you spend on maintenance. Ideally, you’d have a fluid system that can automatically shift its shape as the need arises. That’s the point of software isn’t it – it adapts. Otherwise, we might as well return to the old days. A MySQL system isn’t exactly baked bean cans attached by string, but it’s closer in spirit to the old many teethed mechanical beast that was employed in the first type of automated switchboard. NuoBD’s NewSQL is designed to be a single database that works across multiple servers, which can scale easily, and scale on demand. That’s one system that gives high connectivity but no latency, complexity or maintenance issues. MySQL works in some circumstances, but a period of growth isn’t one of them. So as a company moves forward, the MySQL database can’t keep pace. Data storage and data replication errors creep in. Soon the diaspora of offices becomes a problem. Your telephone system isn’t just distributed, it is literally all over the place. Though voice calls are often a software function, some of the old habits of telephony remain. When you call an engineer out, some of them will listen to what you’re asking for and announce that it cannot be done. This is what happens if you ask, say, database engineers familiar with Oracle or Microsoft to fulfill your wish for a low maintenance system built on a single, fluid, scalable database. No can do, they’d say. 
In fact, I heard one shouting something similar into his VoIP handset at the airport. “I can’t get on the network, Mac. I’m on MySQL.” You can download NuoDB from here. “NuoDB provides the ability to replicate data globally in real-time, which is not available with any other product offering,” states Weeks.  “That alone is remarkable and it works. I’ve seen it. I’ve used it.  I’ve tested it. The ability to deploy NuoDB removes a tremendous burden from our support and engineering teams.” Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: NuoDB

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction How to use the Oracle Solaris 11 Automated install server in order to automate the Solaris 11 Zones installation. In this document I will demonstrate how to setup the Automated Install server in order to provide hands off installation process for the Global Zone and two Non Global Zones located on the same system. Architecture layout: Figure 1. Architecture layout Prerequisite Setup the Automated install server (AI) using the following instructions “How to Set Up Automated Installation Services for Oracle Solaris 11” The first step in this setup will be creating two Solaris 11 Zones configuration files. Step 1: Create the Solaris 11 Zones configuration files  The Solaris Zones configuration files should be in the format of the zonecfg export command. # zonecfg -z zone1 export > /var/tmp/zone1# cat /var/tmp/zone1 create -b set brand=solaris set zonepath=/rpool/zones/zone1 set autoboot=true set ip-type=exclusive add anet set linkname=net0 set lower-link=auto set configure-allowed-address=true set link-protection=mac-nospoof set mac-address=random end  Create a backup copy of this file under a different name, for example, zone2. # cp /var/tmp/zone1 /var/tmp/zone2 Modify the second configuration file with the zone2 configuration information You should change the zonepath for example: set zonepath=/rpool/zones/zone2 Step2: Copy and share the Zones configuration files  Create the NFS directory for the Zones configuration files # mkdir /export/zone_config Share the directory for the Zones configuration file # share –o ro /export/zone_config Copy the Zones configuration files into the NFS shared directory # cp /var/tmp/zone1 /var/tmp/zone2  /export/zone_config Verify that the NFS share has been created using the following command # share export_zone_config      /export/zone_config     nfs     sec=sys,ro Step 3: Add the Global Zone as client to the Install Service Use the installadm create-client command to associate client (Global Zone) with the install service To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service. # installadm create-client -e “0:14:4f:2:a:19" -n s11x86service You can verify the client creation using the following command # installadm list –c Service Name  Client Address     Arch   Image Path ------------  --------------     ----   ---------- s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19 and Architecture (i386). Step 4: Global Zone manifest setup  First, get a list of the installation services and the manifests associated with them: # installadm list -m Service Name   Manifest        Status ------------   --------        ------ default-i386   orig_default   Default s11x86service  orig_default   Default Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows: # installadm export -n s11x86service -m orig_default >  /var/tmp/orig_default.xml Create a backup copy of this file under a different name, for example, orig-default2.xml, and edit the copy. 
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2" Note: Do not use the following elements or attributes in a non-global zone AI manifest:     The auto_reboot attribute of the ai_instance element     The http_proxy attribute of the ai_instance element     The disk child element of the target element     The noswap attribute of the logical element     The nodump attribute of the logical element     The configuration element Step 6: Global Zone profile setup We are going to create a global zone configuration profile which includes the host information for example: host name, ip address name services etc… # sysconfig create-profile –o /var/tmp/gz_profile.xml You need to provide the host information for example:     Default router     Root password     DNS information The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu You can validate the profile using the following command # installadm validate -n s11x86service –P /var/tmp/gz_profile.xml Validating static profile gz_profile.xml...  Passed Next, instantiate a profile with the install service. In our case, use the following syntax for doing this # installadm create-profile -n s11x86service  -f /var/tmp/gz_profile.xml -p  gz_profile You can verify profile creation using the following command # installadm list –n s11x86service  -p Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         None We can see that the gz_profie has been created and associated with the s11x86service Install service. Step 7: Setup the Solaris Zones configuration profiles The step should be similar to the Global zone profile creation on step 6 # sysconfig create-profile –o /var/tmp/zone1_profile.xml # sysconfig create-profile –o /var/tmp/zone2_profile.xml You can validate the profiles using the following command # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml Validating static profile zone1_profile.xml...  Passed # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml Validating static profile zone2_profile.xml...  Passed Next, associate the profiles with the install service The following example adds the zone1_profile.xml configuration profile to the s11x86service  install service and specifies that zone1 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1 The following example adds the zone2_profile.xml configuration profile to the s11x86service  install service and specifies that zone2 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2 You can verify the profiles creation using the following command # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    zone1_profile      zonename = zone1    zone2_profile      zonename = zone2    gz_profile         None We can see that we have three profiles in the s11x86service  install service     Global Zone  gz_profile     zone1            zone1_profile     zone2            zone2_profile. 
Step 8: Global Zone setup Associate the global zone client with the manifest and the profile that we create in the previous steps The following example adds the manifest and profile to the client (global zone), where: gzmanifest  is the name of the manifest. gz_profile  is the name of the configuration profile. mac="0:14:4f:2:a:19" is the client (global zone) mac address s11x86service is the install service name. # installadm set-criteria -m  gzmanifest  –p  gz_profile  -c mac="0:14:4f:2:a:19" -n s11x86service You can verify the manifest and profile association using the following command # installadm list -n s11x86service -p  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    gzmanifest                   mac  = 00:14:4F:02:0A:19    orig_default        Default  None Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         mac      = 00:14:4F:02:0A:19    zone2_profile      zonename = zone2    zone1_profile      zonename = zone1 Step 9: Provision the host with the Non-Global Zones The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted.  Press the down arrow to highlight the second option labeled Automated Install, and then press Enter. The reason we need to do this is because we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu What follows is the continuation of a networked boot from the Automated Install server,. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list all the Solaris Zones status using the following command # zoneadm list -civ Once the Zones are in running state you can login into the Zone using the following command # zlogin –z zone1 Troubleshooting Automated Installations If an installation to a client system failed, you can find the client log at /system/volatile/install_log. NOTE: Zones are not installed if any of the following errors occurs:     A zone config file is not syntactically correct.     A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed     Required datasets are not configured in the global zone. For more troubleshooting information see “Installing Oracle Solaris 11 Systems” Conclusion This paper demonstrated the benefits of using the Automated Install server to simplify the Non Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.
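As a closing note, Steps 5 and 7 repeat the same validate/create-profile pattern once per zone, so for larger zone counts a small loop keeps the invocations consistent. This is only a sketch: it assumes the per-zone profile XMLs were already generated interactively with sysconfig as shown above, and that installadm validate returns a non-zero exit status on failure.

# validate each zone profile, then associate it with the install service
for z in zone1 zone2; do
  installadm validate -n s11x86service -P /var/tmp/${z}_profile.xml && \
  installadm create-profile -n s11x86service -f /var/tmp/${z}_profile.xml \
      -p ${z}_profile -c zonename=${z}
done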

    Read the article

  • Unobtrusive Client Side Validation with Dynamic Contents in ASP.NET MVC 3

    - by imran_ku07
        Introduction:          A while ago, I blogged about how to perform client side validation for dynamic contents in ASP.NET MVC 2 at here. Using the approach given in that blog, you can easily validate your dynamic ajax contents at client side. ASP.NET MVC 3 also supports unobtrusive client side validation in addition to ASP.NET MVC 2 client side validation for backward compatibility. I feel it is worth to rewrite that blog post for ASP.NET MVC 3 unobtrusive client side validation. In this article I will show you how to do this.       Description:           I am going to use the same example presented at here. Create a new ASP.NET MVC 3 application. Then just open HomeController.cs and add the following code,   public ActionResult CreateUser() { return View(); } [HttpPost] public ActionResult CreateUserPrevious(UserInformation u) { return View("CreateUserInformation", u); } [HttpPost] public ActionResult CreateUserInformation(UserInformation u) { if(ModelState.IsValid) return View("CreateUserCompanyInformation"); return View("CreateUserInformation"); } [HttpPost] public ActionResult CreateUserCompanyInformation(UserCompanyInformation uc, UserInformation ui) { if (ModelState.IsValid) return Content("Thank you for submitting your information"); return View("CreateUserCompanyInformation"); }             Next create a CreateUser view and add the following lines,   <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<UnobtrusiveValidationWithDynamicContents.Models.UserInformation>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> CreateUser </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <div id="dynamicData"> <%Html.RenderPartial("CreateUserInformation"); %> </div> </asp:Content>             Next create a CreateUserInformation partial view and add the following lines,   <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<UnobtrusiveValidationWithDynamicContents.Models.UserInformation>" %> <% Html.EnableClientValidation(); %> <%using (Html.BeginForm("CreateUserInformation", "Home")) { %> <table id="table1"> <tr style="background-color:#E8EEF4;font-weight:bold"> <td colspan="3" align="center"> User Information </td> </tr> <tr> <td> First Name </td> <td> <%=Html.TextBoxFor(a => a.FirstName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.FirstName)%> </td> </tr> <tr> <td> Last Name </td> <td> <%=Html.TextBoxFor(a => a.LastName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.LastName)%> </td> </tr> <tr> <td> Email </td> <td> <%=Html.TextBoxFor(a => a.Email)%> </td> <td> <%=Html.ValidationMessageFor(a => a.Email)%> </td> </tr> <tr> <td colspan="3" align="center"> <input type="submit" name="userInformation" value="Next"/> </td> </tr> </table> <%} %> <script type="text/javascript"> $("form").submit(function (e) { if ($(this).valid()) { $.post('<%= Url.Action("CreateUserInformation")%>', $(this).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); } e.preventDefault(); }); </script>             Next create a CreateUserCompanyInformation partial view and add the following lines,   <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<UnobtrusiveValidationWithDynamicContents.Models.UserCompanyInformation>" %> <% Html.EnableClientValidation(); %> <%using (Html.BeginForm("CreateUserCompanyInformation", "Home")) { %> <table id="table1"> <tr 
style="background-color:#E8EEF4;font-weight:bold"> <td colspan="3" align="center"> User Company Information </td> </tr> <tr> <td> Company Name </td> <td> <%=Html.TextBoxFor(a => a.CompanyName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.CompanyName)%> </td> </tr> <tr> <td> Company Address </td> <td> <%=Html.TextBoxFor(a => a.CompanyAddress)%> </td> <td> <%=Html.ValidationMessageFor(a => a.CompanyAddress)%> </td> </tr> <tr> <td> Designation </td> <td> <%=Html.TextBoxFor(a => a.Designation)%> </td> <td> <%=Html.ValidationMessageFor(a => a.Designation)%> </td> </tr> <tr> <td colspan="3" align="center"> <input type="button" id="prevButton" value="Previous"/>   <input type="submit" name="userCompanyInformation" value="Next"/> <%=Html.Hidden("FirstName")%> <%=Html.Hidden("LastName")%> <%=Html.Hidden("Email")%> </td> </tr> </table> <%} %> <script type="text/javascript"> $("#prevButton").click(function () { $.post('<%= Url.Action("CreateUserPrevious")%>', $($("form")[0]).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); }); $("form").submit(function (e) { if ($(this).valid()) { $.post('<%= Url.Action("CreateUserCompanyInformation")%>', $(this).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); } e.preventDefault(); }); </script>             Next create a new class file UserInformation.cs inside Model folder and add the following code,   public class UserInformation { public int Id { get; set; } [Required(ErrorMessage = "First Name is required")] [StringLength(10, ErrorMessage = "First Name max length is 10")] public string FirstName { get; set; } [Required(ErrorMessage = "Last Name is required")] [StringLength(10, ErrorMessage = "Last Name max length is 10")] public string LastName { get; set; } [Required(ErrorMessage = "Email is required")] [RegularExpression(@"^\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*$", ErrorMessage = "Email Format is wrong")] public string Email { get; set; } }             Next create a new class file UserCompanyInformation.cs inside Model folder and add the following code,    public class UserCompanyInformation { public int UserId { get; set; } [Required(ErrorMessage = "Company Name is required")] [StringLength(10, ErrorMessage = "Company Name max length is 10")] public string CompanyName { get; set; } [Required(ErrorMessage = "CompanyAddress is required")] [StringLength(50, ErrorMessage = "Company Address max length is 50")] public string CompanyAddress { get; set; } [Required(ErrorMessage = "Designation is required")] [StringLength(50, ErrorMessage = "Designation max length is 10")] public string Designation { get; set; } }            Next add the necessary script files in Site.Master,   <script src="<%= Url.Content("~/Scripts/jquery-1.4.4.min.js")%>" type="text/javascript"></script> <script src="<%= Url.Content("~/Scripts/jquery.validate.min.js")%>" type="text/javascript"></script> <script src="<%= Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")%>" type="text/javascript"></script>            Now run this application. You will get the same behavior as described in this article. The key important feature to note here is the $.validator.unobtrusive.parse method, which is used by ASP.NET MVC 3 unobtrusive client side validation to initialize jQuery validation plug-in to start the client side validation process. 
Another important method to note here is the jQuery valid() method, which returns true if the form is valid and false if it is not.       Summary:          There may be several occasions when you need to load your HTML contents dynamically. These dynamic HTML contents may also include some input elements, and you need to perform some client side validation for these input elements before posting their values to the server. In this article I showed you how you can enable client side validation for dynamic input elements in ASP.NET MVC 3. I am also attaching a sample application. Hopefully you will enjoy this article too.
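To recap the pattern in one place, the sketch below condenses what the views above do (the #dynamicData id comes from the CreateUser view). One caveat worth knowing: if the returned partial contains a form that jquery.validate has already parsed, its cached state must be cleared before re-parsing, hence the removeData calls; this is a common workaround rather than an official API sequence.

$("form").submit(function (e) {
    e.preventDefault();
    if (!$(this).valid()) return;                         // client-side check via jquery.validate
    $.post(this.action, $(this).serialize(), function (html) {
        $("#dynamicData").html(html);                     // swap in the partial view returned by the action
        $("#dynamicData form")
            .removeData("validator")                      // clear any cached jquery.validate state
            .removeData("unobtrusiveValidation");
        $.validator.unobtrusive.parse($("#dynamicData")); // wire up validation for the new inputs
    });
});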

    Read the article

  • SQLAuthority News – Storing Data and Files in Cloud – Dropbox – Personal Technology Tip

    - by pinaldave
    I thought long and hard about doing a Personal Technology Tips series for this blog.  I have so many tips I’d like to share.  I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance.  The only thing holding me back – which tip to share first?  The first tip obviously has the weight of seeming like the most important.  But this would mean choosing amongst my favorite tricks and shortcuts.  This is a hard task. Source: Dropbox.com My Dropbox I have finally decided, though, and have determined that the first Personal Technology Tip may not be the most secret or even trickier to master – in fact, it is probably the easiest.  My today’s Personal Technology Tip is Dropbox. I hope that all of you are nodding along in recognition right now.  If you do not use Dropbox, or have not even heard of it before, get on the internet and find their site.  You won’t be disappointed.  A quick recap for those in the dark: Dropbox is an online storage site with a lot of additional syncing and cloud-computing capabilities.  Now that we’ve covered the basics, let’s explore some of my favorite options in Dropbox. Collaborate with All The first thing I love about Dropbox is the ability it gives you to collaborate with others.  You can share files easily with other Dropbox users, and they can alter them, share them with you, all while keeping track of different versions in on easy place.  I’d like to see anyone try to accomplish that key idea – “easily” – using e-mail versions and multiple computers.  It’s even difficult to accomplish using a shared network. Afraid that this kind of ease looks too good to be true?  Afraid that maybe there isn’t enough storage space, or the user interface is confusing?  Think again.  There is plenty of space – you can get 2 GB with just a free account, and upgrades are inexpensive and go up to 100 GB of storage.  And the user interface is so easy that anyone can learn to use it. What I use Dropbox for I love Dropbox because I give a lot of presentations and often they are far from home.  I can keep my presentations on Dropbox and have easy access to them anywhere, without needing to have my whole computer with me.  This is just one small way that you can use Dropbox. You can sync your entire hard drive, or hard drives if you have multiple computers (home, work, office, shared), and you can set Dropbox to automatically sync files on a certain timeline, or whenever Dropbox notices that they’ve been changed. Why I love Dropbox Dropbox has plenty of storage, but 2 GB still has a hard time competing with the average desktop’s storage space.  So what if you want to sync most of your files, but only the ones you use the most and share between work and home, and not all your files (especially large files like pictures and videos)?  You can use selective sync to choose which files to sync. Above all, my favorite feature is LanSync.  Dropbox will search your Local Area Network (LAN) for new files and sync them to Dropbox, as well as downloading the new version to all the shared files across the network.  That means that if move around on different computers at work or at home, you will have the same version of the file every time.  Or, other users on the LAN will have access to the new version, which makes collaboration extremely easy. Ref: rzfeeser.com Dropbox has so many other features that I feel like I could create a Personal Technology Tips series devoted entirely to Dropbox.  
I’m going to create a bullet list here to make things shorter, but I strongly encourage you to look further into these options if any of them sounds like something you would use: Theft Recovery, Home Security, File Hosting and Sharing, Portable Dropbox, Sync your iCal calendar, and Password Storage. What is your favorite tool and why? I could go on and on, but I will end here.  In summary, I strongly encourage everyone to investigate Dropbox to see if it’s something they would find useful.  If you use Dropbox and know of a great feature I failed to mention, please share it with me; I’d love to hear how everyone uses this program. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Personal Technology

    Read the article

  • Verizon Wireless Supports its Mission-Critical Employee Portal with MySQL

    - by Bertrand Matthelié
Verizon Wireless, the #1 mobile carrier in the United States, operates the nation’s largest 3G and 4G LTE network, with the most subscribers (109 million) and the highest revenue ($70.2 Billion in 2011). Verizon Wireless built the first wide-area wireless broadband network and delivered the first wireless consumer 3G multimedia service in the US, and offers global voice and data services in more than 200 destinations around the world. To support 4.2 million daily wireless transactions and 493,000 daily call and email transactions produced by 94.2 million retail customers, Verizon Wireless employs over 78,000 employees with area headquarters across the United States. The Business Challenge Seeing the stupendous rise in social media, video streaming, live broadcasting, etc., which redefined the scope of technology, Verizon Wireless, as a technology-savvy company, wanted to provide a platform to its employees where they could network socially, view and host microsites, stream live videos, blog, and provide the latest news. The IT team at Verizon Wireless had abundant experience with various technology platforms to support the huge number of applications in the company. However, open-source products weren’t yet widely used in the organization, and the team had the ambition to adopt such technologies and see if the architecture could meet Verizon Wireless’ rigid requirements. After evaluating a few solutions, the IT team decided to use the LAMP stack for Vzweb, its mission-critical, 24x7 employee portal, with Drupal as the front end and MySQL on Linux as the backend, and to run a few other internal websites also on MySQL. The MySQL Solution Verizon Wireless started to support its employee portal, Vzweb, its online streaming website, Vztube, and internal wiki pages, Vzwiki, with MySQL 5.1 in 2010. Vzweb is the main internal communication channel for Verizon Wireless, while Vztube hosts important company-wide webcasts regularly for executive-level announcements, so both channels have to be live and accessible all the time for its 78,000 employees across the United States. However, during the initial deployment of the MySQL-based Intranet, the application experienced performance issues. High connection spikes occurred, causing slow user response times, and the IT team applied workarounds to continue the service. A number of key performance indicators (KPIs) for the infrastructure were identified, and the operational framework was redesigned to support a more robust website and conform to the 99.985% uptime SLA (Service-Level Agreement).
The MySQL DBA team made a series of upgrades in MySQL:
Step 1: Moved from the MyISAM to the InnoDB storage engine in 2010
Step 2: Upgraded to the latest MySQL 5.1.54 release in 2010
Step 3: Upgraded from MySQL 5.1 to the latest GA release, MySQL 5.5, in 2011, leveraging the MySQL Thread Pool (part of MySQL Enterprise Edition) to scale better
After making those changes, the team saw much better response times during high-concurrency use cases, and achieved a remarkable performance improvement of 1400%! In January 2011, Verizon CEO Ivan Seidenberg announced the iPhone launch during the opening keynote at the Consumer Electronics Show (CES) in Las Vegas, and that presentation was streamed live to the company's 78,000 employees. The event was broadcast flawlessly with MySQL as the database. Later in 2011, Hurricane Irene struck the East Coast of the United States, causing major loss of life and financial damage. During the hurricane, the team directed more traffic to its West Coast data center to avoid potential infrastructure damage on the East Coast. The transition was executed smoothly, and even though the geographical distance became longer for East Coast users, there was no impact on the performance of Vzweb and Vztube, and the SLA goal was achieved. “MySQL is the key component of Verizon Wireless’ mission-critical employee portal application,” said Shivinder Singh, senior DBA at Verizon Wireless. “We achieved 1400% performance improvement by moving from the MyISAM storage engine to InnoDB, upgrading to the latest GA release MySQL 5.5, and using the MySQL Thread Pool to support high concurrent user connections. MySQL has become part of our IT infrastructure, on which potentially more future applications will be built.” To learn more about MySQL Enterprise Edition, get our Product Guide.
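The storage engine move in Step 1 is plain DDL, one statement per table. A minimal sketch follows; the schema and table names are made up for illustration, and the thread pool option noted in the comment can vary slightly between MySQL versions:

-- convert a MyISAM table to InnoDB
ALTER TABLE vzweb.page_content ENGINE=InnoDB;

-- list any tables still on MyISAM
SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');

-- the MySQL Enterprise Thread Pool used in Step 3 is a server plugin,
-- typically enabled in my.cnf with something like:
--   [mysqld]
--   plugin-load-add=thread_pool.so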

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh sharma, Oracle Enterprise Manager Ops Center team In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controllers highly available. With EC HA if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, then the enterprise services are immediately restarted on the other standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in a HA environment and some of the new commands. What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and to start back up on a standby node. What are the prerequisites? To install EC in a HA framework an understanding of the prerequisites are required. There are many possibilities on how these prerequisites can be installed and configured - we will not discuss these in this post. However, best practices should be applied when installing and configuring, I would suggest that you get expert help if you are not familiar with them. Lets briefly look at each of these prerequisites in turn: Hardware : Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, network cards, for example. Operating System : We can use Solaris 10 9/10 or higher, Solaris 11, OEL 5.5 or higher on x86 or Sparc Network : There are a number of requirements for network cards in clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public / private and Virtual IP's (VIP's). Storage : Shared storage will be required for the cluster voting disks, Oracle Cluster Register (OCR) and the EC's libraries. Clusterware : Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html Remote Database : Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html For detailed information on how to install EC HA , please read : http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242 For detailed instructions on installing Oracle Clusterware, please read : http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII For detailed instructions on installing the remote Oracle database have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs the Enterprise Controller resources and the VIP are relocated to one of the standby nodes. 
The standby node then becomes active and all Ops Center services are resumed. Connecting to the Enterprise Controller from your favourite browser. Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser i.e. http://<active_node1>/, this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols, these represent that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node. Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA frame work. Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter
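Because the fail-over is driven by Oracle Clusterware, the standard crsctl tooling can be used to inspect and exercise it. The sketch below assumes the Enterprise Controller was registered as a Clusterware resource named ec.oc and that node2 is the standby; substitute the names actually registered in your cluster:

# show all Clusterware resources and the node each is running on
crsctl stat res -t

# manually relocate the Enterprise Controller resource to the standby node
crsctl relocate resource ec.oc -n node2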

    Read the article

  • Solaris 11 pkg fix is my new friend

    - by user12611829
    While putting together some examples of the Solaris 11 Automated Installer (AI), I managed to really mess up my system, to the point where AI was completely unusable. This was my fault as a combination of unfortunate incidents left some remnants that were causing problems, so I tried to clean things up. Unsuccessfully. Perhaps that was a bad idea (OK, it was a terrible idea), but this is Solaris 11 and there are a few more tricks in the sysadmin toolbox. Here's what I did.

      # rm -rf /install/*
      # rm -rf /var/ai
      # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \
        -s [email protected]
      Warning: Service svc:/network/dns/multicast:default is not online.
         Installation services will not be advertised via multicast DNS.
      Creating service from: [email protected]
      DOWNLOAD                           PKGS       FILES    XFER (MB)   SPEED
      Completed                           1/1     130/130  264.4/264.4    0B/s
      PHASE                             ITEMS
      Installing new actions          284/284
      Updating package state database    Done
      Updating image state               Done
      Creating fast lookup database      Done
      Reading search index               Done
      Updating search index               1/1
      Creating i386 service: solaris11-x86
      Image path: /install/solaris11-x86

    So far so good. Then comes an oops.....

      setup-service[168]: cd: /var/ai//service/.conf-templ: [No such file or directory]
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    This is where you generally say a few things to yourself, and then promise to quit deleting configuration files and directories when you don't know what you are doing. Then you recall that the new Solaris 11 packaging system has some ability to correct common mistakes (like the one I just made). Let's give it a try.

      # pkg fix installadm
      Verifying: pkg://solaris/install/installadm                     ERROR
        dir: var/ai
          Group: 'root (0)' should be 'sys (3)'
        dir: var/ai/ai-webserver
          Missing: directory does not exist
        dir: var/ai/ai-webserver/compatibility-configuration
          Missing: directory does not exist
        dir: var/ai/ai-webserver/conf.d
          Missing: directory does not exist
        dir: var/ai/image-server
          Group: 'root (0)' should be 'sys (3)'
        dir: var/ai/image-server/cgi-bin
          Missing: directory does not exist
        dir: var/ai/image-server/images
          Group: 'root (0)' should be 'sys (3)'
        dir: var/ai/image-server/logs
          Missing: directory does not exist
        dir: var/ai/profile
          Missing: directory does not exist
        dir: var/ai/service
          Group: 'root (0)' should be 'sys (3)'
        dir: var/ai/service/.conf-templ
          Missing: directory does not exist
        dir: var/ai/service/.conf-templ/AI_data
          Missing: directory does not exist
        dir: var/ai/service/.conf-templ/AI_files
          Missing: directory does not exist
        file: var/ai/ai-webserver/ai-httpd-templ.conf
          Missing: regular file does not exist
        file: var/ai/service/.conf-templ/AI.db
          Missing: regular file does not exist
        file: var/ai/image-server/cgi-bin/cgi_get_manifest.py
          Missing: regular file does not exist
      Created ZFS snapshot: 2012-12-11-21:09:53
      Repairing: pkg://solaris/install/installadm
      Creating Plan (Evaluating mediators): |
      DOWNLOAD                           PKGS       FILES    XFER (MB)   SPEED
      Completed                           1/1         3/3      0.0/0.0    0B/s
      PHASE                             ITEMS
      Updating modified actions         16/16
      Updating image state               Done
      Creating fast lookup database      Done

    In just a few moments, IPS found the missing files and incorrect ownerships/permissions. Instead of reinstalling the system, or falling back to an earlier Live Upgrade boot environment, I was able to create my AI services and now all is well.

      # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \
        -s [email protected]
      Warning: Service svc:/network/dns/multicast:default is not online.
         Installation services will not be advertised via multicast DNS.
      Creating service from: [email protected]
      DOWNLOAD                           PKGS       FILES    XFER (MB)   SPEED
      Completed                           1/1     130/130  264.4/264.4    0B/s
      PHASE                             ITEMS
      Installing new actions          284/284
      Updating package state database    Done
      Updating image state               Done
      Creating fast lookup database      Done
      Reading search index               Done
      Updating search index               1/1
      Creating i386 service: solaris11-x86
      Image path: /install/solaris11-x86
      Refreshing install services
      Warning: mDNS registry of service solaris11-x86 could not be verified.
      Creating default-i386 alias
      Setting the default PXE bootfile(s) in the local DHCP configuration to:
      bios clients (arch 00:00): default-i386/boot/grub/pxegrub
      Refreshing install services
      Warning: mDNS registry of service default-i386 could not be verified.

      # installadm create-service -n solaris11u1-x86 --imagepath /install/solaris11u1-x86 \
        -s [email protected]
      Warning: Service svc:/network/dns/multicast:default is not online.
         Installation services will not be advertised via multicast DNS.
      Creating service from: [email protected]
      DOWNLOAD                           PKGS       FILES    XFER (MB)   SPEED
      Completed                           1/1     514/514  292.3/292.3    0B/s
      PHASE                             ITEMS
      Installing new actions          661/661
      Updating package state database    Done
      Updating image state               Done
      Creating fast lookup database      Done
      Reading search index               Done
      Updating search index               1/1
      Creating i386 service: solaris11u1-x86
      Image path: /install/solaris11u1-x86
      Refreshing install services
      Warning: mDNS registry of service solaris11u1-x86 could not be verified.

      # installadm list
      Service Name     Alias Of       Status  Arch   Image Path
      ------------     --------       ------  ----   ----------
      default-i386     solaris11-x86  on      i386   /install/solaris11-x86
      solaris11-x86    -              on      i386   /install/solaris11-x86
      solaris11u1-x86  -              on      i386   /install/solaris11u1-x86

    This is way way better than pkgchk -f in Solaris 10. I'm really beginning to like this new IPS packaging system.
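
    A related tip: if you want to see what IPS thinks is wrong before letting it change anything, you can run the verification step on its own. A minimal sketch, assuming the same installadm package as above — pkg verify only reports problems, while pkg fix repairs them (taking a ZFS snapshot first, as shown in the output above):

      # pkg verify installadm     (report ownership/permission/content errors; nothing is modified)
      # pkg fix installadm        (repair the reported errors)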

    Read the article

  • Creating a Reverse Proxy with URL Rewrite for IIS

    - by OWScott
    There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports. For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/. The URL can be made public, or it can be used by your internal staff and be password protected and/or locked down by IP address.

    This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed, even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed, you can install them easily with the Web Platform Installer.

    A lot can be said about reverse proxies and the many different situations and ways to route traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible; you can dig in deeper after you get the base configuration in place. URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that.

    Getting Started

    First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route certain traffic using conditions. After you’ve created your site, open up URL Rewrite at the site level. Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule. If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside your network to the world, so just be aware of what you’re doing and why.

    The next and final step of the template asks a few questions. The first textbox asks for the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http or https here; the template assumes that it’s not entered. You can choose whether to perform SSL offloading or not. If you leave this checked, then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe, then uncheck this.

    Next, the template enables you to create an outbound rule. This is used to rewrite links in the page so that they use your public domain name rather than the internal one. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server. If you check the “Rewrite the domain names of the links in HTTP responses” checkbox, then the From textbox will be filled in with what you entered for the inbound rule, and you can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site.

    That’s it! Well, there is a lot more that you can do, but this will give you the base configuration. You can now visit tools.mysite.com on your public web server and it will serve up the site from your internal web server. You should see two rules show up: one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed; a sketch of what the generated rules look like follows below.

    One common issue that can occur without outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm whether that’s the issue. There are resources with details on how to deal with compression and outbound rules. I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.
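
    For reference, the rules the template creates are written into the site’s web.config. The sketch below shows roughly what they end up looking like for the example above; the rule names, the internal address 10.10.0.50:8111, and the public host tools.mysite.com are the sample values from this article, and the exact markup the wizard generates may differ slightly:

      <system.webServer>
        <rewrite>
          <rules>
            <!-- Inbound: forward every request to the internal server -->
            <rule name="ReverseProxyInboundRule1" stopProcessing="true">
              <match url="(.*)" />
              <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
            </rule>
          </rules>
          <outboundRules>
            <!-- Outbound: rewrite links in HTML responses back to the public host -->
            <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
              <match filterByTags="A, Form, Img" pattern="^http(s)?://10.10.0.50:8111/(.*)" />
              <action type="Rewrite" value="http{R:1}://tools.mysite.com/{R:2}" />
            </rule>
            <preConditions>
              <preCondition name="ResponseIsHtml1">
                <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
              </preCondition>
            </preConditions>
          </outboundRules>
        </rewrite>
      </system.webServer>

    Editing the rules directly in web.config is equivalent to editing them through the URL Rewrite UI, so either approach works once the template has created them.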

    Read the article

  • How to Share Files Between User Accounts on Windows, Linux, or OS X

    - by Chris Hoffman
    Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts. This process works similarly on Windows, Linux, and Mac OS X. These are all powerful multi-user operating systems with similar folder and file permission systems.

    Windows

    On Windows, the “Public” user’s folders are accessible to all users. You’ll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it’s a good way to share music, videos, and other types of files between users on the same computer.

    Windows even adds these folders to each user’s libraries by default. For example, a user’s Music library contains the user’s music folder under C:\Users\NAME\ as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public — just drag and drop a file from the user-specific folder to the public folder in the library. Libraries are hidden by default on Windows 8.1, so you’ll have to unhide them to do this.

    These Public folders can also be used to share folders publicly on the local network. You’ll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Control Panel.

    You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties. Use the options on the Security tab to change the folder’s permissions and make it accessible to different user accounts. You’ll need administrator access to do this.

    Linux

    This is a bit more complicated on Linux, as typical Linux distributions don’t come with a special user folder all users have read-write access to. The Public folder on Ubuntu is for sharing files between computers on a network. You can use Linux’s permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager. It should be similar for other desktop environments, too.

    Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give “Others” the “Create and delete files” permission. Click the Change Permissions for Enclosed Files button and give “Others” the “Read and write” and “Create and Delete Files” permissions. Other users on the same computer will then have read and write access to your folder. They’ll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it. (A command-line equivalent is sketched below.)

    Mac OS X

    Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It’s located at /Users/Shared. To access it, open the Finder and click Go > Computer. Navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac.

    These tricks are useful if you’re sharing a computer with other people and you all have your own user accounts — maybe your kids have their own limited accounts. You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.
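
    For readers who prefer a terminal, here is a minimal sketch of the same Linux permission changes using chmod. The folder name ~/shared-stuff is just an example; also note that on distributions that create home directories without world-execute permission, other users will additionally need traverse access to /home/YOURNAME itself.

      # let other local users list, create, and delete files in the folder,
      # and make everything already inside it readable and writable
      # (capital X adds execute only where needed, i.e. on directories)
      chmod -R o+rwX ~/shared-stuff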

    Read the article

  • Reflections on GiveCamp

    - by Reed
    I participated in the Seattle GiveCamp over the weekend, and am entirely impressed. GiveCamp is a great event – I especially like how rewarding it is for everybody involved. I strongly encourage any and all developers to watch for future GiveCamp events, and consider participating, for many reasons…

    GiveCamp provides real value to organizations that truly need help. The Seattle event alone succeeded in helping sixteen non-profit organizations in many different ways. The projects involved varied dramatically, including website redesigns, SEO, reworking data management workflows, and even game development. Many non-profits have a strong need for good, quality technical help. However, nearly every non-profit organization has an incredibly limited budget. GiveCamp is a way to really give back, and provide incredibly valuable help to organizations that truly benefit. My experience has shown many developers to be incredibly generous – this is a chance to dedicate your energy to helping others in a way that really takes advantage of your expertise. Your time as a developer is incredibly valuable, and this puts something of incredible value directly into the hands of the places it’s needed. First, and foremost, GiveCamp is about providing technical help to non-profit organizations in need.

    GiveCamp can make you a better developer. This is a fantastic opportunity for us, as developers, to work with new people, in a new setting. The incredibly short time frame (one weekend for a deliverable project) and intense motivation to succeed provide a huge opportunity for learning from peers. I’d personally like to thank all of the developers with whom I worked – I learned something from each and every one of you. I hope to see and work with all of you again someday.

    GiveCamp provides an opportunity for you to work outside of your comfort zone. While it’s always nice to be an expert, it’s also valuable to work on a project where you have little or no direct experience. My team focused on a complete reworking of our organization’s message and a complete new website redesign and deployment using WordPress. While I’d used WordPress for my blog, and had some experience, this is completely unrelated to my professional work. In fact, nobody on our team normally worked directly with the technologies involved – yet together we managed to succeed in delivering our goals. As developers, it’s easy to want to stay abreast of new technology surrounding our expertise, but it’s rare that we get a chance to sit down and work on something practical that is completely outside of our normal realm of work. I’m a desktop developer by trade, and spent much of the weekend working with CSS and Photoshop. Many of the projects organizations need don’t match perfectly with the skill set in the room – yet all of the software professionals rose to the occasion and delivered practical, usable applications.

    GiveCamp is a short-term, known commitment. While this seems obvious, I think it’s an important aspect to remember. This is a huge part of what makes it successful – you can work, completely focused, on a project, then walk away completely when you’re done. There is no expectation of continued involvement. While many of the professionals I’ve talked to are willing to contribute some amount of their time beyond the camp, this is not expected. The freedom this provides is immense. In addition, the motivation this brings is incredibly valuable. Every developer in the room was very focused on delivering on time – you have one shot to get it as good as possible, and leave it with the organization in a way that can be maintained by them. This is a rare experience – and excellent practice at time management for everyone involved.

    GiveCamp provides a great way to meet and network with your peers. Not only do you get to network with other software professionals in your area – you get to network with amazing people. Every single person in the room is there to try to help people. The balance of altruism, intelligence, and expertise in the room is something I’ve never before experienced. During the presentations of what was accomplished, I felt blessed to participate. I know many people in the room were incredibly touched by the level of dedication and accomplishment over the weekend.

    GiveCamp is fun. At the end of the experience, I would have signed up again, even if it had been a painful, tedious weekend – merely due to the amazing accomplishments achieved throughout the event. However, the event is fun. Everybody I talked to, the entire weekend, was having a good time. While there were many faces focused into a near grimace at times (including mine, I’ll admit), this was always in response to a particularly challenging problem or task. The challenges just added to the overall enjoyment of the weekend – part of why I became a developer in the first place is my love for challenge and puzzles, and a short deadline using unfamiliar technology provided plenty of opportunity for puzzles. As soon as people would stand up, it was another smile.

    If you’re a developer, I’d recommend looking at GiveCamp more closely. Watch for an event in your area. If there isn’t one, consider building a team and organizing an event. The experience is worth the commitment.

    Read the article

  • Backup Your Windows Home Server Off-Site with Asus Webstorage

    - by Mysticgeek
    Windows Home Server lets you back up machines on your network easily. But what about backing up the server data? Today we take a look at ASUS WebStorage for Windows Home Server, which provides you with secure off-site backup for WHS. To use the ASUS WebStorage service you’ll need to sign up for a free account. It offers 1GB of free storage, and you can purchase an unlimited backup package for $39.99 for a one-year subscription. Note: They also offer online storage for individual PCs as well.

    Install ASUS WebStorage for WHS

    Browse to your shared folders on the server, open the Add-Ins folder, and copy over the WHSConnectorSetup2.2.4.088.msi file (link below), then close out of the folder. (If you prefer the command line, a sketch of this copy step follows at the end of the article.) Now launch Windows Home Server Console from one of the computers on your network, click Settings, then Add-ins. Under Available Add-ins click the Available tab and you’ll see the ASUS WebStorage installer file we just copied over. Click the Install button. Installation kicks off, and when it’s complete you’ll need to close out of the console and reconnect.

    Using the ASUS WebStorage WHS Connector

    When you reconnect to WHS Console, scroll over to the ASUS WebStorage icon and click on Settings. Log into your ASUS account, then select the folders you want to back up to the WebStorage service. Select the radio button next to Enable to initialize the backup process, and the backup begins. You can change which folders are backed up simply by disabling the backup process, unchecking the folder(s), and then enabling the backup again.

    ASUS WebStorage Site

    After you have files backed up to the ASUS site, log into your account and you’re presented with an overview of the amount of storage you’re using. It also shows what types of files are taking up certain amounts of space. You can browse through your backed-up files and folders, and the site allows you to share and sync backed-up data as well. Navigate to the file you want and you can easily download it by clicking on it, or share it out by clicking the share link below it. If you choose to share it, you’re provided with a link to the file to send out to other users.

    Conclusion

    Users of Windows Home Server have been looking for an inexpensive cloud backup solution for quite some time. There are services such as JungleDisk, KeepVault, and Wuala. These services probably do a better job, but they can start getting expensive once you start uploading GBs of data. Another disappointment of ASUS WebStorage is that you can only back up your WHS shares (from what we’ve been able to determine); it’s an “all or nothing” type of thing. You cannot go in and select individual files and folders. The initial upload speeds can be a bit slow as well, although that might have something to do with the limited upload speed of the DSL connection we used to test it. Retrieving your data from the ASUS site is a breeze though, and all the data files are organized quite well. The WHS Add-in is very easy to install and use. If you’re looking for an off-site solution for backing up your WHS data, you can test out ASUS WebStorage for free with a 1GB limit. This is good for testing the service, and it might be exactly what you’re looking for. Other users may want a more advanced solution like KeepVault or CloudBerry, which is a front end for Amazon S3 storage.
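
    As mentioned in the install step above, you can also copy the add-in installer from a client machine’s command prompt instead of browsing through the shares. This is only a sketch: the server name HOMESERVER is a placeholder for your own server, and the Software\Add-Ins path assumes the default WHS share layout.

      :: copy the add-in installer to the server's Add-Ins share (server name assumed)
      copy WHSConnectorSetup2.2.4.088.msi \\HOMESERVER\Software\Add-Ins\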
    Download ASUS WebStorage WHS Add-in
    Other WHS off-site backup solutions: CloudBerry, JungleDisk, KeepVault, Wuala

    Read the article
