Search Results

Search found 5233 results on 210 pages for 'records'.


  • Searching a set of data with multiple terms using Linq

    - by Cj Anderson
    I'm in the process of moving from ADO.NET to LINQ. The application is a directory search program for looking people up. Users type their search criteria into a single textbox; they can separate each term with a space, or wrap a phrase in quotes, such as "park place", to indicate that it is one term. Behind the scenes the data comes from an XML file that has about 90,000 records in it and is about 65 MB. I load the data into a DataTable and then use the .Select method with a SQL query to perform the searches. The query I pass is built from the search terms the user entered: I split the string from the textbox into an array using a regular expression that splits on spaces, except that a phrase wrapped in quotes becomes its own single element. I then end up with a one-dimensional array of x elements, which I iterate over to build a long query. For each term I build the search expression below:

        query = query & _
            "((userid LIKE '" & tempstr & "%') OR " & _
            "(nickname LIKE '" & tempstr & "%') OR " & _
            "(lastname LIKE '" & tempstr & "%') OR " & _
            "(firstname LIKE '" & tempstr & "%') OR " & _
            "(department LIKE '" & tempstr & "%') OR " & _
            "(telephoneNumber LIKE '" & tempstr & "%') OR " & _
            "(email LIKE '" & tempstr & "%') OR " & _
            "(Office LIKE '" & tempstr & "%'))"

    Each term gets a set of the above query. If there is more than one term, I put an AND in between and build another set like the above for the next term. I'm not sure how to do this in LINQ. So far, I've got the XML file loading correctly, and I'm able to search it with specific criteria, but I'm not sure how best to implement the search over multiple terms.

        'this works, but it is far too simple to get the job done
        Dim results = From c In m_DataSet...<Users> _
                      Where c.<userid>.Value = "XXXX" _
                      Select c

    The above code also doesn't use the LIKE operator, so partial matches don't work. It looks like what I'd want is .StartsWith, but that appears to be only in Linq2SQL. Any guidance would be appreciated; I'm new to LINQ, so I might be missing a simple way to do this. The XML file looks like so:

        <?xml version="1.0" standalone="yes"?>
        <theusers>
          <Users>
            <userid>person1</userid>
            <nickname></nickname>
            <lastname></lastname>
            <firstname></firstname>
            <department></department>
            <telephoneNumber></telephoneNumber>
            <email></email>
          </Users>
          <Users>
            <userid>person2</userid>
            <nickname></nickname>
            <lastname></lastname>
            <firstname></firstname>
            <department></department>
            <telephoneNumber></telephoneNumber>
            <email></email>
          </Users>
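    For reference, a minimal LINQ-to-XML sketch of the multi-term search, assuming the m_DataSet above and a terms() array produced by the quote-aware split. Note that .StartsWith here is ordinary String.StartsWith on the element values, so it is not limited to Linq2SQL:

        ' Sketch only: every term must match at least one field
        ' (AND across terms, OR across fields), mirroring the SQL built above.
        Dim results = From c In m_DataSet...<Users> _
                      Where terms.All(Function(term) _
                          (New String() {c.<userid>.Value, c.<nickname>.Value, _
                                         c.<lastname>.Value, c.<firstname>.Value, _
                                         c.<department>.Value, c.<telephoneNumber>.Value, _
                                         c.<email>.Value}).Any( _
                              Function(f) f IsNot Nothing AndAlso _
                                  f.StartsWith(term, StringComparison.OrdinalIgnoreCase))) _
                      Select c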


  • migrating from Prototype to jQuery in Rails, having trouble with duplicate get request

    - by aressidi
    I'm in the process of migrating from Prototype to jQuery and moving all JS outside of the view files. All is going fairly well, with one exception. Here's what I'm trying to do, and the problem I'm having. I have a diary where users can update records in-line in the page, like so:

    1. The user clicks an 'edit' link to edit an entry in the diary.
    2. A GET request is performed via jQuery and an edit form is displayed, allowing the user to modify the record.
    3. The user updates the record, the form disappears, and the updated record is shown in place of the form.

    All of that works so far. The problem arises when:

    1. The user updates a record.
    2. The user clicks 'edit' to update another record.

    In this case, the edit form is shown twice! In Firebug I get a status code 200 when the form shows, and then moments later another edit form shows again with a status code of 304. I only want the form to show once, not twice. The form shows twice only after I update a record; otherwise everything works fine. Here's the code; any ideas? I think this might have to do with the fact that in food_item_update.js I call editDiaryEntry() after a record is updated, but if I don't call that function and try to update the record after it's been modified, it just spits the .js.erb response up on the screen. That's also why I have editDiaryEntry() in the add_food.js.erb file. Any help would be greatly appreciated.

    diary.js

        jQuery(document).ready(function() {
            postFoodEntry();
            editDiaryEntry();
            initDatePicker();
        });

        function postFoodEntry() {
            jQuery('form#add_entry').submit(function(e) {
                e.preventDefault();
                jQuery.post(this.action, jQuery(this).serialize(), null, "script"); // return this
            });
        }

        function editDiaryEntry() {
            jQuery('.edit_link').click(function(e) {
                e.preventDefault();
                // This should look to see if one version of this is open...
                if (jQuery('#edit_container_' + this.id).length == 0) {
                    jQuery.get('/diary/entry/edit', {id: this.id}, null, "script");
                }
            });
        }

        function closeEdit() {
            jQuery('.close_edit').click(function(e) {
                e.preventDefault();
                jQuery('.entry_edit_container').remove();
                jQuery("#entry_" + this.id).show();
            });
        }

        function updateDiaryEntry() {
            jQuery('.edit_entry_form').submit(function(e) {
                e.preventDefault();
                jQuery.post(this.action, $(this).serialize(), null, "script");
            });
        }

        function initDatePicker() {
            jQuery("#date, #edit_date").datepicker();
        };

    add_food.js.erb

        jQuery("#entry_alert").show();
        jQuery('#add_entry')[ 0 ].reset();
        jQuery('#diary_entries').html("<%= escape_javascript(render :partial => 'members/diary/diary_entries', :object => @diary, :locals => {:record_counter => 0, :date_header => 0, :edit_mode => @diary_edit}, :layout => false ) %>");
        jQuery('#entry_alert').html("<%= escape_javascript(render :partial => 'members/diary/entry_alert', :locals => {:type => @type, :message => @alert_message}) %>");
        jQuery('#entry_alert').show();
        setTimeout(function() { jQuery('#entry_alert').fadeOut('slow'); }, 5000);
        editDiaryEntry();

    food_item_edit.js.erb

        jQuery("#entry_<%= @entry.id %>").hide();
        jQuery("#entry_<%= @entry.id %>").after("<%= escape_javascript(render :partial => 'members/diary/food_item_edit', :locals => {:user_food_profile => @entry}) %>");
        closeEdit();
        updateDiaryEntry();
        initDatePicker();

    food_item_update.js

        jQuery("#entry_<%= @entry.id %>").replaceWith("<%= escape_javascript(render :partial => 'members/diary/food_item', :locals => {:entry => @entry, :total_calories => 0}) %>");
        jQuery('.entry_edit_container').remove();
        editDiaryEntry();
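    A likely culprit, sketched below against the selectors above (not a tested drop-in fix): each call to editDiaryEntry() binds an additional click handler to every '.edit_link', so after an update two handlers fire and two GETs go out, which matches the 200 followed by the cached 304. Delegating the handler once with live() (the delegation mechanism in the jQuery versions of this era) removes the need to rebind from the .js.erb responses at all:

        // Bind once at startup; elements re-rendered by the AJAX responses
        // are still covered, so the .js.erb files no longer need to call
        // editDiaryEntry() and handlers are never stacked.
        jQuery(document).ready(function() {
            jQuery('.edit_link').live('click', function(e) {
                e.preventDefault();
                if (jQuery('#edit_container_' + this.id).length == 0) {
                    jQuery.get('/diary/entry/edit', { id: this.id }, null, "script");
                }
            });
        });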


  • Stuck with the first record while parsing an XML in Java

    - by Ritwik G
    I am parsing the following XML:

        <table ID="customer">
        <T><C_CUSTKEY>1</C_CUSTKEY><C_NAME>Customer#000000001</C_NAME><C_ADDRESS>IVhzIApeRb ot,c,E</C_ADDRESS><C_NATIONKEY>15</C_NATIONKEY><C_PHONE>25-989-741-2988</C_PHONE><C_ACCTBAL>711.56</C_ACCTBAL><C_MKTSEGMENT>BUILDING</C_MKTSEGMENT><C_COMMENT>regular, regular platelets are fluffily according to the even attainments. blithely iron</C_COMMENT></T>
        <T><C_CUSTKEY>2</C_CUSTKEY><C_NAME>Customer#000000002</C_NAME><C_ADDRESS>XSTf4,NCwDVaWNe6tEgvwfmRchLXak</C_ADDRESS><C_NATIONKEY>13</C_NATIONKEY><C_PHONE>23-768-687-3665</C_PHONE><C_ACCTBAL>121.65</C_ACCTBAL><C_MKTSEGMENT>AUTOMOBILE</C_MKTSEGMENT><C_COMMENT>furiously special deposits solve slyly. furiously even foxes wake alongside of the furiously ironic ideas. pending</C_COMMENT></T>
        <T><C_CUSTKEY>3</C_CUSTKEY><C_NAME>Customer#000000003</C_NAME><C_ADDRESS>MG9kdTD2WBHm</C_ADDRESS><C_NATIONKEY>1</C_NATIONKEY><C_PHONE>11-719-748-3364</C_PHONE><C_ACCTBAL>7498.12</C_ACCTBAL><C_MKTSEGMENT>AUTOMOBILE</C_MKTSEGMENT><C_COMMENT>special packages wake. slyly reg</C_COMMENT></T>
        <T><C_CUSTKEY>4</C_CUSTKEY><C_NAME>Customer#000000004</C_NAME><C_ADDRESS>XxVSJsLAGtn</C_ADDRESS><C_NATIONKEY>4</C_NATIONKEY><C_PHONE>14-128-190-5944</C_PHONE><C_ACCTBAL>2866.83</C_ACCTBAL><C_MKTSEGMENT>MACHINERY</C_MKTSEGMENT><C_COMMENT>slyly final accounts sublate carefully. slyly ironic asymptotes nod across the quickly regular pack</C_COMMENT></T>
        <T><C_CUSTKEY>5</C_CUSTKEY><C_NAME>Customer#000000005</C_NAME><C_ADDRESS>KvpyuHCplrB84WgAiGV6sYpZq7Tj</C_ADDRESS><C_NATIONKEY>3</C_NATIONKEY><C_PHONE>13-750-942-6364</C_PHONE><C_ACCTBAL>794.47</C_ACCTBAL><C_MKTSEGMENT>HOUSEHOLD</C_MKTSEGMENT><C_COMMENT>blithely final instructions haggle; stealthy sauternes nod; carefully regu</C_COMMENT></T>
        </table>

    with the following Java code:

        package xmlparserformining;

        import java.util.List;
        import java.util.Iterator;

        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.Node;
        import org.dom4j.io.SAXReader;

        public class XmlParserForMining {

            public static Document getDocument(final String xmlFileName) {
                Document document = null;
                SAXReader reader = new SAXReader();
                try {
                    document = reader.read(xmlFileName);
                } catch (DocumentException e) {
                    e.printStackTrace();
                }
                return document;
            }

            public static void main(String[] args) {
                String xmlFileName = "/home/r/javaCodez/parsing in java/customer.xml";
                String xPath = "//table/T/C_ADDRESS";

                Document document = getDocument(xmlFileName);
                List<Node> nodes = document.selectNodes(xPath);
                System.out.println(nodes.size());
                for (Node node : nodes) {
                    String customer_address = node.valueOf(xPath);
                    System.out.println("Customer address: " + customer_address);
                }
            }
        }

    However, instead of getting all the various customer records, I am getting the following output: the count (1500), then the very first address repeated for every node:

        1500
        Customer address: IVhzIApeRb ot,c,E
        Customer address: IVhzIApeRb ot,c,E
        Customer address: IVhzIApeRb ot,c,E
        Customer address: IVhzIApeRb ot,c,E

    and so on. What is wrong here? Why is it printing only the first record?
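    The usual explanation for this output, with a sketch of the fix: valueOf() evaluates its XPath relative to the node it is called on, but "//table/T/C_ADDRESS" is an absolute path, so it restarts from the document root each time and always yields the first match. Reading each node's own text avoids that:

        // Each node in the list already *is* a C_ADDRESS element,
        // so its own text (equivalently, node.valueOf(".")) is the value wanted.
        for (Node node : nodes) {
            String customerAddress = node.getText();
            System.out.println("Customer address: " + customerAddress);
        }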


  • textbox disappears in paging - php

    - by fusion
    I'm calling the search.php page via AJAX from search.html. The problem is that since I implemented paging, the textbox with the search keyword from search.html 'disappears' when the user clicks the 'Next' button, because the page navigates to search.php, which has no textbox element. I'd like the textbox with the search keyword to stay there while the user pages through the records. How do I achieve this?

    search.html:

        <body>
        <form name="myform" class="wrapper">
            <input type="text" name="q" onkeyup="showPage();" class="txt_search"/>
            <input type="button" name="button" onclick="showPage();" class="button"/>
            <p>
            <div id="txtHint"></div>
            <div id="page"></div>
        </form>
        </body>

    search.php [the relevant part]:

        $self = $_SERVER['PHP_SELF'];
        $limit = 5; //Number of results per page
        $numpages = ceil($totalrows/$limit);
        $query = $query." ORDER BY idQuotes LIMIT " . ($page-1)*$limit . ",$limit";
        $result = mysql_query($query, $conn) or die('Error:' .mysql_error());
        ?>
        <div class="caption">Search Results</div>
        <div class="center_div">
        <table>
        <?php
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $cQuote = highlightWords(htmlspecialchars($row['cQuotes']), $search_result);
        ?>
            <tr>
                <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']); ?></td>
                <td style="font-size:16px;"><?php echo $cQuote; ?></td>
                <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td>
                <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td>
            </tr>
        <?php } ?>
        </table>
        </div>
        <?php
        //Create and print the Navigation bar
        $nav = "";
        if ($page > 1) {
            $nav .= "<a href=\"$self?page=" . ($page-1) . "&q=" . urlencode($search_result) . "\">< Prev</a>";
            $first = "<a href=\"$self?page=1&q=" . urlencode($search_result) . "\"><< First</a>";
        } else {
            $nav .= "&nbsp;";
            $first = "&nbsp;";
        }
        for ($i = 1; $i <= $numpages; $i++) {
            if ($i == $page) {
                $nav .= "<B>$i</B>";
            } else {
                $nav .= "<a href=\"$self?page=" . $i . "&q=" . urlencode($search_result) . "\">$i</a>";
            }
        }
        if ($page < $numpages) {
            $nav .= "<a href=\"$self?page=" . ($page+1) . "&q=" . urlencode($search_result) . "\">Next ></a>";
            $last = "<a href=\"$self?page=$numpages&q=" . urlencode($search_result) . "\">Last >></a>";
        } else {
            $nav .= "&nbsp;";
            $last = "&nbsp;";
        }
        echo "<br /><br />" . $first . $nav . $last;
        }
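    One way out, sketched under the assumption that showPage() is the AJAX helper that loads search.php into the #txtHint div: emit the navigation links as JavaScript calls instead of plain hrefs, so the browser never leaves search.html and the textbox survives. showPage() would then accept an optional page number and append it to the GET it already makes:

        // in search.html: pass the requested page through to search.php
        function showPage(page) {
            var q = document.myform.q.value;
            var url = 'search.php?q=' + encodeURIComponent(q) + '&page=' + (page || 1);
            // ... perform the existing XMLHttpRequest with this url,
            // writing the response into document.getElementById('txtHint') ...
        }

    and in search.php, build each navigation link as, for example:

        $nav .= "<a href=\"#\" onclick=\"showPage(" . ($page+1) . ");return false;\">Next ></a>";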


  • How can I map "insert='false' update='false'" on a composite-id key-property which is also used in a one-to-many FK?

    - by Gweebz
    I am working on a legacy code base with an existing DB schema. The existing code uses SQL and PL/SQL to execute queries on the DB. We have been tasked with making a small part of the project database-engine agnostic (at first; the goal is to change everything eventually). We have chosen to use Hibernate 3.3.2.GA and "*.hbm.xml" mapping files (as opposed to annotations). Unfortunately, it is not feasible to change the existing schema, because we cannot regress any legacy features.

    The problem I am encountering is when I try to map a uni-directional, one-to-many relationship where the FK is also part of a composite PK. Here are the classes and mapping file...

    CompanyEntity.java

        public class CompanyEntity {
            private Integer id;
            private Set<CompanyNameEntity> names;
            ...
        }

    CompanyNameEntity.java

        public class CompanyNameEntity implements Serializable {
            private Integer id;
            private String languageId;
            private String name;
            ...
        }

    CompanyNameEntity.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="com.example">
            <class name="com.example.CompanyEntity" table="COMPANY">
                <id name="id" column="COMPANY_ID"/>
                <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                    <key column="COMPANY_ID"/>
                    <one-to-many entity-name="vendorName"/>
                </set>
            </class>
            <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
                <composite-id>
                    <key-property name="id" column="COMPANY_ID"/>
                    <key-property name="languageId" column="LANGUAGE_ID"/>
                </composite-id>
                <property name="name" column="NAME" length="255"/>
            </class>
        </hibernate-mapping>

    This code works just fine for SELECT and INSERT of a Company with names. I encountered a problem when I tried to update an existing record: I received a BatchUpdateException, and after looking through the SQL logs I saw Hibernate was trying to do something stupid...

        update COMPANY_NAME set COMPANY_ID=null where COMPANY_ID=?

    Hibernate was trying to dis-associate the child records before updating them. The problem is that this field is part of the PK and not nullable. I found that the quick way to stop Hibernate doing this is to add not-null="true" to the "key" element in the parent mapping. So now my mapping looks like this...

    CompanyNameEntity.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.jboss.org/dtd/hibernate/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="com.example">
            <class name="com.example.CompanyEntity" table="COMPANY">
                <id name="id" column="COMPANY_ID"/>
                <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" fetch="join" batch-size="1" lazy="false">
                    <key column="COMPANY_ID" not-null="true"/>
                    <one-to-many entity-name="vendorName"/>
                </set>
            </class>
            <class entity-name="companyName" name="com.example.CompanyNameEntity" table="COMPANY_NAME">
                <composite-id>
                    <key-property name="id" column="COMPANY_ID"/>
                    <key-property name="languageId" column="LANGUAGE_ID"/>
                </composite-id>
                <property name="name" column="NAME" length="255"/>
            </class>
        </hibernate-mapping>

    This mapping gives the exception...

        org.hibernate.MappingException: Repeated column in mapping for entity: companyName column: COMPANY_ID (should be mapped with insert="false" update="false")

    My problem now is that I have tried to add these attributes to the key-property element, but that is not supported by the DTD. I have also tried changing it to a key-many-to-one element, but that didn't work either. So... how can I map insert='false' update='false' on a composite-id key-property which is also used in a one-to-many FK?
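    One workaround sometimes used for exactly this shape of schema, sketched here as an assumption rather than a verified fix: drop the separate name entity and map COMPANY_NAME as a collection of composite-elements, so the COMPANY_ID column belongs to the collection key alone and the repeated-column conflict never arises:

        <!-- Sketch: names become value objects owned by the company.
             CompanyNameEntity would lose its id field; COMPANY_ID is
             written only through the <key> element. -->
        <set name="names" table="COMPANY_NAME" cascade="all-delete-orphan" lazy="false">
            <key column="COMPANY_ID" not-null="true"/>
            <composite-element class="com.example.CompanyNameEntity">
                <property name="languageId" column="LANGUAGE_ID"/>
                <property name="name" column="NAME" length="255"/>
            </composite-element>
        </set>

    The trade-off is that the names can no longer be loaded or queried as standalone entities, which may or may not be acceptable in this legacy code base.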


  • ListView slow performance

    - by Mohamed Hemdan
    I've created a list of recipes using a ListView and a custom cursor adapter. The custom layout includes a photo for each recipe. I have problems with the performance of viewing and scrolling the ListView even though it has only 10 records (the target is 150). Sometimes I get the error java.lang.OutOfMemoryError: bitmap size exceeds VM budget. I've tried to implement an AsyncTask but failed to do it. Is there any way I can overcome this problem? Your help is highly appreciated!

    Here is my getView method:

        public View getView(int position, View convertView, ViewGroup parent) {
            View row = super.getView(position, convertView, parent);
            Cursor cursbbn = getCursor();
            if (row == null) {
                LayoutInflater inflater = (LayoutInflater) localContext.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                row = inflater.inflate(R.layout.listtype, null);
            }
            String Title = cursbbn.getString(2);
            String SandID = cursbbn.getString(1);
            String Readyin = cursbbn.getString(4);
            String Faovoites = cursbbn.getString(8);
            TextView titler = (TextView) row.findViewById(R.id.listmaintitle);
            TextView readyinr = (TextView) row.findViewById(R.id.listreadyin);
            int colorPos = position % colors.length;
            row.setBackgroundColor(colors[colorPos]);
            titler.setText(Title);
            readyinr.setText(Readyin);
            ImageView picture = (ImageView) row.findViewById(R.id.imageView1);
            Bitmap bitImg1 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0001);
            Bitmap bitImg2 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0002);
            Bitmap bitImg3 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0003);
            Bitmap bitImg4 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0004);
            Bitmap bitImg5 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0005);
            Bitmap bitImg6 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0006);
            Bitmap bitImg7 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0007);
            Bitmap bitImg8 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0008);
            Bitmap bitImg9 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0009);
            Bitmap bitImg10 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0010);
            if(SandID.contentEquals("0001")) picture.setImageBitmap(getRoundedCornerImage(bitImg1));
            if(SandID.contentEquals("0002")) picture.setImageBitmap(getRoundedCornerImage(bitImg2));
            if(SandID.contentEquals("0003")) picture.setImageBitmap(getRoundedCornerImage(bitImg3));
            if(SandID.contentEquals("0004")) picture.setImageBitmap(getRoundedCornerImage(bitImg4));
            if(SandID.contentEquals("0005")) picture.setImageBitmap(getRoundedCornerImage(bitImg5));
            if(SandID.contentEquals("0006")) picture.setImageBitmap(getRoundedCornerImage(bitImg6));
            if(SandID.contentEquals("0007")) picture.setImageBitmap(getRoundedCornerImage(bitImg7));
            if(SandID.contentEquals("0008")) picture.setImageBitmap(getRoundedCornerImage(bitImg8));
            if(SandID.contentEquals("0009")) picture.setImageBitmap(getRoundedCornerImage(bitImg9));
            if(SandID.contentEquals("0010")) picture.setImageBitmap(getRoundedCornerImage(bitImg10));
            return row;
        }

    And this is the error:

        05-02 03:11:55.898: E/AndroidRuntime(376): FATAL EXCEPTION: main
        05-02 03:11:55.898: E/AndroidRuntime(376): java.lang.OutOfMemoryError: bitmap size exceeds VM budget
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.nativeDecodeAsset(Native Method)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:460)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.decodeResourceStream(BitmapFactory.java:336)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.decodeResource(BitmapFactory.java:359)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.decodeResource(BitmapFactory.java:385)
        05-02 03:11:55.898: E/AndroidRuntime(376): at master.chef.mediamaster.AlternateRowCursorAdapter.getView(AlternateRowCursorAdapter.java:83)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.AbsListView.obtainView(AbsListView.java:1409)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.ListView.makeAndAddView(ListView.java:1745)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.ListView.fillUp(ListView.java:700)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.ListView.fillGap(ListView.java:646)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.AbsListView.trackMotionScroll(AbsListView.java:3399)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.AbsListView.onTouchEvent(AbsListView.java:2233)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.ListView.onTouchEvent(ListView.java:3446)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.View.dispatchTouchEvent(View.java:3885)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:903)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942)
        05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1691)
        05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1125)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.app.Activity.dispatchTouchEvent(Activity.java:2096)
        05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:1675)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewRoot.deliverPointerEvent(ViewRoot.java:2194)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewRoot.handleMessage(ViewRoot.java:1878)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.os.Handler.dispatchMessage(Handler.java:99)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.os.Looper.loop(Looper.java:123)
        05-02 03:11:55.898: E/AndroidRuntime(376): at android.app.ActivityThread.main(ActivityThread.java:3683)
        05-02 03:11:55.898: E/AndroidRuntime(376): at java.lang.reflect.Method.invokeNative(Native Method)
        05-02 03:11:55.898: E/AndroidRuntime(376): at java.lang.reflect.Method.invoke(Method.java:507)
        05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
        05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
        05-02 03:11:55.898: E/AndroidRuntime(376): at dalvik.system.NativeStart.main(Native Method)
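    A sketch of the most likely fix, assuming the fields from the adapter above: getView() currently decodes all ten drawables on every call, for every row and on every scroll, which is exactly the pattern that exhausts the VM budget. Decoding each image at most once, and only the one the row needs, avoids the error without an AsyncTask:

        // Cache of already-decoded, rounded bitmaps, keyed by SandID.
        private final java.util.Map<String, Bitmap> bitmapCache =
                new java.util.HashMap<String, Bitmap>();

        private Bitmap getRecipeBitmap(String sandId, int resId) {
            Bitmap cached = bitmapCache.get(sandId);
            if (cached == null) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inSampleSize = 2; // decode at half resolution; tune to the layout
                cached = getRoundedCornerImage(BitmapFactory.decodeResource(
                        localContext.getResources(), resId, opts));
                bitmapCache.put(sandId, cached);
            }
            return cached;
        }

    In getView(), the chain of ifs then collapses to a single lookup of the resource id for SandID followed by picture.setImageBitmap(getRecipeBitmap(SandID, resId)).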


  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {converter[0]: converter[1](value)
                               for converter, value in zip(converters, csv_row)
                               if allow_field(converter)}
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius], [lng+tz_lookup_radius, lat+tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng/15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    - Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.).
    - For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone.
    - If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course; calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

                 64549590 function calls (64549180 primitive calls) in 1231.257 seconds

           Ordered by: cumulative time
           List reduced from 730 to 10 due to restriction <10>

           ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
                1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
                1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
                1  853.558  853.558  853.558  853.558  {raw_input}
                1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
           343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
           343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
           343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
           343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
           343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
           343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

                 41542960 function calls (41542536 primitive calls) in 2889.164 seconds

           Ordered by: cumulative time
           List reduced from 778 to 10 due to restriction <10>

           ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
                1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
                1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
                1 2365.526 2365.526 2365.526 2365.526  {raw_input}
                1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
           343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
           343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
           343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
           686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
           343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
              346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
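    Since consecutive records often fall in the same area, one cheap direction to try, sketched here under that assumption (the grid resolution is arbitrary, and cell-border lookups reuse the first member's box, so some precision is traded away): memoise the box lookup per rounded coordinate cell, collapsing the ~343k find_one() calls down to one per distinct cell:

        # Sketch: cache timezone lookups per coarse coordinate cell,
        # reusing SON and tz_lookup_radius from the code above.
        tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            key = (round(lng, 1), round(lat, 1))  # grid resolution is an assumption
            if key not in tz_cache:
                tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return tz_cache[key]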


  • Access Qry Questions

    - by kralco626
    It was suggested that I repost this question, as I didn't do a very good job describing my issue the first time (http://stackoverflow.com/questions/2921286/access-question).

    THE SITUATION: I have inspections from many months of many years. Sometimes there is more than one inspection in a month, sometimes there is no inspection. However, the report the clients want requires EXACTLY ONE record per month for the time frame they request. They understand the data issues and have stated that if there is more than one inspection in a month, take the latest one; if there is no inspection for a month, go back in time until you find one and use that. A sample of the data follows (I am including many records because I was told I did not include enough data on my last try):

        equip_id  month  year  runtime  date
        1         5      2008  400      5/10/2008 12:34 PM
        1         7      2008  500      7/12/2008 1:45 PM
        1         8      2008  600      8/20/2008 1:12 PM
        1         8      2008  605      8/30/2008 8:00 AM
        1         1      2010  2000     1/12/2010 2:00 PM
        1         3      2010  2200     3/24/2010 10:00 AM
        2         7      2009  1000     7/20/2009 8:00 AM
        2         10     2009  1400     10/14/2009 9:00 AM
        2         1      2010  1600     1/15/2010 1:00 PM
        2         1      2010  1610     1/30/2010 4:00 PM
        2         3      2010  1800     3/15/2010 1:00 PM

    After all the transformations to the data are done, it should look like this:

        equip_id  month  year  runtime  date
        1         5      2008  400      5/10/2008 12:34 PM
        1         6      2008  400      5/10/2008 12:34 PM
        1         7      2008  500      7/12/2008 1:45 PM
        1         8      2008  605      8/30/2008 8:00 AM
        1         9      2008  605      8/30/2008 8:00 AM
        1         10     2008  605      8/30/2008 8:00 AM
        1         11     2008  605      8/30/2008 8:00 AM
        1         12     2008  605      8/30/2008 8:00 AM
        1         1      2009  605      8/30/2008 8:00 AM
        1         2      2009  605      8/30/2008 8:00 AM
        1         3      2009  605      8/30/2008 8:00 AM
        1         4      2009  605      8/30/2008 8:00 AM
        1         5      2009  605      8/30/2008 8:00 AM
        1         6      2009  605      8/30/2008 8:00 AM
        1         7      2009  605      8/30/2008 8:00 AM
        1         8      2009  605      8/30/2008 8:00 AM
        1         9      2009  605      8/30/2008 8:00 AM
        1         10     2009  605      8/30/2008 8:00 AM
        1         11     2009  605      8/30/2008 8:00 AM
        1         12     2009  605      8/30/2008 8:00 AM
        1         1      2010  2000     1/12/2010 2:00 PM
        1         2      2010  2000     1/12/2010 2:00 PM
        1         3      2010  2200     3/24/2010 10:00 AM
        2         7      2009  1000     7/20/2009 8:00 AM
        2         8      2009  1000     7/20/2009 8:00 AM
        2         9      2009  1000     7/20/2009 8:00 AM
        2         10     2009  1400     10/14/2009 9:00 AM
        2         11     2009  1400     10/14/2009 9:00 AM
        2         12     2009  1400     10/14/2009 9:00 AM
        2         1      2010  1610     1/30/2010 4:00 PM
        2         2      2010  1610     1/30/2010 4:00 PM
        2         3      2010  1800     3/15/2010 1:00 PM

    I think this is the most accurate depiction of the problem I can give. I will now describe what I have tried, although if someone has a better approach I am perfectly willing to throw away what I have done and do it differently...

    STEP 1: Create a query that removes the duplicates from the data, i.e. only one record per equip_id for each month/year, keeping the latest one. (Done successfully.)

    STEP 2: Create a table of the date ranges the client wants the report for. (This is done dynamically at runtime.) This table has two fields, Month and Year. So if the client wants a report from Feb 2008 to March 2010, the table looks like:

        Month  Year
        2      2008
        3      2008
        .      .
        .      .
        12     2008
        1      2009
        .      .
        .      .
        12     2009
        1      2010
        2      2010
        3      2010

    I then left joined this table with my query from step 1. Now I have a record for every month and year they want the report for, with nulls (or blanks, or sometimes 0s; not sure why, Access is weird, sometimes they are nulls and sometimes 0s) for the runtimes that are not available. I don't particularly like this solution, but I'll do it if I have to. (This is also done successfully.)

    STEP 3: Fill in the missing runtime values. This I HAVE NOT done successfully. Note that if the requested range for the report is Feb 2008 to March 2010 and the oldest record for a particular equip_id is, say, June 2008, it is O.K. for the runtimes to be null (or zero) for Feb through May 2008. I am working with the following query for this step:

        SELECT equip_id as e_id, year, month,
               (select top 1 runhours
                from qry_1_c_One_Record_per_Month a
                where a.equip_id = e_id
                order by year, month)
        FROM qry_1_c_One_Record_per_Month
        where runhours is null or runhours = 0;
        UNION
        SELECT equip_id, year, month, runhours
        FROM qry_1_c_One_Record_per_Month
        WHERE runhours Is Not Null And runhours <> 0

    However, I clearly can't check the a.equip_id = e_id condition this way, so I have no way to make sure I'm looking at the correct equip_id.

    SUMMARY: So, like I said, I'm willing to throw away any part, or all, of what I tried; I'm just trying to give everyone a complete picture. I REALLY appreciate ANY help! Thanks so much in advance!
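    For what it's worth, a sketch of the correlated version of that lookup, run against the left-joined result from step 2 (the name qry_2_left_joined below is a placeholder for whatever that query is actually called): aliasing the outer table lets the subquery match on the right equip_id and pick the latest reading at or before each month:

        SELECT m.equip_id, m.year, m.month,
               (SELECT TOP 1 q.runhours
                FROM qry_1_c_One_Record_per_Month AS q
                WHERE q.equip_id = m.equip_id
                  AND (q.year < m.year OR (q.year = m.year AND q.month <= m.month))
                  AND q.runhours IS NOT NULL AND q.runhours <> 0
                ORDER BY q.year DESC, q.month DESC) AS runhours
        FROM qry_2_left_joined AS m;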


  • Understanding the passing of data/life of a script in web development/CodeIgniter

    - by Pete Jodo
    I hope I worded the title accurately enough, but I typically use Java and don't have much experience in web development/PHP/CodeIgniter. I have a difficult time understanding the life cycle of a script, as I found out while trying to implement a certain feature on a website I am developing (as a means of learning). I'll first describe the feature I tried to implement, and then the problem I ran into that made me question my fundamental understanding of how scripts work, since I'm used to typical OOP. Here goes...

    I have a webpage with two basic tasks a user can do: create and delete an entry. What I attempted to implement was a way to time how long it takes a user to complete a certain task. I did this with a homepage listing the tasks to choose from (in this case two, create and delete). The user clicks a task, which links to the 'true' homepage, where the user is then expected to complete that task. My script looks like this:

        <?php
        class Site extends CI_Controller {

            var $task1;
            var $tasks = array(
                "task1" => NULL,
                "date1" => 0,
                "date2" => 0,
                "diff"  => 0);

            function __construct()
            {
                parent::__construct();
                include 'timetask.php';
                $this->task1 = new TimeTask("create");
            }

            function index()
            {
                $this->tasks['task1'] = $this->task1->getTask();
                $this->tasks['diff'] = $this->task1->getTimeDiff();
                if ($this->tasks['diff'] == NULL) {
                    $this->tasks['diff'] = 0;
                }
                $this->load->view('usability_test', $this->tasks);
            }

            function origIndex()
            {
                $this->task1->setDate1(new DateTime());
                $this->tasks['date1'] = $this->task1->getDate1()->getTimestamp();
                $data = array();
                if ($q = $this->site_model->get_records()) {
                    $data['records'] = $q;
                }
                $this->load->view('options_view', $data);
            }

            function create()
            {
                $this->task1->setDate2(new DateTime());
                $this->tasks['date2'] = $this->task1->getDate2()->getTimestamp();
                $data = array(
                    'author' => $this->input->post('author'),
                    'title' => $this->input->post('title'),
                    'contents' => $this->input->post('contents')
                );
                $this->site_model->add_record($data);
                $this->index();
            }

    I only included create to keep it short. Then I also have the TimeTask class, which another StackOverflow user kindly helped me with:

        <?php
        class TimeTask {

            private $task;

            /**
             * @var DateTime
             */
            private $date1, $date2;

            function __construct($currTask)
            {
                $this->task = $currTask;
            }

            public function getTimeDiff()
            {
                $hasDiff = $this->date1 && $this->date2;
                if ($hasDiff) {
                    return $this->date2->getTimestamp() - $this->date1->getTimestamp();
                } else {
                    return NULL;
                }
            }

            public function __toString()
            {
                return (string) $this->getTimeDiff();
            }

            /**
             * @return \DateTime
             */
            public function getDate1()
            {
                return $this->date1;
            }

            /**
             * @param \DateTime $date1
             */
            public function setDate1(DateTime $date1)
            {
                $this->date1 = $date1;
            }

            /**
             * @return \DateTime
             */
            public function getDate2()
            {
                return $this->date2;
            }

            /**
             * @param \DateTime $date2
             */
            public function setDate2(DateTime $date2)
            {
                $this->date2 = $date2;
            }

            /**
             * @return get current task
             */
            public function getTask()
            {
                return $this->task;
            }
        }
        ?>

    I don't think posting the views is necessary for the question, but here is at least how the links are made:

        ...and... id", $row-title); ?

    Now there's no error in the code, but it doesn't do what I expect, and I assume the reason is that each time a function of the script is called via a new page, it is NOT the same instance of the script called previously, so any previously created objects are no longer there. This confuses me and leaves me quite unsure of how to implement this gracefully. My guesses at how to do it would be passing the necessary data through the URL, or saving the data in a database and retrieving it later to compare the times. What would be a recommended way to do, not just this, but anything that needs previously created data? Also, am I correct to think that a script is only 'alive' for one webpage at a time? Thanks!
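    For what it's worth, the guess at the end is right: each HTTP request runs the script from scratch, so objects built on one page are gone on the next. The usual CodeIgniter answer is to park the data in the session (or the database) between requests. A minimal sketch, assuming the session library is loaded:

        // In the action that shows the task: remember when it started.
        function origIndex()
        {
            $this->session->set_userdata('task_start', time());
            // ... load the view as before ...
        }

        // In the action that completes the task: read it back and diff.
        function create()
        {
            $diff = time() - $this->session->userdata('task_start');
            // ... store $diff with the completed task, then render ...
        }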


  • pagination in php error

    - by fusion
    I've implemented a pagination class for my webpage in a separate file called class.pagination.php, but when I execute the page, nothing happens; it just displays a blank page. This is my search.php file, where I'm calling this class:

        <?php
        include 'config.php';
        require ('class.pagination.php');

        $search_result = "";
        $search_result = $_GET["q"];
        $search_result = trim($search_result);

        //Check if the string is empty
        if ($search_result == "") {
            echo "<p class='error'>Search Error. Please Enter Your Search Query.</p>";
            exit();
        }

        //search query for multiple keywords
        if (!empty($search_result)) {
            // the table to search
            $table = "thquotes";
            // explode search words into an array
            $arraySearch = explode(" ", $search_result);
            // table fields to search
            $arrayFields = array(0 => "cQuotes");
            $countSearch = count($arraySearch);
            $a = 0;
            $b = 0;
            $query = "SELECT cQuotes, vAuthor, cArabic, vReference FROM ".$table." WHERE (";
            $countFields = count($arrayFields);
            while ($a < $countFields) {
                while ($b < $countSearch) {
                    $query = $query."$arrayFields[$a] LIKE '%$arraySearch[$b]%'";
                    $b++;
                    if ($b < $countSearch) {
                        $query = $query." AND ";
                    }
                }
                $b = 0;
                $a++;
                if ($a < $countFields) {
                    $query = $query.") OR (";
                }
            }
            $query = $query.")";
            $result = mysql_query($query, $conn) or die ('Error: '.mysql_error());
            $totalrows = mysql_num_rows($result);
            if ($totalrows < 1) {
                echo '<span class="error2">No matches found for "'.$search_result.'"</span>';
            } else {
        ?>
        <div class="caption">Search Results</div>
        <div class="center_div">
        <table>
        <?php
            while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
                $cQuote = highlightWords(htmlspecialchars($row['cQuotes']), $search_result);
        ?>
            <tr>
                <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']); ?></td>
                <td style="font-size:16px;"><?php echo $cQuote; ?></td>
                <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td>
                <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td>
            </tr>
        <?php } ?>
        </table>
        </div>
        <?php
            }
        } else {
            exit();
        }

        //Setting Pagination
        $pagination = new pagination();
        $pagination->byPage = 5;
        $pagination->rows = $totalrows; // number of records in a table-back mysql_num_rows() instance or another, you have to play
        $from = $pagination->fromPagination(); // sql used for applications such LIMIT $from, $pagination->byPage
        $pages = $pagination->pages();
        if (isset($pages)) {
        ?>
        <div class="pagination">
        <?foreach ($pages as $key){?>
            <?if($key['current'] == 1) {?>
                <a href="?p=<?=$key['p']?>" class="active"><?=$key['page']?></a>
            <?} else {?>
                <a href="?p=<?=$key['p']?>" class="inactive"><?=$key['page']?></a>
            <?}?>
        <?}?>
        </div>
        <?}
        //End Pagination
        ?>
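    A blank page in PHP is almost always a fatal or parse error with display_errors switched off, so the first step is usually to surface the real failure before guessing at the pagination class. A minimal sketch (development only, at the very top of search.php):

        // Temporarily show all errors; a parse error in class.pagination.php
        // or the short <? tags near the bottom being disabled are common causes.
        ini_set('display_errors', '1');
        error_reporting(E_ALL);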


  • How can we generate more than 60,000 bits?

    - by thinthinyu
    We can currently generate about 50,000 bits, but my code cannot generate more than 60,000 bits. Please help. m_B is a member variable of type CString.

        // LFSR_ECDlg.cpp : implementation file
        //

        #include "stdafx.h"
        #include "myecc.h"
        #include "LFSR_ECDlg.h"
        #include "MyClass.h"

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #undef THIS_FILE
        static char THIS_FILE[] = __FILE__;
        #endif

        extern MyClass mycrv;

        /////////////////////////////////////////////////////////////////////////////
        // LFSR_ECDlg dialog

        LFSR_ECDlg::LFSR_ECDlg(CWnd* pParent /*=NULL*/)
            : CDialog(LFSR_ECDlg::IDD, pParent)
        {
            //{{AFX_DATA_INIT(LFSR_ECDlg)
            m_C1 = 0;
            m_C2 = 0;
            m_B = _T("");
            m_p = _T("");
            m_Qty = 0;
            m_time = _T("");
            //}}AFX_DATA_INIT
        }

        void LFSR_ECDlg::DoDataExchange(CDataExchange* pDX)
        {
            CDialog::DoDataExchange(pDX);
            //{{AFX_DATA_MAP(LFSR_ECDlg)
            DDX_Text(pDX, IDC_C1, m_C1);
            DDX_Text(pDX, IDC_C2, m_C2);
            DDX_Text(pDX, IDC_Sequence, m_B);
            DDX_Text(pDX, IDC_Sequence2, m_p);
            DDX_Text(pDX, IDC_QTY, m_Qty);
            DDV_MinMaxLong(pDX, m_Qty, 0, 2147483647);
            DDX_Text(pDX, IDC_time, m_time);
            //}}AFX_DATA_MAP
        }

        BEGIN_MESSAGE_MAP(LFSR_ECDlg, CDialog)
            //{{AFX_MSG_MAP(LFSR_ECDlg)
            ON_WM_SETCURSOR()
            ON_EN_CHANGE(IDC_Sequence, OnGeneratorLFSR)
            ON_MESSAGE(WM_MYPAINTMESSAGE, PaintMyCaption) //by ttyu
            ON_BN_CLICKED(IDC_save, Onsave)
            //}}AFX_MSG_MAP
        END_MESSAGE_MAP()

        /////////////////////////////////////////////////////////////////////////////
        // LFSR_ECDlg message handlers

        bool LFSR_ECDlg::CheckDataEntry()
        {
            //if((m_Px>=mycrv.p)|(m_Py>=mycrv.p)) {AfxMessageBox("Seed [P] is invalid!");return false;} //by ttyu
            if ((m_C1 <= 0) | (m_C1 > mycrv.n)) { AfxMessageBox("Constant c1 is not valid!"); return false; }
            if ((m_C2 <= 0) | (m_C2 > mycrv.n)) { AfxMessageBox("Constant c2 is not valid!"); return false; }
            return true;
        }

        void LFSR_ECDlg::OnOK()
        {
            UpdateData(true);
            static int stime, etime, dtime;
            CString txt;
            m_time = "";
            CTime t(CTime::GetCurrentTime());
            CString txt1;
            txt1 = "";
            //ms = t.GetDay();
            // TODO: Add extra validation here
            stime = t.GetTime();
            txt1.Format("%d", stime);
            AfxMessageBox(txt1);
            txt = "";
            if (CheckDataEntry())
                OnGeneratorLFSR();
            etime = t.GetTime();
            CString txt2;
            txt2 = "";
            txt2.Format("%d", etime);
            AfxMessageBox(txt2);
            dtime = etime - stime;
            txt.Format("%f", dtime);
            m_time += txt;
            // UpdateData(false);
            //rtime.Format("%s, %s %d, %d.", day, month, dd, yy);
            //CDialog::OnOK();
        }

        void LFSR_ECDlg::OnCancel()
        {
            // TODO: Add extra cleanup here
            CDialog::OnCancel();
        }

        void LFSR_ECDlg::OnGeneratorLFSR()
        {
            // TODO: If this is a RICHEDIT control, the control will not
            // send this notification unless you override the CDialog::OnInitDialog()
            // function and call CRichEditCtrl().SetEventMask()
            // with the ENM_CHANGE flag ORed into the mask.

            // TODO: Add your control notification handler code here
            point P0, P1, P2;
            P0 = mycrv.G;
            P1 = mycrv.MulPoint(P0, 2);
            int C1 = m_C1, C2 = m_C2, n = m_Qty, k = 0;
            int q = (mycrv.p - 1) / 2;
            m_p = "";
            m_B = "";
            CString txt;
            for (int i = 0; i < n; i++) {
                txt = "";
                if (P0 == mycrv.O)
                    txt.Format("O");
                else
                    txt.Format("(%d, %d)", P0.x, P0.y);
                m_p += txt;
                m_p += 13;
                m_p += 10;
                if ((P0.y >= 0) && (P0.y <= q))
                    m_B += "0";
                else if (P0 == mycrv.O)
                    m_B += "0";
                else
                    m_B += "1";
                //m_B += 13; //by ttyu
                //m_B += 10; //by ttyu
                P2 = mycrv.AddPoints(mycrv.MulPoint(P1, C2), mycrv.MulPoint(P0, C1));
                P0 = P1;
                P1 = P2;
            }
        }

        BOOL LFSR_ECDlg::OnInitDialog()
        {
            CDialog::OnInitDialog();
            // TODO: Add extra initialization here
            //code for dlg bar
            CString str = "LFSR_EC";
            m_cap.SetCaption(str);
            m_cap.Install(this, WM_MYPAINTMESSAGE);
            //////////////////////////////
            return TRUE; // return TRUE unless you set the focus to a control
                         // EXCEPTION: OCX Property Pages should return FALSE
        }

        LRESULT LFSR_ECDlg::PaintMyCaption(WPARAM wp, LPARAM lp)
        {
            m_cap.PaintCaption(wp, lp);
            return 0;
        }

        BOOL LFSR_ECDlg::OnSetCursor(CWnd* pWnd, UINT nHitTest, UINT message)
        {
            // TODO: Add your message handler code here and/or call default
            return CDialog::OnSetCursor(pWnd, nHitTest, message);
        }

        void LFSR_ECDlg::Onsave()
        {
            this->UpdateData();
            CFile bitstream;
            char strFilter[] = { "Stream Records (*.mpl)|*.mpl| (*.pis)|*.pis|All Files (*.*)|*.*||" };
            CFileDialog FileDlg(FALSE, ".mpl", NULL, 0, strFilter);
            //insertion //by TTT
            CFile cf_object;
            if (FileDlg.DoModal() == IDOK) {
                cf_object.Open(FileDlg.GetFileName(), CFile::modeCreate | CFile::modeWrite);
                //char szText[100];
                //strcpy(szText, "File Write Test");
                CString txt;
                txt = "";
                txt.Format("%s", m_B); //by ANO
                AfxMessageBox(txt); //by ANO
                int mB_size = m_B.GetLength();
                cf_object.Write(m_B, mB_size);
                //insertion end
                /*
                if (FileDlg.DoModal() == IDOK) {
                    if (bitstream.Open(FileDlg.GetFileName(), CFile::modeCreate | CFile::modeWrite) == FALSE)
                        return;
                    CArchive ar(&bitstream, CArchive::store);
                    CString txt;
                    txt = "";
                    txt.Format("%s", m_B); //by ANO
                    AfxMessageBox(txt); //by ANO
                    //txt = m_B; //by ANO
                    ar << txt; //by ANO
                    ar.Close();
                } else
                    return;
                bitstream.Close();
                */
                // TODO: Add your control notification handler code here
            }
        }
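    One thing worth ruling out first (an assumption about MFC edit-control behaviour, not something verified against this project): the generation loop itself has no 60,000-item limit, but the edit control that displays m_B caps its text at a default limit, so the sequence can appear to stop growing even though the string is fine. Raising the limit during initialisation is a cheap test:

        BOOL LFSR_ECDlg::OnInitDialog()
        {
            CDialog::OnInitDialog();
            // 0 removes the default cap on the edit control's text length.
            ((CEdit*)GetDlgItem(IDC_Sequence))->SetLimitText(0);
            // ... existing caption-bar setup as above ...
            return TRUE;
        }

    If the bits are only needed in the saved file, an alternative is to skip displaying m_B entirely and append to the file in chunks inside the generation loop.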


  • XSLT: I can't find a (working) way to find the minimum value in XML and set a variable

    - by Bob
    I've searched for hours and not found an example that allows for the very first position to be the lowest. I'm getting 'False' instead of the value returned.

    EDIT: Oddly enough, if I run a 2nd instance as MAX_Landed with ascending, it returns a value just fine. If I switch the order in the XSLT, the first instance will return 'False' and the 2nd will work. Hope I'm making sense.

    Example XML (I couldn't get it formatted to show correctly, and I'm in a hurry, so you get the gist, I hope):

        <?xml version="1.0"?>
        <GetLowestOfferListingsForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01">
          <GetLowestOfferListingsForASINResult ASIN="0470067802" status="Success">
            <AllOfferListingsConsidered>false</AllOfferListingsConsidered>
            <Product xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01" xmlns:ns2="http://mws.amazonservices.com/schema/Products/2011-10-01/default.xsd">
              <LowestOfferListings>
                <LowestOfferListing>
                  <Qualifiers>
                    <ItemCondition>Used</ItemCondition>
                    <ItemSubcondition>Good</ItemSubcondition>
                  </Qualifiers>
                  <Price>
                    <LandedPrice>
                      <Amount>15.71</Amount>
                    </LandedPrice>
                  </Price>
                </LowestOfferListing>
                <LowestOfferListing>
                  <Qualifiers>
                    <ItemCondition>Used</ItemCondition>
                    <ItemSubcondition>Good</ItemSubcondition>
                  </Qualifiers>
                  <Price>
                    <LandedPrice>
                      <Amount>16.71</Amount>
                    </LandedPrice>
                  </Price>
                </LowestOfferListing>
                <LowestOfferListing>
                  <Qualifiers>
                    <ItemCondition>Used</ItemCondition>
                    <ItemSubcondition>Good</ItemSubcondition>
                  </Qualifiers>
                  <Price>
                    <LandedPrice>
                      <Amount>18.71</Amount>
                    </LandedPrice>
                  </Price>
                </LowestOfferListing>
              </LowestOfferListings>
            </Product>
          </GetLowestOfferListingsForASINResult>
        </GetLowestOfferListingsForASINResponse>

    Example XSLT:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
                        xmlns:amz="http://mws.amazonservices.com/schema/Products/2011-10-01"
                        exclude-result-prefixes="amz">
          <xsl:output method="xml" version="1.0" encoding="utf-8" indent="yes"/>
          <xsl:template match="/">
            <xsl:variable name="MIN_Landed">
              <xsl:for-each select="//Amount">
                <xsl:sort data-type="number" order="ascending"/>
                <xsl:if test="position()=1"><xsl:value-of select="."/></xsl:if>
              </xsl:for-each>
            </xsl:variable>
            <FMPXMLRESULT xmlns="http://www.filemaker.com/fmpxmlresult">
              <ERRORCODE>0</ERRORCODE>
              <PRODUCT BUILD="" NAME="" VERSION=""/>
              <DATABASE DATEFORMAT="M/d/yyyy" LAYOUT="" NAME="" RECORDS="1" TIMEFORMAT="h:mm:ss a"/>
              <METADATA>
                <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="DATA" TYPE="TEXT"/>
                <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Min_Landed" TYPE="TEXT"/>
              </METADATA>
              <RESULTSET>
                <xsl:attribute name="FOUND">1</xsl:attribute>
                <xsl:for-each select="amz:GetLowestOfferListingsForASINResponse/amz:GetLowestOfferListingsForASINResult/amz:Product/amz:LowestOfferListings/amz:LowestOfferListing">
                  <ROW>
                    <xsl:attribute name="MODID">0</xsl:attribute>
                    <xsl:attribute name="RECORDID">1</xsl:attribute>
                    <COL>
                      <DATA>
                        <xsl:value-of select="amz:Qualifiers/amz:ItemCondition"/>
                      </DATA>
                    </COL>
                    <COL>
                      <DATA>
                        <xsl:value-of select="$MIN_Landed"/>
                      </DATA>
                    </COL>
                  </ROW>
                </xsl:for-each>
              </RESULTSET>
            </FMPXMLRESULT>
          </xsl:template>
        </xsl:stylesheet>

    HELP PLEASE! I really didn't want to post so much Amazon code, but here it is, stripped down to a bare-bones response.
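    A sketch of the most likely explanation: the response document puts every element in the mws namespace (bound to amz: in the stylesheet), so the unprefixed select="//Amount" selects nothing and the variable comes out empty. Prefixing the step, as the working RESULTSET loop already does, should return the true minimum, first position included:

        <xsl:variable name="MIN_Landed">
          <xsl:for-each select="//amz:Amount">
            <xsl:sort select="." data-type="number" order="ascending"/>
            <xsl:if test="position()=1"><xsl:value-of select="."/></xsl:if>
          </xsl:for-each>
        </xsl:variable>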


  • Problems configuring nameserver in plesk

    - by Saif Bechan
    Hello, I have had trouble setting up a nameserver in Plesk for months now. I have tried all possible scenarios but cannot get this to work, and I am really in need of some help; if you can help, I will really appreciate it. Basically, all I want is to set up a nameserver in Plesk. I have a primary IP, and my host (Leaseweb in the Netherlands) gave me a secondary nameserver I can use. I have made screenshots of the parts I consider important; maybe you can spot an error in them. To use the secondary nameserver provided by Leaseweb I had to enable ACL on that account; I did so and made a screenshot of that too. DNS recursion is set to localnets. These settings have not changed for months, so the DNS should be fully propagated everywhere.

    The check I run is the following: https://www.sidn.nl/over-nl/aanvraag...-server-check/

        Domeinnaam (inclusief .nl): rdshosting.nl
        Eerste Nameserver: ns1.rdshosting.nl
        Eerste IP: 62.212.66.33
        Tweede Nameserver: ns7.leaseweb.net
        Tweede ip: 62.212.76.50

    If I run the Dutch DNS check, it gives me the following errors:

        primary name server "ns1.rdshosting.nl."
        Error: specified name server is not listed as NS record. All public name servers for a domain must also be listed as NS records in the zone of the domain. This domain was specified explicitly as a name server, but not found in the zone description of the primary name server. TE.6a

        rdshosting.nl. 86400 IN SOA ns1.rdspartners.nl. saif2k.hotmail.com. (2010031102 12H 1H 7D 3H)
        Error: the MNAME in SOA says "ns1.rdspartners.nl." is the primary name server. The MNAME field in the SOA record (first parameter) lists a different primary name server from the one specified for this check. RFC1035 section 3.3.13

        rdshosting.nl. 86400 IN NS ns1.rdspartners.nl.
        Warning: hidden name server "ns1.rdspartners.nl." never used for first contact. The zone contains an NS record for a host which is not in the list of specified name servers. Hence, this name server will not be used to initiate contact to the domain. It may be used in sequential lookups, so it may still be useful.

        secondary name server "ns1.rdspartners.nl." [BROKEN] [HIDDEN]
        Failure: name server at 77.232.85.129 cannot be reached: (unknown error). The name server could not be contacted, which may be due to temporary technical problems or global DNS configuration mistakes. The internal error is shown, but not always clear about the cause.

        secondary name server "ns7.leaseweb.net."
        Info: name server looks correctly configured.

    I also have the content of /etc/named.conf:

        // $Id: named.conf,v 1.1.1.1 2001/10/15 07:44:36 kap Exp $
        //
        // Refer to the named(8) man page for details. If you are ever going
        // to setup a primary server, make sure you've understood the hairy
        // details of how DNS is working. Even with simple mistakes, you can
        // break connectivity for affected parties, or cause huge amount of
        // useless Internet traffic.

        options {
            allow-recursion { localnets; };
            directory "/var";
            auth-nxdomain no;
            pid-file "/var/run/named/named.pid";

            // In addition to the "forwarders" clause, you can force your name
            // server to never initiate queries of its own, but always ask its
            // forwarders only, by enabling the following line:
            //
            // forward only;

            // If you've got a DNS server around at your upstream provider, enter
            // its IP address here, and enable the line below. This will make you
            // benefit from its cache, thus reduce overall DNS traffic in the Internet.
            /*
            forwarders {
                127.0.0.1;
            };
            */

            /*
             * If there is a firewall between you and nameservers you want
             * to talk to, you might need to uncomment the query-source
             * directive below. Previous versions of BIND always asked
             * questions using port 53, but BIND 8.1 uses an unprivileged
             * port by default.
             */
            // query-source address * port 53;

            /*
             * If running in a sandbox, you may have to specify a different
             * location for the dumpfile.
             */
            // dump-file "s/named_dump.db";
        };

        //Use with the following in named.conf, adjusting the allow list as needed:
        key "rndc-key" {
            algorithm hmac-md5;
            secret "CeMgS23y0oWE20nyv0x40Q==";
        };

        controls {
            inet 127.0.0.1 port 953
            allow { 127.0.0.1; } keys { "rndc-key"; };
        };

        // Note: the following will be supported in a future release.
        /*
        host { any; } {
            topology {
                127.0.0.0/8;
            };
        };
        */

        // Setting up secondaries is way easier and the rough picture for this
        // is explained below.
        //
        // If you enable a local name server, don't forget to enter 127.0.0.1
        // into your /etc/resolv.conf so this server will be queried first.
        // Also, make sure to enable it in /etc/rc.conf.

        zone "." {
            type hint;
            file "named.root";
        };

        zone "0.0.127.IN-ADDR.ARPA" {
            type master;
            file "localhost.rev";
        };

        // NB: Do not use the IP addresses below, they are faked, and only
        // serve demonstration/documentation purposes!
        //
        // Example secondary config entries. It can be convenient to become
        // a secondary at least for the zone where your own domain is in. Ask
        // your network administrator for the IP address of the responsible
        // primary.
        //
        // Never forget to include the reverse lookup (IN-ADDR.ARPA) zone!
        // (This is the first bytes of the respective IP address, in reverse
        // order, with ".IN-ADDR.ARPA" appended.)
        //
        // Before starting to setup a primary zone, better make sure you fully
        // understand how DNS and BIND works, however. There are sometimes
        // unobvious pitfalls. Setting up a secondary is comparably simpler.
        //
        // NB: Don't blindly enable the examples below. :-) Use actual names
        // and addresses instead.
        //
        // NOTE!!! FreeBSD runs bind in a sandbox (see named_flags in rc.conf).
        // The directory containing the secondary zones must be write accessible
        // to bind. The following sequence is suggested:
        //
        // mkdir /etc/namedb/s
        // chown bind.bind /etc/namedb/s
        // chmod 750 /etc/namedb/s

        zone "rdshosting.nl" {
            type master;
            file "rdshosting.nl";
            allow-transfer { 77.232.85.129; 62.212.76.50; common-allow-transfer; };
        };

        zone "66.212.62.in-addr.arpa" {
            type master;
            file "66.212.62.in-addr.arpa";
            allow-transfer { common-allow-transfer; };
        };

        acl common-allow-transfer {
            62.212.76.50;
        };

    As I mentioned, I made screenshots of some parts:

    First, the DNS settings in Plesk: http://www.freeimagehosting.net/uploads/2480faed5e.jpg
    Second, the ACL settings in Plesk: http://www.freeimagehosting.net/uploads/777f5e69b0.jpg
    Third, my settings at Leaseweb: http://www.freeimagehosting.net/uploads/de7122b19c.jpg
    And last, the secondary nameserver settings from Leaseweb: http://www.freeimagehosting.net/uploads/fd1da38a8f.jpg

    If someone has any suggestion at all on this, it will be highly appreciated. Thank you for your time!

    PS. I am Dutch, so Dutch answers are welcome as well.
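    Reading the check output, all three complaints trace back to the zone still naming ns1.rdspartners.nl in its SOA MNAME and NS set. A sketch of the apex records the rdshosting.nl zone file would need for this check to pass (the hostmaster address and bumped serial below are placeholders, not values taken from the actual zone):

        rdshosting.nl.  86400  IN  SOA  ns1.rdshosting.nl. hostmaster.rdshosting.nl. (
                                2010031103 ; serial, bumped past 2010031102
                                12H 1H 7D 3H )
        rdshosting.nl.  86400  IN  NS   ns1.rdshosting.nl.
        rdshosting.nl.  86400  IN  NS   ns7.leaseweb.net.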


  • Set up tunnel to HE.net and now only ipv6.google.com works, but other sites ping fine.

    - by AndrejaKo
    I'm setting up IPv6 using my router which is running OpenWRT, version Backfire 10.03.1-rc4. I made a tunnel using Hurricane Electric's tunnel broker and set it up on the router and I'm using RADVD to hand out IPv6 addresses. My problem is that on computers on the network, I can only access ipv6.google.com using a browser, but other sites seem to be loading forever and won't open in any browser. I can ping and traceroute to them fine, but can't open them with a browser. I can open any site normally with a browser from the router. Stopping firewall service on the router doesn't help, so it's probably not a firewall issue. All AAAA records resolve fine, so it's probably not a DNS issue. Computers on the network get their IPv6 addresses fine, so it's probably not a radvd issue. Similar setup worked fine for SixXs, but I'm having problems with my PoP there, so I decided to move to HE. Here are some traceroutes: From a client computer: Tracing route to ipv6.he.net [2001:470:0:64::2] over a maximum of 30 hops: 1 <1 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 62 ms 63 ms 62 ms andrejako-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 60 ms 60 ms 63 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 63 ms 68 ms 68 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 84 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 146 ms 147 ms 151 ms 10gigabitethernet4-4.core1.nyc4.he.net [2001:470:0:128::1] 7 200 ms 198 ms 202 ms 10gigabitethernet5-3.core1.lax1.he.net [2001:470:0:10e::1] 8 219 ms * 210 ms 10gigabitethernet2-2.core1.fmt2.he.net [2001:470:0:18d::1] 9 221 ms 338 ms 209 ms gige-g4-18.core1.fmt1.he.net [2001:470:0:2d::1] 10 206 ms 210 ms 207 ms ipv6.he.net [2001:470:0:64::2] Trace complete. and another from a cliet computer Tracing route to whatismyipv6.com [2001:4870:a24f:2::90] over a maximum of 30 hops: 1 7 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 69 ms 70 ms 63 ms AndrejaKo-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 57 ms 65 ms 58 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 73 ms 74 ms 75 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 71 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 141 ms 149 ms 148 ms 10gigabitethernet2-3.core1.nyc4.he.net [2001:470:0:3e::1] 7 141 ms 147 ms 143 ms 10gigabitethernet1-2.core1.nyc1.he.net [2001:470:0:37::2] 8 144 ms 145 ms 142 ms 2001:504:1::a500:4323:1 9 226 ms 225 ms 218 ms 2001:4870:a240::2 10 220 ms 224 ms 219 ms 2001:4870:a240::2 11 219 ms 218 ms 220 ms 2001:4870:a24f::2 12 221 ms 222 ms 220 ms www.whatismyipv6.com [2001:4870:a24f:2::90] Trace complete. 
Here's some firewall info on the router:

root@OpenWrt:/# iptables -L -n

Chain INPUT (policy ACCEPT)
target      prot opt source     destination
ACCEPT      all  --  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
ACCEPT      all  --  0.0.0.0/0  0.0.0.0/0
syn_flood   tcp  --  0.0.0.0/0  0.0.0.0/0  tcp flags:0x17/0x02
input_rule  all  --  0.0.0.0/0  0.0.0.0/0
input       all  --  0.0.0.0/0  0.0.0.0/0

Chain FORWARD (policy DROP)
target           prot opt source     destination
zone_wan_MSSFIX  all  --  0.0.0.0/0  0.0.0.0/0
ACCEPT           all  --  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
forwarding_rule  all  --  0.0.0.0/0  0.0.0.0/0
forward          all  --  0.0.0.0/0  0.0.0.0/0
reject           all  --  0.0.0.0/0  0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target       prot opt source     destination
ACCEPT       all  --  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
ACCEPT       all  --  0.0.0.0/0  0.0.0.0/0
output_rule  all  --  0.0.0.0/0  0.0.0.0/0
output       all  --  0.0.0.0/0  0.0.0.0/0

Chain forward (1 references)
target            prot opt source     destination
zone_lan_forward  all  --  0.0.0.0/0  0.0.0.0/0
zone_wan_forward  all  --  0.0.0.0/0  0.0.0.0/0
zone_wan_forward  all  --  0.0.0.0/0  0.0.0.0/0

Chain forwarding_lan (1 references)
target  prot opt source  destination

Chain forwarding_rule (1 references)
target              prot opt source     destination
nat_reflection_fwd  all  --  0.0.0.0/0  0.0.0.0/0

Chain forwarding_wan (1 references)
target  prot opt source  destination

Chain input (1 references)
target    prot opt source     destination
zone_lan  all  --  0.0.0.0/0  0.0.0.0/0
zone_wan  all  --  0.0.0.0/0  0.0.0.0/0
zone_wan  all  --  0.0.0.0/0  0.0.0.0/0

Chain input_lan (1 references)
target  prot opt source  destination

Chain input_rule (1 references)
target  prot opt source  destination

Chain input_wan (1 references)
target  prot opt source  destination

Chain nat_reflection_fwd (1 references)
target  prot opt source          destination
ACCEPT  tcp  --  192.168.1.0/24  192.168.1.2  tcp dpt:80

Chain output (1 references)
target           prot opt source     destination
zone_lan_ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
zone_wan_ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0

Chain output_rule (1 references)
target  prot opt source  destination

Chain reject (7 references)
target  prot opt source     destination
REJECT  tcp  --  0.0.0.0/0  0.0.0.0/0  reject-with tcp-reset
REJECT  all  --  0.0.0.0/0  0.0.0.0/0  reject-with icmp-port-unreachable

Chain syn_flood (1 references)
target  prot opt source     destination
RETURN  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp flags:0x17/0x02 limit: avg 25/sec burst 50
DROP    all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_lan (1 references)
target           prot opt source     destination
input_lan        all  --  0.0.0.0/0  0.0.0.0/0
zone_lan_ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_lan_ACCEPT (2 references)
target  prot opt source     destination
ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_lan_DROP (0 references)
target  prot opt source     destination
DROP    all  --  0.0.0.0/0  0.0.0.0/0
DROP    all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_lan_MSSFIX (0 references)
target  prot opt source     destination
TCPMSS  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp flags:0x06/0x02 TCPMSS clamp to PMTU

Chain zone_lan_REJECT (1 references)
target  prot opt source     destination
reject  all  --  0.0.0.0/0  0.0.0.0/0
reject  all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_lan_forward (1 references)
target           prot opt source     destination
zone_wan_ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
forwarding_lan   all  --  0.0.0.0/0  0.0.0.0/0
zone_lan_REJECT  all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_wan (2 references)
target           prot opt source     destination
ACCEPT           udp  --  0.0.0.0/0  0.0.0.0/0  udp dpt:68
ACCEPT           icmp --  0.0.0.0/0  0.0.0.0/0  icmp type 8
ACCEPT           41   --  0.0.0.0/0  0.0.0.0/0
input_wan        all  --  0.0.0.0/0  0.0.0.0/0
zone_wan_REJECT  all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_wan_ACCEPT (2 references)
target  prot opt source     destination
ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_wan_DROP (0 references)
target  prot opt source     destination
DROP    all  --  0.0.0.0/0  0.0.0.0/0
DROP    all  --  0.0.0.0/0  0.0.0.0/0
DROP    all  --  0.0.0.0/0  0.0.0.0/0
DROP    all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_wan_MSSFIX (1 references)
target  prot opt source     destination
TCPMSS  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp flags:0x06/0x02 TCPMSS clamp to PMTU
TCPMSS  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp flags:0x06/0x02 TCPMSS clamp to PMTU

Chain zone_wan_REJECT (2 references)
target  prot opt source     destination
reject  all  --  0.0.0.0/0  0.0.0.0/0
reject  all  --  0.0.0.0/0  0.0.0.0/0
reject  all  --  0.0.0.0/0  0.0.0.0/0
reject  all  --  0.0.0.0/0  0.0.0.0/0

Chain zone_wan_forward (2 references)
target           prot opt source     destination
ACCEPT           tcp  --  0.0.0.0/0  192.168.1.2
forwarding_wan   all  --  0.0.0.0/0  0.0.0.0/0
zone_wan_REJECT  all  --  0.0.0.0/0  0.0.0.0/0

Here's some routing info:

root@OpenWrt:/# ip -f inet6 route
2001:470:1f0a:de5::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0
2001:470:1f0b:de5::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev eth0.1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev eth0.2 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
fe80::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0
default dev 6in4-henet metric 1024 mtu 1280 advmss 1220 hoplimit 0

I have computers running Windows 7 SP1 and openSUSE 11.3, and all of them have the same problem. I also made a thread about this on HE's forum, but it seems that people there are out of ideas about what to do.
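One avenue worth checking, given that ping and traceroute (small packets) work while full page loads stall: the routing table above shows the tunnel MTU is 1280, and the MSS-clamping rules in the dump are IPv4 iptables rules only. Below is a hedged sketch of the equivalent clamp for the IPv6 path; the syntax is the standard netfilter TCPMSS target, but whether this particular OpenWRT image ships ip6tables with that target built in is an assumption to verify:

# clamp the TCP MSS of forwarded IPv6 connections to the 6in4 tunnel's path MTU
ip6tables -t mangle -A FORWARD -o 6in4-henet -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu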

    Read the article

  • SQL IO and SAN troubles

    - by James
We are running two servers with identical software setups but different hardware. The first one is a VM on VMware on a normal tower server with dual-core Xeons, 16 GB RAM and a 7200 RPM drive. The second one is a VM on XenServer on a powerful brand-new rack server, with 4-core Xeons and shared storage. We are running Dynamics AX 2012 and SQL Server 2008 R2. When I insert 15,000 records into a table on the slow tower server (as a test), it does so in 13 seconds. On the fast server it takes 33 seconds. I re-ran these tests several times with the same results. I have a feeling it is some sort of IO bottleneck, so I ran SQLIO on both. Here are the results for the slow tower server:

C:\Program Files (x86)\SQLIO>test.bat

C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 14318180 counts per second
8 threads writing for 120 secs to file C:\TestFile.dat
    using 8KB random IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
using current size: 5120 MB for file: C:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:  IOs/sec: 226.97   MBs/sec: 1.77
latency metrics:     Min_Latency(ms): 0   Avg_Latency(ms): 281   Max_Latency(ms): 467
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 99

C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 14318180 counts per second
8 threads reading for 120 secs from file C:\TestFile.dat
    using 8KB random IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
using current size: 5120 MB for file: C:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:  IOs/sec: 91.34   MBs/sec: 0.71
latency metrics:     Min_Latency(ms): 14   Avg_Latency(ms): 699   Max_Latency(ms): 1124
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 14318180 counts per second
8 threads writing for 120 secs to file C:\TestFile.dat
    using 64KB sequential IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
using current size: 5120 MB for file: C:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:  IOs/sec: 1094.50   MBs/sec: 68.40
latency metrics:     Min_Latency(ms): 0   Avg_Latency(ms): 58   Max_Latency(ms): 467
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 14318180 counts per second
8 threads reading for 120 secs from file C:\TestFile.dat
    using 64KB sequential IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
using current size: 5120 MB for file: C:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:  IOs/sec: 1155.31   MBs/sec: 72.20
latency metrics:     Min_Latency(ms): 17   Avg_Latency(ms): 55   Max_Latency(ms): 205
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

Here are the results of the fast rack server (the first four runs against E:\ failed because the path could not be opened, so they were repeated against c:\):

C:\Program Files (x86)\SQLIO>test.bat

C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS E:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 62500000 counts per second
8 threads writing for 120 secs to file E:\TestFile.dat
    using 8KB random IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
open_file: CreateFile (E:\TestFile.dat for write): The system cannot find the path specified.
exiting

C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS E:\TestFile.dat
open_file: CreateFile (E:\TestFile.dat for read): The system cannot find the path specified.
exiting

C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS E:\TestFile.dat
open_file: CreateFile (E:\TestFile.dat for write): The system cannot find the path specified.
exiting

C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS E:\TestFile.dat
open_file: CreateFile (E:\TestFile.dat for read): The system cannot find the path specified.
exiting

C:\Program Files (x86)\SQLIO>test.bat

C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 62500000 counts per second
8 threads writing for 120 secs to file c:\TestFile.dat
    using 8KB random IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
using current size: 5120 MB for file: c:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:  IOs/sec: 2575.77   MBs/sec: 20.12
latency metrics:     Min_Latency(ms): 1   Avg_Latency(ms): 24   Max_Latency(ms): 655
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 0 0 5 8 9 9 9 8 5  3  1  1  1  1  0  0  0  0  0  0  0  0  0 37

C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 62500000 counts per second
8 threads reading for 120 secs from file c:\TestFile.dat
    using 8KB random IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
using current size: 5120 MB for file: c:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:  IOs/sec: 1141.39   MBs/sec: 8.91
latency metrics:     Min_Latency(ms): 1   Avg_Latency(ms): 55   Max_Latency(ms): 652
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  1  1  1  1  1  1  1  1 91

C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 62500000 counts per second
8 threads writing for 120 secs to file c:\TestFile.dat
    using 64KB sequential IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
using current size: 5120 MB for file: c:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:  IOs/sec: 341.37   MBs/sec: 21.33
latency metrics:     Min_Latency(ms): 5   Avg_Latency(ms): 186   Max_Latency(ms): 120037
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat
sqlio v1.5.SG
using system counter for latency timings, 62500000 counts per second
8 threads reading for 120 secs from file c:\TestFile.dat
    using 64KB sequential IOs
    enabling multiple I/Os per thread with 8 outstanding
    buffering set to use hardware disk cache (but not file cache)
using current size: 5120 MB for file: c:\TestFile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:  IOs/sec: 1024.07   MBs/sec: 64.00
latency metrics:     Min_Latency(ms): 5   Avg_Latency(ms): 61   Max_Latency(ms): 81632
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

Three of the four tests are, to my mind, within reasonable parameters for the rack server. However, the 64 KB sequential write test is incredibly slow on the rack server (68 MB/sec on the slow tower vs. 21 MB/sec on the rack). The read speed for 64 KB also seems slow. Is this enough to say there is some sort of bottleneck with the shared storage? I need to know if I can take this evidence and say we need to launch an investigation into this. Any help is appreciated.
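For reference, the four runs above line up with a small driver script. The following is a hypothetical reconstruction of the test.bat used; the flags are exactly those echoed in the output, and the target path is whichever volume is under test (the E:\ runs presumably failed because the test file's path didn't exist on that volume):

REM test.bat - run the four SQLIO patterns against one test file
sqlio -kW -t8 -s120 -o8 -frandom     -b8  -BH -LS C:\TestFile.dat
sqlio -kR -t8 -s120 -o8 -frandom     -b8  -BH -LS C:\TestFile.dat
sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat
sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat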

    Read the article

  • Windows using the DNS suffix search list on all lookups, even valid FQDNs. How to stop this?

    - by RealityGone
When doing DNS lookups (specifically using nslookup; for some reason most things are not affected), Windows XP Pro SP3 is using the DNS suffix search list for every single one, even for fully qualified domain names. For example, I look up "www.microsoft.com" but Windows actually asks for "www.microsoft.com.eondream.com" (eondream.com is my primary domain). Now I can fix the issue by removing the primary DNS suffix, but it seems to me that the DNS suffix search list should only be used for short, unqualified names (where dots=0 or something). I'm sure I have a misconfiguration somewhere in Windows but I don't know where. I've changed every option I can think of or find. Below is the output of ipconfig /all and nslookup (with debug & d2 enabled). This is using a static IP & (internal) DNS server.

C:\>ipconfig /all

Windows IP Configuration
        Host Name . . . . . . . . . . . . : frayedlogic
        Primary Dns Suffix  . . . . . . . : eondream.com
        Node Type . . . . . . . . . . . . : Unknown
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : eondream.com

Ethernet adapter Wireless Network Connection:
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Dell Wireless 1390 WLAN Mini-Card
        Physical Address. . . . . . . . . : 00-1B-FC-29-EB-6B
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : 192.168.13.32
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.13.13
        DNS Servers . . . . . . . . . . . : 192.168.19.19

C:\>nslookup
Default Server:  shardik.eondream.com
Address:  192.168.19.19

> set debug
> set d2
> www.microsoft.com
Server:  shardik.eondream.com
Address:  192.168.19.19

------------
Got answer:
    HEADER:
        opcode = QUERY, id = 2, rcode = NOERROR
        header flags:  response, want recursion, recursion avail.
        questions = 1,  answers = 1,  authority records = 0,  additional = 0

    QUESTIONS:
        www.microsoft.com.eondream.com, type = A, class = IN
    ANSWERS:
    ->  www.microsoft.com.eondream.com
        internet address = 208.69.36.132
        ttl = 0 (0 secs)
------------
Non-authoritative answer:
Name:    www.microsoft.com.eondream.com
Address:  208.69.36.132

(Note: it resolves to that IP because I use the OpenDNS service and that is their suggestion page, or whatever you want to call it.) If I am reading the nslookup output correctly then it is not a problem with my DNS server, because Windows is actually asking for the incorrect domain.
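A quick way to isolate whether the suffix list is responsible: append a trailing dot, which marks the name as rooted (fully qualified) so resolvers skip the search list entirely. For example:

C:\>nslookup www.microsoft.com.

If the trailing-dot query returns Microsoft's real records while the undotted query still resolves to the OpenDNS guide page, that confirms the suffix-appending behaviour is the culprit rather than the DNS server.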

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
This morning we released a huge set of updates to Windows Azure. These new capabilities include:

- Backup Services: General Availability of Windows Azure Backup Services
- Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
- Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
- Active Directory: Securely manage hundreds of SaaS applications
- Enterprise Management: Use Active Directory to Better Manage Windows Azure
- Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

All of these improvements are now available to use immediately. Below are more details about them.

Backup Service: General Availability Release of Windows Azure Backup

Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can't be decrypted by anyone but yourself).

Getting Started

To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

- Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
- Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks.

Below are some of the key benefits the Windows Azure Backup Service provides:

- Simple configuration and management. Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.
- Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
- Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information.
- Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically.
- Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

Hyper-V Recovery Manager: Now Available in Public Preview

I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide.

Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include:

Ability to Delete both VM Instances + Attached Disks in One Operation

Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM. We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button.

Warnings on Availability Sets with Only One Virtual Machine In Them

One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g. webfrontends, databaseservers, etc.) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way. One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

Configuring SQL Server Always On

SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server Always On by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the "Direct Server Return" checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.

Active Directory: Application Access Enhancements

This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications. Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.
In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:

- SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
- Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
- User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
- Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
- Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

Enterprise Management: Use Active Directory to Better Manage Windows Azure

Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

Below are some details on how all of this works:

Subscriptions within a Directory

As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you login to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screen-shot below you can see that I have a "scottgu" directory that my subscriptions are hosted within. Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign into Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before.

Manage your Directory

You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it. You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
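To make #3 and #4 above concrete, here is a minimal sketch of the new sign-in flow using PowerShell; the cmdlet names are from the Windows Azure PowerShell release that accompanies SDK 2.2, and the subscription name is a placeholder:

# Sign in interactively with a Microsoft Account or an organizational
# (Windows Azure AD) account - no management certificate download needed
Add-AzureAccount

# "MySubscription" is a placeholder - substitute one of your own subscriptions
Select-AzureSubscription -SubscriptionName "MySubscription"

# Subsequent service management calls now authenticate through Azure AD
Get-AzureVM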
Sign-In Integration within Visual Studio

If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory!

Manage Subscriptions across Multiple Directories

If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

Filtering By Directory and Subscription

Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

Windows Azure SDK 2.2

Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:

- Visual Studio 2013 Support
- Integrated Windows Azure Sign-In support within Visual Studio
- Remote Debugging Cloud Services with Visual Studio
- Firewall Management support within Visual Studio for SQL Databases
- Visual Studio 2013 RTM VM Images for MSDN Subscribers
- Windows Azure Management Libraries for .NET
- Updated Windows Azure PowerShell Cmdlets and ScriptCenter

I'll post a follow-up blog shortly with more details about all of the above.
Additional Updates

In addition to the above enhancements, today's release also includes a number of additional improvements:

- AutoScale: Richer time and date based scheduling support (set different rules on different dates)
- AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
- AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
- Operation Logs: Auditing support for Service Bus management operations

Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

Summary

Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

Hard Link Issues in a Component Library

Another reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

Enter Dynamic Loading

So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

- No assembly reference means only dynamic access - no compiler type checking or Intellisense
- Requirement for the host application to have a reference to JSON.NET or else get runtime errors

The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:

- Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
- Load types into dynamic variables
- Use Reflection for a few tasks like statics/enums

The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation, or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

Dynamic Object Instantiation

The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

/// <summary>
/// Creates an instance of a type based on a string. Assumes that the type's
/// </summary>
/// <param name="typeName">Common name of the type</param>
/// <param name="args">Any constructor parameters</param>
/// <returns></returns>
public static object CreateInstanceFromString(string typeName, params object[] args)
{
    object instance = null;
    Type type = null;

    try
    {
        type = GetTypeFromName(typeName);
        if (type == null)
            return null;

        instance = Activator.CreateInstance(type, args);
    }
    catch
    {
        return null;
    }

    return instance;
}

/// <summary>
/// Helper routine that looks up a type name and tries to retrieve the
/// full type reference in the actively executing assemblies.
/// </summary>
/// <param name="typeName"></param>
/// <returns></returns>
public static Type GetTypeFromName(string typeName)
{
    Type type = null;

    // Let default name binding find it
    type = Type.GetType(typeName, false);
    if (type != null)
        return type;

    // look through assembly list
    var assemblies = AppDomain.CurrentDomain.GetAssemblies();

    // try to find manually
    foreach (Assembly asm in assemblies)
    {
        type = asm.GetType(typeName, false);
        if (type != null)
            break;
    }
    return type;
}

To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
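Before moving on, here's a minimal usage sketch of the two helpers above; the type names are the JSON.NET types used later in this post, and Newtonsoft.Json must be resolvable in the AppDomain for a non-null result:

// Minimal usage sketch of the helpers above - returns null rather than
// throwing when the assembly or type can't be found
dynamic serializer = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
if (serializer == null)
    Console.WriteLine("JSON.NET is not available in this AppDomain");

// Type lookup without activation works the same way
Type converterType = ReflectionUtils.GetTypeFromName("Newtonsoft.Json.Converters.StringEnumConverter");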
Here's what the factory function looks like in JsonSerializationUtils:

/// <summary>
/// Dynamically creates an instance of JSON.NET
/// </summary>
/// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
/// <returns>Dynamic JsonSerializer instance</returns>
public static dynamic CreateJsonNet(bool throwExceptions = true)
{
    if (JsonNet != null)
        return JsonNet;

    lock (SyncLock)
    {
        if (JsonNet != null)
            return JsonNet;

        // Try to create instance
        dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

        if (json == null)
        {
            try
            {
                var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
            }
            catch (Exception ex)
            {
                if (throwExceptions)
                    throw;
                return null;
            }
        }

        if (json == null)
            return null;

        json.ReferenceLoopHandling =
            (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

        // Enums as strings in JSON
        dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
        json.Converters.Add(enumConverter);

        JsonNet = json;
    }

    return JsonNet;
}

This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.
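GetStaticProperty() itself isn't shown in this post, so the following is a hedged sketch of what such a helper can look like, built on the Type.InvokeMember() approach just described; the actual ReflectionUtils implementation may differ in its details:

/// <summary>
/// Sketch of a GetStaticProperty() style helper. Reads a static property
/// or field (enum members are fields) off a type identified by name.
/// </summary>
public static object GetStaticProperty(string typeName, string memberName)
{
    Type type = GetTypeFromName(typeName);
    if (type == null)
        return null;

    // GetField handles enum members, GetProperty handles static properties
    return type.InvokeMember(memberName,
        BindingFlags.Static | BindingFlags.Public |
        BindingFlags.GetField | BindingFlags.GetProperty,
        null, null, null);
}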
Doing the actual JSON Conversion

Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the file serialization looks like:

/// <summary>
/// Serializes an object instance to a JSON file.
/// </summary>
/// <param name="value">the value to serialize</param>
/// <param name="fileName">Full path to the file to write out with JSON.</param>
/// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
/// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
/// <returns>true or false</returns>
public static bool SerializeToFile(object value, string fileName,
                                   bool throwExceptions = false, bool formatJsonOutput = false)
{
    dynamic writer = null;
    FileStream fs = null;
    try
    {
        Type type = value.GetType();

        var json = CreateJsonNet(throwExceptions);
        if (json == null)
            return false;

        fs = new FileStream(fileName, FileMode.Create);
        var sw = new StreamWriter(fs, Encoding.UTF8);

        writer = Activator.CreateInstance(JsonTextWriterType, sw);

        if (formatJsonOutput)
            writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

        writer.QuoteChar = '"';

        json.Serialize(writer, value);
    }
    catch (Exception ex)
    {
        Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
        if (throwExceptions)
            throw;
        return false;
    }
    finally
    {
        if (writer != null)
            writer.Close();
        if (fs != null)
            fs.Close();
    }

    return true;
}

You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable. For full circle operation here's the DeserializeFromFile() version:

/// <summary>
/// Deserializes an object from file and returns a reference.
/// </summary>
/// <param name="fileName">name of the file to serialize to</param>
/// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
/// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
/// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
{
    dynamic json = CreateJsonNet(throwExceptions);
    if (json == null)
        return null;

    object result = null;
    dynamic reader = null;
    FileStream fs = null;
    try
    {
        fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
        var sr = new StreamReader(fs, Encoding.UTF8);
        reader = Activator.CreateInstance(JsonTextReaderType, sr);
        result = json.Deserialize(reader, objectType);
        reader.Close();
    }
    catch (Exception ex)
    {
        Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
        if (throwExceptions)
            throw;
        return null;
    }
    finally
    {
        if (reader != null)
            reader.Close();
        if (fs != null)
            fs.Close();
    }

    return result;
}

This code is a little more compact since there are no prettifying options to set.
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

Using the JsonSerializationUtils Wrapper

The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component and there's very little code to make this work now. The whole provider looks like this:

/// <summary>
/// Reads and Writes configuration settings in .NET config files and
/// sections. Allows reading and writing to default or external files
/// and specification of the configuration section that settings are
/// applied to.
/// </summary>
public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
    where TAppConfiguration : AppConfiguration, new()
{
    /// <summary>
    /// Optional - the Configuration file where configuration settings are
    /// stored in. If not specified uses the default Configuration Manager
    /// and its default store.
    /// </summary>
    public string JsonConfigurationFile
    {
        get { return _JsonConfigurationFile; }
        set { _JsonConfigurationFile = value; }
    }
    private string _JsonConfigurationFile = string.Empty;

    public override bool Read(AppConfiguration config)
    {
        var newConfig = JsonSerializationUtils.DeserializeFromFile(
                            JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
        if (newConfig == null)
        {
            if (Write(config))
                return true;
            return false;
        }

        DecryptFields(newConfig);
        DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
        return true;
    }

    /// <summary>
    /// Return
    /// </summary>
    /// <typeparam name="TAppConfig"></typeparam>
    /// <returns></returns>
    public override TAppConfig Read<TAppConfig>()
    {
        var result = JsonSerializationUtils.DeserializeFromFile(
                         JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
        if (result != null)
            DecryptFields(result);
        return result;
    }

    /// <summary>
    /// Write configuration to XmlConfigurationFile location
    /// </summary>
    /// <param name="config"></param>
    /// <returns></returns>
    public override bool Write(AppConfiguration config)
    {
        EncryptFields(config);

        bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

        // Have to decrypt again to make sure the properties are readable afterwards
        DecryptFields(config);

        return result;
    }
}

This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.
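To illustrate, here's a short usage sketch of the static wrapper methods; the AppSettings type is purely illustrative, while the two method calls match the signatures shown above:

// AppSettings is an illustrative type - any POCO works
public class AppSettings
{
    public string ApplicationName { get; set; }
    public int MaxRetries { get; set; }
}

var settings = new AppSettings { ApplicationName = "Demo", MaxRetries = 3 };

// write pretty-formatted JSON to disk (throwExceptions: false, formatJsonOutput: true)
bool written = JsonSerializationUtils.SerializeToFile(settings, "settings.json", false, true);

// read it back - the object result must be cast to the concrete type
var loaded = JsonSerializationUtils.DeserializeFromFile("settings.json", typeof(AppSettings)) as AppSettings;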
Already, there are several other places in some other tools where I use JSON serialization where this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob' like document storage) this is also going to be handy.

Performance?

Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right... In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

Dynamic Loading - Worth it?

On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. But for some operations that are not pivotal to a component or application, and only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful...

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in .NET  C#

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
PARTNER FOCUS: Oracle Exadata Marketing Campaign

Steve McNickle, VP Europe, cVidya

Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telecom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade.

Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations?

"cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized for DRMap, our Data Retention solution."

What unique capabilities and customer benefits does Oracle Exastack add to your applications?

"Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads."

"The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention."

How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
    It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success."

    How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need?

    "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready or Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies, which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized', for partners whose applications run best on the Oracle stack and who have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme."

    How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?

    "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build in to our products to help them fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented in an on-premise environment.
    But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry."

    What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?

    "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions, of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single-vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data - a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be building on Oracle."

    What opportunities are immediately opened to new ISV partners joining the OPN?

    "As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation."

    Finally, are there any other messages that you would like to share with the Oracle ISV community?

    "The crucial message that I always like to reinforce is architecture, architecture and architecture!
    The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: 'Will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud?' The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value, all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them."

    --

    Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners, as well as driving deeper adoption of Oracle's technology and application products within the IT channel.

    "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands."

    "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think, from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations - end users need to see the power and benefits for themselves."

    "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to v4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary one is to not force loading of JSON.NET unless you actually use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - The host application must have a reference to JSON.NET or else you get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically, it doesn't deal with object activation, truly dynamic (string based) member activation, or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed.

    Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

        /// <summary>
        /// Creates an instance of a type based on a string. Assumes that the type's
        /// </summary>
        /// <param name="typeName">Common name of the type</param>
        /// <param name="args">Any constructor parameters</param>
        /// <returns></returns>
        public static object CreateInstanceFromString(string typeName, params object[] args)
        {
            object instance = null;
            Type type = null;

            try
            {
                type = GetTypeFromName(typeName);
                if (type == null)
                    return null;

                instance = Activator.CreateInstance(type, args);
            }
            catch
            {
                return null;
            }

            return instance;
        }

        /// <summary>
        /// Helper routine that looks up a type name and tries to retrieve the
        /// full type reference in the actively executing assemblies.
        /// </summary>
        /// <param name="typeName"></param>
        /// <returns></returns>
        public static Type GetTypeFromName(string typeName)
        {
            Type type = null;

            // Let default name binding find it
            type = Type.GetType(typeName, false);
            if (type != null)
                return type;

            // look through assembly list
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();

            // try to find manually
            foreach (Assembly asm in assemblies)
            {
                type = asm.GetType(typeName, false);
                if (type != null)
                    break;
            }
            return type;
        }

    To use this for loading JSON.NET, I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally, it also caches the loaded instance for reuse (according to James, the JSON.NET instance is thread safe and quite a bit faster when cached).
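    As a quick aside, the helper above is useful on its own, too. Here's a minimal sketch of calling it directly - the target type here is just an illustrative framework type, not anything specific to the config library:

        // Hypothetical stand-alone use of CreateInstanceFromString.
        // System.Text.StringBuilder is only an example target type.
        object instance = ReflectionUtils.CreateInstanceFromString("System.Text.StringBuilder");
        if (instance != null)
        {
            dynamic sb = instance;            // late-bound access from here on
            sb.Append("dynamically created");
            Console.WriteLine(sb.ToString());
        }

    Because the result is typed as object/dynamic, there's no Intellisense or compile-time checking on the member calls - exactly the trade-off described above.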
    Here's what the factory function looks like in JsonSerializationUtils:

        /// <summary>
        /// Dynamically creates an instance of JSON.NET
        /// </summary>
        /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
        /// <returns>Dynamic JsonSerializer instance</returns>
        public static dynamic CreateJsonNet(bool throwExceptions = true)
        {
            if (JsonNet != null)
                return JsonNet;

            lock (SyncLock)
            {
                if (JsonNet != null)
                    return JsonNet;

                // Try to create instance
                dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

                if (json == null)
                {
                    try
                    {
                        var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                        json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                    }
                    catch (Exception ex)
                    {
                        if (throwExceptions)
                            throw;
                        return null;
                    }
                }

                if (json == null)
                    return null;

                json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

                // Enums as strings in JSON
                dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
                json.Converters.Add(enumConverter);

                JsonNet = json;
            }

            return JsonNet;
        }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and, when that fails, tries to load the assembly and then re-tries loading. Once the instance is loaded, some configuration occurs on it. Specifically, I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions.

    Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see, there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.
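    The GetStaticProperty() helper is referenced above but its body isn't shown. A minimal sketch of what such a routine might look like - my approximation, not necessarily the actual ReflectionUtils implementation - is:

        /// <summary>
        /// Sketch only: retrieves a static property or field (e.g. an enum value)
        /// from a type identified by its string name.
        /// </summary>
        public static object GetStaticProperty(string typeName, string memberName)
        {
            Type type = GetTypeFromName(typeName);
            if (type == null)
                return null;

            try
            {
                // GetField | GetProperty covers both enum values (which are
                // static fields) and true static properties.
                return type.InvokeMember(memberName,
                    BindingFlags.Static | BindingFlags.Public |
                    BindingFlags.GetField | BindingFlags.GetProperty,
                    null, null, null);
            }
            catch
            {
                return null;
            }
        }

    For "Newtonsoft.Json.ReferenceLoopHandling" and "Ignore", a helper like this returns the boxed enum value, which the dynamic property assignment then accepts.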
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
    Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this:

        /// <summary>
        /// Reads and Writes configuration settings in .NET config files and
        /// sections. Allows reading and writing to default or external files
        /// and specification of the configuration section that settings are
        /// applied to.
        /// </summary>
        public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
            where TAppConfiguration : AppConfiguration, new()
        {
            /// <summary>
            /// Optional - the Configuration file where configuration settings are
            /// stored in. If not specified uses the default Configuration Manager
            /// and its default store.
            /// </summary>
            public string JsonConfigurationFile
            {
                get { return _JsonConfigurationFile; }
                set { _JsonConfigurationFile = value; }
            }
            private string _JsonConfigurationFile = string.Empty;

            public override bool Read(AppConfiguration config)
            {
                var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
                if (newConfig == null)
                {
                    if (Write(config))
                        return true;
                    return false;
                }

                DecryptFields(newConfig);
                DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
                return true;
            }

            /// <summary>
            /// Return
            /// </summary>
            /// <typeparam name="TAppConfig"></typeparam>
            /// <returns></returns>
            public override TAppConfig Read<TAppConfig>()
            {
                var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
                if (result != null)
                    DecryptFields(result);
                return result;
            }

            /// <summary>
            /// Write configuration to XmlConfigurationFile location
            /// </summary>
            /// <param name="config"></param>
            /// <returns></returns>
            public override bool Write(AppConfiguration config)
            {
                EncryptFields(config);
                bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

                // Have to decrypt again to make sure the properties are readable afterwards
                DecryptFields(config);
                return result;
            }
        }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - simply implementing three methods will do in most cases. Note this code doesn't have any dynamic dependencies; all of that is abstracted away in JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
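    To round this out, here's a hedged sketch of what consuming the provider might look like. The MyAppConfig class and its properties are invented for illustration; only the provider and the base-class signatures come from the code above:

        // Hypothetical application configuration class.
        public class MyAppConfig : AppConfiguration
        {
            public string ApplicationName { get; set; }
            public int MaxPageItems { get; set; }
        }

        // Wire up the JSON provider and load (or create) the config file.
        var config = new MyAppConfig();
        var provider = new JsonFileConfigurationProvider<MyAppConfig>()
        {
            JsonConfigurationFile = "MyApp.config.json"
        };
        provider.Read(config);      // reads the file, or writes defaults if it's missing

        config.MaxPageItems = 20;
        provider.Write(config);     // persists changes back to JSON

    The "write defaults when the file is missing" behavior follows directly from the Read() implementation shown above.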
    For any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage), this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing, it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling', as I do here, the overhead is small enough not to matter.

    All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    Dynamic loading is not something you need very often, but on occasion it makes sense. There's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application, and are only used under certain circumstances, dynamic loading can be beneficial: it avoids having to ship extra files, adding dependencies, and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#

    Read the article

  • Can't join OS X Mavericks to AD Domain

    - by watkipet
    I'm attempting to join an OS X Mavericks (10.9) client to a Windows Server 2008 Active Directory domain, however the bind fails with this error in the OS X client's system.log:

        Oct 24 15:03:15 host.domain.com com.apple.preferences.users.remoteservice[5547]: -[ODCAddServerSheetController handleOtherActionError: gotError: Error Domain=com.apple.OpenDirectory Code=5202 "Authentication server encountered an error while attempting the requested operation." UserInfo=0x7f9e6cb3e180 {NSLocalizedDescription=Authentication server encountered an error while attempting the requested operation., NSLocalizedFailureReason=Authentication server encountered an error while attempting the requested operation.}, Authentication server encountered an error while attempting the requested operation.

    I've joined (bound) Ubuntu Linux clients to the same domain with net ads join in the past with no problems (using the same administrative user). I don't have access to any server logs. The GUI errors shown by Directory Utility and by Users & Groups in System Preferences on the OS X client report the same message (screenshots omitted).

    Update

    After some Wiresharking I've got some more info:

    1. OS X Client -> KDC (over UDP): AS_REQ (no padata)
    2. OS X Client <- KDC (over UDP): KRB5KDC_ERR_PREAUTH_REQUIRED
    3. OS X Client -> KDC (over UDP): AS_REQ (this time with PA-ENC-TIMESTAMP in padata)
    4. OS X Client <- KDC (over UDP): KRB5KDC_ERR_RESPONSE_TOO_BIG
    5. OS X Client -> KDC (over TCP): AS_REQ (also with PA-ENC-TIMESTAMP in padata)
    6. OS X Client <- KDC (over TCP): KDC_ERR_ETYPE_NOSUPP

    ...and that's it. This is what I think is going on:

    1. The OS X client sends a Kerberos request.
    2. The KDC says, "You need to pre-authenticate. Try again."
    3. The OS X client tries to pre-authenticate (all this so far is over UDP).
    4. Something gets lost on our network and the KDC says, "Oops, something went wrong."
    5. The OS X client switches to TCP and tries again.
    6. Over TCP, the KDC says, "You're using an encryption type I don't support."

    Note that in its padata records, the OS X client is always using "aes256-cts-hmac-sha1-96" as its encryption type. However, in its KDC_REQ_BODY record it lists the aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, and rc4-hmac encryption types. When the KDC comes back with KDC_ERR_ETYPE_NOSUPP, it uses rc4-hmac as its encryption type in its padata record. I know next to nothing about Kerberos, but it seems to me that the OS X client should go ahead and try the rc4-hmac encryption type. However, it does nothing after this. (See the krb5.conf sketch after the debug log below for one possible workaround.)

    Update 2

    Here's the debug log from Directory Services on the OS X client. Sorry - it's long.
2013-10-25 14:19:13.219128 PDT - 10544.20463 - ODNodeCustomCall request, NodeID: 52A65FAE-4B24-455D-86EC-2199A780D234, Code: 80 2013-10-25 14:19:13.220409 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - client requested OU - 'CN=Computers,DC=domain,DC=com' 2013-10-25 14:19:13.220427 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Binding using '[email protected]' for kerberos ID 2013-10-25 14:19:13.220571 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - new kerberos credential cache 'MEMORY:0x7fa713635470' for '[email protected]' 2013-10-25 14:19:13.220623 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 1 2013-10-25 14:19:13.220639 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send 0 patypes 2013-10-25 14:19:13.220653 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - fast disabled, not doing any fast wrapping 2013-10-25 14:19:13.220699 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 0 2013-10-25 14:19:13.221275 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.221326 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.221373 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222588 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222617 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222665 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 packets 1 wc: 0.001960 nr: 0.000000 kh: 0.000560 tid: 00000001 2013-10-25 14:19:13.222705 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 2 2013-10-25 14:19:13.222737 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: processing input 2013-10-25 14:19:13.222752 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: got an KRB-ERROR from KDC 2013-10-25 14:19:13.222775 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: KRB-ERROR -1765328359/Additional pre-authentication required 2013-10-25 14:19:13.222791 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send 4 patypes 2013-10-25 14:19:13.222800 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 19 2013-10-25 14:19:13.222808 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 2 2013-10-25 14:19:13.222816 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 16 2013-10-25 
14:19:13.222825 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 15 2013-10-25 14:19:13.222840 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using ENC-TS with enctype 18 2013-10-25 14:19:13.222850 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using default_s2k_func 2013-10-25 14:19:13.227443 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - fast disabled, not doing any fast wrapping 2013-10-25 14:19:13.227502 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 0 2013-10-25 14:19:13.228233 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.228320 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.228374 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229930 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229957 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229975 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto trying over again (reset): 0 2013-10-25 14:19:13.230023 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2 2013-10-25 14:19:13.230664 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.230726 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.230818 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.231101 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232743 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232777 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232798 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 2 packets 2 wc: 0.005316 nr: 0.000000 kh: 0.001339 tid: 00010002 2013-10-25 14:19:13.232856 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 3 2013-10-25 14:19:13.232868 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - 
krb5_credential - krb5_get_init_creds: processing input 2013-10-25 14:19:13.232900 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using keyproc 2013-10-25 14:19:13.232910 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using default_s2k_func 2013-10-25 14:19:13.236487 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: extracting ticket 2013-10-25 14:19:13.236557 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: wc: 0.015944 2013-10-25 14:19:13.237022 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2 2013-10-25 14:19:13.237444 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.237482 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.237551 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.237900 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.238616 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.238645 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.238674 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 packets 1 wc: 0.001656 nr: 0.000000 kh: 0.000409 tid: 00020001 2013-10-25 14:19:13.238839 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2 2013-10-25 14:19:13.239302 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.239360 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.239429 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.239683 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.240350 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.240387 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.240415 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 
packets 1 wc: 0.001578 nr: 0.000000 kh: 0.000445 tid: 00030001 2013-10-25 14:19:13.240514 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_credentials_with_flags: DOMAIN.COM wc: 0.003615 2013-10-25 14:19:13.240537 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - valid credentials for [email protected] 2013-10-25 14:19:13.240541 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470' 2013-10-25 14:19:13.240545 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470 2013-10-25 14:19:13.240555 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Bind Step 5 - Bind/Join computer to domain - 'domain.com' 2013-10-25 14:19:13.241345 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - resolving 'server.domain.com' 2013-10-25 14:19:13.241646 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - added socket 12 for host 'server.domain.com:389' address '192.168.0.2' to kqueue list 2013-10-25 14:19:13.241930 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Setting kerberos server for 'Kerberos:DOMAIN.COM' to 'server.domain.com' 2013-10-25 14:19:13.241962 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470' 2013-10-25 14:19:13.241969 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470 2013-10-25 14:19:13.242231 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Confidentiality 2013-10-25 14:19:13.242234 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - setting realm 'DOMAIN.COM' for node '/Active Directory/domain.com' 2013-10-25 14:19:13.242239 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Integrity (signing) 2013-10-25 14:19:13.242274 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using hostname 'server.domain.com' 2013-10-25 14:19:13.242282 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using initiator credential '[email protected]' 2013-10-25 14:19:13.250771 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Authenticate to LDAP using Kerberos credential - 0 2013-10-25 14:19:13.250784 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - verified connectivity to '192.168.0.2' with socket 12 2013-10-25 14:19:13.251513 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - locating site using domain domain.com using CLDAP 2013-10-25 14:19:13.252145 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - using site of 'DOMAINGROUP' from CLDAP 2013-10-25 14:19:13.253626 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - resolving 'server2.domain.com' 2013-10-25 14:19:13.253933 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - added socket 13 for host 'server2.domain.com:389' address '192.168.0.1' to kqueue list 2013-10-25 14:19:13.254428 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Setting kerberos server for 'Kerberos:DOMAIN.COM' to 'server2.domain.com' 2013-10-25 14:19:13.254462 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470' 2013-10-25 14:19:13.254468 PDT - 10544.20463, Node: /Active 
Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470 2013-10-25 14:19:13.254617 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - setting realm 'DOMAIN.COM' for node '/Active Directory/domain.com' 2013-10-25 14:19:13.254661 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Confidentiality 2013-10-25 14:19:13.254670 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Integrity (signing) 2013-10-25 14:19:13.254689 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using hostname 'server2.domain.com' 2013-10-25 14:19:13.254695 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using initiator credential '[email protected]' 2013-10-25 14:19:13.262092 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Authenticate to LDAP using Kerberos credential - 0 2013-10-25 14:19:13.262108 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - verified connectivity to '192.168.0.1' with socket 13 2013-10-25 14:19:13.262982 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Computer account either already exists or DC is already Read/Write 2013-10-25 14:19:13.264968 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Adding record 'cn=spike,CN=Computers,DC=domain,DC=com' in 'domain.com' The failure point seems to be Computer account either already exists or DC is already Read/Write, however, I can search for 'spike' on the Active Directory server using Active Directory Explorer and it's not there. If I do the same search for the Linux and Windows PCs I added previously, I can find them.
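    If the KDC_ERR_ETYPE_NOSUPP response in the trace above is what's stalling the bind, one avenue worth testing - an assumption based on the trace, not a confirmed fix - is to force Kerberos over TCP from the first request and explicitly offer rc4-hmac, via /etc/krb5.conf:

        [libdefaults]
            # Force TCP from the start, sidestepping KRB5KDC_ERR_RESPONSE_TOO_BIG
            udp_preference_limit = 1
            # Offer RC4 alongside AES, since the KDC answered with rc4-hmac
            default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 arcfour-hmac-md5
            default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 arcfour-hmac-md5

    These are MIT-style option names; whether the Heimdal-based Open Directory bind path on 10.9 reads /etc/krb5.conf and honors them is the untested assumption here.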

    Read the article

  • Inbound SIP calls through Cisco 881 NAT hang up after a few seconds

    - by MasterRoot24
    I've recently moved to a Cisco 881 router for my WAN link. I was previously using a Cisco Linksys WAG320N as my modem/router/WiFi AP/NAT firewall. The WAG320N is now running in bridged mode, so it's simply acting as a modem with one of its LAN ports connected to FE4 WAN on my Cisco 881. The Cisco 881 gets a DHCP-provided IP from my ISP. My LAN is part of default Vlan 1 (192.168.1.0/24).

    General internet connectivity is working great, and I've managed to set up static NAT rules for my HTTP/HTTPS/SMTP/etc. services which are running on my LAN. I don't know whether it's worth mentioning, but I've opted to use an NVI NAT (ip nat enable, as opposed to the traditional ip nat outside/ip nat inside) setup. My reason for this is that NVI allows NAT loopback from my LAN to the WAN IP and back in to the necessary server on the LAN.

    I run an Asterisk 1.8 PBX on my LAN, which connects to a SIP provider on the internet. Both inbound and outbound calls through the old setup (WAG320N providing routing/NAT) worked fine. However, since moving to the Cisco 881, inbound calls drop after around 10 seconds, whereas outbound calls work fine. The following message is logged on my Asterisk PBX:

        [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3641 retrans_pkt: Retransmission timeout reached on transmission [email protected] for seqno 1 (Critical Response) -- See https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions Packet timed out after 6528ms with no response
        [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3670 retrans_pkt: Hanging up call [email protected] - no reply to our critical packet (see https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions).

    (I know that this is quite a common issue - I've spent the best part of 2 days solid on this, trawling Google.) I've done as I am told and checked https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions. Referring to the section "Other SIP requests" in the page linked above, I believe the hangup is caused by the ACK from my SIP provider not being passed back through NAT to Asterisk on my PBX. I tried to ascertain this by dumping the packets on my WAN interface on the 881, and managed to obtain a PCAP dump of packets in/out of that interface. Here's an example of an ACK being received by the router from my provider:

        689 21.219999 193.x.x.x 188.x.x.x SIP 502 Request: ACK sip:[email protected] |

    However, a SIP trace on the Asterisk server shows that there are no ACKs received in response to the 200 OK from my PBX: http://pastebin.com/wwHpLPPz

    In the past, I have been strongly advised to disable any sort of SIP ALG on routers and/or firewalls, and the many posts regarding this issue on the internet seem to support this. On Cisco IOS, I believe the config command to disable the SIP ALG is no ip nat service sip udp port 5060; however, this doesn't appear to help the situation. To confirm that the config setting is set:

        Router1#show running-config | include sip
        no ip nat service sip udp port 5060

    Another interesting twist: for a short period of time, I tried another provider. Luckily, my trial account with them is still available, so I reverted my Asterisk config back to the revision before I integrated with my current provider. I then dialled in to the DDI associated with the trial trunk, the call didn't get hung up, and I didn't get the error above! To me, this points at the provider; however, I know that they, like all providers, will say: "There are no issues with our SIP proxies - it's your firewall."
I'm tempted to agree with this, as this issue was not apparent with the old WAG320N router when it was doing the NAT'ing. I'm sure you'll want to see my running-config too: ! ! Last configuration change at 15:55:07 UTC Sun Dec 9 2012 by xxx version 15.2 no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug datetime msec localtime show-timezone service timestamps log datetime msec localtime show-timezone no service password-encryption service sequence-numbers ! hostname Router1 ! boot-start-marker boot-end-marker ! ! security authentication failure rate 10 log security passwords min-length 6 logging buffered 4096 logging console critical enable secret 4 xxx ! aaa new-model ! ! aaa authentication login local_auth local ! ! ! ! ! aaa session-id common ! memory-size iomem 10 ! crypto pki trustpoint TP-self-signed-xxx enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-xxx revocation-check none rsakeypair TP-self-signed-xxx ! ! crypto pki certificate chain TP-self-signed-xxx certificate self-signed 01 quit no ip source-route no ip gratuitous-arps ip auth-proxy max-login-attempts 5 ip admission max-login-attempts 5 ! ! ! ! ! no ip bootp server ip domain name dmz.merlin.local ip domain list dmz.merlin.local ip domain list merlin.local ip name-server x.x.x.x ip inspect audit-trail ip inspect udp idle-time 1800 ip inspect dns-timeout 7 ip inspect tcp idle-time 14400 ip inspect name autosec_inspect ftp timeout 3600 ip inspect name autosec_inspect http timeout 3600 ip inspect name autosec_inspect rcmd timeout 3600 ip inspect name autosec_inspect realaudio timeout 3600 ip inspect name autosec_inspect smtp timeout 3600 ip inspect name autosec_inspect tftp timeout 30 ip inspect name autosec_inspect udp timeout 15 ip inspect name autosec_inspect tcp timeout 3600 ip cef login block-for 3 attempts 3 within 3 no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO881-SEC-K9 sn ! ! username xxx privilege 15 secret 4 xxx username xxx secret 4 xxx ! ! ! ! ! ip ssh time-out 60 ! ! ! ! ! ! ! ! ! interface FastEthernet0 no ip address ! interface FastEthernet1 no ip address ! interface FastEthernet2 no ip address ! interface FastEthernet3 switchport access vlan 2 no ip address ! interface FastEthernet4 ip address dhcp no ip redirects no ip unreachables no ip proxy-arp ip nat enable duplex auto speed auto ! interface Vlan1 ip address 192.168.1.1 255.255.255.0 no ip redirects no ip unreachables no ip proxy-arp ip nat enable ! interface Vlan2 ip address 192.168.0.2 255.255.255.0 ! ip forward-protocol nd ip http server ip http access-class 1 ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ! ! no ip nat service sip udp port 5060 ip nat source list 1 interface FastEthernet4 overload ip nat source static tcp x.x.x.x 80 interface FastEthernet4 80 ip nat source static tcp x.x.x.x 443 interface FastEthernet4 443 ip nat source static tcp x.x.x.x 25 interface FastEthernet4 25 ip nat source static tcp x.x.x.x 587 interface FastEthernet4 587 ip nat source static tcp x.x.x.x 143 interface FastEthernet4 143 ip nat source static tcp x.x.x.x 993 interface FastEthernet4 993 ip nat source static tcp x.x.x.x 1723 interface FastEthernet4 1723 ! ! logging trap debugging logging facility local2 access-list 1 permit 192.168.1.0 0.0.0.255 access-list 1 permit 192.168.0.0 0.0.0.255 no cdp run ! ! ! ! control-plane ! ! banner motd Authorized Access only ! 
line con 0 login authentication local_auth length 0 transport output all line aux 0 exec-timeout 15 0 login authentication local_auth transport output all line vty 0 1 access-class 1 in logging synchronous login authentication local_auth length 0 transport preferred none transport input telnet transport output all line vty 2 4 access-class 1 in login authentication local_auth length 0 transport input ssh transport output all ! ! end ...and, if it's of any use, here's my Asterisk SIP config: [general] context=default ; Default context for calls allowoverlap=no ; Disable overlap dialing support. (Default is yes) udpbindaddr=0.0.0.0 ; IP address to bind UDP listen socket to (0.0.0.0 binds to all) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) tcpenable=no ; Enable server for incoming TCP connections (default is no) tcpbindaddr=0.0.0.0 ; IP address for TCP server to bind to (0.0.0.0 binds to all interfaces) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) srvlookup=yes ; Enable DNS SRV lookups on outbound calls ; Note: Asterisk only uses the first host ; in SRV records ; Disabling DNS SRV lookups disables the ; ability to place SIP calls based on domain ; names to some other SIP users on the Internet ; Specifying a port in a SIP peer definition or ; when dialing outbound calls will supress SRV ; lookups for that peer or call. directmedia=no ; Don't allow direct RTP media between extensions (doesn't work through NAT) externhost=<MY DYNDNS HOSTNAME> ; Our external hostname to resolve to IP and be used in NAT'ed packets localnet=192.168.1.0/24 ; Define our local network so we know which packets need NAT'ing qualify=yes ; Qualify peers by default dtmfmode=rfc2833 ; Set the default DTMF mode disallow=all ; Disallow all codecs by default allow=ulaw ; Allow G.711 u-law allow=alaw ; Allow G.711 a-law ; ---------------------- ; SIP Trunk Registration ; ---------------------- ; Orbtalk register => <MY SIP PROVIDER USER NAME>:[email protected]/<MY DDI> ; Main Orbtalk number ; ---------- ; Trunks ; ---------- [orbtalk] ; Main Orbtalk trunk type=peer insecure=invite host=sipgw3.orbtalk.co.uk nat=yes username=<MY SIP PROVIDER USER NAME> defaultuser=<MY SIP PROVIDER USER NAME> fromuser=<MY SIP PROVIDER USER NAME> secret=xxx context=inbound I really don't know where to go with this. If anyone can help me find out why these calls are being dropped off, I'd be grateful if you could chime in! Please let me know if any further info is required.
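As a sketch of one common workaround (not from the thread): since the ACK reaches the WAN interface but never makes it back to Asterisk, the UDP 5060 translation on the 881 has likely expired or been rewritten by the time the provider's ACK arrives. Pinning SIP to the PBX with a static NVI NAT entry, mirroring the static TCP entries already in the running-config, keeps the mapping permanent. The PBX address 192.168.1.10 below is assumed, not taken from the post:

! Hypothetical sketch: permanently forward SIP (UDP 5060) to the Asterisk
! box, in the same NVI style as the existing static TCP entries.
! 192.168.1.10 is an assumed LAN address for the PBX.
ip nat source static udp 192.168.1.10 5060 interface FastEthernet4 5060

With a static mapping in place, an ACK sent by the provider to the WAN address on port 5060 is always translated back to the PBX, regardless of how long the SIP dialog has been idle.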

    Read the article

  • High Load mysql on Debian server stops every day. Why?

    - by Oleg Abrazhaev
I have a Debian server with 32 GB of memory running apache2, memcached and nginx. Memory usage is always at maximum, with only about 500 MB free, and MySQL is by far the biggest consumer. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server performs many inserts into these tables, which is why I use InnoDB. Here is my MySQL config: [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please help me make it stable. Memory used: /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem is not memory at all, but MySQL still stops every day. As you can see, about 24 GB is actually free as cache. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe an HDD issue, or another problem? Maybe my config is not optimal for 30 GB of InnoDB data? I have already tried mysqltuner and tuning-primer.sh, but they marked everything green. mysqltuner output: >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'
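A rebalancing sketch (not from the thread, and not a drop-in config): mysqltuner reports 10.5G of global buffers plus 81.1M per thread across 200 connections, so the per-thread and in-memory temp-table settings are what squeeze RAM, while the 5.9G InnoDB buffer pool is far too small for 39G of InnoDB data. Under those assumptions, the usual direction looks roughly like this; all numbers are illustrative and need testing against the real workload:

# Hypothetical adjustment sketch for my.cnf -- values are illustrative only.
[mysqld]
sort_buffer_size        = 4M     # was 64M; allocated per connection
join_buffer_size        = 1M     # was 5M; add indexes rather than growing this
tmp_table_size          = 256M   # was 3000M
max_heap_table_size     = 256M   # was 3000M
key_buffer_size         = 512M   # MyISAM index cache, sized to the hot set
innodb_buffer_pool_size = 16G    # was 5000M, against 39G of InnoDB data

Shrinking the per-thread buffers frees enough headroom to grow the InnoDB pool without pushing the worst-case allocation past the 32 GB of physical RAM.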

    Read the article

  • DNSSEC - Ad Flag not activated

    - by Arancha
    Hi all, I have some doubts regarding DNSSEC. I have one server acting as an Authoritative Name Server and another one as a Cache/Resolver. I'm using Bind 9.7.1-P2 and these are my configuration files: Named.conf (Authoritative Server) // Opciones de configuracion del servidor include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; notify yes; #files 65535; dnssec-enable yes; dnssec-validation yes; allow-transfer { 172.23.2.37; 172.23.3.39; }; transfer-format many-answers; transfers-per-ns 5; transfers-in 10; max-transfer-time-in 120; check-names master ignore; listen-on {172.23.2.57; 80.58.102.13; 80.58.102.103; 127.0.0.1; }; }; zone "test.dnssec" { type master; key-directory "keys"; file "db.test.dnssec.signed"; also-notify { 172.23.2.37 ; 172.23.3.39 ; }; allow-transfer { 172.23.2.37 ; 172.23.3.39 ; }; }; test.dnssec zone test.dnssec. 86400 IN SOA ns.test.dnssec. mxadmin.test.dnssec. ( 2010090902 ; serial 21600 ; refresh (6 hours) 3600 ; retry (1 hour) 1814400 ; expire (3 weeks) 172800 ; minimum (2 days) ) 86400 RRSIG SOA 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. eY99laB6PrtETaXLdCS+G8Uq1lIK7d5vxUB1 pAQ9npv/YbvX1pdWZKGojDgPGw8V65Q0zKQo YW1VuBzvwfSRKax+yrjJzvHQGfCZPJWARehK hgLxHOfXLVH7tyndvLD49ZKcWtrop+Tuy4n9 apWWfSJZxCOngwS7zUi0zCTKfPs= ) 86400 NS ns1.test.dnssec. 86400 RRSIG NS 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeo idNuytxbiFnbCOunzvaYpgvDpEr0CPrwXaDL TSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFw aaQXFc3rDLsXjCi+WF0/Z7meteM4jYdx5nrV Qx9pgur7VPbP88bJOqWCPBev2Ho= ) 172800 NSEC a.test.dnssec. NS SOA RRSIG NSEC DNSKEY 172800 RRSIG NSEC 5 2 172800 20101009062248 ( 20100909062248 40665 test.dnssec. E76ayamsAAz8Zcj7060KY0nTFzHPztM/Pkc5 OM0EcP7C5+ocn4L8M2J0rmR3jxfYvCpOk0BQ Zniqn9Aw41Qk068yJ2dfDPwV5zT0+te0nzwC /awJGPMXLzMj4JejYTlTiKfspGDJCG44F+lb lHXdcUhbjXf3loqMQadZFQ/eSn0= ) 86400 DNSKEY 256 3 5 ( AwEAAbQ8qrNN5vetx/7E1VOgXZ7fLqwG1y/i 55hWGCeLbcS95ratT9A6UospOvPSwPTlrFgF RWP67Pubzbsy7/damS1F1+p4GgBQway52Hd1 8HjdHKKC6kIxna9pOJBRfhCdzAsv9LnpRvrw mDpcFAqhdn5k5RqwcUF1eOZrKjxXjAOr ) ; key id = 40665 86400 DNSKEY 257 3 5 ( AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdM ZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVp xXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7m YF29/ZTXB6nmdSxruQlSvYhzkWTaPNtfrUnI UlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWX nPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPm p2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQX ISmAeV1evGomCC/x9DNleDHCszJOptwurzRP Z7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTz CkRnrlvXYJpgzDtgmQxE9Bs= ) ; key id = 59647 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. sa4W3tvl6n0TkIcq3xzhG17C2O0lRhllrpUd n5Hs6yVo8r7stewP6tm2XscQiAeseDgmv28w s6Mtiz8uPUbrgFRb6SJk7coH2n/2Y3//S9YP NldDFv3luPnnU1TBb3jDsBKIZWHU9yl/cLNA OKUhlMDd40txk+fQi3iiV5Ls9K8= ) 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 59647 test.dnssec. b5fz0dEp2co2pVO7biY896XmsJanjQIR69vC MvSF104/9iZk6eGVFi6hsa4aZcXutEjUDESB ynPkDjMWWIIhN6K1jYKGIc/sFKv1IUONRYHF KXGgZhC6aI0B1E4NA9AXLjlBVF60nHdc3iw8 5gTLDjypP3qAZrnzMvdiBopLnVdB25UZYKn8 mGpOuzKqX02TGMCFMlEVtMX4FP/XKAE8UjiQ 5ehC1JvIKIyg/2zM+ot3nmcqqtUfzp/Hweyc aIkl/9wPJPwMedfTqOjfUKFdB+GiZ0Zz16HZ 5MfJui5IGh5Y6Q04kMrnap2V5U7mByTzx/ud V/eFYhmSHGtAXzBjMA== ) a.test.dnssec. 
86400 IN A 1.1.1.1 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzH oU63fHJHQHeQV+fc0Rx8cCmZSzuqk1lSBelV 3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsg YEUQJMfk4FLjYW67DHNcuoCnKbDJhZS0ndVf I474k7ZEZJsGslwk/vcIoFnTa4o= ) 172800 NSEC b.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. TCduf7xPSrWvEAzBO7Kx5haR85yA/lbsswkQ v0QxlskqAqo+9YedGQV+wGblbCIOmkomrYcq u/rXQ5yoQ3SDXd/bw6EFdoQmH8UJOjMc7SdR xY93MjawPB6XXlJsSlbBFPWJwEpILVRhdBFX czdS5VCa1KmhAYZYQp1FY9rMelA= ) b.test.dnssec. 86400 IN A 2.2.2.2 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. f0M6Tcqe6B09ctaN3BGAit4u4cJE8x3Ik8sh gyMu0GN/lMv/Bo7PB6hgylLam3HXtF1pPAzX oYudXmhU8afPapHMXfUitC1lFQB5ZW052ZC7 JXV9MnGULydz1blj2EdN+JL3Za8SJKM0LrLB XdQ+QUV+A/6N7hUV6usz5YmdBeI= ) 172800 NSEC ns1.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. sc6v19dcOFVa295/Xf1pKxBhbdpEErY8CTDQ fw2fjJf0Y3wL1Y1Mlr5zi5ShceQwgua+6YHE DWNbAPcXrJ0lLMU4DU5r0sAyBiBCgCavngGk i59W+nv11zuIpPMnlaMHpJVfJrQ+c4z7H9MH 77B0fMRFTUnvAXoq6ag8Q5POITI= ) ns1.test.dnssec. 86400 IN A 3.3.3.3 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Ke z0tdFiNfxvGbm85XyCtSqJIo2S/ZLVJUv/mG nGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP 5FL8SbjlovVYYAG5woW4p3+os28mmCAJA8gP JTywbcREEhFB4cir2M/QVP+9h+Y= ) 172800 NSEC test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. i7F/ezGl/pGXCC6JyVDaxuwdZMAgv9QLxwzi PTgjCG8Sj6pTIxaQkSLwXsoB9gF77WWBANow R2SWdz0Zai2vWnv/NYoNm9ZfRJEQ9NuExeYp rvX/+lLOHvZXN6tUerIQbWAxO2GwdzHoejSn wReUNVr9MxzZUvuJ33Z7X/7s9VQ= ) Named.conf (Cache/Resolver) include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; recursion yes; notify no; #DNSSEC dnssec-enable yes; dnssec-validation yes; listen-on {127.0.0.1; 172.23.2.87; 80.58.102.37; 80.58.102.115; }; #listen-on {127.0.0.1; 80.58.102.37; 80.58.102.115; }; allow-query { telefonica; }; allow-transfer { none; }; recursive-clients 40000; max-cache-size 838860800; rrset-order { order fixed;}; max-ncache-ttl 600; }; trusted-keys { "test.dnssec." 257 3 5 "AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdMZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVpxXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7mYF29/ZT XB6nmdSxruQlSvYhzkWTaPNtfrUnIUlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWXnPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPmp2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQXIS mAeV1evGomCC/x9DNleDHCszJOptwurzRPZ7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTzCkRnrlvXYJpgzDtgmQxE9Bs="; }; I have configured a secure zone (test.dnssec) and I'm trying to perform some queries from the resolver to the Name server (172.23.2.57): /usr/local/bin/dig @172.23.2.57 a.test.dnssec +dnssec ; <<>> DiG 9.7.1-P2 <<>> @172.23.2.57 a.test.dnssec +dnssec ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2654 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 3 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;a.test.dnssec. IN A ;; ANSWER SECTION: a.test.dnssec. 86400 IN A 1.1.1.1 a.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. 
P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzHoU63fHJHQHeQV+ fc0Rx8 cCmZSzuqk1lSBelV3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsgYEUQ JMfk4FLjYW67DHNcuoCnKbDJhZS0ndVfI474k7ZEZJsGslwk/vcIoFnT a4o= ;; AUTHORITY SECTION: test.dnssec. 86400 IN NS ns1.test.dnssec. test.dnssec. 86400 IN RRSIG NS 5 2 86400 20101009062248 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeoidNuytxbiFnbCOunzvaY pgvDpEr0CPrwXaDLTSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFwaaQX Fc3rDLsXjCi+WF0/Z7meteM4jYdx5nrVQx9pgur7VPbP88bJOqWCPBev 2Ho= ;; ADDITIONAL SECTION: ns1.test.dnssec. 86400 IN A 3.3.3.3 ns1.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Kez0tdFiNfxvGbm85XyCtS qJIo2S/ZLVJUv/mGnGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP5FL8 SbjlovVYYAG5woW4p3+os28mmCAJA8gPJTywbcREEhFB4cir2M/QVP+9 h+Y= ;; Query time: 1 msec ;; SERVER: 172.23.2.57#53(172.23.2.57) ;; WHEN: Thu Sep 9 09:47:14 2010 ;; MSG SIZE rcvd: 605 I get the right answer along with the RRSIG records, but the problem is that the ad flag is never set. Any idea what is wrong?
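An observation about the test itself (a sketch, not from the thread): the dig above is sent to 172.23.2.57, the authoritative server, which answers authoritatively (note the aa flag in the response) and performs no validation, so it will never set ad. The ad flag is only set by a validating resolver, so the query needs to go through the cache/resolver instead, for example (using the resolver address from its listen-on list):

dig @172.23.2.87 a.test.dnssec +dnssec

If the trusted-keys entry matches the zone's KSK, the resolver's answer should come back with flags: qr rd ra ad.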

    Read the article

  • Unstable DNS with bind

    - by yasser abd
We have a CentOS machine called jupiter on which I have installed bind9. On every other machine, the DNS server is set to jupiter's IP address (192.168.2.101), as you can see in the output of the following command on Windows: >ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : mypcs Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme 57xx Gigabit Controller Physical Address. . . . . . . . . : 00-1A-A0-AC-E4-CC DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::c16d:3ae4:5907:30c4%8(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.2.98(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : Thursday, September 20, 2012 10:26:11 AM Lease Expires . . . . . . . . . . : Sunday, September 23, 2012 10:26:10 AM Default Gateway . . . . . . . . . : 192.168.2.1 DHCP Server . . . . . . . . . . . : 192.168.2.1 DHCPv6 IAID . . . . . . . . . . . : 201333408 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-16-3A-50-01-00-1A-A0-AC-E4-CC DNS Servers . . . . . . . . . . . : 192.168.2.101 192.168.2.1 192.168.2.1 NetBIOS over Tcpip. . . . . . . . : Enabled All machines can always nslookup a domain (mydomain.com) that is defined on jupiter's DNS server, as you can see in the output of nslookup on the same Windows machine: >nslookup mydomain.com Server: UnKnown Address: 192.168.2.101 Name: mydomain.com Address: 192.168.2.100 The problem is that sometimes mydomain.com cannot be pinged; here is the output of ping on the same Windows machine: >ping mydomain.com Ping request could not find host mydomain.com. Please check the name and try again. This looks random and happens once in a while: the machine can look up the DNS record but can't ping the host or browse the website hosted on mydomain.com, which should resolve to 192.168.2.100. On a Linux machine with the same DNS settings, the output of dig for mydomain.com is as follows: $ dig mydomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36090 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1 ;; QUESTION SECTION: ;mydomain.com. IN A ;; ANSWER SECTION: mydomain.com. 86400 IN A 192.168.2.100 ;; AUTHORITY SECTION: mydomain.com. 86400 IN NS jupiter. ;; ADDITIONAL SECTION: jupiter. 86400 IN A 192.168.2.101 ;; Query time: 1 msec ;; SERVER: 192.168.2.101#53(192.168.2.101) ;; WHEN: Thu Sep 20 16:32:14 2012 ;; MSG SIZE rcvd: 83 We've never had the same problem on Macs; they always resolve mydomain.com. Here is how I have defined mydomain.com in Bind9's configs on jupiter. Note that the machine at 192.168.2.100 is called venus, so I have this file:
/var/named/named.venus:
$TTL 1D
@ IN SOA jupiter. admin.ourcompany.com. (
    2003052800 ; serial
    86400 ; refresh
    300 ; retry
    604800 ; expire
    3600 ; minimum )
@ IN NS jupiter.
@ IN A 192.168.2.100
* IN A 192.168.2.100

/var/named/zones/named.venus.zone:
zone "mydomain.com" IN { type master; file "/var/named/named.venus"; allow-update { none; }; };

One thing to note is that I haven't defined reverse DNS lookups; only the forward lookups are defined in the Bind9 configs. I'm not sure whether that's relevant. So my question is: why is this so unstable? What could be the cause?
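One diagnostic sketch (not from the thread): the Windows client lists 192.168.2.1 — the router — as its second and third DNS servers, and the Windows resolver will silently fail over to it whenever jupiter is slow to answer. The router knows nothing about mydomain.com, which would produce exactly this intermittent "could not find host", while nslookup masks the problem because it always queries the first listed server. Querying each configured server directly should confirm it:

nslookup mydomain.com 192.168.2.101
nslookup mydomain.com 192.168.2.1

If the second query returns a non-existent-domain error, the fix is to stop handing out 192.168.2.1 as a DNS server to LAN clients, or to make the router forward queries for mydomain.com to jupiter.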

    Read the article

< Previous Page | 204 205 206 207 208 209 210  | Next Page >