Search Results

Search found 2187 results on 88 pages for 'combined fragment'.


  • jqGrid - Problems opening in jQuery tabs (in Firefox and Google Chrome)

    - by Ben Hargreaves
    I have developed a very simple MVC app to test out Trirand's jqGrid for MVC. The app opens a jqGrid in a jQuery tab group, and everything is OK in IE. However, in Firefox the jqGrid only opens occasionally in the first tab (and not under any other tab), and in Chrome my jqGrids don't appear to open under any tab of the group. I'm a bit of an MVC newbie (and have only been testing jqGrid out for a few days), but I know my users will want to use different browsers. Trirand have not come back with an answer, so I wondered if anyone else has had a similar issue. I have really just implemented jqGrid as per the controllers and model in the sample application on the Trirand site, and then combined it with a straightforward jQuery tab group. My MVC details page is as follows:

    ```aspx
    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
        Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.Family>" %>
    <%@ Import Namespace="Trirand.Web.Mvc" %>
    <%@ Import Namespace="PRAMSAPP.Controllers" %>
    <%@ Import Namespace="PRAMSAPP.Models" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
        Details
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <link rel="stylesheet" type="text/css" href="/scripts/jquery-ui-1.7.2.custom.css" />
        <script type="text/javascript" src="/scripts/jquery-1.3.2.min.js"></script>
        <script type="text/javascript" src="/scripts/jquery-ui-1.7.2.custom.min.js"></script>
        <fieldset>
            <legend>Family</legend>
            <div class="display-field"><%= Html.Encode(Model.FamilyID) %></div>
            <div class="display-field"><%= Html.Encode(Model.FamilySurname) %></div>
        </fieldset>
        <div id="tabs">
            <ul>
                <li><%= Html.ActionLink("GridChildren", "GridDemo", new { controller = "Grid", id = Model.FamilyID })%></li>
                <li><%= Html.ActionLink("Children", "ShowFamiliesChildren", new { famid = Model.FamilyID, page = Page })%></li>
            </ul>
        </div>
        <p>
            <%= Html.ActionLink("Edit", "Edit", new { id = Model.FamilyID }) %> |
            <%= Html.ActionLink("Back to List", "Index") %>
        </p>
        <script type="text/javascript">
            $(function() { $('#tabs').tabs(); });
        </script>
    </asp:Content>
    ```

    And the grid view page (returned by the Grid controller) is as follows:

    ```aspx
    <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.FamiliesChildrenJqGridModel>" %>
    <%@ Import Namespace="Trirand.Web.Mvc" %>
    <%@ Import Namespace="PRAMSAPP.Controllers" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head id="Head1" runat="server">
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <!-- The jQuery UI theme that will be used by the grid -->
        <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/redmond/jquery-ui-1.7.1.custom.css" />
        <!-- The CSS UI theme extension of jqGrid -->
        <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/ui.jqgrid.css" />
        <!-- jQuery library is a prerequisite for jqGrid -->
        <script type="text/javascript" src="/Scripts/jquery-1.3.2.min.js"></script>
        <!-- language pack - MUST be included before the jqGrid javascript -->
        <script type="text/javascript" src="/Scripts/grid.locale-en.js"></script>
        <script type="text/javascript" src="/Scripts/jqgrid/jquery.jqGrid.min.js"></script>
    </head>
    <body>
        <div>
            <%= Html.Trirand().JQGrid(Model.FamiliesChildrenGrid, "JQGrid1") %>
        </div>
    </body>
    ```
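    A commonly suggested workaround, sketched below (not from the original question; it assumes the "#tabs" container and "JQGrid1" grid ID used above), is to resize the grid when its tab is shown. jqGrid measures its container when the grid is created, and a hidden tab panel has zero width in Firefox and Chrome:

    ```javascript
    // Sketch: give the grid a real width once its tab panel becomes visible.
    // Assumes jQuery UI 1.7 tabs and the grid/tab IDs from the markup above.
    $(function () {
        $('#tabs').tabs({
            show: function (event, ui) {
                var grid = $('#JQGrid1');
                if (grid.length) {
                    // setGridWidth is jqGrid's API for resizing an existing grid
                    grid.setGridWidth($(ui.panel).width());
                }
            }
        });
    });
    ```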


  • Mixing inheritance mapping strategies in NHibernate

    - by MylesRip
    I have a rather large inheritance hierarchy in which some of the subclasses add very little and others add quite a bit. I don't want to map the entire hierarchy using either "table per class hierarchy" or "table per subclass" due to the size and complexity of the hierarchy. Ideally I'd like to mix mapping strategies, so that portions of the hierarchy where the subclasses add very little are combined into a common table a la "table per class hierarchy", and subclasses that add a lot are broken out into separate tables. Using this approach, I would expect to have 2 or 3 tables with very little wasted space, instead of either 1 table with lots of fields that don't apply to most of the objects, or 20+ tables, several of which would have only a couple of columns. In the NHibernate Reference Documentation version 2.1.0, I found section 8.1.4, "Mixing table per class hierarchy with table per subclass". This approach switches strategies partway down the hierarchy by using:

    ```xml
    ...
    <subclass ...>
        <join ...>
            <property ...>
            ...
        </join>
    </subclass>
    ...
    ```

    This is great in theory. In practice, though, I found that the schema was too restrictive in what was allowed inside the "join" element for me to be able to accomplish what I needed. Here is the related part of the schema definition:

    ```xml
    <xs:element name="join">
      <xs:complexType>
        <xs:sequence>
          <xs:element ref="subselect" minOccurs="0" />
          <xs:element ref="comment" minOccurs="0" />
          <xs:element ref="key" />
          <xs:choice minOccurs="0" maxOccurs="unbounded">
            <xs:element ref="property" />
            <xs:element ref="many-to-one" />
            <xs:element ref="component" />
            <xs:element ref="dynamic-component" />
            <xs:element ref="any" />
            <xs:element ref="map" />
            <xs:element ref="set" />
            <xs:element ref="list" />
            <xs:element ref="bag" />
            <xs:element ref="idbag" />
            <xs:element ref="array" />
            <xs:element ref="primitive-array" />
          </xs:choice>
          <xs:element ref="sql-insert" minOccurs="0" />
          <xs:element ref="sql-update" minOccurs="0" />
          <xs:element ref="sql-delete" minOccurs="0" />
        </xs:sequence>
        <xs:attribute name="table" use="required" type="xs:string" />
        <xs:attribute name="schema" type="xs:string" />
        <xs:attribute name="catalog" type="xs:string" />
        <xs:attribute name="subselect" type="xs:string" />
        <xs:attribute name="fetch" default="join">
          <xs:simpleType>
            <xs:restriction base="xs:string">
              <xs:enumeration value="join" />
              <xs:enumeration value="select" />
            </xs:restriction>
          </xs:simpleType>
        </xs:attribute>
        <xs:attribute name="inverse" default="false" type="xs:boolean" />
        <xs:attribute name="optional" default="false" type="xs:boolean" />
      </xs:complexType>
    </xs:element>
    ```

    As you can see, this allows the use of "property" child elements or "component" child elements, but not both. It also doesn't allow for "subclass" child elements to continue the hierarchy below the point at which the strategy was changed. Is there a way to accomplish this?
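    For reference, a minimal sketch of what section 8.1.4 describes, adapted from the Payment example in the NHibernate/Hibernate reference documentation (class, table, and column names are illustrative only): one subclass that adds a lot gets its own joined table, while one that adds little stays in the hierarchy table.

    ```xml
    <class name="Payment" table="PAYMENT">
      <id name="Id" column="PAYMENT_ID">
        <generator class="native" />
      </id>
      <discriminator column="PAYMENT_TYPE" type="String" />
      <property name="Amount" column="AMOUNT" />
      <!-- subclass that adds a lot: broken out into its own joined table -->
      <subclass name="CreditCardPayment" discriminator-value="CREDIT">
        <join table="CREDIT_PAYMENT">
          <key column="PAYMENT_ID" />
          <property name="CreditCardType" column="CCTYPE" />
        </join>
      </subclass>
      <!-- subclass that adds very little: stays in the PAYMENT table -->
      <subclass name="CashPayment" discriminator-value="CASH" />
    </class>
    ```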


  • Design pattern for parsing data that will be grouped in two different ways and flipped

    - by lewisblackfan
    I'm looking for an easily maintainable and extendable design for a script that parses an Excel workbook into two separate workbooks, after pulling data from other locations like the command line and a database. The high-level details are as follows. I need to parse an Excel workbook containing a sheet that lists unique question names. The only reliable information that can be parsed from a question name is the book code, which identifies the title and edition of the textbook the question is associated with; the rest of the question name is not standardized well enough to be reliably parsed by computer. The general form of the question name is best described by the following regular expression:

    ```
    '^(\w+)\s(\w{1,2})\.(\w{1,2})\.(\w{1,3})\.(\w{1,3}\.)*$'
    ```

    The first sub-pattern is the book code, the second sub-pattern is 90% of the time the chapter, and the rest of the sub-patterns could be section, problem type, problem number, or question type information. There is no simple logic, at least not one I can find. There will be a minimum of three other columns in this spreadsheet: one column will be the chapter the question is associated with, the second will be the section within the chapter the question is associated with, and the third will be some kind of asset indicated by a uniform resource locator.

    ```
    1 | 1 | qname1 | url | description | url | description ...
    1 | 1 | qname2 | url | description
    1 | 1 | qname3 | url | description | url | description | url |
    ```

    The asset can be indicated by a full or partial uniform resource locator; a partial URL will need to be completed before it can be fed into the application. In theory there is no limit to the number of asset columns, and the assets are grouped in columns by type. Sometimes additional data has to be retrieved from a database or combined with the book code before the asset URL is complete and can be understood by the application that will be using the asset. The type is an abstraction: there are eight types right now, each with its own logic for how the uniform resource locator is handled and/or completed, and I have to add a new type and its logic every three or four months. For each asset URL there is the possibility of a description column, a character string for display in the application, but not always. (I've already worked out validating the description text, and squashing MS's obscure code page down to something 7-bit ASCII can handle.)

    Now that all the details are filled in, I can get to the actual problem of parsing the file. I need to split the information in this Excel workbook into two separate workbooks. The first workbook groups all the questions by section in rows, with the first cell being the section doublet and the rest of the cells in the row being the question names:

    ```
    1.1 | qname1 | qname2 | qname3 | qname4 |
    1.2 | qname1 | qname2 | qname3 |
    1.3 | qname1 | qname2 | qname3 | qname4 | qname5
    ```

    There is no set number of questions for each section, as you can see from the above example. The second workbook is more complicated: there is one row per asset, and question names that have more than one asset are duplicated. There will be four or five columns on this sheet. The first is the question name for the asset, the second is a media type used to select the correct icon for the asset in the application, the third is a string representing the asset type, the fourth is the full and complete uniform resource locator for the asset, and the fifth column is the optional text description for the asset.

    ```
    q1 | mtype1 | atype1 | url | description
    q1 | mtype2 | atype2 | url | description
    q1 | mtype2 | atype3 | url | description
    q2 | mtype1 | atype1 | url | description
    q2 | mtype2 | atype3 | url | description
    ```

    For the original six types I did have a script that parsed the source Excel workbook into the other two workbooks, and I was able to add two more types, until I ran aground on the implementation of the ninth and tenth types. What broke my script was the fact that the ninth type is actually a sub-type of one of the original six, but with entirely different logic, and my mostly procedural script could not accommodate it without duplicating a lot of code. I also had a lot of bugs in the script and will be writing the tests first this time around. I'm stuck with the format of the resulting two workbooks; this script is glue code, and development went ahead with the project without bothering to get a complete spec from the sponsor. I work for the same company as the developers but in the editorial department, editorial is co-sponsor of the project, and I am expected to fix pesky details like this (I'm foaming at the mouth as I type this). I've tried factories, and I've tried different object models, but each resulting workbook is so different that when I find a design that works for generating one workbook, the code is not really usable for generating the other. What I would really like are ideas about a maintainable and extensible design for parsing the source workbook into both workbooks with maximum code reuse, and/or sympathy.
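    One commonly suggested shape for this kind of problem, sketched below (not from the original post; all names are illustrative), is to isolate the per-type URL logic behind a small interface plus a registry, so the parser stays untouched and a ninth or tenth type, even a sub-type with entirely different logic, is one new class:

    ```csharp
    using System;
    using System.Collections.Generic;

    // Each asset type owns its own URL-completion logic.
    public interface IAssetHandler
    {
        // Turns a possibly partial URL into a complete one,
        // using the book code (and whatever else the type needs).
        string CompleteUrl(string rawUrl, string bookCode);
    }

    // A sub-type with different logic is just another implementation.
    public sealed class TypeNineHandler : IAssetHandler
    {
        public string CompleteUrl(string rawUrl, string bookCode)
            => $"http://example.com/{bookCode}/{rawUrl.TrimStart('/')}"; // placeholder host
    }

    public sealed class AssetHandlerRegistry
    {
        private readonly Dictionary<string, IAssetHandler> _handlers =
            new Dictionary<string, IAssetHandler>(StringComparer.OrdinalIgnoreCase);

        public void Register(string assetType, IAssetHandler handler)
            => _handlers[assetType] = handler;

        public string CompleteUrl(string assetType, string rawUrl, string bookCode)
        {
            if (!_handlers.TryGetValue(assetType, out var handler))
                throw new ArgumentException($"No handler for asset type '{assetType}'.");
            return handler.CompleteUrl(rawUrl, bookCode);
        }
    }
    ```

    The two output workbooks can then be produced by two independent writers (one grouping question names by section, one emitting a row per asset) that consume the same parsed rows, keeping the parsing, URL completion, and output formats decoupled.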


  • Why is android:FLAG_BLUR_BEHIND creating a gradient background in my new activity instead of blurring

    - by nderraugh
    Hi, I've got two activities. One is supposed to be a blur in front of the other. The background activity has several ImageViews which are set up as thin gradients extending across most of the screen and 10dip high. When I start the second activity, it sets the background as a gradient occupying the entire window space; that is, it appears to be fill_parent'd for both height and width. If I comment out the ImageViews, then it blurs and looks as expected. Any thoughts? Here's the code doing the blur:

    ```java
    import android.app.Activity;
    import android.os.Bundle;
    import android.view.View;
    import android.view.View.OnClickListener;
    import android.view.WindowManager;

    public class TransluscentBlurSummaryB extends Activity {
        @Override
        protected void onCreate(Bundle icicle) {
            super.onCreate(icicle);
            getWindow().setFlags(WindowManager.LayoutParams.FLAG_BLUR_BEHIND,
                    WindowManager.LayoutParams.FLAG_BLUR_BEHIND);
            getWindow().getAttributes().dimAmount = 0.5f;
            getWindow().setFlags(WindowManager.LayoutParams.FLAG_DIM_BEHIND,
                    WindowManager.LayoutParams.FLAG_DIM_BEHIND);
            setContentView(R.layout.sheetbdetails);
            OnClickListener clickListener = new OnClickListener() {
                public void onClick(View v) {
                    TransluscentBlurSummaryB.this.finish();
                }
            };
            findViewById(R.id.sheetbdetailstable).setOnClickListener(clickListener);
        }
    }
    ```

    And here's the layout with the ImageView gradients:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:id="@+id/summarysparent"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent">

        <!-- view1 goes on top -->
        <RelativeLayout
            android:id="@+id/view2"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_alignParentBottom="true">
            <Button
                android:id="@+id/ButtonBack"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="Back"
                android:width="100dp" />
            <Button
                android:id="@+id/ButtonNext"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignParentRight="true"
                android:text="Start Over"
                android:width="100dp" />
        </RelativeLayout>

        <TextView
            android:id="@+id/view1"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_alignParentTop="true"
            android:layout_centerHorizontal="true"
            android:textSize="10pt"
            android:text="Summary" />

        <ScrollView
            android:id="@+id/summaryscrollview"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_below="@+id/view1"
            android:layout_above="@+id/view2">

            <RelativeLayout
                android:id="@+id/summarydetails"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content">

                <!-- view2 goes on the bottom -->
                <TextView
                    android:id="@+id/textview2"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_below="@+id/view1"
                    android:layout_centerHorizontal="true"
                    android:text="Recommended Child Support Order"
                    android:layout_marginTop="10dip" />
                <ImageView
                    android:id="@+id/horizontalLine1"
                    android:layout_width="fill_parent"
                    android:layout_height="10dip"
                    android:layout_marginLeft="5dip"
                    android:layout_marginRight="5dip"
                    android:src="@drawable/black_white_gradient"
                    android:layout_below="@+id/textview2"
                    android:layout_marginTop="10dip" />
                <TextView
                    android:id="@+id/textview3"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_below="@+id/horizontalLine1"
                    android:layout_centerHorizontal="true"
                    android:text="You"
                    android:layout_marginTop="10dip" />
                <TextView
                    android:id="@+id/textview10"
                    android:layout_width="150dp"
                    android:layout_height="wrap_content"
                    android:layout_below="@+id/textview3"
                    android:layout_centerHorizontal="true"
                    android:layout_marginTop="10dip"
                    android:gravity="center_horizontal" />
                <ImageView
                    android:id="@+id/horizontalLine2"
                    android:layout_width="fill_parent"
                    android:layout_height="10dip"
                    android:layout_marginLeft="5dip"
                    android:layout_marginRight="5dip"
                    android:src="@drawable/black_white_gradient"
                    android:layout_below="@+id/textview10"
                    android:layout_marginTop="10dip" />
                <TextView
                    android:id="@+id/textview4"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_below="@+id/horizontalLine2"
                    android:layout_centerHorizontal="true"
                    android:text="Other Parent"
                    android:layout_marginTop="10dip" />
                <TextView
                    android:id="@+id/textview11"
                    android:layout_width="150dp"
                    android:layout_height="wrap_content"
                    android:layout_below="@+id/textview4"
                    android:layout_centerHorizontal="true"
                    android:layout_marginTop="10dip"
                    android:text="$536.18"
                    android:gravity="center_horizontal" />
                <ImageView
                    android:id="@+id/horizontalLine3"
                    android:layout_width="fill_parent"
                    android:layout_height="10dip"
                    android:layout_marginLeft="5dip"
                    android:layout_marginRight="5dip"
                    android:src="@drawable/black_white_gradient"
                    android:layout_below="@+id/textview11"
                    android:layout_marginTop="10dip" />
                <TextView
                    android:id="@+id/textview5"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_below="@+id/horizontalLine3"
                    android:layout_centerHorizontal="true"
                    android:text="Calculation Details"
                    android:layout_marginTop="15dip" />
                <ImageView
                    android:id="@+id/infoButton"
                    android:src="@drawable/ic_menu_info_details"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_below="@+id/horizontalLine3"
                    android:layout_toRightOf="@+id/textview5"
                    android:clickable="true" />
                <ImageView
                    android:id="@+id/horizontalLine4"
                    android:layout_width="fill_parent"
                    android:layout_height="10dip"
                    android:layout_marginLeft="5dip"
                    android:layout_marginRight="5dip"
                    android:src="@drawable/black_white_gradient"
                    android:layout_below="@+id/textview5"
                    android:layout_marginTop="18dip" />
            </RelativeLayout>
        </ScrollView>
    </RelativeLayout>
    ```

    The gradient drawable is this:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <shape xmlns:android="http://schemas.android.com/apk/res/android"
        android:shape="rectangle">
        <gradient android:startColor="#FFFFFF" android:centerColor="#000000"
            android:endColor="#FFFFFF" android:angle="270" />
        <padding android:left="7dp" android:top="7dp"
            android:right="7dp" android:bottom="7dp" />
        <corners android:radius="8dp" />
    </shape>
    ```

    And here's the layout from the activity doing the blurring on top:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
        android:id="@+id/sheetbdetails"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:clickable="true">
        <TableLayout
            android:id="@+id/sheetbdetailstable"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:scrollbars="vertical"
            android:shrinkColumns="0">
            <TableRow>
                <TextView android:padding="3dip" />
                <TextView android:text="You" android:padding="3dip" />
                <TextView android:text="@string/otherparent" android:padding="3dip" />
                <TextView android:text="Combined" android:padding="3dip" />
            </TableRow>
        </TableLayout>
    </ScrollView>
    ```

    The transparent windows are themed from styles.xml in the ApiDemos using @style/Theme.Transparent.


  • How to find the vector for the quaternion from X, Y, Z rotations

    - by can poyrazoglu
    I am creating a very simple project in OpenGL and I'm stuck with rotations. I am trying to rotate an object independently about all 3 axes: X, Y, and Z. I've had sleepless nights due to the "gimbal lock" problem after rotating about one axis. I then learned that quaternions would solve my problem. I've researched quaternions and implemented them, but I haven't been able to convert my rotations to quaternions. For example, if I want to rotate around the Z axis 90 degrees, I just create the {0,0,1} vector for my quaternion and rotate around that axis 90 degrees using the code here: http://iphonedevelopment.blogspot.com/2009/06/opengl-es-from-ground-up-part-7_04.html (the most complicated matrix towards the bottom). That's OK for one vector, but say I first want to rotate 90 degrees around Z, then 90 degrees around X (just as an example). What vector do I need to pass in? How do I calculate that vector? I am not good with matrices and trigonometry (I know the basics and the general rules, but I'm just not a whiz), but I need to get this done. There are LOTS of tutorials about quaternions, but I seem to understand none (or they don't answer my question). I just need to learn to construct the vector for rotations around more than one axis combined.

    UPDATE: I've found this nice page about quaternions and decided to implement them this way: http://www.cprogramming.com/tutorial/3d/quaternions.html Here is my code for quaternion multiplication:

    ```cpp
    void cube::quatmul(float* q1, float* q2, float* resultRef) {
        float w = q1[0]*q2[0] - q1[1]*q2[1] - q1[2]*q2[2] - q1[3]*q2[3];
        float x = q1[0]*q2[1] + q1[1]*q2[0] + q1[2]*q2[3] - q1[3]*q2[2];
        float y = q1[0]*q2[2] - q1[1]*q2[3] + q1[2]*q2[0] + q1[3]*q2[1];
        float z = q1[0]*q2[3] + q1[1]*q2[2] - q1[2]*q2[1] + q1[3]*q2[0];
        resultRef[0] = w;
        resultRef[1] = x;
        resultRef[2] = y;
        resultRef[3] = z;
    }
    ```

    Here is my code for applying a quaternion to my modelview matrix (I have a tmodelview variable that is my target modelview matrix):

    ```cpp
    void cube::applyquat() {
        float& x = quaternion[1];
        float& y = quaternion[2];
        float& z = quaternion[3];
        float& w = quaternion[0];
        float magnitude = sqrtf(w * w + x * x + y * y + z * z);
        if (magnitude == 0) {
            x = 1;
            w = y = z = 0;
        } else if (magnitude != 1) {
            x /= magnitude;
            y /= magnitude;
            z /= magnitude;
            w /= magnitude;
        }
        tmodelview[0] = 1 - (2 * y * y) - (2 * z * z);
        tmodelview[1] = 2 * x * y + 2 * w * z;
        tmodelview[2] = 2 * x * z - 2 * w * y;
        tmodelview[3] = 0;
        tmodelview[4] = 2 * x * y - 2 * w * z;
        tmodelview[5] = 1 - (2 * x * x) - (2 * z * z);
        tmodelview[6] = 2 * y * z - 2 * w * x;
        tmodelview[7] = 0;
        tmodelview[8] = 2 * x * z + 2 * w * y;
        tmodelview[9] = 2 * y * z + 2 * w * x;
        tmodelview[10] = 1 - (2 * x * x) - (2 * y * y);
        tmodelview[11] = 0;
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadMatrixf(tmodelview);
        glMultMatrixf(modelview);
        glGetFloatv(GL_MODELVIEW_MATRIX, tmodelview);
        glPopMatrix();
    }
    ```

    And my code for rotation (called externally), where quaternion is a member variable of the cube:

    ```cpp
    void cube::rotatex(int angle) {
        float quat[4];
        float ang = angle * PI / 180.0;
        quat[0] = cosf(ang / 2);
        quat[1] = sinf(ang / 2);
        quat[2] = 0;
        quat[3] = 0;
        quatmul(quat, quaternion, quaternion);
        applyquat();
    }

    void cube::rotatey(int angle) {
        float quat[4];
        float ang = angle * PI / 180.0;
        quat[0] = cosf(ang / 2);
        quat[1] = 0;
        quat[2] = sinf(ang / 2);
        quat[3] = 0;
        quatmul(quat, quaternion, quaternion);
        applyquat();
    }

    void cube::rotatez(int angle) {
        float quat[4];
        float ang = angle * PI / 180.0;
        quat[0] = cosf(ang / 2);
        quat[1] = 0;
        quat[2] = 0;
        quat[3] = sinf(ang / 2);
        quatmul(quat, quaternion, quaternion);
        applyquat();
    }
    ```

    I call, say, rotatex 10-11 times, rotating by only 1 degree each time, but my cube ends up rotated almost 90 degrees after those 10-11 one-degree calls, which doesn't make sense. Also, after calling the rotation functions on different axes, my cube gets skewed, becomes 2-dimensional, and irreversibly disappears (a column in the modelview matrix becomes all zeros), which obviously shouldn't happen with a correct implementation of quaternions.
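    For comparison, here is a sketch (not from the original post) of the standard unit-quaternion-to-matrix conversion in OpenGL's column-major layout. Two elements in the code above carry opposite signs from it, and the last column is never written there even though glLoadMatrixf reads all 16 floats:

    ```cpp
    // Standard conversion of a unit quaternion (w, x, y, z) to a column-major
    // rotation matrix. Compared with the code above, the 2yz +/- 2wx terms in
    // elements [6] and [9] are swapped, and [12..15] are set explicitly.
    void quatToMatrix(const float q[4], float m[16]) {
        float w = q[0], x = q[1], y = q[2], z = q[3];
        m[0]  = 1 - 2*y*y - 2*z*z;  m[4] = 2*x*y - 2*w*z;      m[8]  = 2*x*z + 2*w*y;      m[12] = 0;
        m[1]  = 2*x*y + 2*w*z;      m[5] = 1 - 2*x*x - 2*z*z;  m[9]  = 2*y*z - 2*w*x;      m[13] = 0;
        m[2]  = 2*x*z - 2*w*y;      m[6] = 2*y*z + 2*w*x;      m[10] = 1 - 2*x*x - 2*y*y;  m[14] = 0;
        m[3]  = 0;                  m[7] = 0;                  m[11] = 0;                  m[15] = 1;
    }
    ```

    A non-orthogonal matrix like the one produced by the swapped signs compounds its error on every application, which is consistent with the skewing, flattening, and runaway rotation described above.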


  • Core Plot: only works ok with three plots

    - by Luis
    I am adding a scatter plot to my app (iGear), so that when the user selects one, two, or three chainrings combined with a cogset on a bike, lines will show the gear meters. The problem is that Core Plot only shows the plots when three chainrings are selected. I need your help; this is my first try at Core Plot and I'm lost. My code is the following.

    iGearMainViewController.m:

    ```objc
    - (IBAction)showScatterIpad:(id)sender {
        cogsetToPass = [NSMutableArray new];
        arrayForChainringOne = [NSMutableArray new];
        arrayForChainringTwo = [NSMutableArray new];
        arrayForChainringThree = [NSMutableArray new];
        // behavior according to number of chainrings
        switch (self.segmentedControl.selectedSegmentIndex) {
            case 0: // one chainring selected
                for (int i = 1; i <= [cassette.numCogs intValue]; i++) {
                    if (i < 10) {
                        corona = [NSString stringWithFormat:@"cog0%d", i];
                    } else {
                        corona = [NSString stringWithFormat:@"cog%d", i];
                    }
                    float one = (wheelSize * [_oneChainring.text floatValue] / [[cassette valueForKey:corona] floatValue]) / 1000;
                    float teeth = [[cassette valueForKey:corona] floatValue];
                    [cogsetToPass addObject:[NSNumber numberWithFloat:teeth]];
                    [arrayForChainringOne addObject:[NSNumber numberWithFloat:one]];
                }
                break;
            case 1: // two chainrings selected
                for (int i = 1; i <= [cassette.numCogs intValue]; i++) {
                    if (i < 10) {
                        corona = [NSString stringWithFormat:@"cog0%d", i];
                    } else {
                        corona = [NSString stringWithFormat:@"cog%d", i];
                    }
                    float one = (wheelSize * [_oneChainring.text floatValue] / [[cassette valueForKey:corona] floatValue]) / 1000;
                    //NSLog(@" gearsForOneChainring = %@",[NSNumber numberWithFloat:one]);
                    float two = (wheelSize * [_twoChainring.text floatValue] / [[cassette valueForKey:corona] floatValue]) / 1000;
                    [cogsetToPass addObject:[NSNumber numberWithFloat:[[cassette valueForKey:corona] floatValue]]];
                    [arrayForChainringOne addObject:[NSNumber numberWithFloat:one]];
                    [arrayForChainringTwo addObject:[NSNumber numberWithFloat:two]];
                }
                break;
            case 2: // three chainrings selected
                for (int i = 1; i <= [cassette.numCogs intValue]; i++) {
                    if (i < 10) {
                        corona = [NSString stringWithFormat:@"cog0%d", i];
                    } else {
                        corona = [NSString stringWithFormat:@"cog%d", i];
                    }
                    float one = (wheelSize * [_oneChainring.text floatValue] / [[cassette valueForKey:corona] floatValue]) / 1000;
                    float two = (wheelSize * [_twoChainring.text floatValue] / [[cassette valueForKey:corona] floatValue]) / 1000;
                    float three = (wheelSize * [_threeChainring.text floatValue] / [[cassette valueForKey:corona] floatValue]) / 1000;
                    [cogsetToPass addObject:[cassette valueForKey:corona]];
                    [arrayForChainringOne addObject:[NSNumber numberWithFloat:one]];
                    [arrayForChainringTwo addObject:[NSNumber numberWithFloat:two]];
                    [arrayForChainringThree addObject:[NSNumber numberWithFloat:three]];
                }
            default:
                break;
        }
        ScatterIpadViewController *sivc = [[ScatterIpadViewController alloc] initWithNibName:@"ScatterIpadViewController" bundle:nil];
        [sivc setModalTransitionStyle:UIModalTransitionStyleFlipHorizontal];
        sivc.records = [cassetteNumCogs integerValue];
        sivc.cogsetSelected = self.cogsetToPass;
        sivc.chainringOne = self.arrayForChainringOne;
        sivc.chainringThree = self.arrayForChainringThree;
        sivc.chainringTwo = self.arrayForChainringTwo;
        [self presentViewController:sivc animated:YES completion:nil];
    }
    ```

    And the child view with the code to draw the plots, ScatterIpadViewController.m:

    ```objc
    #pragma mark - CPTPlotDataSource methods

    - (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot {
        return records;
    }

    - (NSNumber *)numberForPlot:(CPTPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index {
        switch (fieldEnum) {
            case CPTScatterPlotFieldX:
                return [NSNumber numberWithInt:index];
                break;
            case CPTScatterPlotFieldY: {
                if ([plot.identifier isEqual:@"one"] == YES) {
                    //NSLog(@"chainringOne objectAtIndex:index = %@", [chainringOne objectAtIndex:index]);
                    return [chainringOne objectAtIndex:index];
                } else if ([plot.identifier isEqual:@"two"] == YES) {
                    //NSLog(@"chainringTwo objectAtIndex:index = %@", [chainringTwo objectAtIndex:index]);
                    return [chainringTwo objectAtIndex:index];
                } else if ([plot.identifier isEqual:@"three"] == YES) {
                    //NSLog(@"chainringThree objectAtIndex:index = %@", [chainringThree objectAtIndex:index]);
                    return [chainringThree objectAtIndex:index];
                }
            }
            default:
                break;
        }
        return nil;
    }
    ```

    The error returned is an exception from trying to access an empty array:

    ```
    2012-11-15 11:02:42.962 iGearScatter[3283:11603] *** Terminating app due to uncaught exception 'NSRangeException',
    reason: '*** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array'
    *** First throw call stack:
    (0x1989012 0x1696e7e 0x192b0b4 0x166cd 0x183f4 0x1bd39 0x179c0 0x194fb 0x199e1 0x43250 0x14b66 0x13ef0 0x13e89
     0x3b5753 0x3b5b2f 0x3b5d54 0x3c35c9 0x5c0814 0x392594 0x39221c 0x394563 0x3103b6 0x310554 0x1e87d8 0x27b3014
     0x27a37d5 0x192faf5 0x192ef44 0x192ee1b 0x29ea7e3 0x29ea668 0x2d265c 0x22dd 0x2205 0x1)
    libc++abi.dylib: terminate called throwing an exception
    ```

    Thank you!
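    One likely culprit, sketched below (not from the original post): numberOfRecordsForPlot: returns the full cog count for every plot, but when fewer than three chainrings are selected, chainringTwo and chainringThree stay empty, so the first numberForPlot: call on those plots indexes an empty array. Sizing each plot by its own data array avoids the out-of-bounds access:

    ```objc
    // Sketch: size each plot by its own array instead of the shared 'records'
    // count, so plots with no data simply draw nothing.
    - (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot {
        if ([plot.identifier isEqual:@"one"])   return [chainringOne count];
        if ([plot.identifier isEqual:@"two"])   return [chainringTwo count];
        if ([plot.identifier isEqual:@"three"]) return [chainringThree count];
        return 0;
    }
    ```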


  • Patching this code to apply a dynamic (iPhone) background position

    - by Brian
    I posted a question previously that got off topic. I'm reposting with better code that I have VERIFIED is compatible with iPhone (it works with mine, anyway!). I just want to apply background-position coordinates to the body element and call the script conditionally for iPhone, iPod, and iPad. Here's my conditional call for those devices:

    ```javascript
    var deviceAgent = navigator.userAgent.toLowerCase();
    var agentID = deviceAgent.match(/(iphone|ipod|ipad)/);
    if (agentID) {
        // do something
    } else {
        // do this
    }
    ```

    Now, I've found this excellent script that sets the "top: x" dynamically on the basis of scroll position. Everyone has told me (and ALL of the tutorials and Google search results as well) that it's impossible to set the scroll position dynamically for iPhone because of the viewport issue. HOWEVER, they are wrong, because if you scroll to the bottom of the page and view this JavaScript demo on iPhone, you can scroll and the <div style="background-position: fixed; top: x (variable)"></div> div DOES stay centered on iPhone. I really hope this question helps a lot of people. I thought it was impossible, but it's NOT... I just need help stitching it together! The original code (you can test it on iPhone yourself) is here: http://stevenbenner.com/2010/04/calculate-page-size-and-view-port-position-in-javascript/

    So I just need help getting the following code to apply the dynamic "background-position: 0 x" to the BODY tag, where x is centered and relative to the viewport position. It also needs to be nested inside the device-conditional code above.

    ```javascript
    // Page Size and View Port Dimension Tools
    // http://stevenbenner.com/2010/04/calculate-page-size-and-view-port-position-in-javascript/
    if (!sb_windowTools) { var sb_windowTools = new Object(); };
    sb_windowTools = {
        scrollBarPadding: 17, // padding to assume for scroll bars

        // EXAMPLE METHODS
        // center an element in the viewport
        centerElementOnScreen: function(element) {
            var pageDimensions = this.updateDimensions();
            element.style.top = ((this.pageDimensions.verticalOffset() + this.pageDimensions.windowHeight() / 2) - (this.scrollBarPadding + element.offsetHeight / 2)) + 'px';
            element.style.left = ((this.pageDimensions.windowWidth() / 2) - (this.scrollBarPadding + element.offsetWidth / 2)) + 'px';
            element.style.position = 'absolute';
        },

        // INFORMATION GETTERS
        // load the page size, view port position and vertical scroll offset
        updateDimensions: function() {
            this.updatePageSize();
            this.updateWindowSize();
            this.updateScrollOffset();
        },

        // load page size information
        updatePageSize: function() {
            // document dimensions
            var viewportWidth, viewportHeight;
            if (window.innerHeight && window.scrollMaxY) {
                viewportWidth = document.body.scrollWidth;
                viewportHeight = window.innerHeight + window.scrollMaxY;
            } else if (document.body.scrollHeight > document.body.offsetHeight) {
                // all but explorer mac
                viewportWidth = document.body.scrollWidth;
                viewportHeight = document.body.scrollHeight;
            } else {
                // explorer mac...would also work in explorer 6 strict, mozilla and safari
                viewportWidth = document.body.offsetWidth;
                viewportHeight = document.body.offsetHeight;
            };
            this.pageSize = {
                viewportWidth: viewportWidth,
                viewportHeight: viewportHeight
            };
        },

        // load window size information
        updateWindowSize: function() {
            // view port dimensions
            var windowWidth, windowHeight;
            if (self.innerHeight) {
                // all except explorer
                windowWidth = self.innerWidth;
                windowHeight = self.innerHeight;
            } else if (document.documentElement && document.documentElement.clientHeight) {
                // explorer 6 strict mode
                windowWidth = document.documentElement.clientWidth;
                windowHeight = document.documentElement.clientHeight;
            } else if (document.body) {
                // other explorers
                windowWidth = document.body.clientWidth;
                windowHeight = document.body.clientHeight;
            };
            this.windowSize = {
                windowWidth: windowWidth,
                windowHeight: windowHeight
            };
        },

        // load scroll offset information
        updateScrollOffset: function() {
            // viewport vertical scroll offset
            var horizontalOffset, verticalOffset;
            if (self.pageYOffset) {
                horizontalOffset = self.pageXOffset;
                verticalOffset = self.pageYOffset;
            } else if (document.documentElement && document.documentElement.scrollTop) {
                // Explorer 6 Strict
                horizontalOffset = document.documentElement.scrollLeft;
                verticalOffset = document.documentElement.scrollTop;
            } else if (document.body) {
                // all other Explorers
                horizontalOffset = document.body.scrollLeft;
                verticalOffset = document.body.scrollTop;
            };
            this.scrollOffset = {
                horizontalOffset: horizontalOffset,
                verticalOffset: verticalOffset
            };
        },

        // INFORMATION CONTAINERS
        // raw data containers
        pageSize: {},
        windowSize: {},
        scrollOffset: {},

        // combined dimensions object with bounding logic
        pageDimensions: {
            pageWidth: function() {
                return sb_windowTools.pageSize.viewportWidth > sb_windowTools.windowSize.windowWidth ?
                    sb_windowTools.pageSize.viewportWidth : sb_windowTools.windowSize.windowWidth;
            },
            pageHeight: function() {
                return sb_windowTools.pageSize.viewportHeight > sb_windowTools.windowSize.windowHeight ?
                    sb_windowTools.pageSize.viewportHeight : sb_windowTools.windowSize.windowHeight;
            },
            windowWidth: function() {
                return sb_windowTools.windowSize.windowWidth;
            },
            windowHeight: function() {
                return sb_windowTools.windowSize.windowHeight;
            },
            horizontalOffset: function() {
                return sb_windowTools.scrollOffset.horizontalOffset;
            },
            verticalOffset: function() {
                return sb_windowTools.scrollOffset.verticalOffset;
            }
        }
    };
    ```
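    One way to stitch the two pieces together, sketched below (not from the original post): recompute the body's background-position from the script's own viewport helpers on every scroll and resize, inside the device check shown above:

    ```javascript
    // Sketch: center the body background vertically on the current viewport,
    // using the sb_windowTools helpers defined above and the agentID check
    // from the first snippet.
    function updateBodyBackgroundPosition() {
        sb_windowTools.updateDimensions();
        var y = sb_windowTools.pageDimensions.verticalOffset() +
                Math.round(sb_windowTools.pageDimensions.windowHeight() / 2);
        document.body.style.backgroundPosition = '0 ' + y + 'px';
    }

    if (agentID) {
        // iPhone/iPod/iPad: reposition on scroll and resize
        window.onscroll = updateBodyBackgroundPosition;
        window.onresize = updateBodyBackgroundPosition;
        updateBodyBackgroundPosition();
    }
    ```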


  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features which are built on top of SimpleMembership. These changes add some great features, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of the top things you need to know, then dig into a lot more detail.

    Summary:

    - SimpleMembership has been designed as a replacement for the previous ASP.NET Role and Membership provider system
    - SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs
    - SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership
    - The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders
    - You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController
    - The existing ASP.NET Role and Membership provider system remains supported, as it is part of the ASP.NET core
    - ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership
    - The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership

    The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools.

    SimpleMembership: The future of membership for ASP.NET

    The ASP.NET Membership system was introduced with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you to override some specifics like backing storage) and the ability to store additional profile information (although the additional profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work).

    The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages:

    > With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really been focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built in security in ASP.NET. So with this release we took the time to create a new built in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created?
    > Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles.

    Part of simplifying membership involved fixing some common problems with ASP.NET Membership.

    Problems with ASP.NET Membership

    ASP.NET Membership was very obviously designed around a set of assumptions:

    - Users and user information would most likely be stored in a full SQL Server database or in Active Directory
    - User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider

    Some problems fall out of these assumptions.

    Requires full SQL Server for default cases

    The default, and most fully featured, ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, they rely on SQL Server cache dependencies, and they depend on agents for cleanup and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc.

    Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem.

    Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those...

    Custom Membership Providers have to work with a SQL-Server-centric API

    If you want to work with another database or other membership storage system, you need to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun, since there are a lot of methods that just don't apply.

    Designed around a specific view of users, roles and profiles

    The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns:

    - In OAuth and OpenID, the user doesn't have a password
    - Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles
    - For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob

    What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted, and the membership system worked with your model - not the other way around.

    Requires specific schema, overflow in blob columns

    I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema.

    SimpleMembership as a better membership system

    As you might have guessed, SimpleMembership was designed to address the above problems.
    Works with your Schema

    As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema:

    > All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an "ID" column and a "username" column. The important part here is that they can be named whatever you want. For instance, username doesn't have to be an alias; it could be an email column. You just have to tell SimpleMembership to treat that as the "username" used to log in.

    Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, plus a bunch of other columns he wanted in his app. Then we point SimpleMembership at that table with a one-liner:

    ```csharp
    WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true);
    ```

    No other tables are needed, the table can be named anything we want, and it can have pretty much any schema we want as long as we've got an ID and something that we can map to a username.

    Broaden database support to the whole SQL Server family

    While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there are places where SQL Server specific SQL statements are being executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point.

    Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run on Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts?

    Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers bring to the ASP.NET Membership system.

    Easy to use with Entity Framework Code First

    The problem with ASP.NET Membership's system for storing additional account information is that it's the gatekeeper. That means you're stuck with its schema and with accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical example based on the AccountModel.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class.

    ```csharp
    [Table("UserProfile")]
    public class UserProfile
    {
        [Key]
        [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
        public int UserId { get; set; }
        public string UserName { get; set; }
        public DateTime Birthday { get; set; }
    }
    ```

    Now if I want to access that information, I can just grab the account by username and read the value.
    ```csharp
    var context = new UsersContext();
    var username = User.Identity.Name;
    var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username);
    var birthday = user.Birthday;
    ```

    So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define, rather than a bunch of entries in membership tables that are out of your control.

    How SimpleMembership integrates with ASP.NET Membership

    Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships):

    [Diagram: relationship between SimpleMembershipProvider, ExtendedMembershipProvider, and MembershipProvider]

    So SimpleMembershipProvider is an implementation of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider:

    [Diagram: members added by ExtendedMembershipProvider]

    The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates.

    Membership in the ASP.NET MVC 4 project templates

    ASP.NET MVC 4 offers six project templates:

    - Empty - Really empty: just the assemblies, folder structure, and a tiny bit of basic configuration.
    - Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling).
    - Internet - This has both a Home and Account controller and associated views. The Account controller supports registration and login via either local accounts or OAuth / OpenID providers.
    - Intranet - Like the Internet template, but preconfigured for Windows Authentication.
    - Mobile - Preconfigured using jQuery Mobile and intended for mobile-only sites.
    - Web API - Preconfigured for a service backend built on ASP.NET Web API.

    Out of these templates, only one (the Internet template) uses SimpleMembership.

    ASP.NET MVC 4 Basic template

    The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers.
    You can see that configuration in the ASP.NET MVC 4 Basic template's web.config:

    ```xml
    <profile defaultProvider="DefaultProfileProvider">
      <providers>
        <add name="DefaultProfileProvider"
             type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             connectionStringName="DefaultConnection" applicationName="/" />
      </providers>
    </profile>
    <membership defaultProvider="DefaultMembershipProvider">
      <providers>
        <add name="DefaultMembershipProvider"
             type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             connectionStringName="DefaultConnection" enablePasswordRetrieval="false"
             enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false"
             maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6"
             minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
      </providers>
    </membership>
    <roleManager defaultProvider="DefaultRoleProvider">
      <providers>
        <add name="DefaultRoleProvider"
             type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             connectionStringName="DefaultConnection" applicationName="/" />
      </providers>
    </roleManager>
    <sessionState mode="InProc" customProvider="DefaultSessionProvider">
      <providers>
        <add name="DefaultSessionProvider"
             type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             connectionStringName="DefaultConnection" />
      </providers>
    </sessionState>
    ```

    This means that it's business as usual for the Basic template as far as ASP.NET Membership works.

    ASP.NET MVC 4 Internet template

    The Internet template has a few things set up to bootstrap SimpleMembership:

    - \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such
    - \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection, which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime)
    - \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity

    WebSecurity provides account management services for ASP.NET MVC (and Web Pages)

    WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider) but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider. Practical example:

    1. Create a new ASP.NET MVC 4 application using the Internet application template
    2. Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package
    3. Run the application, click on Register, add a username and password, and click submit

    You'll get the following exception in AccountController.cs::Register: "To call this method, the 'Membership.Provider' property must be an instance of 'ExtendedMembershipProvider'."

    This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above.
    When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks whether it can be cast to an ExtendedMembershipProvider before doing anything else. So, what do you do? Options:

    - If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward.
    - If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:
      - Replace the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with ones from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuth package), etc.
      - Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModel classes.
      - Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes.

    None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET application but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably, the team) know if that's an incorrect assumption.

    Membership in the ASP.NET 4.5 project template

    ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses the Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a membership adapter to save OAuth data into an EF managed database while still running on top of ASP.NET Membership. Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up.

    How does this fit in with Universal Providers (System.Web.Providers)?

    Just to summarize:

    - Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with another SQL Server family database backend (other than full SQL Server). They don't require agents to handle expired session cleanup and other background tasks; they piggyback these tasks on other calls.
    - Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family.
    - Universal Providers do not work with SimpleMembership.
    - The Universal Providers packages include some web.config transforms which you would normally want when you're using them.

    What about the Web Site Administration Tool?

    Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with SimpleMembership. There are two main options there:

    - Use the WebSecurity and OAuthWebSecurity API to manage the users and roles
    - Create a web admin using the above APIs

    Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even direct database edits (in development, of course).
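    As a rough illustration of the first option (a sketch, not from the original article; the user, password, and role names are placeholders), user and role management with SimpleMembership initialized can look like this:

    ```csharp
    using System.Web.Security;
    using WebMatrix.WebData;

    // Sketch: seed a user and role through the WebSecurity / Roles APIs instead
    // of WSAT. Assumes WebSecurity.InitializeDatabaseConnection has already run
    // (the Internet template does this in InitializeSimpleMembershipAttribute).
    public static class MembershipSeeder
    {
        public static void Seed()
        {
            if (!WebSecurity.UserExists("admin"))
                WebSecurity.CreateUserAndAccount("admin", "Pass@word1");

            if (!Roles.RoleExists("Administrators"))
                Roles.CreateRole("Administrators");

            if (!Roles.IsUserInRole("admin", "Administrators"))
                Roles.AddUserToRole("admin", "Administrators");
        }
    }
    ```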


  • Creating Custom Ajax Control Toolkit Controls

    - by Stephen Walther
    The goal of this blog entry is to explain how you can extend the Ajax Control Toolkit with custom Ajax Control Toolkit controls. I describe how you can create the two halves of an Ajax Control Toolkit control: the server-side control extender and the client-side control behavior. Finally, I explain how you can use the new Ajax Control Toolkit control in a Web Forms page. At the end of this blog entry, there is a link to download a Visual Studio 2010 solution which contains the code for two Ajax Control Toolkit controls: SampleExtender and PopupHelpExtender. The SampleExtender contains the minimum skeleton for creating a new Ajax Control Toolkit control. You can use the SampleExtender as a starting point for your custom Ajax Control Toolkit controls. The PopupHelpExtender control is a super simple custom Ajax Control Toolkit control. This control extender displays a help message when you start typing into a TextBox control. The animated GIF below demonstrates what happens when you click into a TextBox which has been extended with the PopupHelp extender.

    [Animated GIF: the PopupHelp extender showing a help message under a TextBox]

    Here's a sample of a Web Forms page which uses the control:

    ```aspx
    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowPopupHelp.aspx.cs" Inherits="MyACTControls.Web.Default" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html>
    <head runat="server">
        <title>Show Popup Help</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <act:ToolkitScriptManager ID="tsm" runat="server" />

            <%-- Social Security Number --%>
            <asp:Label ID="lblSSN" Text="SSN:" AssociatedControlID="txtSSN" runat="server" />
            <asp:TextBox ID="txtSSN" runat="server" />
            <act:PopupHelpExtender id="ph1" TargetControlID="txtSSN"
                HelpText="Please enter your social security number." runat="server" />

            <%-- Phone Number --%>
            <asp:Label ID="lblPhone" Text="Phone Number:" AssociatedControlID="txtPhone" runat="server" />
            <asp:TextBox ID="txtPhone" runat="server" />
            <act:PopupHelpExtender id="ph2" TargetControlID="txtPhone"
                HelpText="Please enter your phone number." runat="server" />
        </div>
        </form>
    </body>
    </html>
    ```

    In the page above, the PopupHelp extender is used to extend the functionality of the two TextBox controls. When focus is given to a TextBox control, the popup help message is displayed. An Ajax Control Toolkit control extender consists of two parts: a server-side control extender and a client-side behavior. For example, the PopupHelp extender consists of a server-side PopupHelpExtender control (PopupHelpExtender.cs) and a client-side PopupHelp behavior JavaScript script (PopupHelpBehavior.js). Over the course of this blog entry, I describe how you can create both the server-side extender and the client-side behavior.

    Writing the Server-Side Code

    Creating a Control Extender

    You create a control extender by creating a class that inherits from the abstract ExtenderControlBase class. For example, the PopupHelpExtender control is declared like this:

    ```csharp
    public class PopupHelpExtender : ExtenderControlBase
    {
    }
    ```

    The ExtenderControlBase class is part of the Ajax Control Toolkit. This base class contains all of the common server properties and methods of every Ajax Control Toolkit extender control. The ExtenderControlBase class inherits from the ExtenderControl class. The ExtenderControl class is a standard class in the ASP.NET framework located in the System.Web.UI namespace. This class is responsible for generating a client-side behavior.
The class generates a call to the Microsoft Ajax Library $create() method which looks like this:

<script type="text/javascript">
    $create(MyACTControls.PopupHelpBehavior,
        {"HelpText":"Please enter your social security number.","id":"ph1"},
        null, null, $get("txtSSN"));
</script>

The JavaScript $create() method is part of the Microsoft Ajax Library. The reference for this method can be found here: http://msdn.microsoft.com/en-us/library/bb397487.aspx

This method accepts the following parameters:

- type – The type of client behavior to create. The $create() method above creates a client PopupHelpBehavior.
- properties – Enables you to pass initial values for the properties of the client behavior, for example the initial value of the HelpText property. This is how server property values are passed to the client.
- events – Enables you to pass client-side event handlers to the client behavior.
- references – Enables you to pass references to other client components.
- element – The DOM element associated with the client behavior. This will be the DOM element associated with the control being extended, such as the txtSSN TextBox.

The $create() method is generated for you automatically. You just need to focus on writing the server-side control extender class.

Specifying the Target Control

All Ajax Control Toolkit extenders inherit a TargetControlID property from the ExtenderControlBase class. This property points at the control that the extender control extends. For example, the Ajax Control Toolkit TextBoxWatermark control extends a TextBox, the ConfirmButton control extends a Button, and the Calendar control extends a TextBox. You must indicate the type of control which your extender extends by adding a [TargetControlType] attribute to your control. For example, the PopupHelp extender is declared like this:

[TargetControlType(typeof(TextBox))]
public class PopupHelpExtender : ExtenderControlBase
{
}

The PopupHelp extender can be used to extend a TextBox control. If you try to use the PopupHelp extender with another type of control then an exception is thrown. If you want to create an extender control which can be used with any type of ASP.NET control (Button, DataView, TextBox, or whatever) then use the following attribute:

[TargetControlType(typeof(Control))]

Decorating Properties with Attributes

If you decorate a server-side property with the [ExtenderControlProperty] attribute then the value of the property gets passed to the control’s client-side behavior. The value of the property gets passed to the client through the $create() method discussed above. The PopupHelp control contains the following HelpText property:

[ExtenderControlProperty]
[RequiredProperty]
public string HelpText
{
    get { return GetPropertyValue("HelpText", "Help Text"); }
    set { SetPropertyValue("HelpText", value); }
}

The HelpText property determines the help text which pops up when you start typing into a TextBox control. Because the HelpText property is decorated with the [ExtenderControlProperty] attribute, any value assigned to this property on the server is passed to the client automatically. For example, if you declare the PopupHelp extender in a Web Forms page like this:

<asp:TextBox ID="txtSSN" runat="server" />
<act:PopupHelpExtender
    id="ph1"
    TargetControlID="txtSSN"
    HelpText="Please enter your social security number."
    runat="server" />

Then the PopupHelpExtender renders the call to the following Microsoft Ajax Library $create() method:

$create(MyACTControls.PopupHelpBehavior, {"HelpText":"Please enter your social security number.","id":"ph1"}, null, null, $get("txtSSN"));

You can see this call to the JavaScript $create() method by selecting View Source in your browser. This call to the $create() method calls a method named set_HelpText() automatically and passes the value “Please enter your social security number”. There are several attributes which you can use to decorate server-side properties, including:

- ExtenderControlProperty – When a property is marked with this attribute, the value of the property is passed to the client automatically.
- ExtenderControlEvent – When a property is marked with this attribute, the property represents a client event handler.
- RequiredProperty – When a value is not assigned to this property on the server, an error is displayed.
- DefaultValue – The default value of the property passed to the client.
- ClientPropertyName – The name of the corresponding property in the JavaScript behavior. For example, the server-side property is named ID (upper-case) and the client-side property is named id (lower-case).
- IDReferenceProperty – Applied to properties which refer to the IDs of other controls.
- URLProperty – Calls ResolveClientURL() to convert from a server-side URL to a URL which can be used on the client.
- ElementReference – Returns a reference to a DOM element by performing a client $get().
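To make the attribute list above concrete, here is a hedged sketch of a property that combines several of these attributes (the TargetLabelID property is purely illustrative and is not part of the PopupHelp sample):

// Passed to the client behavior as "targetLabelID" in the generated $create() call,
// so the client-side behavior would expose get_targetLabelID()/set_targetLabelID().
// IDReferenceProperty marks the string as holding the ID of another control.
[ExtenderControlProperty]
[DefaultValue("")]
[ClientPropertyName("targetLabelID")]
[IDReferenceProperty(typeof(Label))]
public string TargetLabelID
{
    get { return GetPropertyValue("TargetLabelID", ""); }
    set { SetPropertyValue("TargetLabelID", value); }
}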
The WebResource, ClientScriptResource, and RequiredScript Attributes

The PopupHelp extender uses three embedded resources named PopupHelpBehavior.js, PopupHelpBehavior.debug.js, and PopupHelpBehavior.css. The first two files are JavaScript files and the final file is a Cascading Style Sheet file. These files are compiled as embedded resources. You don’t need to mark them as embedded resources in your Visual Studio solution because they get added to the assembly when the assembly is compiled by a build task. You can verify that these files get embedded into the MyACTControls assembly by using Red Gate’s .NET Reflector tool.

In order to use these files with the PopupHelp extender, you need to work with both the WebResource and the ClientScriptResource attributes. The PopupHelp extender includes the following three WebResource attributes:

[assembly: WebResource("PopupHelp.PopupHelpBehavior.js", "text/javascript")]
[assembly: WebResource("PopupHelp.PopupHelpBehavior.debug.js", "text/javascript")]
[assembly: WebResource("PopupHelp.PopupHelpBehavior.css", "text/css", PerformSubstitution = true)]

These WebResource attributes expose the embedded resources from the assembly so that they can be accessed by using the ScriptResource.axd or WebResource.axd handlers. The first parameter passed to the WebResource attribute is the name of the embedded resource and the second parameter is the content type of the embedded resource. The PopupHelp extender also includes the following ClientScriptResource and ClientCssResource attributes:

[ClientScriptResource("MyACTControls.PopupHelpBehavior", "PopupHelp.PopupHelpBehavior.js")]
[ClientCssResource("PopupHelp.PopupHelpBehavior.css")]

Including these attributes causes the PopupHelp extender to request these resources when you add the PopupHelp extender to a page.
If you open View Source in a browser which uses the PopupHelp extender then you will see the following link for the Cascading Style Sheet file:

<link href="/WebResource.axd?d=0uONMsWXUuEDG-pbJHAC1kuKiIMteQFkYLmZdkgv7X54TObqYoqVzU4mxvaa4zpn5H9ch0RDwRYKwtO8zM5mKgO6C4WbrbkWWidKR07LD1d4n4i_uNB1mHEvXdZu2Ae5mDdVNDV53znnBojzCzwvSw2&amp;t=634417392021676003" type="text/css" rel="stylesheet" />

You also will see the following script include for the JavaScript file:

<script src="/ScriptResource.axd?d=pIS7xcGaqvNLFBvExMBQSp_0xR3mpDfS0QVmmyu1aqDUjF06TrW1jVDyXNDMtBHxpRggLYDvgFTWOsrszflZEDqAcQCg-hDXjun7ON0Ol7EXPQIdOe1GLMceIDv3OeX658-tTq2LGdwXhC1-dE7_6g2&amp;t=ffffffff88a33b59" type="text/javascript"></script>

The JavaScript file returned by this request to ScriptResource.axd contains the combined scripts for any and all Ajax Control Toolkit controls in a page. By default, the Ajax Control Toolkit combines all of the JavaScript files required by a page into a single JavaScript file. Combining files in this way really speeds up how quickly all of the JavaScript files get delivered from the web server to the browser. So, by default, there will be only one ScriptResource.axd include for all of the JavaScript files required by a page. If you want to disable script combining, and create separate links, then disable it like this:

<act:ToolkitScriptManager ID="tsm" runat="server" CombineScripts="false" />

There is one more important attribute used by Ajax Control Toolkit extenders. The PopupHelp behavior uses the following two RequiredScript attributes to load the JavaScript files which are required by the PopupHelp behavior:

[RequiredScript(typeof(CommonToolkitScripts), 0)]
[RequiredScript(typeof(PopupExtender), 1)]

The first parameter of the RequiredScript attribute represents either the string name of a JavaScript file or the type of an Ajax Control Toolkit control. The second parameter represents the order in which the JavaScript files are loaded (this second parameter is needed because .NET attributes are intrinsically unordered). In this case, the RequiredScript attributes will load the JavaScript files associated with the CommonToolkitScripts type and the JavaScript files associated with the PopupExtender, in that order. The PopupHelp behavior depends on these JavaScript files.

Writing the Client-Side Code

The PopupHelp extender uses a client-side behavior written with the Microsoft Ajax Library.
Here is the complete code for the client-side behavior:

(function () {
    // The unique name of the script registered with the
    // client script loader
    var scriptName = "PopupHelpBehavior";

    function execute() {
        Type.registerNamespace('MyACTControls');

        MyACTControls.PopupHelpBehavior = function (element) {
            /// <summary>
            /// A behavior which displays popup help for a textbox
            /// </summary>
            /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param>
            MyACTControls.PopupHelpBehavior.initializeBase(this, [element]);
            this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element);
            this._cssClass = "ajax__popupHelp";
            this._popupBehavior = null;
            this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft;
            this._popupDiv = null;
            this._helpText = "Help Text";
            this._element$delegates = {
                focus: Function.createDelegate(this, this._element_onfocus),
                blur: Function.createDelegate(this, this._element_onblur)
            };
        }

        MyACTControls.PopupHelpBehavior.prototype = {
            initialize: function () {
                MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize');
                // Add event handlers for focus and blur
                var element = this.get_element();
                $addHandlers(element, this._element$delegates);
            },

            _ensurePopup: function () {
                if (!this._popupDiv) {
                    var element = this.get_element();
                    var id = this.get_id();
                    this._popupDiv = $common.createElementFromTemplate({
                        nodeName: "div",
                        properties: { id: id + "_popupDiv" },
                        cssClasses: ["ajax__popupHelp"]
                    }, element.parentNode);
                    this._popupBehavior = $create(Sys.Extended.UI.PopupBehavior,
                        { parentElement: element }, {}, {}, this._popupDiv);
                    this._popupBehavior.set_positioningMode(this._popupPosition);
                }
            },

            get_HelpText: function () {
                return this._helpText;
            },

            set_HelpText: function (value) {
                if (this._helpText != value) {
                    this._helpText = value;
                    this._ensurePopup();
                    this._popupDiv.innerHTML = value;
                    this.raisePropertyChanged("HelpText");
                }
            },

            _element_onfocus: function (e) {
                this.show();
            },

            _element_onblur: function (e) {
                this.hide();
            },

            show: function () {
                this._popupBehavior.show();
            },

            hide: function () {
                if (this._popupBehavior) {
                    this._popupBehavior.hide();
                }
            },

            dispose: function () {
                var element = this.get_element();
                $clearHandlers(element);
                if (this._popupBehavior) {
                    this._popupBehavior.dispose();
                    this._popupBehavior = null;
                }
                // Clean up the base behavior as well (standard Microsoft Ajax pattern)
                MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'dispose');
            }
        };

        MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase);
        Sys.registerComponent(MyACTControls.PopupHelpBehavior, { name: "popupHelp" });
    } // execute

    if (window.Sys && Sys.loader) {
        Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute);
    } else {
        execute();
    }
})();

In the following sections, we’ll discuss how this client-side behavior works.

Wrapping the Behavior for the Script Loader

The behavior is wrapped with the following script:

(function () {
    // The unique name of the script registered with the
    // client script loader
    var scriptName = "PopupHelpBehavior";

    function execute() {
        // Behavior Content
    }

    if (window.Sys && Sys.loader) {
        Sys.loader.registerScript(scriptName, ["ExtendedBase", "ExtendedCommon"], execute);
    } else {
        execute();
    }
})();

This code is required by the Microsoft Ajax Library Script Loader. You need this code if you plan to use a behavior directly from client-side code and you want to use the Script Loader. If you plan to only use your code in the context of the Ajax Control Toolkit then you can leave out this code.
Registering a JavaScript Namespace

The PopupHelp behavior is declared within a namespace named MyACTControls. In the code above, this namespace is created with the following registerNamespace() method:

Type.registerNamespace('MyACTControls');

JavaScript does not have any built-in way of creating namespaces to prevent naming conflicts. The Microsoft Ajax Library extends JavaScript with support for namespaces. You can learn more about the registerNamespace() method here: http://msdn.microsoft.com/en-us/library/bb397723.aspx

Creating the Behavior

The actual PopupHelp behavior is created with the following code:

MyACTControls.PopupHelpBehavior = function (element) {
    /// <summary>
    /// A behavior which displays popup help for a textbox
    /// </summary>
    /// <param name="element" type="Sys.UI.DomElement">The element to attach to</param>
    MyACTControls.PopupHelpBehavior.initializeBase(this, [element]);
    this._textbox = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(element);
    this._cssClass = "ajax__popupHelp";
    this._popupBehavior = null;
    this._popupPosition = Sys.Extended.UI.PositioningMode.BottomLeft;
    this._popupDiv = null;
    this._helpText = "Help Text";
    this._element$delegates = {
        focus: Function.createDelegate(this, this._element_onfocus),
        blur: Function.createDelegate(this, this._element_onblur)
    };
}

MyACTControls.PopupHelpBehavior.prototype = {
    initialize: function () {
        MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'initialize');
        // Add event handlers for focus and blur
        var element = this.get_element();
        $addHandlers(element, this._element$delegates);
    },

    _ensurePopup: function () {
        if (!this._popupDiv) {
            var element = this.get_element();
            var id = this.get_id();
            this._popupDiv = $common.createElementFromTemplate({
                nodeName: "div",
                properties: { id: id + "_popupDiv" },
                cssClasses: ["ajax__popupHelp"]
            }, element.parentNode);
            this._popupBehavior = $create(Sys.Extended.UI.PopupBehavior,
                { parentElement: element }, {}, {}, this._popupDiv);
            this._popupBehavior.set_positioningMode(this._popupPosition);
        }
    },

    get_HelpText: function () {
        return this._helpText;
    },

    set_HelpText: function (value) {
        if (this._helpText != value) {
            this._helpText = value;
            this._ensurePopup();
            this._popupDiv.innerHTML = value;
            this.raisePropertyChanged("HelpText");
        }
    },

    _element_onfocus: function (e) {
        this.show();
    },

    _element_onblur: function (e) {
        this.hide();
    },

    show: function () {
        this._popupBehavior.show();
    },

    hide: function () {
        if (this._popupBehavior) {
            this._popupBehavior.hide();
        }
    },

    dispose: function () {
        var element = this.get_element();
        $clearHandlers(element);
        if (this._popupBehavior) {
            this._popupBehavior.dispose();
            this._popupBehavior = null;
        }
        // Clean up the base behavior as well (standard Microsoft Ajax pattern)
        MyACTControls.PopupHelpBehavior.callBaseMethod(this, 'dispose');
    }
};

The code above has two parts. The first part of the code is used to define the constructor function for the PopupHelp behavior. This is a factory method which returns an instance of a PopupHelp behavior:

MyACTControls.PopupHelpBehavior = function (element) { }

The second part of the code modifies the prototype for the PopupHelp behavior:

MyACTControls.PopupHelpBehavior.prototype = { }

Any code which is particular to a single instance of the PopupHelp behavior should be placed in the constructor function. For example, the default value of the _helpText field is assigned in the constructor function:

this._helpText = "Help Text";

Any code which is shared among all instances of the PopupHelp behavior should be added to the PopupHelp behavior’s prototype.
For example, the public HelpText property is added to the prototype:

get_HelpText: function () {
    return this._helpText;
},

set_HelpText: function (value) {
    if (this._helpText != value) {
        this._helpText = value;
        this._ensurePopup();
        this._popupDiv.innerHTML = value;
        this.raisePropertyChanged("HelpText");
    }
},

Registering a JavaScript Class

After you create the PopupHelp behavior, you must register the behavior as a class by using the Microsoft Ajax registerClass() method like this:

MyACTControls.PopupHelpBehavior.registerClass('MyACTControls.PopupHelpBehavior', Sys.Extended.UI.BehaviorBase);

This call to registerClass() registers the PopupHelp behavior as a class which derives from the base Sys.Extended.UI.BehaviorBase class. Like the ExtenderControlBase class on the server side, the BehaviorBase class on the client side contains methods used by every behavior. The documentation for the BehaviorBase class can be found here: http://msdn.microsoft.com/en-us/library/bb311020.aspx

The most important methods and properties of the BehaviorBase class are the following:

- dispose() – Use this method to clean up all resources used by your behavior. In the case of the PopupHelp behavior, the dispose() method is used to remove the event handlers created by the behavior and to dispose the Popup behavior.
- get_element() – Use this property to get the DOM element associated with the behavior. In other words, the DOM element which the behavior extends.
- get_id() – Use this property to get the ID of the current behavior.
- initialize() – Use this method to initialize the behavior. This method is called after all of the properties are set by the $create() method.

Creating Debug and Release Scripts

You might have noticed that the PopupHelp behavior uses two scripts named PopupHelpBehavior.js and PopupHelpBehavior.debug.js. However, you never create these two scripts. Instead, you only create a single script named PopupHelpBehavior.pre.js. The pre in PopupHelpBehavior.pre.js stands for preprocessor. When you build the Ajax Control Toolkit (or the sample Visual Studio solution at the end of this blog entry), a build task named JSBuild generates the PopupHelpBehavior.js release script and the PopupHelpBehavior.debug.js debug script automatically. The JSBuild preprocessor supports the following directives: #IF, #ELSE, #ENDIF, #INCLUDE, #LOCALIZE, #DEFINE, and #UNDEFINE. The preprocessor directives are used to mark code which should only appear in the debug version of the script. The directives are used extensively in the Microsoft Ajax Library. For example, the Microsoft Ajax Library Array.contains() method is created like this:

$type.contains = function Array$contains(array, item) {
    //#if DEBUG
    var e = Function._validateParams(arguments, [
        { name: "array", type: Array, elementMayBeNull: true },
        { name: "item", mayBeNull: true }
    ]);
    if (e) throw e;
    //#endif
    return (Array.indexOf(array, item) >= 0);
}

Notice that you add each of the preprocessor directives inside a JavaScript comment. The comment prevents Visual Studio from getting confused with its Intellisense. The release version, but not the debug version, of the PopupHelpBehavior script is also minified automatically by the Microsoft Ajax Minifier. The minifier is invoked by a build step in the project file.

Conclusion

The goal of this blog entry was to explain how you can create custom Ajax Control Toolkit controls. In the first part of this blog entry, you learned how to create the server-side portion of an Ajax Control Toolkit control.
You learned how to derive a new control from the ExtenderControlBase class and decorate its properties with the necessary attributes. Next, in the second part of this blog entry, you learned how to create the client-side portion of an Ajax Control Toolkit control by creating a client-side behavior with JavaScript. You learned how to use the methods of the Microsoft Ajax Library to extend your client behavior from the BehaviorBase class. Download the Custom ACT Starter Solution

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
This morning we released a huge set of updates to Windows Azure.  These new capabilities include:

- Backup Services: General Availability of Windows Azure Backup Services
- Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
- Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
- Active Directory: Securely manage hundreds of SaaS applications
- Enterprise Management: Use Active Directory to Better Manage Windows Azure
- Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

All of these improvements are now available to use immediately.  Below are more details about them.

Backup Service: General Availability Release of Windows Azure Backup

Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way, from any location, with no upfront hardware cost.

Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself).

Getting Started

To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

- Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
- Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks.

Below are some of the key benefits the Windows Azure Backup Service provides:
- Simple configuration and management. Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.
- Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block-level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
- Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.
- Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically.
- Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

Hyper-V Recovery Manager: Now Available in Public Preview

I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM).

Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime.

Application data in Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.

You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide.

Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include:

Ability to Delete both VM Instances + Attached Disks in One Operation

Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM. We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button.

Warnings on Availability Sets with Only One Virtual Machine in Them

One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a highly available way.

One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

Configuring SQL Server AlwaysOn

SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.

Active Directory: Application Access Enhancements

This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB-based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013).

Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications.

Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it:

- SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
- Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
- User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
- Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
- Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today.

Enterprise Management: Use Active Directory to Better Manage Windows Azure

Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically:

1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

Below are some details on how all of this works:

Subscriptions within a Directory

As part of today’s update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you log in to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within.

Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like it did before.

Manage your Directory

You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it.

You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also set up the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab.

Note that users within a directory by default do not have admin rights to log in or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
Sign-In Integration within Visual Studio

If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory!

Manage Subscriptions across Multiple Directories

If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

Filtering By Directory and Subscription

Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

Windows Azure SDK 2.2

Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including:

- Visual Studio 2013 Support
- Integrated Windows Azure Sign-In support within Visual Studio
- Remote Debugging Cloud Services with Visual Studio
- Firewall Management support within Visual Studio for SQL Databases
- Visual Studio 2013 RTM VM Images for MSDN Subscribers
- Windows Azure Management Libraries for .NET
- Updated Windows Azure PowerShell Cmdlets and ScriptCenter

I’ll post a follow-up blog shortly with more details about all of the above.
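To give a rough feel for the new Windows Azure Management Libraries for .NET mentioned above, here is a minimal sketch of listing the cloud services in a subscription. This assumes the Microsoft.WindowsAzure.Management.Compute NuGet package; the certificate path, password, and subscription ID are placeholders, and certificate credentials are shown simply because they work without any Active Directory setup:

using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Management.Compute;

class ListCloudServices
{
    static void Main()
    {
        // Certificate-based credentials; the new Active Directory sign-in
        // described above can be used instead via token-based credentials.
        var credentials = new CertificateCloudCredentials(
            "your-subscription-id",
            new X509Certificate2(@"C:\certs\management.pfx", "password"));

        using (var compute = new ComputeManagementClient(credentials))
        {
            // Enumerate the cloud services (hosted services) in the subscription.
            foreach (var service in compute.HostedServices.List())
            {
                Console.WriteLine(service.ServiceName);
            }
        }
    }
}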
Additional Updates

In addition to the above enhancements, today’s release also includes a number of additional improvements:

- AutoScale: Richer time and date based scheduling support (set different rules on different dates)
- AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
- AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
- Operation Logs: Auditing support for Service Bus management operations

Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

Summary

Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering.

If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places, even very remote ones, cell phones and even smart phones are a common sight.

I’ll never forget when I was in India in 2011, up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder riding atop the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban, as were quite a few of the locals in this small, out of the way and not so touristy village. So we’re slowly trundling along in the forest and he’s lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting, a huge pig jumps out from the side of the trail and he takes a picture of it running across our path in the jungle! So yeah, mobile technology is very pervasive and it’s reached into even very buried and unexpected parts of this world.

Apps are still King

Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there’s something that you need on your mobile device your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to support 2 or 3 completely separate platforms.

There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration issues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development.

There’s also PhoneGap/Cordova, which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized app container that can run on most mobile device platforms using a WebView interface. This allows for using HTML technology, but it also still requires all the setup, configuration of APIs, security keys, certification, and the submission and deployment process, just like native applications – you actually lose many of the benefits that Web based apps bring. The big selling point of Cordova is that you get to use HTML, with the ability to build your UI once for all platforms and run it across all of them – but the rest of the app process remains in place.

Apps can be a big pain to create and manage, especially when we are talking about specialized or vertical business applications that aren’t geared at the mainstream market and that don’t fit the ‘store’ model. If you’re building a small intra-department application you don’t want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit, both from the development and the deployment perspective.
Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native: from write-once run-anywhere, to remote maintenance, to a single point of management and failure, to having full control over the application as opposed to having the app store overlords censor you.

In a lot of ways Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies, for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today’s Mobile Web unfortunately looks a little different…

Where’s the Love for the Mobile Web?

Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and we still don’t really have a solid mobile Web platform.

I know what you’re thinking: “But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces”. I agree to a point – it’s actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features, and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I’ve been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile first UIs, and that’s been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good.

Not just about the UI

But it’s not just about the UI – it’s also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices.

I think there are a number of reasons for this.

Slow Standards Adoption

HTML standards implementation and ratification has been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that’s missing some of the infrastructure pieces to make it whole on mobile devices. There’s lots of potential, but it's lacking that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications.

Some of it is the fragmentation of browsers and the slow evolution of the mobile-specific HTML APIs. A host of mobile standards exist, but many of them have been stuck in the early review stage for long periods of time and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud.
It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn’t seem to be happening quickly enough.

Mobile Device Integration just isn’t good enough

Current standards are not far-reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs to have access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking and the contacts system are benefits that are unique to mobile applications and could be widely used, but are mostly (with the exception of GPS) inaccessible for Web based applications today.

Unfortunately, trying to do most of this today with only a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova’s app-centric model with its own custom API accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example, there’s no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only implemented partially by mobile devices. There are alternatives and workarounds for some of these interfaces by using browser specific code, but that’s mighty ugly and something that I thought we were trying to leave behind with newer browser standards. But it’s not quite working out that way.

It’s utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, the Messaging API, and the Contacts Manager API have only minimal or no traction at all today. Keep in mind we’ve had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that’s a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago.

It’s extremely frustrating to be able to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often can be show stoppers. The remaining 10% have to do with platform integration, browser differences and working around the limitations that browsers and ‘pinned’ applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL moniker interface (a plain link along the lines of <a href="sms:+15551234567">…</a>) that sort of works on Android, works badly with iOS (only works if the address is already in the contact list) and not at all on Windows Phone. There’s no reason this shouldn’t work universally using the same interface – after all, all phones have supported SMS since before the year 2000!

But, it doesn’t have to be this way

Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers. It handles security concerns via prompts to avoid unwanted access, which is a model that would work for most other device APIs in a similar fashion.
One-time approval, with occasional re-approval if code changes or caches expire – simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there.

Inadequate Web Application Linking and Activation

Another important piece of the puzzle missing is the integration of HTML based Web applications. Today HTML based applications are not first class citizens on mobile operating systems.

When talking about HTML based content there’s a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently, and permanent linking is not so critical for this type of content. But applications have different needs. Applications need to start up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it’s pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features.

Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications, and continues with ‘installing’ them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a mobile Web ‘app’ and making it sticky is by no means as easy or satisfying as it is for native apps. Today the way you’d go about this is:

- Open the browser
- Search for a Web site in the browser with your search engine of choice
- Hope that you find the right site
- Hope that you actually find a site that works for your mobile device
- Click on the link and run the app in a fully chrome’d browser instance (read: tiny surface area)
- Pin the app to the home screen (with all the limitations outlined above)
- Hope you pointed at the right URL when you pinned

Even for you and me as developers, there are a few steps in there that are painful and annoying. But think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app, and hopefully from the right location? You’ve probably lost more than half of your audience at that point. This experience sucks.

For developers too this process is painful, since app developers can’t control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with pop-ups that try to get people to create shortcuts via fancy animations – schemes that are both annoying and add overhead to each and every application that implements this sort of thing differently.

And that’s not the end of it – getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple’s non-standard meta tags are prominent and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires an actual screen – or rather a partial screen – to be captured for a shortcut in the tile manager. Who had that brilliant idea I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right – icons use PNGs, pinning is easy, and pinned applications properly behave like standalone apps and retain the browser’s active page state and content.
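For reference, the Apple-specific markup mentioned above boils down to a couple of tags in the page header – a representative sketch, with icon sizes and file names being illustrative and varying by device:

<!-- Home screen icon and 'run without browser chrome' hint (Apple-specific, but honored by some Android browsers) -->
<link rel="apple-touch-icon" href="touch-icon.png" />
<meta name="apple-mobile-web-app-capable" content="yes" />

It says something about the state of the mobile Web that this proprietary markup, rather than a ratified standard, is the closest thing we have to a cross-browser convention.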
Each of the platforms has a different way to specify icons (WP doesn’t allow you to use an icon image at all), and the most widely used interface in use today is a bunch of Apple specific meta tags that other browsers choose to support.

The question is: Why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one?

Then there’s iOS and the crazy way it treats home screen linked URLs, using a crazy hybrid format that is neither as capable as a Web app running in Safari nor a WebView hosted application. Moving off the Web ‘app’ link when switching to another app actually causes the browser to ‘blank out’ the Web application and its preview in the Task View (see screenshot on the right). Then, when the ‘app’ is reactivated it ends up completely restarting the browser with the original link. This is crazy behavior that you can’t easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like, say, Google Maps) that’s not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route.

App linking and management is a very basic problem – something that we have essentially solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to have even a remotely decent linking/activation experience across browsers.

Where’s the Money?

It’s not surprising that device home screen integration and Mobile Web support in general is in such dismal shape – the mobile OS vendors benefit financially from App Store sales and have little to gain from Web based applications that bypass the App Store and the cash cow that it presents. On top of that, platform-specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, and so again it’s no surprise that the mobile Web is on a struggling path.

But – that may be changing. More and more we’re seeing operations shift to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward.

Do we need a Mobile Web App Store?

As much as I dislike moderated experiences in today’s massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications.

Coupled to that could be a native application on each platform that would allow searching and browsing of the registry, and would then also handle installation in the form of providing the home screen linking, plus maybe an initial security configuration that determines which features the app is allowed to access.
As for such a store: I’m not holding my breath. For this sort of thing to take off and gain widespread appeal, a lot of coordination would be required, and to get enough traction it would have to come from a well known entity – a mobile Web app store from a no-name source is unlikely to reach high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but it would also only be an optional search path, in addition to the standard open Web search mechanisms we use to find and access content today.

Security Security is a big deal, and one of the reasons so many IT professionals appear willing to go back to the walled garden of deployed apps is that apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware and from illegal and misleading content. It doesn’t always work out that way, and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run completely externally, in the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier – device access can be granted to mobile applications through a Web browser the same way it is granted to native applications: either via explicit policies loaded from the Web, or via prompting, as GeoLocation does today. Security is important, but it’s certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn’t be the reason that keeps Web apps from being an equal player in mobile applications.

Apps are winning, but haven’t we been here before? So now we find ourselves back in an era of installed apps, rather than Web based, centrally managed apps. Only it’s even worse today than it was with desktop applications, in that the apps go through a gatekeeper that charges a toll and censors what you can and can’t do in your apps. Frankly it’s a mystery to me why anybody would buy into this model, and why it has lasted this long when we’ve already been through this process once. It’s crazy… It’s really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we’re basically held back by what seems little more than bureaucracy, partisan bickering and the self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer’s 98+% market share holding the Web back from improvements for many years – now it’s the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way toward making mobile Web applications much more usable and seriously viable alternatives to native apps. But as it is, mobile Web apps operate at a severe disadvantage in placement and operation. There are a few bright spots in all of this. Mozilla’s FirefoxOS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content.
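To give a flavor of that model: a FirefoxOS app is described by a small JSON manifest (manifest.webapp) served alongside the HTML. A minimal sketch, with all names and paths being hypothetical:

{
  "name": "My Web App",
  "description": "Plain HTML and JavaScript, installable like an app",
  "launch_path": "/index.html",
  "icons": { "128": "/img/icon-128.png" },
  "developer": { "name": "Example Dev", "url": "http://example.com" },
  "default_locale": "en"
}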
FirefoxOS supports both packaged and certified package modes (which can be put into the app store), as well as Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web apps are treated as first class citizens in FirefoxOS and run using the same mechanism as installed apps. Unfortunately FirefoxOS is off to a slow start, with minimal device support and a specific focus on the low end market. We can hope that this approach catches on with other vendors, but that’s also an uphill battle given the conflict of interest with the platform lock-in it represents. Recent versions of Android also seem to work reasonably well with mobile application integration onto the desktop and activation out of the box. Although Android still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons go to the desktop on pinning, activation is WebView based and full screen, and application persistence is reliable because the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support.

What’s also interesting to me is that Microsoft hasn’t picked up on the obvious need for a solid Web app platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on their adversaries’ terms instead of taking a new tack. Providing a kick-ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive Microsoft could do to improve its miserable position in the mobile device market.

Where are we at with Mobile Web? It sure sounds like I’m really down on the Mobile Web, right? I’ve built a number of mobile apps in the last year, and while the response to what we were able to accomplish in terms of UI has been very positive, getting that final 10% – the part that required device integration – dialed in was an absolute nightmare on every single one of them. Big compromises had to be made, and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you’re not integrating with device features and you don’t care deeply about smooth integration with the mobile desktop, mobile Web development is fraught with frustration. So, yes, I’m frustrated! But it’s not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn’t be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens. So, what’s your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion.
Resources: Rick Strahl on DotNet Rocks talking about Mobile Web
© Rick Strahl, West Wind Technologies, 2005-2014. Posted in HTML5, Mobile

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
Many things are new in Solaris 11, and Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning. First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.  What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple: # zfs create -o mountpoint=/export/repo rpool/ai/repo # zfs create rpool/ai/repo/sol11 # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt # rsync -aP /mnt/repo /export/repo/sol11 # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@fcs # pkgrepo info -s /export/repo/sol11/repo PUBLISHER PACKAGES STATUS UPDATED solaris 4292 online 2012-03-12T20:47:15.378639Z That's all there is to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher: # pkg set-publisher -g file:///export/repo/sol11/repo solaris In case I want to access this repository over a (virtual) network, I'll now quickly activate the repository service: # svccfg -s application/pkg/server \ setprop pkg/inst_root=/export/repo/sol11/repo # svccfg -s application/pkg/server setprop pkg/readonly=true # svcadm refresh application/pkg/server # svcadm enable application/pkg/server That's all you need - now point your browser to http://localhost/ to view your beautiful repository server. Step 1 is done.  All of this, by the way, is nicely documented in the README file contained in the repository image. Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: a full repository with all updates and, at the same time, incremental ones up to each of the updates: # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*' # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@sru4 # zfs set snapdir=visible rpool/ai/repo/sol11 # svcadm restart svc:/application/pkg/server:default The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port. But now let's continue with the AI server.  Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first: # zfs create -o mountpoint=/export/install rpool/ai/install # zfs create -o mountpoint=/zones rpool/zones # zonecfg -z ai-server zonecfg:ai-server> create create: Using system default template 'SYSdefault' zonecfg:ai-server> set zonepath=/zones/ai-server zonecfg:ai-server> add dataset zonecfg:ai-server:dataset> set name=rpool/ai/install zonecfg:ai-server:dataset> set alias=install zonecfg:ai-server:dataset> end zonecfg:ai-server> commit zonecfg:ai-server> exit # zoneadm -z ai-server install # zoneadm -z ai-server boot ; zlogin -C ai-server Give it a hostname and IP address at first boot, and there's the Zone.  As publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now: # zlogin ai-server root@ai-server:~# pkg install install/installadm root@ai-server:~# installadm create-service -n x86-fcs -a i386 \ -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \ -d /export/install/fcs -i 192.168.2.20 -c 3 With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC server comes with several SMF services; we at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.) The boot environment for clients is created in /export/install/fcs and the DHCP server is configured for 3 IP addresses starting at 192.168.2.20.  This configuration is stored in a very human readable form in /etc/inet/dhcpd4.conf.  An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option. Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server.  The true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients. The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.
Hence, profiles are very similar to the old sysid.cfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one.  Then modify that to our liking and give it back to the install server to use: root@ai-server:~# mkdir -p /export/install/configs/manifests root@ai-server:~# cd /export/install/configs/manifests root@ai-server:~# installadm export -n x86-fcs -m orig_default \ -o orig_default.xml root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml root@ai-server:~# vi s11-fcs.small.local.xml root@ai-server:~# more s11-fcs.small.local.xml <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="S11 Small fcs local"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="IPS"> <destination> <image> <!-- Specify locales to install --> <facet set="false">facet.locale.*</facet> <facet set="true">facet.locale.de</facet> <facet set="true">facet.locale.de_DE</facet> <facet set="true">facet.locale.en</facet> <facet set="true">facet.locale.en_US</facet> </image> </destination> <source> <publisher name="solaris"> <origin name="http://192.168.2.12/"/> </publisher> </source> <!-- By default the latest build available, in the specified IPS repository, is installed. If another build is required, the build number has to be appended to the 'entire' package in the following form: <name>pkg:/[email protected]#</name> --> <software_data action="install"> <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name> <name>pkg:/group/system/solaris-small-server</name> </software_data> </software> </ai_instance> </auto_install> root@ai-server:~# installadm create-manifest -n x86-fcs -d \ -f ./s11-fcs.small.local.xml root@ai-server:~# installadm list -m -n x86-fcs Manifest Status Criteria -------- ------ -------- S11 Small fcs local Default None orig_default Inactive None The major points in this new manifest are: Install "solaris-small-server"; install a few locales less than the default (I'm not that fluent in French or Japanese...); use my own package service as publisher, running on IP address 192.168.2.12; install the initial release of Solaris 11: pkg:/[email protected],5.11-0.175.0.0.0.2.0. Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.
The modular approach makes it easy to configure numerous clients later on: root@ai-server:~# mkdir -p /export/install/configs/profiles root@ai-server:~# cd /export/install/configs/profiles root@ai-server:~# sysconfig create-profile -o default.xml root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml root@ai-server:~# cp default.xml user.xml root@ai-server:~# vi general.xml mars.xml user.xml root@ai-server:~# more general.xml mars.xml user.xml :::::::::::::: general.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="Europe/Berlin"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="vt100"/> </property_group> </instance> </service> <service version="1" type="service" name="network/physical"> <instance enabled="true" name="default"> <property_group type="application" name="netcfg"> <propval type="astring" name="active_ncp" value="DefaultFixed"/> </property_group> </instance> </service> <service version="1" type="service" name="system/name-service/switch"> <property_group type="application" name="config"> <propval type="astring" name="default" value="files"/> <propval type="astring" name="host" value="files dns"/> <propval type="astring" name="printer" value="user files"/> </property_group> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="system/name-service/cache"> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="network/dns/client"> <property_group type="application" name="config"> <property type="net_address" name="nameserver"> <net_address_list> <value_node value="192.168.2.1"/> </net_address_list> </property> </property_group> <instance enabled="true" name="default"/> </service> </service_bundle> :::::::::::::: mars.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="network/install"> <instance enabled="true" name="default"> <property_group type="application" name="install_ipv4_interface"> <propval type="astring" name="address_type" value="static"/> <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/> <propval type="astring" name="name" value="net0/v4"/> <propval type="net_address_v4" name="default_route" value="192.168.2.1"/> </property_group> <property_group type="application" name="install_ipv6_interface"> <propval type="astring" name="stateful" value="yes"/> <propval type="astring" name="stateless" value="yes"/> <propval type="astring" 
name="address_type" value="addrconf"/> <propval type="astring" name="name" value="net0/v6"/> </property_group> </instance> </service> <service version="1" type="service" name="system/identity"> <instance enabled="true" name="node"> <property_group type="application" name="config"> <propval type="astring" name="nodename" value="mars"/> </property_group> </instance> </service> </service_bundle> :::::::::::::: user.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="role"/> </property_group> <property_group type="application" name="user_account"> <propval type="astring" name="login" value="stefan"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="normal"/> <propval type="astring" name="description" value="Stefan Hinker"/> <propval type="count" name="uid" value="12345"/> <propval type="count" name="gid" value="10"/> <propval type="astring" name="shell" value="/usr/bin/bash"/> <propval type="astring" name="roles" value="root"/> <propval type="astring" name="profiles" value="System Administrator"/> <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/> </property_group> </instance> </service> </service_bundle> root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \ -c ipv4=192.168.2.100 root@ai-server:~# installadm list -p Service Name Profile ------------ ------- x86-fcs general.xml mars.xml user.xml root@ai-server:~# installadm list -n x86-fcs -p Profile Criteria ------- -------- general.xml None mars.xml ipv4 = 192.168.2.100 user.xml None Here's the idea behind these files: "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same. "user.xml" only contains user definitions.  That is, a root password and a primary user.Both of these profiles will be valid for all clients (for now). "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step: root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs root@ai-server:~# vi /etc/inet/dhcpd4.conf root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf host 080027AA3DB1 { hardware ethernet 08:00:27:AA:3D:B1; fixed-address 192.168.2.100; filename "01080027AA3DB1"; } This completes the client preparations.  I manually added the IP-Address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP-replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead. root@ai-server:~# cd /etc/netboot root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org root@ai-server:~# vi menu.lst.01080027AA3DB1 root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org 1,2c1,2 < default=1 < timeout=10 --- > default=0 > timeout=30 root@ai-server:~# more menu.lst.01080027AA3DB1 default=1 timeout=10 min_mem64=0 title Oracle Solaris 11 11/11 Text Installer and command line kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555 module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive title Oracle Solaris 11 11/11 Automated Install kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive Now just boot the client off the network using PXE boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?
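One final sanity check before the PXE boot is worth doing: listing what the install service has registered.  A minimal sketch – the -m and -p flags appear above; the -c flag for listing clients is assumed to behave the same way on this release:

root@ai-server:~# installadm list -n x86-fcs -m   # manifests and their criteria
root@ai-server:~# installadm list -n x86-fcs -p   # profiles and their criteria
root@ai-server:~# installadm list -n x86-fcs -c   # registered clients (by MAC address)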

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured the same way.  Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id) ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id) ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id) ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output.  To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
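(As an aside, not part of the original test harness: a grant like this can also be watched from a second session via the memory grants DMV while the test query runs.  A minimal sketch:)

-- Snapshot of active memory grants; run from another session during the test
SELECT MG.session_id,
       MG.requested_memory_kb,
       MG.granted_memory_kb,
       MG.used_memory_kb,
       ST.[text]
FROM sys.dm_exec_query_memory_grants AS MG
CROSS APPLY sys.dm_exec_sql_text(MG.sql_handle) AS ST
ORDER BY MG.granted_memory_kb DESC;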
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The TEXT type, by contrast, is stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id) ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; MAXOOR Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 0 Table 'TestMAX' logical reads 25511 lob logical reads 0 Table 'TestTEXT' logical reads 412 lob logical reads 597 Table 'TestMAXOOR' logical reads 413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan. Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea. Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production, that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • XNA Screen Manager problem with transitions

    - by NexAddo
I'm having issues using the GameStateManagement example in the game I am developing. I have no issues with my first three screens transitioning between one another. I have a main menu screen, a splash screen and a high score screen that cycle: mainMenuScreen->splashScreen->highScoreScreen->mainMenuScreen The screens change every 15 seconds. Transition times public MainMenuScreen() { TransitionOnTime = TimeSpan.FromSeconds(0.5); TransitionOffTime = TimeSpan.FromSeconds(0.0); currentCreditAmount = Global.CurrentCredits; } public SplashScreen() { TransitionOnTime = TimeSpan.FromSeconds(0.5); TransitionOffTime = TimeSpan.FromSeconds(0.5); } public HighScoreScreen() { TransitionOnTime = TimeSpan.FromSeconds(0.5); TransitionOffTime = TimeSpan.FromSeconds(0.5); } public GamePlayScreen() { TransitionOnTime = TimeSpan.FromSeconds(0.5); TransitionOffTime = TimeSpan.FromSeconds(0.5); } When a user inserts credits they can play the game after pressing start:

mainMenuScreen->splashScreen->highScoreScreen->(loops forever)
   ||             ||             ||
   =========== Credits In =============
   || Start
   \/
LoadingScreen
   || Start
   \/
GamePlayScreen

During each of these transitions between screens, the same code is used, which exits (removes) all currently active screens while respecting transitions, then adds the new screen to the ScreenManager: foreach (GameScreen screen in ScreenManager.GetScreens()) screen.ExitScreen(); //AddScreen takes a new screen to manage and the controlling player ScreenManager.AddScreen(new NameOfScreenHere(), null); Each screen is removed from the ScreenManager with ExitScreen(), and using this function each screen transition is respected. The problem I am having is with my gamePlayScreen. When the current game is finished and the transition is complete for the gamePlayScreen, it should be removed and the next screens should be added to the ScreenManager. GamePlayScreen Code Snippet private void FinishCurrentGame() { AudioManager.StopSounds(); this.UnloadContent(); if (Global.SaveDevice.IsReady) Stats.Save(); if (HighScoreScreen.IsInHighscores(timeLimit)) { foreach (GameScreen screen in ScreenManager.GetScreens()) screen.ExitScreen(); Global.TimeRemaining = timeLimit; ScreenManager.AddScreen(new BackgroundScreen(), null); ScreenManager.AddScreen(new MessageBoxScreen("Enter your Initials", true), null); } else { foreach (GameScreen screen in ScreenManager.GetScreens()) screen.ExitScreen(); ScreenManager.AddScreen(new BackgroundScreen(), null); ScreenManager.AddScreen(new MainMenuScreen(), null); } } The problem is that when isExiting is set to true by screen.ExitScreen() for the gamePlayScreen, the transition never completes, and the screen is never removed from the ScreenManager. Every other screen that I add and remove with the same technique fully transitions on/off and is removed at the appropriate time, but not my GamePlayScreen. Has anyone who has used the GameStateManagement example experienced this issue, or can someone see the mistake I am making? EDIT This is what I tracked down. When the game is done, I call foreach (GameScreen screen in ScreenManager.GetScreens()) screen.ExitScreen(); to start the transition-off process for the gameplay screen. At this point there is only 1 screen on the ScreenManager stack. The gameplay screen gets isExiting set to true and starts to transition off.
Right after the above call to ExitScreen() I add a background screen and menu screen to the screenManager: ScreenManager.AddScreen(new background(), null); ScreenManager.AddScreen(new Menu(), null); The count of the ScreenManager is now 3. What I noticed while stepping through the updates for GameScreen and ScreenManager is that the gameplay screen never gets to the point where the transition process finishes, so the ScreenManager can never remove it from the stack. This anomaly does not happen to any of my other screens when I switch between them. Screen Manager Code #region File Description //----------------------------------------------------------------------------- // ScreenManager.cs // // Microsoft XNA Community Game Platform // Copyright (C) Microsoft Corporation. All rights reserved. //----------------------------------------------------------------------------- #endregion #define DEMO #region Using Statements using System; using System.Diagnostics; using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; using PerformanceUtility.GameDebugTools; #endregion namespace GameStateManagement { /// <summary> /// The screen manager is a component which manages one or more GameScreen /// instances. It maintains a stack of screens, calls their Update and Draw /// methods at the appropriate times, and automatically routes input to the /// topmost active screen. /// </summary> public class ScreenManager : DrawableGameComponent { #region Fields List<GameScreen> screens = new List<GameScreen>(); List<GameScreen> screensToUpdate = new List<GameScreen>(); InputState input = new InputState(); SpriteBatch spriteBatch; SpriteFont font; Texture2D blankTexture; bool isInitialized; bool getOut; bool traceEnabled; #if DEBUG DebugSystem debugSystem; Stopwatch stopwatch = new Stopwatch(); bool debugTextEnabled; #endif #endregion #region Properties /// <summary> /// A default SpriteBatch shared by all the screens. This saves /// each screen having to bother creating their own local instance. /// </summary> public SpriteBatch SpriteBatch { get { return spriteBatch; } } /// <summary> /// A default font shared by all the screens. This saves /// each screen having to bother loading their own local copy. /// </summary> public SpriteFont Font { get { return font; } } public Rectangle ScreenRectangle { get { return new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height); } } /// <summary> /// If true, the manager prints out a list of all the screens /// each time it is updated. This can be useful for making sure /// everything is being added and removed at the right times. /// </summary> public bool TraceEnabled { get { return traceEnabled; } set { traceEnabled = value; } } #if DEBUG public bool DebugTextEnabled { get { return debugTextEnabled; } set { debugTextEnabled = value; } } public DebugSystem DebugSystem { get { return debugSystem; } } #endif #endregion #region Initialization /// <summary> /// Constructs a new screen manager component. /// </summary> public ScreenManager(Game game) : base(game) { // we must set EnabledGestures before we can query for them, but // we don't assume the game wants to read them. //TouchPanel.EnabledGestures = GestureType.None; } /// <summary> /// Initializes the screen manager component.
/// </summary> public override void Initialize() { base.Initialize(); #if DEBUG debugSystem = DebugSystem.Initialize(Game, "Fonts/MenuFont"); #endif isInitialized = true; } /// <summary> /// Load your graphics content. /// </summary> protected override void LoadContent() { // Load content belonging to the screen manager. ContentManager content = Game.Content; spriteBatch = new SpriteBatch(GraphicsDevice); font = content.Load<SpriteFont>(@"Fonts\menufont"); blankTexture = content.Load<Texture2D>(@"Textures\Backgrounds\blank"); // Tell each of the screens to load their content. foreach (GameScreen screen in screens) { screen.LoadContent(); } } /// <summary> /// Unload your graphics content. /// </summary> protected override void UnloadContent() { // Tell each of the screens to unload their content. foreach (GameScreen screen in screens) { screen.UnloadContent(); } } #endregion #region Update and Draw /// <summary> /// Allows each screen to run logic. /// </summary> public override void Update(GameTime gameTime) { #if DEBUG debugSystem.TimeRuler.StartFrame(); debugSystem.TimeRuler.BeginMark("Update", Color.Blue); if (debugTextEnabled && getOut == false) { debugSystem.FpsCounter.Visible = true; debugSystem.TimeRuler.Visible = true; debugSystem.TimeRuler.ShowLog = true; getOut = true; } else if (debugTextEnabled == false) { getOut = false; debugSystem.FpsCounter.Visible = false; debugSystem.TimeRuler.Visible = false; debugSystem.TimeRuler.ShowLog = false; } #endif // Read the keyboard and gamepad. input.Update(); // Make a copy of the master screen list, to avoid confusion if // the process of updating one screen adds or removes others. screensToUpdate.Clear(); foreach (GameScreen screen in screens) screensToUpdate.Add(screen); bool otherScreenHasFocus = !Game.IsActive; bool coveredByOtherScreen = false; // Loop as long as there are screens waiting to be updated. while (screensToUpdate.Count > 0) { // Pop the topmost screen off the waiting list. GameScreen screen = screensToUpdate[screensToUpdate.Count - 1]; screensToUpdate.RemoveAt(screensToUpdate.Count - 1); // Update the screen. screen.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen); if (screen.ScreenState == ScreenState.TransitionOn || screen.ScreenState == ScreenState.Active) { // If this is the first active screen we came across, // give it a chance to handle input. if (!otherScreenHasFocus) { screen.HandleInput(input); otherScreenHasFocus = true; } // If this is an active non-popup, inform any subsequent // screens that they are covered by it. if (!screen.IsPopup) coveredByOtherScreen = true; } } // Print debug trace? if (traceEnabled) TraceScreens(); #if DEBUG debugSystem.TimeRuler.EndMark("Update"); #endif } /// <summary> /// Prints a list of all the screens, for debugging. /// </summary> void TraceScreens() { List<string> screenNames = new List<string>(); foreach (GameScreen screen in screens) screenNames.Add(screen.GetType().Name); Debug.WriteLine(string.Join(", ", screenNames.ToArray())); } /// <summary> /// Tells each screen to draw itself. 
/// </summary> public override void Draw(GameTime gameTime) { #if DEBUG debugSystem.TimeRuler.StartFrame(); debugSystem.TimeRuler.BeginMark("Draw", Color.Yellow); #endif foreach (GameScreen screen in screens) { if (screen.ScreenState == ScreenState.Hidden) continue; screen.Draw(gameTime); } #if DEBUG debugSystem.TimeRuler.EndMark("Draw"); #endif #if DEMO SpriteBatch.Begin(); SpriteBatch.DrawString(font, "DEMO - NOT FOR RESALE", new Vector2(20, 80), Color.White); SpriteBatch.End(); #endif } #endregion #region Public Methods /// <summary> /// Adds a new screen to the screen manager. /// </summary> public void AddScreen(GameScreen screen, PlayerIndex? controllingPlayer) { screen.ControllingPlayer = controllingPlayer; screen.ScreenManager = this; screen.IsExiting = false; // If we have a graphics device, tell the screen to load content. if (isInitialized) { screen.LoadContent(); } screens.Add(screen); } /// <summary> /// Removes a screen from the screen manager. You should normally /// use GameScreen.ExitScreen instead of calling this directly, so /// the screen can gradually transition off rather than just being /// instantly removed. /// </summary> public void RemoveScreen(GameScreen screen) { // If we have a graphics device, tell the screen to unload content. if (isInitialized) { screen.UnloadContent(); } screens.Remove(screen); screensToUpdate.Remove(screen); } /// <summary> /// Expose an array holding all the screens. We return a copy rather /// than the real master list, because screens should only ever be added /// or removed using the AddScreen and RemoveScreen methods. /// </summary> public GameScreen[] GetScreens() { return screens.ToArray(); } /// <summary> /// Helper draws a translucent black fullscreen sprite, used for fading /// screens in and out, and for darkening the background behind popups. /// </summary> public void FadeBackBufferToBlack(float alpha) { Viewport viewport = GraphicsDevice.Viewport; spriteBatch.Begin(); spriteBatch.Draw(blankTexture, new Rectangle(0, 0, viewport.Width, viewport.Height), Color.Black * alpha); spriteBatch.End(); } #endregion } } Game Screen Parent of GamePlayScreen #region File Description //----------------------------------------------------------------------------- // GameScreen.cs // // Microsoft XNA Community Game Platform // Copyright (C) Microsoft Corporation. All rights reserved. //----------------------------------------------------------------------------- #endregion #region Using Statements using System; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Input; //using Microsoft.Xna.Framework.Input.Touch; using System.IO; #endregion namespace GameStateManagement { /// <summary> /// Enum describes the screen transition state. /// </summary> public enum ScreenState { TransitionOn, Active, TransitionOff, Hidden, } /// <summary> /// A screen is a single layer that has update and draw logic, and which /// can be combined with other layers to build up a complex menu system. /// For instance the main menu, the options menu, the "are you sure you /// want to quit" message box, and the main game itself are all implemented /// as screens. /// </summary> public abstract class GameScreen { #region Properties /// <summary> /// Normally when one screen is brought up over the top of another, /// the first screen will transition off to make room for the new /// one. This property indicates whether the screen is only a small /// popup, in which case screens underneath it do not need to bother /// transitioning off. 
/// </summary> public bool IsPopup { get { return isPopup; } protected set { isPopup = value; } } bool isPopup = false; /// <summary> /// Indicates how long the screen takes to /// transition on when it is activated. /// </summary> public TimeSpan TransitionOnTime { get { return transitionOnTime; } protected set { transitionOnTime = value; } } TimeSpan transitionOnTime = TimeSpan.Zero; /// <summary> /// Indicates how long the screen takes to /// transition off when it is deactivated. /// </summary> public TimeSpan TransitionOffTime { get { return transitionOffTime; } protected set { transitionOffTime = value; } } TimeSpan transitionOffTime = TimeSpan.Zero; /// <summary> /// Gets the current position of the screen transition, ranging /// from zero (fully active, no transition) to one (transitioned /// fully off to nothing). /// </summary> public float TransitionPosition { get { return transitionPosition; } protected set { transitionPosition = value; } } float transitionPosition = 1; /// <summary> /// Gets the current alpha of the screen transition, ranging /// from 1 (fully active, no transition) to 0 (transitioned /// fully off to nothing). /// </summary> public float TransitionAlpha { get { return 1f - TransitionPosition; } } /// <summary> /// Gets the current screen transition state. /// </summary> public ScreenState ScreenState { get { return screenState; } protected set { screenState = value; } } ScreenState screenState = ScreenState.TransitionOn; /// <summary> /// There are two possible reasons why a screen might be transitioning /// off. It could be temporarily going away to make room for another /// screen that is on top of it, or it could be going away for good. /// This property indicates whether the screen is exiting for real: /// if set, the screen will automatically remove itself as soon as the /// transition finishes. /// </summary> public bool IsExiting { get { return isExiting; } protected internal set { isExiting = value; } } bool isExiting = false; /// <summary> /// Checks whether this screen is active and can respond to user input. /// </summary> public bool IsActive { get { return !otherScreenHasFocus && (screenState == ScreenState.TransitionOn || screenState == ScreenState.Active); } } bool otherScreenHasFocus; /// <summary> /// Gets the manager that this screen belongs to. /// </summary> public ScreenManager ScreenManager { get { return screenManager; } internal set { screenManager = value; } } ScreenManager screenManager; public KeyboardState KeyboardState { get {return Keyboard.GetState();} } /// <summary> /// Gets the index of the player who is currently controlling this screen, /// or null if it is accepting input from any player. This is used to lock /// the game to a specific player profile. The main menu responds to input /// from any connected gamepad, but whichever player makes a selection from /// this menu is given control over all subsequent screens, so other gamepads /// are inactive until the controlling player returns to the main menu. /// </summary> public PlayerIndex? ControllingPlayer { get { return controllingPlayer; } internal set { controllingPlayer = value; } } PlayerIndex? controllingPlayer; /// <summary> /// Gets whether or not this screen is serializable. If this is true, /// the screen will be recorded into the screen manager's state and /// its Serialize and Deserialize methods will be called as appropriate. /// If this is false, the screen will be ignored during serialization. /// By default, all screens are assumed to be serializable. 
/// </summary> public bool IsSerializable { get { return isSerializable; } protected set { isSerializable = value; } } bool isSerializable = true; #endregion #region Initialization /// <summary> /// Load graphics content for the screen. /// </summary> public virtual void LoadContent() { } /// <summary> /// Unload content for the screen. /// </summary> public virtual void UnloadContent() { } #endregion #region Update and Draw /// <summary> /// Allows the screen to run logic, such as updating the transition position. /// Unlike HandleInput, this method is called regardless of whether the screen /// is active, hidden, or in the middle of a transition. /// </summary> public virtual void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen) { this.otherScreenHasFocus = otherScreenHasFocus; if (isExiting) { // If the screen is going away to die, it should transition off. screenState = ScreenState.TransitionOff; if (!UpdateTransition(gameTime, transitionOffTime, 1)) { // When the transition finishes, remove the screen. ScreenManager.RemoveScreen(this); } } else if (coveredByOtherScreen) { // If the screen is covered by another, it should transition off. if (UpdateTransition(gameTime, transitionOffTime, 1)) { // Still busy transitioning. screenState = ScreenState.TransitionOff; } else { // Transition finished! screenState = ScreenState.Hidden; } } else { // Otherwise the screen should transition on and become active. if (UpdateTransition(gameTime, transitionOnTime, -1)) { // Still busy transitioning. screenState = ScreenState.TransitionOn; } else { // Transition finished! screenState = ScreenState.Active; } } } /// <summary> /// Helper for updating the screen transition position. /// </summary> bool UpdateTransition(GameTime gameTime, TimeSpan time, int direction) { // How much should we move by? float transitionDelta; if (time == TimeSpan.Zero) transitionDelta = 1; else transitionDelta = (float)(gameTime.ElapsedGameTime.TotalMilliseconds / time.TotalMilliseconds); // Update the transition position. transitionPosition += transitionDelta * direction; // Did we reach the end of the transition? if (((direction < 0) && (transitionPosition <= 0)) || ((direction > 0) && (transitionPosition >= 1))) { transitionPosition = MathHelper.Clamp(transitionPosition, 0, 1); return false; } // Otherwise we are still busy transitioning. return true; } /// <summary> /// Allows the screen to handle user input. Unlike Update, this method /// is only called when the screen is active, and not when some other /// screen has taken the focus. /// </summary> public virtual void HandleInput(InputState input) { } public KeyboardState currentKeyState; public KeyboardState lastKeyState; public bool IsKeyHit(Keys key) { if (currentKeyState.IsKeyDown(key) && lastKeyState.IsKeyUp(key)) return true; return false; } /// <summary> /// This is called when the screen should draw itself. /// </summary> public virtual void Draw(GameTime gameTime) { } #endregion #region Public Methods /// <summary> /// Tells the screen to serialize its state into the given stream. /// </summary> public virtual void Serialize(Stream stream) { } /// <summary> /// Tells the screen to deserialize its state from the given stream. /// </summary> public virtual void Deserialize(Stream stream) { } /// <summary> /// Tells the screen to go away. Unlike ScreenManager.RemoveScreen, which /// instantly kills the screen, this method respects the transition timings /// and will give the screen a chance to gradually transition off. 
/// </summary> public void ExitScreen() { if (TransitionOffTime == TimeSpan.Zero) { // If the screen has a zero transition time, remove it immediately. ScreenManager.RemoveScreen(this); } else { // Otherwise flag that it should transition off and then exit. isExiting = true; } } #endregion #region Helper Methods /// <summary> /// A helper method which loads assets using the screen manager's /// associated game content loader. /// </summary> /// <typeparam name="T">Type of asset.</typeparam> /// <param name="assetName">Asset name, relative to the loader root /// directory, and not including the .xnb extension.</param> /// <returns></returns> public T Load<T>(string assetName) { return ScreenManager.Game.Content.Load<T>(assetName); } #endregion } }
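Given the Update() logic above, a screen flagged with isExiting is only removed once UpdateTransition() reports that the transition has finished, so a first step is to confirm that transitionPosition actually advances toward 1 each frame while the gameplay screen is exiting. A minimal diagnostic sketch, assuming your gameplay screen derives from GameScreen (the log format is illustrative, not part of the original code):

public override void Update(GameTime gameTime, bool otherScreenHasFocus,
                            bool coveredByOtherScreen)
{
    // Let the base class run the normal transition state machine first.
    base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);

    if (IsExiting)
    {
        // If TransitionPosition never reaches 1, ScreenManager.RemoveScreen
        // is never called and the screen stays on the stack.
        System.Diagnostics.Debug.WriteLine(
            "Gameplay exiting: state=" + ScreenState +
            " position=" + TransitionPosition +
            " offTime=" + TransitionOffTime);
    }
}

Setting TraceEnabled = true on the ScreenManager while this runs will also show, via TraceScreens(), whether the gameplay screen is still in the master list each frame.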

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<T> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally next week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born called the concurrent collections in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T> which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. As a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList, which are stored as an ordered tree and an ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering -- and still greatly outperform a linear search. 
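To make the recap concrete, here is a minimal sketch of what the concurrent collections spare you from writing (the class and method names are illustrative, not from the post): a hand-rolled counter must guard a Dictionary with the same lock on every access, while ConcurrentDictionary folds the check-and-set into one atomic call.

using System.Collections.Generic;
using System.Collections.Concurrent;

public class ManualCounter
{
    private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();
    private readonly object _sync = new object();

    public void Increment(string key)
    {
        lock (_sync)   // every reader and writer must take this same lock
        {
            int current;
            _counts.TryGetValue(key, out current);
            _counts[key] = current + 1;
        }
    }
}

public class ConcurrentCounter
{
    private readonly ConcurrentDictionary<string, int> _counts =
        new ConcurrentDictionary<string, int>();

    public void Increment(string key)
    {
        // AddOrUpdate performs the check-and-set atomically on our behalf.
        _counts.AddOrUpdate(key, 1, (k, v) => v + 1);
    }
}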
Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety without a single common lock, which improves efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples are: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not add it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item under one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this method checks for existence and removes the entry under an atomic lock. AddOrUpdate() was added to attempt a thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t, insert a starting value for it.  This allows you to get the value if it exists and insert it if not. Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as an O(n) operation.  GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.  
Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example: say you load up a concurrent dictionary like this: 1: // load up a dictionary. 2: var dictionary = new ConcurrentDictionary<string, int>(); 3:  4: dictionary["A"] = 1; 5: dictionary["B"] = 2; 6: dictionary["C"] = 3; 7: dictionary["D"] = 4; 8: dictionary["E"] = 5; 9: dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); And one task to attempt updates in a separate thread: 1: // attempt updates in a separate thread 2: var updateTask = new Task(() => 3: { 4: // iterates, and updates the value by one 5: foreach (var pair in dictionary) 6: { 7: dictionary[pair.Key] = pair.Value + 1; 8: } 9: }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: 1: // start both tasks 2: updateTask.Start(); 3: iterationTask.Start(); 4:  5: // wait for both to complete. 6: Task.WaitAll(updateTask, iterationTask); Now, if you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result: 1: F:6 2: E:6 3: D:5 4: C:4 5: B:3 6: A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary, with no subsequent updates during iteration), we should iterate over a call to ToArray() instead: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates over a consistent snapshot from ToArray() 5: foreach (var pair in dictionary.ToArray()) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); The atomic Try…() methods As you can imagine, TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed, based on whether or not the key currently exists in the dictionary: 1: // try add attempts an add and returns false if it already exists 2: if (dictionary.TryAdd("G", 7)) 3: Console.WriteLine("G did not exist, now inserted with 7"); 4: else 5: Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: 1: // attempt to remove the value, if it exists it is removed and the original is returned 2: int removedValue; 3: if (dictionary.TryRemove("C", out removedValue)) 4: Console.WriteLine("Removed C and its value was " + removedValue); 5: else 6: Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.  
You might think from its name that TryUpdate() first checks for an item’s existence, and then updates if the item exists, otherwise it returns false.  Well, not quite... It turns out when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true.  If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned. 1: // attempt to update the value, if it exists and if it has the expected original value 2: if (dictionary.TryUpdate("G", 42, 7)) 3: Console.WriteLine("G existed and was 7, now it's 42."); 4: else 5: Console.WriteLine("G either didn't exist, or wasn't 7."); The composite Add methods The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does.  For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed: 1: public sealed class SubscriptionManager 2: { 3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public void AddSubscription(string tickerKey) 7: { 8: // add a new subscription with count of 1, or update existing count by 1 if exists 9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1); 10:  11: // now check the result to see if we just incremented the count, or inserted first count 12: if (resultCount == 1) 13: { 14: // subscribe to symbol... 15: } 16: } 17: } Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate, which computes the new value to be stored in the dictionary.  The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value.  This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time: 1: public sealed class PriceCache 2: { 3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>(); 4:  5: // gets the cached price, or queries and caches it the first time. 6: public double QueryPrice(string tickerKey) 7: { 8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value. 9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol)); 10: } 11:  12: private double GetCurrentPrice(string tickerKey) 13: { 14: // do code to calculate actual true price. 
15: } 16: } There are other variations of these two methods which vary whether a value is provided or a factory delegate, but otherwise they work much the same. Oddities with the composite Add methods The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic.  It is important to note that the methods that use delegates execute those delegates outside of the lock.  This was done intentionally so that a user delegate (of which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned!  Consider this scenario: thread A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary.  Now A is done and goes to get the lock, and sees that the item now exists.  In this case, even though it called the delegate to create the item, it will pitch it, because an item arrived between the time it attempted to create one and the time it attempted to add it. Let’s illustrate with this totally contrived example program, which has a dictionary of char to int.  In this dictionary we want to store a char and its ordinal (that is, A = 1, B = 2, etc).  So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked): 1: public static class Program 2: { 3: private static int _nextNumber = 0; 4:  5: // the holder of the char to ordinal 6: private static ConcurrentDictionary<char, int> _dictionary 7: = new ConcurrentDictionary<char, int>(); 8:  9: // get the next id value 10: public static int NextId 11: { 12: get { return Interlocked.Increment(ref _nextNumber); } 13: } Then, we add a method that will perform our insert: 1: public static void Inserter() 2: { 3: for (int i = 0; i < 26; i++) 4: { 5: _dictionary.GetOrAdd((char)('A' + i), key => NextId); 6: } 7: } Finally, we run our test by starting two tasks to do this work and get the results… 1: public static void Main() 2: { 3: // 2 tasks attempting to get/insert 4: var tasks = new List<Task> 5: { 6: new Task(Inserter), 7: new Task(Inserter) 8: }; 9:  10: tasks.ForEach(t => t.Start()); 11: Task.WaitAll(tasks.ToArray()); 12:  13: foreach (var pair in _dictionary.OrderBy(p => p.Key)) 14: { 15: Console.WriteLine(pair.Key + ":" + pair.Value); 16: } 17: } If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But running this in parallel you will get something a bit more complex.  My run netted these results: 1: A:1 2: B:3 3: C:4 4: D:5 5: E:6 6: F:7 7: G:8 8: H:9 9: I:10 10: J:11 11: K:12 12: L:13 13: M:14 14: N:15 15: O:16 16: P:17 17: Q:18 18: R:19 19: S:20 20: T:21 21: U:22 22: V:23 23: W:24 24: X:25 25: Y:26 26: Z:27 Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3.  However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won and the 3 was inserted and the 2 got discarded.  
This is why, on these methods, your factory delegates should be careful not to contain any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time.  As such, it is probably a good idea to keep those generators as stateless as possible. Summary The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection.  It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently. Technorati Tags: C#, .NET, Concurrent Collections, Collections, Little Wonders, Black Rabbit Coder, James Michael Hare
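A common way to avoid the duplicate-delegate oddity described above is to store Lazy<TValue> values in the dictionary, so that even when two racing threads both construct a Lazy wrapper, only the one actually stored by GetOrAdd ever runs its factory. A minimal sketch along the lines of the post's PriceCache example (the GetCurrentPrice body is a placeholder, assumed as in the post):

using System;
using System.Collections.Concurrent;

public sealed class LazyPriceCache
{
    private readonly ConcurrentDictionary<string, Lazy<double>> _cache =
        new ConcurrentDictionary<string, Lazy<double>>();

    public double QueryPrice(string tickerKey)
    {
        // A losing thread may still construct a throwaway Lazy<double>, but
        // that is cheap; the expensive factory only runs when .Value is read,
        // and only the Lazy actually stored in the dictionary is read here.
        return _cache.GetOrAdd(tickerKey,
            symbol => new Lazy<double>(() => GetCurrentPrice(symbol))).Value;
    }

    private double GetCurrentPrice(string tickerKey)
    {
        // placeholder for the real market data lookup
        return 0.0;
    }
}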

    Read the article

  • Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations

    - by Tanu Sood
It is generally accepted 
among business communities that technology by itself is not a silver bullet to all problems, but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post attempts to highlight some of the best practices, along with dos & don’ts, that our practice has accumulated over the years in the identity & access management space in general, and in the context of R2 in particular. Best Practices The following section illustrates the leading practices in “how” to plan, implement and sustain a successful OIM deployment, based on our collective experience. Planning is critical, but often overlooked A common approach to planning an IAM program that we identify with our clients is the three-step process involving a current state assessment, a future state roadmap and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform gap analysis, document the recommended controls to address the gaps, align the future state roadmap to business initiatives and get buy-in from all stakeholders involved to improve the chances of success. When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects increases the chances of success. As a baseline, it is recommended to match hardware specifications to the sizing guide for R2 published by Oracle. Adherence to this will help ensure that the hardware used to support OIM will not become a bottleneck as the adoption of new services increases. If your organization has numerous connected applications that rely on reconciliation to synchronize the access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, ensure the use of a clustered environment for development and have at least three total environments to help facilitate a controlled migration to production. If your organization is planning to implement role-based access control, we recommend performing a role mining exercise and consolidating your enterprise roles to keep them manageable. In addition, many organizations have multiple approval flows to control access to critical roles, applications and entitlements. If your organization falls into this category, we highly recommend that you limit the number of approval workflows to a small set. Most organizations have operations managed across data centers with backend database synchronization; if your organization falls into this category, ensure that the overall latency between the datacenters when replicating the databases is less than ten milliseconds, so that there are no front office performance impacts. Ingredients for a successful implementation During the development phase of your project, there are a number of guidelines that can be followed to help increase the chances for success. Most implementations cannot be completed without the use of customizations. If your implementation requires this, it’s a good practice to perform code reviews to help ensure quality and reduce code bottlenecks related to performance. We have observed at our clients that the development process works best when team members adhere to coding leading practices. Plan for time to correct coding defects and ensure developers are empowered to report their own bugs for maximum transparency. 
Many organizations struggle with defining a consistent approach to managing logs. This is particularly important due to the amount of information that can be logged by OIM. We recommend using Oracle Diagnostics Logging (ODL) as the alternative for logging. ODL allows log files to be formatted in XML for easy parsing and does not require a server restart when the log levels are changed during troubleshooting. Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment should use production-like data and connectors. Configurations should match as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the migration processes of certificates, and the additional overhead that encryption could impose. Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or migration between environments. In the lowest environments, we recommend having at least a weekly backup in order to prevent significant loss of time and effort. Similarly, if your organization is using virtual machines for one or more of the environments, it is recommended to take frequent snapshots so that rollbacks can occur in the event of improper configuration. Operate & sustain the solution to derive maximum benefits When migrating OIM R2 to production, it is important to perform certain activities that will help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by category (physical tables, indexes, etc.) can help manage database growth effectively. If we notice that a client hasn’t enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization’s auditing needs, and sometimes even allocate UPA tables and indexes into their own tablespace for better maintenance. Finally, many of our clients have set up schedules for reconciliation tables to be archived at regular intervals in order to keep the size of the database(s) reasonable and maintain optimal database performance. For our clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This reduces the provisioning process’s dependency on target availability for a given target application and helps avoid broken workflows. To account for this and other abnormalities, we also advocate that OIM’s monitoring controls be configured to alert administrators to any abnormal situations. Within OIM R2, we have begun advising our clients to utilize the ‘profile’ feature to encapsulate multiple commonly requested accounts, roles, and/or entitlements into a single item. By setting up a number of profiles that can be searched for and used, users will spend less time performing the same exact steps for common tasks. We advise our clients to follow the Oracle-recommended guides for database and application server tuning, which provide a good baseline configuration. They offer guidance on database connection pools, connection timeouts, user interface threads and proper handling of adapters/plug-ins. All of these can be important configurations that will allow faster provisioning and web page response times. 
Many of our clients have begun to recognize the value of data mining and a remediation process during the initial phases of an implementation (to help ensure high quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program always begins with identifying the data elements and assigning a classification level based on criticality, risk, and availability. It should finish by following through with a remediation process. Dos & Don’ts Here are the most common dos and don'ts that we socialize with our clients, derived from our experience implementing the solution.
Dos | Don’ts
Scope the project into phases with realistic goals. Look for quick wins to show success and value to the stakeholders. | Avoid “boiling the ocean” and trying to integrate all enterprise applications in the first phase.
Establish an enterprise ID (universal unique ID across the enterprise) earlier in the program. | Avoid major UI customizations that require code changes.
Have a plan in place to patch during the project, which helps alleviate any major issues or roadblocks (product and database). | Avoid publishing all the target entitlements if you don't anticipate their usage during access request.
Assess your current state and prepare a roadmap to address your operational, tactical and strategic goals, and align it with your business priorities. | Avoid integrating non-production environments with your production target systems.
Defer complex integrations to the later phases and take advantage of lessons learned from previous phases. | Avoid creating multiple accounts for the same user on the same system, if there is an opportunity to do so.
Have an identity and access data quality initiative built into your plan to identify and remediate data-related issues early on. | Avoid creating complex approval workflows that would negatively impact productivity and SLAs.
Identify the owner of the identity systems with fair IdM knowledge and empower them with authority to make product-related decisions. This will help overcome any design hurdles. | Avoid creating complex designs that are not sustainable long term and would need a major overhaul during upgrades.
Shadow your internal or external consulting resources during the implementation to build the necessary product skills needed to operate and sustain the solution. | Avoid treating IAM as a point solution, and have an appropriate level of communication and a training plan for the IT and business users alike.
Conclusion In our experience, identity programs will struggle with scope, proper resourcing, and more. We suggest that companies consider the suggestions discussed in this post and leverage them to help enable their identity and access program. This concludes the PwC blog series on R2 for the month, and we sincerely hope that the information we have shared thus far has been beneficial. For more information, or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC, and/or Dharma Padala, Director, PwC. We look forward to hearing from you. 
Meet the Writers: Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.  Dharma has 14 years of experience in delivering IT solutions, of which he has been implementing Identity Management solutions for the past 8 years. Praveen Krishna is a Manager in the Advisory Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.  Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Read the article

  • How to implement Google Maps new version of API v2

    - by bapatla
Hi everyone, I came to know that Google Maps has deprecated its previous API, v1, and introduced a new version of the Google Maps API, v2. I tried out one example by following some links from Google; anyhow, I am pretty sure that I got the API key correctly, by providing the exact hash key, and managed to get the correct API key. Now I have managed to write some code as well, but when I try to execute the code I am getting errors. Please help me solve this. Here is my code; I even tried the sample code provided by Google Play Services and I got the same problem. This is the sample that I have done by referring to some links from Google. main activity package com.example.apv; import com.google.android.gms.maps.CameraUpdateFactory; import com.google.android.gms.maps.GoogleMap; import com.google.android.gms.maps.MapFragment; import com.google.android.gms.maps.model.BitmapDescriptorFactory; import com.google.android.gms.maps.model.LatLng; import com.google.android.gms.maps.model.MarkerOptions; import android.os.Bundle; import android.app.Activity; import android.app.FragmentManager; public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); FragmentManager fragmentManager = getFragmentManager(); MapFragment mapFragment = (MapFragment) fragmentManager.findFragmentById(R.id.map); GoogleMap googleMap = mapFragment.getMap(); LatLng sfLatLng = new LatLng(37.7750, -122.4183); googleMap.setMapType(GoogleMap.MAP_TYPE_NORMAL); googleMap.addMarker(new MarkerOptions() .position(sfLatLng) .title("San Francisco") .snippet("Population: 776733") .icon(BitmapDescriptorFactory.defaultMarker( BitmapDescriptorFactory.HUE_AZURE))); googleMap.getUiSettings().setCompassEnabled(true); googleMap.getUiSettings().setZoomControlsEnabled(true); googleMap.animateCamera(CameraUpdateFactory.newLatLngZoom(sfLatLng, 10)); } } main.xml <?xml version="1.0" encoding="utf-8"?> <fragment xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/map" android:layout_width="match_parent" android:layout_height="match_parent" class="com.google.android.gms.maps.MapFragment"/> and finally my manifest file <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.apv" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="17"/> <permission android:name="com.codebybrian.mapsample.permission.MAPS_RECEIVE" android:protectionLevel="signature"/> <!--Required permissions--> <uses-permission android:name="com.codebybrian.mapsample.permission.MAPS_RECEIVE"/> <!--Used by the API to download map tiles from Google Maps servers: --> <uses-permission android:name="android.permission.INTERNET"/> <!--Allows the API to access Google web-based services: --> <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <!--Optional permissions--> <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/> <!--Version 2 of the Google Maps Android API requires OpenGL ES version 2 --> <uses-feature android:glEsVersion="0x00020000" android:required="true"/> <application android:label="@string/app_name" android:icon="@drawable/ic_launcher"> <activity android:name=".MyMapActivity" android:label="@string/app_name" > 
<intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> <meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="AZzaSSsBmhi4dXoKSylGGmjkQ5Jev9UdAJBjk"/> </application> </manifest> I run my application in an emulator of version 4.2, API level 17, and I got the following error: 12-17 10:06:52.590: E/Trace(826): error opening trace file: No such file or directory (2) 12-17 10:06:52.590: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.590: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.590: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.680: I/ActivityThread(826): Pub com.google.android.gms.plus;com.google.android.gms.plus.action: com.google.android.gms.plus.provider.PlusProvider 12-17 10:06:52.740: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.740: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 12-17 10:06:52.760: W/Trace(826): Unexpected value from nativeGetEnabledTags: 0 Later I came to know that this version can't execute in the emulator, so I tried executing it with two devices, a Sony Xperia U with Android version 2.3.7 and a Samsung Galaxy Tab with Android version 4.1.1, and these are my outputs: 12-17 14:37:02.468: D/AndroidRuntime(7636): Shutting down VM 12-17 14:37:02.468: W/dalvikvm(7636): threadid=1: thread exiting with uncaught exception (group=0x41f672a0) 12-17 14:37:02.476: E/AndroidRuntime(7636): FATAL EXCEPTION: main 12-17 14:37:02.476: E/AndroidRuntime(7636): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.example.apv/com.example.apv.MyMapActivity}: java.lang.ClassNotFoundException: com.example.apv.MyMapActivity 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2021) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2122) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.access$600(ActivityThread.java:140) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1228) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.os.Handler.dispatchMessage(Handler.java:99) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.os.Looper.loop(Looper.java:137) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.main(ActivityThread.java:4895) 12-17 14:37:02.476: E/AndroidRuntime(7636): at java.lang.reflect.Method.invokeNative(Native Method) 12-17 14:37:02.476: E/AndroidRuntime(7636): at java.lang.reflect.Method.invoke(Method.java:511) 12-17 14:37:02.476: E/AndroidRuntime(7636): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller .run(ZygoteInit.java:994) 12-17 14:37:02.476: E/AndroidRuntime(7636): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:761) 12-17 14:37:02.476: E/AndroidRuntime(7636): at dalvik.system.NativeStart.main(Native Method) 12-17 14:37:02.476: E/AndroidRuntime(7636): Caused by: java.lang.ClassNotFoundException: com.example.apv.MyMapActivity 12-17 14:37:02.476: E/AndroidRuntime(7636): at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:61) 12-17 14:37:02.476: E/AndroidRuntime(7636): at java.lang.ClassLoader.loadClass(ClassLoader.java:501) 12-17 14:37:02.476: E/AndroidRuntime(7636): at java.lang.ClassLoader.loadClass(ClassLoader.java:461) 12-17 14:37:02.476: 
E/AndroidRuntime(7636): at android.app.Instrumentation.newActivity(Instrumentation.java:1068) 12-17 14:37:02.476: E/AndroidRuntime(7636): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2012) 12-17 14:37:02.476: E/AndroidRuntime(7636): ... 11 more Could anyone please suggest how to get this done, and give me some links to tutorials and examples for the new Google Maps API v2? Please help me.
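One hedged reading of the stack trace above: the ClassNotFoundException names com.example.apv.MyMapActivity, but the posted activity class is MainActivity, so the manifest's <activity> entry appears to reference a class that does not exist in the project. A likely fix (an assumption based only on the posted files, not a confirmed solution):

<!-- Reference the activity class that actually exists in the posted code. -->
<activity
    android:name=".MainActivity"
    android:label="@string/app_name" >

With that mismatch resolved, the remaining Google Play Services setup (API key meta-data, permissions, OpenGL ES 2 feature) looks consistent with the rest of the posted manifest.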

    Read the article

  • Atmospheric scattering OpenGL 3.3

    - by user1419305
I'm currently trying to convert a shader by Sean O'Neil to version 330 so I can try it out in an application I'm writing. I'm having some issues with deprecated functions, so I replaced them, but I'm almost completely new to GLSL, so I probably made a mistake somewhere. Original shaders can be found here: http://www.gamedev.net/topic/592043-solved-trying-to-use-atmospheric-scattering-oneill-2004-but-get-black-sphere/ My attempt at converting them: Vertex Shader: #version 330 core layout(location = 0) in vec3 vertexPosition_modelspace; //layout(location = 1) in vec2 vertexUV; layout(location = 2) in vec3 vertexNormal_modelspace; uniform vec3 v3CameraPos; uniform vec3 v3LightPos; uniform vec3 v3InvWavelength; uniform float fCameraHeight; uniform float fCameraHeight2; uniform float fOuterRadius; uniform float fOuterRadius2; uniform float fInnerRadius; uniform float fInnerRadius2; uniform float fKrESun; uniform float fKmESun; uniform float fKr4PI; uniform float fKm4PI; uniform float fScale; uniform float fScaleDepth; uniform float fScaleOverScaleDepth; // passing in matrices for transformations uniform mat4 MVP; uniform mat4 V; uniform mat4 M; const int nSamples = 4; const float fSamples = 4.0; out vec3 v3Direction; out vec4 gg_FrontColor; out vec4 gg_FrontSecondaryColor; float scale(float fCos) { float x = 1.0 - fCos; return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25)))); } void main(void) { vec3 v3Pos = vertexPosition_modelspace; vec3 v3Ray = v3Pos - v3CameraPos; float fFar = length(v3Ray); v3Ray /= fFar; vec3 v3Start = v3CameraPos; float fHeight = length(v3Start); float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight)); float fStartAngle = dot(v3Ray, v3Start) / fHeight; float fStartOffset = fDepth*scale(fStartAngle); float fSampleLength = fFar / fSamples; float fScaledLength = fSampleLength * fScale; vec3 v3SampleRay = v3Ray * fSampleLength; vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5; vec3 v3FrontColor = vec3(0.0, 0.0, 0.0); for(int i=0; i<nSamples; i++) { float fHeight = length(v3SamplePoint); float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight)); float fLightAngle = dot(v3LightPos, v3SamplePoint) / fHeight; float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight; float fScatter = (fStartOffset + fDepth*(scale(fLightAngle) - scale(fCameraAngle))); vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI)); v3FrontColor += v3Attenuate * (fDepth * fScaledLength); v3SamplePoint += v3SampleRay; } gg_FrontSecondaryColor.rgb = v3FrontColor * fKmESun; gg_FrontColor.rgb = v3FrontColor * (v3InvWavelength * fKrESun); gl_Position = MVP * vec4(vertexPosition_modelspace,1); v3Direction = v3CameraPos - v3Pos; } Fragment Shader: #version 330 core uniform vec3 v3LightPos; uniform float g; uniform float g2; in vec3 v3Direction; out vec4 FragColor; in vec4 gg_FrontColor; in vec4 gg_FrontSecondaryColor; void main (void) { float fCos = dot(v3LightPos, v3Direction) / length(v3Direction); float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5); FragColor = gg_FrontColor + fMiePhase * gg_FrontSecondaryColor; FragColor.a = FragColor.b; } I wrote a function to render a sphere, and I'm trying to render this shader onto an inverted version of it. The sphere works completely fine, with normals and all. My problem is that the sphere gets rendered all black, so the shader is not working. This is how I'm trying to render the atmosphere inside my main rendering loop. 
glUseProgram(programAtmosphere); glBindTexture(GL_TEXTURE_2D, 0); //###################### glUniform3f(v3CameraPos, getPlayerPos().x, getPlayerPos().y, getPlayerPos().z); glUniform3f(v3LightPos, lightPos.x / sqrt(lightPos.x * lightPos.x + lightPos.y * lightPos.y), lightPos.y / sqrt(lightPos.x * lightPos.x + lightPos.y * lightPos.y), 0); glUniform3f(v3InvWavelength, 1.0 / pow(0.650, 4.0), 1.0 / pow(0.570, 4.0), 1.0 / pow(0.475, 4.0)); glUniform1fARB(fCameraHeight, 1); glUniform1fARB(fCameraHeight2, 1); glUniform1fARB(fInnerRadius, 6350); glUniform1fARB(fInnerRadius2, 6350 * 6350); glUniform1fARB(fOuterRadius, 6450); glUniform1fARB(fOuterRadius2, 6450 * 6450); glUniform1fARB(fKrESun, 0.0025 * 20.0); glUniform1fARB(fKmESun, 0.0015 * 20.0); glUniform1fARB(fKr4PI, 0.0025 * 4.0 * 3.141592653); glUniform1fARB(fKm4PI, 0.0015 * 4.0 * 3.141592653); glUniform1fARB(fScale, 1.0 / (6450 - 6350)); glUniform1fARB(fScaleDepth, 0.25); glUniform1fARB(fScaleOverScaleDepth, 4.0 / (6450 - 6350)); glUniform1fARB(g, -0.85); glUniform1f(g2, -0.85 * -0.85); // vertices glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer[1]); glVertexAttribPointer( 0, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? 0, // stride (void*)0 // array buffer offset ); // normals glEnableVertexAttribArray(2); glBindBuffer(GL_ARRAY_BUFFER, normalbuffer[1]); glVertexAttribPointer( 2, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? 0, // stride (void*)0 // array buffer offset ); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer[1]); glUniformMatrix4fv(ModelMatrixAT, 1, GL_FALSE, &ModelMatrix[0][0]); glUniformMatrix4fv(ViewMatrixAT, 1, GL_FALSE, &ViewMatrix[0][0]); glUniformMatrix4fv(ModelViewPAT, 1, GL_FALSE, &MVP[0][0]); // Draw the triangles glDrawElements( GL_TRIANGLES, // mode cubeIndices[1], // count GL_UNSIGNED_SHORT, // type (void*)0 // element array buffer offset ); Any ideas?
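A first debugging step that usually narrows this down: a sphere that renders solid black is very often a program that failed to compile or link, and OpenGL only reports that if you ask. The sketch below is assumed code (not from the post) in the same C++/OpenGL style; programAtmosphere is the program handle used above, everything else is illustrative.

#include <cstdio>
#include <vector>

// Print the info log if the program failed to link; call once after linking.
void checkProgramLinked(GLuint program)
{
    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (ok != GL_TRUE) {
        GLint len = 0;
        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &len);
        std::vector<char> log(len > 1 ? len : 1);
        glGetProgramInfoLog(program, (GLsizei)log.size(), NULL, log.data());
        std::fprintf(stderr, "Shader link failed:\n%s\n", log.data());
    }
}

The same pattern with glGetShaderiv / GL_COMPILE_STATUS / glGetShaderInfoLog catches per-shader compile errors. It is also worth checking that every glGetUniformLocation call returned something other than -1: in a #version 330 core program, uniforms that are optimized out get location -1, and glUniform* calls against -1 are silently ignored.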

    Read the article

  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
The Question
What hosted mercurial repository/bug tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug tracking features, that would make it difficult to recommend it? Do you have any other experiences with it, or opinions of it, that you would like to share? If you have used other, non-mercurial hosted repository/bug tracking systems, how do they compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experience of several.)

Background
I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve mercurial repositories over SSL, when I am at a customer site I have to connect my laptop via VPN to my work network and access the mercurial repositories over a samba share (even if it is just to sync twice a day). This is excruciatingly slow on high-latency networks and can be impossible with some customers' firewalls. Even if we could run a TRAC or Redmine server here (thanks turnkey), I'm not sure it would be much quicker, as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, servicing engineers to be able to pull from a remote repository, and for customers (both internal and external) to be able to submit bug/issue reports.

Initial options
The two options I found were Assembla and Jira. Looking at Assembla, I thought the 'group' price looked reasonable, but after enquiring I found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately, client users also count towards this if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me.

More options
Looking through the MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their websites today (29th March 2010). Corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites:

Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers from only one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces. SSL based push/pull? Website https login.

BitBucket, http://bitbucket.org/plans/, is primarily a mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has its own issue tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL based push/pull. No https on website login, but it supports OpenID, so you can choose an OpenID provider with https login.

Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL based push/pull? Website https login?

Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can only have one repository per project or not. Cost: $9, $19 & $39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL based push/pull? Website https login.

Jira, http://www.atlassian.com/software/jira/, isn't limited by the number of repositories you can have, but by 'user'. It could work out quite expensive if we want client users to be able to track their issues, since a full user account would need to be created for each of them. Also, while there is a Mercurial extension to support Jira, there is no 'Advanced integration' for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL based push/pull? Website https login.

Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kiln's mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the FogBugz integration is supposedly excellent. *8') Cost: £30/developer/month ($5/d/m more than either on their own). SSL based push/pull?

SourceRepo, http://sourcerepo.com/, also supports Hg and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/trac/redmine instances and 500MB, 1GB & 3GB storage. SSL based push/pull. Website https login.

Edit: 29th March 2010 & Bounty
I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting, and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than to find the absolute best one, 'bad reviews' could be considered more important than good ones.
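As an aside on the "no way to serve mercurial repositories over SSL" constraint above: Mercurial's built-in web server can speak HTTPS directly, which may be enough for the sync-twice-a-day case even without a hosted provider. A minimal sketch, assuming you can open one port through the office firewall and have a PEM file containing a certificate plus private key (host names, port and file names below are placeholders):

# On the server, inside the repository to share:
hg serve --port 8000 --certificate server.pem

# From a customer site:
hg clone https://work-server.example.com:8000/ product-repo
hg pull    # and hg push, if web.allow_push is set in the served repo's .hg/hgrc

Pushing additionally requires allow_push (and usually some authentication in front of it), so for anything beyond a stop-gap the hosted options above remain the more complete answer.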

    Read the article

  • Process: Unable to start service com.google.android.gms.checkin.CheckinService with Intent

    - by AndyRoid
I'm trying to build a Google map application but keep receiving the error below in my LogCat. I have all the permissions and meta-data set in my manifest, but am still dumbfounded by this error. I have looked everywhere on SO for this specific error but found nothing relating to com.google.android.gms.checkin. A little bit about my structural hierarchy: MainActivity extends ActionBarActivity with three tabs underneath the action bar. Each tab has its own fragment. On the gMapFragment I create a GPSTrack object from my GPSTrack class, which extends Service and implements LocationListener. The problem is that when I start the application I get the fatal exception shown in the logcat below. I have all my libraries imported properly, and I even added google-play-services.jar into my libs folder. I also installed the Google Play Services APKs through CMD onto my emulator. Furthermore, the LocationManager lm = (LocationManager) mContext.getSystemService(LOCATION_SERVICE); in my GPSTrack class always returns null. Why is this, and how can I fix these issues? I would appreciate an explanation along with the solution; I want to understand what's going on here.

==============

Code:

gMapFragment.java

public class gMapFragment extends SupportMapFragment {
    private final String TAG = "gMapFragment";
    private GoogleMap mMap;
    protected SupportMapFragment mapFrag;
    private Context mContext = getActivity();
    private static View view;

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        if (view != null) {
            ViewGroup parent = (ViewGroup) view.getParent();
            if (parent != null) {
                parent.removeView(view);
            }
        }
        try {
            super.onCreateView(inflater, container, savedInstanceState);
            view = inflater.inflate(R.layout.fragment_map, container, false);
            setupGoogleMap();
        } catch (Exception e) {
            /* Map already there, just return as view */
        }
        return view;
    }

    private void setupGoogleMap() {
        mapFrag = (SupportMapFragment) getFragmentManager().findFragmentById(R.id.mapView);
        if (mapFrag == null) {
            FragmentManager fragManager = getFragmentManager();
            FragmentTransaction fragTransaction = fragManager.beginTransaction();
            mapFrag = SupportMapFragment.newInstance();
            fragTransaction.replace(R.id.mapView, mapFrag).commit();
        }
        if (mapFrag != null) {
            mMap = mapFrag.getMap();
            if (mMap != null) {
                setupMap();
                mMap.setOnMapClickListener(new OnMapClickListener() {
                    @Override
                    public void onMapClick(LatLng point) {
                        // TODO your click stuff on map
                    }
                });
            }
        }
    }

    @Override
    public void onAttach(Activity activity) {
        super.onAttach(activity);
        Log.d("Attach", "on attach");
    }

    @Override
    public void onDetach() { super.onDetach(); }

    @Override
    public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); }

    @Override
    public void onResume() { super.onResume(); }

    @Override
    public void onPause() { super.onPause(); }

    @Override
    public void onDestroy() { super.onDestroy(); }

    private void setupMap() {
        GPSTrack gps = new GPSTrack(mContext);
        // Enable MyLocation layer of google map
        mMap.setMyLocationEnabled(true);
        Log.d(TAG, "MyLocation enabled");
        // Set Map type
        mMap.setMapType(GoogleMap.MAP_TYPE_NORMAL);
        // Grab current location **ERROR HERE/Returns Null**
        Location location = gps.getLocation();
        Log.d(TAG, "Grabbing location...");
        if (location != null) {
            Log.d(TAG, "location != null");
            // Grab Latitude and Longitude
            double latitude = location.getLatitude();
            double longitude = location.getLongitude();
            Log.d(TAG, "Getting lat, long..");
            // Initialize LatLng object
            LatLng latLng = new LatLng(latitude, longitude);
            Log.d(TAG, "LatLng initialized");
            // Show current location on google map
            mMap.moveCamera(CameraUpdateFactory.newLatLng(latLng));
            // Zoom in on google map
            mMap.animateCamera(CameraUpdateFactory.zoomTo(20));
            mMap.addMarker(new MarkerOptions().position(new LatLng(latitude, longitude)).title("You are here."));
        } else {
            gps.showSettingsAlert();
        }
    }
}

GPSTrack.java

public class GPSTrack extends Service implements LocationListener {
    private final Context mContext;
    private boolean isGPSEnabled = false;
    // See if network is connected to internet
    private boolean isNetworkEnabled = false;
    // See if you can grab the location
    private boolean canGetLocation = false;
    protected Location location = null;
    protected double latitude;
    protected double longitude;
    private static final long MINIMUM_DISTANCE_CHANGE_FOR_UPDATES = 10; // 10 meters
    private static final long MINIMUM_TIME_CHANGE_FOR_UPDATES = 1000 * 60 * 1; // 1 minute
    protected LocationManager locationManager;

    public GPSTrack(Context context) {
        this.mContext = context;
        getLocation();
    }

    public Location getLocation() {
        try {
            // Setup locationManager for controlling location services **ERROR HERE/Returns Null**
            locationManager = (LocationManager) mContext.getSystemService(LOCATION_SERVICE);
            // See if GPS is enabled
            isGPSEnabled = locationManager.isProviderEnabled(LocationManager.GPS_PROVIDER);
            // See if Network is connected to the internet or carrier service
            isNetworkEnabled = locationManager.isProviderEnabled(LocationManager.NETWORK_PROVIDER);
            if (!isGPSEnabled && !isNetworkEnabled) {
                Toast.makeText(getApplicationContext(), "No Network Provider Available", Toast.LENGTH_SHORT).show();
            } else {
                this.canGetLocation = true;
                if (isNetworkEnabled) {
                    locationManager.requestLocationUpdates(
                            LocationManager.NETWORK_PROVIDER,
                            MINIMUM_TIME_CHANGE_FOR_UPDATES,
                            MINIMUM_DISTANCE_CHANGE_FOR_UPDATES, this);
                    Log.d("GPS", "GPS Enabled");
                    if (locationManager != null) {
                        location = locationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
                        if (location != null) {
                            latitude = location.getLatitude();
                            longitude = location.getLongitude();
                        }
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return location;
    }

    public void stopUsingGPS() {
        if (locationManager != null) {
            locationManager.removeUpdates(GPSTrack.this);
        }
    }

    public double getLatitude() {
        if (location != null) {
            latitude = location.getLatitude();
        }
        return latitude;
    }

    public double getLongitude() {
        if (location != null) {
            longitude = location.getLongitude();
        }
        return longitude;
    }

    public boolean canGetLocation() {
        return this.canGetLocation;
    }

    public void showSettingsAlert() {
        AlertDialog.Builder alertDialog = new AlertDialog.Builder(mContext);
        // AlertDialog title
        alertDialog.setTitle("GPS Settings");
        // AlertDialog message
        alertDialog.setMessage("GPS is not enabled. Do you want to go to Settings?");
        alertDialog.setPositiveButton("Settings", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                // TODO Auto-generated method stub
                Intent i = new Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS);
                mContext.startActivity(i);
            }
        });
        alertDialog.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                // TODO Auto-generated method stub
                dialog.cancel();
            }
        });
        alertDialog.show();
    }

    @Override
    public void onLocationChanged(Location location) {
        // TODO Auto-generated method stub
    }

    @Override
    public void onStatusChanged(String provider, int status, Bundle extras) {
        // TODO Auto-generated method stub
    }

    @Override
    public void onProviderEnabled(String provider) {
        // TODO Auto-generated method stub
    }

    @Override
    public void onProviderDisabled(String provider) {
        // TODO Auto-generated method stub
    }

    @Override
    public IBinder onBind(Intent intent) {
        // TODO Auto-generated method stub
        return null;
    }
}

logcat

06-08 22:35:03.441: E/AndroidRuntime(1370): FATAL EXCEPTION: main
06-08 22:35:03.441: E/AndroidRuntime(1370): Process: com.google.android.gms, PID: 1370
06-08 22:35:03.441: E/AndroidRuntime(1370): java.lang.RuntimeException: Unable to start service com.google.android.gms.checkin.CheckinService@b1094e48 with Intent { cmp=com.google.android.gms/.checkin.CheckinService }: java.lang.SecurityException: attempting to read gservices without permission: Neither user 10053 nor current process has com.google.android.providers.gsf.permission.READ_GSERVICES.
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:2719)
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.app.ActivityThread.access$2100(ActivityThread.java:135)
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1293)
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.os.Handler.dispatchMessage(Handler.java:102)
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.os.Looper.loop(Looper.java:136)
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.app.ActivityThread.main(ActivityThread.java:5017)
06-08 22:35:03.441: E/AndroidRuntime(1370): at java.lang.reflect.Method.invokeNative(Native Method)
06-08 22:35:03.441: E/AndroidRuntime(1370): at java.lang.reflect.Method.invoke(Method.java:515)
06-08 22:35:03.441: E/AndroidRuntime(1370): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:779)
06-08 22:35:03.441: E/AndroidRuntime(1370): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:595)
06-08 22:35:03.441: E/AndroidRuntime(1370): at dalvik.system.NativeStart.main(Native Method)
06-08 22:35:03.441: E/AndroidRuntime(1370): Caused by: java.lang.SecurityException: attempting to read gservices without permission: Neither user 10053 nor current process has com.google.android.providers.gsf.permission.READ_GSERVICES.
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.app.ContextImpl.enforce(ContextImpl.java:1685)
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.app.ContextImpl.enforceCallingOrSelfPermission(ContextImpl.java:1714)
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.content.ContextWrapper.enforceCallingOrSelfPermission(ContextWrapper.java:572)
06-08 22:35:03.441: E/AndroidRuntime(1370): at imq.c(SourceFile:107)
06-08 22:35:03.441: E/AndroidRuntime(1370): at imq.a(SourceFile:121)
06-08 22:35:03.441: E/AndroidRuntime(1370): at imq.a(SourceFile:227)
06-08 22:35:03.441: E/AndroidRuntime(1370): at bwq.c(SourceFile:166)
06-08 22:35:03.441: E/AndroidRuntime(1370): at com.google.android.gms.checkin.CheckinService.a(SourceFile:237)
06-08 22:35:03.441: E/AndroidRuntime(1370): at com.google.android.gms.checkin.CheckinService.onStartCommand(SourceFile:211)
06-08 22:35:03.441: E/AndroidRuntime(1370): at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:2702)

AndroidManifest

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.app"
    android:versionCode="1"
    android:versionName="1.0" >
    <uses-sdk android:minSdkVersion="14" android:targetSdkVersion="19" />
    <uses-permission android:name="com.curio.permission.MAPS_RECEIVE" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />
    <uses-permission android:name="android.permission.ACCESS_MOCK_LOCATION" />
    <uses-feature android:name="android.hardware.camera" android:required="true" />
    <uses-feature android:glEsVersion="0x00020000" android:required="true" />
    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name="com.app.MainActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <meta-data
            android:name="com.google.android.gms.version"
            android:value="@integer/google_play_services_version" />
        <meta-data
            android:name="com.google.android.maps.v2.API_KEY"
            android:value="AI........................" />
    </application>
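Two observations worth separating here, with a hedged sketch for the second. First, the logcat's "Process: com.google.android.gms" line shows the crash is happening inside the Google Play services package that was sideloaded onto the emulator, not inside this app's process, so the READ_GSERVICES permission in this manifest cannot fix it. Second, the null LocationManager has a simpler cause: private Context mContext = getActivity(); runs at field-initialization time, before the fragment is attached, so mContext is null by the time setupMap() builds the GPSTrack. A minimal fix sketch (assumed code, not from the post):

public class gMapFragment extends SupportMapFragment {
    private Context mContext; // do NOT call getActivity() in the initializer

    @Override
    public void onAttach(Activity activity) {
        super.onAttach(activity);
        mContext = activity; // safe here: the fragment is attached now
    }

    private void setupMap() {
        // GPSTrack only needs a Context; the application context avoids
        // leaking the Activity into a long-lived Service-like object.
        GPSTrack gps = new GPSTrack(mContext.getApplicationContext());
        // ... rest of setupMap() unchanged ...
    }
}

With a null mContext, the original code would throw inside getLocation(), be swallowed by the catch (Exception e) block, and leave locationManager null, which looks exactly like getSystemService() "returning null".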

    Read the article

  • Filtering bad requests from Apache -> logger -> rsyslog to syslog-ng on a remote logging server possible?

    - by zeyus
EDIT: Thanks for the help. Here is a quick idea of the setup:

webserver X

In apache httpd.conf:

LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vcombined
CustomLog "|/usr/bin/logger -p local6.info -t access " vcombined

In rsyslog.conf:

*.* @logserver

Logserver syslog-ng.conf:

...
parser p_apache {
    csv-parser(columns(
        "APACHE.VIRTUAL_HOST", "APACHE.CLIENT_IP", "APACHE.IDENT_NAME",
        "APACHE.USER_NAME", "APACHE.TIMESTAMP", "APACHE.REQUEST_URL",
        "APACHE.REQUEST_STATUS", "APACHE.CONTENT_LENGTH", "APACHE.REFERER",
        "APACHE.USER_AGENT", "APACHE.PROCESS_TIME", "APACHE.SERVER_NAME")
    # flags:
    # escape-none,escape-backslash,escape-double-char,
    # strip-whitespace
    flags(escape-double-char,strip-whitespace)
    delimiters(" ")
    quote-pairs('""[]')
    );
};
...
source s_net { udp(ip(0.0.0.0) port(514) so_rcvbuf(1048576)); };
destination hosts_acc { file("/var/log/hosts/$HOST/${APACHE.VIRTUAL_HOST}_acc.log"); };
filter f_apacheacc { facility(local6); };
log { source(s_net); parser(p_apache); filter(f_apacheacc); destination(hosts_acc); };
...

The logs get there just fine, but there are a LOT of log files like the following:

-rw------- 1 root root 5726 Apr 6 01:02 xc3\x9d\xc3\x9ed$yA;_acc.log
-rw------- 1 root root 23435 Apr 6 01:06 \xc3\x9ed$yA;_acc.log
-rw------- 1 root root 745 Apr 6 00:57 xc3\x9ed$yA;_acc.log
-rw------- 1 root root 8440 Apr 5 22:50 \xc3\xaf_F\xc3\x95$yA;_acc.log
-rw------- 1 root root 3112 Apr 6 00:58 xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log
-rw------- 1 root root 4220 Apr 5 22:03 xe2\x80\x98\twd\xc2\xa2\xc2\xb0\xc3\x96$yA;_acc.log
-rw------- 1 root root 1055 Apr 5 22:03 xe2\x80\x98\xc2\x9dw\xc3\x94\xc3\xb4T\xc5\x93$yA;_acc.log
-rw------- 1 root root 1821 Apr 6 00:58 \xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log
-rw------- 1 root root 2875 Apr 6 01:02 xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log
-rw------- 1 root root 3165 Apr 5 22:48 \xe2\x80\x99-w\xc3\xaf_F\xc3\x95$yA;_acc.log
-rw------- 1 root root 3165 Apr 5 22:40 \xe2\x80\x99\xe2\x80\x9aw\xe2\x82\xac\xc2\xbd\xe2\x80\x9d($yA;_acc.log
-rw------- 1 root root 15825 Apr 5 22:50 xe2\x80\x99\xe2\x80\x9aw\xe2\x82\xac\xc2\xbd\xe2\x80\x9d($yA;_acc.log
-rw------- 1 root root 1055 Apr 5 22:39 \xe2\x80\x9aw\xe2\x82\xac\xc2\xbd\xe2\x80\x9d($yA;_acc.log
-rw------- 1 root root 2110 Apr 5 22:50 xe2\x80\x9aw\xe2\x82\xac\xc2\xbd\xe2\x80\x9d($yA;_acc.log
-rw------- 1 root root 2034 Apr 5 22:50 \xe2\x80\x9d($yA;_acc.log
-rw------- 1 root root 4066 Apr 5 22:45 xe2\x80\x9d($yA;_acc.log
-rw------- 1 root root 7212 Apr 6 13:30 \xe2\x80\xb9>$yA;_acc.log
-rw------- 1 root root 3000 Apr 6 13:25 xe2\x80\xb9>$yA;_acc.log

My question is where, and how, can I filter these out? I don't want them on the filesystem. (Actually, I guess it wouldn't be a bad idea to keep them logged, but in their correct VHost file.) Here is an example VHost:

<VirtualHost *:80>
    ServerAdmin [email protected]
    ServerName xxx.xx
    DocumentRoot /var/www/vhosts/xxx
    <Directory /var/www/vhosts/xxx>
        AllowOverride All
        Options All
        RewriteEngine on
    </Directory>
</VirtualHost>

And the default "catch-all" vhost at the bottom of the vhosts config file:

<VirtualHost *:80>
    ServerName default
    ServerAlias *
    ServerAlias catchall.xxx.xx
    DocumentRoot /var/www/vhosts/nodomain
    <Directory "/var/www/vhosts/nodomain">
        Options Indexes FollowSymLinks
        AllowOverride none
        Allow from All
    </Directory>
    CustomLog /dev/null combined
    ErrorLog /dev/null
</VirtualHost>

I had posted this in a related question, but it's better in its own question. Here are some examples from inside the log files:

r_acc.log:
Apr 7 11:16:27 xxxxx access: r PC 5.0; eSobiSubscriber 2.0.4.16; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)"
Apr 7 11:16:28 xxxxx access: r PC 5.0; eSobiSubscriber 2.0.4.16; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)"

########################

D46-28E2-0FBC95-78798EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log:
Apr 7 14:54:06 xxxxx access: D46-28E2-0FBC95-78798EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; B557000E-F20D-35DD-021A-9824EC-17A4AFV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; 3BD03D7B-EEFD-83FF-7599-B751AD-6F0A2EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; 9CAE0724-D455-0B31-3378-871C11-BBD0A4V\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; C1E24799-3979-2452-81-3BAA0FFD361F5A; 0E701CBC-5832-5AB6-D5-CFBF9BDE863EAA; 464714B1-B3E2-774A-A4-FEA612A46CEE06; 74C817B0-D081-D2CC-6D-C4EF0F1B4F49BB; 1338B1DE-67CD-977C-B35D-1F2C4441DD6A; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729; OfficeLiveConnector.1.5; OfficeLivePatch.1.3; .NET4.0C; BRI/2)"

########################

V\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA;_acc.log:
Apr 7 14:55:04 xxxxx access: V\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; FEEACE4F-092A-1D46-28E2-0FBC95-78798EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; B557000E-F20D-35DD-021A-9824EC-17A4AFV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; 3BD03D7B-EEFD-83FF-7599-B751AD-6F0A2EV\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; 9CAE0724-D455-0B31-3378-871C11-BBD0A4V\xe2\x80\x94w\xe2\x80\x98\xc3\x9d\xc3\x9ed$yA; C1E24799-3979-2452-81-3BAA0FFD361F5A; 0E701CBC-5832-5AB6-D5-CFBF9BDE863EAA; 464714B1-B3E2-774A-A4-FEA612A46CEE06; 74C817B0-D081-D2CC-6D-C4EF0F1B4F49BB; 1338B1DE-67CD-977C-B35D-1F2C4441DD6A; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729; OfficeLiveConnector.1.5; OfficeLivePatch.1.3; .NET4.0C; BRI/2)"

###################

xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA;_acc.log:
Apr 7 19:48:39 xxxxx access: xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 3C12D25C-9D40-91CF-1F40-AC-B1A083426DV-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; D4713FA8-0142-A0C2-4812-BA-E03221005BV-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 199BAF2A-ECD5-39FA-65C3-E8-B107FAFF08V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 384BDA70-9954-7744-05A0-C4-C7D9FEA685V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; EE7292A9-333C-AF70-5A7F-55-CAA7D0BA39V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; -AD7D48FA3A55-2A33-D10B-B4B66276D8B8; -166A9C6A2E71-24DF-A192-C8258AA4DE14; -00077C6C84E0-A302-4954-3D6D17C54D31; 3F56C318-EC3C-432B-680F-7E4BB2B852C4; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)"
Apr 7 19:48:39 xxxxx access: xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 3C12D25C-9D40-91CF-1F40-AC-B1A083426DV-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; D4713FA8-0142-A0C2-4812-BA-E03221005BV-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 199BAF2A-ECD5-39FA-65C3-E8-B107FAFF08V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; 384BDA70-9954-7744-05A0-C4-C7D9FEA685V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; EE7292A9-333C-AF70-5A7F-55-CAA7D0BA39V-w\xc2\x90\xc3\x91\xc3\x94\xc2\xab$yA; -AD7D48FA3A55-2A33-D10B-B4B66276D8B8; -166A9C6A2E71-24DF-A192-C8258AA4DE14; -00077C6C84E0-A302-4954-3D6D17C54D31; 3F56C318-EC3C-432B-680F-7E4BB2B852C4; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)"

Thanks
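One place this can be filtered is on the syslog-ng side, by refusing to expand ${APACHE.VIRTUAL_HOST} into a filename unless it actually looks like a hostname. This is a sketch only: it assumes a syslog-ng 3.x that supports match() against a named value, and the filter, destination names and regex are made up for illustration.

# Accept only plausible hostnames in the parsed vhost column.
filter f_valid_vhost {
    match("^[A-Za-z0-9.-]+$" value("APACHE.VIRTUAL_HOST"));
};

# Everything else lands in one fixed catchall file instead of spawning
# a new junk-named file per garbled record.
destination d_junk { file("/var/log/hosts/$HOST/unparsed_acc.log"); };

log { source(s_net); parser(p_apache); filter(f_apacheacc);
      filter(f_valid_vhost); destination(hosts_acc); flags(final); };
log { source(s_net); parser(p_apache); filter(f_apacheacc);
      destination(d_junk); };

The flags(final) on the first path stops matched messages there, so only records with a garbled first column fall through to d_junk, which also keeps them logged, per the parenthetical above, just not under a binary filename.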

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
I have the following Pig script, which works perfectly using the grunt shell (it stored the results to HDFS without any issues); however, the last job (ORDER BY) fails if I run the same script using Java EmbeddedPig. If I replace the ORDER BY job with another, such as GROUP or FOREACH GENERATE, the whole script then succeeds in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Does anyone have any experience with this? Any help would be appreciated!

The Pig script:

REGISTER pig-udf-0.0.1-SNAPSHOT.jar;
user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float);
simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score;
grouped_user_similarity = GROUP simplified_user_similarity BY user_id;
ordered_user_similarity = FOREACH grouped_user_similarity {
    sorted = ORDER simplified_user_similarity BY sim_score DESC;
    top = LIMIT sorted 10;
    GENERATE group, top;
};
top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10);
all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0);
grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id;
influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score;
ordered_influence_scores = ORDER influence_scores BY influence_score DESC;
STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage();

The error log from Pig:

12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job
12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1
12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1
12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1
12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x
12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470
12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc
12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc
12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc
12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc
12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar
12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split
12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo
12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml
12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null
12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100
12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720
12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680
12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004
java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
    at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153)
    at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112)
    ... 6 more
12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004
12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs
12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete
12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed!
12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics:

HadoopVersion   PigVersion     UserId   StartedAt            FinishedAt           Features
0.20.2-cdh3u3   0.8.1-cdh3u3   cchuang  2012-04-05 10:00:34  2012-04-05 10:01:04  GROUP_BY,ORDER_BY

Some jobs have failed! Stop running all dependent jobs

Job Stats (time in seconds):
JobId           Maps  Reduces  MaxMapTime  MinMapTIme  AvgMapTime  MaxReduceTime  MinReduceTime  AvgReduceTime  Alias  Feature  Outputs
job_local_0001  0     0        0           0           0           0              0              0              all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity  GROUP_BY
job_local_0002  0     0        0           0           0           0              0              0              grouped_influence_scores,influence_scores  GROUP_BY,COMBINER
job_local_0003  0     0        0           0           0           0              0              0              ordered_influence_scores  SAMPLER

Failed Jobs:
JobId           Alias                     Feature   Message              Outputs
job_local_0004  ordered_influence_scores  ORDER_BY  Message: Job failed! Error - NA  /tmp/cc-test-results-1,

Input(s):
Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000"

Output(s):
Failed to produce result in "/tmp/cc-test-results-1"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_local_0001 -> job_local_0002,
job_local_0002 -> job_local_0003,
job_local_0003 -> job_local_0004,
job_local_0004

12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
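A detail in the log points at a likely cause: the ORDER BY sampler job is picked up by mapred.LocalJobRunner, and the missing pigsample_* file is looked up as a file: path, so the embedded run is falling back to local mode while the sampler's temp file lives on HDFS. Below is a hedged sketch of forcing the embedded run onto the cluster; the property values and script filename are placeholders, and it assumes the Pig 0.8 PigServer API shown in the log's version string.

import java.util.Properties;
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class RunInfluenceScript {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The same cluster settings grunt picks up from pig.properties / hadoop conf:
        props.setProperty("fs.default.name", "hdfs://localhost:8020");  // assumed
        props.setProperty("mapred.job.tracker", "localhost:8021");      // assumed

        PigServer pig = new PigServer(ExecType.MAPREDUCE, props);
        pig.setBatchOn();                     // queue the script's STORE statements
        pig.registerScript("influence.pig");  // the script quoted above, assumed filename
        pig.executeBatch();                   // run all queued jobs
    }
}

If grunt works and the embedded run still fails the same way, comparing the effective fs.default.name in both runs is usually the fastest way to confirm the local-vs-HDFS mismatch.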

    Read the article

  • PHP SOAP error: Method element needs to belong to the namespace

    - by kdm
I'm unable to retrieve data from an XML document; any help is greatly appreciated. I'm using PHP 5.2.10, and the WSDL URL is an internal link within my company. The following code produces an error:

$url = "http://dta-info/IVR/IVRINFO?WSDL";
$params = array("zANI" => "12345");
try {
    $client = new SoapClient($url, array(
        'trace' => 1,
        'connection_timeout' => 2,
        'location' => $url
    ));
} catch (SoapFault $fault) {
    echo "faultstring: {$fault->faultstring})\n";
}
try {
    $result = $client->GetIVRinfo($params);
} catch (SoapFault $fault) {
    echo "(faultcode: {$fault->faultcode}, faultstring: {$fault->faultstring} )\n";
}

(faultcode: SOAP-ENV:Client, faultstring: There should be no path or parameters after a SOAP vname. )

So I tried to use a non-WSDL mode, but I receive a different error no matter how I try to format the params:

$url = "http://dta-info/IVR/IVRINFO";
$params = array("zANI" => "12345");
try {
    $client = new SoapClient(null, array(
        'trace' => 1,
        'connection_timeout' => 2,
        'location' => $url,
        'uri' => $uri,
        'style' => SOAP_DOCUMENT,
        'use' => SOAP_LITERAL,
        'soap_version' => SOAP_2
    ));
} catch (SoapFault $fault) {
    echo "faultstring: {$fault->faultstring})\n";
}
try {
    $result = $client->GetIVRinfo($params);
} catch (SoapFault $fault) {
    echo "(faultcode: {$fault->faultcode}, faultstring: {$fault->faultstring} )\n";
}

(faultcode: SOAP-ENV:Client, faultstring: The method element needs to belong to the namespace 'http://GETIVRINFO/IVR/IVRINFO'. )

I have tested this WSDL with a tool called SoapUI, and it returns the results with no errors. So it leads me to believe I'm not formatting the vars or headers correctly with PHP. I also tried passing in an XML fragment as the param, but that returns the same error. What am I doing wrong?

$params = '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ivr="http://GETIVRINFO/IVR/IVRINFO">
    <soapenv:Header/>
    <soapenv:Body>
        <ivr:GetIVRinfo>
            <!--Optional:-->
            <ivr:zANI>12345</ivr:zANI>
        </ivr:GetIVRinfo>
    </soapenv:Body>
</soapenv:Envelope>';

Here is the WSDL document:

<?xml version="1.0"?>
<wsdl:definitions name="IVR" targetNamespace="http://GETIVRINFO/IVR/IVRINFO"
    xmlns:tns="http://GETIVRINFO/IVR/IVRINFO"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:sql="http://schemas.microsoft.com/SQLServer/2001/12/SOAP"
    xmlns:sqltypes="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types"
    xmlns:sqlmessage="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlMessage"
    xmlns:sqlresultstream="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlResultStream">
  <wsdl:types>
    <xsd:schema targetNamespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types' elementFormDefault='qualified' attributeFormDefault='qualified'>
      <xsd:import namespace='http://www.w3.org/2001/XMLSchema'/>
      <xsd:simpleType name='nonNegativeInteger'>
        <xsd:restriction base='xsd:int'>
          <xsd:minInclusive value='0'/>
        </xsd:restriction>
      </xsd:simpleType>
      <xsd:attribute name='IsNested' type='xsd:boolean'/>
      <xsd:complexType name='SqlRowSet'>
        <xsd:sequence>
          <xsd:element ref='xsd:schema'/>
          <xsd:any/>
        </xsd:sequence>
        <xsd:attribute ref='sqltypes:IsNested'/>
      </xsd:complexType>
      <xsd:complexType name='SqlXml' mixed='true'>
        <xsd:sequence>
          <xsd:any/>
        </xsd:sequence>
      </xsd:complexType>
      <xsd:simpleType name='SqlResultCode'>
        <xsd:restriction base='xsd:int'>
          <xsd:minInclusive value='0'/>
        </xsd:restriction>
      </xsd:simpleType>
    </xsd:schema>
    <xsd:schema targetNamespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlMessage' elementFormDefault='qualified' attributeFormDefault='qualified'>
      <xsd:import namespace='http://www.w3.org/2001/XMLSchema'/>
      <xsd:import namespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types'/>
      <xsd:complexType name='SqlMessage'>
        <xsd:sequence minOccurs='1' maxOccurs='1'>
          <xsd:element name='Class' type='sqltypes:nonNegativeInteger'/>
          <xsd:element name='LineNumber' type='sqltypes:nonNegativeInteger'/>
          <xsd:element name='Message' type='xsd:string'/>
          <xsd:element name='Number' type='sqltypes:nonNegativeInteger'/>
          <xsd:element name='Procedure' type='xsd:string'/>
          <xsd:element name='Server' type='xsd:string'/>
          <xsd:element name='Source' type='xsd:string'/>
          <xsd:element name='State' type='sqltypes:nonNegativeInteger'/>
        </xsd:sequence>
        <xsd:attribute ref='sqltypes:IsNested'/>
      </xsd:complexType>
    </xsd:schema>
    <xsd:schema targetNamespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlResultStream' elementFormDefault='qualified' attributeFormDefault='qualified'>
      <xsd:import namespace='http://www.w3.org/2001/XMLSchema'/>
      <xsd:import namespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types'/>
      <xsd:import namespace='http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlMessage'/>
      <xsd:complexType name='SqlResultStream'>
        <xsd:choice minOccurs='1' maxOccurs='unbounded'>
          <xsd:element name='SqlRowSet' type='sqltypes:SqlRowSet'/>
          <xsd:element name='SqlXml' type='sqltypes:SqlXml'/>
          <xsd:element name='SqlMessage' type='sqlmessage:SqlMessage'/>
          <xsd:element name='SqlResultCode' type='sqltypes:SqlResultCode'/>
        </xsd:choice>
      </xsd:complexType>
    </xsd:schema>
    <xsd:schema targetNamespace="http://GETIVRINFO/IVR/IVRINFO" elementFormDefault="qualified" attributeFormDefault="qualified">
      <xsd:import namespace="http://www.w3.org/2001/XMLSchema"/>
      <xsd:import namespace="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types"/>
      <xsd:import namespace="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlMessage"/>
      <xsd:import namespace="http://schemas.microsoft.com/SQLServer/2001/12/SOAP/types/SqlResultStream"/>
      <xsd:element name="GetIVRinfo">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element minOccurs="0" maxOccurs="1" name="zANI" type="xsd:string" nillable="true"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
      <xsd:element name="GetIVRinfoResponse">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element minOccurs="1" maxOccurs="1" name="GetIVRinfoResult" type="sqlresultstream:SqlResultStream"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
  </wsdl:types>
  <wsdl:message name="GetIVRinfoIn">
    <wsdl:part name="parameters" element="tns:GetIVRinfo"/>
  </wsdl:message>
  <wsdl:message name="GetIVRinfoOut">
    <wsdl:part name="parameters" element="tns:GetIVRinfoResponse"/>
  </wsdl:message>
  <wsdl:portType name="SXSPort">
    <wsdl:operation name="GetIVRinfo">
      <wsdl:input message="tns:GetIVRinfoIn"/>
      <wsdl:output message="tns:GetIVRinfoOut"/>
    </wsdl:operation>
  </wsdl:portType>
  <wsdl:binding name="SXSBinding" type="tns:SXSPort">
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
    <wsdl:operation name="GetIVRinfo">
      <soap:operation soapAction="http://GETIVRINFO/IVR/IVRINFO/GetIVRinfo" style="document"/>
      <wsdl:input>
        <soap:body use="literal"/>
      </wsdl:input>
      <wsdl:output>
        <soap:body use="literal"/>
      </wsdl:output>
    </wsdl:operation>
  </wsdl:binding>
  <wsdl:service name="IVR">
    <wsdl:port name="SXSPort" binding="tns:SXSBinding">
      <soap:address location="http://GETIVRINFO/IVR/IVRINFO"/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>

    Read the article

  • I need some help with either my SQL or my PHP I do not know which...

    - by sico87
Hello, I am creating a CMS, and part of its functionality is that the images within the content are manageable. I am currently trying to display a table that shows the content title and then the associated images; ideally I would like a layout similar to this:

Content Title
  Image 1 Image 2 Image 3
Content Title 2
  Image 1 Image 2
Content Title 3
  Image 1

The SQL that returns the data is actually formed using CodeIgniter's Active Record class:

function getAllContentImages()
{
    $this->db->select('*');
    $this->db->from('contentImagesTable');
    $this->db->join('contentTable', 'contentTable.contentId = contentImagesTable.contentId');
    $this->db->join('categoryTable', 'categoryTable.categoryId = contentTable.categoryId');
    $query = $this->db->get();
    return $query->result_array();
}

The array that is returned looks like this (I have cut the size down for readability):

Array
(
    [0] => Array
        (
            [contentImageId] => 25
            [contentImageName] => green.png
            [contentImageType] => .png
            [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/2/green.png
            [isHeadlineImage] => 1
            [contentImageDateUploaded] => 1265222654
            [contentId] => 2
            [dashboardUserId] => 0
            [contentTitle] => sadsadsadassss
            [contentAbstract] => <p>Pllllleeeeeeeaaaaasssssseeeeee Work</p>
            [contentBody] => <p>Please work :-( please</p>
            [contentOnline] => 0
            [contentAllowComments] => 0
            [contentDateCreated] => 1265124038
            [categoryId] => 1
            [categoryTitle] => blogsss
            [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p>
            [categorySlug] => blog
            [categoryIsSpecial] => 0
            [categoryOnline] => 1
            [categoryDateCreated] => 1266588327
        )
    [1] => Array
        (
            [contentImageId] => 28
            [contentImageName] => yellow.png
            [contentImageType] => .png
            [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/7/yellow.png
            [isHeadlineImage] => 1
            [contentImageDateUploaded] => 1265388055
            [contentId] => 7
            [dashboardUserId] => 0
            [contentTitle] => Another Blog
            [contentAbstract] => <p>This is another blog and it is shit becuase this does not work</p>
            [contentBody] => <p>ioasfihfududfhdufhuishdfiudshfiudhsfiuhdsiufhusdhfuids</p>
            [contentOnline] => 1
            [contentAllowComments] => 0
            [contentDateCreated] => 1265388034
            [categoryId] => 1
            [categoryTitle] => blogsss
            [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p>
            [categorySlug] => blog
            [categoryIsSpecial] => 0
            [categoryOnline] => 1
            [categoryDateCreated] => 1266588327
        )
    [2] => Array
        (
            [contentImageId] => 33
            [contentImageName] => portaski.jpg
            [contentImageType] => .jpg
            [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/11/portaski.jpg
            [isHeadlineImage] => 1
            [contentImageDateUploaded] => 1265714175
            [contentId] => 11
            [dashboardUserId] => 0
            [contentTitle] => Portaski - new product and brand launch by Bang
            [contentAbstract] => <p>Bang's experience in new product development has helped launch PortaSki &ndash; the pocket-sized device which is set to revolutionise skiing.</p>
            [contentBody] => <p>After developing Portaski's brand identity and positioning, Bang re-designed the product and its packaging ahead of launch in late 2008.</p> <p>A media and PR strategy was devised and implemented using Bang's close relationship with two of the UK's most influential organisations in the Advertising and Media Buying industries. On-line advertising was supported with editorial reviews in the UK's leading broadsheets and tabloids, which combined with pin-point HTML direct mail to drive consumers to the new e-commerce site.</p> <p>Impressive month-on-month growth has been achieved since launch, and the direct marketing activity resulted in an unprecedented 2.71% of targets going on-line to purchase a PortaSki.</p> <p>For further information visit <a href="http://www.portaski.com" target="_blank">www.portaski.com</a></p>
            [contentOnline] => 1
            [contentAllowComments] => 0
            [contentDateCreated] => 1265718184
            [categoryId] => 1
            [categoryTitle] => blogsss
            [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p>
            [categorySlug] => blog
            [categoryIsSpecial] => 0
            [categoryOnline] => 1
            [categoryDateCreated] => 1266588327
        )
    [3] => Array
        (
            [contentImageId] => 26
            [contentImageName] => housingplus.jpg
            [contentImageType] => .jpg
            [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/5/housingplus.jpg
            [isHeadlineImage] => 1
            [contentImageDateUploaded] => 1265284989
            [contentId] => 5
            [dashboardUserId] => 0
            [contentTitle] => Bang launches Housing Plus
            [contentAbstract] => <p>Bang has launched Housing Plus, the new brand for the Central Borders Housing Group, along with new sub-brands Property Care and SSHA.</p>
            [contentBody] => <p>The Midlands based Group, with turnover in excess of &pound;21M, appointed Bang in 2008 following an open pitch of over 40 agencies. Bang's work began with an extensive marketing research strategy that challenged the Group's former positioning and brand structure.</p> <p>The research unveiled that the housing sector demanded a values-led Group. This led Bang to develop the brave &lsquo;Together for the Right Reasons' positioning for Housing Plus.</p> <p>Chris Garratt, Marketing Director at Bang explained "The housing sector has witnessed wholesale change in recent years. Much to tenant's dismay, many associations and Groups appear to be losing touch with their roots, we wanted to develop a Group for associations who place principles at the heart of their corporate strategy".</p> <p>The repositioned sub-brands also play an important role in the Group's revised brand by highlighting Housing Plus' willingness to embrace and nurture individual identities. Chris Garratt continued "By adopting a &lsquo;house of brands' hierarchy from the outset, Housing Plus has sent out a strong message to prospective strategic partners".</p> <p>Bang handled all aspects of work for the redevelopment of the three brands, including research, brand creation, naming, positioning, internal branding and communications, advertising, the brand launches, building the brands' on-line presence and the creation of a powerful brand film &ndash; which is already attracting significant interest from across the sector.</p>
            [contentOnline] => 1
            [contentAllowComments] => 0
            [contentDateCreated] => 1265285940
            [categoryId] => 8
            [categoryTitle] => News
            [categoryAbstract] => <p>The world at Bang Marketing moves fast, keep up to date w
            [categorySlug] => news
            [categoryIsSpecial] => 0
            [categoryOnline] => 1
            [categoryDateCreated] => 1265283717
        )
)

I need a way to get all the content images associated with the same content title into one group and then display them under the content title. Can anyone help?
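The usual pattern for this is to keep the join as-is and regroup the flat rows in PHP before rendering. A short sketch follows; the model property name and the output markup are placeholders, but the array keys match the dump above:

// Group the joined rows by contentId, then render one heading per title.
$rows = $this->content_model->getAllContentImages(); // assumed model name
$grouped = array();
foreach ($rows as $row) {
    $id = $row['contentId'];
    $grouped[$id]['title']    = $row['contentTitle'];
    $grouped[$id]['images'][] = $row['contentImageName'];
}
foreach ($grouped as $content) {
    echo '<h3>' . htmlspecialchars($content['title']) . "</h3>\n";
    foreach ($content['images'] as $image) {
        // Build an <img> tag from a web path; contentImagePath in the dump
        // is a server filesystem path, so it needs mapping to a URL first.
        echo htmlspecialchars($image) . "\n";
    }
}

Since result_array() returns one row per image, this costs a single query, and because rows are merged by contentId rather than by position, it works whether or not the rows for one title arrive consecutively.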

    Read the article

  • How can * be a safe hashed password?

    - by Exception e
phpass is a widely used hashing 'framework'. While evaluating phpass' HashPassword I came across this odd method fragment:

function HashPassword($password)
{
    // <snip> trying to generate a hash…
    # Returning '*' on error is safe here, but would _not_ be safe
    # in a crypt(3)-like function used _both_ for generating new
    # hashes and for validating passwords against existing hashes.
    return '*';
}

This is the complete phpass class:

# Portable PHP password hashing framework.
#
# Version 0.2 / genuine.
#
# Written by Solar Designer <solar at openwall.com> in 2004-2006 and placed in
# the public domain.
#
class PasswordHash {
    var $itoa64;
    var $iteration_count_log2;
    var $portable_hashes;
    var $random_state;

    function PasswordHash($iteration_count_log2, $portable_hashes)
    {
        $this->itoa64 = './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
        if ($iteration_count_log2 < 4 || $iteration_count_log2 > 31)
            $iteration_count_log2 = 8;
        $this->iteration_count_log2 = $iteration_count_log2;
        $this->portable_hashes = $portable_hashes;
        $this->random_state = microtime() . getmypid();
    }

    function get_random_bytes($count)
    {
        $output = '';
        if (is_readable('/dev/urandom') && ($fh = @fopen('/dev/urandom', 'rb'))) {
            $output = fread($fh, $count);
            fclose($fh);
        }
        if (strlen($output) < $count) {
            $output = '';
            for ($i = 0; $i < $count; $i += 16) {
                $this->random_state = md5(microtime() . $this->random_state);
                $output .= pack('H*', md5($this->random_state));
            }
            $output = substr($output, 0, $count);
        }
        return $output;
    }

    function encode64($input, $count)
    {
        $output = '';
        $i = 0;
        do {
            $value = ord($input[$i++]);
            $output .= $this->itoa64[$value & 0x3f];
            if ($i < $count)
                $value |= ord($input[$i]) << 8;
            $output .= $this->itoa64[($value >> 6) & 0x3f];
            if ($i++ >= $count)
                break;
            if ($i < $count)
                $value |= ord($input[$i]) << 16;
            $output .= $this->itoa64[($value >> 12) & 0x3f];
            if ($i++ >= $count)
                break;
            $output .= $this->itoa64[($value >> 18) & 0x3f];
        } while ($i < $count);
        return $output;
    }

    function gensalt_private($input)
    {
        $output = '$P$';
        $output .= $this->itoa64[min($this->iteration_count_log2 + ((PHP_VERSION >= '5') ? 5 : 3), 30)];
        $output .= $this->encode64($input, 6);
        return $output;
    }

    function crypt_private($password, $setting)
    {
        $output = '*0';
        if (substr($setting, 0, 2) == $output)
            $output = '*1';
        if (substr($setting, 0, 3) != '$P$')
            return $output;
        $count_log2 = strpos($this->itoa64, $setting[3]);
        if ($count_log2 < 7 || $count_log2 > 30)
            return $output;
        $count = 1 << $count_log2;
        $salt = substr($setting, 4, 8);
        if (strlen($salt) != 8)
            return $output;
        # We're kind of forced to use MD5 here since it's the only
        # cryptographic primitive available in all versions of PHP
        # currently in use.  To implement our own low-level crypto
        # in PHP would result in much worse performance and
        # consequently in lower iteration counts and hashes that are
        # quicker to crack (by non-PHP code).
        if (PHP_VERSION >= '5') {
            $hash = md5($salt . $password, TRUE);
            do {
                $hash = md5($hash . $password, TRUE);
            } while (--$count);
        } else {
            $hash = pack('H*', md5($salt . $password));
            do {
                $hash = pack('H*', md5($hash . $password));
            } while (--$count);
        }
        $output = substr($setting, 0, 12);
        $output .= $this->encode64($hash, 16);
        return $output;
    }

    function gensalt_extended($input)
    {
        $count_log2 = min($this->iteration_count_log2 + 8, 24);
        # This should be odd to not reveal weak DES keys, and the
        # maximum valid value is (2**24 - 1) which is odd anyway.
        $count = (1 << $count_log2) - 1;
        $output = '_';
        $output .= $this->itoa64[$count & 0x3f];
        $output .= $this->itoa64[($count >> 6) & 0x3f];
        $output .= $this->itoa64[($count >> 12) & 0x3f];
        $output .= $this->itoa64[($count >> 18) & 0x3f];
        $output .= $this->encode64($input, 3);
        return $output;
    }

    function gensalt_blowfish($input)
    {
        # This one needs to use a different order of characters and a
        # different encoding scheme from the one in encode64() above.
        # We care because the last character in our encoded string will
        # only represent 2 bits.  While two known implementations of
        # bcrypt will happily accept and correct a salt string which
        # has the 4 unused bits set to non-zero, we do not want to take
        # chances and we also do not want to waste an additional byte
        # of entropy.
        $itoa64 = './ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
        $output = '$2a$';
        $output .= chr(ord('0') + $this->iteration_count_log2 / 10);
        $output .= chr(ord('0') + $this->iteration_count_log2 % 10);
        $output .= '$';
        $i = 0;
        do {
            $c1 = ord($input[$i++]);
            $output .= $itoa64[$c1 >> 2];
            $c1 = ($c1 & 0x03) << 4;
            if ($i >= 16) {
                $output .= $itoa64[$c1];
                break;
            }
            $c2 = ord($input[$i++]);
            $c1 |= $c2 >> 4;
            $output .= $itoa64[$c1];
            $c1 = ($c2 & 0x0f) << 2;
            $c2 = ord($input[$i++]);
            $c1 |= $c2 >> 6;
            $output .= $itoa64[$c1];
            $output .= $itoa64[$c2 & 0x3f];
        } while (1);
        return $output;
    }

    function HashPassword($password)
    {
        $random = '';
        if (CRYPT_BLOWFISH == 1 && !$this->portable_hashes) {
            $random = $this->get_random_bytes(16);
            $hash = crypt($password, $this->gensalt_blowfish($random));
            if (strlen($hash) == 60)
                return $hash;
        }
        if (CRYPT_EXT_DES == 1 && !$this->portable_hashes) {
            if (strlen($random) < 3)
                $random = $this->get_random_bytes(3);
            $hash = crypt($password, $this->gensalt_extended($random));
            if (strlen($hash) == 20)
                return $hash;
        }
        if (strlen($random) < 6)
            $random = $this->get_random_bytes(6);
        $hash = $this->crypt_private($password, $this->gensalt_private($random));
        if (strlen($hash) == 34)
            return $hash;
        # Returning '*' on error is safe here, but would _not_ be safe
        # in a crypt(3)-like function used _both_ for generating new
        # hashes and for validating passwords against existing hashes.
        return '*';
    }

    function CheckPassword($password, $stored_hash)
    {
        $hash = $this->crypt_private($password, $stored_hash);
        if ($hash[0] == '*')
            $hash = crypt($password, $stored_hash);
        return $hash == $stored_hash;
    }
}
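The answer to the question is visible in CheckPassword() above: '*' can never equal the output of a successful hash (every real hash here is 20, 34 or 60 characters), and when a stored '*' is fed back in, crypt_private() rejects it and crypt() fails too, so the comparison comes out false. A tiny sketch of that behaviour, as assumed test code that is not part of phpass:

$hasher = new PasswordHash(8, TRUE);

// '*' is what HashPassword() returns when every hashing backend failed.
$stored = '*';
var_dump($hasher->CheckPassword('any password', $stored)); // bool(false)

// Contrast with a real hash:
$stored = $hasher->HashPassword('secret');
var_dump($hasher->CheckPassword('secret', $stored));       // bool(true)

The '*0'/'*1' shuffle at the top of crypt_private() is the same idea in the other direction: if a stored value happens to start with '*0', the failure marker becomes '*1', so an error return can never accidentally equal what is in the database. That is exactly what the comment warns about: a crypt(3)-like function used for both generating and validating could return its fixed error string while validating against a stored error string and wrongly report a match.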

    Read the article

< Previous Page | 83 84 85 86 87 88  | Next Page >