Search Results

Search found 4590 results on 184 pages for 'direction'.


  • How to make a form using AJAX and an onchange event that submits to the SAME page without reloading

    - by user1348220
    I've been studying this for a while and I'm not sure if I'm going about this the right way because every form I setup according to examples, it doesn't do what I need. I need to setup a form that will: set session when you select from dropdown menu not reload/refresh page (i've read that using AJAX solves this) submit and stay on SAME page (confused because most AJAX examples send it to different process.php page which is supposedly "invisible" but it doesn't "stay" on the same page, it redirects. Basically, client selects quantity of 1 to 10. If they select "2"... it does NOT reload the page.. but it DOES set a session[quantity]=2. Should be simple... but do I POST to same page as form? or POST to different page and it somehow redirects? Also, one test I did it kept pasting my "echo session[quantity]" down the page like 7, 2, 3, 5, etc. etc. each time instead of replacing it. I would paste code but it's all over the place and I'm hoping for direction on which methods to use. Feel I need to start all over again. Edit: trying to add code below but can't seem to paste it properly. <? ob_start();?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <?php session_start(); ?> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Submit Form with out refreshing page Tutorial</title> <!-- JavaScript --> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script> <script type="text/javascript" > $(function() { $(".submit").click(function() { var gender = $("#gender").val(); var dataString = '&gender=' + gender; if(gender=='') { $('.success').fadeOut(200).hide(); $('.error').fadeOut(200).show(); } else { $.ajax({ type: "POST", url: "join.php", data: dataString, success: function(){ $('.success').fadeIn(200).show(); $('.error').fadeOut(200).hide(); } }); } return false; }); }); </script> <style type="text/css"> body{ } .error{ color:#d12f19; font-size:12px; } .success{ color:#006600; font-size:12px; } </style> </head> <body id="public"> <div style="height:30px"></div> <div id="container"> <div style="height:30px"></div> <form method="post" name="form"> <select id="gender" name="gender"> <option value="">Gender</option> <option value="male">Male</option> <option value="female">Female</option> </select> <div> <input type="submit" value="Submit" class="submit"/> <span class="error" style="display:none"> Please Enter Valid Data</span> <span class="success" style="display:none"> Your gender is <?php echo $_SESSION['gender'];?></span> </div> </form> <div style="height:20px"></div> </div><!--container--> </body> </html> <? ob_flush(); ?> and here is my page where the POST goes called join.php (called that in example so I went with it for now) <?php session_start(); if($_POST) { $gender = $_POST['gender']; $_SESSION['gender'] = $gender; } else { } ?>
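
    A minimal sketch of one way to wire this up, with the form and the handler on the same page (the file name quantity.php and the element IDs are assumptions, not from the post): the PHP at the top answers the AJAX request and exits, so the page itself never reloads, and the jQuery callback uses .html() so the new value replaces the old one instead of being appended down the page.
      <?php
      // quantity.php (hypothetical name) -- form and AJAX handler in one file
      session_start();
      if (isset($_POST['quantity'])) {
          $_SESSION['quantity'] = (int) $_POST['quantity'];
          echo $_SESSION['quantity'];   // this text becomes the AJAX response
          exit;                         // stop here: don't send the HTML again
      }
      ?>
      <!DOCTYPE html>
      <html>
      <head>
      <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
      <script>
      $(function () {
          $('#quantity').on('change', function () {
              $.post('quantity.php', { quantity: $(this).val() }, function (response) {
                  $('#result').html('Quantity saved: ' + response);  // replaces, never stacks
              });
          });
      });
      </script>
      </head>
      <body>
      <form>
          <select id="quantity" name="quantity">
              <option value="1">1</option>
              <option value="2">2</option>
              <option value="3">3</option>
          </select>
      </form>
      <span id="result"></span>
      </body>
      </html>
    Posting to a separate join.php works the same way; what keeps the browser on the page is simply that the request is made by JavaScript rather than by a normal form submit.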

    Read the article

  • Why doesn't JFreeChart correctly connect the points in my XY line graph?

    - by Javajava
    /Each letter A,T,G,C represents a direction for the plot to graph. Specifically, “A” means move right, “T” is move down, “C” is move up, and “G” is move left. When the applet reads A,T,C, it plots the graph correctly. However, when I plot G, the graph is messed up. When I input "ACACACA," the graph is like a rising staircase. When I input "gtgtgt," the graph should look like a staircase, but it looks like a lightning bolt instead/ /This is all one code... i don't know why it's all split up like this:/ import java.applet.Applet; import java.awt.*; import java.awt.event.*; import java.util.Scanner.*; import java.jfree.chart.*; import java.jfree.data.xy.*; import java.jfree.chart.plot.PlotOrientation; public class If_Graph extends Applet implements ActionListener{ Panel panel; TextArea textarea, outputArea; Button move; String thetext; Scanner reader = new Scanner(System.in); String thetext2; int size,p,q; int x,y; public void init(){ setSize(500,500); //set size of applet panel = new Panel(); add(panel); setVisible(true); textarea= new TextArea(10,20); add(textarea); move=new Button("Graph"); move.addActionListener(this); add(move); } public void actionPerformed(ActionEvent e) { XYSeries series = new XYSeries("DNA Walk"); x= 0; y = 0; series.add(x,y); if(e.getSource() == move) { thetext=textarea.getText(); //the text is the DNA bases pasted thetext=thetext.replaceAll(" ",""); //removes spaces thetext2 = ""; for(int i=0; i<thetext.length(); i++) { char a = thetext.charAt(i); switch (a) { case 'A': //moves right x+=1; y+=0; series.add(x,y); break; case 'a': x+=1;y+=0; series.add(x,y); break; case 'C': //moves up x+=0; y+=1; series.add(x,y); break; case 'c': x+=0; y+=1; System.out.println(x + "," + y); series.add(x,y); break; case 'G': //move left x-=1; y+=0; series.add(x,y); System.out.println("G is: "+ x +"," +y); break; case 'g': x-=1; y+=0; System.out.println("g is: " +x + "," + y); series.add(x,y); break; case 'T': //move down x+=0; y-=1; series.add(x,y); System.out.println("T is: "+ x +"," +y); break; case 't': x+=0; y-=1; series.add(x,y); System.out.println("t is: "+ x +"," +y); break; default: // series.add(0,0); break; } } XYDataset xyDataset = new XYSeriesCollection(series); JFreeChart chart = ChartFactory.createXYLineChart ("DNA Random Walk", "", "", xyDataset, PlotOrientation.VERTICAL, true, true, false); ChartFrame frame1=new ChartFrame("DNA Random Walk",chart); frame1.setVisible(true); frame1.setSize(300,300); outputArea.setText(thetext2); } } }
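
    One likely culprit, offered as a guess since only part of the setup is shown: XYSeries sorts its items by ascending x-value by default, so as soon as the walk moves left (G/g, i.e. decreasing x) the points are reordered before the line is drawn, which turns a left-and-down staircase into a zigzag. Building the series with auto-sorting disabled keeps the points in insertion order. (Note also that the JFreeChart packages are org.jfree.*, not java.jfree.*.)
      import org.jfree.data.xy.XYSeries;

      // autoSort = false keeps the points in the order they are added;
      // allowDuplicateXValues = true lets the walk revisit the same x coordinate
      XYSeries series = new XYSeries("DNA Walk", false, true);
      series.add(0, 0);
      // ...then add one point per base exactly as in the switch statement above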

    Read the article

  • CSS problem with unordered lists (as usual with IE)

    - by Emin
    I am using un-ordered lists that nests some divs to show the desired output on screen. I am using css to style them and they seem to look perfect on chrome and firefox. But in IE(8) it looks there is a problem which I was unable to locate. I am using the below CSS <style type="text/css"> .ur_container {width:980px; padding: 0; margin: 0;} .ur_container ul.bx_grp {list-style-type: none; padding: 0px; margin: 0px; } .ur_container ul.bx_lnx {list-style-type: none; padding: 5px; margin: 0px; } .bx_grp {border:1px solid #c5c5c5; background-color: yellow; margin:0; padding:0;} .bx_grp_header {background-color: #d6d6d6; border-bottom:1px solid #acacac;} .bx_grp_title {float: left; font: bold 11px Arial; padding:5px;} .bx_grp_options {float: right; font: 10px Arial; padding: 5px;} .bx_grp_options a{color: #125B93; text-decoration: none; } .bx_lnx {padding:0px; background-color: red;} .bx_lnx_header {font:11px Arial; color:#333;} .bx_lnx_title {float: left;} .bx_lnx_refno {background-color:#333; color: fff; padding: 1px; margin-right: 5px; } .bx_lnx_options {float: right;} .bx_lnx_options a {color: #258CF4; text-decoration: none;} .bx_lnx_url {font: 9px Arial; color: #999; margin-top: 4px; } .bx_lnx_notes {} .bx_lnx_notes span {background-color: #FDFFCC; color: #666; font: 9px Arial; padding:2px;} .bx_lnx_tags {} .bx_lnx_tags span {background-color: #efefef; border-bottom: 1px solid #ccc; color: #666; font: 9px Arial; padding: 1px 2px 1px 2px; margin-right: 5px;} </style> Against the following HTML <div class="ur_container"> <ul class="bx_grp" id="grp_1"> <li> <div class="bx_grp_header"> <span class="bx_grp_title">Personal File</span> <span class="bx_grp_options"><a href="#">rename</a> &bull; <a href="#">make private</a> &bull; <a href="#">hide</a href="#"> &bull; <a href="#">delete</a></span> <div style="clear: both;"></div> </div> </li> <li> <ul class="bx_lnx" id="lnx_1"> <li> <div class="bx_lnx_header"> <span class="bx_linx_title"><span class="bx_lnx_refno">#3103</span>How to file personal files</span> <span class="bx_lnx_options"><a href="#">edit</a> &bull; <a href="#">move</a> &bull; <a href="#">delete</a></span> </div> </li> <li class="bx_lnx_url">http://www.google.com</li> <li class="bx_lnx_notes"><span>search google for this</span></li> <li class="bx_lnx_tags"><span>personal</span><span>file</span><span>google</span></li> </ul> </li> </ul> </div> Which produces this output in Chrome and Fireworks and the following in IE The yellow and red colors was used in order to show that is being going wrong. The yellow part is the undesired one. Can anyone point me in the right direction please ? Regards
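
    A hedged guess at the cause: the inner .bx_lnx_header only contains floated children and, unlike the group header, has no clearing element after them, so IE8 collapses it and lets the parent's yellow background bleed through. Giving every float-only container a clearing rule (plus hasLayout for older IE) is the usual fix; also note the stray </a href="#"> closing tag and class="bx_linx_title", which doesn't match the stylesheet's .bx_lnx_title. A minimal sketch:
      .bx_grp_header,
      .bx_lnx_header {
          overflow: hidden;  /* make the box contain its floated children */
          zoom: 1;           /* force hasLayout in older versions of IE */
      }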

    Read the article

  • Webkit browser jQuery transformations/transitions not working with jSplitSlider

    - by user3689793
    I am helping to build a site and i'm having an issue with the functionality of an add-in called jsplitslider when running it in chrome. Right now, when I navigate between the slides, the div's get stuck on top of each other and never clear the webkit transformations/animations: <div class="sl-content-slice" style="transition: all 800ms ease-in-out; -webkit-transition: all 800ms ease-in-out;"> I think the problem is due to timing of the functions, but I can't seem to figure out where I would need to add a setTimeout(). I only think this because I exhausted a lot of the other options like display: inline-block, notransitions css, etc. I'm desperate to figure out how to make this work in chrome. It works in FF and IE(surprisingly enough). I'm not great at webcoding, so any help will be appreciated! The code on the site isn't minimized. Here is the jQuery where I think the problem lies: var cssStyle = config.orientation === 'horizontal' ? { marginTop : -this.size.height / 2 } : { marginLeft : -this.size.width / 2 }, // default slide's slices style resetStyle = { 'transform' : 'translate(0%,0%) rotate(0deg) scale(1)', opacity : 1 }, // slice1 style slice1Style = config.orientation === 'horizontal' ? { 'transform' : 'translateY(-' + this.options.translateFactor + '%) rotate(' + config.slice1angle + 'deg) scale(' + config.slice1scale + ')' } : { 'transform' : 'translateX(-' + this.options.translateFactor + '%) rotate(' + config.slice1angle + 'deg) scale(' + config.slice1scale + ')' }, // slice2 style slice2Style = config.orientation === 'horizontal' ? { 'transform' : 'translateY(' + this.options.translateFactor + '%) rotate(' + config.slice2angle + 'deg) scale(' + config.slice2scale + ')' } : { 'transform' : 'translateX(' + this.options.translateFactor + '%) rotate(' + config.slice2angle + 'deg) scale(' + config.slice2scale + ')' }; if( this.options.optOpacity ) { slice1Style.opacity = 0; slice2Style.opacity = 0; } // we are adding the classes sl-trans-elems and sl-trans-back-elems to the slide that is either coming "next" // or going "prev" according to the direction. // the idea is to make it more interesting by giving some animations to the respective slide's elements //( dir === 'next' ) ? 
$nextSlide.addClass( 'sl-trans-elems' ) : $currentSlide.addClass( 'sl-trans-back-elems' ); $currentSlide.removeClass( 'sl-trans-elems' ); var transitionProp = { 'transition' : 'all ' + this.options.speed + 'ms ease-in-out' }; // add the 2 slices and animate them $movingSlide.css( 'z-index', this.slidesCount ) .find( 'div.sl-content-wrapper' ) .wrap( $( '<div class="sl-content-slice" />' ).css( transitionProp ) ) .parent() .cond( dir === 'prev', function() { var slice = this; this.css( slice1Style ); setTimeout( function() { slice.css( resetStyle ); }, 150 ); }, function() { var slice = this; setTimeout( function() { slice.css( slice1Style ); }, 150 ); } ) .clone() .appendTo( $movingSlide ) .cond( dir === 'prev', function() { var slice = this; this.css( slice2Style ); setTimeout( function() { $currentSlide.addClass( 'sl-trans-back-elems' ); if( self.support ) { slice.css( resetStyle ).on( self.transEndEventName, function() { self._onEndNavigate( slice, $currentSlide, dir ); } ); } else { self._onEndNavigate( slice, $currentSlide, dir ); } }, 150 ); }, function() { var slice = this; setTimeout( function() { $nextSlide.addClass( 'sl-trans-elems' ); if( self.support ) { slice.css( slice2Style ).on( self.transEndEventName, function() { self._onEndNavigate( slice, $currentSlide, dir ); } ); } else { self._onEndNavigate( slice, $currentSlide, dir ); } }, 150 ); } ) .find( 'div.sl-content-wrapper' ) .css( cssStyle ); $nextSlide.show(); }, _validateValues : function( config ) { // OK, so we are restricting the angles and scale values here. // This is to avoid the slices wrong sides to be shown. // you can adjust these values as you wish but make sure you also ajust the // paddings of the slides and also the options.translateFactor value and scale data attrs if( config.slice1angle > this.options.maxAngle || config.slice1angle < -this.options.maxAngle ) { config.slice1angle = this.options.maxAngle; } if( config.slice2angle > this.options.maxAngle || config.slice2angle < -this.options.maxAngle ) { config.slice2angle = this.options.maxAngle; } if( config.slice1scale > this.options.maxScale || config.slice1scale <= 0 ) { config.slice1scale = this.options.maxScale; } if( config.slice2scale > this.options.maxScale || config.slice2scale <= 0 ) { config.slice2scale = this.options.maxScale; } if( config.orientation !== 'vertical' && config.orientation !== 'horizontal' ) { config.orientation = 'horizontal' } }, _onEndNavigate : function( $slice, $oldSlide, dir ) { // reset previous slide's style after next slide is shown var $slide = $slice.parent(), removeClasses = 'sl-trans-elems sl-trans-back-elems'; // remove second slide's slice $slice.remove(); // unwrap.. $slide.css( 'z-index', 10 ) .find( 'div.sl-content-wrapper' ) .unwrap(); // hide previous current slide $oldSlide.hide().removeClass( removeClasses ); $slide.removeClass( removeClasses ); // now we can navigate again.. this.isAnimating = false; this.options.onAfterChange( $slide, this.current ); }, Sorry if I missed any conventions when posting, this is my first S.O. post. Thanks in advance for any help.
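
    Two generic things worth trying, sketched here outside the plugin (the selector and transform values are placeholders, not the slider's own code): force a reflow after the slice is inserted so WebKit registers the starting style before the target transform is applied, and listen for webkitTransitionEnd as well as transitionend so the cleanup in _onEndNavigate always runs even when only the prefixed event fires.
      var slice = document.querySelector('.sl-content-slice');

      slice.style.webkitTransition = slice.style.transition = 'all 800ms ease-in-out';
      void slice.offsetWidth;   // forces a reflow so the start state is committed first

      slice.style.webkitTransform = slice.style.transform =
          'translateY(-100%) rotate(10deg) scale(0.9)';

      function done() {
          slice.removeEventListener('transitionend', done);
          slice.removeEventListener('webkitTransitionEnd', done);
          // ...the cleanup _onEndNavigate normally does: remove the slice, unwrap, reset z-index
      }
      slice.addEventListener('transitionend', done);
      slice.addEventListener('webkitTransitionEnd', done);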

    Read the article

  • Having trouble comparing a range of dates entered in a form with records in a MySQL database

    - by Andrew Fox
    I have a table called schedule and a column called Date where the column type is date. In that column I have a range of dates, which is currently from 2012-11-01 to 2012-11-30. I have a small form where the user can enter a range of dates (input names from and to) and I want to be able to compare the range of dates with the dates currently in the database. This is what I have: //////////////////////////////////////////////////////// //////First set the date range that we want to use////// //////////////////////////////////////////////////////// if(isset($_POST['from']) && ($_POST['from'] != NULL)) { $startDate = $_POST['from']; } else { //Default date is Today $startDate = date("Y-m-d"); } if(isset($_POST['to']) && ($_POST['to'] != NULL)) { $endDate = $_POST['to']; } else { //Default day is one month from today $endDate = date("Y-m-d", strtotime("+1 month")); } ////////////////////////////////////////////////////////////////////////////////////// //////Next calculate the total amount of days selected above to use as a limiter////// ////////////////////////////////////////////////////////////////////////////////////// $dayStart = strtotime($startDate); $dayEnd = strtotime($endDate); $total_days = abs($dayEnd - $dayStart) / 86400 +1; echo "Start Date: " . $startDate . "<br>End Date: " . $endDate . "<br>"; echo "Day Start: " . $dayStart . "<br>Day End: " . $dayEnd . "<br>"; echo "Total Days: " . $total_days . "<br>"; //////////////////////////////////////////////////////////////////////////////////// //////Then we're going to see if the dates selected are in the schedule table////// //////////////////////////////////////////////////////////////////////////////////// //Select all of the dates currently in the schedule table between the range selected. $sql = ("SELECT Date FROM schedule WHERE Date BETWEEN '$startDate' AND '$endDate' LIMIT $total_days"); //Run a check on the query to make sure it worked. If it failed then print the error. if(!$result_date_query = $mysqli->query($sql)) { die('There was an error getting the dates from the schedule table [' . $mysqli->error . ']'); } //Set the dates to an array for future use. // $current_dates = $result_date_query->fetch_assoc(); //Loop through the results while a result is being returned. while($row = $result_date_query->fetch_assoc()) { echo "Row: " . $row['Date'] . "<br>"; echo "Start day: " . date('Y-m-d', $dayStart) . "<br>"; //Set this loop to add 1 day to the Start Date until it reaches the End Date for($i = $dayStart; $i <= $dayEnd; $i = strtotime('+1 day', $i)) { $date = date('Y-m-d',$i); echo "Loop Start day: " . date('Y-m-d', $dayStart) . "<br>"; //Run a check to see if any of the dates selected are in the schedule table. if($row['Date'] != $date) { echo "Current Date: " . $row['Date'] . "<br>"; echo "Date: " . $date . "<br>"; echo "It appears as though you've selected some dates that are not in the schedule database.<br>Please correct the issue and try again."; return; } } } //Free the result so something else can use it. $result_date_query->free(); As you can see I've added in some echo statements so I can see what is being produced. From what I can see it looks like my $row['Date'] is not incrementing and staying at the same date. I originally had it set to a variable (currently commented out) but I thought that could be causing problems. I have created the table with dates ranging from 2012-11-01 to 2012-11-15 for testing and entered all of this php code onto phpfiddle.org but I can't get the username provided to connect. 
Here is the link: PHP Fiddle. I'll be reading through the documentation to try to figure out the user connection problem in the meantime; I would really appreciate any direction or advice you can give me.
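
    On the comparison logic itself: the inner for loop tests every day of the requested range against one row at a time, so the "not in the schedule" branch fires on the first mismatch even when that date exists in a later row. A sketch of the check reordered, reusing the variables from the post: collect the scheduled dates into a lookup first, then walk the requested range once.
      $scheduled = array();
      while ($row = $result_date_query->fetch_assoc()) {
          $scheduled[$row['Date']] = true;          // key the lookup by the date string
      }

      $missing = array();
      for ($i = $dayStart; $i <= $dayEnd; $i = strtotime('+1 day', $i)) {
          $day = date('Y-m-d', $i);
          if (!isset($scheduled[$day])) {           // requested day not in the table
              $missing[] = $day;
          }
      }

      if (!empty($missing)) {
          echo "You've selected dates that are not in the schedule table: " . implode(', ', $missing);
          return;
      }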

    Read the article

  • Object oriented n-tier design. Am I abstracting too much? Or not enough?

    - by max
    Hi guys, I'm building my first enterprise grade solution (at least I'm attempting to make it enterprise grade). I'm trying to follow best practice design patterns but am starting to worry that I might be going too far with abstraction. I'm trying to build my asp.net webforms (in C#) app as an n-tier application. I've created a Data Access Layer using an XSD strongly-typed dataset that interfaces with a SQL server backend. I access the DAL through some Business Layer Objects that I've created on a 1:1 basis to the datatables in the dataset (eg, a UsersBLL class for the Users datatable in the dataset). I'm doing checks inside the BLL to make sure that data passed to DAL is following the business rules of the application. That's all well and good. Where I'm getting stuck though is the point at which I connect the BLL to the presentation layer. For example, my UsersBLL class deals mostly with whole datatables, as it's interfacing with the DAL. Should I now create a separate "User" (Singular) class that maps out the properties of a single user, rather than multiple users? This way I don't have to do any searching through datatables in the presentation layer, as I could use the properties created in the User class. Or would it be better to somehow try to handle this inside the UsersBLL? Sorry if this sounds a little complicated... Below is the code from the UsersBLL: using System; using System.Data; using PedChallenge.DAL.PedDataSetTableAdapters; [System.ComponentModel.DataObject] public class UsersBLL { private UsersTableAdapter _UsersAdapter = null; protected UsersTableAdapter Adapter { get { if (_UsersAdapter == null) _UsersAdapter = new UsersTableAdapter(); return _UsersAdapter; } } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, true)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsers() { return Adapter.GetUsers(); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUserByUserID(int userID) { return Adapter.GetUserByUserID(userID); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByTeamID(int teamID) { return Adapter.GetUsersByTeamID(teamID); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByEmail(string Email) { return Adapter.GetUserByEmail(Email); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Insert, true)] public bool AddUser(int? 
teamID, string FirstName, string LastName, string Email, string Role, int LocationID) { // Create a new UsersRow instance PedChallenge.DAL.PedDataSet.UsersDataTable Users = new PedChallenge.DAL.PedDataSet.UsersDataTable(); PedChallenge.DAL.PedDataSet.UsersRow user = Users.NewUsersRow(); if (UserExists(Users, Email) == true) return false; if (teamID == null) user.SetTeamIDNull(); else user.TeamID = teamID.Value; user.FirstName = FirstName; user.LastName = LastName; user.Email = Email; user.Role = Role; user.LocationID = LocationID; // Add the new user Users.AddUsersRow(user); int rowsAffected = Adapter.Update(Users); // Return true if precisely one row was inserted, // otherwise false return rowsAffected == 1; } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Update, true)] public bool UpdateUser(int userID, int? teamID, string FirstName, string LastName, string Email, string Role, int LocationID) { PedChallenge.DAL.PedDataSet.UsersDataTable Users = Adapter.GetUserByUserID(userID); if (Users.Count == 0) // no matching record found, return false return false; PedChallenge.DAL.PedDataSet.UsersRow user = Users[0]; if (teamID == null) user.SetTeamIDNull(); else user.TeamID = teamID.Value; user.FirstName = FirstName; user.LastName = LastName; user.Email = Email; user.Role = Role; user.LocationID = LocationID; // Update the product record int rowsAffected = Adapter.Update(user); // Return true if precisely one row was updated, // otherwise false return rowsAffected == 1; } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Delete, true)] public bool DeleteUser(int userID) { int rowsAffected = Adapter.Delete(userID); // Return true if precisely one row was deleted, // otherwise false return rowsAffected == 1; } private bool UserExists(PedChallenge.DAL.PedDataSet.UsersDataTable users, string email) { // Check if user email already exists foreach (PedChallenge.DAL.PedDataSet.UsersRow userRow in users) { if (userRow.Email == email) return true; } return false; } } Some guidance in the right direction would be greatly appreciated!! Thanks all! Max
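
    On the singular-User question: a common middle ground is a plain entity class that the BLL returns, so the presentation layer binds to properties instead of digging through a DataTable. A sketch, with property and column names assumed from the code above:
      // Plain entity the presentation layer can consume directly.
      public class User
      {
          public int UserID { get; set; }
          public int? TeamID { get; set; }
          public string FirstName { get; set; }
          public string LastName { get; set; }
          public string Email { get; set; }
          public string Role { get; set; }
          public int LocationID { get; set; }
      }

      // Inside UsersBLL: translate the typed row into the entity.
      public User GetUser(int userID)
      {
          PedChallenge.DAL.PedDataSet.UsersDataTable users = Adapter.GetUserByUserID(userID);
          if (users.Count == 0)
              return null;

          PedChallenge.DAL.PedDataSet.UsersRow row = users[0];
          return new User
          {
              UserID = userID,
              TeamID = row.IsTeamIDNull() ? (int?)null : row.TeamID,
              FirstName = row.FirstName,
              LastName = row.LastName,
              Email = row.Email,
              Role = row.Role,
              LocationID = row.LocationID
          };
      }
    The UsersBLL keeps doing the table-level work against the DAL; the entity is just the shape handed upward, which keeps DataTable details out of the pages.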

    Read the article

  • Android application displays black screen after running

    - by frgnvola
    When I click "Run as an Android Application" on Eclipse, the following is displayed in the console [2014-06-05 20:07:18 - StudentConnect] Android Launch! [2014-06-05 20:07:18 - StudentConnect] adb is running normally. [2014-06-05 20:07:18 - StudentConnect] Performing sandhu.student.connect.SplashActivity activity launch [2014-06-05 20:07:18 - StudentConnect] Using default Build Tools revision 19.0.0 [2014-06-05 20:07:18 - StudentConnect] Refreshing resource folders. [2014-06-05 20:07:18 - StudentConnect] Using default Build Tools revision 19.0.0 [2014-06-05 20:07:18 - StudentConnect] Starting incremental Pre Compiler: Checking resource changes. [2014-06-05 20:07:18 - StudentConnect] Nothing to pre compile! [2014-06-05 20:07:18 - StudentConnect] Starting incremental Package build: Checking resource changes. [2014-06-05 20:07:18 - StudentConnect] Using default Build Tools revision 19.0.0 [2014-06-05 20:07:18 - StudentConnect] Skipping over Post Compiler. [2014-06-05 20:07:20 - StudentConnect] Application already deployed. No need to reinstall. [2014-06-05 20:07:20 - StudentConnect] Starting activity sandhu.student.connect.SplashActivity on device 0f0898b2 [2014-06-05 20:07:21 - StudentConnect] ActivityManager: Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=sandhu.student.connect/.SplashActivity } [2014-06-05 20:07:21 - StudentConnect] ActivityManager: Warning: Activity not started, its current task has been brought to the front After deployed to my phone, it only displays a black screen. I recently implemented a splash screen, but it was working fine before; however I think it might have something to do with the problem. Here are my java and xml files: MainActivity.java package sandhu.student.connect; import android.app.Activity; import android.os.Bundle; import android.view.KeyEvent; import android.view.View; import android.webkit.WebSettings; import android.webkit.WebView; import android.webkit.WebViewClient; public class MainActivity extends Activity { public WebView student_zangle; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); WebView student_zangle = (WebView) findViewById(R.id.student_zangle); student_zangle.loadUrl("https://zangleweb01.clovisusd.k12.ca.us/studentconnect/"); student_zangle.setWebViewClient(new WebViewClient()); student_zangle.setScrollBarStyle(View.SCROLLBARS_INSIDE_OVERLAY); WebSettings settings = student_zangle.getSettings(); settings.setJavaScriptEnabled(true); settings.setBuiltInZoomControls(true); settings.setLoadWithOverviewMode(true); settings.setUseWideViewPort(true); } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { WebView student_zangle = (WebView) findViewById(R.id.student_zangle); if ((keyCode == KeyEvent.KEYCODE_BACK) && student_zangle.canGoBack()) { student_zangle.goBack(); return true; } else { finish(); } return super.onKeyDown(keyCode, event); } } activity_main.xml <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@drawable/blue" tools:context=".MainActivity" > <WebView android:id="@+id/student_zangle" android:layout_width="match_parent" android:layout_height="match_parent" /> </RelativeLayout> SplashActivity.java package sandhu.student.connect; import android.os.Bundle; import 
android.preference.PreferenceActivity; public class SplashActivity extends PreferenceActivity { @SuppressWarnings("deprecation") @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); addPreferencesFromResource(R.xml.prefs); } } splash_activity.xml <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@drawable/blue" android:orientation="vertical" > <ImageView android:id="@+id/imageView1" android:layout_width="250dp" android:layout_height="100dp" android:layout_alignParentTop="true" android:layout_centerHorizontal="true" android:layout_marginTop="145dp" android:contentDescription="@string/zangle_logo" android:src="@drawable/logo" /> </RelativeLayout> Also, here is a full copy of the logcat error output: 06-05 20:19:46.698: E/Watchdog(817): !@Sync 1952 06-05 20:20:09.971: E/memtrack(16438): Couldn't load memtrack module (No such file or directory) 06-05 20:20:09.971: E/android.os.Debug(16438): failed to load memtrack module: -2 06-05 20:20:11.012: E/memtrack(16451): Couldn't load memtrack module (No such file or directory) 06-05 20:20:11.012: E/android.os.Debug(16451): failed to load memtrack module: -2 06-05 20:20:11.202: E/EnterpriseContainerManager(817): ContainerPolicy Service is not yet ready!!! Please help me figure out what is wrong, or at least point me in the right direction. Thanks in advance.
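
    For what it's worth, SplashActivity as posted extends PreferenceActivity and only loads R.xml.prefs -- it never shows splash_activity.xml and never starts MainActivity, which on its own would leave a blank screen. (The "Activity not started, its current task has been brought to the front" line also usually means adb reused the already-installed build, so uninstalling the app and redeploying is worth a try.) A sketch of a splash that displays the layout and then hands off, assuming MainActivity is the intended next screen:
      package sandhu.student.connect;

      import android.app.Activity;
      import android.content.Intent;
      import android.os.Bundle;
      import android.os.Handler;

      public class SplashActivity extends Activity {

          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              setContentView(R.layout.splash_activity);   // actually show the splash layout

              new Handler().postDelayed(new Runnable() {
                  @Override
                  public void run() {
                      startActivity(new Intent(SplashActivity.this, MainActivity.class));
                      finish();   // keep the splash off the back stack
                  }
              }, 2000);           // roughly two seconds, adjust to taste
          }
      }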

    Read the article

  • Setting up a linked list in Java

    - by erp
    I'm working on some basic linked list stuff, like insert, delete, go to the front or end of the list, and basically i understand the concept of all of that stuff once i have the list i guess but im having trouble setting up the list. I was wondering of you guys could tell me if im going in the right direction. (mostly just the setup) this is what i have so far: public class List { private int size; private List linkedList; List head; List cur; List next; /** * Creates an empty list. * @pre * @post */ public List(){ linkedList = new List(); this.head = null; cur = head; } /** * Delete the current element from this list. The element after the deleted element becomes the new current. * If that's not possible, then the element before the deleted element becomes the new current. * If that is also not possible, then you need to recognize what state the list is in and define current accordingly. * Nothing should be done if a delete is not possible. * @pre * @post */ public void delete(){ // delete size--; } /** * Get the value of the current element. If this is not possible, throw an IllegalArgumentException. * @pre the list is not empty * @post * @return value of the current element. */ public char get(){ return getItem(cur); } /** * Go to the last element of the list. If this is not possible, don't change the cursor. * @pre * @post */ public void goLast(){ while (cur.next != null){ cur = cur.next; } } /** * Advance the cursor to the next element. If this is not possible, don't change the cursor. * @pre * @post */ public void goNext(){ if(cur.next != null){ cur = cur.next;} //else do nothing } /** * Retreat the cursor to the previous element. If this is not possible, don't change the cursor. * @pre * @post */ public void goPrev(){ } /** * Go to top of the list. This is the position before the first element. * @pre * @post */ public void goTop(){ } /** * Go to first element of the list. If this is not possible, don't change the cursor. * @pre * @post */ public void goFirst(){ } /** * Insert the given parameter after the current element. The newly inserted element becomes the current element. * @pre * @post * @param newVal : value to insert after the current element. */ public void insert(char newVal){ cur.setItem(newVal); size++; } /** * Determines if this list is empty. Empty means this list has no elements. * @pre * @post * @return true if the list is empty. */ public boolean isEmpty(){ return head == null; } /** * Determines the size of the list. The size of the list is the number of elements in the list. * @pre * @post * @return size which is the number of elements in the list. */ public int size(){ return size; } public class Node { private char item; private Node next; public Node() { } public Node(char item) { this.item = item; } public Node(char item, Node next) { this.item = item; this.next = next; } public char getItem() { return this.item; } public void setItem(char item) { this.item = item; } public Node getNext() { return this.next; } public void setNext(Node next) { this.next = next; } } } I got the node class alright (well i think it works alright), but is it necessary to even have that class? or can i go about it without even using it (just curious). And for example on the method get() in the list class can i not call that getItem() method from the node class because it's getting an error even though i thought that was the whole point for the node class. bottom line i just wanna make sure im setting up the list right. 
Thanks for any help, guys; I'm new to linked lists, so bear with me!
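
    For orientation: yes, the nested Node class is the whole point -- the list keeps Node references (head, cur) and each Node carries one char plus the link to the next, so head, cur and next should not be typed as List (and new List() inside the List constructor would recurse forever). A sketch of the skeleton reworked along those lines, reusing the Node class from the post unchanged:
      public class List {
          private Node head;   // first node, null while the list is empty
          private Node cur;    // cursor: the "current" element
          private int size;

          public List() {      // an empty list: no nodes, nothing current
              head = null;
              cur = null;
              size = 0;
          }

          public boolean isEmpty() { return head == null; }
          public int size()        { return size; }

          public char get() {
              if (cur == null) throw new IllegalArgumentException("list is empty");
              return cur.getItem();
          }

          // insert after the current element; the new node becomes current
          public void insert(char newVal) {
              Node n = new Node(newVal);
              if (head == null) {
                  head = n;
              } else {
                  n.setNext(cur.getNext());
                  cur.setNext(n);
              }
              cur = n;
              size++;
          }

          public void goFirst() { if (head != null) cur = head; }
          public void goNext()  { if (cur != null && cur.getNext() != null) cur = cur.getNext(); }
          public void goLast()  { if (cur != null) { while (cur.getNext() != null) cur = cur.getNext(); } }

          // ...the Node inner class from the post goes here unchanged
      }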

    Read the article

  • SCCM 2012 unable to update boot images with PXE enabled

    - by Adam
    we are fighting an error in sccm 2012. When we attempt to distribute boot images (after selecting the pxe option) we receive an error that the pxe image cannot be expanded (distmgr log). Can you give us any direction on what to try or attempt in this scenario? We only have one dp in our environment at the moment, however we have found that by creating another dp on a different server we don’t have this problem. However we really need the primary site to be a dp. We have tried: Removing and reinstalling the dp Removing and reinstalling the WDS Reinstalled the OS ... ouch Reinstalled SQL We even attempted to manually mount these wims in the remote install folder, no luck... And we have been working on this for days... Any and all help is appreciated! Our log is below Thank you very much, Small Town IT guy Attempting to add or update a package on a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) The distribution point is on the siteserver and the package is a content type package. There is nothing to be copied over. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) STATMSG: ID=2342 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:41.559 2012 ISTR0="Boot image (x86)" ISTR1="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=2 AID0=400 AVAL0="IVC00001" AID1=404 AVAL1="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) The current user context will be used for connecting to ["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) No network connection is needed to ["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\ as this is the local machine. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Signature share exists on distribution point path \OURSERVER.ourdomain.cc.ia.us\SMSSIG$ SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Ignoring drive C:. File C:\NO_SMS_ON_DRIVE.SMS exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Share SMSPKGD$ exists on distribution point \OURSERVER.ourdomain.cc.ia.us\SMSPKGD$ SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Creating, reading and or updating Operations Management server role registry keys for a Distribution Point ... SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Creating, reading or updating IIS registry key for a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISPortsList in the SCF is "80". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISSSLPortsList in the SCF is "443". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISWebSiteName in the SCF is "". 
SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISSSLState in the SCF is 224. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Virtual Directory SMS_DP_SMSPKG$ for the physical path F:\SCCMContentLib already exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) STATMSG: ID=2375 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:42.058 2012 ISTR0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=404 AVAL0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Creating, reading or updating IIS registry key for a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISPortsList in the SCF is "80". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISSSLPortsList in the SCF is "443". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISWebSiteName in the SCF is "". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISSSLState in the SCF is 224. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Virtual Directory SMS_DP_SMSSIG$ for the physical path D:\SMSSIG$ already exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) STATMSG: ID=2375 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:42.105 2012 ISTR0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=404 AVAL0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) RDC:Successfully created package signature file from \?\F:\SMSPKGSIG\IVC00001.3 to \OURSERVER.ourdomain.cc.ia.us\SMSSIG$\IVC00001.3.tar SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Setting permissions on file MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\SMSSIG$\IVC00001.3.tar. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) ExpandPXEImage: IVC00001, 1024 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) CContentDefinition::GetFileProperties failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) CContentDefinition::TotalFileSizes failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) ExpandPXEImage failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Error occurred. Performing error cleanup prior to returning. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) DP thread with array index 0 ended. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) DP thread with thread handle 00000000000013A4 and thread ID 6924 ended. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Updating package info for package IVC00001 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Package IVC00001 does not have a preferred sender. 
SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) The package and/or program properties for package IVC00001 have not changed, need to determine which site(s) need updated package info. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) StoredPkgVersion (3) of package IVC00001. StoredPkgVersion in database is 3. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) SourceVersion (3) of package IVC00001. SourceVersion in database is 3. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) STATMSG: ID=2302 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=4492 GMTDATE=Fri Jun 22 19:49:42.292 2012 ISTR0="Boot image (x86)" ISTR1="IVC00001" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400 AVAL0="IVC00001" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Failed to process package IVC00001 after 0 retries, will retry 100 more times SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Exiting package processing thread. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C)

    Read the article

  • Any way to view dynamic Java content ex post? Browser session still open

    - by Ryan
    I feel like a grandpa from 1996 asking this, but is it at all possible to view a representation of a particular screen that was rendered as part of a java-based online checkout process I executed a couple days ago? I haven't cleared my browser cache or temp files or anything, and I don't think I've restarted the comp or even the browser since. I'm using mac OS X 10.6.8, and the page(s) were viewed with Chrome version 21.0.1180.89 in standard mode (not incognito). Specifically the page in question was part of Verizon Wireless's 'iconic' contract/checkout process, which leads the user through several pages to make selections on various criteria and seems to be based on java. (Obviously I'm a dummy regarding web stuff so the question is probably not very well defined, I'm happy to elaborate). ^This is the tl;dr question. If it belongs on another site please just let me know. This is what I've been able to figure out on my own, for the bored / ultra-helpful / those who could use a laugh at a noob fumbling his way around cache files with no idea what he's doing: The progress through the selection pages is very clear in Chrome's browser history, the sequential pages are: https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s2 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s3 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s4 https://www.verizonwireless.com/b2c/accountholder/estore/phoneupgrade?execution=e3s5 https://preorder.verizonwireless.com/iconic/?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicOrder.do?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicEligibility.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicDeviceSelection.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/PlanOptions.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicFeatures.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicAccessories.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicShipmentBilling.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicReview.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicPaymentCreditInfo.do https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicConfirmation.do The visual representation I would need could come from any of these pages, as the necessary information was shown at the top of each of them (although the two with long URLs were just like redirects or something). Of course, clicking the link to the page in History right now requires a new sign-in and just returns the user to the initial step for doing the process again; it does not pull up a representation of the page as it was seen several days ago. This I understand. 
Instead using Chrome's integrated cache viewer by typing about:cache in the address bar, I can search and find links that appear to be relevant, when I click on the link I just get a http header and a bunch of hexadecimal gobbledygook. I've tried to use the URL at the top of the cache and URLs in the http headers, but they take me to current versions of those pages and not the versions I saw during the checkout process. I tried this with a few of them but stopped because I noticed that it updated the date in the http header to the present moment and I don't want to take chances overwriting the cache files since I don't know what I'm doing. The links to the cache files look like this: https://login.verizonwireless.com/amserver/UI/Login?realm=vzw&goto=https%3A%2F%2Fpreorder.verizonwireless.com%3A443%2Ficonic%2Ficonic%2Fsecured%2Fscreens%2FPlanOptions.do https://preorder.verizonwireless.com/iconic/iconic/screens/customerTypeOverlay.jsp https://verizonwireless.tt.omtrdc.net/m2/verizonwireless/mbox/standard?mboxHost=login.verizonwireless.com&mboxSession=1347776884663-145230&mboxPC=1347609748832-956765.19&mboxPage=1347776884663-145230&screenHeight=1200&screenWidth=1920&browserWidth=1299&browserHeight=868&browserTimeOffset=-420&colorDepth=24&mboxCount=1&mbox=My_Verizon_Global&mboxId=0&mboxTime=1347751684666&mboxURL=https%3A%2F%2Flogin.verizonwireless.com%2Famserver%2FUI%2FLogin%3Frealm%3Dvzw%26goto%3Dhttps%253A%252F%252Fpreorder.verizonwireless.com%253A443%252Ficonic%252Ficonic%252Fsecured%252Fscreens%252FPlanOptions.do&mboxReferrer=&mboxVersion=41 and https://verizonwireless.tt.omtrdc.net/m2/verizonwireless/mbox/standard?mboxHost=login.verizonwireless.com&mboxSession=1347735676953-663794&mboxPC=1347609748832-956765.19&mboxPage=1347738347511-550383&screenHeight=1200&screenWidth=1920&browserWidth=1299&browserHeight=845&browserTimeOffset=-420&colorDepth=24&mboxCount=1&mbox=My_Verizon_Global&mboxId=0&mboxTime=1347713147517&mboxURL=https%3A%2F%2Flogin.verizonwireless.com%2Famserver%2FUI%2FLogin%3Frealm%3Dvzw%26goto%3Dhttps%253A%252F%252Fpreorder.verizonwireless.com%253A443%252Ficonic%252Ficonic%252Fsecured%252Fscreens%252FIconicOrder.do%253Fformat%253DJSON%2526value%253D%257B%252522action%252522%253A%252522START_ORDER%252522%252C%252522custType%252522%253A%252522EXISTING%252522%252C%252522orderType%252522%253A%252522UPGRADE%252522%252C%252522lookupMtn%252522%253A%252522*(NumberA)*%252522%252C%252522lineData%252522%253A%255B%257B%252522mtn%252522%253A%252522*(NumberA)*%252522%252C%252522upgType%252522%253A%252522ALTERNATE_UPGRADE%252522%252C%252522eligibleMtn%252522%253A%252522*(NumberB)*%252522%257D%255D%257D&mboxReferrer=&mboxVersion=41 and the http headers look like this: HTTP/1.1 200 OK Server: VZW Date: Sun, 16 Sep 2012 14:55:48 GMT Cache-control: private Pragma: no-cache Expires: 0 X-dsameversion: VZW Am_client_type: genericHTML Content-type: text/html;charset=ISO-8859-1 Content-Encoding: gzip Content-Length: 6220 and HTTP/1.1 200 OK Cache-Control: no-cache Date: Sun, 16 Sep 2012 16:16:30 GMT Content-Type: text/html Expires: Thu, 01 Jan 1970 00:00:00 GMT Content-Encoding: gzip X-Powered-By: Servlet/2.5 JSP/2.1 and HTTP/1.1 302 Moved Temporarily Server: VZW Date: Sun, 16 Sep 2012 16:29:32 GMT Cache-control: private Pragma: no-cache X-dsameversion: VZW Am_client_type: genericHTML Location: 
https://preorder.verizonwireless.com:443/iconic/iconic/secured/screens/IconicOrder.do?format=JSON&value={%22action%22:%22START_ORDER%22,%22custType%22:%22EXISTING%22,%22orderType%22:%22UPGRADE%22,%22lookupMtn%22:%22*(*(NumberA)*%22,%22lineData%22:[{%22mtn%22:%22*(NumberA)*%22,%22upgType%22:%22ALTERNATE_UPGRADE%22,%22eligibleMtn%22:%22*(NumberB)*%22}]} Content-length: 0 ^^this last one actually returned me to a page in the middle of the process when I used the "Location:" given in this http header rather than the URL at the top of the cache page (and was signed in to Verizon's website through a separate tab), but the page it took me to had already been updated to reflect new information, it wasn't presented as of the time the actions were taken several days ago when the page was originally viewed. (It's clear I can't achieve what I'm looking for by visiting current versions of these pages on the web…I should actually probably disable my network adapter while testing this out). The cache folder seems promising, but I don't know what to make of all that hexadecimal mess - if it contains what I'm looking for and if so, how to view it. Finally, the third thing I've come across is the Google Chrome cache folder on my local machine, at ~/Library/Caches/Google/Chrome/ then there are 'Default' and 'Media Cache' folders within. There are ~4,000 files in the former averaging ~100kb each, and 100 files in the latter averaging ~900kb each. The filenames all start "f_00xxxx" except for files titled data_0 through data_4 in each folder. I'm not sure how to observe the contents of these files and don't really want to start opening them up and potentially overwriting existing cached pages, as I notice there are already some holes in the arrangement of the files which I have never deleted manually. Hopefully this is an easy question to answer for someone who knows this stuff, admittedly web stuff is my weak point. As such, I've spent the past five hours searching around and trying to provide all the information I can. I'm probably asking for a miracle - like can those cached pages full of hexadecimal data be used to recreate the representation of the information that was on screen during the process? Or could screenshots of the previously viewed webpages be lurking in the /Caches folder? I have doubt because the content wasn't viewed at a permanent link, rather it seems like the on-screen information was served by Verizon's db, and probably securely so. I'm just not sure if Chrome saves the visual rendering of the page contents somewhere, even just temporarily. Alternatively I would be happy just to get the raw data that was on the page, even if not a visual representation…I just need to be able to demonstrate the phone line that was referenced on this page: https://preorder.verizonwireless.com/iconic/iconic/secured/screens/IconicFeatures.do . Can anyone point me in the right direction?

    Read the article

  • ESX 3.5 Cluster & MD3000i -- Both servers see iSCSI targets, but only one server can use the partition.

    - by GruffTech
    Alright. First and foremost, Warning. This is a bigger-then-normal question. I like to be thorough and try to eliminate all possible "easymode" answers, as well as give everyone a feel of what i've tried. I've included several images of our setup and the problem it is having.. TLDR Version: So I've followed the guides located here: ESX Deployment Guide V1 this is the guide Dell has sent me to setup two ESX3.5 servers mounting a Dell MD3000i. It doesn't work. Both servers can't use the same storage partition on the MD3000. Both servers see it, but only one server can actually use it. (that server being whatever server created the partition on the target.) Both ESX servers are members of the Host Group. Full Version I have 2 ESX3.5 Servers (10.0.7.102, also called EPI2, and 10.0.7.103, also called EPI3.) connected to a iSCSI SAN Device (Dell MD3000i). Both ESX servers can "scan" the SAN and see the LUNS. Part One: MD3000i Storage On the MD3000i, Both servers are in my host group. I have two partitions, VM1 and VM2, both 1.6TB (vmware doesn't like anything past 2tb.) And you can even see that the ESX servers are targetting the MD3000 just fine. Part Two: The ESX Servers Figure 1. So as you can see above, Both ESX Servers (10.0.7.102 and 10.0.7.103) are able to see and scan the MD3000i SAN. Figure 2. Above is the storage both servers see. I created the storage partition on EPI2 (102). I then Extended the partition to include the second LUN for a grand total of 3.27 TB of storage. When i "rescan" on 103 (the server not mounting the partition), I get the below log in log/messages. Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). being the only line that grabs my attentions. (EPI3 is the server name) Mar 11 10:41:04 epi3 vmkiscsid[5436]: Connected to Discovery Address 192.168.130.101 Mar 11 10:41:04 epi3 vmkiscsid[5437]: Connected to Discovery Address 192.168.130.102 Mar 11 10:41:04 epi3 vmkiscsid[5438]: Connected to Discovery Address 192.168.131.101 Mar 11 10:41:04 epi3 vmkiscsid[5439]: Connected to Discovery Address 192.168.131.102 Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 0 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdb : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Device id info for sdb: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0xa2 0x0 0x0 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Id for sdb 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0xa2 0x00 0x00 0x15 0xe2 
0x4d 0x75 0xf6 0x99 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:17 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: scan_scsis starting finish Mar 11 10:41:17 epi3 kernel: SCSI device sdb: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:17 epi3 kernel: sdb: sdb1 Mar 11 10:41:17 epi3 kernel: scan_scsis done with finish Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 1 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdc : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Device id info for sdc: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0x86 0x0 0x0 0xd 0xb7 0x4d 0x75 0xf2 0x77 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Id for sdc 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0x86 0x00 0x00 0x0d 0xb7 0x4d 0x75 0xf2 0x77 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:18 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: scan_scsis starting finish Mar 11 10:41:18 epi3 kernel: SCSI device sdc: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:18 epi3 kernel: sdc: sdc1 Mar 11 10:41:18 epi3 kernel: scan_scsis done with finish Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). Mar 11 10:41:18 epi3 kernel: scsi singledevice 1 0 0 0 Things I've Tried: Removing iSCSI targets from only 103, disabling iSCSI, rebooting, enabled iSCSI, re-adding targets, rescan. Same result. Removing partition on 102, Formatted partition on 103 instead. Same result, except flipped. 103 can use storage, 102 can not. Starting Over. Removing all iSCSI Targets on both ESX Boxes, disabling iSCSI, turning off the firewall for iSCSI, rebooting ESX. Then on the MD3000, Removed the Host Group, Removed the Host-to-Virtual Mappings, Restarted the SAN. Followed the Documentation again, same result. Both servers see the storage, but only one server can use it. Disabling and Re-enabling VMware DRS and HA. Same result. Flat-out turning off VMware DRS and HA, and doing the "start over" step to see if maybe that borked it. Same Result. I'm kinda loosing my mind here, Everything i read online says "just partition it and if the ESX boxes can see the targets, it just works".... well crap. Any ideas, any other things to try? 
    Can anyone at least point me in the right direction? I'm really tired of working from 1 am till 4 am (our maintenance hours).

    Read the article

  • Mandatory profile on Terminal server fails to load. Userenv.log debug

    - by Datapimp23
    Hi, We're having a lot of corrupted profiles lately on our profile share. At the moment I have no clue why, but I decided to switch to one mandatory profile since the users can all use the same and there is no need to have seperate profiles for each user. Here's what I did. I logged into the Terminal server with a new user and configured some stuff (imported certificates and a few files). Then I logged out. Later as admin I copied the profile to another server and renamed it to bsilo. I made sure the user hive settings were adjusted. Everyone had access to the hive. I shared the bsilo folder with full access for everyone. I set the NTFS permissions to read, read & execute, list folder contents for domain users. I also renamed NTUSER.DAT to NTUSER.MAN. Now I set a env variable %manprofile% on the Terminal server that points to \server\bsilo\ntuser.man I set the env var as terminal services profile path for a test user. When I log in I as the user get the following output The system cannot find the path specified. Can somebody point me in the right direction. Thanks USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised Machine Mutex/Events USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised User Mutex/Events USERENV(1774.d18) 15:52:39:724 LibMain: Process Name: \??\C:\WINDOWS\system32\winlogon.exe USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Yes, we can impersonate the user. Running as self USERENV(1774.d18) 15:52:48:005 ========================================================= USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Entering, hToken = <0x340, lpProfileInfo = 0x6e5d8 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-dwFlags = <0x0 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpUserName = USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User USERENV(1774.d18) 15:52:48:005 LoadUserProfile: NULL server name USERENV(1774.d18) 15:52:48:005 LoadUserProfile: no thread token found, impersonating self. USERENV(1774.d18) 15:52:48:005 GetInterface: Returning rpc binding handle USERENV(218.2f94) 15:52:48:005 IProfileSecurityCallBack: client authenticated. USERENV(218.2f94) 15:52:48:005 DropClientContext: Got client token 000009B4, sid = S-1-5-18 USERENV(218.2f94) 15:52:48:005 MIDL_user_allocate enter USERENV(218.2f94) 15:52:48:005 DropClientContext: load profile object successfully made USERENV(218.2f94) 15:52:48:005 DropClientContext: Returning 0 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Calling DropClientToken (as self) succeeded USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Cookie generated USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Endpoint generated USERENV(218.1f38) 15:52:48:005 IProfileSecurityCallBack: client authenticated. 
USERENV(218.1f38) 15:52:48:020 LoadUserProfileI: RPC end point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63 USERENV(218.1f38) 15:52:48:020 In LoadUserProfileP USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Running as client, sid = S-1-5-18 USERENV(218.1f38) 15:52:48:020 ========================================================= USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Entering, hToken = <0x98c, lpProfileInfo = 0x9c940 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-dwFlags = <0x0 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpUserName = USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User USERENV(218.1f38) 15:52:48:020 LoadUserProfile: NULL server name USERENV(218.1f38) 15:52:48:020 LoadUserProfile: User sid: S-1-5-21-807756564-1922302612-1565739477-22627 USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: No existing entry found USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: New entry created USERENV(218.1f38) 15:52:48:020 CHashTable::HashAdd: S-1-5-21-807756564-1922302612-1565739477-22627 added in bucket 11 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Wait succeeded. In critical section. USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2 USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available USERENV(218.1f38) 15:52:48:864 TestIfUserProfileLoaded: return with error 2. USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2 USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available USERENV(218.1f38) 15:52:48:864 LoadUserProfile: Expanded profile path is \server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Entering, lpProfilePath = <\server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: checking x-forest logon, user handle = 2444 USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: policy set to disable XForest check USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Mandatory profile (.man extension) USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Try to bypass CSC USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: tried NPAddConnection3ForCSCAgent. Error 2109 USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Share \server\bsilo mapped to drive E. Returned Path E:\ntuser.man USERENV(218.1f38) 15:52:49:239 ParseProfilePath: CSC bypassed. Profile path E:\ntuser.man USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Tick Count = 0 USERENV(218.1f38) 15:52:49:255 ParseProfilePath: GetFileAttributes found something with attributes <0x2022 USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Found a file USERENV(218.1f38) 15:52:49:255 ReportError: Impersonating user. USERENV(218.1f38) 15:52:49:255 ReportError: Logging Error DETAIL - The system cannot find the path specified. 
USERENV(218.1f38) 15:52:49:255 GetInterface: Returning rpc binding handle USERENV(218.1f38) 15:52:49:255 ReportError: RPC End point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63 USERENV(218.1f38) 15:52:49:255 ReportError: waiting on rpc async event USERENV(1774.2398) 15:52:49:255 ErrorDialogEx: Calling DialogBoxParam USERENV(1774.2398) 15:52:49:270 ErrorDlgProc:: DialogBoxParam USERENV(218.1f38) 15:52:52:177 RpcAsyncCompleteCall finished, status = 0 USERENV(218.1f38) 15:52:52:177 ReleaseInterface: Releasing rpc binding handle USERENV(218.1f38) 15:52:52:177 LoadUserProfile: ParseProfilePath returned FALSE USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Cancelling connection of E: USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Connection deleted. USERENV(218.1f38) 15:52:52:177 CSyncManager::LeaveLock USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock released USERENV(218.1f38) 15:52:52:192 CHashTable::HashDelete: S-1-5-21-807756564-1922302612-1565739477-22627 deleted USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock deleted USERENV(218.1f38) 15:52:52:192 LoadUserProfile: 003 About Reverted back to user <00000000 USERENV(218.1f38) 15:52:52:192 LoadUserProfile: Leaving with a value of 0. USERENV(218.1f38) 15:52:52:192 ========================================================= USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: LoadUserProfileP failed with 3 USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: returning 3 USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Running as self USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Calling LoadUserProfileI failed. err = 3 USERENV(218.200c) 15:52:52:192 IProfileSecurityCallBack: client authenticated. USERENV(218.200c) 15:52:52:192 ReleaseClientContext: Releasing context USERENV(218.200c) 15:52:52:192 ReleaseClientContext_s: Releasing context USERENV(218.200c) 15:52:52:192 MIDL_user_free enter USERENV(1774.d18) 15:52:52:192 ReleaseInterface: Releasing rpc binding handle USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Returning FALSE. Error = 3

    Read the article

  • Moving the swapfiles to a dedicated partition in Snow Leopard

    - by e.James
    I have been able to move Apple's virtual memory swapfiles to a dedicated partition on my hard drive up until now. The technique I have been using is described in a thread on forums.macosxhints.com. However, with the developer preview of Snow Leopard, this method no longer works. Does anyone know how it could be done with the new OS? Update: I have marked dblu's answer as accepted even though it didn't quite work because he gave excellent, detailed instructions and because his suggestion to use plutil ultimately pointed me in the right direction. The complete, working solution is posted here in the question because I don't have enough reputation to edit the accepted answer. Complete solution: 1. Open Terminal and make a backup copy of Apple's default dynamic_pager.plist: $ cd /System/Library/LaunchDaemons $ sudo cp com.apple.dynamic_pager.plist{,_bak} 2. Convert the plist from binary to plain XML: $ sudo plutil -convert xml1 com.apple.dynamic_pager.plist 3. Open the converted plist with your text editor of choice. (I use pico, see dblu's answer for an example using vim): $ sudo pico -w com.apple.dynamic_pager.plist It should look as follows: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs$ <plist version="1.0"> <dict> <key>EnableTransactions</key> <true/> <key>HopefullyExitsLast</key> <true/> <key>Label</key> <string>com.apple.dynamic_pager</string> <key>OnDemand</key> <false/> <key>ProgramArguments</key> <array> <string>/sbin/dynamic_pager</string> <string>-F</string> <string>/private/var/vm/swapfile</string> </array> </dict> </plist> 4. Change the ProgramArguments array (lines 13 through 18) so that it launches an intermediate shell script instead of launching dynamic_pager directly. See note #1 for details on why this is necessary. <key>ProgramArguments</key> <array> <string>/sbin/dynamic_pager_init</string> </array> 5. Save the plist, and return to the terminal prompt. Using pico, the commands would be: <ctrl+o> to save the file <enter> to accept the same filename (com.apple.dynamic_pager.plist) <ctrl+x> to exit 6. Convert the modified plist back to binary: $ sudo plutil -convert binary1 com.apple.dynamic_pager.plist 7. Create the intermediate shell script: $ cd /sbin $ sudo pico -w dynamic_pager_init The script should look as follows (my partition is called 'Swap', and I chose to put the swapfiles in a hidden directory on that partition, called '.vm' be sure that the directory you specify actually exists): Update: This version of the script makes use of wait4path as suggested by ZILjr: #!/bin/bash #launch Apple's dynamic_pager only when the swap volume is mounted echo "Waiting for Swap volume to mount"; wait4path /Volumes/Swap; echo "Launching dynamic pager on volume Swap"; /sbin/dynamic_pager -F /Volumes/Swap/.vm/swapfile; 8. Save and close dynamic_pager_init (same commands as step 5) 9. Modify permissions and ownership for dynamic_pager_init: $ sudo chmod a+x-w /sbin/dynamic_pager_init $ sudo chown root:wheel /sbin/dynamic_pager_init 10. Verify the permissions on dynamic_pager_init: $ ls -l dynamic_pager_init -r-xr-xr-x 1 root wheel 6 18 Sep 15:11 dynamic_pager_init 11. Restart your Mac. If you run into trouble, switch to verbose startup mode by holding down Command-v immediately after the startup chime. This will let you see all of the startup messages that appear during startup. If you run into even worse trouble (i.e. you never see the login screen), hold down Command-s instead. 
This will boot the computer in single-user mode (no graphical UI, just a command prompt) and allow you to restore the backup copy of com.apple.dynamic_pager.plist that you made in step 1. 12. Once the computer boots, fire up Terminal and verify that the swap files have actually been moved: $ cd /Volumes/Swap/.vm $ ls -l You should see something like this: -rw------- 1 someUser staff 67108864 18 Sep 12:02 swapfile0 13. Delete the old swapfiles: $ cd /private/var/vm $ sudo rm swapfile* 14. Profit! Note 1 Simply modifying the arguments to dynamic_pager in the plist does not always work, and when it fails, it does so in a spectacularly silent way. The problem stems from the fact that dynamic_pager is launched very early in the startup process. If your swap partition has not yet been mounted when dynamic_pager is first loaded (in my experience, this happens 99% of the time), then the system will fake its way through. It will create a symbolic link in your /Volumes directory which has the same name as your swap partition, but points back to the default swapfile location (/private/var/vm). Then, when your actual swap partition mounts, it will be given the name Swap 1 (or YourDriveName 1). You can see the problem by opening up Terminal and listing the contents of your /Volumes directory: $ cd /Volumes $ ls -l You will see something like this: drwxrwxrwx 11 yourUser staff 442 16 Sep 12:13 Swap -> private/var/vm drwxrwxrwx 14 yourUser staff 5 16 Sep 12:13 Swap 1 lrwxr-xr-x 1 root admin 1 17 Sep 12:01 System -> / Note that this failure can be very hard to spot. If you were to check for the swapfiles as I show in step 12, you would still see them! The symbolic link would make it seem as though your swapfiles had been moved, even though they were actually being stored in the default location. Note 2 I was originally unable to get this to work in Snow Leopard because com.apple.dynamic_pager.plist was stored in binary format. I made a copy of the original file and opened it with Apple's Property List Editor (available with Xcode) in order to make changes, but this process added some extended attributes to the plist file which caused the system to ignore it and just use the defaults. As dblu pointed out, using plutil to convert the file to plain XML works like a charm. Note 3 You can check the Console application to see any messages that dynamic_pager_init echos to the screen. If you see the following lines repeated over and over again, there is a problem with the setup. I ran into these messages because I forgot to create the '.vm' directory that I specified in dynamic_pager_init. com.apple.launchd[1] (com.apple.dynamic_pager[176]) Exited with exit code: 1 com.apple.launchd[1] (com.apple.dynamic_pager) Throttling respawn: Will start in 10 seconds When everything is working properly, you may see the above message a couple of times, but you should also see the following message, and then no more of the "Throttling respawn" messages afterwards. com.apple.dynamic_pager[???] Launching dynamic pager on volume Swap This means that the script did have to wait for the partition to load, but in the end it was successful.

    Read the article

  • How to stop registration attempts on Asterisk

    - by Travesty3
    The main question: My Asterisk logs are littered with messages like these: [2012-05-29 15:53:49] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:53:50] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:53:55] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:53:55] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:53:57] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device <sip:[email protected]>;tag=cb23fe53 [2012-05-29 15:53:57] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device <sip:[email protected]>;tag=cb23fe53 [2012-05-29 15:54:02] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 15:54:03] NOTICE[5578] chan_sip.c: Registration from '<sip:[email protected]>' failed for '37.75.210.177' - No matching peer found [2012-05-29 21:20:36] NOTICE[5578] chan_sip.c: Registration from '"55435217"<sip:[email protected]>' failed for '65.218.221.180' - No matching peer found [2012-05-29 21:20:36] NOTICE[5578] chan_sip.c: Registration from '"1731687005"<sip:[email protected]>' failed for '65.218.221.180' - No matching peer found [2012-05-30 01:18:58] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=dEBcOzUysX [2012-05-30 01:18:58] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=9zUari4Mve [2012-05-30 01:19:00] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=sOYgI1ItQn [2012-05-30 01:19:02] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=2EGLTzZSEi [2012-05-30 01:19:04] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=j0JfZoPcur [2012-05-30 01:19:06] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=Ra0DFDKggt [2012-05-30 01:19:08] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=rR7q7aTHEz [2012-05-30 01:19:10] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=VHUMtOpIvU [2012-05-30 01:19:12] NOTICE[5578] chan_sip.c: Sending fake auth rejection for device "unknown" <sip:[email protected]>;tag=JxZUzBnPMW I use Asterisk for an automated phone system. The only thing it does is receives incoming calls and executes a Perl script. No outgoing calls, no incoming calls to an actual phone, no phones registered with Asterisk. It seems like there should be an easy way to block all unauthorized registration attempts, but I have struggled with this for a long time. It seems like there should be a more effective way to prevent these attempts from even getting far enough to reach my Asterisk logs. Some setting I could turn on/off that doesn't allow registration attempts at all or something. Is there any way to do this? Also, am I correct in assuming that the "Registration from ..." messages are likely people attempting to get access to my Asterisk server (probably to make calls on my account)? 
And what's the difference between those messages and the "Sending fake auth rejection ..." messages? Further detail: I know that the "Registration from ..." lines are intruders attempting to get access to my Asterisk server. With Fail2Ban set up, these IPs are banned after 5 attempts (for some reason, one got 6 attempts, but w/e). But I have no idea what the "Sending fake auth rejection ..." messages mean or how to stop these potential intrusion attempts. As far as I can tell, they have never been successful (haven't seen any weird charges on my bills or anything). Here's what I have done: Set up hardware firewall rules as shown below. Here, xx.xx.xx.xx is the IP address of the server, yy.yy.yy.yy is the IP address of our facility, and aa.aa.aa.aa, bb.bb.bb.bb, and cc.cc.cc.cc are the IP addresses that our VoIP provider uses. Theoretically, ports 10000-20000 should only be accessible by those three IPs.+-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+ | Order | Source Ip | Protocol | Direction | Action | Destination Ip | Destination Port | +-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+ | 1 | cc.cc.cc.cc/255.255.255.255 | udp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000 | | 2 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 80 | | 3 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 2749 | | 4 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 443 | | 5 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 53 | | 6 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 1981 | | 7 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 1991 | | 8 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 2001 | | 9 | yy.yy.yy.yy/255.255.255.255 | udp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 137-138 | | 10 | yy.yy.yy.yy/255.255.255.255 | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 139 | | 11 | yy.yy.yy.yy/255.255.255.255 | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 445 | | 14 | aa.aa.aa.aa/255.255.255.255 | udp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000 | | 17 | bb.bb.bb.bb/255.255.255.255 | udp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 10000-20000 | | 18 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 1971 | | 19 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 2739 | | 20 | any | tcp | inbound | permit | xx.xx.xx.xx/255.255.255.255 | 1023-1050 | | 21 | any | all | inbound | deny | any on server | 1-65535 | +-------+-----------------------------+----------+-----------+--------+-----------------------------+------------------+ Set up Fail2Ban. This is sort of working, but it's reactive instead of proactive, and doesn't seem to be blocking everything (like the "Sending fake auth rejection ..." messages). Set up rules in sip.conf to deny all except for my VoIP provider. Here is my sip.conf with almost all commented lines removed (to save space). 
Notice at the bottom is my attempt to deny all except for my VoIP provider:[general] context=default allowguest=no allowoverlap=no bindport=5060 bindaddr=0.0.0.0 srvlookup=yes disallow=all allow=g726 allow=ulaw allow=alaw allow=g726aal2 allow=adpcm allow=slin allow=lpc10 allow=speex allow=g726 insecure=invite alwaysauthreject=yes ;registertimeout=20 registerattempts=0 register = user:pass:[email protected]:5060/700 [mysipprovider] type=peer username=user fromuser=user secret=pass host=sip.mysipprovider.com fromdomain=sip.mysipprovider.com nat=no ;canreinvite=yes qualify=yes context=inbound-mysipprovider disallow=all allow=ulaw allow=alaw allow=gsm insecure=port,invite deny=0.0.0.0/0.0.0.0 permit=aa.aa.aa.aa/255.255.255.255 permit=bb.bb.bb.bb/255.255.255.255 permit=cc.cc.cc.cc/255.255.255.255

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
    Numerical Analysis – When, What, (but not how) Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. It’s pretty easy to get lost in the details – so I’ve tried to organize these methods in a way that I can quickly look this up. I’ve included links to detailed explanations and to C++ code examples. I’ve tried to classify Numerical methods in the following broad categories: Solving Systems of Linear Equations Solving Non-Linear Equations Iteratively Interpolation Curve Fitting Optimization Numerical Differentiation & Integration Solving ODEs Boundary Problems Solving EigenValue problems Enjoy – I did! Solving Systems of Linear Equations Overview Solve sets of algebraic equations with x unknowns The set is commonly in matrix form Gauss-Jordan Elimination http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx Produces solution of the equations & the coefficient matrix Efficient, stable 2 steps: · Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no solution · Backward Elimination – write the original matrix as the product of its inverse matrix & its reduced row-echelon matrix → reduce set to row canonical form & use back-substitution to find the solution to the set Elementary ops for matrix decomposition: · Row multiplication · Row switching · Add multiples of rows to other rows Use pivoting to ensure rows are ordered for achieving triangular form LU Decomposition http://en.wikipedia.org/wiki/LU_decomposition C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html Represent the matrix as a product of lower & upper triangular matrices A modified version of GJ Elimination Advantage – can easily apply forward & backward elimination to solve triangular matrices Techniques: · Doolittle Method – sets the L matrix diagonal to unity · Crout Method - sets the U matrix diagonal to unity Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix Gauss-Seidel Iteration http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method C++: http://www.nr.com/forum/showthread.php?t=722 Transform the linear set of equations into a single equation & then use numerical integration (as integration formulas have Sums, it is implemented iteratively). 
    an optimization of Gauss-Jacobi: 1.5 times faster, requires about a quarter of the iterations to achieve the same tolerance Solving Non-Linear Equations Iteratively find roots of polynomials – there may be 0, 1 or n solutions for an n order polynomial use iterative techniques Iterative methods · used when there are no known analytical techniques · Requires set functions to be continuous & differentiable · Requires an initial seed value – choice is critical to convergence → conduct multiple runs with different starting points & then select best result · Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met · bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge Incremental method if a nonlinear function has opposite signs at 2 ends of a small interval x1 & x2, then there is likely to be a solution in their interval – solutions are detected by evaluating a function over interval steps, for a change in sign, adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities Fixed point method http://en.wikipedia.org/wiki/Fixed-point_iteration C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false Algebraically rearrange a solution to isolate a variable then apply incremental method Bisection method http://en.wikipedia.org/wiki/Bisection_method C++: http://numericalcomputing.wordpress.com/category/algorithms/ Bracketed - Select an initial interval, keep bisecting it at its midpoint into sub-intervals and then apply incremental method on smaller & smaller intervals – zoom in Adv: unaffected by function gradient → reliable Disadv: slow convergence False Position Method http://en.wikipedia.org/wiki/False_position_method C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/ Bracketed - Select an initial interval, & use the relative value of function at interval end points to select next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this) Newton-Raphson method http://en.wikipedia.org/wiki/Newton's_method C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3 Also known as Newton's method Convenient, efficient Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots Can be easily applied to an n-coupled set of non-linear equations – conduct a Taylor Series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of PDs & convert to simultaneous linear equations !!! Secant Method http://en.wikipedia.org/wiki/Secant_method C++: http://forum.vcoderz.com/showthread.php?p=205230 Unlike N-R, can estimate first derivative from an initial interval (does not require root to be bracketed) instead of inputting it Since derivative is approximated, may converge slower. Is fast in practice as it does not have to evaluate the derivative at each step. 
    Similar implementation to the False Position method Birge-Vieta Method http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up Interpolation Overview Construct new data points for as close as possible fit within range of a discrete set of known points (that were obtained via sampling, experimentation) Use Taylor Series Expansion of a function f(x) around a specific value for x Linear Interpolation http://en.wikipedia.org/wiki/Linear_interpolation C++: http://www.hamaluik.com/?p=289 Straight line between 2 points → concatenate interpolants between each pair of data points Bilinear Interpolation http://en.wikipedia.org/wiki/Bilinear_interpolation C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/ Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another. Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell. Lagrange Interpolation http://en.wikipedia.org/wiki/Lagrange_polynomial C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php For polynomials Requires recomputation for all terms for each distinct x value – can only be applied for small number of nodes Numerically unstable Barycentric Interpolation http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715 C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/ Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x Newton Divided Difference Interpolation http://en.wikipedia.org/wiki/Newton_polynomial C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html Hermite Divided Differences: Interpolation polynomial approximation for a given set of data points in the NR form - divided differences are used to approximately calculate the various differences. 
For a given set of 3 data points , fit a quadratic interpolant through the data Bracketed functions allow Newton divided differences to be calculated recursively Difference table Cubic Spline Interpolation http://en.wikipedia.org/wiki/Spline_interpolation C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html Spline is a piecewise polynomial Provides smoothness – for interpolations with significantly varying data Use weighted coefficients to bend the function to be smooth & its 1st & 2nd derivatives are continuous through the edge points in the interval Curve Fitting A generalization of interpolating whereby given data points may contain noise à the curve does not necessarily pass through all the points Least Squares Fit http://en.wikipedia.org/wiki/Least_squares C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c Residual – difference between observed value & expected value Model function is often chosen as a linear combination of the specified functions Determines: A) The model instance in which the sum of squared residuals has the least value B) param values for which model best fits data Straight Line Fit Linear correlation between independent variable and dependent variable Linear Regression http://en.wikipedia.org/wiki/Linear_regression C++: http://www.oocities.org/david_swaim/cpp/linregc.htm Special case of statistically exact extrapolation Leverage least squares Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition) Can be weighted - Drop the assumption that all errors have the same significance –-> confidence of accuracy is different for each data point. Fit the function closer to points with higher weights Polynomial Fit - use a polynomial basis function Moving Average http://en.wikipedia.org/wiki/Moving_average C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters Replace each data point with average of neighbors. Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point. Parameters: smoothing factor, period, weight basis Optimization Overview Given function with multiple variables, find Min (or max by minimizing –f(x)) Iterative approach Efficient, but not necessarily reliable Conditions: noisy data, constraints, non-linear models Detection via sign of first derivative - Derivative of saddle points will be 0 Local minima Bisection method Similar method for finding a root for a non-linear equation Start with an interval that contains a minimum Golden Search method http://en.wikipedia.org/wiki/Golden_section_search C++: http://www.codecogs.com/code/maths/optimization/golden.php Bisect intervals according to golden ratio 0.618.. 
Achieves reduction by evaluating a single function instead of 2 Newton-Raphson Method Brent method http://en.wikipedia.org/wiki/Brent's_method C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near to the minimum, then a parabola fitted through any 3 points should approximate the minima – fails when the 3 points are collinear , in which case the denominator is 0 Simplex Method http://en.wikipedia.org/wiki/Simplex_algorithm C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm Find the global minima of any multi-variable function Direct search – no derivatives required At each step it maintains a non-degenerative simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at n-1 points, iteratively replacing the point of worst result with the point of best result, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point Oscillation can be avoided by choosing the 2nd worst result Restart if it gets stuck Parameters: contraction & expansion factors Simulated Annealing http://en.wikipedia.org/wiki/Simulated_annealing C++: http://code.google.com/p/cppsimulatedannealing/ Analogy to heating & cooling metal to strengthen its structure Stochastic method – apply random permutation search for global minima - Avoid entrapment in local minima via hill climbing Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta Cooling schedule – can be linear, step-wise or exponential Differential Evolution http://en.wikipedia.org/wiki/Differential_evolution C++: http://www.amichel.com/de/doc/html/ More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies Parallel direct search method against multiple discrete or continuous variables Initial population of variable vectors chosen randomly – if weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector Many params: #parents, #variables, step size, crossover constant etc Convergence is slow – many more function evaluations than simulated annealing Numerical Differentiation Overview 2 approaches to finite difference methods: · A) approximate function via polynomial interpolation then differentiate · B) Taylor series approximation – additionally provides error estimate Finite Difference methods http://en.wikipedia.org/wiki/Finite_difference_method C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h. Forward / backward difference - the sums of the series contains even derivatives and the difference of the series contains odd derivatives – coupled equations that can be solved. 
Provide an approximation of the derivative within a O(h^2) accuracy There is also central difference & extended central difference which has a O(h^4) accuracy Richardson Extrapolation http://en.wikipedia.org/wiki/Richardson_extrapolation C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html A sequence acceleration method applied to finite differences Fast convergence, high accuracy O(h^4) Derivatives via Interpolation Cannot apply Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation Note: the higher the order of the derivative, the lower the approximation precision Numerical Integration Estimate finite & infinite integrals of functions More accurate procedure than numerical differentiation Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are Newton Cotes Methods http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas C++: http://www.siafoo.net/snippet/324 For equally spaced data points Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum Weights are derived from Lagrange Basis polynomials Leverage Trapezoidal Rule for default 2nd formulas, Simpson 1/3 Rule for substituting 3 point formulas, Simpson 3/8 Rule for 4 point formulas. For 4 point formulas use Bodes Rule. Higher orders obtain more accurate results Trapezoidal Rule uses simple area, Simpsons Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) for its end points, but adds a midpoint Romberg Integration http://en.wikipedia.org/wiki/Romberg's_method C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q= Combines trapezoidal rule with Richardson Extrapolation Evaluates the integrand at equally spaced points The integrand must have continuous derivatives Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) à a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small. Gaussian Quadrature http://en.wikipedia.org/wiki/Gaussian_quadrature C++: http://www.alglib.net/integration/gaussianquadratures.php Data points are chosen to yield best possible accuracy – requires fewer evaluations Ability to handle singularities, functions that are difficult to evaluate The integrand can include a weighting function determined by a set of orthogonal polynomials. 
Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2) Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn à asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions à accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where solution of differential equations are located at 2 different values of the independent variable x à more difficult, because cannot just start at point of initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve u given the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems An eigenvalue can substitute a matrix when doing matrix multiplication à convert matrix multiplication into a polynomial EigenValue For a given set of equations in matrix form, determine what are the solution eigenvalue & eigenvectors Similar Matrices - have same eigenvalues. 
    Use orthogonal similarity transforms to reduce a matrix to diagonal form from which eigenvalue(s) & eigenvectors can be computed iteratively Jacobi method http://en.wikipedia.org/wiki/Jacobi_method C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html Robust but computationally intense – use for small matrices < 10x10 Power Iteration http://en.wikipedia.org/wiki/Power_iteration For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvectors Simplest method – does not compute matrix decomposition → suitable for large, sparse matrices Inverse Iteration Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix Rayleigh Method http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis Variation of power iteration method Rayleigh Quotient Method Variation of inverse iteration method Matrix Tri-diagonalization Method Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms What’s Next Outside of Numerical Methods there are lots of different types of algorithms that I’ve learned over the decades: Data Mining – (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx ) Search & Sort Routing Problem Solving Logical Theorem Proving Planning Probabilistic Reasoning Machine Learning Solvers (e.g. MIP) Bioinformatics (Sequence Alignment, Protein Folding) Quant Finance (I read Wilmott’s books – interesting) Sooner or later, I’ll cover the above topics as well.
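    To make the taxonomy a little more concrete, here is a minimal C++ sketch (my own illustration, not code from the linked references) of two of the simplest building blocks described above: the bracketed bisection root finder and the composite trapezoidal rule from the Newton-Cotes family. The function names, tolerance and test functions are assumptions chosen for the example:
    #include <cmath>
    #include <cstdio>
    #include <functional>

    // Bracketed bisection: assumes f is continuous and f(a), f(b) have opposite signs.
    double bisect(const std::function<double(double)>& f, double a, double b,
                  double tol = 1e-10, int maxIter = 200) {
        double fa = f(a);
        for (int i = 0; i < maxIter && (b - a) > tol; ++i) {
            double m = 0.5 * (a + b);
            double fm = f(m);
            if (fa * fm <= 0.0) { b = m; }   // sign change in [a, m]
            else { a = m; fa = fm; }         // sign change in [m, b]
        }
        return 0.5 * (a + b);
    }

    // Composite trapezoidal rule (2-point Newton-Cotes) over [a, b] with n strips.
    double trapezoid(const std::function<double(double)>& f, double a, double b, int n) {
        double h = (b - a) / n;
        double sum = 0.5 * (f(a) + f(b));
        for (int i = 1; i < n; ++i) sum += f(a + i * h);
        return h * sum;
    }

    int main() {
        const double pi = std::acos(-1.0);
        // Root of x^2 - 2 on [1, 2] is sqrt(2); integral of sin(x) over [0, pi] is 2.
        double root = bisect([](double x) { return x * x - 2.0; }, 1.0, 2.0);
        double area = trapezoid([](double x) { return std::sin(x); }, 0.0, pi, 1000);
        std::printf("root ~= %.10f, integral ~= %.6f\n", root, area);
        return 0;
    }
    The bracketing test in the loop is what makes bisection unconditionally convergent (at the cost of slow, linear convergence), which is exactly the reliability/speed trade-off noted in the method descriptions above.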

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places even very remote, cell phones and even smart phones are a common sight. I’ll never forget when I was in India in 2011 I was up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder in front riding atop of the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban as did quite a few of the locals in this small out of the way and not so touristy village. So we’re slowly trundling along in the forest and he’s lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting a huge pig jumps out from the side of the trail and he takes a picture running across our path in the jungle! So yeah, mobile technology is very pervasive and it’s reached into even very buried and unexpected parts of this world. Apps are still King Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there’s something that you need on your mobile device your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to have to support 2 or 3 completely separate platforms. There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration isssues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development. There’s also PhoneGap/Cordova which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized App container that can run on most mobile device platforms using a WebView interface. This allows for using of HTML technology, but it also still requires all the set up, configuration of APIs, security keys and certification and submission and deployment process just like native applications – you actually lose many of the benefits that  Web based apps bring. The big selling point of Cordova is that you get to use HTML have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place. Apps can be a big pain to create and manage especially when we are talking about specialized or vertical business applications that aren’t geared at the mainstream market and that don’t fit the ‘store’ model. If you’re building a small intra department application you don’t want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit both from the development and deployment perspective. 
    Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native, from write-once run-anywhere, to remote maintenance, single point of management and failure, to having full control over the application as opposed to having the app store overlords censor you. In a lot of ways Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today’s Mobile Web unfortunately looks a little different… Where’s the Love for the Mobile Web? Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and yet we still don’t really have a solid mobile Web platform. I know what you’re thinking: “But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces”. I agree to a point – it’s actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I’ve been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile first UIs and that’s been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good. Not just about the UI But it’s not just about the UI - it’s also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this. Slow Standards Adoption HTML standards implementation and ratification have been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that’s missing some of the infrastructure pieces to make it whole on mobile devices. There’s lots of potential, but what is lacking is that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile specific HTML APIs. A host of mobile standards exist, but many of the standards are in the early review stage and they have been stuck there for long periods of time and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud. 
It seems that either the standards bodies or the vendors need to carry the torch forward and that doesn’t seem to be happening quickly enough. Mobile Device Integration just isn’t good enough Current standards are not far reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs to have access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking and contacts system are benefits that are unique to mobile applications and could be widely used, but are mostly (with the exception of GPS) inaccessible for Web based applications today. Unfortunately trying to do most of this today only with a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova’s app centric model with its own custom API accessing mobile device features and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example there’s no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only implemented partially by mobile devices. There are alternatives and workarounds for some of these interfaces by using browser specific code, but that’s might ugly and something that I thought we were trying to leave behind with newer browser standards. But it’s not quite working out that way. It’s utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, Messaging API, Contacts Manager API have only minimal or no traction at all today. Keep in mind we’ve had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that’s feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago. It’s extremely frustrating to see build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often can be show stoppers. The remaining 10% have to do with platform integration, browser differences and working around the limitations that browsers and ‘pinned’ applications impose on HTML applications. The maddening part is that these limitations seem arbitrary as they could easily work on all mobile platforms. For example, SMS has a URL Moniker interface that sort of works on Android, works badly with iOS (only works if the address is already in the contact list) and not at all on Windows Phone. There’s no reason this shouldn’t work universally using the same interface – after all all phones have supported SMS since before the year 2000! But, it doesn’t have to be this way Change can happen very quickly. Take the GeoLocation API for example. Geolocation has taken off at the very beginning of the mobile device era and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers today. It handles security concerns via prompts to avoid unwanted access which is a model that would work for most other device APIs in a similar fashion. 
One time approval and occasional re-approval if code changes or caches expire. Simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there. Inadequate Web Application Linking and Activation Another important piece of the puzzle missing is the integration of HTML based Web applications. Today HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content there’s a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently and permanent linking is not so critical for this type of content.  But applications have different needs. Applications need to be started up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it’s pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features. Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications, to ‘installing’ them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a Mobile Web ‘App’ and making it sticky is by no means as easy or satisfying. Today the way you’d go about this is: Open the browser Search for a Web Site in the browser with your search engine of choice Hope that you find the right site Hope that you actually find a site that works for your mobile device Click on the link and run the app in a fully chrome’d browser instance (read tiny surface area) Pin the app to the home screen (with all the limitations outline above) Hope you pointed at the right URL when you pinned Even for you and me as developers, there are a few steps in there that are painful and annoying, but think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app and hopefully from the right location? You’ve probably lost more than half of your audience at that point. This experience sucks. For developers too this process is painful since app developers can’t control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with annoying pop-ups that try to get people to create shortcuts via fancy animations that are both annoying and add overhead to each and every application that implements this sort of thing differently. And that’s not the end of it - getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple’s non-standard meta tags are prominent and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires you to create an actual screen or rather a partial screen be captured for a shortcut in the tile manager. Who had that brilliant idea I wonder? Surprisingly Chrome on recent Android versions seems to actually get it right – icons use pngs, pinning is easy and pinned applications properly behave like standalone apps and retain the browser’s active page state and content. 
Each of the platforms has a different way to specify icons (WP doesn’t allow you to use an icon image at all), and the most widely used interface today is a bunch of Apple specific meta tags that other browsers choose to support. The question is: why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one? Then there’s iOS and the crazy way it treats home screen linked URLs, using a hybrid format that is neither as capable as a Web app running in Safari nor a WebView hosted application. Moving off the Web ‘app’ link when switching to another app actually causes the browser to ‘blank out’ the Web application’s preview in the Task View. Then, when the ‘app’ is reactivated, it ends up completely restarting the browser with the original link. This is crazy behavior that you can’t easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like, say, Google Maps) that’s not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something that we essentially have solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to get even a remotely decent linking/activation experience across browsers. Where’s the Money? It’s not surprising that device home screen integration and Mobile Web support in general are in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow that it presents. On top of that, platform specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, and so again it’s no surprise that the mobile Web is on a struggling path. But – that may be changing. More and more we’re seeing operations shift to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward. Do we need a Mobile Web App Store? As much as I dislike moderated experiences in today’s massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry and then also handle installation, in the form of providing the home screen linking plus maybe an initial security configuration that determines which device features the app is allowed to access. 
I’m not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity – a mobile Web app store from a no name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course it would also be an optional search path in addition to the standard open Web search mechanisms used to find and access content today. Security Security is a big deal, and one of the perceived reasons why so many IT professionals appear to be willing to go back to the walled garden of deployed apps is that apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware and from illegal and misleading content. It doesn’t always work out that way, and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run entirely within the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier – security for device interaction can be granted to mobile applications running in a Web browser the same way it is granted to native applications, either via explicit policies loaded from the Web or via prompting, as GeoLocation does today. Security is important, but it’s certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn’t be a reason to keep Web apps from being an equal player in mobile applications. Apps are winning, but haven’t we been here before? So now we’re finding ourselves back in an era of installed apps, rather than Web based and managed apps. Only it’s even worse today than with desktop applications, in that the apps are going through a gatekeeper that charges a toll and censors what you can and can’t do in your apps. Frankly, it’s a mystery to me why anybody would buy into this model and why it’s lasted this long when we’ve already been through this process. It’s crazy… It’s really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we’re basically held back by what seems little more than bureaucracy, partisan bickering and the self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer’s 98+% market share holding back the Web from improvements for many years – now it’s the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way toward making mobile Web applications much more usable and seriously viable alternatives to native apps. But as it is, mobile Web apps have a severe disadvantage in placement and operation. There are a few bright spots in all of this. Mozilla’s Firefox OS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content. 
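To give an idea of how lightweight that model is: a hosted Open Web App on Firefox OS is essentially a normal HTML/JavaScript site plus a small JSON manifest describing it. The sketch below is illustrative only; the name, paths and icon size are made up:

{
  "name": "My Web App",
  "description": "A plain HTML and JavaScript app, hosted on the open Web",
  "launch_path": "/index.html",
  "icons": { "128": "/img/icon-128.png" },
  "developer": { "name": "Example Dev", "url": "http://example.com" },
  "default_locale": "en"
}

Anything described this way is treated by the OS as an installable app rather than a mere bookmark, which is exactly the kind of first class treatment the other platforms lack.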
Firefox OS supports both packaged and certified app modes (which can be put into the app store), and Open Web Apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web Apps are treated as first class citizens in Firefox OS and run using the same mechanism as installed apps. Unfortunately Firefox OS is off to a slow start, with minimal device support and a specific focus on the low end market. We can hope that this approach will change and catch on with other vendors, but that’s also an uphill battle given the conflict of interest with platform lock-in that it represents. Recent versions of Android also seem to be working reasonably well with mobile application integration onto the desktop and activation out of the box. Although Android still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons go to the desktop on pinning, activation is WebView based and full screen, and application persistence is reliable since the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support. What’s also interesting to me is that Microsoft hasn’t picked up on the obvious need for a solid Web app platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on their adversaries’ terms instead of taking a new tack. Providing a kick ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive Microsoft could do to improve its miserable position in the mobile device market. Where are we at with Mobile Web? It sure sounds like I’m really down on the Mobile Web, right? I’ve built a number of mobile apps in the last year, and while the overall result and the response to what we were able to accomplish in terms of UI have been very positive, getting that final 10% that required device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you’re not integrating with device features and you don’t care deeply about a smooth integration with the mobile desktop, mobile Web development is fraught with frustration. So, yes, I’m frustrated! But it’s not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn't be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens. So, what’s your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion. 
Resources: Rick Strahl on DotNet Rocks talking about Mobile Web. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in HTML5, Mobile.

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET and integrated with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there. For the Curious From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worst for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy. A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to the this Post) Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of The AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support an adaptor was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle’s strategic technology roadmap, older portal products are being schedule for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger). 
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently with the key objective to specifically address the needs of the WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer: ***************************************** Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET. Support 2 way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET. Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers. The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET. ***************************************** The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the.NET Accelerator is required to migrate from the Aqualogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as a child to the WSRP producer web application as root. Differences between .NET Accelerator and WSRP Producer Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The basic terminology used to describe the participating applications in a WSRP environment are the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to as a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the.NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application. 
In this manner, the WSRP Producer acts more as a Request Filter than a proxy in the WSRP transactions between Producer and Consumer. Highly Recommended Enabling Logging Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure(); IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations. Things You Must Do Merge Web.Config Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention J Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config to the one in the WSRP Producer. Use the WSRP Producer Master Page The Master Page installed for the WSRP Producer provides common, hiddenform fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master" . You then replace: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> With <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server"> And </HEAD> <body> <form id="theForm" method="post" runat="server"> With </asp:Content> <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server"> And finally </form> </body> </HTML> With </asp:Content> In the event you already use Master Pages, adapt your existing Master Pages to be sub masters. See Nested ASP.NET Master Pages for a detailed reference of how to do this. It Happened to Me, It Might Happen to You…Or Not Watch for Use of Session or Request in OnInit In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet then check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) is fully available. I find doing this at the start of the Page_Load method to be the simplest solution. Case Sensitivity .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these mark up instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. 
If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup and match the case in use with the duplicate rule. For example: <RewriterRule> <LookFor>(href=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> May need to be duplicated as: <RewriterRule> <LookFor>(HREF=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenous to test and maintain, so the recommendation is to use duplicate rules. Is it Still Relative? Some .NET applications base relative paths with a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets and many ../ paths will need another ../ in front. I Can See You But I Can’t Find You This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to simply update the references in the code. However, as the WSRP Master Page is external to the code, a compile time error resulted: Error      155         The name 'form1' does not exist in the current context                C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens \Reporting.aspx.cs                51           52           legacywebsite.module Much hair-pulling research later it was discovered that it was the use of the FindControl method causing the issue. FindControl doesn’t work quite as expected once a Master Page has been introduced as the controls become embedded in controls, require a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this: ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles)); Becomes this: ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles)); Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a MasterPage has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely: public static Control FindControlRecursive(Control Root, string Id) { if (Root.ID == Id) return Root; foreach (Control Ctl in Root.Controls) { Control FoundCtl = FindControlRecursive(Ctl, Id); if (FoundCtl != null) return FoundCtl; } return null; } Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary. 
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727 4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1 5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1 6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev 7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev Tests: 1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6. 2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx 3. Register the producer in WLP and test the portlet. Changing the Port used by WSRP Producer The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl. Do You Need to Migrate? Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.

    Read the article

  • The Incremental Architect&acute;s Napkin &ndash; #3 &ndash; Make Evolvability inevitable

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That´s what automated acceptance tests are for. That´s what sprint reviews in Scrum are for. It´s no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it´s lacking the traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function, Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also, Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more so. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot just start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for a long time. That´s why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum´s sprints of 4, 2, even 1 week are too long. Kanban´s flow of user stories across the board is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment: Clear expectations Absolute estimates (“This will take X days to complete.”) are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are the pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That´s because we understand better what the requirements really are and what the solution should look like. 
But maybe more importantly the shorter the timespan the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That´s why I believe we can give rough absolute estimates on 3 levels: Noon Tonight Tomorrow Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you, how long it will take you to implement a user story or bug fix, you can say, “It´ll be fixed by noon.”, or you can say, “I can manage to implement it until tonight before I leave.”, or you can say, “You´ll get it by tomorrow night at latest.” Yes, I believe all else would be naive. If you´re not confident to get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don´t do deep estimates. Don´t estimate dozens of issues; don´t think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it´s C as a Noon, finally D is a Tonight - that´s what I´ll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow. Trust through reliability Our trade is lacking trust. Customers don´t trust software companies/departments much. Managers don´t trust developers much. I find that perfectly understandable in the light of what we´re trying to accomplish: delivering software in the face of uncertainty by means of material good production. Customers as well as managers still expect software development to be close to production of houses or cars. But that´s a fundamental misunderstanding. Software development ist development. It´s basically research. As software developers we´re constantly executing experiments to find out what really provides value to users. We don´t know what they need, we just have mediated hypothesises. That´s why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - up to 32 hours at most can be satisfied. I´d say: reliability over scope. It´s more important to reliably deliver what was promised then to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don´t wait for some Kanban board to show you, that flow can be improved by scheduling smaller stories. You don´t need to learn that the hard way. Just start with small batch sizes of three different sizes. Fast feedback What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? 
Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It´s boring, it´s frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It´s even more important than planning new issues. Because as long as work started is not released (accepted), it´s potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it´s a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue. Flexibility As if reliability/trust and fast feedback for less waste weren´t enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it´s neither D nor E, but G. And after G it´s D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It´s unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you´re able to change your course at any time. Premature completion Customers/management are used to premeditated budgets. They want to know exactly how much to pay for a certain amount of requirements. That´s understandable. But it does not match the nature of software development. We should know that by now. Maybe there´s somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven´t seen such a team yet. Which does not mean it´s impossible, but I think it´s nothing I can recommend to strive for. Rather I´d say: Don´t try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment. 
With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn´t it more important to constantly move forward? Step by step. We´re not running sprints, we´re not running marathons, not even ultra-marathons. We´re in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see if a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It´s good enough - for now. 33% money saved. Wouldn´t that be splendid? Isn´t that a stunning argument for any budget-sensitive customer? You can save money and still get what you need? Pull on practices So far, in addition to more trust, more flexibility, and less money spent, Spinning led to “doing less”, which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning´s short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn´t 90% of the developer community practicing automated tests consistently? I think the answer is simple: Because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that´s different. If you´re practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32 hour promises. Spinning thus pulls developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner, then management, will notice an increasing difficulty to deliver value within 32 hours. 
There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don´t count the “WTF!”, count the “No way!” utterances. In closing For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development, but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It´s sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work with what´s there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that´s the challenge we need to accept if we´re serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and “getting more bang for the buck”.

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were: JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue. 1. Recap and Prerequisites In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process. 
As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them please see the previous samples. They also include instructions on how to verify the objects are set up correctly. WebLogic Server Objects Object Name Type JNDI Name TestConnectionFactory Connection Factory jms/TestConnectionFactory TestJMSQueue JMS Queue jms/TestJMSQueue eis/wls/TestQueue Connection Pool eis/wls/TestQueue Schema XSD File The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process. stringPayload.xsd <?xml version="1.0" encoding="windows-1252" ?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"                 xmlns="http://www.example.org"                 targetNamespace="http://www.example.org"                 elementFormDefault="qualified">   <xsd:element name="exampleElement" type="xsd:string">   </xsd:element> </xsd:schema> JMS Message After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue: <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement> JDeveloper Connection You will need a valid Application Server Connection in JDeveloper pointing to the SOA server which the process will be deployed to. 2. Create a BPEL Composite with a JMS Adapter Partner Link In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is design the process in such a way that the adapter polls for new messages and when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the most simple to set up and demonstrate. But it has one disadvantage as a demonstrative model. When a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution for this would be to pause the consumption for the queue and resume consumption again if needed. This can be done in the WLS console JMS-Modules -> queue -> Control -> Consumption -> Pause/Resume.) We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won’t complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded java code, which will print the message to standard output, so we can view it in the SOA server’s log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message. 
The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one. Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite. Create a JMS Adapter Partner Link In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterRead Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Consume Message Operation Name: Consume_message Consume Operation Parameters Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue , which is the queue created in a previous example. JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue . (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) Messages/Message SchemaURL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string . Press Next and Finish, which will complete the JMS Adapter configuration.Save the project. Create a BPEL Component Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema and select Template: Define Service Later and press OK. Wire the JMS Adapter to the BPEL Component Now wire the JMS adapter to the BPEL process, by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist. This completes the steps at the composite level. 3 . 
Complete the BPEL Process Design Invoke the BPEL Flow via the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received. At this point it would actually be OK to compile and deploy the composite and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance’s flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server’s log file, which will be easy to monitor. Add a Java Embedding Activity First get the full name of the process’s input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example, "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement”, and note variable’s name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary.: System.out.println("JmsAdapterReadSchema process picked up a message"); oracle.xml.parser.v2.XMLElement inputPayload =    (oracle.xml.parser.v2.XMLElement)getVariableData(                           "Receive1_Consume_Message_InputVariable",                           "body",                           "/ns2:exampleElement");   String inputString = inputPayload.getFirstChild().getNodeValue(); System.out.println("Input String is " + inputPayload.getFirstChild().getNodeValue()); Tip. If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer. This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server. 3. Test the Composite Shut Down the JmsAdapterReadSchema Composite After deploying the JmsAdapterReadSchema composite to the SOA server it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0] . 
Press the Shut Down button to disable the composite and confirm the following popup. Monitor Messages in the JMS Queue In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn’t contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail. Send a Test Message In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”. Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console. Monitor the SOA Server’s Output A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected to somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started, otherwise open and monitor the file to which it was redirected. Re-Enable the JmsAdapterReadSchema Composite In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message and the following output should be written to the server’s standard output: JmsAdapterReadSchema process picked up a message. Input String is Message from JmsAdapterWriteSchema Note that you can also monitor the payload received by the process, by navigating to the the JmsAdapterReadSchema’s Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too. 4 . Troubleshooting This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn’t contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc. check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in previous section. If this doesn’t help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter. This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery
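If you run into the payload errors described above, another option is to make the Java embedding code itself a little more defensive, so that a malformed or empty message is logged rather than causing an XPath or null-pointer failure. The following is a sketch only, not part of the original sample: it assumes the same variable name and element path used earlier, and addAuditTrailEntry is the audit helper normally available inside a BPEL Java Embedding activity; check both against your own composite and SOA version.

    // Runs inside a Java Embedding activity; getVariableData and addAuditTrailEntry
    // are supplied by the BPEL execution context.
    Object payload = getVariableData("Receive1_Consume_Message_InputVariable",
                                     "body",
                                     "/ns2:exampleElement");
    if (payload instanceof oracle.xml.parser.v2.XMLElement) {
        org.w3c.dom.Node child = ((oracle.xml.parser.v2.XMLElement) payload).getFirstChild();
        String inputString = (child == null) ? "<empty exampleElement>" : child.getNodeValue();
        System.out.println("Input String is " + inputString);
        addAuditTrailEntry("Input String is " + inputString);   // also visible in the EM flow trace
    } else {
        System.out.println("Unexpected payload: " + payload);
    }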

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately run headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extentions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script used after both objects have been built, to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level. 
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax. 
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings for the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
    Title: Fast concurrent MPMC queue -- I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java. Partial operators are often more convenient than total methods. In many use cases if the preconditions aren't met, there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is in the case of work-stealing queues where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms). It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait. A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility). Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction. Zebra striping : alternating row colors for nice-looking code listings. See also google code "prettify" : https://code.google.com/p/google-code-prettify/ Prettify is a javascript module that yields the HTML/CSS/JS equivalent of pretty-print. 
// PTLQueue :

Put(v) :
  // producer : partial method - waits as necessary
  assert v != null
  assert Mask >= 1 && (Mask & (Mask+1)) == 0   // Document invariants
  // doorway step
  // Obtain a sequence number -- ticket
  // As a practical concern the ticket value is temporally unique
  // The ticket also identifies and selects a slot
  auto tkt = AtomicFetchIncrement (&PutCursor, 1)
  slot * s = &Slots[tkt & Mask]
  // waiting phase :
  // wait for slot's generation to match the tkt value assigned to this put() invocation.
  // The "generation" is implicitly encoded as the upper bits in the cursor
  // above those used to specify the index : tkt div (Mask+1)
  // The generation serves as an epoch number to identify a cohort of threads
  // accessing disjoint slots
  while s->Turn != tkt : Pause
  assert s->MailBox == null
  s->MailBox = v   // deposit and pass message

Take() :
  // consumer : partial method - waits as necessary
  auto tkt = AtomicFetchIncrement (&TakeCursor,1)
  slot * s = &Slots[tkt & Mask]
  // 2-stage waiting :
  // First wait for turn for our generation
  // Acquire exclusive "take" access to slot's MailBox field
  // Then wait for the slot to become occupied
  while s->Turn != tkt : Pause
  // Concurrency in this section of code is now reduced to just 1 producer thread
  // vs 1 consumer thread.
  // For a given queue and slot, there will be at most one Take() operation running
  // in this section.
  // Consumer waits for producer to arrive and make slot non-empty
  // Extract message; clear mailbox; advance Turn indicator
  // We have an obvious happens-before relation :
  // Put(m) happens-before corresponding Take() that returns that same "m"
  for
    T v = s->MailBox
    if v != null :
      s->MailBox = null
      ST-ST barrier
      s->Turn = tkt + Mask + 1   // unlock slot to admit next producer and consumer
      return v
    Pause

tryTake() :
  // total method - returns ASAP with failure indication
  for
    auto tkt = TakeCursor
    slot * s = &Slots[tkt & Mask]
    if s->Turn != tkt : return null
    T v = s->MailBox   // presumptive return value
    if v == null : return null
    // ratify tkt and v values and commit by advancing cursor
    if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
    s->MailBox = null
    ST-ST barrier
    s->Turn = tkt + Mask + 1
    return v

The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment. 
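Before continuing the comparison with PTL and MultiLane, here is a compact Java transcription of the pseudocode above. It is not from the original post, just a sketch to make the structure concrete: AtomicLongArray and AtomicReferenceArray provide the volatile per-slot Turn and MailBox fields, AtomicLong.getAndIncrement() plays the role of AtomicFetchIncrement, and padding, per-slot parking, and the total tryPut()/tryTake() variants are omitted.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.AtomicLongArray;
    import java.util.concurrent.atomic.AtomicReferenceArray;

    final class PTLQueue<T> {
        private final int mask;                          // capacity - 1; capacity is a power of two
        private final AtomicLongArray turns;             // per-slot Turn fields
        private final AtomicReferenceArray<T> mailboxes; // per-slot MailBox fields
        private final AtomicLong putCursor  = new AtomicLong(0);
        private final AtomicLong takeCursor = new AtomicLong(0);

        PTLQueue(int capacity) {
            if (capacity < 1 || (capacity & (capacity - 1)) != 0)
                throw new IllegalArgumentException("capacity must be a power of two");
            mask = capacity - 1;
            turns = new AtomicLongArray(capacity);
            mailboxes = new AtomicReferenceArray<T>(capacity);
            for (int i = 0; i < capacity; i++) turns.set(i, i);  // slot i initially belongs to ticket i
        }

        // Partial put: waits for its generation's turn on the slot, then deposits.
        void put(T v) {
            if (v == null) throw new NullPointerException();
            long tkt = putCursor.getAndIncrement();              // doorway step: ticket selects a slot
            int slot = (int) (tkt & mask);
            while (turns.get(slot) != tkt) Thread.onSpinWait();  // waiting phase
            mailboxes.set(slot, v);                              // deposit and pass message
        }

        // Partial take: waits for its turn on the slot, then for the producer's deposit.
        T take() {
            long tkt = takeCursor.getAndIncrement();
            int slot = (int) (tkt & mask);
            while (turns.get(slot) != tkt) Thread.onSpinWait();  // acquire "take" access
            T v;
            while ((v = mailboxes.get(slot)) == null) Thread.onSpinWait(); // wait for producer
            mailboxes.set(slot, null);                           // extract and clear
            turns.set(slot, tkt + mask + 1);                     // release slot to the next generation
            return v;
        }
    }

Thread.onSpinWait() (Java 9 and later) stands in for Pause; on older JVMs a plain busy loop or Thread.yield() will do.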
PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of a per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field. To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry : put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and operations to increment those cursors serve double-duty : slot-selection and ticket assignment for locking the slot's MailBox field. At any given time a slot MailBox field can be in one of the following states: empty with no pending operations -- neutral state; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put() or pending put() and take() operations -- transitional; or occupied with a pending take() or pending put() and take() operations -- transitional. The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access the TakeCursor and a take() operation modifies the TakeCursor cursor but does not access the PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause) but it's trivial to implement per-slot parking and unparking to deschedule waiting threads. 
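For completeness, a tiny usage sketch of the class above: two producers and two consumers pushing 100 integers through an 8-slot queue.

    public class PTLQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            final PTLQueue<Integer> q = new PTLQueue<Integer>(8);
            Runnable producer = () -> { for (int i = 0; i < 50; i++) q.put(i); };
            Runnable consumer = () -> {
                for (int i = 0; i < 50; i++)
                    System.out.println(Thread.currentThread().getName() + " took " + q.take());
            };
            Thread p1 = new Thread(producer), p2 = new Thread(producer);
            Thread c1 = new Thread(consumer, "consumer-1"), c2 = new Thread(consumer, "consumer-2");
            p1.start(); p2.start(); c1.start(); c2.start();
            p1.join(); p2.join(); c1.join(); c2.join();
        }
    }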
It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper. There's quite a bit of related literature in this area. I'll call out a few relevant references:
Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point : Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue or queues effectively equivalent to PTLQueue have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others.
Gottlieb et al : Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors
Orozco et al : CB-Queue in Toward high-throughput algorithms on many-core architectures which appeared in TACO 2012.
Meneghin et al : BNPVB family in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture
Dmitry Vyukov : bounded MPMC queue (highly recommended)
Alex Otenko : US8607249 (highly related).
John Mellor-Crummey : Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester
Thomasson : FIFO Distributed Bakery Algorithm (very similar to PTLQueue).
Scott and Scherer : Dual Data Structures
I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory and otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore, let's say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases). We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices would map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable). As an aside, say we need to busy-wait for some condition as follows : "while C == 0 : Pause". Let's say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into : "if C == 0 : for { Pause; if C != 0 : break; }". 
Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use : "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}". It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff such as MCS or CLH, then you can usually transform that lock into a queue.
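Returning briefly to the busy-wait aside above, a minimal Java rendering of the restructured loop, with one branch controlling entry and a separate branch controlling exit, might look like the following. This is an illustration rather than code from the original post; the volatile read inside AtomicInteger.get() cannot be hoisted out of the loop by the JIT, so it plays the role of the cheap opaque call mentioned above.

    import java.util.concurrent.atomic.AtomicInteger;

    final class SpinUntilNonZero {
        static void await(AtomicInteger c) {
            if (c.get() == 0) {                  // entry branch: usually not taken
                for (;;) {
                    Thread.onSpinWait();         // analogous to Pause
                    if (c.get() != 0) break;     // exit branch
                }
            }
        }
    }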

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices STAGE 4 AUTOMATED DEPLOYMENT If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI, then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way. 
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”

NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

Actions:
- Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
- Repeat for earlier environments (QA and so on).

Rollback and Recovery

If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for.

NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.

There are various options, which we’ll explore in subsequent articles, things like:

- Immediately restore from backup,
- Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!)
- Have fallback environments – for example, using a blue-green deployment pattern.

Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

Actions:
- Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

Development Practices

This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible (a rough sketch of the idea in application code appears at the end of this article). Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily).

There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

Actions:
- For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
- Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.

Summary

The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to.

This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
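To make the “Branch by abstraction” pattern mentioned above a little more concrete, here is a rough C# sketch of the application-side half of the idea. All class and member names are hypothetical, and the in-memory dictionaries only stand in for real data access code: the application talks to an abstraction, the old and new storage implementations sit behind it side by side, and a toggle moves traffic over once the incremental, backward-compatible schema steps are in place.

using System;
using System.Collections.Generic;

// Hypothetical names throughout - a sketch of "branch by abstraction" applied to a
// table split, not code taken from the article or the linked slides.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
}

// The application depends only on this abstraction, never on the table layout itself.
public interface ICustomerStore
{
    Customer GetById(int id);
    void Save(Customer customer);
}

// Step 1: the existing implementation, backed by the original wide Customer table.
// (Dictionaries used here as a stand-in for the real data access code.)
public class LegacyCustomerStore : ICustomerStore
{
    private readonly Dictionary<int, Customer> table = new Dictionary<int, Customer>();
    public Customer GetById(int id) { return table[id]; }
    public void Save(Customer customer) { table[customer.Id] = customer; }
}

// Step 2: the new implementation, backed by the split Customer + CustomerAddress tables.
// During the migration both schemas co-exist and every interim step stays backward compatible.
public class SplitTableCustomerStore : ICustomerStore
{
    private readonly Dictionary<int, Customer> customerTable = new Dictionary<int, Customer>();
    private readonly Dictionary<int, string> addressTable = new Dictionary<int, string>();

    public Customer GetById(int id)
    {
        Customer c = customerTable[id];
        c.Address = addressTable[id];
        return c;
    }

    public void Save(Customer customer)
    {
        customerTable[customer.Id] = customer;
        addressTable[customer.Id] = customer.Address;
    }
}

// Step 3: a toggle (config flag, per-tenant switch, etc.) decides which branch is live, so
// each increment can be deployed - and rolled back - on its own, without losing new data.
public static class CustomerStoreFactory
{
    public static ICustomerStore Create(bool useNewSchema)
    {
        if (useNewSchema)
            return new SplitTableCustomerStore();
        return new LegacyCustomerStore();
    }
}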

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature. ASP.NET Razor View Engine What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like along with some comment annotations: <!DOCTYPE html> <html>     <head>         <title></title>     </head>     <body>     <h1>Razor Test</h1>          <!-- simple expressions -->     @DateTime.Now     <hr />     <!-- method expressions -->     @DateTime.Now.ToString("T")          <!-- code blocks -->     @{         List<string> names = new List<string>();         names.Add("Rick");         names.Add("Markus");         names.Add("Claudio");         names.Add("Kevin");     }          <!-- structured block statements -->     <ul>     @foreach(string name in names){             <li>@name</li>     }     </ul>           <!-- Conditional code -->        @if(true) {                        <!-- Literal Text embedding in code -->        <text>         true        </text>;    }    else    {        <!-- Literal Text embedding in code -->       <text>       false       </text>;    }    </body> </html> Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: -Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content. 
The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>) and also easier to understand aesthetically what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script or meta driven output generators for many types of applications from code/screen generators, to simple form letters to data merging applications with user customizability. For me personally this is very useful side effect and who knows maybe Microsoft will actually standardize they’re scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor will become the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages. WebMatrix Development Environment To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via a custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP Editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with Wizards taking you through installation of tools, dependencies, and configuration of the database as needed. 
WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step. Simplified Database Access The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods: @{       var db = Database.OpenFile("FirstApp.sdf");     string sql = "select * from customers where Id > @0"; } <ul> @foreach(var row in db.Query(sql,1)){         <li>@row.FirstName @row.LastName</li> } </ul> The query function takes a SQL statement plus any number of positional (@0,@1 etc.) SQL parameters by simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns giving easy (though untyped) access to each of the fields. Likewise Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non .NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why was something not included from the beginning in .NET and Microsoft made developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax that uses simple, static, data object methods to perform simple data tasks with one line of code are common in scripting languages and are a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required. The good the bad the obnoxious - It’s still .NET The beauty (or curse depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. 
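As a small illustration of that last point - it really is plain .NET behind the markup - the same customers query from the Database helper example above could be written with ordinary ADO.NET against the SQL Compact file, with the provider assembly simply dropped into bin. This is only a sketch: the connection string and the table/column names mirror the FirstApp.sdf example and should be treated as assumptions.

using System;
using System.Data.SqlServerCe;  // ships with SQL Server Compact 4.0; drop the assembly in bin

public static class PlainAdoNetExample
{
    public static void PrintCustomers()
    {
        // Hypothetical file and schema, matching the article's FirstApp.sdf example
        using (var connection = new SqlCeConnection(@"Data Source=|DataDirectory|\FirstApp.sdf"))
        {
            connection.Open();

            using (var command = new SqlCeCommand(
                "SELECT FirstName, LastName FROM Customers WHERE Id > @id", connection))
            {
                command.Parameters.AddWithValue("@id", 1);

                using (SqlCeDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
                    }
                }
            }
        }
    }
}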
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy-there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference Who needs WebMatrix? Uhm… good Question Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks-and other hardcore PHP developers-back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers is trying to make .NET more accessible but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer the learning curve isn’t made any easier by WebMatrix it’s just been shifted a tad bit further along in your development endeavor when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished. 
And that in the end may be WebMatrix’s main raison d'être: To bring some focus back at Microsoft that simpler and more high level solutions are actually needed to appeal to the non-high end developers as well as providing the necessary tools for the high end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware with required functionality missing especially debugging and Intellisense, but also general editor support. It’s not clear whether these are because the product is still in an early alpha release or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support and WebMatrix just has that left out feeling to it. If anything WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology though seems to be the Razor view engine for its light weight implementation and it’s decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented WebForms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought about the ease of use of .NET as a platform For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come. 
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7  

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Back in one of my three original “Little Wonders” Trilogy of posts, I had listed generic delegates as one of the Little Wonders of .NET.  Later, someone posted a comment saying said that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates.  For this week, I’ll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes. Quick Delegate Recap Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method.  They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate. Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or a lambda expression.  They can also be null as well (refers to no method), so care should be taken to make sure that the delegate is not null before you invoke it. Delegates are defined using the keyword delegate, where the delegate’s type name is placed where you would typically place the method name: 1: // This delegate matches any method that takes string, returns nothing 2: public delegate void Log(string message); This delegate defines a delegate type named Log that can be used to store references to any method(s) that satisfies its signature (whether instance, static, lambda expression, etc.). Delegate instances then can be assigned zero (null) or more methods using the operator = which replaces the existing delegate chain, or by using the operator += which adds a method to the end of a delegate chain: 1: // creates a delegate instance named currentLogger defaulted to Console.WriteLine (static method) 2: Log currentLogger = Console.Out.WriteLine; 3:  4: // invokes the delegate, which writes to the console out 5: currentLogger("Hi Standard Out!"); 6:  7: // append a delegate to Console.Error.WriteLine to go to std error 8: currentLogger += Console.Error.WriteLine; 9:  10: // invokes the delegate chain and writes message to std out and std err 11: currentLogger("Hi Standard Out and Error!"); While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly, for this purpose the generic delegates were introduced in various stages in .NET.  These support various method types with particular signatures. Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contains ref or out parameters. If you want to a delegate to represent methods that takes ref or out parameters, you will need to create a custom delegate. We’ve got the Func… delegates Just like it’s cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic.  Using them keeps us from having to define a new delegate type when need to make a class or algorithm generic. Remember that the point of the Action delegate family was to be able to perform an “action” on an item, with no return results.  
Thus Action delegates can be used to represent most methods that take 0 to 16 arguments but return void.  You can assign a method The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result.  Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing, but Funcs return a result. The Func family of delegates have signatures as follows: Func<TResult> – matches a method that takes no arguments, and returns value of type TResult. Func<T, TResult> – matches a method that takes an argument of type T, and returns value of type TResult. Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2, and returns value of type TResult. Func<T1, T2, …, TResult> – and so on up to 16 arguments, and returns value of type TResult. These are handy because they quickly allow you to be able to specify that a method or class you design will perform a function to produce a result as long as the method you specify meets the signature. For example, let’s say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc…).  To do this, we would ask the user of our class to pass in a method that would take the current total, the next value, and produce a new total.  A class like this could look like: 1: public sealed class Aggregator<TValue, TResult> 2: { 3: // holds method that takes previous result, combines with next value, creates new result 4: private Func<TResult, TValue, TResult> _aggregationMethod; 5:  6: // gets or sets the current result of aggregation 7: public TResult Result { get; private set; } 8:  9: // construct the aggregator given the method to use to aggregate values 10: public Aggregator(Func<TResult, TValue, TResult> aggregationMethod = null) 11: { 12: if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod"); 13:  14: _aggregationMethod = aggregationMethod; 15: } 16:  17: // method to add next value 18: public void Aggregate(TValue nextValue) 19: { 20: // performs the aggregation method function on the current result and next and sets to current result 21: Result = _aggregationMethod(Result, nextValue); 22: } 23: } Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of a max response time for a service). 
We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things): 1: // creates an aggregator that adds the next to the total to sum the values 2: var sumAggregator = new Aggregator<int, int>((total, next) => total + next); 3:  4: // creates an aggregator (using static method) that returns the max of previous result and next 5: var maxAggregator = new Aggregator<int, int>(Math.Max); So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method: 1: // total will be 13 and max 13 2: int responseTime = 13; 3: sumAggregator.Aggregate(responseTime); 4: maxAggregator.Aggregate(responseTime); 5:  6: // total will be 20 and max still 13 7: responseTime = 7; 8: sumAggregator.Aggregate(responseTime); 9: maxAggregator.Aggregate(responseTime); 10:  11: // total will be 40 and max now 20 12: responseTime = 20; 13: sumAggregator.Aggregate(responseTime); 14: maxAggregator.Aggregate(responseTime); The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of the method or user of the class to specify a function to be performed in order to generate a result. What is the result of a Func delegate chain? If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them.  So how does this affect delegates such as Func that return a value, when applied to something like the code below? 1: Func<int, int, int> combo = null; 2:  3: // What if we wanted to aggregate the sum and max together? 4: combo += (total, next) => total + next; 5: combo += Math.Max; 6:  7: // what is the result? 8: var comboAggregator = new Aggregator<int, int>(combo); Well, in .NET if you chain multiple methods in a delegate, they will all get invoked, but the result of the delegate is the result of the last method invoked in the chain.  Thus, this aggregator would always result in the Math.Max() result.  The other chained method (the sum) gets executed first, but it’s result is thrown away: 1: // result is 13 2: int responseTime = 13; 3: comboAggregator.Aggregate(responseTime); 4:  5: // result is still 13 6: responseTime = 7; 7: comboAggregator.Aggregate(responseTime); 8:  9: // result is now 20 10: responseTime = 20; 11: comboAggregator.Aggregate(responseTime); So remember, you can chain multiple Func (or other delegates that return values) together, but if you do so you will only get the last executed result. Func delegates and co-variance/contra-variance in .NET 4.0 Just like the Action delegate, as of .NET 4.0, the Func delegate family is contra-variant on its arguments.  In addition, it is co-variant on its return type.  To support this, in .NET 4.0 the signatures of the Func delegates changed to: Func<out TResult> – matches a method that takes no arguments, and returns value of type TResult (or a more derived type). Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type), and returns value of type TResult(or a more derived type). Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types), and returns value of type TResult (or a more derived type). 
Func<in T1, in T2, …, out TResult> – and so on up to 16 arguments, and returns value of type TResult (or a more derived type). Notice the addition of the in and out keywords before each of the generic type placeholders.  As we saw last week, the in keyword is used to specify that a generic type can be contra-variant -- it can match the given type or a type that is less derived.  However, the out keyword, is used to specify that a generic type can be co-variant -- it can match the given type or a type that is more derived. On contra-variance, if you are saying you need an function that will accept a string, you can just as easily give it an function that accepts an object.  In other words, if you say “give me an function that will process dogs”, I could pass you a method that will process any animal, because all dogs are animals.  On the co-variance side, if you are saying you need a function that returns an object, you can just as easily pass it a function that returns a string because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived.  Once again, in other words, if you say “give me a method that creates an animal”, I can pass you a method that will create a dog, because all dogs are animals. It really all makes sense, you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result.  In other words, pay attention to the direction the item travels (parameters go in, results come out).  Keeping that in mind, you can always pass more specific things in and return more specific things out. For example, in the code below, we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string> because the return type of object can obviously accept a return value of string as well: 1: // since Func<object> is co-variant, this will access Func<string>, etc... 2: public static string Sequence(int count, Func<object> generator) 3: { 4: var builder = new StringBuilder(); 5:  6: for (int i=0; i<count; i++) 7: { 8: object value = generator(); 9: builder.Append(value); 10: } 11:  12: return builder.ToString(); 13: } Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well: 1: // delegate that's typed to return string. 2: Func<string> stringGenerator = () => DateTime.Now.ToString(); 3:  4: // This will work in .NET 4.0, but not in previous versions 5: Sequence(100, stringGenerator); Previous versions of .NET implemented some forms of co-variance and contra-variance before, but .NET 4.0 goes one step further and allows you to pass or assign an Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived (or same) as Y, and BResult is more derived (or same) as ZResult. Sidebar: The Func and the Predicate A method that takes one argument and returns a bool is generally thought of as a predicate.  Predicates are used to examine an item and determine whether that item satisfies a particular condition.  Predicates are typically unary, but you may also have binary and other predicates as well. 
Predicates are often used to filter results, such as in the LINQ Where() extension method: 1: var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 }; 2:  3: // call Where() using a predicate which determines if the number is even 4: var evens = numbers.Where(num => num % 2 == 0); As of .NET 3.5, predicates are typically represented as Func<T, bool> where T is the type of the item to examine.  Previous to .NET 3.5, there was a Predicate<T> type that tended to be used (which we’ll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion with overloads that accept unary predicates and binary predicates, etc.: 1: // this seems more confusing as an overload set, because of Predicate vs Func 2: public static SomeMethod(Predicate<int> unaryPredicate) { } 3: public static SomeMethod(Func<int, int, bool> binaryPredicate) { } 4:  5: // this seems more consistent as an overload set, since just uses Func 6: public static SomeMethod(Func<int, bool> unaryPredicate) { } 7: public static SomeMethod(Func<int, int, bool> binaryPredicate) { } Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types!  Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa: 1: // the same method, lambda expression, etc can be assigned to both 2: Predicate<int> isEven = i => (i % 2) == 0; 3: Func<int, bool> alsoIsEven = i => (i % 2) == 0; 4:  5: // but the delegate instances cannot be directly assigned, strongly typed! 6: // ERROR: cannot convert type... 7: isEven = alsoIsEven; 8:  9: // however, you can assign by wrapping in a new instance: 10: isEven = new Predicate<int>(alsoIsEven); 11: alsoIsEven = new Func<int, bool>(isEven); So, the general advice that seems to come from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above. Sidebar: Func as a Generator for Unit Testing One area of difficulty in unit testing can be unit testing code that is based on time of day.  We’d still want to unit test our code to make sure the logic is accurate, but we don’t want the results of our unit tests to be dependent on the time they are run. One way (of many) around this is to create an internal generator that will produce the “current” time of day.  This would default to returning result from DateTime.Now (or some other method), but we could inject specific times for our unit testing.  Generators are typically methods that return (generate) a value for use in a class/method. For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if the age is more than 30 seconds.  
Such a class could look like: 1: // responsible for maintaining an item of type T in the cache 2: public sealed class CacheItem<T> 3: { 4: // helper method that returns the current time 5: private static Func<DateTime> _timeGenerator = () => DateTime.Now; 6:  7: // allows internal access to the time generator 8: internal static Func<DateTime> TimeGenerator 9: { 10: get { return _timeGenerator; } 11: set { _timeGenerator = value; } 12: } 13:  14: // time the item was cached 15: public DateTime CachedTime { get; private set; } 16:  17: // the item cached 18: public T Value { get; private set; } 19:  20: // item is expired if older than 30 seconds 21: public bool IsExpired 22: { 23: get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); } 24: } 25:  26: // creates the new cached item, setting cached time to "current" time 27: public CacheItem(T value) 28: { 29: Value = value; 30: CachedTime = _timeGenerator(); 31: } 32: } Then, we can use this construct to unit test our CacheItem<T> without any time dependencies: 1: var baseTime = DateTime.Now; 2:  3: // start with current time stored above (so doesn't drift) 4: CacheItem<int>.TimeGenerator = () => baseTime; 5:  6: var target = new CacheItem<int>(13); 7:  8: // now add 15 seconds, should still be non-expired 9: CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15); 10:  11: Assert.IsFalse(target.IsExpired); 12:  13: // now add 31 seconds, should now be expired 14: CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31); 15:  16: Assert.IsTrue(target.IsExpired); Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc.  Func delegates can be a handy tool for this type of value generation to support more testable code.  Summary Generic delegates give us a lot of power to make truly generic algorithms and classes.  The Func family of delegates is a great way to be able to specify functions to calculate a result based on 0-16 arguments.  Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!   Tweet Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates

    Read the article

  • Ball bouncing at a certain angle and efficiency computations

    - by X Y
    I would like to make a pong game with a small twist (for now). Every time the ball bounces off one of the paddles i want it to be under a certain angle (between a min and a max). I simply can't wrap my head around how to actually do it (i have some thoughts and such but i simply cannot implement them properly - i feel i'm overcomplicating things). Here's an image with a small explanation . One other problem would be that the conditions for bouncing have to be different for every edge. For example, in the picture, on the two small horizontal edges i do not want a perfectly vertical bounce when in the middle of the edge but rather a constant angle (pi/4 maybe) in either direction depending on the collision point (before the middle of the edge, or after). All of my collisions are done with the Separating Axes Theorem (and seem to work fine). I'm looking for something efficient because i want to add a lot of things later on (maybe polygons with many edges and such). So i need to keep to a minimum the amount of checking done every frame. The collision algorithm begins testing whenever the bounding boxes of the paddle and the ball intersect. Is there something better to test for possible collisions every frame? (more efficient in the long run,with many more objects etc, not necessarily easy to code). I'm going to post the code for my game: Paddle Class public class Paddle : Microsoft.Xna.Framework.DrawableGameComponent { #region Private Members private SpriteBatch spriteBatch; private ContentManager contentManager; private bool keybEnabled; private bool isLeftPaddle; private Texture2D paddleSprite; private Vector2 paddlePosition; private float paddleSpeedY; private Vector2 paddleScale = new Vector2(1f, 1f); private const float DEFAULT_Y_SPEED = 150; private Vector2[] Normals2Edges; private Vector2[] Vertices = new Vector2[4]; private List<Vector2> lst = new List<Vector2>(); private Vector2 Edge; #endregion #region Properties public float Speed { get {return paddleSpeedY; } set { paddleSpeedY = value; } } public Vector2[] Normal2EdgesVector { get { NormalsToEdges(this.isLeftPaddle); return Normals2Edges; } } public Vector2[] VertexVector { get { return Vertices; } } public Vector2 Scale { get { return paddleScale; } set { paddleScale = value; NormalsToEdges(this.isLeftPaddle); } } public float X { get { return paddlePosition.X; } set { paddlePosition.X = value; } } public float Y { get { return paddlePosition.Y; } set { paddlePosition.Y = value; } } public float Width { get { return (Scale.X == 1f ? (float)paddleSprite.Width : paddleSprite.Width * Scale.X); } } public float Height { get { return ( Scale.Y==1f ? (float)paddleSprite.Height : paddleSprite.Height*Scale.Y ); } } public Texture2D GetSprite { get { return paddleSprite; } } public Rectangle Boundary { get { return new Rectangle((int)paddlePosition.X, (int)paddlePosition.Y, (int)this.Width, (int)this.Height); } } public bool KeyboardEnabled { get { return keybEnabled; } } #endregion private void NormalsToEdges(bool isLeftPaddle) { Normals2Edges = null; Edge = Vector2.Zero; lst.Clear(); for (int i = 0; i < Vertices.Length; i++) { Edge = Vertices[i + 1 == Vertices.Length ? 0 : i + 1] - Vertices[i]; if (Edge != Vector2.Zero) { Edge.Normalize(); //outer normal to edge !! 
(origin in top-left) lst.Add(new Vector2(Edge.Y, -Edge.X)); } } Normals2Edges = lst.ToArray(); } public float[] ProjectPaddle(Vector2 axis) { if (Vertices.Length == 0 || axis == Vector2.Zero) return (new float[2] { 0, 0 }); float min, max; min = Vector2.Dot(axis, Vertices[0]); max = min; for (int i = 1; i < Vertices.Length; i++) { float p = Vector2.Dot(axis, Vertices[i]); if (p < min) min = p; else if (p > max) max = p; } return (new float[2] { min, max }); } public Paddle(Game game, bool isLeftPaddle, bool enableKeyboard = true) : base(game) { contentManager = new ContentManager(game.Services); keybEnabled = enableKeyboard; this.isLeftPaddle = isLeftPaddle; } public void setPosition(Vector2 newPos) { X = newPos.X; Y = newPos.Y; } public override void Initialize() { base.Initialize(); this.Speed = DEFAULT_Y_SPEED; X = 0; Y = 0; NormalsToEdges(this.isLeftPaddle); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); paddleSprite = contentManager.Load<Texture2D>(@"Content\pongBar"); } public override void Update(GameTime gameTime) { //vertices array Vertices[0] = this.paddlePosition; Vertices[1] = this.paddlePosition + new Vector2(this.Width, 0); Vertices[2] = this.paddlePosition + new Vector2(this.Width, this.Height); Vertices[3] = this.paddlePosition + new Vector2(0, this.Height); // Move paddle, but don't allow movement off the screen if (KeyboardEnabled) { float moveDistance = Speed * (float)gameTime.ElapsedGameTime.TotalSeconds; KeyboardState newKeyState = Keyboard.GetState(); if (newKeyState.IsKeyDown(Keys.Down) && Y + paddleSprite.Height + moveDistance <= Game.GraphicsDevice.Viewport.Height) { Y += moveDistance; } else if (newKeyState.IsKeyDown(Keys.Up) && Y - moveDistance >= 0) { Y -= moveDistance; } } else { if (this.Y + this.Height > this.GraphicsDevice.Viewport.Height) { this.Y = this.Game.GraphicsDevice.Viewport.Height - this.Height - 1; } } base.Update(gameTime); } public override void Draw(GameTime gameTime) { spriteBatch.Begin(SpriteSortMode.Texture,null); spriteBatch.Draw(paddleSprite, paddlePosition, null, Color.White, 0f, Vector2.Zero, Scale, SpriteEffects.None, 0); spriteBatch.End(); base.Draw(gameTime); } } Ball Class public class Ball : Microsoft.Xna.Framework.DrawableGameComponent { #region Private Members private SpriteBatch spriteBatch; private ContentManager contentManager; private const float DEFAULT_SPEED = 50; private float speedIncrement = 0; private Vector2 ballScale = new Vector2(1f, 1f); private const float INCREASE_SPEED = 50; private Texture2D ballSprite; //initial texture private Vector2 ballPosition; //position private Vector2 centerOfBall; //center coords private Vector2 ballSpeed = new Vector2(DEFAULT_SPEED, DEFAULT_SPEED); //speed #endregion #region Properties public float DEFAULTSPEED { get { return DEFAULT_SPEED; } } public Vector2 ballCenter { get { return centerOfBall; } } public Vector2 Scale { get { return ballScale; } set { ballScale = value; } } public float SpeedX { get { return ballSpeed.X; } set { ballSpeed.X = value; } } public float SpeedY { get { return ballSpeed.Y; } set { ballSpeed.Y = value; } } public float X { get { return ballPosition.X; } set { ballPosition.X = value; } } public float Y { get { return ballPosition.Y; } set { ballPosition.Y = value; } } public Texture2D GetSprite { get { return ballSprite; } } public float Width { get { return (Scale.X == 1f ? (float)ballSprite.Width : ballSprite.Width * Scale.X); } } public float Height { get { return (Scale.Y == 1f ? 
(float)ballSprite.Height : ballSprite.Height * Scale.Y); } } public float SpeedIncreaseIncrement { get { return speedIncrement; } set { speedIncrement = value; } } public Rectangle Boundary { get { return new Rectangle((int)ballPosition.X, (int)ballPosition.Y, (int)this.Width, (int)this.Height); } } #endregion public Ball(Game game) : base(game) { contentManager = new ContentManager(game.Services); } public void Reset() { ballSpeed.X = DEFAULT_SPEED; ballSpeed.Y = DEFAULT_SPEED; ballPosition.X = Game.GraphicsDevice.Viewport.Width / 2 - ballSprite.Width / 2; ballPosition.Y = Game.GraphicsDevice.Viewport.Height / 2 - ballSprite.Height / 2; } public void SpeedUp() { if (ballSpeed.Y < 0) ballSpeed.Y -= (INCREASE_SPEED + speedIncrement); else ballSpeed.Y += (INCREASE_SPEED + speedIncrement); if (ballSpeed.X < 0) ballSpeed.X -= (INCREASE_SPEED + speedIncrement); else ballSpeed.X += (INCREASE_SPEED + speedIncrement); } public float[] ProjectBall(Vector2 axis) { if (axis == Vector2.Zero) return (new float[2] { 0, 0 }); float min, max; min = Vector2.Dot(axis, this.ballCenter) - this.Width/2; //center - radius max = min + this.Width; //center + radius return (new float[2] { min, max }); } public void ChangeHorzDirection() { ballSpeed.X *= -1; } public void ChangeVertDirection() { ballSpeed.Y *= -1; } public override void Initialize() { base.Initialize(); ballPosition.X = Game.GraphicsDevice.Viewport.Width / 2 - ballSprite.Width / 2; ballPosition.Y = Game.GraphicsDevice.Viewport.Height / 2 - ballSprite.Height / 2; } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); ballSprite = contentManager.Load<Texture2D>(@"Content\ball"); } public override void Update(GameTime gameTime) { if (this.Y < 1 || this.Y > GraphicsDevice.Viewport.Height - this.Height - 1) this.ChangeVertDirection(); centerOfBall = new Vector2(ballPosition.X + this.Width / 2, ballPosition.Y + this.Height / 2); base.Update(gameTime); } public override void Draw(GameTime gameTime) { spriteBatch.Begin(); spriteBatch.Draw(ballSprite, ballPosition, null, Color.White, 0f, Vector2.Zero, Scale, SpriteEffects.None, 0); spriteBatch.End(); base.Draw(gameTime); } } Main game class public class gameStart : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; public gameStart() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; this.Window.Title = "Pong game"; } protected override void Initialize() { ball = new Ball(this); paddleLeft = new Paddle(this,true,false); paddleRight = new Paddle(this,false,true); Components.Add(ball); Components.Add(paddleLeft); Components.Add(paddleRight); this.Window.AllowUserResizing = false; this.IsMouseVisible = true; this.IsFixedTimeStep = false; this.isColliding = false; base.Initialize(); } #region MyPrivateStuff private Ball ball; private Paddle paddleLeft, paddleRight; private int[] bit = { -1, 1 }; private Random rnd = new Random(); private int updates = 0; enum nrPaddle { None, Left, Right }; private nrPaddle PongBar = nrPaddle.None; private ArrayList Axes = new ArrayList(); private Vector2 MTV; //minimum translation vector private bool isColliding; private float overlap; //smallest distance after projections private Vector2 overlapAxis; //axis of overlap #endregion protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); paddleLeft.setPosition(new Vector2(0, this.GraphicsDevice.Viewport.Height / 2 - paddleLeft.Height / 2)); paddleRight.setPosition(new 
Vector2(this.GraphicsDevice.Viewport.Width - paddleRight.Width, this.GraphicsDevice.Viewport.Height / 2 - paddleRight.Height / 2)); paddleLeft.Scale = new Vector2(1f, 2f); //scale left paddle } private bool ShapesIntersect(Paddle paddle, Ball ball) { overlap = 1000000f; //large value overlapAxis = Vector2.Zero; MTV = Vector2.Zero; foreach (Vector2 ax in Axes) { float[] pad = paddle.ProjectPaddle(ax); //pad0 = min, pad1 = max float[] circle = ball.ProjectBall(ax); //circle0 = min, circle1 = max if (pad[1] <= circle[0] || circle[1] <= pad[0]) { return false; } if (pad[1] - circle[0] < circle[1] - pad[0]) { if (Math.Abs(overlap) > Math.Abs(-pad[1] + circle[0])) { overlap = -pad[1] + circle[0]; overlapAxis = ax; } } else { if (Math.Abs(overlap) > Math.Abs(circle[1] - pad[0])) { overlap = circle[1] - pad[0]; overlapAxis = ax; } } } if (overlapAxis != Vector2.Zero) { MTV = overlapAxis * overlap; } return true; } protected override void Update(GameTime gameTime) { updates += 1; float ftime = 5 * (float)gameTime.ElapsedGameTime.TotalSeconds; if (updates == 1) { isColliding = false; int Xrnd = bit[Convert.ToInt32(rnd.Next(0, 2))]; int Yrnd = bit[Convert.ToInt32(rnd.Next(0, 2))]; ball.SpeedX = Xrnd * ball.SpeedX; ball.SpeedY = Yrnd * ball.SpeedY; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; } else { updates = 100; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; } //autorun :) paddleLeft.Y = ball.Y; //collision detection PongBar = nrPaddle.None; if (ball.Boundary.Intersects(paddleLeft.Boundary)) { PongBar = nrPaddle.Left; if (!isColliding) { Axes.Clear(); Axes.AddRange(paddleLeft.Normal2EdgesVector); //axis from nearest vertex to ball's center Axes.Add(FORMULAS.NormAxisFromCircle2ClosestVertex(paddleLeft.VertexVector, ball.ballCenter)); } } else if (ball.Boundary.Intersects(paddleRight.Boundary)) { PongBar = nrPaddle.Right; if (!isColliding) { Axes.Clear(); Axes.AddRange(paddleRight.Normal2EdgesVector); //axis from nearest vertex to ball's center Axes.Add(FORMULAS.NormAxisFromCircle2ClosestVertex(paddleRight.VertexVector, ball.ballCenter)); } } if (PongBar != nrPaddle.None && !isColliding) switch (PongBar) { case nrPaddle.Left: if (ShapesIntersect(paddleLeft, ball)) { isColliding = true; if (MTV != Vector2.Zero) ball.X += MTV.X; ball.Y += MTV.Y; ball.ChangeHorzDirection(); } break; case nrPaddle.Right: if (ShapesIntersect(paddleRight, ball)) { isColliding = true; if (MTV != Vector2.Zero) ball.X += MTV.X; ball.Y += MTV.Y; ball.ChangeHorzDirection(); } break; default: break; } if (!ShapesIntersect(paddleRight, ball) && !ShapesIntersect(paddleLeft, ball)) isColliding = false; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; //check ball movement if (ball.X > paddleRight.X + paddleRight.Width + 2) { //IncreaseScore(Left); ball.Reset(); updates = 0; return; } else if (ball.X < paddleLeft.X - 2) { //IncreaseScore(Right); ball.Reset(); updates = 0; return; } base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Aquamarine); spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend); spriteBatch.End(); base.Draw(gameTime); } } And one method i've used: public static Vector2 NormAxisFromCircle2ClosestVertex(Vector2[] vertices, Vector2 circle) { Vector2 temp = Vector2.Zero; if (vertices.Length > 0) { float dist = (circle.X - vertices[0].X) * (circle.X - vertices[0].X) + (circle.Y - vertices[0].Y) * (circle.Y - vertices[0].Y); for (int i = 1; i < vertices.Length;i++) { if (dist > (circle.X - vertices[i].X) * 
(circle.X - vertices[i].X) + (circle.Y - vertices[i].Y) * (circle.Y - vertices[i].Y)) { temp = vertices[i]; //memorize the closest vertex dist = (circle.X - vertices[i].X) * (circle.X - vertices[i].X) + (circle.Y - vertices[i].Y) * (circle.Y - vertices[i].Y); } } temp = circle - temp; temp.Normalize(); } return temp; } Thanks in advance for any tips on the 4 issues. EDIT1: Something isn't working properly. The collision axis doesn't come out right and the interpolation also seems to have no effect. I've changed the code a bit: private bool ShapesIntersect(Paddle paddle, Ball ball) { overlap = 1000000f; //large value overlapAxis = Vector2.Zero; MTV = Vector2.Zero; foreach (Vector2 ax in Axes) { float[] pad = paddle.ProjectPaddle(ax); //pad0 = min, pad1 = max float[] circle = ball.ProjectBall(ax); //circle0 = min, circle1 = max if (pad[1] < circle[0] || circle[1] < pad[0]) { return false; } if (Math.Abs(pad[1] - circle[0]) < Math.Abs(circle[1] - pad[0])) { if (Math.Abs(overlap) > Math.Abs(-pad[1] + circle[0])) { overlap = -pad[1] + circle[0]; overlapAxis = ax * (-1); } //to get the proper axis } else { if (Math.Abs(overlap) > Math.Abs(circle[1] - pad[0])) { overlap = circle[1] - pad[0]; overlapAxis = ax; } } } if (overlapAxis != Vector2.Zero) { MTV = overlapAxis * Math.Abs(overlap); } return true; } And part of the Update method: if (ShapesIntersect(paddleRight, ball)) { isColliding = true; if (MTV != Vector2.Zero) { ball.X += MTV.X; ball.Y += MTV.Y; } //test if (overlapAxis.X == 0) //collision with horizontal edge { } else if (overlapAxis.Y == 0) //collision with vertical edge { float factor = Math.Abs(ball.ballCenter.Y - paddleRight.Y) / paddleRight.Height; if (factor > 1) factor = 1f; if (overlapAxis.X < 0) //left edge? ball.Speed = ball.DEFAULTSPEED * Vector2.Normalize(Vector2.Reflect(ball.Speed, (Vector2.Lerp(new Vector2(-1, -3), new Vector2(-1, 3), factor)))); else //right edge? ball.Speed = ball.DEFAULTSPEED * Vector2.Normalize(Vector2.Reflect(ball.Speed, (Vector2.Lerp(new Vector2(1, -3), new Vector2(1, 3), factor)))); } else //vertex collision??? { ball.Speed = -ball.Speed; } } What seems to happen is that "overlapAxis" doesn't always return the right one. So instead of (-1,0) i get the (1,0) (this happened even before i multiplied with -1 there). Sometimes there isn't even a collision registered even though the ball passes through the paddle... The interpolation also seems to have no effect as the angles barely change (or the overlapAxis is almost never (-1,0) or (1,0) but something like (0.9783473, 0.02743843)... ). What am i missing here? :(
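One way to get the constrained bounce angle described at the top of the question is to stop reflecting the incoming velocity altogether and instead derive the outgoing direction purely from where the ball strikes the paddle. The following is only a sketch built on the same XNA types the question uses; the 60-degree limit, the method name and its parameters are assumptions for illustration.

// Hedged sketch: build the outgoing velocity from the contact position alone, clamped to a
// fixed angular range, instead of reflecting the old velocity off a (possibly wrong) axis.
using System;
using Microsoft.Xna.Framework;

public static class PaddleBounce
{
    // Measured from the horizontal; 60 degrees stops the ball leaving the paddle near-vertically.
    private const float MaxAngleDegrees = 60f;

    // ballCenterY, paddleCenterY and paddleHeight come straight from the Ball/Paddle objects;
    // speed is the scalar speed to keep (e.g. ball.DEFAULTSPEED); movingRight should be true
    // when the ball bounces off the left paddle (so it heads back to the right).
    public static Vector2 BounceOffPaddle(float ballCenterY, float paddleCenterY,
                                          float paddleHeight, float speed, bool movingRight)
    {
        // -1 at the top edge of the paddle, 0 at the middle, +1 at the bottom edge.
        float offset = MathHelper.Clamp((ballCenterY - paddleCenterY) / (paddleHeight / 2f), -1f, 1f);

        // Map the offset linearly onto [-MaxAngle, +MaxAngle].
        float angle = MathHelper.ToRadians(offset * MaxAngleDegrees);

        float directionX = movingRight ? 1f : -1f;
        return new Vector2(directionX * (float)Math.Cos(angle) * speed,
                           (float)Math.Sin(angle) * speed);
    }
}

In Update, once ShapesIntersect reports a hit on a vertical paddle edge, the result can be assigned back to the ball's velocity (the SpeedX/SpeedY pair, or the Vector2 Speed property used in EDIT1) in place of the Reflect/Lerp step, while the horizontal edges keep the fixed pi/4 treatment described in the question. Because the angle never comes from the overlap axis, a slightly wrong axis from the SAT test no longer changes the bounce direction.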

    Read the article

< Previous Page | 178 179 180 181 182 183 184  | Next Page >