Search Results

Search found 16143 results on 646 pages for 'ms word 2003'.


  • Webkit browser jQuery transformations/transitions not working with jSplitSlider

    - by user3689793
    I am helping to build a site and I'm having an issue with the functionality of an add-in called jSplitSlider when running it in Chrome. When I navigate between the slides, the divs get stuck on top of each other and never clear the webkit transformations/animations:

        <div class="sl-content-slice" style="transition: all 800ms ease-in-out; -webkit-transition: all 800ms ease-in-out;">

    I think the problem is due to the timing of the functions, but I can't seem to figure out where I would need to add a setTimeout(). I only suspect this because I have exhausted a lot of the other options (display: inline-block, a no-transitions CSS class, etc.). I'm desperate to figure out how to make this work in Chrome; it works in FF and, surprisingly enough, IE. I'm not great at web coding, so any help will be appreciated! The code on the site isn't minified. Here is the jQuery where I think the problem lies:

        var cssStyle = config.orientation === 'horizontal' ?
                { marginTop : -this.size.height / 2 } :
                { marginLeft : -this.size.width / 2 },
            // default slide's slices style
            resetStyle = {
                'transform' : 'translate(0%,0%) rotate(0deg) scale(1)',
                opacity : 1
            },
            // slice1 style
            slice1Style = config.orientation === 'horizontal' ?
                { 'transform' : 'translateY(-' + this.options.translateFactor + '%) rotate(' + config.slice1angle + 'deg) scale(' + config.slice1scale + ')' } :
                { 'transform' : 'translateX(-' + this.options.translateFactor + '%) rotate(' + config.slice1angle + 'deg) scale(' + config.slice1scale + ')' },
            // slice2 style
            slice2Style = config.orientation === 'horizontal' ?
                { 'transform' : 'translateY(' + this.options.translateFactor + '%) rotate(' + config.slice2angle + 'deg) scale(' + config.slice2scale + ')' } :
                { 'transform' : 'translateX(' + this.options.translateFactor + '%) rotate(' + config.slice2angle + 'deg) scale(' + config.slice2scale + ')' };

        if( this.options.optOpacity ) {
            slice1Style.opacity = 0;
            slice2Style.opacity = 0;
        }

        // we are adding the classes sl-trans-elems and sl-trans-back-elems to the slide
        // that is either coming "next" or going "prev" according to the direction.
        // the idea is to make it more interesting by giving some animations to the
        // respective slide's elements
        //( dir === 'next' ) ?
        $nextSlide.addClass( 'sl-trans-elems' ) : $currentSlide.addClass( 'sl-trans-back-elems' );
        $currentSlide.removeClass( 'sl-trans-elems' );

        var transitionProp = {
            'transition' : 'all ' + this.options.speed + 'ms ease-in-out'
        };

        // add the 2 slices and animate them
        $movingSlide.css( 'z-index', this.slidesCount )
            .find( 'div.sl-content-wrapper' )
            .wrap( $( '<div class="sl-content-slice" />' ).css( transitionProp ) )
            .parent()
            .cond( dir === 'prev', function() {
                var slice = this;
                this.css( slice1Style );
                setTimeout( function() {
                    slice.css( resetStyle );
                }, 150 );
            }, function() {
                var slice = this;
                setTimeout( function() {
                    slice.css( slice1Style );
                }, 150 );
            } )
            .clone()
            .appendTo( $movingSlide )
            .cond( dir === 'prev', function() {
                var slice = this;
                this.css( slice2Style );
                setTimeout( function() {
                    $currentSlide.addClass( 'sl-trans-back-elems' );
                    if( self.support ) {
                        slice.css( resetStyle ).on( self.transEndEventName, function() {
                            self._onEndNavigate( slice, $currentSlide, dir );
                        } );
                    }
                    else {
                        self._onEndNavigate( slice, $currentSlide, dir );
                    }
                }, 150 );
            }, function() {
                var slice = this;
                setTimeout( function() {
                    $nextSlide.addClass( 'sl-trans-elems' );
                    if( self.support ) {
                        slice.css( slice2Style ).on( self.transEndEventName, function() {
                            self._onEndNavigate( slice, $currentSlide, dir );
                        } );
                    }
                    else {
                        self._onEndNavigate( slice, $currentSlide, dir );
                    }
                }, 150 );
            } )
            .find( 'div.sl-content-wrapper' )
            .css( cssStyle );

        $nextSlide.show();
    },
    _validateValues : function( config ) {
        // OK, so we are restricting the angles and scale values here.
        // This is to avoid the slices' wrong sides being shown.
        // You can adjust these values as you wish, but make sure you also adjust the
        // paddings of the slides and also the options.translateFactor value and scale data attrs
        if( config.slice1angle > this.options.maxAngle || config.slice1angle < -this.options.maxAngle ) {
            config.slice1angle = this.options.maxAngle;
        }
        if( config.slice2angle > this.options.maxAngle || config.slice2angle < -this.options.maxAngle ) {
            config.slice2angle = this.options.maxAngle;
        }
        if( config.slice1scale > this.options.maxScale || config.slice1scale <= 0 ) {
            config.slice1scale = this.options.maxScale;
        }
        if( config.slice2scale > this.options.maxScale || config.slice2scale <= 0 ) {
            config.slice2scale = this.options.maxScale;
        }
        if( config.orientation !== 'vertical' && config.orientation !== 'horizontal' ) {
            config.orientation = 'horizontal'
        }
    },
    _onEndNavigate : function( $slice, $oldSlide, dir ) {
        // reset previous slide's style after next slide is shown
        var $slide = $slice.parent(),
            removeClasses = 'sl-trans-elems sl-trans-back-elems';
        // remove second slide's slice
        $slice.remove();
        // unwrap..
        $slide.css( 'z-index', 10 )
            .find( 'div.sl-content-wrapper' )
            .unwrap();
        // hide previous current slide
        $oldSlide.hide().removeClass( removeClasses );
        $slide.removeClass( removeClasses );
        // now we can navigate again..
        this.isAnimating = false;
        this.options.onAfterChange( $slide, this.current );
    },

    Sorry if I missed any conventions when posting; this is my first S.O. post. Thanks in advance for any help.
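
    A well-known workaround for WebKit not animating a freshly applied style is to force a reflow between the start style and the end style, instead of relying only on a 150 ms setTimeout. Below is a minimal sketch against the code above (not part of the plugin itself); `slice`, `slice1Style`, and `resetStyle` are the names from the plugin code:

        // Sketch of a reflow-based alternative to the 150ms setTimeout.
        slice.css( slice1Style );      // apply the start state
        void slice[0].offsetWidth;     // reading offsetWidth forces a synchronous
                                       // reflow, so the browser commits the start
                                       // styles before the next change arrives
        slice.css( resetStyle );       // end state -- WebKit now sees an actual
                                       // style *change* and runs the transition

    The idea is that without the flush, Chrome may coalesce both .css() calls into one style update, in which case no transition ever fires and the slices pile up exactly as described.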


  • Can anyone tell me why my XML writer is not writing attributes?

    - by user1632018
    I am writing a parsing tool to help me clean up a large VC++ project before I make .NET bindings for it. I am using an XML writer to read an XML file and write out each element to a new file. If an element with a certain name is found, it executes some code and writes an output value into the element's value. So far it is almost working, except for one thing: it is not copying the attributes. Can anyone tell me why this is happening?

    Here is a sample of what it is supposed to copy/modify (includes the attributes):

        <?xml version="1.0" encoding="utf-8"?>
        <Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <ItemGroup Label="ProjectConfigurations">
            <ProjectConfiguration Include="Debug|Win32">
              <Configuration>Debug</Configuration>
              <Platform>Win32</Platform>
            </ProjectConfiguration>
            <ProjectConfiguration Include="Release|Win32">
              <Configuration>Release</Configuration>
              <Platform>Win32</Platform>
            </ProjectConfiguration>
          </ItemGroup>
          <PropertyGroup Label="Globals">
            <ProjectGuid>{57900E99-A405-49F4-83B2-0254117D041B}</ProjectGuid>
            <Keyword>Win32Proj</Keyword>
            <RootNamespace>libproj</RootNamespace>
          </PropertyGroup>

    Here is the output I am getting (no attributes):

        <?xml version="1.0" encoding="utf-8"?>
        <Project>
          <ItemGroup>
            <ProjectConfiguration>
              <Configuration>Debug</Configuration>
              <Platform>Win32</Platform>
            </ProjectConfiguration>
            <ProjectConfiguration>
              <Configuration>Release</Configuration>
              <Platform>Win32</Platform>
            </ProjectConfiguration>
          </ItemGroup>
          <PropertyGroup>
            <ProjectGuid>{57900E99-A405-49F4-83B2-0254117D041B}</ProjectGuid>
            <Keyword>Win32Proj</Keyword>
            <RootNamespace>libproj</RootNamespace>

    Here is my code currently. I have tried every way I can come up with to write the attributes.

        string baseDir = (textBox2.Text + "\\" + safeFileName);
        string vcName = Path.GetFileName(textBox1.Text);
        string vcProj = Path.Combine(baseDir, vcName);

        using (XmlReader reader = XmlReader.Create(textBox1.Text))
        {
            XmlWriterSettings settings = new XmlWriterSettings();
            settings.OmitXmlDeclaration = true;
            settings.ConformanceLevel = ConformanceLevel.Fragment;
            settings.Indent = true;
            settings.CloseOutput = false;

            using (XmlWriter writer = XmlWriter.Create(vcProj, settings))
            {
                while (reader.Read())
                {
                    switch (reader.NodeType)
                    {
                        case XmlNodeType.Element:
                            if (reader.Name == "ClInclude")
                            {
                                string include = reader.GetAttribute("Include");
                                string dirPath = Path.GetDirectoryName(textBox1.Text);
                                Directory.SetCurrentDirectory(dirPath);
                                string fullPath = Path.GetFullPath(include);
                                //string dirPath = Path.GetDirectoryName(fullPath);
                                copyFile(fullPath, 3);
                                string filename = Path.GetFileName(fullPath);
                                writer.WriteStartElement(reader.Name);
                                writer.WriteAttributeString("Include", "include/" + filename);
                                writer.WriteEndElement();
                            }
                            else if (reader.Name == "ClCompile" && reader.HasAttributes)
                            {
                                string include = reader.GetAttribute("Include");
                                string dirPath = Path.GetDirectoryName(textBox1.Text);
                                Directory.SetCurrentDirectory(dirPath);
                                string fullPath = Path.GetFullPath(include);
                                copyFile(fullPath, 2);
                                string filename = Path.GetFileName(fullPath);
                                writer.WriteStartElement(reader.Name);
                                writer.WriteAttributeString("Include", "src/" + filename);
                                writer.WriteEndElement();
                            }
                            else
                            {
                                writer.WriteStartElement(reader.Name);
                            }
                            break;
                        case XmlNodeType.Text:
                            writer.WriteString(reader.Value);
                            break;
                        case XmlNodeType.XmlDeclaration:
                        case XmlNodeType.ProcessingInstruction:
                            writer.WriteProcessingInstruction(reader.Name, reader.Value);
                            break;
                        case XmlNodeType.Comment:
                            writer.WriteComment(reader.Value);
                            break;
                        case XmlNodeType.Attribute:
                            writer.WriteAttributes(reader, true);
                            break;
                        case XmlNodeType.EntityReference:
                            writer.WriteEntityRef(reader.Value);
                            break;
                        case XmlNodeType.EndElement:
                            writer.WriteFullEndElement();
                            break;
                    }
                }
            }
        }
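
    One explanation consistent with that output: XmlReader.Read() never stops on attribute nodes, so the XmlNodeType.Attribute case above is dead code; attributes have to be copied while the reader is still positioned on the element. A minimal sketch of a helper that does this (not the original tool's code, just an illustration of the pattern):

        // Sketch: copy one element start tag together with all of its attributes.
        static void CopyElementStart(XmlReader reader, XmlWriter writer)
        {
            bool isEmpty = reader.IsEmptyElement;  // check before consuming attributes
            writer.WriteStartElement(reader.Name);
            writer.WriteAttributes(reader, true);  // copies every attribute (and
                                                   // namespace declaration) of the
                                                   // element the reader sits on
            if (isEmpty)
                writer.WriteEndElement();          // <Elem .../> never produces a
                                                   // matching EndElement event
        }

    In the loop above, the `else` branch that only calls WriteStartElement(reader.Name) is the one producing attribute-less elements, so that is where a WriteAttributes call would belong.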


  • How can we generate more than 60000 bits?

    - by thinthinyu
    We can currently generate about 50,000 bits, but my code cannot generate more than 60,000 bits. Please help me. m_B is a member variable of type CString.

        // LFSR_ECDlg.cpp : implementation file
        //

        #include "stdafx.h"
        #include "myecc.h"
        #include "LFSR_ECDlg.h"
        #include "MyClass.h"

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #undef THIS_FILE
        static char THIS_FILE[] = __FILE__;
        #endif

        extern MyClass mycrv;

        /////////////////////////////////////////////////////////////////////////////
        // LFSR_ECDlg dialog

        LFSR_ECDlg::LFSR_ECDlg(CWnd* pParent /*=NULL*/)
            : CDialog(LFSR_ECDlg::IDD, pParent)
        {
            //{{AFX_DATA_INIT(LFSR_ECDlg)
            m_C1 = 0;
            m_C2 = 0;
            m_B = _T("");
            m_p = _T("");
            m_Qty = 0;
            m_time = _T("");
            //}}AFX_DATA_INIT
        }

        void LFSR_ECDlg::DoDataExchange(CDataExchange* pDX)
        {
            CDialog::DoDataExchange(pDX);
            //{{AFX_DATA_MAP(LFSR_ECDlg)
            DDX_Text(pDX, IDC_C1, m_C1);
            DDX_Text(pDX, IDC_C2, m_C2);
            DDX_Text(pDX, IDC_Sequence, m_B);
            DDX_Text(pDX, IDC_Sequence2, m_p);
            DDX_Text(pDX, IDC_QTY, m_Qty);
            DDV_MinMaxLong(pDX, m_Qty, 0, 2147483647);
            DDX_Text(pDX, IDC_time, m_time);
            //}}AFX_DATA_MAP
        }

        BEGIN_MESSAGE_MAP(LFSR_ECDlg, CDialog)
            //{{AFX_MSG_MAP(LFSR_ECDlg)
            ON_WM_SETCURSOR()
            ON_EN_CHANGE(IDC_Sequence, OnGeneratorLFSR)
            ON_MESSAGE(WM_MYPAINTMESSAGE,PaintMyCaption)//by ttyu
            ON_BN_CLICKED(IDC_save, Onsave)
            //}}AFX_MSG_MAP
        END_MESSAGE_MAP()

        /////////////////////////////////////////////////////////////////////////////
        // LFSR_ECDlg message handlers

        bool LFSR_ECDlg::CheckDataEntry()
        {
            //if((m_Px>=mycrv.p)|(m_Py>=mycrv.p)) {AfxMessageBox("Seed [P] is invalid!");return false;}//by ttyu
            if((m_C1<=0) | (m_C1>mycrv.n)) {AfxMessageBox("Constant c1 is not valid!");return false;}
            if((m_C2<=0 )| (m_C2>mycrv.n)) {AfxMessageBox("Constant c2 is not valid!");return false;}
            return true;
        }

        void LFSR_ECDlg::OnOK()
        {
            UpdateData(true);
            static int stime,etime,dtime;
            CString txt;
            m_time="";
            CTime t(CTime::GetCurrentTime());
            CString txt1;
            txt1="";
            //ms = t.GetDay();
            // TODO: Add extra validation here
            stime=t.GetTime();
            txt1.Format("%d",stime);
            AfxMessageBox (txt1);
            txt="";
            if (CheckDataEntry())
                OnGeneratorLFSR();
            etime=t.GetTime();
            CString txt2;
            txt2="";
            txt2.Format("%d",etime);
            AfxMessageBox (txt2);
            dtime=etime-stime;
            txt.Format("%f",dtime);
            m_time+=txt;
            // UpdateData(false);
            //rtime.Format("%s, %s %d, %d.",day,month,dd,yy);
            //CDialog::OnOK();
        }

        void LFSR_ECDlg::OnCancel()
        {
            // TODO: Add extra cleanup here
            CDialog::OnCancel();
        }

        void LFSR_ECDlg::OnGeneratorLFSR()
        {
            // TODO: If this is a RICHEDIT control, the control will not
            // send this notification unless you override the CDialog::OnInitDialog()
            // function and call CRichEditCtrl().SetEventMask()
            // with the ENM_CHANGE flag ORed into the mask.

            // TODO: Add your control notification handler code here
            point P0,P1,P2;
            P0 = mycrv.G;
            P1 = mycrv.MulPoint(P0,2);
            int C1=m_C1, C2=m_C2, n=m_Qty, k=0;
            int q= (mycrv.p-1) / 2;
            m_p = "";
            m_B = "";
            CString txt;
            for(int i=0;i<n;i++)
            {
                txt="";
                if(P0==mycrv.O)
                    txt.Format("O");
                else
                    txt.Format("(%d, %d)",P0.x,P0.y);
                m_p +=txt;
                m_p += 13;
                m_p += 10;
                if((P0.y >= 0)&&(P0.y <= q))
                    m_B += "0";
                else if(P0 == mycrv.O)
                    m_B += "0";
                else
                    m_B += "1";
                //m_B += 13;//by ttyu
                //m_B += 10;//by ttyu
                P2 = mycrv.AddPoints(mycrv.MulPoint(P1,C2), mycrv.MulPoint(P0,C1));
                P0 = P1;
                P1 = P2;
            }
        }

        BOOL LFSR_ECDlg::OnInitDialog()
        {
            CDialog::OnInitDialog();
            // TODO: Add extra initialization here
            //code for dlg bar
            CString str="LFSR_EC";
            m_cap.SetCaption (str);
            m_cap.Install (this,WM_MYPAINTMESSAGE);
            //////////////////////////////
            return TRUE;  // return TRUE unless you set the focus to a control
                          // EXCEPTION: OCX Property Pages should return FALSE
        }

        LRESULT LFSR_ECDlg::PaintMyCaption(WPARAM wp, LPARAM lp)
        {
            m_cap.PaintCaption(wp,lp);
            return 0;
        }

        BOOL LFSR_ECDlg::OnSetCursor(CWnd* pWnd, UINT nHitTest, UINT message)
        {
            // TODO: Add your message handler code here and/or call default
            return CDialog::OnSetCursor(pWnd, nHitTest, message);
        }

        void LFSR_ECDlg::Onsave()
        {
            this->UpdateData();
            CFile bitstream;
            char strFilter[] = { "Stream Records (*.mpl)|*.mpl| (*.pis)|*.pis|All Files (*.*)|*.*||" };
            CFileDialog FileDlg(FALSE, ".mpl", NULL, 0, strFilter);
            //insertion//by TTT
            CFile cf_object;
            if( FileDlg.DoModal() == IDOK ){
                cf_object.Open( FileDlg.GetFileName(), CFile::modeCreate|CFile::modeWrite);
                //char szText[100];
                //strcpy(szText, "File Write Test");
                CString txt;
                txt="";
                txt.Format("%s",m_B);//by ANO
                AfxMessageBox (txt);//by ANO
                int mB_size=m_B.GetLength();
                cf_object.Write (m_B,mB_size);
                //insertion end
                /*
                if( FileDlg.DoModal() == IDOK )
                {
                    if( bitstream.Open(FileDlg.GetFileName(), CFile::modeCreate | CFile::modeWrite) == FALSE )
                        return;
                    CArchive ar(&bitstream, CArchive::store);
                    CString txt;
                    txt="";
                    txt.Format("%s",m_B);//by ANO
                    AfxMessageBox (txt);//by ANO
                    //txt=m_B;//by ANO
                    ar <<txt;//by ANO
                    ar.Close();
                }
                else
                    return;
                bitstream.Close();
                */
                // TODO: Add your control notification handler code here
            }
        }
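
    A side note with a sketch (hypothetical, and aimed at the cost of the loop rather than a confirmed cause of the 60,000 cap): growing m_B with += reallocates the CString over and over, which gets very slow for hundreds of thousands of bits. Preallocating the buffer avoids that; the names below follow OnGeneratorLFSR() above, and bit0 stands for the 0/1 decision the loop body already makes from P0:

        // Sketch: reserve the n output characters up front instead of growing
        // m_B one character per iteration.
        LPTSTR buf = m_B.GetBuffer(n);        // reserve n TCHARs
        for (int i = 0; i < n; i++) {
            bool bit0 = (P0.y >= 0 && P0.y <= q) || P0 == mycrv.O;
            buf[i] = bit0 ? _T('0') : _T('1');
            // ... advance P0/P1/P2 exactly as in the original loop ...
        }
        m_B.ReleaseBuffer(n);                 // fix the final string length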


  • Error when calling 'qb.query(db, projection, selection, selectionArgs, null, null, orderBy)'

    - by smalltalk1960s
    Hi all, I made a content provider named 'DictionaryProvider' (based on NotePadProvider). When my program reaches the command 'qb.query(db, projection, selection, selectionArgs, null, null, orderBy);', an error happens. I don't know how to fix it; please help me. Below is my code.

        // file main calling DictionaryProvider
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.dictionary);
            final String[] PROJECTION = new String[] {
                    DicColumns._ID,           // 0
                    DicColumns.KEY_WORD,      // 1
                    DicColumns.KEY_DEFINITION // 2
            };
            Cursor c = managedQuery(DicColumns.CONTENT_URI, PROJECTION, null, null, DicColumns.DEFAULT_SORT_ORDER);
            String str = "";
            if (c.moveToFirst()) {
                int wordColumn = c.getColumnIndex("KEY_WORD");
                int defColumn = c.getColumnIndex("KEY_DEFINITION");
                do {
                    // Get the field values
                    str = "";
                    str += c.getString(wordColumn);
                    str += "\n";
                    str += c.getString(defColumn);
                } while (c.moveToNext());
            }
            Toast.makeText(this, str, Toast.LENGTH_SHORT).show();
        }

        // file DictionaryProvider.java
        package com.example.helloandroid;

        import java.util.HashMap;

        import android.content.ContentProvider;
        import android.content.ContentValues;
        import android.content.UriMatcher;
        import android.database.Cursor;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteQueryBuilder;
        import android.net.Uri;
        import android.text.TextUtils;

        import com.example.helloandroid.Dictionary.DicColumns;

        public class DictionaryProvider extends ContentProvider {

            //private static final String TAG = "DictionaryProvider";
            private DictionaryOpenHelper dbdic;

            static final int DATABASE_VERSION = 1;
            static final String DICTIONARY_DATABASE_NAME = "dictionarydb";
            static final String DICTIONARY_TABLE_NAME = "dictionary";

            private static final UriMatcher sUriMatcher;
            private static HashMap<String, String> sDicProjectionMap;

            @Override
            public int delete(Uri arg0, String arg1, String[] arg2) {
                // TODO Auto-generated method stub
                return 0;
            }

            @Override
            public String getType(Uri arg0) {
                // TODO Auto-generated method stub
                return null;
            }

            @Override
            public Uri insert(Uri arg0, ContentValues arg1) {
                // TODO Auto-generated method stub
                return null;
            }

            @Override
            public boolean onCreate() {
                // TODO Auto-generated method stub
                dbdic = new DictionaryOpenHelper(getContext(), DICTIONARY_DATABASE_NAME, null, DATABASE_VERSION);
                return true;
            }

            @Override
            public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) {
                SQLiteQueryBuilder qb = new SQLiteQueryBuilder();
                qb.setTables(DICTIONARY_TABLE_NAME);
                switch (sUriMatcher.match(uri)) {
                case 1:
                    qb.setProjectionMap(sDicProjectionMap);
                    break;
                case 2:
                    qb.setProjectionMap(sDicProjectionMap);
                    qb.appendWhere(DicColumns._ID + "=" + uri.getPathSegments().get(1));
                    break;
                default:
                    throw new IllegalArgumentException("Unknown URI " + uri);
                }

                // If no sort order is specified use the default
                String orderBy;
                if (TextUtils.isEmpty(sortOrder)) {
                    orderBy = DicColumns.DEFAULT_SORT_ORDER;
                } else {
                    orderBy = sortOrder;
                }

                // Get the database and run the query
                SQLiteDatabase db = dbdic.getReadableDatabase();
                Cursor c = qb.query(db, projection, selection, selectionArgs, null, null, orderBy);

                // Tell the cursor what uri to watch, so it knows when its source data changes
                c.setNotificationUri(getContext().getContentResolver(), uri);
                return c;
            }

            @Override
            public int update(Uri uri, ContentValues values, String selection, String[] selectionArgs) {
                // TODO Auto-generated method stub
                return 0;
            }

            static {
                sUriMatcher = new UriMatcher(UriMatcher.NO_MATCH);
                sUriMatcher.addURI(Dictionary.AUTHORITY, "dictionary", 1);
                sUriMatcher.addURI(Dictionary.AUTHORITY, "dictionary/#", 2);

                sDicProjectionMap = new HashMap<String, String>();
                sDicProjectionMap.put(DicColumns._ID, DicColumns._ID);
                sDicProjectionMap.put(DicColumns.KEY_WORD, DicColumns.KEY_WORD);
                sDicProjectionMap.put(DicColumns.KEY_DEFINITION, DicColumns.KEY_DEFINITION);

                // Support for Live Folders.
                /*sLiveFolderProjectionMap = new HashMap<String, String>();
                sLiveFolderProjectionMap.put(LiveFolders._ID, NoteColumns._ID + " AS " + LiveFolders._ID);
                sLiveFolderProjectionMap.put(LiveFolders.NAME, NoteColumns.TITLE + " AS " + LiveFolders.NAME);*/
                // Add more columns here for more robust Live Folders.
            }
        }

        // file Dictionary.java
        package com.example.helloandroid;

        import android.net.Uri;
        import android.provider.BaseColumns;

        /**
         * Convenience definitions for DictionaryProvider
         */
        public final class Dictionary {

            public static final String AUTHORITY = "com.example.helloandroid.provider.Dictionary";

            // This class cannot be instantiated
            private Dictionary() {}

            /**
             * Dictionary table
             */
            public static final class DicColumns implements BaseColumns {
                // This class cannot be instantiated
                private DicColumns() {}

                /**
                 * The content:// style URL for this table
                 */
                public static final Uri CONTENT_URI = Uri.parse("content://" + AUTHORITY + "/dictionary");

                /**
                 * The MIME type of {@link #CONTENT_URI} providing a directory of notes.
                 */
                //public static final String CONTENT_TYPE = "vnd.android.cursor.dir/vnd.google.note";

                /**
                 * The MIME type of a {@link #CONTENT_URI} sub-directory of a single note.
                 */
                //public static final String CONTENT_ITEM_TYPE = "vnd.android.cursor.item/vnd.google.note";

                /**
                 * The default sort order for this table
                 */
                public static final String DEFAULT_SORT_ORDER = "modified DESC";

                /**
                 * The key_word of the dictionary
                 * <P>Type: TEXT</P>
                 */
                public static final String KEY_WORD = "KEY_WORD";

                /**
                 * The key_definition of word
                 * <P>Type: TEXT</P>
                 */
                public static final String KEY_DEFINITION = "KEY_DEFINITION";
            }
        }

    Thanks so much.
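
    One thing worth ruling out: qb.query() will throw if the "dictionary" table was never created. DictionaryOpenHelper is not shown in the post, but following the NotePadProvider pattern its onCreate() would need something like the sketch below (hypothetical, including the "modified" column that DEFAULT_SORT_ORDER relies on):

        // Hypothetical sketch of the missing DictionaryOpenHelper.onCreate():
        // query() sorts by "modified DESC", so that column must exist too.
        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE " + DICTIONARY_TABLE_NAME + " ("
                    + DicColumns._ID + " INTEGER PRIMARY KEY,"
                    + DicColumns.KEY_WORD + " TEXT,"
                    + DicColumns.KEY_DEFINITION + " TEXT,"
                    + "modified INTEGER"
                    + ");");
        }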


  • .NET 3.5SP1 64-bit memory model vs. 32-bit memory model

    - by James Dunne
    As I understand it, the .NET memory model on a 32-bit machine guarantees that 32-bit word writes and reads are atomic operations, but does not provide this guarantee for 64-bit words. I have written a quick tool to demonstrate this effect on a Windows XP 32-bit OS and am getting results consistent with that memory-model description. However, I have taken this same tool's executable and run it on a Windows 7 Enterprise 64-bit OS, and am getting wildly different results. Both machines have identical specs, just with different OSes installed. I would have expected that the .NET memory model would guarantee writes and reads of BOTH 32-bit and 64-bit words to be atomic on a 64-bit OS. I find results completely contrary to BOTH assumptions: 32-bit reads and writes are not demonstrated to be atomic on this OS. Can someone explain to me why this fails on a 64-bit OS?

    Tool code:

        using System;
        using System.Threading;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var th = new Thread(new ThreadStart(RunThread));
                    var th2 = new Thread(new ThreadStart(RunThread));
                    int lastRecordedInt = 0;
                    long lastRecordedLong = 0L;
                    th.Start();
                    th2.Start();
                    while (!done)
                    {
                        int newIntValue = intValue;
                        long newLongValue = longValue;
                        if (lastRecordedInt > newIntValue)
                            Console.WriteLine("BING(int)! {0} > {1}, {2}", lastRecordedInt, newIntValue, (lastRecordedInt - newIntValue));
                        if (lastRecordedLong > newLongValue)
                            Console.WriteLine("BING(long)! {0} > {1}, {2}", lastRecordedLong, newLongValue, (lastRecordedLong - newLongValue));
                        lastRecordedInt = newIntValue;
                        lastRecordedLong = newLongValue;
                    }
                    th.Join();
                    th2.Join();
                    Console.WriteLine("{0} =? {2}, {1} =? {3}", intValue, longValue, Int32.MaxValue / 2, (long)Int32.MaxValue + (Int32.MaxValue / 2));
                }

                private static long longValue = Int32.MaxValue;
                private static int intValue;
                private static bool done = false;

                static void RunThread()
                {
                    for (int i = 0; i < Int32.MaxValue / 4; ++i)
                    {
                        ++longValue;
                        ++intValue;
                    }
                    done = true;
                }
            }
        }

    Results on Windows XP 32-bit:

        Windows XP 32-bit
        Intel Core2 Duo P8700 @ 2.53GHz
        BING(long)! 2161093208 > 2161092246, 962
        BING(long)! 2162448397 > 2161273312, 1175085
        BING(long)! 2270110050 > 2270109040, 1010
        BING(long)! 2270115061 > 2270110059, 5002
        BING(long)! 2558052223 > 2557528157, 524066
        BING(long)! 2571660540 > 2571659563, 977
        BING(long)! 2646433569 > 2646432557, 1012
        BING(long)! 2660841714 > 2660840732, 982
        BING(long)! 2661795522 > 2660841715, 953807
        BING(long)! 2712855281 > 2712854239, 1042
        BING(long)! 2737627472 > 2735210929, 2416543
        1025780885 =? 1073741823, 3168207035 =? 3221225470

    Notice how BING(int) is never written, which demonstrates that 32-bit reads/writes are atomic on this 32-bit OS.

    Results on Windows 7 Enterprise 64-bit:

        Windows 7 Enterprise 64-bit
        Intel Core2 Duo P8700 @ 2.53GHz
        BING(long)! 2208482159 > 2208121217, 360942
        BING(int)! 280292777 > 279704627, 588150
        BING(int)! 308158865 > 308131694, 27171
        BING(long)! 2549116628 > 2548884894, 231734
        BING(int)! 534815527 > 534708027, 107500
        BING(int)! 545113548 > 544270063, 843485
        BING(long)! 2710030799 > 2709941968, 88831
        BING(int)! 668662394 > 667539649, 1122745
        1006355562 =? 1073741823, 3154727581 =? 3221225470

    Notice that BING(long) AND BING(int) are both displayed! Why are the 32-bit operations failing, let alone the 64-bit ones?
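
    An aside worth keeping in mind while reading the results (not from the original post): ++intValue is a read-modify-write, not a single write, so even with perfectly atomic 32-bit loads and stores a writer thread can read a value, get preempted while the other thread advances the counter, and then store back a stale, smaller value. That lost update is exactly what the BING check detects, independently of tearing. A sketch of the writer loop with the increments made genuinely atomic, so BING could only ever indicate a torn read/write:

        // Sketch: Interlocked turns each increment into one atomic
        // read-modify-write, removing lost updates between the two
        // writer threads.
        static void RunThread()
        {
            for (int i = 0; i < Int32.MaxValue / 4; ++i)
            {
                Interlocked.Increment(ref longValue); // atomic 64-bit increment
                Interlocked.Increment(ref intValue);  // atomic 32-bit increment
            }
            done = true;
        }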


  • Seeking help with Chrome & Safari not rendering my table stretched to fit its contents

    - by oompa_l
    I have an element on this web page I'm developing where I need my text to conform to the width of an image above it, whose width will always be different (think of captions). I have found numerous references to using a 1px table to force this width-sizing behaviour. I am having problems, though, with Safari and Chrome "seeing" this instruction: the text ends up as a marginally sized text box sitting behind the image. The problem, as I see it, has to do with the text and images sitting in divs nested within the table. I need the images to sit in a div because of some jQuery script I'm using called Cycle, which turns a group of images into a slideshow. The problem may have something to do with the script as well. In any case, I have tried a seemingly infinite number of combinations of floating left and clearing left on all the divs, and of changing their positions and widths; nothing works. Anyone have any clues about how to broach this one?

    EDIT 1: OK, should I be editing my post or responding with answers? Here's the URL to see the problem I am having: http://friedmanstudios.ca/webdev/test8.html. And the code:

        <div id="content" class="boxes">
          <table>
            <tr>
              <td>
                <div id="imageFrame">
                  <a href="#" class="img" title="_MG_9786_fmt.jpeg">
                    <img src="images/_MG_9786_fmt.jpeg"/>
                  </a>
                  <a href="#" class="img" title="IMG_5169_fmt.jpeg">
                    <img src="indesign export/GFA-TEARSHEETS-100526-01-web-images/IMG_5169_fmt.jpeg"/>
                  </a>
                  <a href="#" class="img" title="IMG_5175_fmt.jpeg">
                    <img src="indesign export/GFA-TEARSHEETS-100526-01-web-images/IMG_5175_fmt.jpeg"/>
                  </a>
                  <a href="#" class="img" title="aerial_fmt.jpeg" width="">
                    <img src="indesign export/GFA-TEARSHEETS-100526-01-web-images/aerial_fmt.jpeg"/>
                  </a>
                </div>
                <div id="cycleCtrl">
                  <div id="prev" class="pager"><a href="#">< Prev</a></div>
                  <div id="next" class="pager"><a href="#">Next ></a></div>
                  <div id="pagerNav" class="pager"></div>
                </div>
                <div id="descController">
                  <img src="images/arrow.gif" name="arrow" width="5" height="10" id="arrow" />
                  <span id="projectName">Toronto Centre for the Arts </span>
                  <br />
                  <div id="desc">
                    In the past eight years...
                  </div>
                </div>
              </td>
              <td width="90%"><!--push col 1 back--></td>
            </tr>
          </table>

    And the styles (these are id selectors, so each rule below starts with "#"):

        #content {
            position: absolute;
            top: 250px;
            left: 275px;
            float: left;
            clear: both;
        }
        #content table {
            float: left;
            width: 1px;
        }
        #imageFrame {
            position: relative;
            float: left;
            clear: left;
            width: inherit;
        }
        #desc {
            position: relative;
            clear: left;
            float: left;
        }
        #descController {
            position: relative;
            padding-top: 5px;
            padding-bottom: 10px;
            clear: left;
            float: left;
        }
        #descController div {
            height: 0;
            overflow: hidden;
            -webkit-transition: all .5s ease;
            -moz-transition: all .5s ease;
            -o-transition: all .5s ease;
            transition: all .5s ease;
            padding-top: 10px;
            margin-top: 10px;
            word-spacing: 0em;
            line-height: 16px;
            font-size: 12px;
            position: relative;
            float: left;
            clear: left;
        }


  • Why does Mac OS X Software Update not work when machine uses Active Directory?

    - by Lyndsey Ferguson
    My company's IT department is mostly a Windows-run operation, and in order to become more secure they are altering the way the Macintosh computers log in to our internal network, so that they use Active Directory like their Windows counterparts. I have been given Administrative permission on my Mac, and I am able to do most of what I used to be able to do in terms of authentication of software installations. However, there is a problem: the "Software Update" feature doesn't work.

    What happens is that when I try to get the Mac to perform its Software Updates from the Apple menu, the normal window appears listing what has to be updated; I am able to select what to update and click the "Update" button, but then nothing happens. It doesn't ask for authentication like it used to, and the computer doesn't perform any download or installation (it does sometimes ask me to agree to license agreements for iTunes). I can download the updates individually and install them without any issues, but the auto-update fails. I'd rather use the Software Update menu item like I used to: it is much more convenient. Any suggestions on how I can fix this?

    EDIT Nov 19th, 2009, 10:09 EST: I have posted this question to the Apple Mac OS X Snow Leopard support forum.

    EDIT Nov 19th, 2009, 12:39 EST: Yes, the Terminal command "sudo softwareupdate --install --all" does work flawlessly. I want to avoid that, as my co-workers are generally not comfortable on the Mac. I also tried Chealion's suggestion to delete "~/Library/Preferences/com.apple.SoftwareUpdate.plist" and "/Library/Preferences/com.apple.SoftwareUpdate.plist"; Software Update still fails. However, I did get diagnostic messages in the Console (below). I've deleted the MS Office package receipts and examined suhelperd (Software Update Helper Daemon?); it appears that suhelperd is crashing, which explains why it doesn't work. I've submitted a bug report to Apple (radar://7408619). Here are the Console diagnostic messages:

        11/19/09 12:36:44 PM com.apple.suhelperd[66829] terminate called after throwing an instance of 'NSException'
        11/19/09 12:36:47 PM com.apple.launchd[1] (com.apple.suhelperd[66829]) Job appears to have crashed: Abort trap
        11/19/09 12:36:48 PM com.apple.ReportCrash.Root[66830] 2009-11-19 12:36:48.275 ReportCrash[66830:2703] Saved crash report for suhelperd[66829] version ??? (???) to /Library/Logs/DiagnosticReports/suhelperd_2009-11-19-123648_localhost.crash
        11/19/09 12:36:54 PM com.apple.launchd[1] (com.apple.suhelperd) Throttling respawn: Will start in 1 seconds
        11/19/09 12:36:55 PM com.apple.suhelperd[66836] terminate called after throwing an instance of 'NSException'
        11/19/09 12:36:55 PM com.apple.launchd[1] (com.apple.suhelperd[66836]) Job appears to have crashed: Abort trap
        11/19/09 12:36:56 PM com.apple.ReportCrash.Root[66830] 2009-11-19 12:36:56.017 ReportCrash[66830:2f03] Saved crash report for suhelperd[66836] version ??? (???) to /Library/Logs/DiagnosticReports/suhelperd_2009-11-19-123655_localhost.crash
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_automator.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_automator_workflow.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_autoupdate.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_clipart.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_core.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_dock.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_entourage.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_entourage_help_std.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_equationeditor.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_errorreporting.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_excel.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_excel_help_std.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_fonts.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_graph.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_helpviewer.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_launch.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_ooxml.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_orgchart.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_powerpoint.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_powerpoint_help_std.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_brazilian.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_danish.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_dutch.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_english.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_finnish.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_french.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_german.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_italian.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_japanese.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_norwegian.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_portuguese.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_spanish.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_swedish.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_required.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_silverlight.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_sounds.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_word.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_word_help_std.pkg
        11/19/09 12:37:26 PM com.apple.suhelperd[66839] terminate called after throwing an instance of 'NSException'
        11/19/09 12:37:26 PM com.apple.launchd[1] (com.apple.suhelperd[66839]) Job appears to have crashed: Abort trap
        11/19/09 12:37:26 PM com.apple.ReportCrash.Root[66830] 2009-11-19 12:37:26.929 ReportCrash[66830:2b07] Saved crash report for suhelperd[66839] version ??? (???) to /Library/Logs/DiagnosticReports/suhelperd_2009-11-19-123726_localhost.crash

    And here is the suhelperd crash report:

        Process:         suhelperd [66839]
        Path:            /System/Library/PrivateFrameworks/SoftwareUpdate.framework/Versions/A/Resources/suhelperd
        Identifier:      suhelperd
        Version:         ??? (???)
        Code Type:       X86-64 (Native)
        Parent Process:  launchd [1]

        Date/Time:       2009-11-19 12:37:26.473 -0500
        OS Version:      Mac OS X 10.6.2 (10C540)
        Report Version:  6

        Exception Type:  EXC_CRASH (SIGABRT)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Application Specific Information:
        abort() called
        *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSCFArray objectAtIndex:]: index (0) beyond bounds (0)'
        *** Call stack at first throw:
        (
            0   CoreFoundation      0x00007fff859a9444 __exceptionPreprocess + 180
            1   libobjc.A.dylib     0x00007fff8787e0f3 objc_exception_throw + 45
            2   CoreFoundation      0x00007fff859a9267 +[NSException raise:format:arguments:] + 103
            3   CoreFoundation      0x00007fff859a91f4 +[NSException raise:format:] + 148
            4   Foundation          0x00007fff855da080 _NSArrayRaiseBoundException + 122
            5   Foundation          0x00007fff8553cb81 -[NSCFArray objectAtIndex:] + 75
            6   Admin               0x00007fff8107920e +[User(UserPrivate) _userWithInfo:attributes:] + 71
            7   Admin               0x00007fff81080d6b +[User findUserByID:searchParent:] + 404
            8   suhelperd           0x0000000100001274 0x0 + 4294972020
            9   suhelperd           0x0000000100002240 0x0 + 4294976064
            10  suhelperd           0x00000001000053b1 0x0 + 4294988721
            11  suhelperd           0x00000001000044b3 0x0 + 4294984883
            12  suhelperd           0x0000000100004154 0x0 + 4294984020
            13  libSystem.B.dylib   0x00007fff83eb60d8 mach_msg_server + 357
            14  suhelperd           0x00000001000036eb 0x0 + 4294981355
            15  suhelperd           0x0000000100002a1f 0x0 + 4294978079
            16  suhelperd           0x0000000100001080 0x0 + 4294971520
        )

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0   libSystem.B.dylib            0x00007fff83e86fe6 __kill + 10
        1   libSystem.B.dylib            0x00007fff83f27e32 abort + 83
        2   libstdc++.6.dylib            0x00007fff873cf5d2 __tcf_0 + 0
        3   libobjc.A.dylib              0x00007fff87881d29 _objc_terminate + 100
        4   libstdc++.6.dylib            0x00007fff873cdae1 __cxxabiv1::__terminate(void (*)()) + 11
        5   libstdc++.6.dylib            0x00007fff873cdb16 __cxxabiv1::__unexpected(void (*)()) + 0
        6   libstdc++.6.dylib            0x00007fff873cdbfc __gxx_exception_cleanup(_Unwind_Reason_Code, _Unwind_Exception*) + 0
        7   libobjc.A.dylib              0x00007fff8787e192 object_getIvar + 0
        8   com.apple.CoreFoundation     0x00007fff859a9267 +[NSException raise:format:arguments:] + 103
        9   com.apple.CoreFoundation     0x00007fff859a91f4 +[NSException raise:format:] + 148
        10  com.apple.Foundation         0x00007fff855da080 _NSArrayRaiseBoundException + 122
        11  com.apple.Foundation         0x00007fff8553cb81 -[NSCFArray objectAtIndex:] + 75
        12  com.apple.framework.Admin    0x00007fff8107920e +[User(UserPrivate) _userWithInfo:attributes:] + 71
        13  com.apple.framework.Admin    0x00007fff81080d6b +[User findUserByID:searchParent:] + 404
        14  suhelperd                    0x0000000100001274 0x100000000 + 4724
        15  suhelperd                    0x0000000100002240 0x100000000 + 8768
        16  suhelperd                    0x00000001000053b1 0x100000000 + 21425
        17  suhelperd                    0x00000001000044b3 0x100000000 + 17587
        18  suhelperd                    0x0000000100004154 0x100000000 + 16724
        19  libSystem.B.dylib            0x00007fff83eb60d8 mach_msg_server + 357
        20  suhelperd                    0x00000001000036eb 0x100000000 + 14059
        21  suhelperd                    0x0000000100002a1f 0x100000000 + 10783
        22  suhelperd                    0x0000000100001080 0x100000000 + 4224

        Thread 1:  Dispatch queue: com.apple.libdispatch-manager
        0   libSystem.B.dylib   0x00007fff83e51bba kevent + 10
        1   libSystem.B.dylib   0x00007fff83e53a85 _dispatch_mgr_invoke + 154
        2   libSystem.B.dylib   0x00007fff83e5375c _dispatch_queue_invoke + 185
        3   libSystem.B.dylib   0x00007fff83e53286 _dispatch_worker_thread2 + 244
        4   libSystem.B.dylib   0x00007fff83e52bb8 _pthread_wqthread + 353
        5   libSystem.B.dylib   0x00007fff83e52a55 start_wqthread + 13

        Thread 2:
        0   libSystem.B.dylib   0x00007fff83e529da __workq_kernreturn + 10
        1   libSystem.B.dylib   0x00007fff83e52dec _pthread_wqthread + 917
        2   libSystem.B.dylib   0x00007fff83e52a55 start_wqthread + 13

        Thread 0 crashed with X86 Thread State (64-bit):
          rax: 0x0000000000000000  rbx: 0x00007fff707d7298  rcx: 0x00007fff5fbff868  rdx: 0x0000000000000000
          rdi: 0x0000000000010517  rsi: 0x0000000000000006  rbp: 0x00007fff5fbff880  rsp: 0x00007fff5fbff868
           r8: 0x00007fff707da9e0   r9: 0x0000000000000063  r10: 0x00007fff83e83026  r11: 0x0000000000000202
          r12: 0x00007fff85a2dca1  r13: 0x0000000000000000  r14: 0x00007fff70bea228  r15: 0x00007fff5fbffb10
          rip: 0x00007fff83e86fe6  rfl: 0x0000000000000202  cr2: 0x00007fff70e3afd0
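
    A hedged guess from that crash report: suhelperd aborts inside +[User findUserByID:searchParent:], i.e. while resolving a user by numeric ID, which points at the Active Directory integration rather than Software Update itself. Whether the directory can actually resolve the logged-in account's UID can be checked from Terminal (a diagnostic sketch, not a fix):

        # Can the directory service resolve my UID?
        # (suhelperd appears to crash while looking a user up by ID)
        id -u "$USER"
        dscl /Search -search /Users UniqueID "$(id -u "$USER")"

    If the dscl search comes back empty while the AD plugin is in the search path, that would be consistent with the NSRangeException (an empty array where one user record was expected).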


  • socket connection failed, telnet OK

    - by cf16
    My problem is that I can't connect two comps through a socket (Windows XP and Windows 7). Although the server created with socket() is listening and I can telnet to it (it then receives information and does what should be done), when I run the corresponding socket client I get error 10061. Moreover, I am behind a firewall; these two comps are running within my LAN, and the Windows firewalls are turned off.

        comp1:  192.168.1.2  port 12345
        comp2:  192.168.1.6  port 12345
        router: 192.168.1.1

    Maybe port forwarding could help? But most important for me is to answer why sockets fail if telnet works fine.

    client:

        int main(){
            // Initialize Winsock.
            WSADATA wsaData;
            int iResult = WSAStartup(MAKEWORD(2,2), &wsaData);
            if (iResult != NO_ERROR)
                printf("Client: Error at WSAStartup().\n");
            else
                printf("Client: WSAStartup() is OK.\n");

            // Create a socket.
            SOCKET m_socket;
            m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            if (m_socket == INVALID_SOCKET){
                printf("Client: socket() - Error at socket(): %ld\n", WSAGetLastError());
                WSACleanup();
                return 7;
            }else
                printf("Client: socket() is OK.\n");

            // Connect to a server.
            sockaddr_in clientService;
            clientService.sin_family = AF_INET;
            //clientService.sin_addr.s_addr = inet_addr("77.64.240.156");
            clientService.sin_addr.s_addr = inet_addr("192.168.1.5");
            //clientService.sin_addr.s_addr = inet_addr("87.207.222.5");
            clientService.sin_port = htons(12345);
            if (connect(m_socket, (SOCKADDR*)&clientService, sizeof(clientService)) == SOCKET_ERROR){
                printf("Client: connect() - Failed to connect.\n");
                wprintf(L"connect function failed with error: %ld\n", WSAGetLastError());
                iResult = closesocket(m_socket);
                if (iResult == SOCKET_ERROR)
                    wprintf(L"closesocket function failed with error: %ld\n", WSAGetLastError());
                WSACleanup();
                return 6;
            }

            // Send and receive data
            int bytesSent;
            int bytesRecv = SOCKET_ERROR;
            // Be careful with the array bound, provide some checking mechanism
            char sendbuf[200] = "Client: Sending some test string to server...";
            char recvbuf[200] = "";
            bytesSent = send(m_socket, sendbuf, strlen(sendbuf), 0);
            printf("Client: send() - Bytes Sent: %ld\n", bytesSent);
            while(bytesRecv == SOCKET_ERROR){
                bytesRecv = recv(m_socket, recvbuf, 32, 0);
                if (bytesRecv == 0 || bytesRecv == WSAECONNRESET){
                    printf("Client: Connection Closed.\n");
                    break;
                }else
                    printf("Client: recv() is OK.\n");
                if (bytesRecv < 0)
                    return 0;
                else
                    printf("Client: Bytes received - %ld.\n", bytesRecv);
            }
            system("pause");
            return 0;
        }

    server:

        int main(){
            WORD wVersionRequested;
            WSADATA wsaData={0};
            int wsaerr;

            // Using MAKEWORD macro, Winsock version request 2.2
            wVersionRequested = MAKEWORD(2, 2);
            wsaerr = WSAStartup(wVersionRequested, &wsaData);
            if (wsaerr != 0){
                /* Tell the user that we could not find a usable WinSock DLL.*/
                printf("Server: The Winsock dll not found!\n");
                return 0;
            }else{
                printf("Server: The Winsock dll found!\n");
                printf("Server: The status: %s.\n", wsaData.szSystemStatus);
            }

            /* Confirm that the WinSock DLL supports 2.2.        */
            /* Note that if the DLL supports versions greater    */
            /* than 2.2 in addition to 2.2, it will still return */
            /* 2.2 in wVersion since that is the version we      */
            /* requested.                                        */
            if (LOBYTE(wsaData.wVersion) != 2 || HIBYTE(wsaData.wVersion) != 2 ){
                /* Tell the user that we could not find a usable WinSock DLL.*/
                printf("Server: The dll do not support the Winsock version %u.%u!\n", LOBYTE(wsaData.wVersion), HIBYTE(wsaData.wVersion));
                WSACleanup();
                return 0;
            }else{
                printf("Server: The dll supports the Winsock version %u.%u!\n", LOBYTE(wsaData.wVersion), HIBYTE(wsaData.wVersion));
                printf("Server: The highest version this dll can support: %u.%u\n", LOBYTE(wsaData.wHighVersion), HIBYTE(wsaData.wHighVersion));
            }

            //////////Create a socket////////////////////////
            //Create a SOCKET object called m_socket.
            SOCKET m_socket;
            // Call the socket function and return its value to the m_socket variable.
            // For this application, use the Internet address family, streaming sockets, and the TCP/IP protocol.
            // using AF_INET family, TCP socket type and protocol of the AF_INET - IPv4
            m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            // Check for errors to ensure that the socket is a valid socket.
            if (m_socket == INVALID_SOCKET){
                printf("Server: Error at socket(): %ld\n", WSAGetLastError());
                WSACleanup();
                //return 0;
            }else{
                printf("Server: socket() is OK!\n");
            }

            ////////////////bind//////////////////////////////
            // Create a sockaddr_in object and set its values.
            sockaddr_in service;
            // AF_INET is the Internet address family.
            service.sin_family = AF_INET;
            // "127.0.0.1" is the local IP address to which the socket will be bound.
            service.sin_addr.s_addr = htons(INADDR_ANY);//inet_addr("127.0.0.1");//htons(INADDR_ANY); //inet_addr("192.168.1.2");
            // 55555 is the port number to which the socket will be bound.
            // using the htons for big-endian
            service.sin_port = htons(12345);
            // Call the bind function, passing the created socket and the sockaddr_in structure as parameters.
            // Check for general errors.
            if (bind(m_socket, (SOCKADDR*)&service, sizeof(service)) == SOCKET_ERROR){
                printf("Server: bind() failed: %ld.\n", WSAGetLastError());
                closesocket(m_socket);
                //return 0;
            }else{
                printf("Server: bind() is OK!\n");
            }

            // Call the listen function, passing the created socket and the maximum number of allowed
            // connections to accept as parameters. Check for general errors.
            if (listen(m_socket, 1) == SOCKET_ERROR)
                printf("Server: listen(): Error listening on socket %ld.\n", WSAGetLastError());
            else{
                printf("Server: listen() is OK, I'm waiting for connections...\n");
            }

            // Create a temporary SOCKET object called AcceptSocket for accepting connections.
            SOCKET AcceptSocket;
            // Create a continuous loop that checks for connections requests. If a connection
            // request occurs, call the accept function to handle the request.
            printf("Server: Waiting for a client to connect...\n");
            printf("***Hint: Server is ready...run your client program...***\n");
            // Do some verification...
            while (1){
                AcceptSocket = SOCKET_ERROR;
                while (AcceptSocket == SOCKET_ERROR){
                    AcceptSocket = accept(m_socket, NULL, NULL);
                }
                // else, accept the connection... note: now it is wrong implementation !!!!!!!! !! !! (only 1 char)
                // When the client connection has been accepted, transfer control from the
                // temporary socket to the original socket and stop checking for new connections.
                printf("Server: Client Connected! Mammamija. \n");
                m_socket = AcceptSocket;
                char recvBuf[200]="";
                char * rc=recvBuf;
                int bytesRecv=recv(m_socket,recvBuf,64,0);
                if(bytesRecv==0 || bytesRecv==WSAECONNRESET){
                    cout<<"server: connection closed.\n";
                }else{
                    cout<<"server: recv() is OK.\n";
                    if(bytesRecv<0){
                        return 0;
                    }else{
                        printf("server: bytes received: %ld.\n",recvBuf);
                    }
                }


  • OWA, Outlook Anywhere, RPCPing Inconsistencies

    - by pk.
    I'm troubleshooting an Outlook Anywhere issue with a new Exchange 2010 server. The server in question, MS2010, is behind a SonicWALL NSA 2400 device and works wonderfully except for Outlook Anywhere. Outlook Anywhere works internally, and I've verified (through Ctrl-Right-Click --> Connection Status) that I'm able to connect to MS2010 over HTTPS. When trying to connect to the server using HTTPS from outside the firewall, I'm unable to do so. A Wireshark trace shows 30 or so successful HTTPS packet transmissions, and then it fails with 3 straight transmissions to a destination port of 135. I have no idea why my computer is attempting to access anything on port 135, since I've set up my profile to use HTTPS on both slow and fast connections. I'm 99% certain that the firewall is configured correctly: I run Outlook Web Access (also HTTPS) on the same server, and there are no issues with access.

    EDIT: My Autodiscover settings are correct (as far as I can tell). My server passes the Outlook Anywhere and Autodiscover tests at https://www.testexchangeconnectivity.com/. I've been using the RPCPing utility to troubleshoot and have come across the following results.

    Internally:

        >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none
        RPCPing v2.12. Copyright (C) Microsoft Corporation, 2002
        OS Version is: 6.1, Service Pack 1
        RPCPinging proxy server mail.mydomain.com with Echo Request Packet
        Sending ping to server
        Response from server received: 200
        Pinging successfully completed in 93 ms

    Externally:

        >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none
        RPCPing v6.0. Copyright (C) Microsoft Corporation, 2002-2006
        Enter password for RPC/HTTP proxy:
        RPCPing set Activity ID: {fc8411ba-2987-4175-b37b-801dc69d5ff9}
        RPCPinging proxy server mail.mydomain.com with Echo Request Packet
        Setting autologon policy to high
        WinHttpSetCredentials for target server called
        Error 87 : The parameter is incorrect. returned in WinHttpSetCredentials
        Ping failed

    What should I be checking in order to troubleshoot my Outlook Anywhere issues? I'm using Windows 7 SP1 for internal and external access.

    EDIT: Autodiscover.xml content:

        <?xml version="1.0"?>
        <Autodiscover xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006">
          <Response xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a">
            <User>
              <DisplayName>John Doe</DisplayName>
              <LegacyDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=pk</LegacyDN>
              <DeploymentId>d35170cc-f3a7-42c5-9427-1f554a469126</DeploymentId>
            </User>
            <Account>
              <AccountType>email</AccountType>
              <Action>settings</Action>
              <Protocol>
                <Type>EXCH</Type>
                <Server>MS2010.MYDOMAIN.local</Server>
                <ServerDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010</ServerDN>
                <ServerVersion>738180DA</ServerVersion>
                <MdbDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010/cn=Microsoft Private MDB</MdbDN>
                <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl>
                <OOFUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</OOFUrl>
                <OABUrl>http://MS2010.MYDOMAIN.local/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl>
                <UMUrl>https://MS2010.MYDOMAIN.local/EWS/UM2007Legacy.asmx</UMUrl>
                <Port>0</Port>
                <DirectoryPort>0</DirectoryPort>
                <ReferralPort>0</ReferralPort>
                <PublicFolderServer>MS2007.MYDOMAIN.local</PublicFolderServer>
                <AD>dc1.MYDOMAIN.local</AD>
                <EwsUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</EwsUrl>
                <EcpUrl>https://MS2010.MYDOMAIN.local/ecp/</EcpUrl>
                <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um>
                <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr>
                <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt>
                <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret>
                <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms>
              </Protocol>
              <Protocol>
                <Type>EXPR</Type>
                <Server>mail.mycompany.com</Server>
                <ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl>
                <OOFUrl>https://mail.mycompany.com/ews/exchange.asmx</OOFUrl>
                <OABUrl>https://mail.mycompany.com/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl>
                <UMUrl>https://mail.mycompany.com/ews/UM2007Legacy.asmx</UMUrl>
                <Port>0</Port>
                <DirectoryPort>0</DirectoryPort>
                <ReferralPort>0</ReferralPort>
                <SSL>On</SSL>
                <AuthPackage>Basic</AuthPackage>
                <CertPrincipalName>msstd:mail.mycompany.com</CertPrincipalName>
                <EwsUrl>https://mail.mycompany.com/ews/exchange.asmx</EwsUrl>
                <EcpUrl>https://mail.mycompany.com/owa/</EcpUrl>
                <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um>
                <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr>
                <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt>
                <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret>
                <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms>
              </Protocol>
              <Protocol>
                <Type>WEB</Type>
                <Port>0</Port>
                <DirectoryPort>0</DirectoryPort>
                <ReferralPort>0</ReferralPort>
                <Internal>
                  <OWAUrl AuthenticationMethod="Basic, Fba">https://MS2010.MYDOMAIN.local/owa/</OWAUrl>
                  <Protocol>
                    <Type>EXCH</Type>
                    <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl>
                  </Protocol>
                </Internal>
                <External>
                  <OWAUrl AuthenticationMethod="Fba">https://mail.mycompany.com/owa/</OWAUrl>
                  <Protocol>
                    <Type>EXPR</Type>
                    <ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl>
                  </Protocol>
                </External>
              </Protocol>
            </Account>
          </Response>
        </Autodiscover>
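
    A hedged reading of the Wireshark trace: port 135 is the RPC endpoint mapper, so traffic to it suggests the client is falling back to direct TCP/RPC instead of staying inside the HTTPS tunnel. One server-side sanity check from the Exchange 2010 Management Shell (a sketch; treat the property list as illustrative):

        # Confirm the external hostname and authentication method that
        # Outlook Anywhere is actually advertising to clients.
        Get-OutlookAnywhere | Format-List ServerName, ExternalHostname, ClientAuthenticationMethod, IISAuthenticationMethods

    If ExternalHostname or the client authentication method differs from what the profile expects (Basic against msstd:mail.mycompany.com, per the Autodiscover payload above), Outlook can abandon the HTTP transport and try TCP, which the firewall then blocks.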


  • tmux: Suddenly, cannot horizontally split

    - by A__A__0
    As root, using a reasonably default .profile and .shrc and an empty tmux.conf, I am unable to split the window horizontally. There are a number of cases to consider, so I'll list them clearly:

      • Using the keybinding + empty configuration: nothing happens.
      • Using the keybinding + my configuration: a bell is generated, nothing else; occasionally, the split will appear and disappear immediately (maybe it always does this, but I'm connecting over ssh so it may not make it through).
      • Using tmux split-window -h with any config: tmux immediately exits.

    I've posted here, in order, the server and client verbose logs generated by tmux -v during the third case:

        server started, pid 9523
        socket path /tmp/tmux-0/default
        new client 7
        got 100 from client 7
        got 101 from client 7
        got 102 from client 7
        got 103 from client 7
        got 104 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 105 from client 7
        got 106 from client 7
        got 200 from client 7
        cmdq 0x801c6e080: new-session (client 7)
        new term: xterm
        xterm override: XT
        xterm override: Ms ]52;%p1%s;%p2%s
        xterm override: Cs ]12;%p1%s
        xterm override: Cr ]112
        xterm override: Ss [%p1%d q
        xterm override: Se [2 q
        new key Oo: 0x1021 (KP/)
        new key Oj: 0x1022 (KP*)
        new key Om: 0x1023 (KP-)
        new key Ow: 0x1024 (KP7)
        new key Ox: 0x1025 (KP8)
        new key Oy: 0x1026 (KP9)
        new key Ok: 0x1027 (KP+)
        new key Ot: 0x1028 (KP4)
        new key Ou: 0x1029 (KP5)
        new key Ov: 0x102a (KP6)
        new key Oq: 0x102b (KP1)
        new key Or: 0x102c (KP2)
        new key Os: 0x102d (KP3)
        new key OM: 0x102e (KPEnter)
        new key Op: 0x102f (KP0)
        new key On: 0x1030 (KP.)
        new key OA: 0x101d (Up)
        new key OB: 0x101e (Down)
        new key OC: 0x1020 (Right)
        new key OD: 0x101f (Left)
        new key [A: 0x101d (Up)
        new key [B: 0x101e (Down)
        new key [C: 0x1020 (Right)
        new key [D: 0x101f (Left)
        new key OH: 0x1018 (Home)
        new key OF: 0x1019 (End)
        new key [H: 0x1018 (Home)
        new key [F: 0x1019 (End)
        new key Oa: 0x501d (C-Up)
        new key Ob: 0x501e (C-Down)
        new key Oc: 0x5020 (C-Right)
        new key Od: 0x501f (C-Left)
        new key [a: 0x901d (S-Up)
        new key [b: 0x901e (S-Down)
        new key [c: 0x9020 (S-Right)
        new key [d: 0x901f (S-Left)
        new key [11^: 0x5002 (C-F1)
        new key [12^: 0x5003 (C-F2)
        new key [13^: 0x5004 (C-F3)
        new key [14^: 0x5005 (C-F4)
        new key [15^: 0x5006 (C-F5)
        new key [17^: 0x5007 (C-F6)
        new key [18^: 0x5008 (C-F7)
        new key [19^: 0x5009 (C-F8)
        new key [20^: 0x500a (C-F9)
        new key [21^: 0x500b (C-F10)
        new key [23^: 0x500c (C-F11)
        new key [24^: 0x500d (C-F12)
        new key [25^: 0x500e (C-F13)
        new key [26^: 0x500f (C-F14)
        new key [28^: 0x5010 (C-F15)
        new key [29^: 0x5011 (C-F16)
        new key [31^: 0x5012 (C-F17)
        new key [32^: 0x5013 (C-F18)
        new key [33^: 0x5014 (C-F19)
        new key [34^: 0x5015 (C-F20)
        new key [2^: 0x5016 (C-IC)
        new key [3^: 0x5017 (C-DC)
        new key [7^: 0x5018 (C-Home)
        new key [8^: 0x5019 (C-End)
        new key [6^: 0x501a (C-NPage)
        new key [5^: 0x501b (C-PPage)
        new key [11$: 0x9002 (S-F1)
        new key [12$: 0x9003 (S-F2)
        new key [13$: 0x9004 (S-F3)
        new key [14$: 0x9005 (S-F4)
        new key [15$: 0x9006 (S-F5)
        new key [17$: 0x9007 (S-F6)
        new key [18$: 0x9008 (S-F7)
        new key [19$: 0x9009 (S-F8)
        new key [20$: 0x900a (S-F9)
        new key [21$: 0x900b (S-F10)
        new key [23$: 0x900c (S-F11)
        new key [24$: 0x900d (S-F12)
        new key [25$: 0x900e (S-F13)
        new key [26$: 0x900f (S-F14)
        new key [28$: 0x9010 (S-F15)
        new key [29$: 0x9011 (S-F16)
        new key [31$: 0x9012 (S-F17)
        new key [32$: 0x9013 (S-F18)
        new key [33$: 0x9014 (S-F19)
        new key [34$: 0x9015 (S-F20)
        new key [2$: 0x9016 (S-IC)
        new key [3$: 0x9017 (S-DC)
        new key [7$: 0x9018 (S-Home)
        new key [8$: 0x9019 (S-End)
        new key [6$: 0x901a (S-NPage)
        new key [5$: 0x901b (S-PPage)
        new key [11@: 0xd002 (C-S-F1)
        new key [12@: 0xd003 (C-S-F2)
        new key [13@: 0xd004 (C-S-F3)
        new key [14@: 0xd005 (C-S-F4)
        new key [15@: 0xd006 (C-S-F5)
        new key [17@: 0xd007 (C-S-F6)
        new key [18@: 0xd008 (C-S-F7)
        new key [19@: 0xd009 (C-S-F8)
        new key [20@: 0xd00a (C-S-F9)
        new key [21@: 0xd00b (C-S-F10)
        new key [23@: 0xd00c (C-S-F11)
        new key [24@: 0xd00d (C-S-F12)
        new key [25@: 0xd00e (C-S-F13)
        new key [26@: 0xd00f (C-S-F14)
        new key [28@: 0xd010 (C-S-F15)
        new key [29@: 0xd011 (C-S-F16)
        new key [31@: 0xd012 (C-S-F17)
        new key [32@: 0xd013 (C-S-F18)
        new key [33@: 0xd014 (C-S-F19)
        new key [34@: 0xd015 (C-S-F20)
        new key [2@: 0xd016 (C-S-IC)
        new key [3@: 0xd017 (C-S-DC)
        new key [7@: 0xd018 (C-S-Home)
        new key [8@: 0xd019 (C-S-End)
        new key [6@: 0xd01a (C-S-NPage)
        new key [5@: 0xd01b (C-S-PPage)
        new key [I: 0x1031 ((null))
        new key [O: 0x1032 ((null))
        new key OP: 0x1002 (F1)
        new key OQ: 0x1003 (F2)
        new key OR: 0x1004 (F3)
        new key OS: 0x1005 (F4)
        new key [15~: 0x1006 (F5)
        new key [17~: 0x1007 (F6)
        new key [18~: 0x1008 (F7)
        new key [19~: 0x1009 (F8)
        new key [20~: 0x100a (F9)
        new key [21~: 0x100b (F10)
        new key [23~: 0x100c (F11)
        new key [24~: 0x100d (F12)
        new key [2~: 0x1016 (IC)
        new key [3~: 0x1017 (DC)
        replacing key OH: 0x1018 (Home)
        replacing key OF: 0x1019 (End)
        new key [6~: 0x101a (NPage)
        new key [5~: 0x101b (PPage)
        new key [Z: 0x101c (BTab)
        replacing key OA: 0x101d (Up)
        replacing key OB: 0x101e (Down)
        replacing key OD: 0x101f (Left)
        replacing key OC: 0x1020 (Right)
        spawn: /bin/sh --
        session 0 created
        writing 207 to client 7
        got 208 from client 7
        input_parse: '#' ground
        input_parse: ' ' ground
        keys are 7 ([?1;2c)
        received service class 1
        complete key [?1;2c 0xfff
        keys are 1 (t)
        complete key t 0x74
        input_parse: 't' ground
        keys are 1 (m)
        complete key m 0x6d
        input_parse: 'm' ground
        keys are 1 (u)
        complete key u 0x75
        input_parse: 'u' ground
        keys are 1 (x)
        complete key x 0x78
        input_parse: 'x' ground
        keys are 1 ( )
        complete key 0x20
        input_parse: ' ' ground
        keys are 1 (s)
        complete key s 0x73
        input_parse: 's' ground
        keys are 1 (p)
        complete key p 0x70
        input_parse: 'p' ground
        keys are 1 (l)
        complete key l 0x6c
        input_parse: 'l' ground
        keys are 1 (i)
        complete key i 0x69
        input_parse: 'i' ground
        keys are 1 (t)
        complete key t 0x74
        input_parse: 't' ground
        keys are 1 (-)
        complete key - 0x2d
        input_parse: '-' ground
        keys are 1 (d)
        complete key d 0x64
        input_parse: 'd' ground
        keys are 1 ()
        complete key 0x7f
        input_parse: '' ground
        input_c0_dispatch: '
        input_parse: '' ground
        input_parse: '[' esc_enter
        input_parse: 'K' csi_enter
        input_csi_dispatch: 'K' "" ""
        keys are 1 (w)
        complete key w 0x77
        input_parse: 'w' ground
        keys are 1 (i)
        complete key i 0x69
        input_parse: 'i' ground
        keys are 1 (n)
        complete key n 0x6e
        input_parse: 'n' ground
        keys are 1 (d)
        complete key d 0x64
        input_parse: 'd' ground
        keys are 1 (o)
        complete key o 0x6f
        input_parse: 'o' ground
        keys are 1 (w)
        complete key w 0x77
        input_parse: 'w' ground
        keys are 1 ( )
        complete key 0x20
        input_parse: ' ' ground
        keys are 1 (-)
        complete key - 0x2d
        input_parse: '-' ground
        keys are 1 (h)
        complete key h 0x68
        input_parse: 'h' ground
        keys are 1 ( )
        complete key 0xd
        input_parse: ' ' ground
        input_c0_dispatch: '
        input_parse: ' ' ground
        input_c0_dispatch: '
        new client 13
        got 100 from client 13
        got 101 from client 13
        got 102 from client 13
        got 103 from client 13
        got 104 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 105 from client 13
        got 106 from client 13
        got 200 from client 13
        cmdq 0x801c6e160: split-window -h (client 13)
        spawn: /bin/sh --
        writing 203 to client 13
        input_parse: '#' ground
        input_parse: ' ' ground
        input_parse: '#' ground
        input_parse: ' ' ground
        lost client 13
        session 0 destroyed
        writing 203 to client 7
        got 205 from client 7
        writing 204 to client 7
        lost client 7
        got 207 from server
        got 203 from server
        got 204 from server

    There are some other peculiarities:

      • With a newly created user (from which I overwrote root's .profile and .shrc), tmux works perfectly.
      • Occasionally (twice out of the 50 or so times I've tested it), the splitting will work fine once in a session. (This happened, for example, when I ran ktrace on tmux, which I can also post.)
      • To explain the 'suddenly' part of the title: when I started my newly updated mysql56-server, tmux immediately exited and lost the session.

    Recently I changed architectures, from FreeBSD 10.0 i386 to amd64, and I am still working through shared-library incompatibilities. I suspect that this could be involved, but I can't imagine how an incompatibility of this sort could result in such a specific, isolated failure.

    Read the article

  • How to get full control of umask/PAM/permissions?

    - by plua
OUR SITUATION Several people from our company log in to a server and upload files. They all need to be able to upload and overwrite the same files. They have different usernames, but are all part of the same group. However, this is an internet server, so the "other" users should have (in general) just read-only access. So what I want to have are these standard permissions:
files: 664
directories: 771
My goal is that no user should need to worry about permissions. The server should be configured in such a way that these permissions apply to all files and directories, whether newly created, copied, or overwritten. Only when we need some special permissions would we change this manually. We upload files to the server by SFTP-ing in Nautilus, by mounting the server using sshfs and accessing it in Nautilus as if it were a local folder, and by SCP-ing on the command line. That basically covers our situation and what we aim to do. Now, I have read many things about the beautiful umask functionality. From what I understand, umask (together with PAM) should allow me to do exactly what I want: set standard permissions for new files and directories. However, after many, many hours of reading and trial-and-error, I still cannot get this to work, and I get many unexpected results. I would really like to get a solid grasp of umask, and I have many questions unanswered. I will post these questions below, together with my findings and an explanation of the trials that led to these questions. Given that many things appear to go wrong, I think I am doing several things wrong, so there are many questions. NOTE: I am using Ubuntu 9.10 and therefore cannot change the sshd_config to set the umask for the SFTP server: the installed OpenSSH (OpenSSH_5.1p1 Debian-6ubuntu2) is older than OpenSSH 5.4p1, which that option requires. So here go the questions.
1. DO I NEED TO RESTART FOR PAM CHANGES TO TAKE EFFECT?
Let's start with this. There were so many files involved that I was unable to figure out what does and does not affect things, partly because I did not know whether I have to restart the whole system for PAM changes to take effect. I did do so after not seeing the expected results, but is this really necessary? Or can I just log out from the server and log back in, and should new PAM policies then be effective? Or is there some PAM program to reload?
2. IS THERE ONE SINGLE FILE TO CHANGE THAT AFFECTS ALL USERS FOR ALL SESSIONS?
So I ended up changing MANY files, as I read MANY different things. I ended up setting the umask in the following files:
~/.profile -> umask=0002
~/.bashrc -> umask=0002
/etc/profile -> umask=0002
/etc/pam.d/common-session -> umask=0002
/etc/pam.d/sshd -> umask=0002
/etc/pam.d/login -> umask=0002
I want this change to apply to all users, so some sort of system-wide change would be best. Can it be achieved?
3. AFTER ALL, THIS UMASK THING, DOES IT WORK?
So after changing the umask to 0002 in every possible place, I ran tests.
------------SCP-----------
TEST 1:
scp testfile (which has 777 permissions for testing purposes) server:/home/
testfile 100% 4 0.0KB/s 00:00
Let's check permissions:
user@server:/home$ ls -l
total 4
-rwx--x--x 1 user uploaders 4 2011-02-05 17:59 testfile (711)
---------SSH------------
TEST 2:
ssh server
user@server:/home$ touch anotherfile
user@server:/home$ ls -l
total 4
-rw-rw-r-- 1 user uploaders 0 2011-02-05 18:03 anotherfile (664)
--------SFTP-----------
Nautilus: sftp://server/home/
Copy and paste newfile from client to server (777 on client)
TEST 3:
user@server:/home$ ls -l
total 4
-rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 newfile (777)
Create a new file through Nautilus. Check file permissions in terminal:
TEST 4:
user@server:/home$ ls -l
total 4
-rw------- 1 user uploaders 0 2011-02-05 18:06 newfile (600)
I mean... WHAT just happened here?! We should get 664 every single time. Instead I get 711, 777, 600, and only once 664. And the 664 is only achieved when creating a new, blank file through SSH, which is the least probable scenario. So I am asking: does umask/PAM work after all?
4. SO WHAT DOES IT MEAN TO UMASK SSHFS?
Sometimes we mount a server locally, using sshfs. Very useful. But again, we have permission issues. Here is how we mount:
sshfs -o idmap=user -o umask=0113 user@server:/home/ /mnt
NOTE: we use umask=0113 because apparently sshfs starts from 777 instead of 666, so with 113 we get 664, which is the desired file permission. But what now happens is that we see all files and directories as if they were 664. We browse in Nautilus to /mnt and:
Right click - New File (newfile) --- TEST 5
Right click - New Folder (newfolder) --- TEST 6
Copy and paste a 777 file from our local client --- TEST 7
So let's check on the command line:
user@client:/mnt$ ls -l
total 8
-rw-rw-r-- 1 user 1007 3 Feb 5 18:05 copyfile (664)
-rw-rw-r-- 1 user 1007 0 Feb 5 18:15 newfile (664)
drw-rw-r-- 1 user 1007 4096 Feb 5 18:15 newfolder (664)
But hey, let's check this same folder on the server side:
user@server:/home$ ls -l
total 8
-rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 copyfile (777)
-rw------- 1 user uploaders 0 2011-02-05 18:15 newfile (600)
drwx--x--x 2 user uploaders 4096 2011-02-05 18:15 newfolder (711)
What?! The REAL file permissions are very different from what we see in Nautilus. So does this umask on sshfs just create a 'filter' that shows unreal file permissions? And I tried to open a file from another user in the same group that had real 600 permissions but 664 'fake' permissions, and I still could not read it, so what good is this filter??
5. UMASK IS ALL ABOUT FILES. BUT WHAT ABOUT DIRECTORIES?
From my tests I can see that the umask being applied also somehow influences the directory permissions. However, I want my files to be 664 (umask 002) and my directories to be 771 (umask 006). So is it possible to have a different umask for directories?
6. PERHAPS UMASK/PAM IS REALLY COOL, BUT UBUNTU IS JUST BUGGY?
On the one hand, I have read topics from people who have had success with PAM/umask on Ubuntu. On the other hand, I have found many older and newer bugs regarding umask/PAM/fuse on Ubuntu:
https://bugs.launchpad.net/ubuntu/+source/gdm/+bug/241198
https://bugs.launchpad.net/ubuntu/+source/fuse/+bug/239792
https://bugs.launchpad.net/ubuntu/+source/pam/+bug/253096
https://bugs.launchpad.net/ubuntu/+source/sudo/+bug/549172
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=314796
So I do not know what to believe anymore. Should I just give up? Would ACLs solve all my problems?
Or am I again running into problems with Ubuntu? One word of caution if you go the ACL route with tar backups: Red Hat/CentOS distributions support ACLs in their tar program, but Ubuntu's tar does not, which means all ACLs will be lost when you create a backup. I am very willing to upgrade to Ubuntu 10.04 if that would solve my problems too, but first I want to understand what is happening.
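For reference, the arithmetic behind all of the numbers above is simple bitwise masking: the effective mode is base & ~umask, where the base is 666 for regular files and 777 for directories. A minimal sketch in plain Node.js (illustrative only, not from the original question) shows why a single umask cannot produce 664 files and 771 directories at the same time:

function applyUmask(base, umask) {
    // Effective permissions = base mode with the umask bits cleared.
    return (base & ~umask) & parseInt("777", 8);
}

var fileBase = parseInt("666", 8);
var dirBase = parseInt("777", 8);
console.log(applyUmask(fileBase, parseInt("002", 8)).toString(8)); // "664" - files under umask 002
console.log(applyUmask(dirBase, parseInt("002", 8)).toString(8));  // "775" - directories under umask 002
console.log(applyUmask(dirBase, parseInt("006", 8)).toString(8));  // "771" - directories would need umask 006

Getting 664 files and 771 directories therefore requires umask 002 for files but 006 for directories, which a single umask cannot express; default ACLs (setfacl -d) are the usual way to get per-directory defaults like this.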

    Read the article

  • Strange Recurrent Excessive I/O Wait

    - by Chris
I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present, with massive spikes of up to 20k ms await and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare:
iostat (Oct 6, 2013)
tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
16.00 0.00 156.00 9.75 21.89 288.12 36.00 57.60
5.50 0.00 44.00 8.00 48.79 2194.18 181.82 100.00
2.00 0.00 16.00 8.00 46.49 3397.00 500.00 100.00
4.50 0.00 40.00 8.89 43.73 5581.78 222.22 100.00
14.50 0.00 148.00 10.21 13.76 5909.24 68.97 100.00
1.50 0.00 12.00 8.00 8.57 7150.67 666.67 100.00
0.50 0.00 4.00 8.00 6.31 10168.00 2000.00 100.00
2.00 0.00 16.00 8.00 5.27 11001.00 500.00 100.00
0.50 0.00 4.00 8.00 2.96 17080.00 2000.00 100.00
34.00 0.00 1324.00 9.88 1.32 137.84 4.45 59.60
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
22.00 44.00 204.00 11.27 0.01 0.27 0.27 0.60
Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as the OS, where uname -a reports the following:
Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux
The machine is a dedicated server that hosts an online game without any databases or I/O-heavy applications running. The core application consumes about 0.8 of the 8 GBytes of RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitively to I/O latency, and thus our players experience massive in-game lag, which we would like to address as soon as possible.
iostat:
avg-cpu: %user %nice %system %iowait %steal %idle
         1.77  0.01  1.05    1.59    0.00   95.58
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdb 13.16 25.42 135.12 504701011 2682640656
sda 1.52 0.74 20.63 14644533 409684488
Uptime is: 19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32
Harddisk controller: 01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)
Harddisks:
Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS
Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS
Partition information from df:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 480191156 30715200 425083668 7% /home
/dev/sda2 7692908 437436 6864692 6% /
/dev/sda5 15377820 1398916 13197748 10% /usr
/dev/sda6 39159724 19158340 18012140 52% /var
Some more data samples generated with iostat -dx sdb 1 (Oct 11, 2013):
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 15.00 0.00 70.00 0.00 656.00 9.37 4.50 1.83 4.80 33.60
sdb 0.00 0.00 0.00 2.00 0.00 16.00 8.00 12.00 836.00 500.00 100.00
sdb 0.00 0.00 0.00 3.00 0.00 32.00 10.67 9.96 1990.67 333.33 100.00
sdb 0.00 0.00 0.00 4.00 0.00 40.00 10.00 6.96 3075.00 250.00 100.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 4.00 0.00 0.00 100.00
sdb 0.00 0.00 0.00 2.00 0.00 16.00 8.00 2.62 4648.00 500.00 100.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00 0.00 100.00
sdb 0.00 0.00 0.00 1.00 0.00 16.00 16.00 1.69 7024.00 1000.00 100.00
sdb 0.00 74.00 0.00 124.00 0.00 1584.00 12.77 1.09 67.94 6.94 86.00
Characteristic charts generated with rrdtool can be found here:
iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/
iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/
As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test if the I/O wait spikes would perhaps be caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers:
echo 3 > /proc/sys/vm/drop_caches
and directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags in short, unpredictable intervals. Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the raid controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris. Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem?
@Andy Shinn requested smartctl data, here it is: smartctl -a -d megaraid,2 /dev/sdb yields: smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: SEAGATE ST3500620SS Version: MS05 Serial number: Device type: disk Transport protocol: SAS Local Time is: Mon Oct 14 20:37:13 2013 CEST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Current Drive Temperature: 20 C Drive Trip Temperature: 68 C Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 1236631092 Blocks received from initiator = 1097862364 Blocks read from cache and sent to initiator = 1383620256 Number of read and write commands whose size <= segment size = 531295338 Number of read and write commands whose size > segment size = 51986460 Vendor (Seagate/Hitachi) factory information number of hours powered up = 36556.93 number of minutes until next internal SMART test = 32 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 509271032 47 0 509271079 509271079 20981.423 0 write: 0 0 0 0 0 5022.039 0 verify: 1870931090 196 0 1870931286 1870931286 100558.708 0 Non-medium error count: 0 SMART Self-test log Num Test Status segment LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 16 36538 - [- - -] # 2 Background short Completed 16 36514 - [- - -] # 3 Background short Completed 16 36490 - [- - -] # 4 Background short Completed 16 36466 - [- - -] # 5 Background short Completed 16 36442 - [- - -] # 6 Background long Completed 16 36420 - [- - -] # 7 Background short Completed 16 36394 - [- - -] # 8 Background short Completed 16 36370 - [- - -] # 9 Background long Completed 16 36364 - [- - -] #10 Background short Completed 16 36361 - [- - -] #11 Background long Completed 16 2 - [- - -] #12 Background short Completed 16 0 - [- - -] Long (extended) Self Test duration: 6798 seconds [113.3 minutes] smartctl -a -d megaraid,3 /dev/sdb yields: smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: SEAGATE ST3500620SS Version: MS05 Serial number: Device type: disk Transport protocol: SAS Local Time is: Mon Oct 14 20:37:26 2013 CEST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Current Drive Temperature: 19 C Drive Trip Temperature: 68 C Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 288745640 Blocks received from initiator = 1097848399 Blocks read from cache and sent to initiator = 1304149705 Number of read and write commands whose size <= segment size = 527414694 Number of read and write commands whose size > segment size = 51986460 Vendor (Seagate/Hitachi) factory information number of hours powered up = 36596.83 number of minutes until next internal SMART test = 28 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 610862490 44 0 610862534 610862534 20470.133 0 write: 0 0 0 0 0 5022.480 0 verify: 2861227413 203 0 2861227616 2861227616 100872.443 0 Non-medium error count: 1 SMART Self-test log Num Test Status segment 
LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 16 36580 - [- - -] # 2 Background short Completed 16 36556 - [- - -] # 3 Background short Completed 16 36532 - [- - -] # 4 Background short Completed 16 36508 - [- - -] # 5 Background short Completed 16 36484 - [- - -] # 6 Background long Completed 16 36462 - [- - -] # 7 Background short Completed 16 36436 - [- - -] # 8 Background short Completed 16 36412 - [- - -] # 9 Background long Completed 16 36404 - [- - -] #10 Background short Completed 16 36401 - [- - -] #11 Background long Completed 16 2 - [- - -] #12 Background short Completed 16 0 - [- - -] Long (extended) Self Test duration: 6798 seconds [113.3 minutes]
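For completeness: the await column that iostat reports can be derived from /proc/diskstats deltas as (time spent reading + time spent writing) / (reads completed + writes completed) per interval. A minimal, illustrative Node.js sketch follows; it assumes the standard Linux diskstats field layout and the device name is hard-coded for the example (this is not part of the original question):

var fs = require("fs");

// Read cumulative I/O counters for one device from /proc/diskstats.
// Standard layout: field [2] = name, [3] = reads completed, [6] = ms spent
// reading, [7] = writes completed, [10] = ms spent writing.
function readStats(dev) {
    var line = fs.readFileSync("/proc/diskstats", "utf8").split("\n")
        .filter(function (l) { return l.trim().split(/\s+/)[2] === dev; })[0];
    if (!line) throw new Error("device not found in /proc/diskstats: " + dev);
    var f = line.trim().split(/\s+/);
    return { ios: +f[3] + +f[7], timeMs: +f[6] + +f[10] };
}

var dev = "sdb"; // the device under suspicion
var prev = readStats(dev);
setInterval(function () {
    var cur = readStats(dev);
    var dIos = cur.ios - prev.ios;
    var dTime = cur.timeMs - prev.timeMs;
    // await ~= milliseconds of queue + service time per completed request
    console.log("await ~= " + (dIos ? (dTime / dIos).toFixed(2) : "0.00") + " ms");
    prev = cur;
}, 1000);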

    Read the article

  • Permissions denied on apache rewrite module virtual host configuration

    - by sina
All of a sudden I keep getting "Permission denied" errors on an Apache 2 virtual host after we moved it into its own conf file. I have tried all the suggestions I have found here, but none of them work. Please can someone tell me what I am doing wrong? Thanks!
<VirtualHost *:80>
    DocumentRoot "/var/www/mm"
    <Directory "/var/www/mm">
        Options +Indexes +MultiViews +FollowSymLinks
        AllowOverride all
        Order deny,allow
        Allow from all
        AddType text/vnd.sun.j2me.app-descriptor .jad
        AddType application/vnd.rim.cod .cod
    </Directory>
    Alias /holdspace "/var/www/mm/holdspace"
    RewriteLogLevel 9
    RewriteLog "/var/log/httpd/rewrite.log"
    RewriteEngine on
    # 91xx
    RewriteCond %{HTTP_USER_AGENT} BlackBerry.9105
    RewriteRule ^/download/(.*) /holdspace/bb6-360x480/$1 [L]
    # 92xx
    RewriteCond %{HTTP_USER_AGENT} BlackBerry.9220
    RewriteRule ^/download/(.*) /holdspace/bb5-320x240/$1 [L]
</VirtualHost>
Errors in error.log:
[Wed May 28 12:44:58 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /download/eazymoney.jad denied
[Wed May 28 12:44:58 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /error/HTTP_FORBIDDEN.html.var denied
[Wed May 28 12:44:59 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /favicon.ico denied
[Wed May 28 12:44:59 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /error/HTTP_FORBIDDEN.html.var denied
[Wed May 28 12:44:59 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /favicon.ico denied
[Wed May 28 12:44:59 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /error/HTTP_FORBIDDEN.html.var denied
Errors in rewrite.log:
197.255.173.95 - - [28/May/2014:12:46:01 +0100] [41.203.113.103/sid#7fe41704ca28][rid#7fe417123378/initial/redir#1] (3) applying pattern '^/download/(.*)' to uri '/error/HTTP_FORBIDDEN.html.var'
197.255.173.95 - - [28/May/2014:12:46:01 +0100] [41.203.113.103/sid#7fe41704ca28][rid#7fe417123378/initial/redir#1] (3) applying pattern '^/download/(.*)' to uri '/error/HTTP_FORBIDDEN.html.var'
Apache Configuration file: ServerTokens Prod ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 15 <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so
LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module modules/mod_version.so Include conf.d/*.conf User apache Group apache ServerAdmin root@localhost ServerName sv001zma002.africa.int.myorg.com UseCanonicalName Off DocumentRoot "/var/www/html" <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory "/var/www/html"> Options FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> <IfModule mod_userdir.c> UserDir disabled </IfModule> DirectoryIndex index.html index.html.var AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy All </Files> TypesConfig /etc/mime.types DefaultType text/plain <IfModule mod_mime_magic.c> MIMEMagicFile conf/magic </IfModule> HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature Off TraceEnable Off Alias /icons/ "/var/www/icons/" <Directory "/var/www/icons"> Options MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> <IfModule mod_dav_fs.c> DAVLockDB /var/lib/dav/lockdb </IfModule> ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" <Directory "/var/www/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c 
AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif ReadmeName README.html HeaderName HEADER.html IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml ProxyErrorOverride On Alias /error/ "/var/www/error/" <IfModule mod_negotiation.c> <IfModule mod_include.c> <Directory "/var/www/error"> AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en es de fr ForceLanguagePriority Prefer Fallback </Directory> ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var </IfModule> </IfModule> BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4\.0" force-response-1.0 BrowserMatch "Java/1\.0" force-response-1.0 BrowserMatch "JDK/1\.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully ErrorDocument 400 "Bad Request"

    Read the article

  • Metro: Declarative Data Binding

    - by Stephen.Walther
    The goal of this blog post is to describe how declarative data binding works in the WinJS library. In particular, you learn how to use both the data-win-bind and data-win-bindsource attributes. You also learn how to use calculated properties and converters to format the value of a property automatically when performing data binding. By taking advantage of WinJS data binding, you can use the Model-View-ViewModel (MVVM) pattern when building Metro style applications with JavaScript. By using the MVVM pattern, you can prevent your JavaScript code from spinning into chaos. The MVVM pattern provides you with a standard pattern for organizing your JavaScript code which results in a more maintainable application. Using Declarative Bindings You can use the data-win-bind attribute with any HTML element in a page. The data-win-bind attribute enables you to bind (associate) an attribute of an HTML element to the value of a property. Imagine, for example, that you want to create a product details page. You want to show a product object in a page. In that case, you can create the following HTML page to display the product details: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Product Details</h1> <div class="field"> Product Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Product Price: <span data-win-bind="innerText:price"></span> </div> <div class="field"> Product Picture: <br /> <img data-win-bind="src:photo;alt:name" /> </div> </body> </html> The HTML page above contains three data-win-bind attributes – one attribute for each product property displayed. You use the data-win-bind attribute to set properties of the HTML element associated with the data-win-attribute. The data-win-bind attribute takes a semicolon delimited list of element property names and data source property names: data-win-bind=”elementPropertyName:datasourcePropertyName; elementPropertyName:datasourcePropertyName;…” In the HTML page above, the first two data-win-bind attributes are used to set the values of the innerText property of the SPAN elements. The last data-win-bind attribute is used to set the values of the IMG element’s src and alt attributes. By the way, using data-win-bind attributes is perfectly valid HTML5. The HTML5 standard enables you to add custom attributes to an HTML document just as long as the custom attributes start with the prefix data-. So you can add custom attributes to an HTML5 document with names like data-stephen, data-funky, or data-rover-dog-is-hungry and your document will validate. The product object displayed in the page above with the data-win-bind attributes is created in the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var product = { name: "Tesla", price: 80000, photo: "/images/TeslaPhoto.png" }; WinJS.Binding.processAll(null, product); } }; app.start(); })(); In the code above, a product object is created with a name, price, and photo property. 
The WinJS.Binding.processAll() method is called to perform the actual binding (Don’t confuse WinJS.Binding.processAll() and WinJS.UI.processAll() – these are different methods). The first parameter passed to the processAll() method represents the root element for the binding. In other words, binding happens on this element and its child elements. If you provide the value null, then binding happens on the entire body of the document (document.body). The second parameter represents the data context. This is the object that has the properties which are displayed with the data-win-bind attributes. In the code above, the product object is passed as the data context parameter. Another word for data context is view model.  Creating Complex View Models In the previous section, we used the data-win-bind attribute to display the properties of a simple object: a single product. However, you can use binding with more complex view models including view models which represent multiple objects. For example, the view model in the following default.js file represents both a customer and a product object. Furthermore, the customer object has a nested address object: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone", address: { street: "1 Rocky Way", city: "Bedrock", country: "USA" } }, product: { name: "Bowling Ball", price: 34.55 } }; WinJS.Binding.processAll(null, viewModel); } }; app.start(); })(); The following page displays the customer (including the customer address) and the product. Notice that you can use dot notation to refer to child objects in a view model such as customer.address.street. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:customer.firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:customer.lastName"></span> </div> <div class="field"> Address: <address> <span data-win-bind="innerText:customer.address.street"></span> <br /> <span data-win-bind="innerText:customer.address.city"></span> <br /> <span data-win-bind="innerText:customer.address.country"></span> </address> </div> <h1>Product</h1> <div class="field"> Name: <span data-win-bind="innerText:product.name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:product.price"></span> </div> </body> </html> A view model can be as complicated as you need and you can bind the view model to a view (an HTML document) by using declarative bindings. Creating Calculated Properties You might want to modify a property before displaying the property. For example, you might want to format the product price property before displaying the property. You don’t want to display the raw product price “80000”. Instead, you want to display the formatted price “$80,000”. You also might need to combine multiple properties. 
For example, you might need to display the customer full name by combining the values of the customer first and last name properties. In these situations, it is tempting to call a function when performing binding. For example, you could create a function named fullName() which concatenates the customer first and last name. Unfortunately, the WinJS library does not support the following syntax: <span data-win-bind=”innerText:fullName()”></span> Instead, in these situations, you should create a new property in your view model that has a getter. For example, the customer object in the following default.js file includes a property named fullName which combines the values of the firstName and lastName properties: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", get fullName() { return this.firstName + " " + this.lastName; } }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); The customer object has a firstName, lastName, and fullName property. Notice that the fullName property is defined with a getter function. When you read the fullName property, the values of the firstName and lastName properties are concatenated and returned. The following HTML page displays the fullName property in an H1 element. You can use the fullName property in a data-win-bind attribute in exactly the same way as any other property. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1 data-win-bind="innerText:fullName"></h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </body> </html> Creating a Converter In the previous section, you learned how to format the value of a property by creating a property with a getter. This approach makes sense when the formatting logic is specific to a particular view model. If, on the other hand, you need to perform the same type of formatting for multiple view models then it makes more sense to create a converter function. A converter function is a function which you can apply whenever you are using the data-win-bind attribute. Imagine, for example, that you want to create a general function for displaying dates. You always want to display dates using a short format such as 12/25/1988. The following JavaScript file – named converters.js – contains a shortDate() converter: (function (WinJS) { var shortDate = WinJS.Binding.converter(function (date) { return date.getMonth() + 1 + "/" + date.getDate() + "/" + date.getFullYear(); }); // Export shortDate WinJS.Namespace.define("MyApp.Converters", { shortDate: shortDate }); })(WinJS); The file above uses the Module Pattern, a pattern which is used through the WinJS library. 
To learn more about the Module Pattern, see my blog entry on namespaces and modules: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-namespaces-and-modules.aspx The file contains the definition for a converter function named shortDate(). This function converts a JavaScript date object into a short date string such as 12/1/1988. The converter function is created with the help of the WinJS.Binding.converter() method. This method takes a normal function and converts it into a converter function. Finally, the shortDate() converter is added to the MyApp.Converters namespace. You can call the shortDate() function by calling MyApp.Converters.shortDate(). The default.js file contains the customer object that we want to bind. Notice that the customer object has a firstName, lastName, and birthday property. We will use our new shortDate() converter when displaying the customer birthday property: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", birthday: new Date("12/1/1988") }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); We actually use our shortDate converter in the HTML document. The following HTML document displays all of the customer properties: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script type="text/javascript" src="js/converters.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> <div class="field"> Birthday: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> </div> </body> </html> Notice the data-win-bind attribute used to display the birthday property. It looks like this: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> The shortDate converter is applied to the birthday property when the birthday property is bound to the SPAN element’s innerText property.
Using data-win-bindsource
Normally, you pass the view model (the data context) which you want to use with the data-win-bind attributes in a page by passing the view model to the WinJS.Binding.processAll() method like this: WinJS.Binding.processAll(null, viewModel); As an alternative, you can specify the view model declaratively in your markup by using the data-win-bindsource attribute.
For example, the following default.js script exposes a view model with the fully-qualified name of MyWinWebApp.viewModel: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { // Create view model var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone" }, product: { name: "Bowling Ball", price: 12.99 } }; // Export view model to be seen by universe WinJS.Namespace.define("MyWinWebApp", { viewModel: viewModel }); // Process data-win-bind attributes WinJS.Binding.processAll(); } }; app.start(); })(); In the code above, a view model which represents a customer and a product is exposed as MyWinWebApp.viewModel. The following HTML page illustrates how you can use the data-win-bindsource attribute to bind to this view model: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div data-win-bindsource="MyWinWebApp.viewModel.customer"> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </div> <h1>Product</h1> <div data-win-bindsource="MyWinWebApp.viewModel.product"> <div class="field"> Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:price"></span> </div> </div> </body> </html> The data-win-bindsource attribute is used twice in the page above: it is used with the DIV element which contains the customer details and it is used with the DIV element which contains the product details. If an element has a data-win-bindsource attribute then all of the child elements of that element are affected. The data-win-bind attributes of all of the child elements are bound to the data source represented by the data-win-bindsource attribute. Summary The focus of this blog entry was data binding using the WinJS library. You learned how to use the data-win-bind attribute to bind the properties of an HTML element to a view model. We also discussed several advanced features of data binding. We examined how to create calculated properties by including a property with a getter in your view model. We also discussed how you can create a converter function to format the value of a view model property when binding the property. Finally, you learned how to use the data-win-bindsource attribute to specify a view model declaratively.

    Read the article

  • Using BPEL Performance Statistics to Diagnose Performance Bottlenecks

    - by fip
Tuning the performance of Oracle SOA 11G applications can be challenging. Because SOA is a platform for building composite applications that connect many applications and "services", when the overall performance is slow the bottleneck could be anywhere in the system: the applications/services that SOA connects to, the infrastructure database, or the SOA server itself. How to quickly identify the bottleneck becomes crucial in tuning the overall performance. Fortunately, the BPEL engine in Oracle SOA 11G (and 10G, for that matter) collects BPEL Engine Performance Statistics, which show the latencies of low-level BPEL engine activities. The BPEL engine performance statistics can make it a bit easier for you to identify the performance bottleneck. Although the BPEL engine performance statistics are always available, the access to and interpretation of them are somewhat obscure in the early and current (PS5) 11G versions. This blog attempts to offer instructions that help you enable, retrieve and interpret the performance statistics, before future versions provide a more pleasant user experience.
Overview of BPEL Engine Performance Statistics
SOA BPEL has a feature of collecting performance statistics and storing them in memory. One MBean attribute, StatLastN, configures the size of the memory buffer that stores the statistics. This memory buffer is a "moving window", in that old statistics are flushed out by new ones once the amount of data exceeds the buffer size. Since the buffer size is limited by StatLastN, the impact of statistics collection on performance is minimal. By default StatLastN=-1, which means no collection of performance data. Once the statistics are collected in the memory buffer, they can be retrieved via another MBean, oracle.as.soainfra.bpel:Location=[Server Name],name=BPELEngine,type=BPELEngine. My friend in Oracle SOA development wrote a simple 'bpelstat' web app that looks up and retrieves the performance data from the MBean and displays it in a human-readable form. It does not have a beautiful UI, but it is fairly useful. Although from Oracle SOA 11.1.1.5 onwards the same statistics can be viewed via a more elegant UI under "request break down" at EM -> SOA Infrastructure -> Service Engines -> BPEL -> Statistics, some unsophisticated minds like mine may still prefer the simplicity of the 'bpelstat' JSP. One thing that simple JSP does do well is that you can save the page and send it to someone for further analysis. Below are the instructions for installing and invoking the BPEL statistics JSP. My friend in SOA Development will soon blog about interpreting the statistics. Stay tuned.
Step 1: Enable BPEL Engine Statistics for Each SOA Server via Enterprise Manager
First you need to set StatLastN to some number as a way of enabling the collection of BPEL Engine Performance Statistics:
EM Console -> soa-infra (Server Name) -> SOA Infrastructure -> SOA Administration -> BPEL Properties
Click on "More BPEL Configuration Properties"
Click on the attribute "StatLastN" and set its value to some integer. Typically you want to set it to 1000 or more.
Step 2: Download and Deploy the bpelstat.war File to the Admin Server
Note: the WAR file contains a JSP that does NOT have any security restriction. You do NOT want to keep it on your production server for long, as it is a security hazard. Deactivate the war once you are done.
Download the bpelstat.war to your local PC
At the WebLogic Console, go to Deployments -> Install
Click on "upload your file(s)"
Click the "Browse" button to upload the deployment to the Admin Server
Accept the uploaded file as the path, click next
Check the default option "Install this deployment as an application"
Check "AdminServer" as the target server
Finish the rest of the deployment with default settings
Console -> Deployments
Check the box next to the "bpelstat" application
Click on the "Start" button. It will change the state of the app from "prepared" to "active"
Step 3: Invoke the BPEL Statistics Tool
The bpelstat tool merely calls the MBean of the BPEL server, then collects and displays the in-memory performance statistics. You usually want to do this after some peak load.
Go to http://<admin-server-host>:<admin-server-port>/bpelstat
Enter the correct admin hostname, port, username and password
Enter the SOA server name from which you want to collect the performance statistics, for example SOA_MS1
Click Submit
Repeat for all SOA servers.
Step 4: Interpret the BPEL Engine Statistics
You will see a few categories of BPEL statistics on the JSP page. It starts with the overall latency of BPEL processes, grouped by synchronous and asynchronous processes. It then provides a further breakdown of the measurements through the lifetime of a BPEL request, which is called the "request breakdown".
1. Overall latency of BPEL processes
The top of the page shows that the elapsed time of executing the synchronous process TestSyncBPELProcess from the composite TestComposite averages about 1543.21ms, while the elapsed time of executing the asynchronous process TestAsyncBPELProcess from the composite TestComposite2 averages about 1765.43ms. The maximum and minimum latencies are also shown.
Synchronous process statistics
<statistics>
    <stats key="default/TestComposite!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestSyncBPELProcess" min="1234" max="4567" average="1543.21" count="1000">
    </stats>
</statistics>
Asynchronous process statistics
<statistics>
    <stats key="default/TestComposite2!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestAsyncBPELProcess" min="2234" max="3234" average="1765.43" count="1000">
    </stats>
</statistics>
2. Request breakdown
Under the overall latency categorized by synchronous and asynchronous processes is the "request breakdown". Organized by statistic keys, the request breakdown gives finer-grained performance statistics through the lifetime of the BPEL requests. It uses indentation to show the hierarchy of the statistics.
Request breakdown
<statistics>
    <stats key="eng-composite-request" min="0" max="0" average="0.0" count="0">
        <stats key="eng-single-request" min="22" max="606" average="258.43" count="277">
            <stats key="populate-context" min="0" max="0" average="0.0" count="248">
Please note that in SOA 11.1.1.6 the statistics under the request breakdown are aggregated across all BPEL processes based on statistic keys. It does not differentiate between BPEL processes: if two BPEL processes happen to have statistics that share the same statistic key, the statistics from the two processes are aggregated together. Keep this in mind as we go through more details below.
2.1 BPEL process activity latencies
A very useful measurement in the request breakdown is the performance statistics of the BPEL activities you put in your BPEL processes: Assign, Invoke, Receive, etc.
The names of the measurement in the JSP page directly come from the names to assign to each BPEL activity. These measurements are under the statistic key "actual-perform" Example 1:  Follows is the measurement for BPEL activity "AssignInvokeCreditProvider_Input", which looks like the Assign activity in a BPEL process that assign an input variable before passing it to the invocation:                                <stats key="AssignInvokeCreditProvider_Input" min="1" max="8" average="1.9" count="153">                                     <stats key="sensor-send-activity-data" min="0" max="1" average="0.0" count="306">                                     </stats>                                     <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                 </stats> Note: because as previously mentioned that the statistics cross all BPEL processes are aggregated together based on statistic keys, if two BPEL processes happen to name their Invoke activity the same name, they will show up at one measurement (i.e. statistic key). Example 2: Follows is the measurement of BPEL activity called "InvokeCreditProvider". You can not only see that by average it takes 3.31ms to finish this call (pretty fast) but also you can see from the further break down that most of this 3.31 ms was spent on the "invoke-service".                                  <stats key="InvokeCreditProvider" min="1" max="13" average="3.31" count="153">                                     <stats key="initiate-correlation-set-again" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="invoke-service" min="1" max="13" average="3.08" count="153">                                         <stats key="prep-call" min="0" max="1" average="0.04" count="153">                                         </stats>                                     </stats>                                     <stats key="initiate-correlation-set" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                     <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">                                     </stats>                                     <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">                                     </stats>                                     <stats key="update-audit-trail" min="0" max="2" average="0.03" count="153">                                     </stats>                                 </stats> 2.2 BPEL engine activity latency Another type of measurements under Request breakdown are the latencies of underlying system level engine activities. These activities are not directly tied to a particular BPEL process or process activity, but they are critical factors in the overall engine performance. These activities include the latency of saving asynchronous requests to database, and latency of process dehydration. 
My friend Malkit Bhasin is working on a more detailed interpretation of the engine-activity statistics on his blog (https://blogs.oracle.com/malkit/). I will update this post once that information becomes available. Update on 2012-10-02: Malkit Bhasin has published his detailed interpretation of the BPEL service engine statistics at http://malkit.blogspot.com/2012/09/oracle-bpel-engine-soa-suite.html.

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform: it's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use a Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js application in a worker role. Below are some benefits of using a worker role.
- WAWS leverages IIS and IISNode to host Node.js applications, which run in x86 WOW mode. This reduces performance compared with x64 in some cases.
- A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000 concurrent requests limitation.
- WACS gives developers more flexibility and control. For example, we can RDP to the virtual machines of our worker role instances.
- WACS provides service configuration features which can be changed while the role is running.
- WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
- Since with a WACS worker role we start Node.js ourselves in a process, we can control its input, output and error streams. We can also control the version of Node.js.
Run Node.js in a Worker Role
Node.js can be started from just its executable file. This means that in Windows Azure we can have a worker role containing "node.exe" and the Node.js source files, and start it in the Run method of the worker role entry class.
Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe".
Then we need to create the Node.js entry code. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the "Add new item" context menu; we have to create a text file and then rename it with the JavaScript extension.
After we add these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer"; otherwise they will not be included in the package when deployed.
Let's paste some very simple Node.js code into "index.js", as below. As you can see, I created a web server listening on port 12345.

var http = require("http");
var port = 12345;

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
}).listen(port);

console.log("Server running at port %d", port);

Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method: I find the Node.js executable and entry JavaScript file names, then create a new process to run it. Our worker role will wait for the process to exit.
If everything is OK, once the web server is up the process keeps listening for incoming requests and should not terminate. The code in the worker role looks like this.

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("NodejsHost entry point called", "Information");

    // retrieve node.exe and the entry node.js source code file name
    var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
    var js = "index.js";

    // prepare the process start info for node.exe
    var info = new ProcessStartInfo(node, js)
    {
        CreateNoWindow = false,
        ErrorDialog = true,
        WindowStyle = ProcessWindowStyle.Normal,
        UseShellExecute = false,
        WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
    };
    Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");

    // start node.exe with the entry code and wait for it to exit
    var process = Process.Start(info);
    process.WaitForExit();
}

Then we can run it locally. In the compute emulator UI the worker role started, executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next, let's deploy it to Azure, which requires some additional steps. First, we need to create an input endpoint. By default there is no endpoint defined in a worker role, so we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use; in this case I will use 80. Even though we created a web server, we add a TCP (not HTTP) endpoint to the worker role, since Node.js always listens on TCP. Then change "index.js" so our web server listens on port 80.

var http = require("http");
var port = 80;

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
}).listen(port);

console.log("Server running at port %d", port);

Then publish it to Windows Azure, and in the browser we can see our Node.js website running on a WACS worker role. Note that we may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator has already registered port 80 and maps our 80 endpoint to 81, but Node.js cannot detect this remapping, so when it tries to listen on 80 it fails because 80 is already in use.

Use NPM Modules

When we use WAWS to host Node.js, we simply install the modules we need and then publish or upload all files to WAWS. But if we use a WACS worker role, we have to take some extra steps to make the modules work. Assume we plan to use "express" in our application. First of all, we download and install the module through the npm command. After the install finishes, the files are on disk but not included in the worker role project; if we deployed the worker role right now, the module would not be packaged and uploaded to Azure. Hence we need to add them to the project. In the Solution Explorer window click the "Show All Files" button, select the "node_modules" folder, and in the context menu select "Include In Project". But that is not enough. We also need to set all files in this module to "Copy always" or "Copy if newer" so that they are uploaded to Azure along with "node.exe" and "index.js". This is a painful step, since a module may contain many files.
So I created a small tool that updates a C# project file, marking all of its items as "Copy always". The code is very simple.

static void Main(string[] args)
{
    if (args.Length < 1)
    {
        Console.WriteLine("Usage: copyallalways [project file]");
        return;
    }

    var proj = args[0];
    File.Copy(proj, string.Format("{0}.bak", proj));

    var xml = new XmlDocument();
    xml.Load(proj);
    var nsManager = new XmlNamespaceManager(xml.NameTable);
    nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");

    // add the output setting to copy always
    var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
    UpdateNodes(contentNodes, xml, nsManager);
    var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
    UpdateNodes(noneNodes, xml, nsManager);
    xml.Save(proj);

    // remove the namespace attributes
    var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
    xml.LoadXml(content);
    xml.Save(proj);
}

static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
{
    foreach (XmlNode node in nodes)
    {
        var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
        if (copyToOutputDirectoryNode == null)
        {
            var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
            n.InnerText = "Always";
            node.AppendChild(n);
        }
        else if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
        {
            copyToOutputDirectoryNode.InnerText = "Always";
        }
    }
}

Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, execute the tool with the worker role project file name as the command-line argument, and it will set all items to "Copy always". Then reload the worker role project. Now let's change "index.js" to use express.

var express = require("express");
var app = express();

var port = 80;

app.configure(function () {
});

app.get("/", function (req, res) {
    res.send("Hello Node.js!");
});

app.get("/User/:id", function (req, res) {
    var id = req.params.id;
    res.json({
        "id": id,
        "name": "user " + id,
        "company": "IGT"
    });
});

app.listen(port);

Finally, let's publish it and have a look in the browser.

Use Windows Azure SQL Database

We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js on worker role hosting as well. Since we control the version of Node.js, here we can use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, which supports only x86. Just install the "node-sqlserver" module from npm and copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder. Include both in the worker role project and run my tool to set them to "Copy always". Finally, update "index.js" to use WASD.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

// note: the routes below concatenate user input into the SQL text for brevity;
// in real code these queries should be parameterized to avoid SQL injection
app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot insert the record.");
                }
                else {
                    res.send(200, "Inserted Successfully");
                }
            });
        }
    });
});

app.listen(port);

Publish to Azure, and we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".

Summary

In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. With a worker role we control the version of Node.js as well as the entry code, we can run preparation jobs before the Node.js application starts, and the IIS and IISNode limitations are removed. I personally recommend the worker role for Node.js hosting. There are, however, some problems with the approach I described here. The first is that we need to set all JavaScript and module files to "Copy always" or "Copy if newer" manually. The second is that this way we cannot retrieve the cloud service configuration information.
For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in Node.js. This should be changed so that Node.js retrieves the endpoint itself, but I can tell you that it won't work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can read the cloud service configuration from Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
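As a rough sketch of one possible workaround (my own assumption, not the approach the author describes next): the ServiceRuntime API is available inside the worker role's Run method, so the port can be read there and handed to node.exe as a command-line argument. The endpoint name "Endpoint1" below is assumed; use whatever name you gave the TCP endpoint in the role properties.

// Sketch only, with assumed endpoint name "Endpoint1".
// Requires: using System; using System.Diagnostics;
//           using Microsoft.WindowsAzure.ServiceRuntime;
public override void Run()
{
    var root = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot");

    // read the port assigned to our input endpoint instead of hard-coding 80
    var port = RoleEnvironment.CurrentRoleInstance
                              .InstanceEndpoints["Endpoint1"]
                              .IPEndpoint.Port;

    // pass the port to node.exe as the argument after the entry script
    var info = new ProcessStartInfo(root + @"\node.exe", string.Format("index.js {0}", port))
    {
        UseShellExecute = false,
        WorkingDirectory = root
    };
    Process.Start(info).WaitForExit();
}

On the Node.js side the script would then pick the port up from its arguments, e.g. var port = process.argv[2] || 80;. This would also sidestep the port 80 to 81 remapping problem in the local compute emulator.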

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio Setup project, which is severely limited, or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning in German; it is funny to always have to use the long name when I talk about deployment options. Alternatively, you can buy commercial tools which help you author MSI files, but I am not sure how good they are. Given enough pain with the existing solutions, you could also learn the MSI APIs and create your own packaging solution. If I were you, I would use either a commercial visual tool for easy deployments or the free Windows Installer Toolset. Once you know the WIX schema, you can create well-formed wxs XML files easily with any editor, and then "compile" your MSI package from them. Recently I had the "pleasure" to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should be an easier way to install and update files than exists today. I am not alone in this: as John Robbins (creator of the cool tool Paraffin) states, ".. It's a brittle and scary API in Windows …". To help other people struggling with installation issues, I present the advice I (and others) found useful, and what will happen if you ignore it.

What is an MSI file?

An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how to get your stuff onto or off your machine. Your "stuff" usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format, which can be embedded in the MSI file, reside beside it, or sit in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by Microsoft. There is also another free editor called Super Orca, which supports diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension, I tend to use only Orca: it is easy to right-click an MSI file and open it with this tool.

How Do I Install It?

Double-click it. This works for fresh installations as well as major upgrades. Updates need to be installed via the command line:

msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus

This tells the installer to reinstall all already installed features (new features will NOT be installed). The REINSTALLMODE letters force an overwrite of the old cached package in the %WINDIR%\Installer folder; all files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things go really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus.

How To Enable MSI Logs?

You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging.
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively, you can add an option to your msiexec command line:

msiexec …. /l*vx <LogFileName>

Personally I find it rather strange that * does not mean full logging; to really get all logs I need to add v and x. This is documented in the msiexec help, but I still find the behavior unintuitive.

What are MSI components?

The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component tables of an example MSI. The Feature table basically defines the feature hierarchy. To find out what belongs to a feature, you need to look at the FeatureComponents table, which lists for each feature the components that will be installed when the feature is installed. The MSI components are defined in the Component table. Its first column is the component name and its second column is the component id, which is a GUID. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look, e.g., at the File table, you see that every file belongs to a component, and the same is true for all other tables which install resources. The Component table is the glue between all other tables that contain the resources you want to install. So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or another. When you install a feature, the reference count of all components belonging to this feature increases by one. If a component is installed by more than one feature, it gets a higher refcount. When you uninstall a feature, its refcount drops by one. Interesting things happen when the component reference count reaches zero: then all associated resources are deleted. That looks like a reasonable thing, and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer:

Rule 16: Follow Component Rules

Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package:
- Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies.
- Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components may, however, share a key path folder.
- Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies.
- Do not create components containing resources that will need to be installed into more than one directory on the user's system.
  The installer installs all of the resources in a component into the same directory; it is not possible to install some resources into subdirectories.
- Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component.
- Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut.

And these rules do not even cover component ids, update packages and upgrades, which you need to understand as well. Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names, and both install the same file. What will happen when you uninstall MSI2? Hmm, the file should stay there, because the component names are different? No: MSI does not use the component name as the key for the refcount. Instead, the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs, the component with the id {1000…} has a refcount of two. After uninstalling one MSI there is still a refcount of one, which drops to zero, just as expected, when we uninstall the last MSI. Then the file, which was the same for both MSIs, is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are deleted then, and only then, when the refcount of the component reaches zero.

The dependencies between features, components and resources can be described as relations, where m and k are numbers >= 1 and n can be 0. Inside an MSI the following relations are valid:

Feature      1 –> n Components
Component    1 –> m Features
Component    1 –> k Resources

These relations express that one feature can install several components, and features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (its primary key, to stay in database speak) occurs as a value in the Component column of some other table which installs a resource. Let's make it clear with an example. We want to install, with the feature MainFeature, some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables.

Feature       Component   Registry       File        Shortcuts
MainFeature   Comp1       RegistryKey1
MainFeature   Comp2                      File.txt
MainFeature   Comp3                      File2.txt   Shortcut to File2.txt

It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let's illustrate this:

Feature    ComponentId   Resource    Reference Count
Feature1   {1000-…}      File1.txt   1
Feature2   {2000-…}      File1.txt   1

The installation part works well, but what happens when you uninstall Feature2? Component {2000…} gets a refcount of zero, at which point MSI deletes all resources belonging to this component. In this case File1.txt is deleted. But Feature1 still has another component, {1000…}, with a refcount of one, which means the file was deleted too early. You have just ruined your installation. To fix it, you then need to click the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might have noticed that there is more in the Component table.
Besides its name and GUID, a component also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect whether the component is already installed; this becomes important when you repair or uninstall a component. To find out if the component is installed, MSI checks whether the registry key or file referenced by the KeyPath property exists. If it does not exist, MSI assumes the component is not installed (which can lead to problems during uninstall); if it does exist, MSI assumes the component is installed and all is fine. Why is this detail so important? Let's put all files into one component. The KeyPath would then be one of the files of your component, used to check whether the component is installed. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks component integrity only via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files from your super component during an upgrade (you cannot remove files during an update). The only way out, and therefore the best practice, is to assign every resource you want to install its own component. This ensures painless updates and repairs, and makes it much easier to remove specific files during an upgrade. In effect you get this best-practice relation:

Feature      1 –> n Components
Component    1 –> 1 Resource

MSI Component Rules

Rule 1 – One component per resource

Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location is the same.

Penalty: If you add more than one resource to a component, you will break the repair capability of MSI, because the KeyPath is used to check whether the component needs repair.

MSI       ComponentId   Files
MSI 1.0   {1000}        File1-5
MSI 2.0   {2000}        File2-5

Suppose you want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files, you create a new component and add them there. MSI will delete all files when the component refcount of {1000} drops to zero; the files you want to keep are added to the new component {2000}. OK, that works if your upgrade uninstalls the old MSI first: this causes the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade, by first installing your new MSI and then removing the old one. If you choose this upgrade path, you will lose File1-5 after your upgrade, and not only File1 as intended by your new component design.

Rule 2 – Only add, never remove resources from a component

If you followed rule 1, you will not need rule 2. You can add more resources to one component in a patch; that is OK. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design.

Penalty: Let's assume you have 2 MSI files which each install one file under the same component:

MSI1                    MSI2
{1000} – ComponentId    {1000} – ComponentId
File1.txt               File2.txt

When you install and uninstall both MSIs, you will end up with an installation where either File1 or File2 is left behind. Why?
It seems that MSI does not store the resources associated with each component in its internal database. Instead, Windows simply queries the MSI that is currently being uninstalled for all resources belonging to this component. Since it finds only one file and not two, it uninstalls only one file. That is the main reason why you can never remove resources from a component!

Rule 3 – Never remove a component from an update MSI

This is the same as accidentally changing the GUID of a component in your new update package: the resulting update package does not contain all components of the previously installed package.

Penalty: When you remove a component from a feature, MSI sets the feature state during the update to Advertised and logs a warning message in its log file (if you enabled MSI logging):

SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported!
MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported

Advertised means that MSI treats all components of this feature as not installed. As a consequence, nothing is removed during uninstall, since from MSI's point of view it is not installed! This is not only bad because uninstall no longer works; the feature will also not get the required patches. All other features which followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs where an update is only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which aborts the installation if you violate this rule.
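When checking a package against these rules, it can help to dump the Component table without opening Orca each time. Below is a rough C# sketch of such a dump, not from the article, using the WindowsInstaller COM automation interface from msi.dll; "setup.msi" is a placeholder path, and the exact dynamic/COM binding details are my assumption.

// Sketch: dump component name, GUID and key path from an MSI's Component table
// via the WindowsInstaller COM automation interface (msi.dll).
using System;

class DumpComponents
{
    static void Main()
    {
        var installerType = Type.GetTypeFromProgID("WindowsInstaller.Installer");
        dynamic installer = Activator.CreateInstance(installerType);

        dynamic database = installer.OpenDatabase("setup.msi", 0); // 0 = read-only
        dynamic view = database.OpenView(
            "SELECT `Component`, `ComponentId`, `KeyPath` FROM `Component`");
        view.Execute();

        for (dynamic record = view.Fetch(); record != null; record = view.Fetch())
        {
            // StringData[1] = component name, [2] = ComponentId GUID, [3] = key path
            Console.WriteLine("{0,-35} {1} {2}",
                record.StringData[1], record.StringData[2], record.StringData[3]);
        }
    }
}

Diffing two such dumps of an old and a new package makes a violation of Rule 3 (a ComponentId that disappeared from the update) easy to spot.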

    Read the article

  • ASP.NET MVC 2 Model Binding for a Collection

    - by nmarun
Yes, yet another post of mine on Model Binding (the previous one is here), but this one uses features introduced in MVC 2. How did I get to writing this post? Well, I'm on a project where we're doing some MVC work for a shopping cart. Let me show you what I was working with. Below are my model classes:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

public class Totals
{
    public decimal SubTotal { get; set; }
    public decimal Tax { get; set; }
    public decimal Total { get; set; }
}

public class Basket
{
    public List<Product> Products { get; set; }
    public Totals Totals { get; set; }
}

The view looks as below:

<h2>Shopping Cart</h2>

<% using(Html.BeginForm()) { %>

    <h3>Products</h3>
    <% for (int i = 0; i < Model.Products.Count; i++)
       { %>
        <div style="width: 100px;float:left;">Id</div>
        <div style="width: 100px;float:left;">
            <%= Html.TextBox("ID", Model.Products[i].Id) %>
        </div>
        <div style="clear:both;"></div>
        <div style="width: 100px;float:left;">Name</div>
        <div style="width: 100px;float:left;">
            <%= Html.TextBox("Name", Model.Products[i].Name) %>
        </div>
        <div style="clear:both;"></div>
        <div style="width: 100px;float:left;">Quantity</div>
        <div style="width: 100px;float:left;">
            <%= Html.TextBox("Quantity", Model.Products[i].Quantity)%>
        </div>
        <div style="clear:both;"></div>
        <div style="width: 100px;float:left;">Unit Price</div>
        <div style="width: 100px;float:left;">
            <%= Html.TextBox("UnitPrice", Model.Products[i].UnitPrice)%>
        </div>
        <div style="clear:both;"><hr /></div>
    <% } %>

    <h3>Totals</h3>
    <div style="width: 100px;float:left;">Sub Total</div>
    <div style="width: 100px;float:left;">
        <%= Html.TextBox("SubTotal", Model.Totals.SubTotal)%>
    </div>
    <div style="clear:both;"></div>
    <div style="width: 100px;float:left;">Tax</div>
    <div style="width: 100px;float:left;">
        <%= Html.TextBox("Tax", Model.Totals.Tax)%>
    </div>
    <div style="clear:both;"></div>
    <div style="width: 100px;float:left;">Total</div>
    <div style="width: 100px;float:left;">
        <%= Html.TextBox("Total", Model.Totals.Total)%>
    </div>
    <div style="clear:both;"></div>
    <p />
    <input type="submit" name="Submit" value="Submit" />
<% } %>
Nothing fancy, just a bunch of divs containing textboxes and a submit button. Just note that the textboxes have the same name as the property they display. Yea, yea, I know, I'm displaying unit price as a textbox instead of a label, but that's beside the point (and trust me, this is not how it will look on the production site!!). The way my controller works is that initially two dummy products are added to the basket object, and the Totals are calculated based on which products were added, in what quantities, and at what unit prices. So the page loads in edit mode, where the user can change the quantity and hit the submit button. In the 'post' version of the action method, the Totals get recalculated and the new total is displayed on the screen. Here's the code:

public ActionResult Index()
{
    Product product1 = new Product
    {
        Id = 1,
        Name = "Product 1",
        Quantity = 2,
        UnitPrice = 200m
    };

    Product product2 = new Product
    {
        Id = 2,
        Name = "Product 2",
        Quantity = 1,
        UnitPrice = 150m
    };

    List<Product> products = new List<Product> { product1, product2 };

    Basket basket = new Basket
    {
        Products = products,
        Totals = ComputeTotals(products)
    };
    return View(basket);
}

[HttpPost]
public ActionResult Index(Basket basket)
{
    basket.Totals = ComputeTotals(basket.Products);
    return View(basket);
}

That's that. Now I run the app and see two products with the totals section below them. I look at the view source and see that the input controls have the right id, the right name and the right value as well:

<input id="ID" name="ID" type="text" value="1" />
<input id="Name" name="Name" type="text" value="Product 1" />
...
<input id="ID" name="ID" type="text" value="2" />
<input id="Name" name="Name" type="text" value="Product 2" />

So, just as a regular user would do, I change the quantity value of one of the products and hit the submit button.
The 'post' version of the Index method gets called, and I had put a breakpoint on the ComputeTotals call in the snippet above. When I hovered my mouse over the 'basket' object, happily assuming it would be all bound and ready for use, I was surprised to see that both basket.Products and basket.Totals were null. Huh? A little research and I found out that the reason the DefaultModelBinder could not do its job is a naming mismatch on the input controls. What I mean is that when you have to bind to a custom .NET type, you need more than just the property name: you need to pass a qualified name to the name attribute of the input control. I modified my view, and the emitted code looked as below:

<input id="Product_Name" name="Product.Name" type="text" value="Product 1" />
...
<input id="Product_Name" name="Product.Name" type="text" value="Product 2" />
...
<input id="Totals_SubTotal" name="Totals.SubTotal" type="text" value="550" />

Now I update the quantity, hit the submit button, and see that the Totals object is populated, but the Products list is still null. Once again I went: 'Hmm.. time for more research'. I found out that the way to bind a collection is to provide the name as:

<%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>
<!-- this will be rendered as -->
<input id="Products_0__ID" name="Products[0].ID" type="text" value="1" />

It was only now that I was able to see both the products and the totals properly bound in the 'post' action method. Somehow, I feel this is a kinda 'clunky' way of doing things. It seems the people at MS felt similarly and offered us a much cleaner way to solve this issue. The simple solution is that instead of using a TextBox, we can use either a TextBoxFor or an EditorFor helper method. These directly spit out the name of the input property as 'Products[0].ID' and so on. Cool, right? I totally fell for this and changed my UI to use the EditorFor helper method. At this point I ran the application, changed the quantity field and pressed the submit button. Of course, my basket object parameter in the action method was correctly bound after these changes. I let the app complete the rest of the lines in the action method.
When the page finally rendered, I did see that the quantity was changed to what I entered before the post. But, wait a minute, the totals section did not reflect the changes and showed the old values. My status: COMPLETELY PUZZLED! Just to recap, this is what my 'post' Index method looked like:

[HttpPost]
public ActionResult Index(Basket basket)
{
    basket.Totals = ComputeTotals(basket.Products);
    return View(basket);
}

A careful debug confirmed that basket.Products[0].Quantity showed the updated value and that the ComputeTotals() method returned the correct totals. But still, when I passed this basket object to the view, it ended up showing only the old totals values. I began playing with the code a bit, and my first guess was that the input controls got their values from the ModelState object. For those who don't know, the ModelState is a temporary storage area that ASP.NET MVC uses to retain incoming attempted values plus binding and validation errors. There is also the fact that input controls populate their values using data taken from:
- previously attempted values recorded in ModelState["name"].Value.AttemptedValue
- an explicitly provided value (<%= Html.TextBox("name", "Some value") %>)
- ViewData, by calling ViewData.Eval("name")
FYI: the ViewData dictionary takes precedence over ViewData's Model properties – read more here. These two indicators led to my guess. It took me quite some time, but finally I hit this post where Brad brilliantly explains why this is the preferred behavior. My guess was right, and I accordingly modified my code as follows:

[HttpPost]
public ActionResult Index(Basket basket)
{
    // read the following posts to see why the ModelState
    // needs to be cleared before passing it to the view
    // http://forums.asp.net/t/1535846.aspx
    // http://forums.asp.net/p/1527149/3687407.aspx
    if (ModelState.IsValid)
    {
        ModelState.Clear();
    }

    basket.Totals = ComputeTotals(basket.Products);
    return View(basket);
}

What this does is clear the dictionary when your ModelState IS valid. This enables the values to be read directly from the model and not from the ModelState.
So the verdict is this: if you need to pass other parameters (like html attributes and the like) to your input control, use

<%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>

since with EditorFor there is no direct and simple way of passing this information to the input control. If you don't have to pass any such 'extra' information to the control, go the EditorFor way. The code used in the post can be found here.
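One more option, sketched here as my own note rather than something from the post: MVC 2's strongly typed TextBoxFor helper generates the same qualified names (name="Products[0].Id" and so on) from the lambda expression and, unlike EditorFor, has an overload accepting html attributes, so it can cover both cases. The "qty" CSS class below is just an illustrative placeholder.

<%-- sketch: strongly typed helpers emit qualified names like "Products[0].Id" --%>
<%= Html.TextBoxFor(m => m.Products[i].Id) %>
<%= Html.TextBoxFor(m => m.Products[i].Quantity, new { @class = "qty" }) %>

This assumes the view is strongly typed (ViewPage<Basket>), which it already is, since it references Model.Products.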

    Read the article

  • CodePlex Daily Summary for Wednesday, November 23, 2011

CodePlex Daily Summary for Wednesday, November 23, 2011. Popular Releases: Visual Leak Detector for Visual C++ 2008/2010: v2.2.1: Enhancements: * strdup and _wcsdup functions support added. * Preliminary support for VS 11 added. Bugs Fixed: * Low performance after upgrading from VLD v2.1. * Memory leaks with static linking fixed (disabled calloc support). * Runtime error R6002 fixed because of wrong memory dump format. * version.h fixed in installer. * Some PVS studio warnings fixed. NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.10: 3.6.0.10 22-Nov-2011 Update: Removed PreEmptive Platform integration (PreEmptive analytics) Removed all PreEmptive attributes Removed PreEmptive.dll assembly references from all projects Added first support to ADAM/AD LDS Thanks to PatBea. Work Item 9775: http://netsqlazman.codeplex.com/workitem/9775 Developer Team Article System Management: DTASM v1.3 SharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323) FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue # 447) The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1 SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 6: 1. improved support for schema 2. added find reference when right click on object list 3. added object rename support BugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework and is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...Anno 2070 Assistant: v0.1.0 (STABLE): Version 0.1.0 Features Production Chains Eco Production Chains (Complete) Tycoon Production Chains (Disabled - Incomplete) Tech Production Chains (Disabled - Incomplete) Supply (Disabled - Incomplete) Calculator (Disabled - Incomplete) Building Layouts Eco Building Layouts (Complete) Tycoon Building Layouts (Disabled - Incomplete) Tech Building Layouts (Disabled - Incomplete) Credits (Complete) Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloaded VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 30 (beta) New: Support for TortoiseSVN 1.7 added.
(the download contains both setups, for TortoiseSVN 1.6 and 1.7) New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows to group items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.ASP.net Awesome Samples (Web-Forms): 1.0 samples: Demos and Tutorials for ASP.net Awesome VS2008 are in .NET 3.5 VS2010 are in .NET 4.0 (demos for the ASP.net Awesome jQuery Ajax Controls)SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblyAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). 
Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...New Projects: ActiveWorlds World Server Admin PowerShell SnapIn: The purpose of this PowerShell SnapIn is to provide a set of tools to administer the world server from PowerShell. It leverages the ActiveWorlds SDK .NET Wrapper to provide this functionality.Aigu: Enter special characters like you would on your mobile phone. For instance, if you want to type 'é', you just hold down 'e' and a menu will appear. Select the desired character using the arrow keys and press 'enter'. Simple but powerful.Are you workaholic?: Are you a workaholic? Did your Doctor advise you not to stare at the computer monitor for a long time? Then this app is perfectly made for you. It runs in the background, and alerts you to take periodic rests for your eyes and body. What's more, It's open source (MS-PL).ATDIS PoC: privateAuto Version Web Assets: The AVWA project is an HTTP Module written in C# that is designed to allow for versioning of various web assets such as .CSS and .JS files. This allows you to publish new versions of these files without having to force the server or the client browsers to expire cache.Bachelor Thesis Algorithm Test Bed: Algorithm Test Bed for my Bachelor ThesisBase64: Simple application helps converting strings and files from or to Base64 string. You can use any encoding to convert while a sidebar previews decoded string for all other encodings.BoracayExpress: BoracayExpressC++ Framework for Test Driven Development: A testing framework for C++ written in C++.Class2Table: Class2Table aka Entity2Table. Easy tool that allows creation of SQL tables from .Net types.Code for Demos & Experiments: This is where I will post code from demos and presentationsCodeMaker: CodeMaker ConsoleCommand: ConsoleCommand provides certain .Net commands for access from javascript console engines. Included are commands to set the text and background colors, as well as list and extract resources compiled in a .Net dll. Converter: Character code conversion tools CryptoInator - self contained, self-encrypting, self-decrypting image viewer: Originally developed to encrypt and store NemID images in Denmark.
DAiBears: Something, something, botDelicious Notify Plugin: Lets you push a blog post straight to Delicious from Live WriterDeveloperFile: Compresses Javascripts using the YUI .NET project. Loops through the root folder and subfolders for files matching the debug extension and creates new files using the release extension. (File extensions must match exactly).DotNetNuke SharePoint File Explorer: A DotNetNuke SharePoint File ExplorerDouban FM: WP7 Douban FM appGame Lib: Game Library is a open-source game library to allow focusing on the fun part of a game. It is developed in C#, but will be ported to C++ and VB.net.Google reader notes to Delicious Export tool (WPF): Google reader discontinued note in reader features. Current google reader allows to export users old notes in JSON format, This App will parse the JSON file & upload it to it delicious , delicious is a good alternative for note in readerHtml Source Transmitter Control: This web control allows getting a source of a web page, that will displayed before submit. So, developer can store a view of the html page, that was before server exception. It helps to reproduce bugs and can be used with other logging systems.Ideopuzzle: A puzzle gameImageShack-Uploader: This project demonstrates how to upload files automated to imageshack.us and other image hosters with C#.Insert Acronym Tags: Lets you insert <acronym> and <abbr> tags into your blog entry more easily.Insert Quick Link: Allows you to paste a link into the Writer window and have the a window similar to the one in Writer where you can change what text is to appear, open in new window, etc.Insert Video Plugin: Allows you to insert a video into a blog entry from a multitude of different sitesIoCWrap: Provides a wrapper to the various IoC container implementations so that it is possible to switch to a different provider without changing any application code.kaveepoj: sharepoint projectKinect Quiz Engine: Fun quiz game for the Kinect.Klaverjas: Test application for testing different new technologies in .NET (WCF, DataServices, C# stuff, Entity...etc.)Man In The Middle: A cyberpunk themed action with puzzle and strategy elements. Made with XNA as part of a game development course at the IT University of Copenhagen by Bo Bendtsen, Jonas Flensbak, Daniel Kromand, Jess Rahbek & Darryl Woodford.MediaSelektor: Simple tool to select mediasMicajah Mindtouch Deki Wiki Copier: Small C# application to move data between 2 Deki Wiki installs or, more importantly, from a wik.is account to a locally installed systemMineFlagger: MineFlagger is a mine clearing game modeled after Microsoft’s Minesweeper. In addition to standard play, MineFlagger incorporates an AI for fun and training.myXbyqwrhjadsfasfhgf: myXbyqwrhjadsfasfhgfnatoop: natoopNauplius.KeyStore: Provides secure application key storage backed by SQL 2008 and Active Directory.ObjectDB: An object database written using C# 4 and Mono.Cecil.PaceR: PaceR is an attempt to encapsulate a lot of the common code functionality I use on different projects. Instead of recreating functionality from memory or worse, copying from older projects, I'd like to have a central location to maintain this common code. Parseq: Parseq is a Parser Combinator library written in C# (version 2.0).PowerShell Network Adapter Configuration module: PowerShell Network Adapter Configuration module is a PowerShell module which provides functions for managing network adapters using WMI.public traffic tracker: This is a university project for a .net course. 
We are developing a public traffic tracker application for Windows Phone 7 devices that can give information about the current position of the nearest vehicle on a given line. The specialty is that we use only the GPS information of the users' WP7 devices, so this is a purely software solution without any hardware investment. The disadvantage is that for real operation we would need a lot of active WP7 users.puyo: puyoRadioTroll: Web project for Radio TrollRead Feed Community: Read Feed CommunityReviewer: Reviewer.dk - a Danish games and review site.Rollout Sharepoint Solutions - ROSS: ROSS performs the following actions: - Delete sitecollection and restart services - 'Get Latest Version' from SourceSafe - Rebuild Solution - Install all wsp solutions - Create SiteCollections - Check for build and provisioning errors - Send email to developers if errors occurredSchool Management: school managementSQL File Executer: This project is a class library written in C# which is used for executing *.sql files on a remote server. Simply one dll file: you include it in your web project, add a using statement at the top of your page, and pass the parameters in. The rest it will do.Startup Manager: Startup Manager launches all startup programs at a managed rate, meaning that your computer doesn't crash every time it starts up and you can use it immediately.stetic: ...Test Infrastructure Guidance: The purpose of this project is to provide guidance to testers in using TFS effectively as an ALM solution. TFS is much more than a simple code repository. Used with Visual Studio it can form a powerful testing solution and remove a lot of pain in dealing with test infrastructure overhead.Tête-à-tête: Tete-a-tete is an address book with a built-in function to send electronic mail over the Internet.Tipeysh! - Add-in that helps you creating C/C++ header files on a single click: Do you also feel miserable when you need to create a new header file in your Visual Studio C/C++ project? Repeatedly choosing "new header file", then writing the annoying (but needed) "#ifndef" section, then writing the class name with its "private", "protected" and "public" access modifiers... too many clicks and too much typing! Well, there is a solution: Tipeysh! is a simple, easy to use, very handy and configurable Visual Studio Add-In, compatible with both the 2005 and 2008 versions. Once ...UMN Dashboard Project: academic projUsersMOSS: UsersMOSS is a small application for browsing the web sites (SPWeb), users (SPUser) and groups (SPGroup) on a MOSS server. It uses the MOSS object model to inspect the contents of a MOSS server's objects. This application is far from professional, or even finished, but it very often does me good service. Above all, do not use it on a production server, because GC handling is not implemented, which can cause crashes of...UtilityLibrary.Win32: UtilityLibrary.Win32UW iLearn: The iLearn activity inference platform is a suite of desktop and mobile tools for logging, modeling, and classifying sensor data for mobile devices.
It was created at the University of Washington.VsDocGen: Dynamic javascript documentation generation directly from xml comment documented source code.Windows Live Spaces Photo Album plugin: This is going to be a plugin for Windows Live Writer that will allow you to browse a Windows Live Space Photo Album.Windows Live Writer Plugin for Amazon Books using CueCat: This Windows Live Writer Plugin is for users who use WLW and wish to use their CueCat to scan books. ItemLookups are run against Amazon via its AWS and book image, title, author, and publisher is returned. This project was first created by Scott Hanselman on MSDN's Coding4Fun! X7: X7 makes it easier for win7user to clean the system. You'll no longer have to delete useless stuff in your win7. It's developed in bat.xDT - Commander: Using this application, the user can assign shortcuts (short texts) for various links/URLs. These short texts will be typed into a Textbox to then launch/go to the target (similar to the "Run" program in Windows).

    Read the article

  • What is hogging my connection?

    - by SF.
    At times it seems like dozens, if not hundreds, of root-owned HTTP connections spring up. This is not much of a problem on LAN or WLAN, as each of them seems to transfer very little, but if I use a GPRS link my ping times go into minutes (seriously, 80000 ms is not infrequent!) and all connections grind to a halt waiting till these end. This usually lasts some 15 minutes and ends about when I start troubleshooting it for real. I've managed to capture a fragment of NetHogs output:

    NetHogs version 0.8.0
    PID USER PROGRAM DEV SENT RECEIVED
    ? root 37.209.147.180:59854-141.101.114.59:80 0.013 0.000 KB/sec
    ? root 37.209.147.180:59853-141.101.114.59:80 0.000 0.000 KB/sec
    ? root 37.209.147.180:52804-173.194.70.95:80 0.000 0.000 KB/sec
    1954 bw /home/bw/.dropbox-dist/dropbox ppp0 0.000 0.000 KB/sec
    ? root 37.209.147.180:59851-141.101.114.59:80 0.000 0.000 KB/sec
    ? root 37.209.147.180:59850-141.101.114.59:80 0.000 0.000 KB/sec
    ? root 37.209.147.180:52801-173.194.70.95:80 0.000 0.000 KB/sec
    13301 bw /usr/lib/firefox/firefox ppp0 0.000 0.000 KB/sec
    ? root unknown TCP 0.000 0.000 KB/sec

    Unfortunately, it doesn't display the owning process of these. Does anyone recognize these addresses, or can anyone suggest how to troubleshoot this further or disable it? Is it some automatic update or something like that?

    EDIT: per request, netstat -n - for the obvious reason that normal netstat won't ever complete, as all DNS lookups are hogged just the same.

    netstat -n
    Active Internet connections (w/o servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    tcp 0 1 93.154.166.62:51314 198.252.206.16:80 FIN_WAIT1
    tcp 0 1 37.209.147.180:44098 198.252.206.16:80 FIN_WAIT1
    tcp 0 1 37.209.147.180:59855 141.101.114.59:80 FIN_WAIT1
    tcp 1 0 192.168.43.224:38237 213.189.45.39:443 CLOSE_WAIT
    tcp 1 0 93.154.146.186:35167 75.101.152.29:80 CLOSE_WAIT
    tcp 1 0 192.168.43.224:32939 199.15.160.100:80 CLOSE_WAIT
    tcp 1 0 192.168.43.224:55619 63.245.217.207:443 CLOSE_WAIT
    tcp 1 0 93.154.146.186:60210 75.101.152.29:443 CLOSE_WAIT
    tcp 1 0 192.168.43.224:32944 199.15.160.100:80 CLOSE_WAIT
    tcp 0 1 37.209.147.180:52804 173.194.70.95:80 FIN_WAIT1
    tcp 1 0 93.154.146.186:46606 23.21.151.181:80 CLOSE_WAIT
    tcp 1 0 93.154.146.186:52619 107.22.246.76:80 CLOSE_WAIT
    tcp 415 0 93.154.146.186:36156 82.112.106.104:80 CLOSE_WAIT
    tcp 1 0 93.154.146.186:50352 107.22.246.76:443 CLOSE_WAIT
    tcp 1 0 192.168.43.224:55000 213.189.45.44:443 CLOSE_WAIT
    tcp 0 1 37.209.147.180:59853 141.101.114.59:80 FIN_WAIT1
    tcp 1 0 192.168.43.224:32937 199.15.160.100:80 CLOSE_WAIT
    tcp 1 0 192.168.43.224:56055 93.184.221.40:80 CLOSE_WAIT
    tcp 415 0 93.154.146.186:36155 82.112.106.104:80 CLOSE_WAIT
    tcp 0 1 37.209.147.180:44097 198.252.206.16:80 FIN_WAIT1
    tcp 1 0 93.154.146.186:35166 75.101.152.29:80 CLOSE_WAIT
    tcp 1 0 192.168.43.224:32943 199.15.160.100:80 CLOSE_WAIT
    tcp 1 0 93.154.146.186:46607 23.21.151.181:80 CLOSE_WAIT
    tcp 1 0 93.154.146.186:36422 23.21.151.181:443 CLOSE_WAIT
    tcp 1 0 192.168.43.224:36081 93.184.220.148:80 CLOSE_WAIT
    tcp 1 0 192.168.43.224:44462 213.189.45.29:443 CLOSE_WAIT
    tcp 1 0 192.168.43.224:32938 199.15.160.100:80 CLOSE_WAIT
    tcp 1 0 93.154.146.186:36419 23.21.151.181:443 CLOSE_WAIT
    tcp 0 497 93.154.166.62:51313 198.252.206.16:80 FIN_WAIT1
    tcp 0 1 37.209.147.180:59851 141.101.114.59:80 FIN_WAIT1
    tcp 0 1 37.209.147.180:44095 198.252.206.16:80 FIN_WAIT1
    tcp 1 0 93.154.146.186:46611 23.21.151.181:80 CLOSE_WAIT
    tcp 1 0 192.168.43.224:38236 213.189.45.39:443 CLOSE_WAIT
    tcp 0 171 37.209.147.180:45341 173.194.113.146:443 ESTABLISHED
    tcp 0 1 37.209.147.180:52801 173.194.70.95:80 FIN_WAIT1
    tcp 1 0 192.168.43.224:36080 93.184.220.148:80 CLOSE_WAIT
    tcp 0 1 37.209.147.180:59856 141.101.114.59:80 FIN_WAIT1
    tcp 0 1 37.209.147.180:44096 198.252.206.16:80 FIN_WAIT1
    tcp 0 1 93.154.166.62:57471 108.160.162.49:80 FIN_WAIT1
    tcp 0 1 37.209.147.180:59854 141.101.114.59:80 FIN_WAIT1
    tcp 0 171 37.209.147.180:45340 173.194.113.146:443 ESTABLISHED
    tcp 0 168 37.209.147.180:45334 173.194.113.146:443 FIN_WAIT1
    tcp 1 0 93.154.146.186:46609 23.21.151.181:80 CLOSE_WAIT
    tcp 0 1248 93.154.166.62:58270 64.251.23.59:443 FIN_WAIT1
    tcp 0 1 37.209.147.180:59850 141.101.114.59:80 FIN_WAIT1
    tcp 1 0 93.154.146.186:35181 75.101.152.29:80 CLOSE_WAIT
    tcp 232 0 93.154.172.168:46384 198.252.206.25:80 ESTABLISHED
    tcp 1 0 93.154.146.186:52618 107.22.246.76:80 CLOSE_WAIT
    tcp 1 0 93.154.172.168:36298 173.194.69.95:443 CLOSE_WAIT
    tcp 1 0 93.154.146.186:60209 75.101.152.29:443 CLOSE_WAIT
    tcp 0 168 37.209.147.180:45335 173.194.113.146:443 FIN_WAIT1
    tcp 415 0 93.154.146.186:36157 82.112.106.104:80 CLOSE_WAIT
    tcp 1 0 192.168.43.224:36082 93.184.220.148:80 CLOSE_WAIT
    tcp 1 0 192.168.43.224:32942 199.15.160.100:80 CLOSE_WAIT
    tcp 1 0 93.154.146.186:50350 107.22.246.76:443 CLOSE_WAIT
    tcp 1 0 192.168.43.224:32941 199.15.160.100:80 CLOSE_WAIT
    tcp 0 534 37.209.147.180:44089 198.252.206.16:80 FIN_WAIT1
    tcp 1 0 93.154.146.186:46608 23.21.151.181:80 CLOSE_WAIT
    tcp 1 0 93.154.146.186:46612 23.21.151.181:80 CLOSE_WAIT
    udp 0 0 37.209.147.180:49057 193.41.112.14:53 ESTABLISHED
    udp 0 0 37.209.147.180:51631 193.41.112.18:53 ESTABLISHED
    udp 0 0 37.209.147.180:34827 193.41.112.18:53 ESTABLISHED
    udp 0 0 37.209.147.180:35908 193.41.112.14:53 ESTABLISHED
    udp 0 0 37.209.147.180:44106 193.41.112.14:53 ESTABLISHED
    udp 0 0 37.209.147.180:42184 193.41.112.14:53 ESTABLISHED
    udp 0 0 37.209.147.180:54485 193.41.112.14:53 ESTABLISHED
    udp 0 0 37.209.147.180:42216 193.41.112.18:53 ESTABLISHED
    udp 0 0 37.209.147.180:51961 193.41.112.14:53 ESTABLISHED
    udp 0 0 37.209.147.180:48412 193.41.112.14:53 ESTABLISHED

    The interesting lines from ping got lost, but the summary over the past few hours is:

    --- 8.8.8.8 ping statistics ---
    107459 packets transmitted, 104376 received, +22 duplicates, 2% packet loss, time 195427362ms
    rtt min/avg/max/mdev = 24.822/528.132/90538.257/2519.263 ms, pipe 90

    EDIT: Per request: Happened again; a reboot didn't help but cleaned up all "hanging" processes. Currently netstat shows:

    bw@pony:/var/log$ netstat -n -t
    Active Internet connections (w/o servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    tcp 0 0 93.154.188.68:42767 74.125.239.143:443 TIME_WAIT
    tcp 0 0 93.154.188.68:50270 173.194.69.189:443 ESTABLISHED
    tcp 0 0 93.154.188.68:45250 190.93.244.58:80 TIME_WAIT
    tcp 0 0 93.154.188.68:53488 173.194.32.198:80 ESTABLISHED
    tcp 0 0 93.154.188.68:53490 173.194.32.198:80 ESTABLISHED
    tcp 0 159 93.154.188.68:42741 74.125.239.143:443 LAST_ACK
    tcp 0 0 93.154.188.68:45808 198.252.206.25:80 ESTABLISHED
    tcp 0 0 93.154.188.68:52449 173.194.32.199:443 ESTABLISHED
    tcp 0 0 93.154.188.68:52600 173.194.32.199:443 TIME_WAIT
    tcp 0 0 93.154.188.68:50300 173.194.69.189:443 TIME_WAIT
    tcp 0 0 93.154.188.68:45253 190.93.244.58:80 TIME_WAIT
    tcp 0 0 93.154.188.68:46252 173.194.32.204:443 ESTABLISHED
    tcp 0 0 93.154.188.68:45246 190.93.244.58:80 ESTABLISHED
    tcp 0 0 93.154.188.68:47064 173.194.113.143:443 ESTABLISHED
    tcp 0 0 93.154.188.68:34484 173.194.69.95:443 ESTABLISHED
    tcp 0 0 93.154.188.68:45252 190.93.244.58:80 TIME_WAIT
    tcp 0 0 93.154.188.68:54290 173.194.32.202:443 ESTABLISHED
    tcp 0 0 93.154.188.68:47063 173.194.113.143:443 ESTABLISHED
    tcp 0 0 93.154.188.68:53469 173.194.32.198:80 TIME_WAIT
    tcp 0 0 93.154.188.68:45242 190.93.244.58:80 TIME_WAIT
    tcp 0 0 93.154.188.68:53468 173.194.32.198:80 ESTABLISHED
    tcp 0 0 93.154.188.68:50299 173.194.69.189:443 TIME_WAIT
    tcp 0 0 93.154.188.68:42764 74.125.239.143:443 TIME_WAIT
    tcp 0 0 93.154.188.68:45256 190.93.244.58:80 TIME_WAIT
    tcp 0 0 93.154.188.68:58047 108.160.162.105:80 ESTABLISHED
    tcp 0 0 93.154.188.68:45249 190.93.244.58:80 TIME_WAIT
    tcp 0 0 93.154.188.68:50297 173.194.69.189:443 TIME_WAIT
    tcp 0 0 93.154.188.68:53470 173.194.32.198:80 ESTABLISHED
    tcp 0 0 93.154.188.68:34100 68.232.35.121:443 ESTABLISHED
    tcp 0 0 93.154.188.68:42758 74.125.239.143:443 ESTABLISHED
    tcp 0 0 93.154.188.68:42765 74.125.239.143:443 TIME_WAIT
    tcp 0 0 93.154.188.68:39000 173.194.69.95:80 TIME_WAIT
    tcp 0 0 93.154.188.68:50296 173.194.69.189:443 TIME_WAIT
    tcp 0 0 93.154.188.68:53467 173.194.32.198:80 ESTABLISHED
    tcp 0 0 93.154.188.68:42766 74.125.239.143:443 TIME_WAIT
    tcp 0 0 93.154.188.68:45251 190.93.244.58:80 TIME_WAIT
    tcp 0 0 93.154.188.68:45248 190.93.244.58:80 TIME_WAIT
    tcp 0 0 93.154.188.68:45247 190.93.244.58:80 ESTABLISHED
    tcp 0 159 93.154.188.68:50254 173.194.69.189:443 LAST_ACK
    tcp 0 0 93.154.188.68:34483 173.194.69.95:443 ESTABLISHED

    Output of ps:

    USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
    root 1 0.8 0.0 3628 2092 ? Ss 16:52 0:03 /sbin/init
    root 2 0.0 0.0 0 0 ? S 16:52 0:00 [kthreadd]
    root 3 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/0]
    root 4 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/0:0]
    root 6 0.0 0.0 0 0 ? S 16:52 0:00 [migration/0]
    root 7 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/0]
    root 8 0.0 0.0 0 0 ? S 16:52 0:00 [migration/1]
    root 10 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/1]
    root 11 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/1]
    root 12 0.0 0.0 0 0 ? S 16:52 0:00 [migration/2]
    root 14 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/2]
    root 15 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/2]
    root 16 0.0 0.0 0 0 ? S 16:52 0:00 [migration/3]
    root 17 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/3:0]
    root 18 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/3]
    root 19 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/3]
    root 20 0.0 0.0 0 0 ? S< 16:52 0:00 [cpuset]
    root 21 0.0 0.0 0 0 ? S< 16:52 0:00 [khelper]
    root 22 0.0 0.0 0 0 ? S 16:52 0:00 [kdevtmpfs]
    root 23 0.0 0.0 0 0 ? S< 16:52 0:00 [netns]
    root 24 0.0 0.0 0 0 ? S 16:52 0:00 [sync_supers]
    root 25 0.0 0.0 0 0 ? S 16:52 0:00 [bdi-default]
    root 26 0.0 0.0 0 0 ? S< 16:52 0:00 [kintegrityd]
    root 27 0.0 0.0 0 0 ? S< 16:52 0:00 [kblockd]
    root 28 0.0 0.0 0 0 ? S< 16:52 0:00 [ata_sff]
    root 29 0.0 0.0 0 0 ? S 16:52 0:00 [khubd]
    root 30 0.0 0.0 0 0 ? S< 16:52 0:00 [md]
    root 42 0.0 0.0 0 0 ? S 16:52 0:00 [khungtaskd]
    root 43 0.0 0.0 0 0 ? S 16:52 0:00 [kswapd0]
    root 44 0.0 0.0 0 0 ? SN 16:52 0:00 [ksmd]
    root 45 0.0 0.0 0 0 ? SN 16:52 0:00 [khugepaged]
    root 46 0.0 0.0 0 0 ? S 16:52 0:00 [fsnotify_mark]
    root 47 0.0 0.0 0 0 ? S 16:52 0:00 [ecryptfs-kthrea]
    root 48 0.0 0.0 0 0 ? S< 16:52 0:00 [crypto]
    root 59 0.0 0.0 0 0 ? S< 16:52 0:00 [kthrotld]
    root 70 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/2:1]
    root 71 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_0]
    root 72 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_1]
    root 73 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_2]
    root 74 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_3]
    root 75 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/u:2]
    root 76 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/u:3]
    root 79 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/1:1]
    root 99 0.0 0.0 0 0 ? S< 16:52 0:00 [deferwq]
    root 100 0.0 0.0 0 0 ? S< 16:52 0:00 [charger_manager]
    root 101 0.0 0.0 0 0 ? S< 16:52 0:00 [devfreq_wq]
    root 102 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/2:2]
    root 106 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_4]
    root 107 0.0 0.0 0 0 ? S 16:52 0:00 [usb-storage]
    root 108 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_5]
    root 109 0.0 0.0 0 0 ? S 16:52 0:00 [usb-storage]
    root 271 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/1:2]
    root 316 0.0 0.0 0 0 ? S 16:52 0:00 [jbd2/sda1-8]
    root 317 0.0 0.0 0 0 ? S< 16:52 0:00 [ext4-dio-unwrit]
    root 440 0.1 0.0 2820 608 ? S 16:52 0:00 upstart-udev-bridge --daemon
    root 478 0.0 0.0 3460 1648 ? Ss 16:52 0:00 /sbin/udevd --daemon
    root 632 0.0 0.0 3348 1336 ? S 16:52 0:00 /sbin/udevd --daemon
    root 633 0.0 0.0 3348 1204 ? S 16:52 0:00 /sbin/udevd --daemon
    root 782 0.0 0.0 2816 596 ? S 16:52 0:00 upstart-socket-bridge --daemon
    root 822 0.0 0.0 6684 2400 ? Ss 16:52 0:00 /usr/sbin/sshd -D
    102 834 0.2 0.0 4064 1864 ? Ss 16:52 0:01 dbus-daemon --system --fork
    root 857 0.0 0.1 7420 3380 ? Ss 16:52 0:00 /usr/sbin/modem-manager
    root 858 0.0 0.0 4784 1636 ? Ss 16:52 0:00 /usr/sbin/bluetoothd
    syslog 860 0.0 0.0 31068 1496 ? Sl 16:52 0:00 rsyslogd -c5
    root 869 0.1 0.1 24280 5564 ? Ssl 16:52 0:00 NetworkManager
    avahi 883 0.0 0.0 3448 1488 ? S 16:52 0:00 avahi-daemon: running [pony.local]
    avahi 884 0.0 0.0 3448 436 ? S 16:52 0:00 avahi-daemon: chroot helper
    root 885 0.0 0.0 0 0 ? S< 16:52 0:00 [kpsmoused]
    root 892 0.0 0.1 25696 4140 ? Sl 16:52 0:00 /usr/lib/policykit-1/polkitd --no-debug
    root 923 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_6]
    root 959 0.0 0.0 0 0 ? S< 16:52 0:00 [krfcommd]
    root 970 0.0 0.1 7536 3120 ? Ss 16:52 0:00 /usr/sbin/cupsd -F
    colord 976 0.1 0.3 55080 10396 ? Sl 16:52 0:00 /usr/lib/i386-linux-gnu/colord/colord
    root 979 0.0 0.0 4632 872 tty4 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty4
    root 987 0.0 0.0 4632 884 tty5 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty5
    root 994 0.0 0.0 4632 884 tty2 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty2
    root 995 0.0 0.0 4632 868 tty3 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty3
    root 998 0.0 0.0 4632 876 tty6 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty6
    root 1022 0.0 0.0 2176 680 ? Ss 16:52 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
    root 1029 0.0 0.0 3632 664 ? Ss 16:52 0:00 /usr/sbin/irqbalance
    daemon 1030 0.0 0.0 2476 120 ? Ss 16:52 0:00 atd
    root 1031 0.0 0.0 2620 880 ? Ss 16:52 0:00 cron
    root 1061 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/3:2]
    root 1064 0.0 1.0 34116 31072 ? SLsl 16:52 0:00 lightdm
    root 1076 13.4 1.2 118688 37920 tty7 Ssl+ 16:52 0:55 /usr/bin/X :0 -core -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswit
    root 1085 0.0 0.0 0 0 ? S 16:52 0:00 [rts_pstor]
    root 1087 0.0 0.0 0 0 ? S 16:52 0:00 [rtsx-polling]
    root 1095 0.0 0.0 0 0 ? S< 16:52 0:00 [cfg80211]
    root 1127 0.0 0.0 0 0 ? S 16:52 0:00 [flush-8:0]
    root 1130 0.0 0.0 6136 1824 ? Ss 16:52 0:00 /sbin/wpa_supplicant -B -P /run/sendsigs.omit.d/wpasupplicant.pid -u -s -O /va
    root 1137 0.0 0.1 24604 3164 ? Sl 16:52 0:00 /usr/lib/accountsservice/accounts-daemon
    root 1140 0.0 0.0 0 0 ? S< 16:52 0:00 [hd-audio0]
    root 1188 0.0 0.1 34308 3420 ? Sl 16:52 0:00 /usr/sbin/console-kit-daemon --no-daemon
    root 1425 0.0 0.0 4632 872 tty1 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty1
    root 1443 0.1 0.1 29460 4664 ? Sl 16:52 0:00 /usr/lib/upower/upowerd
    root 1579 0.0 0.1 16540 3272 ? Sl 16:53 0:00 lightdm --session-child 12 19
    bw 1623 0.0 0.0 2232 644 ? Ss 16:53 0:00 /bin/sh /usr/bin/startkde
    bw 1672 0.0 0.0 4092 204 ? Ss 16:53 0:00 /usr/bin/ssh-agent /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/
    bw 1673 0.0 0.0 5492 384 ? Ss 16:53 0:00 /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/.gnupg/gpg-agent-in
    bw 1676 0.0 0.0 3848 792 ? S 16:53 0:00 /usr/bin/dbus-launch --exit-with-session /usr/bin/startkde
    bw 1677 0.5 0.0 5384 2180 ? Ss 16:53 0:02 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
    root 1704 0.3 0.1 25348 3600 ? Sl 16:53 0:01 /usr/lib/udisks/udisks-daemon
    root 1705 0.0 0.0 6620 728 ? S 16:53 0:00 udisks-daemon: not polling any devices
    bw 1736 0.0 0.0 2008 64 ? S 16:53 0:00 /usr/lib/kde4/libexec/start_kdeinit +kcminit_startup
    bw 1737 0.0 0.5 115200 15588 ? Ss 16:53 0:00 kdeinit4: kdeinit4 Running...
    bw 1738 0.1 0.2 116756 8728 ? S 16:53 0:00 kdeinit4: klauncher [kdeinit] --fd=9
    bw 1740 0.6 1.0 340524 31264 ? Sl 16:53 0:02 kdeinit4: kded4 [kdeinit]
    bw 1742 0.0 0.0 8944 2144 ? S 16:53 0:00 /usr/lib/i386-linux-gnu/gconf/gconfd-2
    bw 1746 0.2 0.4 92028 14688 ? S 16:53 0:00 /usr/bin/kglobalaccel
    bw 1748 0.0 0.4 90804 13500 ? S 16:53 0:00 /usr/bin/kwalletd
    bw 1752 0.1 0.5 103764 15152 ? S 16:53 0:00 /usr/bin/kactivitymanagerd
    bw 1758 0.0 0.0 2144 280 ? S 16:53 0:00 kwrapper4 ksmserver
    bw 1759 0.1 0.5 150016 16088 ? Sl 16:53 0:00 kdeinit4: ksmserver [kdeinit]
    bw 1763 2.2 1.0 178492 32100 ? Sl 16:53 0:08 kwin
    bw 1772 0.2 0.5 106292 16340 ? Sl 16:53 0:00 /usr/bin/knotify4
    bw 1777 0.9 1.1 246120 32912 ? Sl 16:53 0:03 /usr/bin/krunner
    bw 1778 6.3 2.7 389884 80216 ? Sl 16:53 0:23 /usr/bin/plasma-desktop
    bw 1785 0.0 0.0 2844 1208 ? S 16:53 0:00 ksysguardd
    bw 1789 0.1 0.4 82036 14176 ? S 16:53 0:00 /usr/bin/kuiserver
    bw 1805 0.3 0.1 61560 5612 ? Sl 16:53 0:01 /usr/bin/akonadi_control
    root 1806 0.0 0.0 0 0 ? S 16:53 0:00 [kworker/0:2]
    bw 1808 0.1 0.2 211852 8460 ? Sl 16:53 0:00 akonadiserver
    bw 1810 0.4 0.8 244116 25360 ? Sl 16:53 0:01 /usr/sbin/mysqld --defaults-file=/home/bw/.local/share/akonadi/mysql.conf --da
    bw 1874 0.0 0.0 35284 2956 ? Sl 16:53 0:00 /usr/bin/xsettings-kde
    bw 1876 0.0 0.3 68776 9488 ? Sl 16:53 0:00 /usr/bin/nepomukserver
    bw 1884 0.4 0.9 173876 29240 ? SNl 16:53 0:01 /usr/bin/nepomukservicestub nepomukstorage
    bw 1902 6.1 2.1 451512 63924 ? Sl 16:53 0:21 /home/bw/.dropbox-dist/dropbox
    bw 1906 3.8 1.0 142368 32376 ? Rl 16:53 0:13 /usr/bin/yakuake
    bw 1933 0.0 0.1 54636 4680 ? Sl 16:53 0:00 /usr/bin/zeitgeist-datahub
    bw 1943 0.5 1.5 164836 46836 ? Sl 16:53 0:01 python /usr/bin/printer-applet
    bw 1945 0.1 0.1 99636 5048 ? S<l 16:53 0:00 /usr/bin/pulseaudio --start --log-target=syslog
    rtkit 1947 0.0 0.0 21336 1248 ? SNl 16:53 0:00 /usr/lib/rtkit/rtkit-daemon
    bw 1958 0.0 0.1 44204 3792 ? Sl 16:53 0:00 /usr/bin/zeitgeist-daemon
    bw 1972 0.0 0.0 27008 2684 ? Sl 16:53 0:00 /usr/lib/gvfs/gvfsd
    bw 1974 0.1 0.5 90480 16660 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res
    bw 1984 0.1 0.5 90472 16636 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res
    bw 1985 0.3 0.9 148800 28304 ? S 16:53 0:01 /usr/bin/akonadi_archivemail_agent --identifier akonadi_archivemail_agent
    bw 1992 0.1 0.5 90020 16148 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res
    bw 1993 0.1 0.5 90132 16452 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res
    bw 1994 0.1 0.5 90564 16332 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_0
    bw 1995 0.1 0.5 90676 16732 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_1
    bw 1996 0.1 0.5 90468 16800 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_maildir_resource akonadi_maildir_resou
    bw 1999 0.2 0.6 99324 19276 ? S 16:53 0:00 /usr/bin/akonadi_maildispatcher_agent --identifier akonadi_maildispatcher_agen
    bw 2006 0.3 0.9 148808 28332 ? S 16:53 0:01 /usr/bin/akonadi_mailfilter_agent --identifier akonadi_mailfilter_agent
    bw 2017 0.0 0.1 50256 4716 ? Sl 16:53 0:00 /usr/lib/zeitgeist/zeitgeist-fts
    bw 2024 0.2 0.6 103632 18376 ? Sl 16:53 0:00 /usr/bin/akonadi_nepomuk_feeder --identifier akonadi_nepomuk_feeder
    bw 2043 0.0 0.0 4484 280 ? S 16:53 0:00 /bin/cat
    bw 2101 0.2 0.7 113600 22396 ? Sl 16:53 0:00 /usr/lib/kde4/libexec/polkit-kde-authentication-agent-1
    bw 2105 0.2 0.7 114196 22072 ? Sl 16:53 0:00 /usr/bin/nepomukcontroller
    bw 2156 0.3 1.0 333188 31244 ? Sl 16:54 0:01 /usr/bin/kmix
    bw 2167 0.0 0.0 6548 2724 pts/2 Ss 16:54 0:00 /bin/bash
    bw 2177 0.2 0.7 113496 22960 ? Sl 16:54 0:00 /usr/bin/klipper
    bw 2394 3.5 1.2 52932 35596 ? SNl 16:54 0:11 /usr/bin/virtuoso-t +foreground +configfile /tmp/virtuoso_hX1884.ini +wait
    root 2460 0.0 0.0 6184 1876 pts/2 S 16:54 0:00 sudo -s
    root 2500 0.0 0.0 6528 2700 pts/2 S 16:54 0:00 /bin/bash
    root 2599 0.0 0.0 5444 1280 pts/2 S+ 16:54 0:00 /bin/bash bin/aero
    root 2606 0.1 0.0 9836 2500 pts/2 S+ 16:54 0:00 wvdial aero2
    root 2619 0.0 0.0 3504 1280 pts/2 S 16:54 0:00 /usr/sbin/pppd 57600 modem crtscts defaultroute usehostname -detach user aero
    bw 2653 0.0 0.0 6600 2880 pts/3 Ss 16:54 0:00 /bin/bash
    bw 2676 0.4 0.8 130296 24016 ? SNl 16:54 0:01 /usr/bin/nepomukservicestub nepomukfilewatch
    bw 2679 0.1 0.7 101636 22252 ? SNl 16:54 0:00 /usr/bin/nepomukservicestub nepomukqueryservice
    bw 2681 0.2 0.8 109836 24280 ? SNl 16:54 0:00 /usr/bin/nepomukservicestub nepomukbackupsync
    bw 3833 46.0 9.7 829272 288012 ? Rl 16:55 1:46 /usr/lib/firefox/firefox
    bw 3903 0.0 0.0 35128 2804 ? Sl 16:55 0:00 /usr/lib/at-spi2-core/at-spi-bus-launcher
    bw 4708 0.1 0.0 6564 2736 pts/4 Ss 16:56 0:00 /bin/bash
    root 5210 0.0 0.0 0 0 ? S 16:57 0:00 [kworker/u:0]
    root 6140 0.2 0.0 0 0 ? S 16:58 0:00 [kworker/0:1]
    root 6371 0.5 0.0 6184 1868 pts/4 S+ 16:59 0:00 sudo nethogs ppp0
    root 6411 17.7 0.2 8616 6144 pts/4 S+ 16:59 0:05 nethogs ppp0
    bw 6787 0.0 0.0 5464 1220 pts/3 R+ 16:59 0:00 ps auxw
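
    One way to pin down the owners that NetHogs shows as "?" is to read the socket tables and /proc directly. Below is a minimal sketch using the third-party psutil package (an assumption - it is not mentioned anywhere above); run it as root, since other users' sockets otherwise report no PID:

        # sketch: map every TCP connection to the process that owns it (needs root)
        import psutil

        for conn in psutil.net_connections(kind="tcp"):
            if not conn.raddr:  # skip listening sockets with no remote end
                continue
            try:
                cmd = " ".join(psutil.Process(conn.pid).cmdline()) if conn.pid else "?"
            except psutil.NoSuchProcess:
                cmd = "(exited)"
            print(f"{conn.laddr.ip}:{conn.laddr.port} -> {conn.raddr.ip}:{conn.raddr.port} "
                  f"{conn.status} pid={conn.pid} {cmd}")

    Sockets that linger in FIN_WAIT1 with no PID, as above, usually belong to processes that have already exited; the kernel is simply waiting out the retransmission timer on a dead link.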

    Read the article

  • CodePlex Daily Summary for Saturday, October 29, 2011

    CodePlex Daily Summary for Saturday, October 29, 2011

    Popular Releases

    patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following. Common extensions: TypeConfigurationElement<T> - a polymorphic configuration element without having to be part of a PolymorphicConfigurationElementCollection; AnonymousConfigurationElement - a configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensions: MySql Provider - ...
    Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC-based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetworkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...
    Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.
    Folder Bookmarks: Folder Bookmarks 2.2.0.1: In this version: Custom icons - now you can change the icons of the bookmarks. By default, whenever an image is added, the icon is automatically changed to a thumbnail of the picture. This can be turned off in the settings (Options... > Settings). Ability to remove items from the 'Recent' category. Bugfixes - the 'Choose' button in 'Edit Bookmark' now works; another fix for a problem in the 'Edit Bookmark' window.
    Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here.) Movies - fixed: fanart and poster scraping issues. TV shows - (re)added: rebuild single show; fixed: issue when shows are moved from original location; ability to handle " for actor nicknames; crash when episode name contains "<" (does not scrape yet); clears fanart when switch...
    patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5; dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit; converting reflection to use the new TypeInfo for reflection; projects updated to work with the Microsoft Visual Studio 2011 Preview. Notes/known issues: the Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...
    Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.
    Image Converter: Image Converter 0.3: New features: English and German support. Technical improvements: Microsoft All Rules using Code Analysis. Planned features for a future release: 1. unit testing, 2. command line interface, 3. automatic updates.
    AcDown Anime & Comic Downloader: AcDown v3.6: Runs on 32- and 64-bit Windows XP/Vista/7 and requires .NET Framework 2.0.
    DotNetNuke® Events: 05.02.01: This release fixes any known bugs from any previous version. Events 05.02.01 will work with any DNN version 5.5.0 and up. Full details on the changes can be found at http://dnnevents.codeplex.com/workitem/list/basic Please review and rate this release... (stars are welcome). Bug fixes: added validation around the category cookie; the RSS feed was missing an explicit close of the file when writing - fixed; added extra security into detail view; .ICS files did not include correct line folding - fixed; Cha...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.33: Add JSParser.ParseExpression method to parse JavaScript expressions rather than source-elements. Add -strict switch (CodeSettings.StrictMode) to force input code to ECMA5 Strict-mode (extra error-checking, "use strict" at top). Fixed bug when the MinifyCode setting was set to false but RemoveUnneededCode was left at its default value of true.
    Path Copy Copy: 8.0: New version that mostly adds lots of requested features: 11340 11339 11338 11337. This version also features a more elaborate Settings UI that has several tabs. I tried to add some notes to better explain the use and purpose of the various options. The Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options if you're a network admin. Stay tuned.
    MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: the new Client Blocks feature of Views; a new "move" js method for the TreeViews; the NewHtmlCreated js event to the DataGrid; improved the ChoiceList structure, which now also allows the selection list of a dropdown to be chosen with a lambda expression; improved the AcceptViewHintAttribute controller filter - now a client can specify not only the name of a View or Partial View it prefers, but also to receive just the rough data in Json format. Fixed: issue with partial trust Cl...
    Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release notes: created for Halloween; you will find the theme file, a custom CSS file and images. Created by Al Roome @AlstarRoome. Features: custom styling for web parts, custom background. Screenshot: https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.png
    DevForce Application Framework: DevForce AF 2.0.3 RTW: Prerequisites: WPF 4.0, Silverlight 4.0, DevForce 2010 6.1.3.1. Download contents: debug and release assemblies, API documentation, source code, License.txt, Requirements.txt. Release highlights: new EventAggregator event forwarding; new EntityManagerInterceptor<T> to intercept EntityManager events; new IHarnessAware to allow for ViewModel setup when executed inside of the Development Harness; improved design-time stability; support for add-in development; new CoroutineFns.To...
    NicAudio: NicAudio 2.0.5: Minor change to accept special DTS stereo modes (LtRt, AB,...)
    NDepend TFS 2010 integration: version 0.5.0 beta 1: Only the activity and the VS plugin are available right now. They basically work. Data types that are logged into TFS reports are subject to change. This is no big deal since data is not yet sent into the warehouse.
    Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 - September 2011. Upgraded the tools to support the Windows Phone Developer Tools RTW. Updated SQL Azure-only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package). Changed the Shared Access Signature service interface to support more operations. Refactored the Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...
    DotNetNuke® FAQ: 05.00.00: FAQ (Frequently Asked Questions) 05.00.00 will work with any DNN version 5.6.1 and up. It is the first version which is rewritten in C#. The scope of this update is to fix all known issues and improve the user interface. Please review and rate this release... (stars are welcome). Bug fixes: Manage Categories button text was not localized; Edit/Add FAQ Entry button text was not localized. Enhancements: added an option to select the control for category display: listbox with checkboxes (flat category ...
    SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links. New icon.

    New Projects

    Asynk: Asynk is a framework/application that allows existing applications to easily be extended with an offloaded asynchronous worker layer. Asynk is developed using C#.
    Blob Tower Defense: 3D tower defense game for Windows Phone 7. School project for Brno University of Technology, computer graphics class.
    Booz: Booz is... an extended version of the boo shell (booish2, to be precise). Offers additional commands like cd, md, ls etc. I hope this shell can be used to take the position of/surpass the native Windows shell in the near future.
    CIMS: a sanction information system for science and technology of HUST
    CrystalDot - Icon Collection / Pack (LGPL): .NET/Mono-friendly variant of the Crystal icons by Everaldo. Icon collection / pack for .NET and Mono designed by Everaldo - KDE style http://www.everaldo.com/crystal/
    dotetes: dotetes is a crossword puzzle tool developed in C#
    Emoe': This project is a Windows Phone 7.1 application.
    Equation Inversion: Visual Studio 2008 add-in for equation inversion.
    Exploring VMR Features on WEC7: This sample application helps you alpha-blend a bitmap onto the camera stream in Windows Embedded Compact 7 using the DirectShow Video Renderer (VMR). It is a VS2008-based smart device project developed in C++. The sample application is explained at the following blog link: http://www.e-consystems.com/blog/windowsce/?p=759
    EzValidation: Custom validation extensions for ASP.NET MVC 3. Includes server- and client-side model-based validation attributes for: Equal To, Not Equal To, Greater Than, Greater Than or Equal To, Less Than, Less Than or Equal To. Supports validating against: another model field, a specific value, current date/yesterday/tomorrow (for dates and strings). Download & install via NuGet "package-install ezvalidation"
    Flu.net: Flu.net is a tool that helps you create your own fluent syntax for .NET Framework applications in a declarative fashion. It is aimed at infrastructure and other open-source project use.
    For Chess Endgames: King vs. King Opposition Calculator: You must input the locations of 2 kings on a chessboard, and whose turn it is to move. The calculator will display which king has the opposition, and how it can be used or maintained.
    GameTrakXNA: This project aims to create a simple library to use the unique GameTrak controller within XNA and Flash.
    Google Speech Recognition Example: Google Speech Recognition contains a working example of an application that uses the Google speech recognition API. The app contains all necessary DLLs to record, decode and send your voice request to the Google service and receive a text representation of what you've said. It's developed in C#.
    Interval Mandelbrot Explorer: Explore the Mandelbrot set using interval arithmetic.
    ISD training tasks: ISD training examples and tasks
    iTunesControlBar: The iTunesControlBar helps users control their iTunes application while it is minimized. iTunesControlBar resides at the top of the screen, invisible when not used, and allows playback and volume control, library searches and media information without the need to bring up iTunes.
    iTurtle: A bunch of PowerShell scripts to automate server management in an AD environment.
    M26WC - Mono 2.6 Wizard Control: A wizard which runs under Mono 2.6. A fork of http://aerowizard.codeplex.com/
    Microsoft Help Viewer 2: Help Viewer 2 is the help runtime for both Visual Studio 11 help and Windows 8 help. The code in this project will help you use and understand the HV2 runtime API.
    MONTRASEC: Monitoring Trafficking in human beings and Sexual Exploitation of Children: benchmarking for member state and EU reporting, turning the SIAMSECT templates into a user-friendly interface and reporting tool.
    MTF.NET Runtime: Managed Task Framework .NET Runtime. The MTF.NET runtime software and resulting assemblies are required to run applications built using the Managed Task Framework.NET Professional (Visual Studio 2010 extension) software design editor. The MTF.NET team are committed to continuously improving the core MTF.NET runtime and ensuring it is always available free and fully transparent.
    Pandoras Box: A greenfield inversion of control project utilising the power and flexibility of expressions and preferring convention over configuration.
    Pass the Puzzle: Pass the Puzzle is a frantic word-guessing party game. The game displays a few letters, and the players must come up with words containing those letters. But beware: if the timer goes off, you lose! It is based on the folk party game Pass the Parcel and is written in C#.
    PerCiGal: Percigal is a project for the development of applications for managing your personal media library. It consists of: a Windows application to use at home to catalog movies, TV series, cast and books, with the support of the Internet for information retrieval; a web interface for viewing and cataloging your media everywhere; an application for smartphones.
    Project Flying Carpet: This game is a project for the Game Project: Game Engines course in the Digital Games program at Unisinos.
    proxy browser: sed leo Latin's Butterfly....
    Python Multiple Dispatch: Multiple dispatch (AKA multimethods) for Python 3 via a metaclass and type annotations.
    reDune: A real-time strategy game based on Dune2000 by Westwood and Electronic Arts.
    Rereadable: Keep pages from the internet to read later.
    ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. Now you don't have to change startup types or stop them one at a time. It has a simple list-based interface with the ability to save and load lists of user services to stop. Written in C#.
    SharePoint 2010 Audience Membership Workflow Activity (Full Trust): A simple SharePoint 2010 workflow activity / workflow condition to check whether the user initiating the workflow is a member of a specified audience. Farm-level .wsp solution, written in C#. Once installed, the workflow activity can be used in SharePoint Designer 2010 declarative workflows.
    SQL Server® to Firebird DB converter: Converts a Microsoft SQL Server® database into a Firebird database, including the entire structure and data
    stegitest: test project
    System.Threading.Joins: The Joins project provides asynchronous concurrency semantics based on join calculus and modeled after the Microsoft Research Cω (C Omega) project.
    TestAndroidGame: try dev a TestAndroidGame
    tetribricks: block game
    Topographic Explorer: A project to import, convert, explore, manipulate, and save topographical maps. Looking to use C# and WPF.
    Trading: Under construction!!!
    Trombone: Trombone makes it easier for Windows Mobile Professional users to automate status replies through SMS. It's developed in Visual C# 2008.
    Tulsa SharePoint Interest Group: Repository for source code for the Tulsa SharePoint Interest Group's web site. The Tulsa SharePoint Interest Group is using the Community Kit for SharePoint. This project will house any modifications that are specific to our user group.
    World of Tanks RU tiny stats collection utility: Tiny utility to load player stats for the World of Tanks RU server. Results saved to a comma-separated file.
    WS-Discovery Proxy: Attempt at creating a general-purpose WS-Discovery proxy.
    Yamaha Tuấn Trực: This application is used to manage information for Yamaha Tuấn Trực

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices - STAGE 4: AUTOMATED DEPLOYMENT

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment - constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages - really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two - basic and advanced continuous integration - making four stages in total). If you've managed to work through the first three of these stages - source control, basic, then advanced CI - then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server.

    But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But - how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology - there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users". There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor - specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and, hey presto, I'm ready? If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.

    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we'll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It's a cliché in the DevOps community that "it's not all about processes and tools, really it's all about culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group". Like most clichés, there's truth in there - if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge - for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing; that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book - a great read, and a very practical guide to how it can really work at a large org.

    I've spoken to many customers who have implemented database CI who describe their deployment process as "the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role - that's okay, isn't it?" isn't likely to go down well. There's some work here to bring him/her onside - to explain what you're doing, why there will still be control of the deployment process, and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation - someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do - or, if you're the DBA yourself, get the dev team up to speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "what we want, to do continuous delivery properly" and "what we're currently stuck with". Some of these differences are:

    What we want: each developer with their own dedicated database environment. What we've got: a single shared "development" environment, used by everyone at once.
    What we want: an integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we've got: in fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question...
    What we want: a separate QA environment used explicitly for manual testing prior to release. What we've got: "we just test on the dev environments, or maybe pre-production".
    What we want: a proper pre-production (or "staging") box that matches production as closely as possible. What we've got: hopefully a pre-production box of some sort. But does it match production closely!?
    What we want: a production environment reproducible from source control. What we've got: a production box which has drifted significantly from anything in source control.

    The big question is - how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted - VMs? Cloud-based? What about size/data issues - what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application. There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!) and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks.

    But for a proper release management process, ideally you have:
    - Dedicated development databases,
    - An integration server used for testing continuous integration and running unit tests [NB: this is the point at which deployments are automatic, without human intervention; each deployment after this point is a one-click (but human) action],
    - QA - QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production - the environment you use to test the production release process,
    - Production.

    * A note on the use of the word "automatic" - when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it's not a "free-for-all", with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?"
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod,
    2. Again, use a schema compare tool to find the differences introduced by the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all is working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem - sign up here at www.sqllighthouse.com if you're interested in testing early versions.

    There are several variations on this process - some better, some much worse! How do you automate this? In particular, step 3 - surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment - whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted - and really not something we recommend.
    At the other extreme from pure continuous deployment, you might be more comfortable with a semi-automated process - the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course - and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?"

    NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments - "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?"
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't - and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move to a more automated database deployment process, you're far more likely to be deploying more frequently than before - no longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.

    There are various options, which we'll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script - there's not really such a thing as a rollback script for a database!),
    - Have fallback environments - for example, using a blue-green deployment pattern.

    Different options have pros and cons - some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is the loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and your requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "branch by abstraction". Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss, by making incremental updates to the database backward compatible (a minimal sketch of such a step-wise split follows this article). Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps - where each step can be rolled back without any loss of new data - this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily).

    There are plenty more great patterns that can be implemented - the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is - how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh - with the latter it's much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns towards "best practice". It's a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work and for the implementation to go smoothly. We've covered some of the checklist of areas to consider - mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development" - though there will be more detail, depending on where you're coming from and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
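
    To make the step-wise pattern above concrete, here is a minimal sketch of splitting an address column out of a customer table - every name is invented for illustration, and each numbered step is independently deployable and rollback-safe because it never breaks the step before it:

        # sketch: "branch by abstraction" applied to a table split; names are illustrative only
        MIGRATION_STEPS = [
            # 1. additive change - existing readers and writers are unaffected
            "CREATE TABLE customer_address (customer_id INT PRIMARY KEY, address NVARCHAR(400));",
            # 2. dual-write - keep the new table in sync while both copies exist
            "CREATE TRIGGER trg_customer_address ON customer AFTER INSERT, UPDATE AS "
            "INSERT INTO customer_address ... ;",  # trigger body elided in this sketch
            # 3. backfill - idempotent, so safe to re-run if the deployment is retried
            "INSERT INTO customer_address SELECT id, address FROM customer "
            "WHERE id NOT IN (SELECT customer_id FROM customer_address);",
            # 4. (application release) switch all readers to customer_address
            # 5. only when nothing reads the old column any more:
            "ALTER TABLE customer DROP COLUMN address;",
        ]

    Until step 5 runs, every increment can be abandoned without losing new data, which is exactly what gives the release team its easy rollback.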

    Read the article

  • Error compiling GLib in Ubuntu 14.04 (trying to install GimpShop)

    - by Nicolás Salvarrey
    I'm kinda new in Linux, so please take it easy on the most complicated stuff. I'm trying to install GimpShop. Installation guide asks me to install GLib first, and when I try to compile it using the make command I get errors. When I run the ./configure --prefix=/usr command, I get this: checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for the BeOS... no checking for Win32... no checking whether to enable garbage collector friendliness... no checking whether to disable memory pools... no checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for c++... no checking for g++... no checking for gcc... gcc checking whether we are using the GNU C++ compiler... no checking whether gcc accepts -g... no checking dependency style of gcc... gcc3 checking for gcc option to accept ANSI C... none needed checking for a BSD-compatible install... /usr/bin/install -c checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for _LARGE_FILES value needed for large files... no checking for pkg-config... /usr/bin/pkg-config checking for gawk... (cached) mawk checking for perl5... no checking for perl... perl checking for indent... no checking for perl... /usr/bin/perl checking for iconv_open... yes checking how to run the C preprocessor... gcc -E checking for egrep... grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking for LC_MESSAGES... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking for ngettext in libc... yes checking for dgettext in libc... yes checking for bind_textdomain_codeset... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for catalogs to be installed... am ar az be bg bn bs ca cs cy da de el en_CA en_GB eo es et eu fa fi fr ga gl gu he hi hr id is it ja ko lt lv mk mn ms nb ne nl nn no or pa pl pt pt_BR ro ru sk sl sq sr sr@ije sr@Latn sv ta tl tr uk vi wa xh yi zh_CN zh_TW checking for a sed that does not truncate output... /bin/sed checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... 
yes checking how to recognise dependent libraries... pass_all checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g77... no checking for f77... no checking for xlf... no checking for frt... no checking for pgf77... no checking for fort77... no checking for fl32... no checking for af77... no checking for f90... no checking for xlf90... no checking for pgf90... no checking for epcf90... no checking for f95... no checking for fort... no checking for xlf95... no checking for ifc... no checking for efc... no checking for pgf95... no checking for lf95... no checking for gfortran... no checking whether we are using the GNU Fortran 77 compiler... no checking whether accepts -g... no checking the maximum length of command line arguments... 32768 checking command to parse /usr/bin/nm -B output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if gcc static flag works... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC checking if gcc PIC flag -fPIC works... yes checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no configure: creating libtool appending configuration tag "CXX" to libtool appending configuration tag "F77" to libtool checking for extra flags to get ANSI library prototypes... none needed checking for extra flags for POSIX compliance... none needed checking for ANSI C header files... (cached) yes checking for vprintf... yes checking for _doprnt... no checking for working alloca.h... yes checking for alloca... yes checking for atexit... yes checking for on_exit... yes checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for long... yes checking size of long... 8 checking for int... yes checking size of int... 4 checking for void *... yes checking size of void *... 8 checking for long long... yes checking size of long long... 8 checking for __int64... no checking size of __int64... 0 checking for format to printf and scanf a guint64... %llu checking for an ANSI C-conforming const... yes checking if malloc() and friends prototypes are gmem.h compatible... no checking for growing stack pointer... yes checking for __inline... yes checking for __inline__... yes checking for inline... yes checking if inline functions in headers work... yes checking for ISO C99 varargs macros in C... yes checking for ISO C99 varargs macros in C++... no checking for GNUC varargs macros... yes checking for GNUC visibility attribute... yes checking whether byte ordering is bigendian... no checking dirent.h usability... yes checking dirent.h presence... yes checking for dirent.h... yes checking float.h usability... yes checking float.h presence... yes checking for float.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking pwd.h usability... yes checking pwd.h presence... yes checking for pwd.h... 
yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking for sys/types.h... (cached) yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking for unistd.h... (cached) yes checking values.h usability... yes checking values.h presence... yes checking for values.h... yes checking for stdint.h... (cached) yes checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking for nl_langinfo... yes checking for nl_langinfo and CODESET... yes checking whether we are using the GNU C Library 2.1 or newer... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking for setlocale... yes checking for size_t... yes checking size of size_t... 8 checking for the appropriate definition for size_t... unsigned long checking for lstat... yes checking for strerror... yes checking for strsignal... yes checking for memmove... yes checking for mkstemp... yes checking for vsnprintf... yes checking for stpcpy... yes checking for strcasecmp... yes checking for strncasecmp... yes checking for poll... yes checking for getcwd... yes checking for nanosleep... yes checking for vasprintf... yes checking for setenv... yes checking for unsetenv... yes checking for getc_unlocked... yes checking for readlink... yes checking for symlink... yes checking for C99 vsnprintf... yes checking whether printf supports positional parameters... yes checking for signed... yes checking for long long... (cached) yes checking for long double... yes checking for wchar_t... yes checking for wint_t... yes checking for size_t... (cached) yes checking for ptrdiff_t... yes checking for inttypes.h... yes checking for stdint.h... yes checking for snprintf... yes checking for C99 snprintf... yes checking for sys_errlist... yes checking for sys_siglist... yes checking for sys_siglist declaration... yes checking for fd_set... yes, found in sys/types.h checking whether realloc (NULL,) will work... yes checking for nl_langinfo (CODESET)... yes checking for OpenBSD strlcpy/strlcat... no checking for an implementation of va_copy()... yes checking for an implementation of __va_copy()... yes checking whether va_lists can be copied by value... no checking for dlopen... no checking for NSLinkModule... no checking for dlopen in -ldl... yes checking for dlsym in -ldl... yes checking for RTLD_GLOBAL brokenness... no checking for preceeding underscore in symbols... no checking for dlerror... yes checking for the suffix of shared libraries... .so checking for gspawn implementation... gspawn.lo checking for GIOChannel implementation... giounix.lo checking for platform-dependent source... checking whether to compile timeloop... yes checking if building for some Win32 platform... no checking for thread implementation... posix checking thread related cflags... -pthread checking for sched_get_priority_min... yes checking thread related libraries... 
-pthread checking for localtime_r... yes checking for posix getpwuid_r... yes checking size of pthread_t... 8 checking for pthread_attr_setstacksize... yes checking for minimal/maximal thread priority... sched_get_priority_min(SCHED_OTHER)/sched_get_priority_max(SCHED_OTHER) checking for pthread_setschedparam... yes checking for posix yield function... sched_yield checking size of pthread_mutex_t... 40 checking byte contents of PTHREAD_MUTEX_INITIALIZER... 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 checking whether to use assembler code for atomic operations... x86_64 checking value of POLLIN... 1 checking value of POLLOUT... 4 checking value of POLLPRI... 2 checking value of POLLERR... 8 checking value of POLLHUP... 16 checking value of POLLNVAL... 32 checking for EILSEQ... yes configure: creating ./config.status config.status: creating glib-2.0.pc config.status: creating glib-2.0-uninstalled.pc config.status: creating gmodule-2.0.pc config.status: creating gmodule-no-export-2.0.pc config.status: creating gmodule-2.0-uninstalled.pc config.status: creating gthread-2.0.pc config.status: creating gthread-2.0-uninstalled.pc config.status: creating gobject-2.0.pc config.status: creating gobject-2.0-uninstalled.pc config.status: creating glib-zip config.status: creating glib-gettextize config.status: creating Makefile config.status: creating build/Makefile config.status: creating build/win32/Makefile config.status: creating build/win32/dirent/Makefile config.status: creating glib/Makefile config.status: creating glib/libcharset/Makefile config.status: creating glib/gnulib/Makefile config.status: creating gmodule/Makefile config.status: creating gmodule/gmoduleconf.h config.status: creating gobject/Makefile config.status: creating gobject/glib-mkenums config.status: creating gthread/Makefile config.status: creating po/Makefile.in config.status: creating docs/Makefile config.status: creating docs/reference/Makefile config.status: creating docs/reference/glib/Makefile config.status: creating docs/reference/glib/version.xml config.status: creating docs/reference/gobject/Makefile config.status: creating docs/reference/gobject/version.xml config.status: creating tests/Makefile config.status: creating tests/gobject/Makefile config.status: creating m4macros/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing default-1 commands config.status: executing glibconfig.h commands config.status: glibconfig.h is unchanged config.status: executing chmod-scripts commands
nsalvarrey@Delleuze:~/glib-2.6.3$ ^C
nsalvarrey@Delleuze:~/glib-2.6.3$

And then, with the make command, I get this:

galias.h:83:39: error: 'g_ascii_digit_value' aliased to undefined symbol 'IA__g_ascii_digit_value'
extern __typeof (g_ascii_digit_value) g_ascii_digit_value __attribute((alias("IA__g_ascii_digit_value"), visibility("default")));
^
In file included from garray.c:35:0:
galias.h:31:35: error: 'g_allocator_new' aliased to undefined symbol 'IA__g_allocator_new'
extern __typeof (g_allocator_new) g_allocator_new __attribute((alias("IA__g_allocator_new"), visibility("default")));
^
make[4]: *** [garray.lo] Error 1
make[4]: Leaving directory '/home/nsalvarrey/glib-2.6.3/glib'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory '/home/nsalvarrey/glib-2.6.3/glib'
make[2]: *** [all] Error 2
make[2]: Leaving directory '/home/nsalvarrey/glib-2.6.3/glib'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/nsalvarrey/glib-2.6.3'
make: *** [all] Error 2
nsalvarrey@Delleuze:~/glib-2.6.3$

(it's actually a lot longer) Can somebody help me?

    Read the article

  • Using the ASP.NET Cache to cache data in a Model or Business Object layer, without a dependency on System.Web in the layer - Part One.

    - by Rhames
    ASP.NET applications can make use of the System.Web.Caching.Cache object to cache data and prevent repeated expensive calls to a database or other store. However, ideally an application should make use of caching at the point where data is retrieved from the database, which typically is inside a Business Objects or Model layer. One of the key features of using a UI pattern such as Model-View-Presenter (MVP) or Model-View-Controller (MVC) is that the Model and Presenter (or Controller) layers are developed without any knowledge of the UI layer. Introducing a dependency on System.Web into the Model layer would break this independence of the Model from the View. This article gives a solution to this problem, using dependency injection to inject the caching implementation into the Model layer at runtime. This allows caching to be used within the Model layer, without any knowledge of the actual caching mechanism that will be used. Create a sample application to use the caching solution Create a test SQL Server database This solution uses a SQL Server database with the same Sales data used in my previous post on calculating running totals. The advantage of using this data is that it gives nice slow queries that will exaggerate the effect of using caching! To create the data, first create a new SQL database called CacheSample. Next run the following script to create the Sale table and populate it: USE CacheSample GO   CREATE TABLE Sale(DayCount smallint, Sales money) CREATE CLUSTERED INDEX ndx_DayCount ON Sale(DayCount) go INSERT Sale VALUES (1,120) INSERT Sale VALUES (2,60) INSERT Sale VALUES (3,125) INSERT Sale VALUES (4,40)   DECLARE @DayCount smallint, @Sales money SET @DayCount = 5 SET @Sales = 10   WHILE @DayCount < 5000  BEGIN  INSERT Sale VALUES (@DayCount,@Sales)  SET @DayCount = @DayCount + 1  SET @Sales = @Sales + 15  END Next create a stored procedure to calculate the running total, and return a specified number of rows from the Sale table, using the following script: USE [CacheSample] GO   SET ANSI_NULLS ON GO   SET QUOTED_IDENTIFIER ON GO   -- ============================================= -- Author:        Robin -- Create date: -- Description:   -- ============================================= CREATE PROCEDURE [dbo].[spGetRunningTotals]       -- Add the parameters for the stored procedure here       @HighestDayCount smallint = null AS BEGIN       -- SET NOCOUNT ON added to prevent extra result sets from       -- interfering with SELECT statements.       
SET NOCOUNT ON;         IF @HighestDayCount IS NULL             SELECT @HighestDayCount = MAX(DayCount) FROM dbo.Sale                   DECLARE @SaleTbl TABLE (DayCount smallint, Sales money, RunningTotal money)         DECLARE @DayCount smallint,                   @Sales money,                   @RunningTotal money         SET @RunningTotal = 0       SET @DayCount = 0         DECLARE rt_cursor CURSOR       FOR       SELECT DayCount, Sales       FROM Sale       ORDER BY DayCount         OPEN rt_cursor         FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales         WHILE @@FETCH_STATUS = 0 AND @DayCount <= @HighestDayCount        BEGIN        SET @RunningTotal = @RunningTotal + @Sales        INSERT @SaleTbl VALUES (@DayCount,@Sales,@RunningTotal)        FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales        END         CLOSE rt_cursor       DEALLOCATE rt_cursor         SELECT DayCount, Sales, RunningTotal       FROM @SaleTbl   END   GO   Create the Sample ASP.NET application In Visual Studio create a new solution and add a class library project called CacheSample.BusinessObjects and an ASP.NET web application called CacheSample.UI. The CacheSample.BusinessObjects project will contain a single class to represent a Sale data item, with all the code to retrieve the sales from the database included in it for simplicity (normally I would at least have a separate Repository or other object that is responsible for retrieving data, and probably a data access layer as well, but for this sample I want to keep it simple). The C# code for the Sale class is shown below: using System; using System.Collections.Generic; using System.Data; using System.Data.SqlClient;   namespace CacheSample.BusinessObjects {     public class Sale     {         public Int16 DayCount { get; set; }         public decimal Sales { get; set; }         public decimal RunningTotal { get; set; }           public static IEnumerable<Sale> GetSales(int? 
highestDayCount)
        {
            List<Sale> sales = new List<Sale>();

            SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);
            if (highestDayCount.HasValue)
                highestDayCountParameter.Value = highestDayCount;
            else
                highestDayCountParameter.Value = DBNull.Value;

            string connectionStr = System.Configuration.ConfigurationManager.ConnectionStrings["CacheSample"].ConnectionString;

            using (SqlConnection sqlConn = new SqlConnection(connectionStr))
            using (SqlCommand sqlCmd = sqlConn.CreateCommand())
            {
                sqlCmd.CommandText = "spGetRunningTotals";
                sqlCmd.CommandType = CommandType.StoredProcedure;
                sqlCmd.Parameters.Add(highestDayCountParameter);

                sqlConn.Open();

                using (SqlDataReader dr = sqlCmd.ExecuteReader())
                {
                    while (dr.Read())
                    {
                        Sale newSale = new Sale();
                        newSale.DayCount = dr.GetInt16(0);
                        newSale.Sales = dr.GetDecimal(1);
                        newSale.RunningTotal = dr.GetDecimal(2);

                        sales.Add(newSale);
                    }
                }
            }

            return sales;
        }
    }
}

The static GetSales() method makes a call to the spGetRunningTotals stored procedure and then reads each row from the returned SqlDataReader into an instance of the Sale class; it then returns a List of the Sale objects as IEnumerable<Sale>. A reference to System.Configuration needs to be added to the CacheSample.BusinessObjects project so that the connection string can be read from the web.config file. In the CacheSample.UI ASP.NET project, create a single web page called ShowSales.aspx, and make this the default start-up page. This page will contain a single button to call the GetSales() method and a label to display the results.
The HTML markup and the C# code-behind are shown below:

ShowSales.aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSales.aspx.cs" Inherits="CacheSample.UI.ShowSales" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Cache Sample - Show All Sales</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:Button ID="btnTest1" runat="server" onclick="btnTest1_Click" Text="Get All Sales" />
        &nbsp;&nbsp;&nbsp;
        <asp:Label ID="lblResults" runat="server"></asp:Label>
    </div>
    </form>
</body>
</html>

ShowSales.aspx.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

using CacheSample.BusinessObjects;

namespace CacheSample.UI
{
    public partial class ShowSales : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void btnTest1_Click(object sender, EventArgs e)
        {
            System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();
            stopWatch.Start();

            var sales = Sale.GetSales(null);

            var lastSales = sales.Last();

            stopWatch.Stop();

            lblResults.Text = string.Format("Count of Sales: {0}, Last DayCount: {1}, Total Sales: {2}. Query took {3} ms", sales.Count(), lastSales.DayCount, lastSales.RunningTotal, stopWatch.ElapsedMilliseconds);
        }
    }
}

Finally we need to add a connection string for the CacheSample SQL Server database, called CacheSample, to the web.config file:

<?xml version="1.0"?>

<configuration>

  <connectionStrings>
    <add name="CacheSample"
         connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"
         providerName="System.Data.SqlClient" />
  </connectionStrings>

  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>

</configuration>

Run the application and click the button a few times to see how long each call to the database takes. On my system, each query takes about 450 ms. Next I shall look at a solution that uses the ASP.NET caching to cache the data returned by the query, so that subsequent requests to the GetSales() method are much faster.

Adding Data Caching Support
I am going to create my caching support in a separate project called CacheSample.Caching, so the next step is to add a class library to the solution. We shall be using the application configuration to define the implementation of our caching system, so we need to add a reference to System.Configuration to the project.

ICacheProvider<T> Interface
The first step in adding caching to our application is to define an interface, called ICacheProvider, in the CacheSample.Caching project, with methods to retrieve any data from the cache or to retrieve the data from the data source if it is not present in the cache. Dependency Injection will then be used to inject an implementation of this interface at runtime, allowing the users of the interface (i.e. the CacheSample.BusinessObjects project) to be completely unaware of how the caching is actually implemented.
As data of any type may be retrieved from the data source, it makes sense to use generics in the interface, with a generic type parameter defining the data type associated with a particular instance of the cache interface implementation. The C# code for the ICacheProvider interface is shown below:

using System;
using System.Collections.Generic;

namespace CacheSample.Caching
{
    public interface ICacheProvider
    {
    }

    public interface ICacheProvider<T> : ICacheProvider
    {
        T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);

        IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
    }
}

The empty non-generic interface will be used as a type in a Dictionary generic collection later, to store instances of the ICacheProvider<T> implementation for reuse; I prefer to use a base interface when doing this, as I think the alternative of using object makes for less clear code. The ICacheProvider<T> interface defines two overloaded Fetch methods; the difference between these is that one will return a single instance of the type T and the other will return an IEnumerable<T>, providing support for easy caching of collections of data items. Both methods will take a key parameter, which will uniquely identify the cached data, a delegate of type Func<T> or Func<IEnumerable<T>> which will provide the code to retrieve the data from the store if it is not present in the cache, and absolute or relative expiry policies to define when a cached item should expire. Note that at present there is no support for cache dependencies, but I shall be showing a method of adding this in part two of this article.

CacheProviderFactory Class
We need a mechanism for creating instances of our ICacheProvider<T> interface, using Dependency Injection to get the implementation of the interface. To do this we shall create a CacheProviderFactory static class in the CacheSample.Caching project. This factory will provide a generic static method called GetCacheProvider<T>(), which shall return instances of ICacheProvider<T>. We can then call this factory method with the relevant data type (for example the Sale class in the CacheSample.BusinessObjects project) to get an instance of ICacheProvider for that type (e.g. call CacheProviderFactory.GetCacheProvider<Sale>() to get the ICacheProvider<Sale> implementation). The C# code for the CacheProviderFactory is shown below:

using System;
using System.Collections.Generic;

using CacheSample.Caching.Configuration;

namespace CacheSample.Caching
{
    public static class CacheProviderFactory
    {
        private static Dictionary<Type, ICacheProvider> cacheProviders = new Dictionary<Type, ICacheProvider>();
        private static object syncRoot = new object();

        ///<summary>
        /// Factory method to create or retrieve an implementation of the
        /// ICacheProvider interface for type <typeparamref name="T"/>.
///</summary>         ///<typeparam name="T">  /// The type that this cache provider instance will work with  ///</typeparam>         ///<returns>An instance of the implementation of ICacheProvider for type  ///<typeparamref name="T"/>, as specified by the application  /// configuration</returns>         public static ICacheProvider<T> GetCacheProvider<T>()         {             ICacheProvider<T> cacheProvider = null;             // Get the Type reference for the type parameter T             Type typeOfT = typeof(T);               // Lock the access to the cacheProviders dictionary             // so multiple threads can work with it             lock (syncRoot)             {                 // First check if an instance of the ICacheProvider implementation  // already exists in the cacheProviders dictionary for the type T                 if (cacheProviders.ContainsKey(typeOfT))                     cacheProvider = (ICacheProvider<T>)cacheProviders[typeOfT];                 else                 {                     // There is not already an instance of the ICacheProvider in       // cacheProviders for the type T                     // so we need to create one                       // Get the Type reference for the application's implementation of       // ICacheProvider from the configuration                     Type cacheProviderType = Type.GetType(CacheProviderConfigurationSection.Current. CacheProviderType);                     if (cacheProviderType != null)                     {                         // Now get a Type reference for the Cache Provider with the                         // type T generic parameter                         Type typeOfCacheProviderTypeForT = cacheProviderType.MakeGenericType(new Type[] { typeOfT });                         if (typeOfCacheProviderTypeForT != null)                         {                             // Create the instance of the Cache Provider and add it to // the cacheProviders dictionary for future use                             cacheProvider = (ICacheProvider<T>)Activator. CreateInstance(typeOfCacheProviderTypeForT);                             cacheProviders.Add(typeOfT, cacheProvider);                         }                     }                 }             }               return cacheProvider;                 }     } }   As this code uses Activator.CreateInstance() to create instances of the ICacheProvider<T> implementation, which is a slow process, the factory class maintains a Dictionary of the previously created instances so that a cache provider needs to be created only once for each type. The type of the implementation of ICacheProvider<T> is read from a custom configuration section in the application configuration file, via the CacheProviderConfigurationSection class, which is described below. CacheProviderConfigurationSection Class The implementation of ICacheProvider<T> will be specified in a custom configuration section in the application’s configuration. To handle this create a folder in the CacheSample.Caching project called Configuration, and add a class called CacheProviderConfigurationSection to this folder. This class will extend the System.Configuration.ConfigurationSection class, and will contain a single string property called CacheProviderType. 
The C# code for this class is shown below: using System; using System.Configuration;   namespace CacheSample.Caching.Configuration {     internal class CacheProviderConfigurationSection : ConfigurationSection     {         public static CacheProviderConfigurationSection Current         {             get             {                 return (CacheProviderConfigurationSection) ConfigurationManager.GetSection("cacheProvider");             }         }           [ConfigurationProperty("type", IsRequired=true)]         public string CacheProviderType         {             get             {                 return (string)this["type"];             }         }     } }   Adding Data Caching to the Sales Class We now have enough code in place to add caching to the GetSales() method in the CacheSample.BusinessObjects.Sale class, even though we do not yet have an implementation of the ICacheProvider<T> interface. We need to add a reference to the CacheSample.Caching project to CacheSample.BusinessObjects so that we can use the ICacheProvider<T> interface within the GetSales() method. Once the reference is added, we can first create a unique string key based on the method name and the parameter value, so that the same cache key is used for repeated calls to the method with the same parameter values. Then we get an instance of the cache provider for the Sales type, using the CacheProviderFactory, and pass the existing code to retrieve the data from the database as the retrievalMethod delegate in a call to the Cache Provider Fetch() method. The C# code for the modified GetSales() method is shown below: public static IEnumerable<Sale> GetSales(int? highestDayCount) {     string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);       return CacheSample.Caching.CacheProviderFactory. GetCacheProvider<Sale>().Fetch(cacheKey,         delegate()         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager. ConnectionStrings["CacheSample"].ConnectionString;               using (SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         },         null,         new TimeSpan(0, 10, 0)); }     This example passes the code to retrieve the Sales data from the database to the Cache Provider as an anonymous method, however it could also be written as a lambda. 
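For comparison, here is a minimal sketch of the same call written as a lambda. RetrieveSalesFromDatabase() is a hypothetical helper that would wrap the ADO.NET code from the anonymous method above; it is not part of the original solution:

// Sketch only: RetrieveSalesFromDatabase is a hypothetical helper
// containing the SqlConnection/SqlCommand code shown in the anonymous
// method above. The lambda still captures the highestDayCount parameter.
return CacheSample.Caching.CacheProviderFactory
    .GetCacheProvider<Sale>()
    .Fetch(cacheKey,
           () => RetrieveSalesFromDatabase(highestDayCount),
           null,
           new TimeSpan(0, 10, 0));

Either form closes over the method's parameters, so the cache key and the retrieval logic always agree on which data is being requested.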
The main advantage of using an anonymous function (method or lambda) is that the code inside the anonymous function can access the parameters passed to the GetSales() method. Finally the absolute expiry is set to null, and the relative expiry set to 10 minutes, to indicate that the cache entry should be removed 10 minutes after the last request for the data. As the ICacheProvider<T> has a Fetch() method that returns IEnumerable<T>, we can simply return the results of the Fetch() method to the caller of the GetSales() method. This should be all that is needed for the GetSales() method to now retrieve data from a cache after the first time the data has been retrieved from the database.

Implementing an ASP.NET Cache Provider
The final step is to actually implement the ICacheProvider<T> interface, and add the implementation details to the web.config file for the dependency injection. The cache provider implementation needs to have access to System.Web. Therefore it could be placed in the CacheSample.UI project, or in its own project that has a reference to System.Web. Implementing the Cache Provider in a separate project is my favoured approach. Create a new project inside the solution called CacheSample.CacheProvider, and add references to System.Web and CacheSample.Caching to this project. Add a class to the project called AspNetCacheProvider. Make the class a generic class by adding the generic parameter <T> and indicate that the class implements ICacheProvider<T>. The C# code for the AspNetCacheProvider class is shown below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Caching;

using CacheSample.Caching;

namespace CacheSample.CacheProvider
{
    public class AspNetCacheProvider<T> : ICacheProvider<T>
    {
        #region ICacheProvider<T> Members

        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        #endregion

        #region Helper Methods

        private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan?
relativeExpiry)
        {
            U value;
            if (!TryGetValue<U>(key, out value))
            {
                value = retrieveData();

                if (!absoluteExpiry.HasValue)
                    absoluteExpiry = Cache.NoAbsoluteExpiration;

                if (!relativeExpiry.HasValue)
                    relativeExpiry = Cache.NoSlidingExpiration;

                HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);
            }
            return value;
        }

        private bool TryGetValue<U>(string key, out U value)
        {
            object cachedValue = HttpContext.Current.Cache.Get(key);
            if (cachedValue == null)
            {
                value = default(U);
                return false;
            }
            else
            {
                try
                {
                    value = (U)cachedValue;
                    return true;
                }
                catch
                {
                    value = default(U);
                    return false;
                }
            }
        }

        #endregion
    }
}

The two interface Fetch() methods call a private method called FetchAndCache(). This method first checks for an element in HttpContext.Current.Cache with the specified cache key and, if one is found, tries to cast it to the specified type (either T or IEnumerable<T>); if this succeeds, FetchAndCache() simply returns the cached value. If it is not found in the cache, the method calls the retrieveData delegate to get the data from the data source, and then adds the result to HttpContext.Current.Cache.

The final step is to add the AspNetCacheProvider class to the relevant custom configuration section in the CacheSample.UI project's web.config file. To do this there needs to be a <configSections> element added as the first element in <configuration>. This will match a custom section called <cacheProvider> with the CacheProviderConfigurationSection. Then we add a <cacheProvider> element, with a type attribute set to the fully qualified assembly name of the AspNetCacheProvider class, as shown below:

<?xml version="1.0"?>

<configuration>

  <configSections>
    <section name="cacheProvider" type="CacheSample.Caching.Configuration.CacheProviderConfigurationSection, CacheSample.Caching" />
  </configSections>

  <connectionStrings>
    <add name="CacheSample"
         connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"
         providerName="System.Data.SqlClient" />
  </connectionStrings>

  <cacheProvider type="CacheSample.CacheProvider.AspNetCacheProvider`1, CacheSample.CacheProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
  </cacheProvider>

  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>

</configuration>

One point to note is that the fully qualified assembly name of the AspNetCacheProvider class includes the notation `1 after the class name, which indicates that it is a generic class with a single generic type parameter. The CacheSample.UI project needs to have references added to CacheSample.Caching and CacheSample.CacheProvider so that the actual application is aware of the relevant cache provider implementation.

Conclusion
After implementing this solution, you should have a working cache provider mechanism that will allow the middle and data access layers to implement caching support when retrieving data, without any knowledge of the actual caching implementation.
If the UI is not ASP.NET based – for example, if it is WinForms or WPF – the implementation of ICacheProvider<T> would be written around whatever caching technology is available; a sketch of one such alternative follows below. It could even be a standalone caching system that takes full responsibility for adding and removing items from a global store. The next part of this article will show how this caching mechanism may be extended to provide support for cache dependencies, such as the System.Web.Caching.SqlCacheDependency. Another possible extension would be to cache the cache provider implementations themselves, instead of storing them in a static Dictionary in the CacheProviderFactory. This would prevent a build-up of seldom-used cache providers in the application memory, as they could be removed from the cache if not used often enough – although in reality there are unlikely to be vast numbers of cache provider implementation instances, as most applications do not have a massive number of business object or model types.
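To illustrate that point, here is a rough sketch of what a standalone, non-web implementation might look like, built on System.Runtime.Caching.MemoryCache (available from .NET 4.0). This is not part of the original solution – the MemoryCacheProvider name and namespace are invented for the example:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

using CacheSample.Caching;

namespace CacheSample.MemoryCacheProvider
{
    // Hypothetical ICacheProvider<T> implementation for WinForms/WPF (or
    // any non-web host), using System.Runtime.Caching.MemoryCache instead
    // of the ASP.NET cache. Error handling is omitted for brevity.
    public class MemoryCacheProvider<T> : ICacheProvider<T>
    {
        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            object cached = MemoryCache.Default.Get(key);
            if (cached is U)
                return (U)cached;

            U value = retrieveData();

            // Unlike Cache.Insert, MemoryCache rejects a policy that sets
            // both an absolute and a sliding expiry, so honour one or the
            // other, as the GetSales() example does.
            CacheItemPolicy policy = new CacheItemPolicy();
            if (absoluteExpiry.HasValue)
                policy.AbsoluteExpiration = new DateTimeOffset(absoluteExpiry.Value);
            else if (relativeExpiry.HasValue)
                policy.SlidingExpiration = relativeExpiry.Value;

            // MemoryCache cannot store null, so only cache non-null results.
            if (value != null)
                MemoryCache.Default.Set(key, value, policy);

            return value;
        }
    }
}

Wiring this in would then simply be a matter of pointing the type attribute of the <cacheProvider> configuration element at this class instead of AspNetCacheProvider; nothing in the Model layer needs to change.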

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
The link-editor (ld) in Solaris 11 has a new feature that we call guidance, which is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works.

The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero-tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the ARC case PSARC 2010/312 Link-editor guidance.

The History Of Unfortunate Link-Editor Defaults
The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command-line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system.

Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, ELF (Executable and Linking Format) was adopted. Since then, the ELF system has evolved to provide the tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines:

Change ld Defaults
Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

New Link-Editor
One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, reaching back to the 1970s. ld use is embedded in large and unknown numbers of makefiles, and it is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

An Option To Make ld Do The Right Things Automatically
This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best. The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction:

The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best.

The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offer specific advice, not vague generalizations.

You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed.

As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see which guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

% ld -Dargs -z guidance=foo,nodefs
debug:
debug: Solaris Linkers: 5.11-1.2275
debug:
debug: arg[1] option=-D: option-argument: args
debug: arg[2] option=-z: option-argument: guidance=foo,nodefs
debug: warning: unrecognized -z guidance item: foo

The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

-z fatal-warnings | nofatal-warnings
--fatal-warnings | --no-fatal-warnings

The -z fatal-warnings and --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior.

The -z guidance option is defined as follows:

-z guidance[=item1,item2,...]

Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens representing the classes of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

Specify Required Dependencies
Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

Do Not Specify Non-Required Dependencies
Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

Lazy Loading
Dependencies should be identified for lazy loading.
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

Direct Bindings
Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct or -z direct options should any dependency be processed before either of these options, or the -z nodirect option, is encountered. This guidance can be disabled with -z guidance=nodirect.

Pure Text Segment
Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

Mapfile Syntax
All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

Library Search Path
Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

Example
The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: it does not specify all of its dependencies; it specifies dependencies it does not use; it does not use direct bindings; it uses a version 1 mapfile; and it contains relocations to the read-only allocable text (it is not PIC). This scenario is sadly very common — many shared objects have one or more of these issues.

% cat hello.c
#include <stdio.h>
#include <unistd.h>

void hello(void)
{
    printf("hello user %d\n", getpid());
}

% cat mapfile.v1
# This version 1 mapfile will trigger a guidance message

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf

As you can see, the operation completes without error, resulting in a usable object.
However, turning on guidance reveals a number of things that could be better:

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
ld: guidance: -z lazyload option recommended before first dependency
ld: guidance: -B direct or -z direct option recommended before first dependency
Undefined           first referenced
 symbol                 in file
getpid                  hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
printf                  hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
ld: warning: symbol referencing errors
ld: guidance: -z defs option recommended for shared objects
ld: guidance: removal of unused dependency recommended: libelf.so.1
warning: Text relocation remains       referenced
    against symbol          offset     in file
.rodata1 (section)          0xa        hello.o
getpid                      0x4        hello.o
printf                      0xf        hello.o
ld: guidance: position independent (PIC) code recommended for shared objects
ld: guidance: see ld(1) -z guidance for more information

Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

% cat mapfile.v2
# This version 2 mapfile will not trigger a guidance message
$mapfile_version 2

% cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
ld: guidance: -B direct or -z direct option recommended before first dependency
ld: guidance: see ld(1) -z guidance for more information

It is easy to disable that specific guidance warning without losing the overall benefit of allowing the remainder of the guidance feature to operate:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

Conclusions
The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

< Previous Page | 639 640 641 642 643 644 645 646  | Next Page >