Search Results

Search found 7605 results on 305 pages for 'newbie 25'.


  • Raspberry Pi broadcast serial port data to local network

    - by D051P0
    I didn't find anything to help me with this problem. What I want is: a serial device repeatedly sends data to the serial port. The Raspberry Pi should read this data from RxD and stream it, unfiltered, to the local network on port 10001, so I can reach the device from my PC. It should also work in the other direction: the Raspberry Pi listens on port 10001 and forwards everything it receives from the network to TxD. I'm a newbie in the Linux world. How can I listen on a port on the Raspberry Pi and broadcast to the same port? I'm using Raspbian Wheezy with soft float. I found the Pi4j library for Java, which I already use to read and write data from/to the serial port:

        final Serial serial = SerialFactory.createInstance();
        serial.addListener(new SerialDataListener() {
            public void dataReceived(SerialDataEvent event) {
                forward(event.getData());
            }
        });

    event.getData() is a String, which I want to broadcast on my local network. Is it generally a good idea to use Java for that? I also need the String coming in on port 10001, so I can forward it to the serial port.
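    A minimal sketch (not from the original post) of one way the TCP side could look in plain java.net, reusing the forward(...) hook from the listener above. The class name, the clients list and the writeToSerial stub are all illustrative; the stub stands in for whatever Pi4j write call is already in use.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        public class SerialTcpBridge {

            private final List<Socket> clients = new CopyOnWriteArrayList<Socket>();

            // Called from the Pi4j SerialDataListener with event.getData(), as in the snippet above.
            public void forward(String data) {
                for (Socket client : clients) {
                    try {
                        client.getOutputStream().write(data.getBytes("UTF-8"));
                        client.getOutputStream().flush();
                    } catch (IOException e) {
                        clients.remove(client); // drop broken connections
                    }
                }
            }

            // Accept connections on port 10001 and forward anything the network sends back.
            public void listen() throws IOException {
                ServerSocket server = new ServerSocket(10001);
                while (true) {
                    final Socket client = server.accept();
                    clients.add(client);
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                BufferedReader in = new BufferedReader(
                                        new InputStreamReader(client.getInputStream(), "UTF-8"));
                                String line;
                                while ((line = in.readLine()) != null) {
                                    writeToSerial(line); // hand the string to the existing Pi4j write call
                                }
                            } catch (IOException ignored) {
                            } finally {
                                clients.remove(client);
                            }
                        }
                    }).start();
                }
            }

            private void writeToSerial(String line) {
                // hypothetical stub: plug in the Pi4j serial write already in use here
            }
        }

    A CopyOnWriteArrayList keeps the broadcast loop safe while the per-client handler threads add and remove sockets.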

    Read the article

  • Generics in a bidirectional association

    - by Verhoevenv
    Let's say I have two classes A and B, with B a subtype of A. This is only part of a richer type hierarchy, obviously, but I don't think that's relevant. Assume A is the root of the hierarchy. There is a collection class C that keeps track of a list of A's. However, I want to make C generic, so that it is possible to make an instance that only keeps B's and won't accept A's.

        class A(val c: C[A]) {
          c.addEntry(this)
        }
        class B(c: C[A]) extends A(c)
        class C[T <: A] {
          val entries = new ArrayBuffer[T]()
          def addEntry(e: T) { entries += e }
        }
        object Generic {
          def main(args: Array[String]) {
            val c = new C[B]()
            new B(c)
          }
        }

    The code above obviously gives the error 'type mismatch: found C[B], required C[A]' on the new B(c) line. I'm not sure how this can be fixed. It's not possible to make C covariant in T (like C[+T <: A]) because the ArrayBuffer is non-variantly typed in T. It's not possible to make the constructor of B require a C[B] because C can't be covariant. Am I barking up the wrong tree here? I'm a complete Scala newbie, so any ideas and tips might be helpful. Thank you!

    EDIT: Basically, what I'd like to have is that the compiler accepts both

        val c = new C[B]()
        new B(c)

    and

        val c = new C[A]()
        new B(c)

    but would reject

        val c = new C[B]()
        new A(c)

    It's probably possible to relax the typing of the ArrayBuffer in C to be A instead of T, and thus in the addEntry method as well, if that helps.

    Read the article

  • Obj-C Sending Messages Between Classes

    - by user544359
    I'm a newbie in iPhone programming. I'm trying to send a message from one view controller to another. The idea is that viewControllerA takes information from the user and sends it to viewControllerB. viewControllerB is then supposed to display the information in a label.

    viewControllerA.h

        #import <UIKit/UIKit.h>

        @interface viewControllerA : UIViewController {
            int num;
        }
        -(IBAction)do;
        @end

    viewControllerA.m

        #import "viewControllerA.h"
        #import "viewControllerB.h"

        @implementation viewControllerA

        - (IBAction)do {
            //initializing int for example
            num = 2;
            viewControllerB *viewB = [[viewControllerB alloc] init];
            [viewB display:num];
            [viewB release];
            //viewA is presented as a ModalViewController, so it dismisses itself to return to the
            //original view, i know it is not efficient
            [self dismissModalViewControllerAnimated:YES];
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

    viewControllerB.h

        #import <UIKit/UIKit.h>

        @interface viewControllerB : UIViewController {
            IBOutlet UILabel *label;
        }
        - (IBAction)openA;
        - (void)display:(NSInteger)myNum;
        @end

    viewControllerB.m

        #import "viewControllerB.h"
        #import "viewControllerA.h"

        @implementation viewControllerB

        - (IBAction)openA {
            //presents viewControllerA when a button is pressed
            viewControllerA *viewA = [[viewControllerA alloc] init];
            [self presentModalViewController:viewA animated:YES];
        }

        - (void)display:(NSInteger)myNum {
            NSLog(@"YES");
            [label setText:[NSString stringWithFormat:@"%d", myNum]];
        }

        @end

    YES is logged successfully, but the label's text does not change. I have made sure that all of my connections in Interface Builder are correct; in fact there are other (IBAction) methods in my program that change the text of this very label, and all of those other methods work perfectly... Any ideas, guys? You don't need to give me a full solution, any bits of information will help. Thanks.

    Read the article

  • .NET Development of iPhone App with MonoTouch - which development environment?

    - by Click Ahead
    Hi All, I'm a .NET developer (C#) with several years developing Windows Mobile apps. I would like to get into developing iPhone apps, and MonoTouch looks good based on the reviews I've read, so I'm going to go with MonoTouch. My understanding is that I'll need a new Mac, but as it happens I also need a new PC for my .NET Windows development. My question is, should I:
    (a) purchase a MacBook Pro and dual boot with Windows 7,
    (b) purchase a Mac Pro and dual boot with Windows 7, or
    (c) purchase a good dev PC and a slightly less well spec'd MacBook Pro or Mac Pro?
    Bear in mind I'm only doing MonoTouch development on the Mac; most of my development (approx. 80% initially) will be done on the Windows side. My budget is approx. €3,000 / $4,000 and I'd like a good, fast development environment. It's purely for development, so on the Windows side I'll be installing SQL 2008/VS 2010/Office and on the OS X side MonoTouch. BTW - my budget excludes licensing for VS/MonoTouch/etc.; I have MonoTouch and MSDN licenses. Any opinions are greatly appreciated. I'm a newbie to Macs!

    Read the article

  • What are the possible tags inside the "global" tag in Magento "config.xml" file?

    - by Knowledge Craving
    Can a professional, experienced Magento developer tell me how to accomplish the following in Magento? I want to know which tags can appear inside the "global" tag of the "config.xml" file in every module's etc folder. I have tried searching for this answer in many places on the Internet, but in vain. Please provide full details for Magento version 1.4.0.0, so that users coming across this question find it useful instead of scratching their heads. I really want a detailed explanation, because every newbie like me gets totally confused at this point. What I know so far is that in this file you can set up routers, rewrites, cron jobs, admin html, front-end html and much more. But without strong fundamentals, nobody can be confident that their code is 100% correct with respect to the Magento MVC architecture. So I'd like that fundamental concept explained in detail here, so that no one falls into this pitfall again. Many thanks to those who answer only if they know the whole concept clearly.

    Read the article

  • NSManagedObjectContext, threads and NSFetchedResultsController

    - by tmpz
    Dear iPhone developers, Core Data newbie speaking here. In my application I have two NSManagedObjectContexts that refer to the same persistent store coordinator. One managed object context (c1) lives on the main thread -- created when I create an NSFetchedResultsController -- and the second managed object context (c2) is created on a second thread, running in the background, detached from the main thread. In the background thread I pull some data off a website and insert the entities created for the pulled data into the thread's managed object context (c2). Meanwhile, the main thread sits doing nothing, displaying a UITableView whose data should be provided by the NSFetchedResultsController. When the background thread has finished pulling the data and inserting entities into c2, c2 saves, and the background thread notifies the main thread that the processing has finished before it exits. The entities that I have inserted into c2 are in fact known to c1, because I can ask it about one particular entity with [c1 existingObjectWithID:ObjectID error:&error]. At this point, if I call reloadData on my table view, I would expect to see some rows showing up with the data I pulled from the web in the background thread, thanks to the NSFetchedResultsController, which should react to the modifications of its managed object context (c1). But nothing is happening! Only if I restart the application do I see what I previously pulled from the web! What am I doing wrong? Thank you in advance!

    Read the article

  • Saving Email/Password to Keychain in iOS

    - by Jason
    I'm very new to iOS development so forgive me if this is a newbie question. I have a simple authentication mechanism for my app that takes a user's email address and password. I also have a switch that says 'Remember me'. If the user toggles that switch on, I'd like to preserve their email/password so those fields can be auto-populated in the future. I've gotten this to work by saving to a plist file, but I know that's not the best idea since the password is unencrypted. I found some sample code for saving to the keychain, but to be honest, I'm a little lost. For the function below, I'm not sure how to call it and how to modify it to save the email address as well. I'm guessing the call would be: saveString(@"passwordgoeshere"); Thank you for any help!!!

        + (void)saveString:(NSString *)inputString forKey:(NSString *)account {
            NSAssert(account != nil, @"Invalid account");
            NSAssert(inputString != nil, @"Invalid string");

            NSMutableDictionary *query = [NSMutableDictionary dictionary];
            [query setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass];
            [query setObject:account forKey:(id)kSecAttrAccount];
            [query setObject:(id)kSecAttrAccessibleWhenUnlocked forKey:(id)kSecAttrAccessible];

            OSStatus error = SecItemCopyMatching((CFDictionaryRef)query, NULL);
            if (error == errSecSuccess) {
                // do update
                NSDictionary *attributesToUpdate = [NSDictionary dictionaryWithObject:[inputString dataUsingEncoding:NSUTF8StringEncoding] forKey:(id)kSecValueData];
                error = SecItemUpdate((CFDictionaryRef)query, (CFDictionaryRef)attributesToUpdate);
                NSAssert1(error == errSecSuccess, @"SecItemUpdate failed: %d", error);
            } else if (error == errSecItemNotFound) {
                // do add
                [query setObject:[inputString dataUsingEncoding:NSUTF8StringEncoding] forKey:(id)kSecValueData];
                error = SecItemAdd((CFDictionaryRef)query, NULL);
                NSAssert1(error == errSecSuccess, @"SecItemAdd failed: %d", error);
            } else {
                NSAssert1(NO, @"SecItemCopyMatching failed: %d", error);
            }
        }

    Read the article

  • Android onFling not responding

    - by Kevin Moore
    I am new to Android first of all, so think of any newbie mistakes first. I am trying to add a fling function to my code.

        public class MainGamePanel extends SurfaceView implements SurfaceHolder.Callback, OnGestureListener {

            private MainThread thread;
            private Droid droid;
            private Droid droid2;
            private static final String TAG = gameView.class.getSimpleName();
            private GestureDetector gestureScanner;

            public MainGamePanel(Context context){
                super(context);
                getHolder().addCallback(this);
                droid = new Droid(BitmapFactory.decodeResource(getResources(), R.drawable.playerbox2), 270, 300);
                droid2 = new Droid(BitmapFactory.decodeResource(getResources(), R.drawable.playerbox2), 50, 300);
                thread = new MainThread(getHolder(), this);
                setFocusable(true);
                gestureScanner = new GestureDetector(this);
            }

            public boolean onTouchEvent(MotionEvent event){
                return gestureScanner.onTouchEvent(event);
            }

            @Override
            protected void onDraw(Canvas canvas){
                canvas.drawColor(Color.BLACK);
                droid.draw(canvas);
                droid2.draw(canvas);
            }

            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
                droid.setX(50);
                droid.setY(50);
                Log.d(TAG, "Coords: x=" + e1.getX() + ",y=" + e2.getY());
                return true;
            }

            @Override
            public boolean onDown(MotionEvent e) {
                droid2.setX((int)e.getX());
                droid2.setY((int)e.getY());
                Log.d(TAG, "Coords: x=" + e.getX() + ",y=" + e.getY());
                return false;
            }

    I got the gestureListener to work with onDown, onLongPress, and onShowPress, but I can't get any response with onFling, onSingleTapUp, and onScroll. What mistake am I making? Does it have to do with views? I don't know what code would be useful to see... so any suggestions would be much appreciated. Thank You!
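    A hedged note, not from the original post: a common cause is that neither onDown nor the view's onTouchEvent reports the ACTION_DOWN event as handled, so Android stops delivering the rest of the gesture to the view and onFling is never reached. The sketch below shows how the two overrides might look inside the MainGamePanel class above; treat it as an illustration rather than a confirmed fix.

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            // Let the detector inspect the event, but always report it as handled
            // so the following MOVE/UP events keep arriving at this view.
            gestureScanner.onTouchEvent(event);
            return true;
        }

        @Override
        public boolean onDown(MotionEvent e) {
            droid2.setX((int) e.getX());
            droid2.setY((int) e.getY());
            Log.d(TAG, "Coords: x=" + e.getX() + ",y=" + e.getY());
            return true;   // returning false here can cause the rest of the gesture to be dropped
        }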

    Read the article

  • Two view controllers and reusability with a delegate

    - by netcharmer
    Newbie question about design patterns in Objective-C. I'm writing a piece of functionality for my iPhone app which I plan to use in other apps too. The functionality is split over two classes - Viewcontroller1 and Viewcontroller2. Viewcontroller1 is the root view of a navigation controller and it can push Viewcontroller2. The rest of the app will use only ViewController1 and will never access Viewcontroller2 directly. However, triggered by user events, Viewcontroller2 has to send a message to the rest of the app. My question is, what is the best way of achieving this? Currently, I use two levels of delegation to send the message out from Viewcontroller2: first send it to Viewcontroller1, and then let Viewcontroller1 send it to the rest of the app or the application delegate. So my code looks like:

        //Viewcontroller1.h
        @protocol bellDelegate
        -(int)bellRang:(int)size;
        @end

        @interface Viewcontroller1 : UITableViewController <dummydelegate> {
            id <bellDelegate> delegate;
        @end

        //Viewcontroller1.m
        @implementation Viewcontroller1
        -(void)viewDidLoad {
            //some stuff here
            Viewcontroller2 *vc2 = [[Viewcontroller2 alloc] init];
            vc2.delegate = self;
            [self.navigationController pushViewController:vc2 animated:YES];
        }
        -(int)dummyBell:(int)size {
            return([self.delegate bellRang:size]);
        }

        //Viewcontroller2.h
        @protocol dummyDelegate
        -(int)dummyBell:(int)size;
        @end

        @interface Viewcontroller2 : UITableViewController {
            id <dummyDelegate> delegate;
        @end

        //Viewcontroller2.m
        @implementation Viewcontroller2
        -(int)eventFoo:(int)size {
            rval = [self.delegate dummyBell:size];
        }
        @end

    Read the article

  • On load rather than on click

    - by connor
    I have the following code:

        jQuery(function ($) {
            var OSX = {
                container: null,
                init: function () {
                    $("input.apply, a.apply").click(function (e) {
                        e.preventDefault();
                        $("#osx-modal-content").modal({
                            overlayId: 'osx-overlay',
                            containerId: 'osx-container',
                            closeHTML: null,
                            minHeight: 80,
                            opacity: 65,
                            position: ['0',],
                            overlayClose: true,
                            onOpen: OSX.open,
                            onClose: OSX.close
                        });
                    });
                },
                open: function (d) {
                    var self = this;
                    self.container = d.container[0];
                    d.overlay.fadeIn('fast', function () {
                        $("#osx-modal-content", self.container).show();
                        var title = $("#osx-modal-title", self.container);
                        title.show();
                        d.container.slideDown('fast', function () {
                            setTimeout(function () {
                                var h = $("#osx-modal-data", self.container).height() + title.height() + 20; // padding
                                d.container.animate(
                                    {height: h}, 200,
                                    function () {
                                        $("div.close", self.container).show();
                                        $("#osx-modal-data", self.container).show();
                                    }
                                );
                            }, 300);
                        });
                    })
                },
                close: function (d) {
                    var self = this; // this = SimpleModal object
                    d.container.animate(
                        {top:"-" + (d.container.height() + 20)}, 500,
                        function () {
                            self.close(); // or $.modal.close();
                        }
                    );
                }
            };
            OSX.init();
        });

    To load this script, a button with the class "apply" has to be clicked... Does anyone know a way of loading this script "onload" without a button being clicked? I have tried a lot of things, but I am a newbie when it comes to JavaScript. Thanks!

    Read the article

  • Framework or CMS for php web development?

    - by rocknroll
    Hi all, I am a C++/Unix developer who has never dabbled in web development apart from creating simple HTML pages. I am going to change that and develop a website at a personal level soon. I am going to use PHP and MySQL on a Linux machine, and in that regard I am browsing through the relevant literature. The language isn't a problem, but reading about CMSs and frameworks is confusing, and since I am new to web development, the number of CMSs and frameworks is overwhelming. My questions are: do I need knowledge of one or more CMSs and/or frameworks like Drupal, Joomla, Zend, WordPress, etc.? If yes, which open source CMS and/or framework would you suggest for a newbie? And does the choice change if one delves into the realms of commercial web development? Note: I will be developing on a Linux machine, using open source tools. Thanks in advance.

    Read the article

  • Something missing

    - by DHF
    The error of "} expected" appears at line 5. Why is that? I have included it at line 15. The error of "Declaration missing ;" appears at line 8. Why? There is ";" at the end of the line. #include<iostream> using namespace std; class PEmployee { public: PEmployee(); PEmployee(string employee_name, double initial_salary); void set_salary(double new_salary); double get_salary() const; string get_name() const; private: Person person_data; double salary; }; //line 15 int main() { PEmployee f("Patrick", 1000.00); cout << f.get_name() << " earns a salary of " << f.get_salary() << endl; return 0; } Newbie here, sorry for unclear question

    Read the article

  • Use combobox value as query value (VC# 2k8 / ADO SQLITE)

    - by nicodevil
    Hi, I'm a newbie in VC# and ADO SQLITE... I'm trying to change the content of a combobox according to the value selected in another one. My problem is here:

        SQLiteCommand cmd3 = new SQLiteCommand("select distinct(ACTION) from ACTION_LIST where CATEGORY='comboBox1.text'", conn2);

    What should I use to do the job? Here 'comboBox1.text' is seen as plain text, not a variable... Here is the code:

        private void Form1_Load(object sender, EventArgs e)
        {
            using (SQLiteConnection conn1 = new SQLiteConnection(@"Data Source = Data\MRIS_DB_MASTER"))
            {
                conn1.Open();
                SQLiteCommand cmd2 = new SQLiteCommand("select distinct(CATEGORY) from ACTION_LIST", conn1);
                SQLiteDataAdapter adapter1 = new SQLiteDataAdapter(cmd2);
                DataTable tbl1 = new DataTable();
                adapter1.Fill(tbl1);
                comboBox1.DataSource = tbl1;
                comboBox1.DisplayMember = "CATEGORY";
                adapter1.Dispose();
                cmd2.Dispose();
            }
            using (SQLiteConnection conn2 = new SQLiteConnection(@"Data Source = Data\MRIS_DB_MASTER"))
            {
                conn2.Open();
                SQLiteCommand cmd3 = new SQLiteCommand("select distinct(ACTION) from ACTION_LIST where CATEGORY='comboBox1.text'", conn2);
                SQLiteDataAdapter adapter2 = new SQLiteDataAdapter(cmd3);
                DataTable tbl2 = new DataTable();
                adapter2.Fill(tbl2);
                comboBox2.DataSource = tbl2;
                comboBox2.DisplayMember = "ACTION";
                adapter2.Dispose();
                cmd3.Dispose();
            }
        }

    Thanks, regards

    Read the article

  • Setting default values for object properties in AS3

    - by Paul T
    I'm an ActionScript newbie, so please bear with me. Below is a function, and I am curious how to set default property values for objects that are being created in a loop. In the example below, these properties are the same for each object created in the loop: titleTextField.selectable, titleTextField.wordWrap, titleTextField.x. If you pull these properties out of the loop, they are null because the TextField objects have not been created yet, but it seems silly to have to set them each time. What is the correct way to do this? Thanks!

        var titleTextFormat:TextFormat = new TextFormat();
        titleTextFormat.size = 10;
        titleTextFormat.font = "Arial";
        titleTextFormat.color = 0xfff200;

        for (var i=0; i<arrThumbPicList.length; i++) {
            var yPos = 55 * i
            var titleTextField:TextField = new TextField();
            titleTextField.selectable = false;
            titleTextField.wordWrap = true;
            titleTextField.text = arrThumbTitles[i];
            titleTextField.x = 106;
            titleTextField.y = 331 + yPos;
            container.addChild(titleTextField);
            titleTextField.setTextFormat(titleTextFormat);
        }

    Read the article

  • How to filter selected objects, then hide objects not selected

    - by ussteele
    Following up from yesterday... This portion of the code does work:

        $(document).ready(function(){
            $('#listMenu a').click(function (selected) {
                var getPage = $(this).attr('id');
                var getName = $(this).attr('name');
                //console.log(getPage);
                //console.log(getName);
                $(function() {
                    $("#" + getName ).show();
                    var getColor = $('#' + getName ).css('background-color');
                    $('#header' ).css('background', getColor);
                    $('#slideshow' ).css('background', getColor);
                    $('span').css('background', getColor);
                    $('#menuContainer').css('background', getColor);
                });
                $('li.listMenu ul').hide();
            });
        });

    I am able to get what I want selected, based on the vars getPage and getName; now I need to hide what is not selected. I have tried this:

        $(function() {
            var notSelected = $('div').filter(selected).attr('id', 'name');
            $(this).hide();
            console.log(notSelected);
        });

    placed just above:

        $('li.listMenu ul').hide();

    but it's not correct; remember I am really a newbie. What I see on screen is the newly selected items on top of what should be hidden. Again, any help is appreciated. -sjs

    Read the article

  • How to check whether a default browser is opened in the operating system (Java)?

    - by stempel1984
    Hi, I am a newbie here. During my work, I faced an interesting problem. I need to:
    1. check whether the default HTML browser is open;
    2. check whether the browser is minimized/maximized (simply, the window's state);
    3. get the URL address typed in the browser.
    If any of these conditions is not met, I have to open the browser in a maximized view with a desired URL address. I primarily wanted to do all of this in Java, but it occurred to me that I would have to employ several techniques/technologies and combine them appropriately to complete the functionality. But which ones? That's the problem. I just recalled the Windows API, but I'm not sure if it is of any help... Some users on another forum suggested that I should consider JNI (no experience at all)... I only know how to open the default browser (e.g. with the 'browse(URI uri)' method of the 'java.awt.Desktop' class) - that's too little to be proud of. Please give me some hints, maybe links to reasonable discussions; I would greatly appreciate any suggestions on how to approach the problem.
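    A minimal sketch, not from the original post: the window-state and address-bar checks have no portable Java API and would need native code (JNI against the Windows API, as the post suspects), so the isBrowserShowingUrl method below is a hypothetical stub for that part. The fallback of opening the default browser on a desired URL uses java.awt.Desktop, as the post already mentions.

        import java.awt.Desktop;
        import java.net.URI;

        public class BrowserCheck {

            // Hypothetical stub: answering "is the default browser open, maximized,
            // and showing this URL?" requires native calls; plain Java cannot
            // inspect another process's windows.
            static boolean isBrowserShowingUrl(String url) {
                return false;
            }

            public static void main(String[] args) throws Exception {
                String desiredUrl = "http://example.com/";   // illustrative URL
                if (!isBrowserShowingUrl(desiredUrl)
                        && Desktop.isDesktopSupported()
                        && Desktop.getDesktop().isSupported(Desktop.Action.BROWSE)) {
                    // Opens the system default browser; whether it comes up maximized
                    // is decided by the browser/OS, not by this call.
                    Desktop.getDesktop().browse(new URI(desiredUrl));
                }
            }
        }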

    Read the article

  • How to write a Perl CGI script that accepts an image file sent from a Java applet via URLConnection and writes it to the server?

    - by Brad Rock
    I created a Java applet for my web page that creates an image file. I want users to be able to run the applet, create unique images, click a button, and have the image they created saved to the web server. I think I have the applet-side code down for writing the image to a URLConnection: I first successfully wrote a file to the local disk while running the applet on my own system rather than on the web page, and then altered a few things to write to a URLConnection instead of to the local disk (although we shall see how that works too). I am now trying to write a CGI script in Perl that takes the image output from the URLConnection and writes the image to a file on my web server (note: I am a newbie to Perl). I have found many examples of how to do something similar with simple text coming from an applet and writing it to a text file, but I want to know how to apply the same concept to an image. In particular, how do I read in the image? I've seen text input read with read(STDIN, $some_variable) -- does the same thing work with an image? Likewise, how do I write the image file? What function do I use? Thanks for your help. I know that I am rather naive about all of this.
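    A hypothetical sketch of the applet half only (the Perl half is what the question asks about): posting the raw image bytes over an HttpURLConnection. Whatever goes into that request body is what the CGI script would then read back from STDIN, up to CONTENT_LENGTH bytes, in binary mode. The class name and the cgiUrl parameter are illustrative.

        import java.awt.image.BufferedImage;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import javax.imageio.ImageIO;

        public class ImageUploader {
            public static void upload(BufferedImage image, String cgiUrl) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(cgiUrl).openConnection();
                conn.setDoOutput(true);                       // we are sending a request body
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "image/png");
                OutputStream out = conn.getOutputStream();
                ImageIO.write(image, "png", out);             // stream the image straight into the request
                out.close();
                int status = conn.getResponseCode();          // forces the request to be sent
                if (status != 200) {
                    throw new RuntimeException("Upload failed with HTTP " + status);
                }
            }
        }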

    Read the article

  • Best (in your opinion) Git workflow for the case when releases are done on demand (in most cases 1-2 tickets at once)

    - by Robert
    I'm rather a Git newbie and I'm looking for your advice. In the company I work for we have a "workflow" with a single Git repo for our project and 2 branches: master and prod. All devs work on the master branch. When a ticket is done (from the dev perspective), we push to the repo, and if all tests pass, we make a release. The issue is that in most cases the request from the business guys sounds like: "please release ticket A", or "A && B". In most cases, I end up doing something like:

        git checkout prod
        git cherry-pick --no-commit commit_hash
        git commit -m "blah blah to prod" -a

    As you can see this is not a perfect solution, and I have a strong impression it is a road to nowhere, especially when change A depends on changes B and C. Do you have any suggestions on how to handle on-demand releases when several devs work on the same branch and the flow looks like I described above? All suggestions are welcome. I cannot change the business processes; they will have to stay as they are - unfortunately.

    Read the article

  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
    Introduction jQuery enjoys living inside pages which are built on top of ASP.NET MVC Framework. The ASP.NET MVC is a place where things are organized very well and it is quite hard to make them dirty, especially because the pattern enforces you on purity (you can still make it dirty if you want so ;) ). We all know how easy is to build a HTML table with a header row, footer row and table rows showing some data. With ASP.NET MVC we can do this pretty easy, but, the result will be pure HTML table which only shows data, but does not includes sorting, pagination or some other advanced features that we were used to have in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC Helper, but what if we want to make something from pure table in our own clean style? In one of my recent projects, I’ve been using the jQuery tablesorter and tablesorter.pager plugins that go along. You don’t need to know jQuery to make this work… You need to know little CSS to create nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach this plugin to your pure html table and a div for pagination and make your table with advanced sorting and pagination features.   Demo Project Resources The resources I’m using for this demo project are shown in the following solution explorer window print screen: Content/images – folder that contains all the up/down arrow images, pagination buttons etc. You can freely replace them with your own, but keep the names the same if you don’t want to change anything in the CSS we will built later. Content/Site.css – The main css theme, where we will add the theme for our table too Controllers/HomeController.cs – The controller I’m using for this project Models/Person.cs – For this demo, I’m using Person.cs class Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – required script to make the magic happens Views/Home/Index.cshtml – Index view (razor view engine) the other items are not important for the demo. ASP.NET MVC 1. Model In this demo I use only one Person class which defines Person entity with several properties. You can use your own model, maybe one which will access data from database or any other resource. Person.cs public class Person {     public string Name { get; set; }     public string Surname { get; set; }     public string Email { get; set; }     public int? Phone { get; set; }     public DateTime? DateAdded { get; set; }     public int? Age { get; set; }     public Person(string name, string surname, string email,         int? phone, DateTime? dateadded, int? age)     {         Name = name;         Surname = surname;         Email = email;         Phone = phone;         DateAdded = dateadded;         Age = age;     } } 2. View In our example, we have only one Index.chtml page where Razor View engine is used. Razor view engine is my favorite for ASP.NET MVC because it’s very intuitive, fluid and keeps your code clean. 3. Controller Since this is simple example with one page, we use one HomeController.cs where we have two methods, one of ActionResult type (Index) and another GetPeople() used to create and return list of people. 
HomeController.cs public class HomeController : Controller {     //     // GET: /Home/     public ActionResult Index()     {         ViewBag.People = GetPeople();         return View();     }     public List<Person> GetPeople()     {         List<Person> listPeople = new List<Person>();                  listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070,DateTime.Now, 25));                     listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));         listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));         listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));         listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));         listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));         listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));         listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));         listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));         listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));         listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));         listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));         listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));         return listPeople;     } }   TABLE CSS/HTML DESIGN Now, lets start with the implementation. First of all, lets create the table structure and the main CSS. 1. HTML Structure @{     Layout = null;     } <!DOCTYPE html> <html> <head>     <title>ASP.NET & jQuery</title>     <!-- referencing styles, scripts and writing custom js scripts will go here --> </head> <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th> value </th>                 </tr>             </thead>             <tbody>                 <tr>                     <td>value</td>                 </tr>             </tbody>             <tfoot>                 <tr>                     <th> value </th>                 </tr>             </tfoot>         </table>         <div id="pager">                      </div>     </div> </body> </html> So, this is the main structure you need to create for each of your tables where you want to apply the functionality we will create. Of course the scripts are referenced once ;). As you see, our table has class tablesorter and also we have a div with id pager. In the next steps we will use both these to create the needed functionalities. 
The complete Index.cshtml coded to get the data from controller and display in the page is: <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </thead>             <tbody>                 @{                     foreach (var p in ViewBag.People)                     {                                 <tr>                         <td>@p.Name</td>                         <td>@p.Surname</td>                         <td>@p.Email</td>                         <td>@p.Phone</td>                         <td>@p.DateAdded</td>                     </tr>                     }                 }             </tbody>             <tfoot>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </tfoot>         </table>         <div id="pager" style="position: none;">             <form>             <img src="@Url.Content("~/Content/images/first.png")" class="first" />             <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />             <input type="text" class="pagedisplay" />             <img src="@Url.Content("~/Content/images/next.png")" class="next" />             <img src="@Url.Content("~/Content/images/last.png")" class="last" />             <select class="pagesize">                 <option selected="selected" value="5">5</option>                 <option value="10">10</option>                 <option value="20">20</option>                 <option value="30">30</option>                 <option value="40">40</option>             </select>             </form>         </div>     </div> </body> So, mainly the structure is the same. I have added @Razor code to create table with data retrieved from the ViewBag.People which has been filled with data in the home controller. 2. CSS Design The CSS code I’ve created is: /* DEMO TABLE */ body {     font-size: 75%;     font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;     color: #232323;     background-color: #fff; } table { border-spacing:0; border:1px solid gray;} table.tablesorter thead tr .header {     background-image: url(images/bg.png);     background-repeat: no-repeat;     background-position: center right;     cursor: pointer; } table.tablesorter tbody td {     color: #3D3D3D;     padding: 4px;     background-color: #FFF;     vertical-align: top; } table.tablesorter tbody tr.odd td {     background-color:#F0F0F6; } table.tablesorter thead tr .headerSortUp {     background-image: url(images/asc.png); } table.tablesorter thead tr .headerSortDown {     background-image: url(images/desc.png); } table th { width:150px;            border:1px outset gray;            background-color:#3C78B5;            color:White;            cursor:pointer; } table thead th:hover { background-color:Yellow; color:Black;} table td { width:150px; border:1px solid gray;} PAGINATION AND SORTING Now, when everything is ready and we have the data, lets make pagination and sorting functionalities 1. 
jQuery Scripts referencing <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script> 2. jQuery Sorting and Pagination script   <script type="text/javascript">     $(function () {         $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })         .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });     }); </script> So, with only two lines of code, I’m using both tablesorter and tablesorterPager plugins, giving some options to both these. Options added: tablesorter - widthFixed: true – gives fixed width of the columns tablesorter - sortList[[0,0]] – An array of instructions for per-column sorting and direction in the format: [[columnIndex, sortDirection], ... ] where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for Ascending and 1 for Descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like: [[0,0],[1,0]] (source: http://tablesorter.com/docs/) tablesorterPager – container: $(“#pager”) – tells the pager container, the div with id pager in our case. tablesorterPager – size: the default size of each page, where I get the default value selected, so if you put selected to any other of the options in your select list, you will have this number of rows as default per page for the table too. END RESULTS 1. Table once the page is loaded (default results per page is 5 and is automatically sorted by 1st column as sortList is specified) 2. Sorted by Phone Descending 3. Changed pagination to 10 items per page 4. Sorted by Phone and Name (use SHIFT to sort on multiple columns) 5. Sorted by Date Added 6. Page 3, 5 items per page   ADDITIONAL ENHANCEMENTS We can do additional enhancements to the table. We can make search for each column. I will cover this in one of my next blogs. Stay tuned. DEMO PROJECT You can download demo project source code from HERE.CONCLUSION Once you finish with the demo, run your page and open the source code. You will be amazed of the purity of your code.Working with pagination in client side can be very useful. One of the benefits is performance, but if you have thousands of rows in your tables, you will get opposite result when talking about performance. Hence, sometimes it is nice idea to make pagination on back-end. So, the compromise between both approaches would be best to combine both of them. I use at most up to 500 rows on client-side and once the user reach the last page, we can trigger ajax postback which can get the next 500 rows using server-side pagination of the same data. I would like to recommend the following blog post http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx, which will help you understand how to return page results from repository. I hope this was helpful post for you. Wait for my next posts ;). Please do let me know your feedback. Best Regards, Hajan

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write series of articles using Windows Communication Framework (WCF) to develop client and server applications and this is the first part of that series. What is WCF As Juwal puts in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows, provides runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy and it’s implementation provides a set of industry standards and off the shelf plumbing including service hosting, instance management, reliability, transaction management, security etc such that it greatly increases productivity Scenario: Lets consider a typical bank customer trying to create an account, deposit amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services and each of them working with database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, data access layer, unit tests to verify end to end communication etc, nothing big stuff here and we dig into other features of the WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces i.e. services and clients. Services as the name implies provide functionality to execute various pieces of business logic on the server, and clients providing interaction to the end user. Services can be built with Web Services or with WCF. Service built on WCF have the advantage of binding independent, i.e. can run against TCP and HTTP protocol without any significant changes to the code. Solution Services Profile: For creating a new bank customer, getting details about existing customer ProfileContract ProfileService Checking Account: To get checking account balance, deposit or withdraw amount CheckingAccountContract CheckingAccountService Savings Account: To get savings account balance, deposit or withdraw amount SavingsAccountContract SavingsAccountService ServiceHost: To host services, i.e. running the services at particular address, binding and contract where client can connect to Client: Helps end user to use services like creating account and amount transfer between the accounts BankDAL: Data access layer to work with database     BankDAL It’s no brainer not to use an ORM as many matured products are available currently in market including Linq2Sql, Entity Framework (EF), LLblGenPro etc. For this exercise I am going to use Entity Framework 4.0, CTP 5 with code first approach. There are two approaches when working with data, data driven and code driven. In data driven we start by designing tables and their constrains in database and generate entities in code while in code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by the EF to create tables and table constrains. In previous versions the entity classes had  to derive from EF specific base classes. In EF 4 it  is not required to derive from any EF classes, the entities are not only persistence ignorant but also enable full test driven development using mock frameworks.  Application consists of 3 entities, Customer entity which contains Customer details; CheckingAccount and SavingsAccount to hold the respective account balance. 
We could have introduced an Account base class for CheckingAccount and SavingsAccount which is certainly possible with EF mappings but to keep it simple we are just going to follow 1 –1 mapping between entity and table mappings. Lets start out by defining a class called Customer which will be mapped to Customer table, observe that the class is simply a plain old clr object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in database. EF uses convention over configuration to generate the metadata resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constrains can be defined through attributes on Customer class or using fluent syntax (no need to muscle with xml files), CustomerConfiguration class. By defining constrains in a separate class we can maintain clean POCOs without corrupting entity classes with database specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against Customers property in BankDbContext are executed against Cusomers table. By convention EF looks for connection string with key of BankDbContext when working with the context.   We are going to define a helper class to work with Customer entity with methods for querying, adding new entity etc and these are known as repository classes, i.e., CustomerRepository   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code it is observable that the Query methods returns customers as IQueryable i.e. customers are retrieved only when actually used i.e. iterated. Returning as IQueryable also allows to execute filtering and joining statements from business logic using lamba expressions without cluttering the data access layer with tens of methods.   
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for Query and Add methods, with the help of C# generics and implementing repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use repository pattern with EF which we will implement in the subsequent articles along with WCF Unity life time managers by Drew Contracts It is very easy to follow contract first approach with WCF, define the interface and append ServiceContract, OperationContract attributes. IProfile contract exposes functionality for creating customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   ICheckingAccount contract exposes functionality for working with checking account, i.e., getting balance, deposit and withdraw of amount. ISavingsAccount contract looks the same as checking account.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   Services   Having covered the data access layer and contracts so far and here comes the core of the business logic, i.e. services.   
ProfileService implements the IProfile contract for creating customer and getting customer detail using CustomerRepository. 
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you shall observe that we are calling bankDBContext’s SaveChanges method and there is no save method specific to customer entity because EF manages all the changes centralized at the context level and all the pending changes so far are submitted in a batch and it is represented as Unit of Work. Similarly Checking service implements ICheckingAccount contract using CheckingAccountRepository, notice that we are throwing overdraft exception if the balance falls by zero. WCF has it’s own way of raising exceptions using fault contracts which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as a glue binding contracts with it’s services, exposing the endpoints. The services can be exposed either through the code or configuration file, configuration file is preferred as it allows run time changes to service behavior even after deployment. We have 3 services and for each of the service you need to define name (the class that implements the service with fully qualified namespace) and endpoint known as ABC, i.e. address, binding and contract. 
We are using netTcpBinding and have defined the base address for each of the contracts <system.serviceModel> <services> <service name="ProfileService.Profile"> <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Profile"/> </baseAddresses> </host> </service> <service name="CheckingAccountService.Checking"> <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Checking"/> </baseAddresses> </host> </service> <service name="SavingsAccountService.Savings"> <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Savings"/> </baseAddresses> </host> </service> </services> </system.serviceModel> Have to open the services by creating service host which will handle the incoming requests from clients.   using System;   namespace ServiceHost { class Program { static void Main(string[] args) { CreateHosts(); Console.ReadLine(); }   private static void CreateHosts() { CreateHost(typeof(ProfileService.Profile),"Profile Service"); CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service"); CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service"); }   private static void CreateHost(Type type, string hostDescription) { System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type); host.Open();   if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0 && host.ChannelDispatchers[0].Listener != null) Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri); else Console.WriteLine("Failed to start:" + hostDescription); } } } BankClient    The client has no knowledge about service business logic other than the functionality it exposes through the contract, end points and a proxy to work against. The endpoint data and server proxy can be generated by right clicking on the project reference and choosing 'Add Service Reference' and entering the service end point address. Or if you have access to source, you can manually reference contract dlls and update clients configuration file to point to the service end point if the server and client happens to be being built using .Net framework. One of the pros with the manual approach is you don't have to work against messy code generated files.
<system.serviceModel> <client> <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile" binding="netTcpBinding" contract="ProfileContract.IProfile"/> <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking" binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings" binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>   </client> </system.serviceModel> The client uses a façade to connect to the services   using System.ServiceModel; using CheckingAccountContract; using ProfileContract; using SavingsAccountContract;   namespace Client { public class ProxyFacade { public static IProfile ProfileProxy() { return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel(); }   public static ICheckingAccount CheckingAccountProxy() { return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount")) .CreateChannel(); }   public static ISavingsAccount SavingsAccountProxy() { return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount")) .CreateChannel(); }   } }   With that in place, lets get our unit tests going   using System; using System.Diagnostics; using BankDAL.Model; using NUnit.Framework; using ProfileContract;   namespace Client { [TestFixture] public class Tests { private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount) { ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount); ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount); }   private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount) { ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount); ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount); }     [Test] public void CreateAndGetProfileTest() { IProfile profile = ProxyFacade.ProfileProxy(); const string customerName = "Tom"; int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id; Customer customer = profile.GetCustomer(customerId); Assert.AreEqual(customerName,customer.FullName); }   [Test] public void DepositWithDrawAndTransferAmountTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss"); var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)); // Deposit to Savings ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100); ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25); Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); // Withdraw ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));   // Deposit to Checking ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60); ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40); Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); // Withdraw ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer from Savings to Checking TransferFundsFromSavingsToCheckingAccount(customer.Id,10); Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer 
from Checking to Savings TransferFundsFromCheckingToSavingsAccount(customer.Id, 50); Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); }   [Test] public void FundTransfersWithOverDraftTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");   var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;   ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100); TransferFundsFromSavingsToCheckingAccount(customerId, 80); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));   try { TransferFundsFromSavingsToCheckingAccount(customerId, 30); } catch (Exception e) { Debug.WriteLine(e.Message); }   Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId)); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); } } }
We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new channel instance affects it in subsequent articles. The first two test cases deal with the creation of a Customer and the deposit, withdrawal and transfer of money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the Savings account, followed by a transfer of $80 into the Checking account, leaving $20 in Savings. The customer then initiates a $30 transfer from Savings to Checking, resulting in an overdraft exception on Savings while $30 is still deposited to Checking. As we are not running both requests in a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the start-up project and start the solution. Run the test cases either from the NUnit client or TestDriven.Net/Resharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
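As a teaser for the transaction handling post, here is a minimal sketch (not part of the sample above) of how the transfer helper could be wrapped in a System.Transactions TransactionScope so that the deposit is rolled back when the matching withdrawal fails. It assumes the netTcpBinding endpoints and the service operations are configured to flow transactions, which the configuration shown here does not yet do; add a reference to System.Transactions and place it alongside the existing helpers in the test fixture.

using System.Transactions;

// Hypothetical transactional variant of the transfer helper used in the tests above.
// Assumption: transactionFlow is enabled on the bindings and the service operations
// are marked for transaction flow, which is not shown in this post.
private void TransferFundsFromSavingsToCheckingAccountTransactional(int customerId, decimal amount)
{
    using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
        ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount); // throws on overdraft
        scope.Complete(); // commit only when both calls succeed
    }
}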

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So… let's get down and dirty and start setting up the Test Rig.

What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect, and a very useful video explaining everything you wanted to know about Windows Azure Connect.

Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server; the sizes on offer are (Virtual Machine Size – CPU Cores – Memory – Cost Per Hour):
Extra Small – shared core – 768 MB – $0.04
Small – 1 core – 1.75 GB – $0.12
Medium – 2 cores – 3.50 GB – $0.24
Large – 4 cores – 7.00 GB – $0.48
Extra Large – 8 cores – 14.00 GB – $0.96
You might want to review the Windows Azure Pricing FAQ.

Let's Get Started building the Test Rig… The configuration used in this post (Machine – Role – Comments) is:
VM – 1 – Domain Controller for Playpit.com – On Premise
VM – 2 – TFS, Test Controller – On Premise
VM – 3 – Test Agent – Cloud
In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

I. Let's start building VM – 3: The Test Agent. Download the Windows Azure SDK and Tools, open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
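For reference, the template-generated WorkerRole.cs looks roughly like the sketch below. This is only an illustration of the Run()/OnStart() pair mentioned above; the file the SDK generates for you may differ slightly and, as noted, needs no changes.

using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Keep the role instance alive; the test agent service will do the actual work.
        while (true)
        {
            Thread.Sleep(10000);
        }
    }

    public override bool OnStart()
    {
        // Set the maximum number of concurrent connections.
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }
}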
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
From the View menu bar select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role. Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc, and enter the machine name in the pop-up) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure; you will be able to log in with the user name and password you specified in the config file for the keys 'RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword' (just enter the password unencrypted), then fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, the machine is connected to the domain and the Connect service is running successfully. If yes, give yourself a pat on the back, you are 80% mission accomplished! Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig, then click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; this enables communication between the test controller and the test agents, and between the test agents themselves. Click Save. Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller. Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN, 'en_visual_studio_agents_2010_x86_x64_dvd_509679.iso', extract the iso and navigate to where you have extracted it. In my case, I have extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN. Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (UAC might block some things otherwise). In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that any policies or restrictions don’t block you.        Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve and proceed. As per my experience, you may run in to either a permission or a firewall blocking communication issue.        And now the moment of truth! Go to VM –2 open up Visual Studio and from the Test Menu select Manage Test Controller       Mission Accomplished! You should be able to see the Test Agent that you have just configured here,         VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate, you can follow the links and videos below, Blog Posts: - Part 1 – Performance Testing using Visual Studio 2010 Ultimate - Part 2 – Performance Testing using Visual Studio 2010 Ultimate - Part 3 – Performance Testing using Visual Studio 2010 Ultimate Videos: - Test Tools Configuration & Settings in Visual Studio - Why & How to Record Web Performance Tests in Visual Studio Ultimate - Goal Driven Load Testing using Visual Studio Ultimate Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig, create a new Test settings file, and change the Test Execution method to ‘Remote Execution’ and select the test controller you have configured the Worker Role Test Agent against in our case VM – 2 So, go on, fire off a test run and see the results of the test being executed on the Azur-ed Test Rig. Review and What’s next? A quick recap of the benefits of running the Test Rig in the cloud and what i will be covering in the next blog post AND I would love to hear your feedback! Advantages Utilizing the power of Azure compute to run a heavy virtual user load. Benefiting from the Azure flexibility, destroy Test Agents when not in use, takes < 25 minutes to spin up a new Test Agent. Most important test Network Latency, (network latency and speed of connection are two different things – usually network latency is very hard to test), by placing the Test Agents in Microsoft Data centres around the globe, one can actually test the lag in transferring the bytes not because of a slow connection but because the page has been requested from the other side of the globe. Next Steps The process of spinning up the Test Agents in windows Azure is not 100% automated. I am working on the Worker process and power shell scripts to make the role deployment, unattended install of test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. See you in Part III.   Share this post : CodeProject

    Read the article

  • Scroll Viewer not visible in wpf DataGrid

    - by cre-johnny07
    I have a datagrid in a grid but the scrollviewer is not visibile even though I made it auto. Below in my code. I can't figure out where's the problem. <Grid Grid.Row="0" Grid.Column="0"> <Grid.RowDefinitions> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"></ColumnDefinition> <ColumnDefinition Width="Auto"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock Text="Doctor Name" Grid.Row="0" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Doctor Address" Grid.Row="1" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Entry Note" Grid.Row="2" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Join Date" Grid.Row="3" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Default Discount" Grid.Row="4" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Discount Valid Till" Grid.Row="5" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Employee Name" Grid.Row="6" Grid.Column="0" Margin="5,5,0,0"/> <Grid Grid.Row="7" Grid.Column="0" Grid.ColumnSpan="2"> <Grid.ColumnDefinitions> <ColumnDefinition></ColumnDefinition> <ColumnDefinition></ColumnDefinition> <ColumnDefinition></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock Text="Report Type" Grid.Row="0" Grid.Column="0" Margin="5,5,0,0"/> <ComboBox Grid.Row="0" Grid.Column="1" Name="cmbReportType" Text="{Binding CurrentEntity.ReportType}"/> <Button Grid.Row="0" Grid.Column="2" Name="btnAddDetail" Content="Add Details" Command="{Binding AddDetailsCommand}"/> </Grid> <TextBox Grid.Row="0" Grid.Column="1" Margin="5,5,0,0" Width="190" Name="txtDocName" Text="{Binding CurrentEntity.RefName}"/> <TextBox Grid.Row="1" Grid.Column="1" Margin="5,5,0,0" Width="190" Height="75" Name="txtDocAddress" Text="{Binding CurrentEntity.RefAddress}"/> <TextBox Grid.Row="2" Grid.Column="1" Margin="5,5,0,0" Width="190" Height="100" Name="txtEntryNote" Text="{Binding CurrentEntity.EntryNotes}"/> <Custom:DatePicker Grid.Row="3" Grid.Column="1" Margin="5,3,0,0" Width="125" Name="dtpJoinDate" Height="24" HorizontalAlignment="Left" VerticalAlignment="Top" SelectedDate="{Binding CurrentEntity.DateStarted}" SelectedDateFormat="Short"/> <TextBox Grid.Row="4" Grid.Column="1" Height="25" Width="75" Name="txtDefaultDiscount" HorizontalAlignment="Left" Margin="5,0,0,0" VerticalAlignment="Top" Text="{Binding CurrentEntity.DefaultDiscount}"/> <Custom:DatePicker Grid.Row="5" Grid.Column="1" Margin="5,3,0,0" Width="125" Name="dtpValidTill" Height="24" HorizontalAlignment="Left" VerticalAlignment="Top" SelectedDate="{Binding CurrentEntity.DefaultDiscountValidTill}" SelectedDateFormat="Short"/> <ComboBox Grid.Row="6" Grid.Column="1" Margin="5,3,0,0" Width="190" Height="30" Name="cmbEmployeeName" ItemsSource="{Binding Employees}" DisplayMemberPath="FullName" SelectedIndex="{Binding SelecteIndex}"> </ComboBox> <Custom:DataGrid Grid.Row="8" Grid.Column="0" Grid.ColumnSpan="2" ItemsSource="{Binding XYZ}" AutoGenerateColumns="False" Name="grdTestDept"> <Custom:DataGrid.Columns> <Custom:DataGridTextColumn Binding="{Binding dep_id}" Width="40" Header="ID"/> <Custom:DataGridTextColumn 
Binding="{Binding dep_name}" Width="125" Header="Name"/> <Custom:DataGridTextColumn Binding="{Binding default_data}" Width="100" Header="Default Data"/> </Custom:DataGrid.Columns> </Custom:DataGrid> </Grid> <Grid Grid.Row="0" Grid.Column="1" Grid.RowSpan="9"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" MinWidth="43"></ColumnDefinition> <ColumnDefinition Width="Auto" MinWidth="150"></ColumnDefinition> <ColumnDefinition Width="Auto" MinWidth="50"></ColumnDefinition> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="34*" ></RowDefinition> <RowDefinition Height="337.88*"></RowDefinition> </Grid.RowDefinitions> <TextBlock Text="Name: " Grid.Row="0" Grid.Column="0" Margin="5,4,0,0" /> <cc:ValueEnabledCombo Grid.Column="1" x:Name="cmbfilEmployeeName" Width="150" Height="30" Margin="5,4,0,0" VerticalAlignment="Top" SelectedIndex="0" ItemsSource="{Binding Employees}" DisplayMemberPath="FullName" SelectedValuePath="EmployeeId" cc:ValueEnabledCombo.SelectionChanged="{Binding SelectionChangedCommand}"> </cc:ValueEnabledCombo> <Button Grid.Column="2" Name="btnReport" Width="50" Content="Report" Height="28" Margin="5,4,0,0" Command="{Binding ReportCommand}" VerticalAlignment="Top" /> <Grid Grid.Row="1" Grid.Column="0" Grid.ColumnSpan="3"> <Custom:DataGrid ItemsSource="{Binding DoctorList}" AutoGenerateColumns="False" Name="grdDoctor" ScrollViewer.HorizontalScrollBarVisibility="Auto" ScrollViewer.VerticalScrollBarVisibility="Auto"> <Custom:DataGrid.Columns> <Custom:DataGridTextColumn Binding="{Binding RefName}" Width="Auto" Header="Doctor Name"/> <Custom:DataGridTextColumn Binding="{Binding EmployeeFullName}" Width="Auto" Header="Employee Name"/> </Custom:DataGrid.Columns> </Custom:DataGrid> </Grid> </Grid> </Grid>

    Read the article

  • Problem with Google Web Toolkit Maven Plugin

    - by arreche
    I got an error following the PetClinic GWT application in less then 30 minutes Any idea? C:\Users\user\Desktop\petclinic>mvn -e gwt:run + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building petclinic [INFO] task-segment: [gwt:run] [INFO] ------------------------------------------------------------------------ [INFO] Preparing gwt:run [INFO] [aspectj:compile {execution: default}] [INFO] [resources:resources {execution: default-resources}] [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 4 resources [INFO] [compiler:compile {execution: default-compile}] [INFO] Nothing to compile - all classes are up to date Downloading: http://repository.springsource.com/maven/bundles/release/org/codeha us/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.release (http://repository.sp ringsource.com/maven/bundles/release) Downloading: http://repository.springsource.com/maven/bundles/external/org/codeh aus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.external (http://repository.s pringsource.com/maven/bundles/external) Downloading: http://repository.springsource.com/maven/bundles/milestone/org/code haus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.milestone (http://repository. springsource.com/maven/bundles/milestone) Downloading: http://maven.springframework.org/milestone/org/codehaus/plexus/plex us-components/1.1.6/plexus-components-1.1.6.pom [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository spring-maven-milestone (http://maven.springframework.org/mileston e) Downloading: http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven/org /codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/g wt/maven) Downloading: http://repo1.maven.org/maven2/org/codehaus/plexus/plexus-components /1.1.6/plexus-components-1.1.6.pom [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error building POM (may not be this project's POM). Project ID: org.codehaus.plexus:plexus-components:pom:1.1.6 Reason: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus .plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-compo nents:pom:1.1.6 [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus :plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for pr oject: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus. 
plexus:plexus-components:pom:1.1.6 org.codehaus.plexus:plexus-compiler-api:jar:1.5.3 from the specified remote repositories: com.springsource.repository.bundles.release (http://repository.springsource.co m/maven/bundles/release), com.springsource.repository.bundles.milestone (http://repository.springsource. com/maven/bundles/milestone), spring-maven-snapshot (http://maven.springframework.org/snapshot), com.springsource.repository.bundles.external (http://repository.springsource.c om/maven/bundles/external), spring-maven-milestone (http://maven.springframework.org/milestone), central (http://repo1.maven.org/maven2), gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven), codehaus.org (http://snapshots.repository.codehaus.org), JBoss Repo (http://repository.jboss.com/maven2) Path to dependency: 1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(Defa ultLifecycleExecutor.java:711) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandalone Goal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(Defau ltLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHan dleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegmen ts(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLi fecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:6 0) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl. java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.artifact.resolver.ArtifactResolutionException: Unabl e to get dependency information: Unable to read the metadata file for artifact ' org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.p lexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for pr oject org.codehaus.plexus:plexus-components:pom:1.1.6 org.codehaus.plexus:plexus-compiler-api:jar:1.5.3 from the specified remote repositories: com.springsource.repository.bundles.release (http://repository.springsource.co m/maven/bundles/release), com.springsource.repository.bundles.milestone (http://repository.springsource. 
com/maven/bundles/milestone), spring-maven-snapshot (http://maven.springframework.org/snapshot), com.springsource.repository.bundles.external (http://repository.springsource.c om/maven/bundles/external), spring-maven-milestone (http://maven.springframework.org/milestone), central (http://repo1.maven.org/maven2), gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven), codehaus.org (http://snapshots.repository.codehaus.org), JBoss Repo (http://repository.jboss.com/maven2) Path to dependency: 1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(D efaultArtifactCollector.java:430) at org.apache.maven.artifact.resolver.DefaultArtifactCollector.collect(D efaultArtifactCollector.java:74) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTra nsitively(DefaultArtifactResolver.java:316) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTra nsitively(DefaultArtifactResolver.java:304) at org.apache.maven.plugin.DefaultPluginManager.ensurePluginContainerIsC omplete(DefaultPluginManager.java:835) at org.apache.maven.plugin.DefaultPluginManager.getConfiguredMojo(Defaul tPluginManager.java:647) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPlugi nManager.java:468) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(Defa ultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.artifact.metadata.ArtifactMetadataRetrievalException : Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-comp iler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.c odehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexu s-components:pom:1.1.6 at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocat edProject(MavenMetadataSource.java:200) at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocat edArtifact(MavenMetadataSource.java:94) at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(D efaultArtifactCollector.java:387) ... 24 more Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent : org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components: pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6 at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(D efaultMavenProjectBuilder.java:1396) at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(D efaultMavenProjectBuilder.java:1407) at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(D efaultMavenProjectBuilder.java:1407) at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(Def aultMavenProjectBuilder.java:823) at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromReposito ry(DefaultMavenProjectBuilder.java:255) at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocat edProject(MavenMetadataSource.java:163) ... 26 more Caused by: org.apache.maven.project.InvalidProjectModelException: Parse error re ading POM. Reason: expected START_TAG or END_TAG not TEXT (position: TEXT seen . ..<role>Developer</role>\n 6878/?\r</... 
@163:16) for project org.codehaus .plexus:plexus at C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8\ plexus-1.0.8.pom at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(Default MavenProjectBuilder.java:1610) at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(Default MavenProjectBuilder.java:1571) at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepo sitory(DefaultMavenProjectBuilder.java:562) at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(D efaultMavenProjectBuilder.java:1392) ... 31 more Caused by: org.codehaus.plexus.util.xml.pull.XmlPullParserException: expected ST ART_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16) at hidden.org.codehaus.plexus.util.xml.pull.MXParser.nextTag(MXParser.ja va:1095) at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseDeveloper(MavenXp p3Reader.java:1389) at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseModel(MavenXpp3Re ader.java:1944) at org.apache.maven.model.io.xpp3.MavenXpp3Reader.read(MavenXpp3Reader.j ava:3912) at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(Default MavenProjectBuilder.java:1606) ... 34 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11 seconds [INFO] Finished at: Fri May 21 20:28:23 BST 2010 [INFO] Final Memory: 45M/205M [INFO] ------------------------------------------------------------------------

    Read the article

  • Rails with Oracle often got "no listener" error

    - by qichunren
    I am on Rails 2.3.5 and use oracle 10 as my database,use oracle_adapter ,ruby-oci8 to connect oracle host. But I often got exception as the log info show: Completed in 463ms (View: 18, DB: 166) | 200 OK [http://192.168.30.128/auctions?page=1] /!\ FAILSAFE /!\ Mon Feb 01 19:02:11 +0800 2010 Status: 500 Internal Server Error ORA-12541: TNS:no listener env.c:257:in oci8lib.so /home/qichunren/.gem/ruby/1.8/gems/ruby-oci8-1.0.7/lib/oci8.rb:229:in `initialize' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:623:in `new' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:623:in `new_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:659:in `initialize' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:35:in `new' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:35:in `oracle_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' /usr/local/ruby187/lib/ruby/1.8/monitor.rb:242:in `synchronize' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:9:in `cache' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:28:in `call' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:25:in `call' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/params_parser.rb:15:in `call' 
/home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/session/cookie_store.rb:93:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/failsafe.rb:26:in `call' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:114:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:34:in `run' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:108:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/cgi_process.rb:44:in `dispatch_cgi' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:101:in `dispatch_cgi' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:27:in `dispatch' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:76:in `process' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:74:in `synchronize' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:74:in `process' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:159:in `process_client' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:158:in `each' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:158:in `process_client' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `initialize' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `new' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `initialize' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `new' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:282:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:281:in `each' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:281:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:128:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/command.rb:212:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:281 /home/qichunren/.gem/ruby/1.8/bin/mongrel_rails:19:in `load' /home/qichunren/.gem/ruby/1.8/bin/mongrel_rails:19 it seems that connection to oracle often disconnect.it show oracle error:ORA-12541: TNS:no listener How to fix this ? Many tks. 
oci8.c:270:in oci8lib.so: ORA-12541: TNS:no listener (OCIError) from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:223:in new' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:223:innew_connection' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:328:in initialize' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:24:innew' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:24:in initialize' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_connection.rb:9:innew' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_connection.rb:9:in create' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_adapter.rb:50:inoracle_enhanced_connection' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/c onnection_specification.rb:291:in send' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/c onnection_specification.rb:291:inconnection=' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/c onnection_specification.rb:259:in retrieve_connection' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/c onnection_specification.rb:78:inconnection' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:2438:in quoted_table_ name' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:1259:infind_one' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:1250:in find_from_ids ' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:504:infind' from script/maintenance/adjust_settlement.rb:19

    Read the article

  • Problem with cometd and jetty 6 / 7

    - by Ceilingfish
    Hi chaps, I'm trying to get started with cometd (http://cometd.org/) and jetty 6 or 7, but I seem to be having problems. I've got an ant script that packages my code into a war with the cometd 1.1.1 binaries and the jetty binaries appropriate to the version of jetty I deploy the war to (so the 7.1.2.v20100523 binaries when I deploy to jetty 7.1.2.v20100523, and the 6.1.24 binaries when I deploy to 6.1.24). I first tried a setup with version 7.1.2.v20100523, but when I tried to deploy I got a very long stack trace, a sample of which is:

    2010-05-26 15:32:12.906:WARN::Problem processing jar entry org/eclipse/jetty/util/MultiPartOutputStream.class
    java.io.IOException: Invalid resource
        at org.eclipse.jetty.util.resource.URLResource.getInputStream(URLResource.java:204)
        at org.eclipse.jetty.util.resource.JarResource.getInputStream(JarResource.java:113)
        at org.eclipse.jetty.annotations.AnnotationParser$2.processEntry(AnnotationParser.java:575)
        at org.eclipse.jetty.webapp.JarScanner.matched(JarScanner.java:152)
        at org.eclipse.jetty.util.PatternMatcher.matchPatterns(PatternMatcher.java:82)
        at org.eclipse.jetty.util.PatternMatcher.match(PatternMatcher.java:64)
        at org.eclipse.jetty.webapp.JarScanner.scan(JarScanner.java:75)
        at org.eclipse.jetty.annotations.AnnotationParser.parse(AnnotationParser.java:587)
        at org.eclipse.jetty.annotations.AbstractConfiguration.parseWebInfLib(AbstractConfiguration.java:107)
        at org.eclipse.jetty.annotations.AnnotationConfiguration.configure(AnnotationConfiguration.java:68)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:992)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55)
        at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36)
        at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182)
        at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497)
        at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135)
        at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileChanged(ScanningAppProvider.java:77)
        at org.eclipse.jetty.util.Scanner.reportChange(Scanner.java:490)
        at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:355)
        at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306)
        at org.eclipse.jetty.util.Scanner$1.run(Scanner.java:258)
        at java.util.TimerThread.mainLoop(Timer.java:512)
        at java.util.TimerThread.run(Timer.java:462)

    2010-05-26 15:32:12.907:WARN::Problem processing jar entry org/eclipse/jetty/util/MultiPartWriter.class
    java.io.IOException: Invalid resource
        at org.eclipse.jetty.util.resource.URLResource.getInputStream(URLResource.java:204)
        at org.eclipse.jetty.util.resource.JarResource.getInputStream(JarResource.java:113)
        at org.eclipse.jetty.annotations.AnnotationParser$2.processEntry(AnnotationParser.java:575)
        at org.eclipse.jetty.webapp.JarScanner.matched(JarScanner.java:152)
        at org.eclipse.jetty.util.PatternMatcher.matchPatterns(PatternMatcher.java:82)
        at org.eclipse.jetty.util.PatternMatcher.match(PatternMatcher.java:64)
        at org.eclipse.jetty.webapp.JarScanner.scan(JarScanner.java:75)
        at org.eclipse.jetty.annotations.AnnotationParser.parse(AnnotationParser.java:587)
        at org.eclipse.jetty.annotations.AbstractConfiguration.parseWebInfLib(AbstractConfiguration.java:107)
        at org.eclipse.jetty.annotations.AnnotationConfiguration.configure(AnnotationConfiguration.java:68)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:992)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55)
        at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36)
        at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182)
        at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497)
        at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135)
        at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileChanged(ScanningAppProvider.java:77)
        at org.eclipse.jetty.util.Scanner.reportChange(Scanner.java:490)
        at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:355)
        at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306)
        at org.eclipse.jetty.util.Scanner$1.run(Scanner.java:258)
        at java.util.TimerThread.mainLoop(Timer.java:512)
        at java.util.TimerThread.run(Timer.java:462)

    2010-05-26 15:32:12.907:WARN::Problem processing jar entry org/eclipse/jetty/util/Attributes.class
    java.io.IOException: Invalid resource
        at org.eclipse.jetty.util.resource.URLResource.getInputStream(URLResource.java:204)
        at org.eclipse.jetty.util.resource.JarResource.getInputStream(JarResource.java:113)
        at org.eclipse.jetty.annotations.AnnotationParser$2.processEntry(AnnotationParser.java:575)
        at org.eclipse.jetty.webapp.JarScanner.matched(JarScanner.java:152)
        at org.eclipse.jetty.util.PatternMatcher.matchPatterns(PatternMatcher.java:82)
        at org.eclipse.jetty.util.PatternMatcher.match(PatternMatcher.java:64)
        at org.eclipse.jetty.webapp.JarScanner.scan(JarScanner.java:75)
        at org.eclipse.jetty.annotations.AnnotationParser.parse(AnnotationParser.java:587)
        at org.eclipse.jetty.annotations.AbstractConfiguration.parseWebInfLib(AbstractConfiguration.java:107)
        at org.eclipse.jetty.annotations.AnnotationConfiguration.configure(AnnotationConfiguration.java:68)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:992)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55)
        at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36)
        at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182)
        at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497)
        at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135)
        at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileChanged(ScanningAppProvider.java:77)
        at org.eclipse.jetty.util.Scanner.reportChange(Scanner.java:490)
        at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:355)
        at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306)
        at org.eclipse.jetty.util.Scanner$1.run(Scanner.java:258)
        at java.util.TimerThread.mainLoop(Timer.java:512)
        at java.util.TimerThread.run(Timer.java:462)

    It seemed to go through all the jetty binaries and complain about each class file.
    When I tried to deploy to 6.1.24 I got:

    org.mortbay.util.MultiException[java.lang.NoClassDefFoundError: org/eclipse/jetty/util/ajax/JSON$Source, java.lang.NoClassDefFoundError: org/eclipse/jetty/util/thread/ThreadPool]
        at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:656)
        at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
        at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
        at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517)
        at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
        at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
        at org.mortbay.jetty.Server.doStart(Server.java:224)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.mortbay.start.Main.invokeMain(Main.java:194)
        at org.mortbay.start.Main.start(Main.java:534)
        at org.mortbay.start.Main.start(Main.java:441)
        at org.mortbay.start.Main.main(Main.java:119)

    My web.xml looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
             version="2.5">

      <servlet>
        <servlet-name>cometd</servlet-name>
        <servlet-class>org.cometd.server.continuation.ContinuationCometdServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
      </servlet>

      <servlet-mapping>
        <servlet-name>cometd</servlet-name>
        <url-pattern>/cometd/*</url-pattern>
      </servlet-mapping>

      <servlet>
        <servlet-name>initializer</servlet-name>
        <servlet-class>uk.co.dubit.nexus.comet.BayeuxInitializer</servlet-class>
        <load-on-startup>2</load-on-startup>
      </servlet>

      <!--
      <filter>
        <filter-name>cross-origin</filter-name>
        <filter-class>org.eclipse.jetty.servlets.CrossOriginFilter</filter-class>
      </filter>
      <filter-mapping>
        <filter-name>cross-origin</filter-name>
        <url-pattern>/cometd/*</url-pattern>
      </filter-mapping>
      -->

    </web-app>

    Note that the cross-origin filter is commented out; the class didn't seem to exist when I tried to run on 6.1.24 (which, as far as I understand, is the correct behaviour, yes?). Sorry for the noob question, but does anyone know what I'm doing wrong here? Regards, Tom
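    For reference, the question doesn't show the uk.co.dubit.nexus.comet.BayeuxInitializer class, so the following is only a minimal sketch of what a cometd 1.x initializer servlet of this shape typically looks like. The class name is taken from the web.xml above, but the body is an assumption: it presumes the cometd 1.x convention that the cometd servlet publishes its Bayeux object as a servlet context attribute named "org.cometd.bayeux" (the attribute name and the org.cometd.Bayeux type should be verified against the cometd 1.1.1 javadoc).

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;

    import org.cometd.Bayeux;

    // Illustrative sketch only: the real BayeuxInitializer is not shown in the question,
    // so this is a guess at its shape, not the asker's actual code.
    public class BayeuxInitializer extends HttpServlet {

        @Override
        public void init() throws ServletException {
            // The cometd servlet (load-on-startup 1) is assumed to publish the Bayeux
            // object in the servlet context under "org.cometd.bayeux"; check the
            // cometd 1.1.1 javadoc for the exact attribute name.
            Bayeux bayeux = (Bayeux) getServletContext().getAttribute("org.cometd.bayeux");
            if (bayeux == null) {
                throw new ServletException("Bayeux object not found; is the cometd servlet loaded first?");
            }
            // Register channels or services against the Bayeux object here.
        }
    }

    Because the cometd servlet is declared with load-on-startup 1 and the initializer with load-on-startup 2, the container initializes the cometd servlet first, so the Bayeux attribute should already be present by the time init() runs.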

    Read the article

< Previous Page | 287 288 289 290 291 292 293 294 295 296 297 298  | Next Page >