Search Results

Search found 10597 results on 424 pages for 'dynamic attributes'.


  • PHP parsing XML file with and without namespaces

    - by Mike
    I need to get an XML file into a database. That part is not the problem: I can read it, parse it, and create objects to map to the DB. The problem is that the XML file sometimes contains namespaces and sometimes does not. Furthermore, sometimes there is no namespace defined at all. So what I first got was something like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <struct xmlns:b="http://www.w3schools.com/test/">
          <objects>
            <object>
              <node_1>value1</node_1>
              <node_2>value2</node_2>
              <node_3 iso_land="AFG"/>
              <coords lat="12.00" long="13.00"/>
            </object>
          </objects>
        </struct>

    And the parsing:

        $t = $xml->xpath('/objects/object');
        foreach($nodes AS $node) {
            if($t[0]->$node) {
                $obj->$node = (string) $t[0]->$node;
            }
        }

    That's fine as long as there are no namespaces. Here comes the XML file with namespaces:

        <?xml version="1.0" encoding="UTF-8"?>
        <b:struct xmlns:b="http://www.w3schools.com/test/">
          <b:objects>
            <b:object>
              <b:node_1>value1</b:node_1>
              <b:node_2>value2</b:node_2>
              <b:node_3 iso_land="AFG"/>
              <b:coords lat="12.00" long="13.00"/>
            </b:object>
          </b:objects>
        </b:struct>

    I now came up with something like this:

        $xml = simplexml_load_file("test.xml");
        $namespaces = $xml->getNamespaces(TRUE);
        $ns = count($namespaces) ? 'a:' : '';
        $xml->registerXPathNamespace("a", "http://www.w3schools.com/test/");

        $nodes = array('node_1', 'node_2');
        $obj = new stdClass();
        foreach($nodes AS $node) {
            $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.$node);
            if($t[0]) {
                $obj->$node = (string) $t[0];
            }
        }

        $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.'node_3');
        if($t[0]) {
            $obj->iso_land = (string) $t[0]->attributes()->iso_land;
        }

        $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.'coords');
        if($t[0]) {
            $obj->lat = (string) $t[0]->attributes()->lat;
            $obj->long = (string) $t[0]->attributes()->long;
        }

    That works both with and without namespaces, but I feel there must be a better way. Before that, I could do something like this:

        $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object');
        foreach($nodes AS $node) {
            if($t[0]->$node) {
                $obj->$node = (string) $t[0]->$node;
            }
        }

    But that just won't work with namespaces.
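
    A possible simplification (my sketch, not part of the original question): XPath's local-name() function matches elements regardless of any namespace prefix, so a single query can serve both document shapes above. The helper name is mine; element names follow the sample XML.

        $xml = simplexml_load_file("test.xml");

        // Fetch the first node under /struct/objects/object whose local name
        // matches, ignoring any namespace prefix ('object' and 'b:object' both qualify).
        function first_node(SimpleXMLElement $xml, $name) {
            $t = $xml->xpath(
                '/*[local-name()="struct"]/*[local-name()="objects"]' .
                '/*[local-name()="object"]/*[local-name()="' . $name . '"]');
            return isset($t[0]) ? $t[0] : null;
        }

        $obj = new stdClass();
        foreach (array('node_1', 'node_2') as $name) {
            if ($n = first_node($xml, $name)) {
                $obj->$name = (string) $n;
            }
        }
        if ($n = first_node($xml, 'node_3')) {
            $obj->iso_land = (string) $n->attributes()->iso_land;
        }
        if ($n = first_node($xml, 'coords')) {
            $obj->lat  = (string) $n->attributes()->lat;
            $obj->long = (string) $n->attributes()->long;
        }

    Unprefixed attributes are in no namespace in both document variants, so attributes() works unchanged either way.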


  • OSX 10.6 Cisco IPSEC strange behavior

    - by tair
    I'm trying to connect to the Cisco IPSEC VPN of my company over DSL Internet. I managed to connect successfully using the Cisco VPN Client, and now I'm trying to switch to the OS X 10.6 native client because of licensing issues. The problem is that the connection fails with a dialog box containing the message: "The negotiation with the VPN server failed. Verify the server address and try reconnecting." I checked the logs:

        Jun 29 13:10:39 racoon[4551]: Connecting.
        Jun 29 13:10:39 racoon[4551]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 1).
        Jun 29 13:10:39 racoon[4551]: IKEv1 Phase1 AUTH: success. (Initiator, Aggressive-Mode Message 2).
        Jun 29 13:10:39 racoon[4551]: IKE Packet: receive success. (Initiator, Aggressive-Mode message 2).
        Jun 29 13:10:39 racoon[4551]: IKEv1 Phase1 Initiator: success. (Initiator, Aggressive-Mode).
        Jun 29 13:10:39 racoon[4551]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 3).
        Jun 29 13:10:42 racoon[4551]: IKE Packet: transmit success. (Mode-Config message).
        Jun 29 13:10:42 racoon[4551]: IKEv1 XAUTH: success. (XAUTH Status is OK).
        Jun 29 13:10:42 racoon[4551]: IKE Packet: transmit success. (Mode-Config message).
        Jun 29 13:10:42 racoon[4551]: IKEv1 Config: retransmited. (Mode-Config retransmit).
        Jun 29 13:10:42 racoon[4551]: IKE Packet: receive success. (MODE-Config).
        Jun 29 13:10:42 configd[19]: event_callback: Address added. previous interface setting (name: en1, address: 192.168.1.107), current interface setting (address: 192.168.54.147, subnet: 255.255.255.0, destination: 192.168.54.147).
        Jun 29 13:10:42 configd[19]: network configuration changed.
        Jun 29 13:10:42 vmnet-bridge[111]: Dynamic store changed
        Jun 29 13:10:42 named[62]: not listening on any interfaces
        Jun 29 13:10:58: --- last message repeated 1 time ---
        Jun 29 13:10:58 configd[19]: SCNCController: Disconnecting. (Connection tried to negotiate for, 16 seconds).
        Jun 29 13:10:58 racoon[4551]: IKE Packet: transmit success. (Information message).
        Jun 29 13:10:58 racoon[4551]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA).
        Jun 29 13:10:58 racoon[4551]: Disconnecting. (Connection tried to negotiate for, 19.113382 seconds).
        Jun 29 13:10:58 named[62]: not listening on any interfaces
        Jun 29 13:10:58 vmnet-bridge[111]: Dynamic store changed
        Jun 29 13:10:58 named[62]: not listening on any interfaces
        Jun 29 13:10:58 configd[19]: network configuration changed.

    Then I opened Terminal, started pinging a server behind the VPN, and tried to connect again. This time the connection succeeded. The logs this time:

        Jun 29 13:46:53 racoon[8136]: Connecting.
        Jun 29 13:46:53 racoon[8136]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 1).
        Jun 29 13:46:53 racoon[8136]: IKEv1 Phase1 AUTH: success. (Initiator, Aggressive-Mode Message 2).
        Jun 29 13:46:53 racoon[8136]: IKE Packet: receive success. (Initiator, Aggressive-Mode message 2).
        Jun 29 13:46:53 racoon[8136]: IKEv1 Phase1 Initiator: success. (Initiator, Aggressive-Mode).
        Jun 29 13:46:53 racoon[8136]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 3).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Mode-Config message).
        Jun 29 13:46:56 racoon[8136]: IKEv1 XAUTH: success. (XAUTH Status is OK).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Mode-Config message).
        Jun 29 13:46:56 racoon[8136]: IKEv1 Config: retransmited. (Mode-Config retransmit).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: receive success. (MODE-Config).
        Jun 29 13:46:56 configd[19]: event_callback: Address added. previous interface setting (name: en1, address: 192.168.1.107), current interface setting (address: 192.168.54.149, subnet: 255.255.255.0, destination: 192.168.54.149).
        Jun 29 13:46:56 vmnet-bridge[111]: Dynamic store changed
        Jun 29 13:46:56 named[62]: not listening on any interfaces
        Jun 29 13:46:56 configd[19]: network configuration changed.
        Jun 29 13:46:56 named[62]: not listening on any interfaces
        Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: receive success. (Initiator, Quick-Mode message 2).
        Jun 29 13:46:56 racoon[8136]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3).
        Jun 29 13:46:56 racoon[8136]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode).
        Jun 29 13:46:56 racoon[8136]: Connected.
        Jun 29 13:46:56 configd[19]: SCNCController: Connected.

    I tested this several times and it consistently behaves the same. What is the magic?


  • Issues writing to a serial port on Mac OS X using unistd.h in C

    - by Schuyler
    I am trying to write to a Bluetooth device on Mac OS X using the POSIX functions from unistd.h in C. I connect fine, and the first few bytes are written successfully. When I then write further commands to the device (bytes are added to the write buffer every 15 ms), I don't see any results, even though write() returns 1 (one byte written). If a write has not finished by the time I start another one (the port is non-blocking), could that corrupt the initial write? If so, is there any way to check whether a write has completed? That is the only thing I can think of, since the writes occur fairly frequently and the first two are sent successfully. qwbyte() simply adds a byte to the output array and increments its length.

    The open-port function:

        BAMid = -1;
        struct termios options;
        struct termios originalTTYAttrs;

        // Open the serial port read/write, nonblocking, with no controlling
        // terminal, and don't wait for a connection.
        BAMid = open(strPath, O_RDWR | O_NOCTTY | O_NONBLOCK);
        if (BAMid == -1) {
            printf("Error opening serial port %s - %s(%d).\n", strPath, strerror(errno), errno);
            goto error;
        }

        // Issue TIOCEXCL ioctl to prevent additional opens except by root-owned processes.
        if (ioctl(BAMid, TIOCEXCL) == -1) {
            printf("Error setting TIOCEXCL on %s - %s(%d).\n", strPath, strerror(errno), errno);
            goto error;
        }

        // Get the current options and save them so we can restore the default settings later.
        if (tcgetattr(BAMid, &originalTTYAttrs) == -1) {
            printf("Error getting tty attributes %s - %s(%d).\n", strPath, strerror(errno), errno);
            goto error;
        }

        // The serial port attributes such as timeouts and baud rate are set by
        // modifying the termios structure and then calling tcsetattr() to cause
        // the changes to take effect. Note that the changes will not become
        // effective without the tcsetattr() call.
        options = originalTTYAttrs;

        // Set raw input (non-canonical) mode, with reads blocking until either a
        // single character has been received or a one-second timeout expires.
        // [Should be moot since we are leaving the port nonblocking.]
        cfmakeraw(&options);
        options.c_cc[VMIN] = 1;
        options.c_cc[VTIME] = 10;
        cfsetspeed(&options, B57600);   // Set 57600 baud
        options.c_cflag |= CS8;         // Use 8 bit words

        // Cause the new options to take effect immediately.
        if (tcsetattr(BAMid, TCSANOW, &options) == -1) {
            printf("Error setting tty attributes %s - %s(%d).\n", strPath, strerror(errno), errno);
            goto error;
        }

        // Flush old transmissions.
        if (tcflush(BAMid, TCIOFLUSH) == -1) {
            printf("Error flushing BAM serial port - %s(%d).\n", strerror(errno), errno);
        }

        oBufLength = 0;

        // Ask it to start.
        if (! qwbyte(CmdStart) ) { goto error; }
        if (! qwbyte(CmdFull) ) { goto error; }

        // This transmit works.
        txbytes();
        printf("success opening port!");
        return -1;

        // Failure path
    error:
        if (BAMid != -1) {
            close(BAMid);
        }
        printf("returning an error--%d", errno);
        return errno;
    }

    The write function (txbytes):

        int i, bufSize, numBytes;

        if (oBufLength != 0) {   // if the output array isn't empty
            // Duplicate the output array and its size so they can be overwritten
            // while this write is occurring.
            printf("about to transmit: ");
            for (i = 0; i < oBufLength; i++) {
                printf(" %u", oBuf[i]);
                tempBuf[i] = oBuf[i];
            }
            printf("\n");
            bufSize = oBufLength;
            oBufLength = 0;

            numBytes = write(BAMid, &tempBuf, bufSize);
            printf("bytes written = %d\n", numBytes);
            if (numBytes == -1) {
                printf("Error writing to port - %s(%d).\n", strerror(errno), errno);
            }
            return (numBytes > 0);
        } else {
            return 0;
        }
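
    A hedged aside (my sketch, not from the question): on a non-blocking descriptor, write() may accept fewer bytes than requested, so the return value should be compared against bufSize rather than just -1. tcdrain() blocks until queued output has actually been transmitted, and select() reports when the descriptor can accept more data. A minimal helper along those lines:

        #include <errno.h>
        #include <sys/select.h>
        #include <termios.h>
        #include <unistd.h>

        /* Write all of buf, retrying partial writes; fd is assumed O_NONBLOCK.
           Returns 0 on success, -1 on error. */
        int write_all(int fd, const unsigned char *buf, size_t len)
        {
            size_t sent = 0;
            while (sent < len) {
                ssize_t n = write(fd, buf + sent, len - sent);
                if (n > 0) {
                    sent += (size_t)n;
                } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                    /* Kernel buffer full: wait until fd is writable again. */
                    fd_set wfds;
                    FD_ZERO(&wfds);
                    FD_SET(fd, &wfds);
                    if (select(fd + 1, NULL, &wfds, NULL, NULL) == -1)
                        return -1;
                } else {
                    return -1;  /* real error */
                }
            }
            /* Optionally block until the bytes have left the UART entirely. */
            return tcdrain(fd);
        }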


  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
    I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP.NET ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging parameters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. There are also some parameters passed in to filter the data on various conditions. One common way to do this seems to be something like this:

    [Method 1]

        ;WITH stuff AS (
            SELECT
                CASE
                    WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                    WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                    WHEN @SortBy = ...
                    ELSE ROW_NUMBER() OVER (ORDER BY whatever)
                END AS Row,
                ., ., .,
            FROM Table1
            INNER JOIN Table2 ...
            LEFT JOIN Table3 ...
            WHERE ... (lots of things to check)
        )
        SELECT *
        FROM stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    One problem with this is that it doesn't give the total count, and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total, but repeats it for every row in the result set, which is not ideal.

    [Method 2]

    An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) + ' bit more SQL' gubbins.

    [Method 3]

    The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable, is to go back to the "old way" of using a table variable:

        DECLARE @stuff TABLE (Row INT, ...)

        INSERT INTO @stuff
        SELECT
            CASE
                WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                WHEN @SortBy = ...
                ELSE ROW_NUMBER() OVER (ORDER BY whatever)
            END AS Row,
            ., ., .,
        FROM Table1
        INNER JOIN Table2 ...
        LEFT JOIN Table3 ...
        WHERE ... (lots of things to check)

        SELECT *
        FROM @stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    (Or a similar method using an IDENTITY column on the table variable.) Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter.

    I did some tests, and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come out on top (almost twice as quick as the other two). Then I tested with roughly the complexity I actually need, with the SQL in stored procedures. With this, method 1 takes nearly twice as long as the other two methods, which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3?

    UPDATE - 15 March 2012

    I tried adapting Method 1 to dump the page from the CTE into a temporary table, so that I could extract the TotalRows and then select just the relevant columns for the result set. This seemed to add significantly to the time (more than I expected). I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available), but the comparison should still be valid.

    I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also, this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
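
    For reference, a minimal sketch of the parameterised sp_executesql pattern the update describes (the table and parameter names are illustrative, not from the question; only the whitelisted ORDER BY fragment is concatenated, so values stay parameterised and plans cache per sort order):

        DECLARE @SortBy nvarchar(20) = N'Name DESC';
        DECLARE @orderBy nvarchar(20) =
            CASE @SortBy WHEN N'Name'      THEN N'Name'
                         WHEN N'Name DESC' THEN N'Name DESC'
                         ELSE N'Id' END;   -- whitelist: never splice raw input

        DECLARE @sql nvarchar(max) = N'
            SELECT @totalRows = COUNT(*) FROM dbo.Table1 WHERE Name LIKE @filter;
            WITH stuff AS (
                SELECT ROW_NUMBER() OVER (ORDER BY ' + @orderBy + N') AS Row, Id, Name
                FROM dbo.Table1
                WHERE Name LIKE @filter
            )
            SELECT Id, Name FROM stuff
            WHERE Row > @startRowIndex
              AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
            ORDER BY Row;';

        DECLARE @totalRows int;
        EXEC sp_executesql @sql,
            N'@filter nvarchar(100), @startRowIndex int, @maximumRows int, @totalRows int OUTPUT',
            @filter = N'%a%', @startRowIndex = 0, @maximumRows = 20,
            @totalRows = @totalRows OUTPUT;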


  • Creating a custom module for Orchard

    - by Moran Monovich
    I created a custom module using this guide from the Orchard documentation, but for some reason I can't see the fields in the content type when I want to create a new item.

    This is my model:

        public class CustomerPartRecord : ContentPartRecord
        {
            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual int PhoneNumber { get; set; }
            public virtual string Address { get; set; }
            public virtual string Profession { get; set; }
            public virtual string ProDescription { get; set; }
            public virtual int Hours { get; set; }
        }

        public class CustomerPart : ContentPart<CustomerPartRecord>
        {
            [Required(ErrorMessage = "you must enter your first name")]
            [StringLength(200)]
            public string FirstName
            {
                get { return Record.FirstName; }
                set { Record.FirstName = value; }
            }

            [Required(ErrorMessage = "you must enter your last name")]
            [StringLength(200)]
            public string LastName
            {
                get { return Record.LastName; }
                set { Record.LastName = value; }
            }

            [Required(ErrorMessage = "you must enter your phone number")]
            [DataType(DataType.PhoneNumber)]
            public int PhoneNumber
            {
                get { return Record.PhoneNumber; }
                set { Record.PhoneNumber = value; }
            }

            [StringLength(200)]
            public string Address
            {
                get { return Record.Address; }
                set { Record.Address = value; }
            }

            [Required(ErrorMessage = "you must enter your profession")]
            [StringLength(200)]
            public string Profession
            {
                get { return Record.Profession; }
                set { Record.Profession = value; }
            }

            [StringLength(500)]
            public string ProDescription
            {
                get { return Record.ProDescription; }
                set { Record.ProDescription = value; }
            }

            [Required(ErrorMessage = "you must enter your hours")]
            public int Hours
            {
                get { return Record.Hours; }
                set { Record.Hours = value; }
            }
        }

    This is the handler:

        class CustomerHandler : ContentHandler
        {
            public CustomerHandler(IRepository<CustomerPartRecord> repository)
            {
                Filters.Add(StorageFilter.For(repository));
            }
        }

    The driver:

        class CustomerDriver : ContentPartDriver<CustomerPart>
        {
            protected override DriverResult Display(CustomerPart part, string displayType, dynamic shapeHelper)
            {
                return ContentShape("Parts_Customer", () => shapeHelper.Parts_BankCustomer(
                    FirstName: part.FirstName,
                    LastName: part.LastName,
                    PhoneNumber: part.PhoneNumber,
                    Address: part.Address,
                    Profession: part.Profession,
                    ProDescription: part.ProDescription,
                    Hours: part.Hours));
            }

            // GET
            protected override DriverResult Editor(CustomerPart part, dynamic shapeHelper)
            {
                return ContentShape("Parts_Customer", () => shapeHelper.EditorTemplate(
                    TemplateName: "Parts/Customer",
                    Model: part,
                    Prefix: Prefix));
            }

            // POST
            protected override DriverResult Editor(CustomerPart part, IUpdateModel updater, dynamic shapeHelper)
            {
                updater.TryUpdateModel(part, Prefix, null, null);
                return Editor(part, shapeHelper);
            }
        }

    The migration:

        public class Migrations : DataMigrationImpl
        {
            public int Create()
            {
                // Creating table CustomerPartRecord
                SchemaBuilder.CreateTable("CustomerPartRecord", table => table
                    .ContentPartRecord()
                    .Column("FirstName", DbType.String)
                    .Column("LastName", DbType.String)
                    .Column("PhoneNumber", DbType.Int32)
                    .Column("Address", DbType.String)
                    .Column("Profession", DbType.String)
                    .Column("ProDescription", DbType.String)
                    .Column("Hours", DbType.Int32)
                );
                return 1;
            }

            public int UpdateFrom1()
            {
                ContentDefinitionManager.AlterPartDefinition("CustomerPart", builder => builder.Attachable());
                return 2;
            }

            public int UpdateFrom2()
            {
                ContentDefinitionManager.AlterTypeDefinition("Customer", cfg => cfg
                    .WithPart("CommonPart")
                    .WithPart("RoutePart")
                    .WithPart("BodyPart")
                    .WithPart("CustomerPart")
                    .WithPart("CommentsPart")
                    .WithPart("TagsPart")
                    .WithPart("LocalizationPart")
                    .Creatable()
                    .Indexed());
                return 3;
            }
        }

    Can someone please tell me if I am missing something?
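
    A hedged observation, not part of the original question: two details in the code above commonly cause exactly this symptom. The Display method declares the shape "Parts_Customer" but then builds shapeHelper.Parts_BankCustomer(...), and editor shapes are conventionally named "Parts_Customer_Edit". Also, Orchard does not render a part's shapes anywhere until the module's Placement.info places them; a minimal sketch (zone and position values are illustrative):

        <Placement>
          <Place Parts_Customer="Content:5" />
          <Place Parts_Customer_Edit="Content:7.5" />
        </Placement>

    The editor template would then live at Views/EditorTemplates/Parts/Customer.cshtml, matching the TemplateName "Parts/Customer" in the driver.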


  • Using CreateSourceQuery in CTP4 Code First

    - by Adam Rackis
    I'm guessing this is impossible, but I'll throw it out there anyway. Is it possible to use CreateSourceQuery when programming with the EF4 Code First API, in CTP4? I'd like to eagerly load properties attached to a collection of properties, like this:

        var sourceQuery = this.CurrentInvoice.PropertyInvoices.CreateSourceQuery();
        sourceQuery.Include("Property").ToList();

    But of course CreateSourceQuery is defined on EntityCollection<T>, whereas Code First uses plain old ICollection (obviously). Is there some way to convert? I've gotten the code below to work, but it's not quite what I'm looking for. Anyone know how to go from what's below to what's above? (The code below is from a class that inherits DbContext.)

        ObjectSet<Person> OSPeople = base.ObjectContext.CreateObjectSet<Person>();
        OSPeople.Include(Pinner => Pinner.Books).ToList();

    Thanks!

    EDIT: Here's my version of the solution posted by zeeshanhirani - whose book, by the way, is amazing!

        dynamic result;
        if (invoice.PropertyInvoices is EntityCollection<PropertyInvoice>)
            result = (invoices.PropertyInvoices as EntityCollection<PropertyInvoice>)
                         .CreateSourceQuery().Yadda.Yadda.Yadda;
        else // must be a unit test!
            result = invoices.PropertyInvoices;
        return result.ToList();

    EDIT 2: OK, I just realized that you can't dispatch extension methods while using dynamic. So I guess we're not quite as dynamic as Ruby, but the example above is easily modifiable to comport with this restriction.

    EDIT 3: As mentioned in zeeshanhirani's blog post, this only works if (and only if) you have change-enabled proxies, which get created if all of your properties are declared virtual. Here's another version of what the method might look like to use CreateSourceQuery with POCOs:

        public class Person
        {
            public virtual int ID { get; set; }
            public virtual string FName { get; set; }
            public virtual string LName { get; set; }
            public virtual double Weight { get; set; }
            public virtual ICollection<Book> Books { get; set; }
        }

        public class Book
        {
            public virtual int ID { get; set; }
            public virtual string Title { get; set; }
            public virtual int Pages { get; set; }
            public virtual int OwnerID { get; set; }
            public virtual ICollection<Genre> Genres { get; set; }
            public virtual Person Owner { get; set; }
        }

        public class Genre
        {
            public virtual int ID { get; set; }
            public virtual string Name { get; set; }
            public virtual Genre ParentGenre { get; set; }
            public virtual ICollection<Book> Books { get; set; }
        }

        public class BookContext : DbContext
        {
            public void PrimeBooksCollectionToIncludeGenres(Person P)
            {
                if (P.Books is EntityCollection<Book>)
                    (P.Books as EntityCollection<Book>).CreateSourceQuery().Include(b => b.Genres).ToList();
            }
        }
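
    Since EDIT 2 notes that extension methods can't be dispatched on dynamic, here is a hedged alternative sketch (my names, not from the post) that avoids dynamic entirely by encapsulating the EntityCollection check:

        using System.Collections.Generic;
        using System.Data.Objects;              // ObjectQuery<T>.Include(string)
        using System.Data.Objects.DataClasses;  // EntityCollection<T>
        using System.Linq;

        public static class SourceQueryExtensions
        {
            // Runs an eager-loading source query when a change-tracking proxy gives
            // us an EntityCollection; falls back to the in-memory collection
            // (e.g. in unit tests backed by plain lists).
            public static List<T> ToListWithInclude<T>(this ICollection<T> collection, string path)
                where T : class
            {
                var entityCollection = collection as EntityCollection<T>;
                return entityCollection == null
                    ? collection.ToList()
                    : entityCollection.CreateSourceQuery().Include(path).ToList();
            }
        }

        // Usage: var items = invoice.PropertyInvoices.ToListWithInclude("Property");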


  • I can connect to a Samba server but cannot access shares

    - by jlego
    I'm having trouble getting Samba sharing working so that shares can be accessed. I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web-development server. It needs to share files with a Windows 7 PC and a Mac running OS X Snow Leopard.

    I set up Samba using the Samba configuration GUI tool on Fedora, added users to Fedora, and connected them as Samba users (with the same usernames and passwords as on the Windows and Mac machines). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the Ethernet interface to a trusted port in the firewall.

    Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035 "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but choosing a share gives the error: "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for a username and password, which when authenticated gives a list of shares; however, choosing a share to connect to displays the error again and the user cannot access the share. Since both machines act similarly when trying to access the shares, I assume it is an issue with how Samba is configured.

    smb.conf:

        [global]
        workgroup = workgroup
        server string = Server
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        load printers = yes
        cups options = raw
        printcap name = lpstat
        printing = cups

        [homes]
        comment = Home Directories
        browseable = no
        writable = yes

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = yes
        printable = yes

        [FileServ]
        comment = FileShare
        path = /media/FileServ
        read only = no
        browseable = yes
        valid users = user1, user2

        [webdev]
        comment = Web development
        path = /var/www/html/webdev
        read only = no
        browseable = yes
        valid users = user1

    How do I get Samba sharing working?

    UPDATE: I figured it out; it was because I was sharing a second hard drive. See the checked answer below.

    Speculation 1: Before this box I had another box with the same version of Fedora (16) installed and Samba working for these same computers. I started up the old machine and copied its smb.conf file to the new one (editing the share definitions for the new shares, of course), and I still get the same errors on both client machines. The only differences in environment are the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs, though). Could either of these be affecting Samba?

    Speculation 2: As the directory I am trying to share is actually an entire internal disk, I have tried these things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that includes only one of the folders on the disk instead of the entire disk, again with my user as the owner. Both tests failed, giving me the same errors regarding the network address.

    Speculation 3: Whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access-denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem.

    Speculation 4: I've now completely reinstalled Fedora and Samba. I created a share on the first hard drive (the one Fedora is installed on) and I can access it fine from Windows. However, when I try to share any data on the second disk, I receive the same error. This, I believe, is the problem. I think I need to change some things in fstab or fdisk or something.

    Speculation 5: In fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares from the Windows machine; however, I cannot see any of the files in the directory from the Windows machine. (They are there; I can see them locally in the Fedora file browser.)
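
    A hedged sketch of the SELinux labelling that Speculation 5 describes, made persistent via semanage rather than a one-off chcon (the path matches the smb.conf above; adjust to your mountpoint):

        # Label the second disk's mountpoint for Samba, persistently:
        sudo semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
        sudo restorecon -Rv /media/FileServ

        # One-off alternative (lost on a filesystem relabel):
        sudo chcon -R -t samba_share_t /media/FileServ

        # If files still don't appear, check per-file permissions and contexts
        # for the users listed in "valid users":
        ls -lZ /media/FileServ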


  • Java saying XML Document Not Well Formed

    - by Pyroclastic
    Hey all. Java's XML parser seems to think that my XML document is not well formed following the root element, but I've validated it with several tools and they all disagree. It's probably an error in my code rather than in the document itself; I'd really appreciate any help you all could offer me. Here is my Java method:

        private void loadFromXMLFile(File f) throws ParserConfigurationException, IOException, SAXException {
            File file = f;
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db;
            Document doc = null;
            db = dbf.newDocumentBuilder();
            doc = db.parse(file);
            doc.getDocumentElement().normalize();

            String desc = "";
            String due = "";
            String comment = "";

            NodeList tasksList = doc.getElementsByTagName("task");
            for (int i = 0; i < tasksList.getLength(); i++) {
                NodeList attributes = tasksList.item(i).getChildNodes();
                for (int j = 0; i < attributes.getLength(); j++) {
                    Node attribute = attributes.item(i);
                    if (attribute.getNodeName() == "description") {
                        desc = attribute.getTextContent();
                    }
                    if (attribute.getNodeName() == "due") {
                        due = attribute.getTextContent();
                    }
                    if (attribute.getNodeName() == "comment") {
                        comment = attribute.getTextContent();
                    }
                    tasks.add(new Task(desc, due, comment));
                }
                desc = "";
                due = "";
                comment = "";
            }
        }

    And here is the XML file I'm trying to load:

        <?xml version="1.0"?>
        <tasklist>
          <task>
            <description>Task 1</description>
            <due>Due date 1</due>
            <comment>Comment 1</comment>
            <completed>false</completed>
          </task>
          <task>
            <description>Task 2</description>
            <due>Due date 2</due>
            <comment>Comment 2</comment>
            <completed>false</completed>
          </task>
          <task>
            <description>Task 3</description>
            <due>Due date 3</due>
            <comment>Comment 3</comment>
            <completed>true</completed>
          </task>
        </tasklist>

    And here is the error message Java throws for me:

        run:
        [Fatal Error] tasks.xml:28:3: The markup in the document following the root element must be well-formed.
        May 17, 2010 6:07:02 PM todolist.TodoListGUI
        SEVERE: null
        org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:239)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283)
            at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:208)
            at todolist.TodoListGUI.loadFromXMLFile(TodoListGUI.java:199)
            at todolist.TodoListGUI.(TodoListGUI.java:42)
            at todolist.Main.main(Main.java:25)
        BUILD SUCCESSFUL (total time: 19 seconds)

    For reference, TodoListGUI.java:199 is doc = db.parse(file);

    If context is helpful to anyone here, I'm trying to write a simple GUI application to manage a todo list that can read and write XML files defining the tasks. Any advice is appreciated!
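
    A hedged editorial aside, not part of the original question: the parser complains about line 28 of tasks.xml while the listing above is shorter, which usually means stray content after </tasklist> in the file actually on disk. Independently of that, the loop has bugs worth noting: the inner loop tests and indexes with i instead of j, strings are compared with == instead of equals(), and a Task is added once per child node rather than once per task. A corrected sketch of the loop:

        for (int i = 0; i < tasksList.getLength(); i++) {
            NodeList children = tasksList.item(i).getChildNodes();
            String desc = "", due = "", comment = "";
            for (int j = 0; j < children.getLength(); j++) {
                Node child = children.item(j);
                if ("description".equals(child.getNodeName())) {
                    desc = child.getTextContent();
                } else if ("due".equals(child.getNodeName())) {
                    due = child.getTextContent();
                } else if ("comment".equals(child.getNodeName())) {
                    comment = child.getTextContent();
                }
            }
            tasks.add(new Task(desc, due, comment));  // one Task per <task> element
        }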


  • Java/Android parser problem

    - by Kano
    I am making a very simple app with an RSS reader. The reader works great, but it only gives me the title, and I want the description too. I'm very new to Android and I have tried a lot of things, but I can't get it to work. I've found a lot of parsers, but they are too complicated for me to understand, so I was hoping to find a simple solution, since it's only the title and description I want. Can anyone help me?

        import java.io.IOException;
        import java.net.MalformedURLException;
        import java.net.URL;

        import javax.xml.parsers.ParserConfigurationException;
        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;

        import org.xml.sax.Attributes;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;
        import org.xml.sax.XMLReader;
        import org.xml.sax.helpers.DefaultHandler;

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.TextView;

        public class NyhedActivity extends Activity {
            String streamTitle = "";

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.nyheder);

                TextView result = (TextView) findViewById(R.id.result);

                try {
                    URL rssUrl = new URL("http://tv2sport.dk/rss/*/*/*/248/*/*");
                    SAXParserFactory mySAXParserFactory = SAXParserFactory.newInstance();
                    SAXParser mySAXParser = mySAXParserFactory.newSAXParser();
                    XMLReader myXMLReader = mySAXParser.getXMLReader();
                    RSSHandler myRSSHandler = new RSSHandler();
                    myXMLReader.setContentHandler(myRSSHandler);
                    InputSource myInputSource = new InputSource(rssUrl.openStream());
                    myXMLReader.parse(myInputSource);
                    result.setText(streamTitle);
                } catch (MalformedURLException e) {
                    e.printStackTrace();
                    result.setText("Cannot connect RSS!");
                } catch (ParserConfigurationException e) {
                    e.printStackTrace();
                    result.setText("Cannot connect RSS!");
                } catch (SAXException e) {
                    e.printStackTrace();
                    result.setText("Cannot connect RSS!");
                } catch (IOException e) {
                    e.printStackTrace();
                    result.setText("Cannot connect RSS!");
                }
            }

            private class RSSHandler extends DefaultHandler {
                final int stateUnknown = 0;
                final int stateTitle = 1;
                int state = stateUnknown;
                int numberOfTitle = 0;
                String strTitle = "";
                String strElement = "";

                @Override
                public void startDocument() throws SAXException {
                    strTitle = "Nyheder fra ";
                }

                @Override
                public void endDocument() throws SAXException {
                    strTitle += "";
                    streamTitle = "" + strTitle;
                }

                @Override
                public void startElement(String uri, String localName, String qName,
                        Attributes attributes) throws SAXException {
                    if (localName.equalsIgnoreCase("title")) {
                        state = stateTitle;
                        strElement = "";
                        numberOfTitle++;
                    } else {
                        state = stateUnknown;
                    }
                }

                @Override
                public void endElement(String uri, String localName, String qName) throws SAXException {
                    if (localName.equalsIgnoreCase("title")) {
                        strTitle += strElement + "\n" + "\n";
                    }
                    state = stateUnknown;
                }

                @Override
                public void characters(char[] ch, int start, int length) throws SAXException {
                    String strCharacters = new String(ch, start, length);
                    if (state == stateTitle) {
                        strElement += strCharacters;
                    }
                }
            }
        }
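
    A hedged sketch, not part of the original question: the handler above already has everything needed for the description; it just needs a second state. One way to extend it (names of the new state and constants are mine):

        final int stateDescription = 2;  // add next to stateUnknown/stateTitle

        @Override
        public void startElement(String uri, String localName, String qName,
                Attributes attributes) throws SAXException {
            if (localName.equalsIgnoreCase("title")) {
                state = stateTitle;
                strElement = "";
            } else if (localName.equalsIgnoreCase("description")) {
                state = stateDescription;
                strElement = "";
            } else {
                state = stateUnknown;
            }
        }

        @Override
        public void endElement(String uri, String localName, String qName) throws SAXException {
            if (localName.equalsIgnoreCase("title")) {
                strTitle += strElement + "\n";          // title line
            } else if (localName.equalsIgnoreCase("description")) {
                strTitle += strElement + "\n" + "\n";   // description, then a blank line
            }
            state = stateUnknown;
        }

        @Override
        public void characters(char[] ch, int start, int length) throws SAXException {
            if (state == stateTitle || state == stateDescription) {
                strElement += new String(ch, start, length);
            }
        }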


  • How to select table column names in a view and pass them to the controller in Rails?

    - by zachd1_618
    So I am new to Rails, and to OO programming in general, though I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused by params, forms, and select helpers. What I want to do is use Rails drop-downs to pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part working, but I can't seem to pass values from my view to the controller.

    For simplicity's sake, say my database schema looks like this:

        ------------ Plot ------------
        |  x  |  y1  |  y2  |
        |  1  |   1  |   1  |
        |  2  |   2  |   4  |
        |  3  |   3  |   9  |
        |  4  |   4  |  16  |
        |  5  |   5  |  25  |

    ... and in my model I have dynamic selector scopes that let me select just certain columns of data.

    In Plot.rb:

        class Plot < ActiveRecord::Base
          scope :select_var, lambda { |varname| select(varname) }
          scope :between_x, lambda { |x1, x2| where("x BETWEEN ? and ?", "#{x1}", "#{x2}") }
        end

    So I can call:

        irb>> @p1 = Plot.select_var(['x', 'y1']).between_x(1, 3)

    and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values between x=1 and x=3, which I dynamically plot.

    I want to start off in a view (plot/index) where I can dynamically select which variable names (table column names) and which rows to fetch from the database and plot. The problem is that most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of the column names that exist in my database, using a function I wrote.

    Plots controller:

        def index
          d = Plot.first
          @tags = d.list_vars
        end

    So @tags = ['x', 'y1', 'y2']. Then in plot/index.html.erb I try to use a drop-down to select which variable names (table column names) I send back to the controller.

    index.html.erb:

        <%= select_tag( :variable, options_for_select(@plots.first.list_vars, :name, :multiple => :true) ) %>
        <%= button_to 'Plot now!', :controller => "plots/plot_vars", :variable => params[:variable] %>

    Finally, in the controller again:

        def plot_vars
          @plot_data = Plot.select_vars([params[:variable]])
        end

    The problem is that every time I try this (or one of a hundred variations thereof), params[:variable] is nil. How can I use a drop-down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-(

    I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities; I really have a data structure, not a data object. Trying to work with the structure in data-object terms is not necessarily the easiest thing to do, I think.

    For background: my actual database has about 250 columns and a couple of million rows, and they get changed and modified from time to time. I know I can make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this, and each one has a web developer who spends months setting up a web interface and their own janky plotting setup. I want to make this completely dynamic, as a plug-and-play solution, so that all you have to do is specify your database connection and this Rails setup will automatically show and plot the data you want. I am more of a sequential programmer and number cruncher, as are many people here. I think this project could be very helpful in the end, but it's difficult for me to figure out right now.
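
    A hedged sketch of why params[:variable] arrives nil (not from the original question; the route helper and action names are my assumptions): button_to builds its own separate form, so the select_tag's value never travels with the button's request. Putting the select and the submit inside one form fixes that:

        <%# index.html.erb %>
        <%= form_tag plot_vars_plots_path do %>
          <%= select_tag "variables[]", options_for_select(@tags), :multiple => true %>
          <%= submit_tag "Plot now!" %>
        <% end %>

        # plots_controller.rb
        def plot_vars
          @plot_data = Plot.select_var(params[:variables])  # an array of column names
        end

    Naming the field "variables[]" makes Rails collect the multiple selections into an array, which can then be handed to the select_var scope directly.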


  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I've loved MSBuild and didn't quite understand the reasons for this change. In fact, given that I've been exclusively using CruiseControl for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I'm starting to become a believer. I'm also starting to see some of the benefits of Workflow Foundation for the overall process, because it gives you constructs not available in MSBuild, such as parallel tasks, better control-flow constructs, and a slightly better customization story.

    The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I'd recommend reading Mike Fourie's well-known post on Versioning Code in TFS before you get started. That post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don't use source-control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync.

    To do this in TFS 2010, the best post I've found has been Jim Lamb's post on building a custom TFS 2010 workflow activity. Overall, that post is excellent, but the primary issue I have with it is that the assembly version numbers produced are based on a date and look like this: "2010.5.15.1". This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the "1.1 release" or "1.2 release" - which would have an assembly version number of "1.1.317.0", for example. In this post, I'll walk through the process of customizing the assembly version number based on this method - customizing the concepts in Lamb's post to suit my needs. I'll also be combining this with the concepts of Fourie's post, particularly with regard to the standards around how to version the assemblies.

    The first thing I'll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this:

        using System;
        using System.Reflection;

        [assembly: AssemblyVersion("1.1.0.0")]
        [assembly: AssemblyFileVersion("1.1.0.0")]

    I'll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, choosing "Add - Existing Item...", and then, when I click the SolutionAssemblyVersionInfo.cs file, making sure I "Add As Link". The Solution Explorer will then show our file; we can see that it's a "link" file because of the black arrow in the icon within all our projects. Of course, you'll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid duplicate attributes, since they now live in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique, so that all the projects in our solution can be versioned as a unit.

    At this point, we're ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to easily be in control of the Major.Minor, and then I want the CI process to add the third number with a unique incremental number. We'll leave the fourth position always "0" for now; it's held in reserve in case the day ever comes when we need to do an emergency patch to Production based on a branched version.

    Writing the Custom Workflow Activity

    Similar to Lamb's post, I'm going to write two custom workflow activities. The "outer" activity (a XAML activity) is pretty straightforward: it checks whether the solution version file exists in the solution root and, if so, delegates the version replacement to the AssemblyVersionInfo activity, which is a CodeActivity.

    Notice that the arguments of this activity are "solutionVersionFile" and "tfsBuildNumber", which will be passed in. The tfsBuildNumber passed in will look something like "CI_MyApplication.4", and we'll need to grab the "4" (i.e., the incremental revision number) and put it in the third position. Then we'll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if that file had "1.1.0.0" for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want the resulting file to have "1.1.4.0".

    Before we do anything, let's put together a unit test for all this so we can know if we get it right:

        [TestMethod]
        public void Assembly_version_should_be_parsed_correctly_from_build_name()
        {
            // arrange
            const string versionFile = "SolutionAssemblyVersionInfo.cs";
            WriteTestVersionFile(versionFile);
            var activity = new VersionAssemblies();
            var arguments = new Dictionary<string, object> {
                { "tfsBuildNumber", "CI_MyApplication.4" },
                { "solutionVersionFile", versionFile }
            };

            // act
            var result = WorkflowInvoker.Invoke(activity, arguments);

            // assert
            Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]);
            var lines = File.ReadAllLines(versionFile);
            Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]"));
            Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]"));
        }

        private void WriteTestVersionFile(string versionFile)
        {
            var fileContents = "using System.Reflection;\n" +
                               "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" +
                               "[assembly: AssemblyFileVersion(\"1.2.0.0\")]";
            File.WriteAllText(versionFile, fileContents);
        }

    At this point, the code for our AssemblyVersionInfo activity is pretty straightforward:

        [BuildActivity(HostEnvironmentOption.Agent)]
        public class AssemblyVersionInfo : CodeActivity
        {
            [RequiredArgument]
            public InArgument<string> FileName { get; set; }

            [RequiredArgument]
            public InArgument<string> TfsBuildNumber { get; set; }

            public OutArgument<string> NewAssemblyFileVersion { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                var solutionVersionFile = this.FileName.Get(context);

                // Ensure that the file is writeable
                var fileAttributes = File.GetAttributes(solutionVersionFile);
                File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly);

                // Prepare assembly versions
                var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile);
                var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context));
                var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2);
                var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber);
                this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion);

                // Perform the actual replacement
                var contents = this.GetFileContents(newAssemblyVersion, newAssemblyFileVersion);
                File.WriteAllText(solutionVersionFile, contents);

                // Restore the file's original attributes
                File.SetAttributes(solutionVersionFile, fileAttributes);
            }

            #region Private Methods

            private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion)
            {
                var cs = new StringBuilder();
                cs.AppendLine("using System.Reflection;");
                cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion);
                cs.AppendLine();
                cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion);
                return cs.ToString();
            }

            private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath)
            {
                var lines = File.ReadAllLines(filePath);
                var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault();

                if (versionLine == null)
                {
                    throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute");
                }

                return ExtractMajorMinor(versionLine);
            }

            private static Tuple<string, string> ExtractMajorMinor(string versionLine)
            {
                var firstQuote = versionLine.IndexOf('"') + 1;
                var secondQuote = versionLine.IndexOf('"', firstQuote);
                var version = versionLine.Substring(firstQuote, secondQuote - firstQuote);
                var versionParts = version.Split('.');
                return new Tuple<string, string>(versionParts[0], versionParts[1]);
            }

            private string GetNewBuildNumber(string buildName)
            {
                return buildName.Substring(buildName.LastIndexOf(".") + 1);
            }

            #endregion
        }

    The final step is to incorporate this activity into the overall build template. Make a copy of DefaultTemplate.xaml - we'll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happen, drag the VersionAssemblies activity in. Then set the LabelName variable to BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion, since newAssemblyFileVersion was produced by our activity.

    Configuring CI

    Once you add your solution to source control, you can configure CI in the build definition window. The main difference is that we change the Process tab to reflect a different build-number format and choose our custom build process file. When the build completes, we see the name of our project with the unique revision number. If we look at the detailed build log for the latest build, we see the label being created by our custom task. We can now look at the history of labels in TFS and see the project name with the labels (via the Assignment activity I added to the workflow). Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties.

    Full Traceability

    We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS whose unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source-control repository at the time the code was built. This type of process for full traceability has been used for many years in CI - in fact, I've done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. The extensibility that TFS 2010 gives you makes these types of build customizations quite easy once you get over the initial learning curve.


  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what: about 20 minutes after I fixed the build, Allan broke it again!

    Update: 4th March 2010 - After having huge problems getting this working, I read Billy Wang's post, which showed me the light.

    The problem here is that even though the test passes locally, it will not during an automated build. When you send your tests to the build server, it does not understand that you want to spin up the web site and run tests against it! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the web site not to spin up? Of course not. So, when you send the code to the build server, you need to tell it what to spin up.

    First, the best way to get the parameters you need is to right-click on the method you want to test and select "Create Unit Tests...". This will detect whether you are running in IIS, the ASP.NET Development Server, or neither, and create the relevant attributes.

    Figure: Right-clicking on "SaveDefaultProjectFile" produces a context menu with "Create Unit Tests..." on it. If you use this option it will auto-detect most of the attributes that are required.

        /// <summary>
        /// A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile
        /// </summary>
        // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example,
        // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server,
        // whether you are testing a page, web service, or a WCF service.
        [TestMethod()]
        [HostType("ASP.NET")]
        [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")]
        [UrlToTest("http://localhost:3100/")]
        [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
        public void SaveDefaultProjectFileTest()
        {
            IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value
            string strComputerName = string.Empty; // TODO: Initialize to an appropriate value
            bool expected = false; // TODO: Initialize to an appropriate value
            bool actual;
            actual = target.SaveDefaultProjectFile(strComputerName);
            Assert.AreEqual(expected, actual);
            Assert.Inconclusive("Verify the correctness of this test method.");
        }

    Figure: Auto-created code showing the attributes required to run correctly in IIS or, in this case, the ASP.NET Development Server.

    If you are a purist and don't like creating unit tests like this, then you just need to add the three attributes manually:

    - HostType - Specifies what host to use. It's an extensibility point, so you could write your own, or you can just use "ASP.NET".
    - UrlToTest - Specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page; otherwise your test may not run on the server, but may pass anyway.
    - AspNetDevelopmentServerHost - This is a nasty one. It is only used if you are using the ASP.NET Development Server and is unnecessary if you are using IIS. It sets the host settings, and the first value MUST be the physical path to the root of your web application.

    OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang's post and heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure the above will work when I am doing web unit tests, but there is a much easier way when doing web services.

    You need to add the AspNetDevelopmentServer attribute to your code. This tells MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use:

        [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
        [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
        [TestMethod]
        public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
        {
            ProfileServiceClient target = new ProfileServiceClient();
            bool isTrue = target.SaveDefaultProjectFile("Mav");
            Assert.AreEqual(true, isTrue);
        }

    Figure: The AspNetDevelopmentServer attribute makes sure that the specified web application is launched.

    Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that were generated when creating a reference to the web service? They would be wrong, and the test would fail. This is where Billy's helper method comes in. Once you have created an instance of your service client, and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the endpoint address to the address of your dynamically hosted web application:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using System.Reflection;
        using System.ServiceModel.Description;
        using System.ServiceModel;

        namespace SSW.SQLDeploy.Test
        {
            class WcfWebServiceHelper
            {
                public static bool TryUrlRedirection(object client, TestContext context, string identifier)
                {
                    bool result = true;
                    try
                    {
                        PropertyInfo property = client.GetType().GetProperty("Endpoint");
                        string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString();
                        Uri webServerUri = new Uri(webServer);
                        ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null);
                        EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address);
                        builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority));
                        endpoint.Address = builder.ToEndpointAddress();
                    }
                    catch (Exception e)
                    {
                        context.WriteLine(e.Message);
                        result = false;
                    }
                    return result;
                }
            }
        }

    Figure: This fixes the problem of the URL in your app.config not matching the dynamically hosted ASP.NET Development Server port.

    We can now add a call to this method after we create the proxy object, changing the endpoint for the service to the correct one. The call is wrapped in an assert, as there is no point in continuing if it fails:

        [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
        [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
        [TestMethod]
        public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
        {
            ProfileServiceClient target = new ProfileServiceClient();
            Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
            bool isTrue = target.SaveDefaultProjectFile("Mav");
            Assert.AreEqual(true, isTrue);
        }

    Figure: Editing the endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy.

    As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers. What are the chances of everyone using the same location to store the source? And what if you are using a build server - how do you tell MSTest where to look for the files? To the rescue comes a property called "%PathToWebRoot%", which is always right on the build server. It will always point to your build drop folder for your solution's web sites, which will be "\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\" or whatever your build drop location is. So let's change the code above to use this:

        [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
        [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
        [TestMethod]
        public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
        {
            ProfileServiceClient target = new ProfileServiceClient();
            Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
            bool isTrue = target.SaveDefaultProjectFile("Mav");
            Assert.AreEqual(true, isTrue);
        }

    Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere.

    Now we have another problem: this will ONLY run on the build server and will fail locally, as %PathToWebRoot%'s default value is "C:\Users\[profile]\Documents\Visual Studio 2010\Projects". Well, this sucks... How do we get the test to run on any build server and any developer laptop? Open "Tools | Options | Test Tools | Test Execution" in Visual Studio and you will see a field called "Web application root directory". This is where you override that default.

    Figure: You can override the default web site location for tests.

    In my case I would put in "D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main", and all the developers working with this branch would put in the folder they have mapped. Can you see a problem? What if I create a "$/SSW/SqlDeploy/DEV/34567" branch from Main and I want to run tests in there? Well, I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done.

    Conclusion

    Although this looks convoluted and complicated, there are real problems being solved here that give you a test-anywhere solution: any build server, any developer workstation.

    Resources:

    http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html
    http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html
    http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx
    http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx
    http://www.5z5.com/News/?543f8bc8b36b174f

    Technorati Tags: VS2010, MSTest, Team Build 2010, Team Build, Visual Studio, Visual Studio 2010, Visual Studio ALM, Team Test, Team Test 2010


  • Cleaner HTML Markup with ASP.NET 4 Web Forms - Client IDs (VS 2010 and .NET 4.0 Series)

    - by ScottGu
    This is the sixteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post is the first of a few blog posts I’ll be doing that talk about some of the important changes we’ve made to make Web Forms in ASP.NET 4 generate clean, standards-compliant, CSS-friendly markup.  Today I’ll cover the work we are doing to provide better control over the “ID” attributes rendered by server controls to the client. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Clean, Standards-Based, CSS-Friendly Markup One of the common complaints developers have often had with ASP.NET Web Forms is that when using server controls they don’t have the ability to easily generate clean, CSS-friendly output and markup.  Some of the specific complaints with previous ASP.NET releases include: Auto-generated ID attributes within HTML make it hard to write JavaScript and style with CSS Use of tables instead of semantic markup for certain controls (in particular the asp:menu control) make styling ugly Some controls render inline style properties even if no style property on the control has been set ViewState can often be bigger than ideal ASP.NET 4 provides better support for building standards-compliant pages out of the box.  The built-in <asp:> server controls with ASP.NET 4 now generate cleaner markup and support CSS styling – and help address all of the above issues.  Markup Compatibility When Upgrading Existing ASP.NET Web Forms Applications A common question people often ask when hearing about the cleaner markup coming with ASP.NET 4 is “Great - but what about my existing applications?  Will these changes/improvements break things when I upgrade?” To help ensure that we don’t break assumptions around markup and styling with existing ASP.NET Web Forms applications, we’ve enabled a configuration flag – controlRenderingCompatbilityVersion – within web.config that let’s you decide if you want to use the new cleaner markup approach that is the default with new ASP.NET 4 applications, or for compatibility reasons render the same markup that previous versions of ASP.NET used:   When the controlRenderingCompatbilityVersion flag is set to “3.5” your application and server controls will by default render output using the same markup generation used with VS 2008 and .NET 3.5.  When the controlRenderingCompatbilityVersion flag is set to “4.0” your application and server controls will strictly adhere to the XHTML 1.1 specification, have cleaner client IDs, render with semantic correctness in mind, and have extraneous inline styles removed. This flag defaults to 4.0 for all new ASP.NET Web Forms applications built using ASP.NET 4. Any previous application that is upgraded using VS 2010 will have the controlRenderingCompatbilityVersion flag automatically set to 3.5 by the upgrade wizard to ensure backwards compatibility.  You can then optionally change it (either at the application level, or scope it within the web.config file to be on a per page or directory level) if you move your pages to use CSS and take advantage of the new markup rendering. Today’s Cleaner Markup Topic: Client IDs The ability to have clean, predictable, ID attributes on rendered HTML elements is something developers have long asked for with Web Forms (ID values like “ctl00_ContentPlaceholder1_ListView1_ctrl0_Label1” are not very popular).  
    Having control over the ID values rendered helps make it much easier to write client-side JavaScript against the output, makes it easier to style elements using CSS, and on large pages can help reduce the overall size of the markup generated.

    New ClientIDMode Property on Controls
    ASP.NET 4 supports a new ClientIDMode property on the Control base class. The ClientIDMode property indicates how controls should generate client ID values when they render. The ClientIDMode property supports four possible values:
    - AutoID: Renders the output as in .NET 3.5 (auto-generated IDs which will still render prefixes like ctl00 for compatibility)
    - Predictable (Default): Trims any “ctl00” ID string, and within a list/container control concatenates child IDs (example: id=”ParentControl_ChildControl”)
    - Static: Hands over full ID naming control to the developer – whatever they set as the ID of the control is what is rendered (example: id=”JustMyId”)
    - Inherit: Tells the control to defer to the naming behavior mode of the parent container control
    The ClientIDMode property can be set directly on individual controls (or within container controls – in which case the controls within them will by default inherit the setting). Or it can be specified at a page or usercontrol level (using the <%@ Page %> or <%@ Control %> directives) – in which case controls within the pages/usercontrols inherit the setting (and can optionally override it). Or it can be set within the web.config file of an application – in which case pages within the application inherit the setting (and can optionally override it). This gives you the flexibility to customize/override the naming behavior however you want (a minimal sketch of these settings appears after the example below).

    Example: Using the ClientIDMode property to control the IDs of Non-List Controls
    Let’s take a look at how we can use the new ClientIDMode property to control the rendering of “ID” elements within a page. To help illustrate this we can create a simple page called “SingleControlExample.aspx” that is based on a master-page called “Site.Master”, and which has a single <asp:label> control with an ID of “Message” that is contained within an <asp:content> container control called “MainContent”. Within our code-behind we’ll then add some simple code to dynamically populate the Label’s Text property at runtime. If we were running this application using ASP.NET 3.5 (or had our ASP.NET 4 application configured to run using 3.5 rendering or ClientIDMode=AutoID), then the generated markup sent down to the client would have an ID that is unique (which is good) – but rather ugly because of the “ctl00” prefix (which is bad).

    Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to “Predictable”
    With ASP.NET 4, server controls by default now render their IDs using ClientIDMode=”Predictable”. This helps ensure that ID values are still unique and don’t conflict on a page, but at the same time it makes the IDs less verbose and more predictable. This means that in the generated markup of our <asp:label> control above, the “ctl00” prefix is gone. Because the “Message” control is embedded within a “MainContent” container control, by default its ID will be rendered as “MainContent_Message” to avoid potential collisions with other controls elsewhere within the page.
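    Since the screenshots from the original post are not reproduced in this excerpt, here is a minimal sketch of the three ways to set ClientIDMode described above (the control and page names match the example; the web.config fragment also shows the controlRenderingCompatibilityVersion flag discussed earlier):

        <%-- Control level: this label renders with id="Message" --%>
        <asp:Label ID="Message" runat="server" ClientIDMode="Static" />

        <%-- Page level: controls on the page default to Predictable --%>
        <%@ Page Language="C#" ClientIDMode="Predictable" MasterPageFile="~/Site.Master" %>

        <!-- Application level (web.config) -->
        <system.web>
          <pages clientIDMode="Predictable" controlRenderingCompatibilityVersion="4.0" />
        </system.web>

    With ClientIDMode="Predictable", the “Message” label inside the “MainContent” container renders as <span id="MainContent_Message">...</span>; with ClientIDMode="Static" it renders as <span id="Message">...</span>.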
    Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to “Static”
    Sometimes you don’t want your ID values to be nested hierarchically, though, and instead just want the ID rendered to be whatever value you set it as. To enable this you can now use ClientIDMode=”Static”, in which case the ID rendered will be exactly what you set on the server side for your control. This option gives you the ability to completely control the client ID values sent down by controls.

    Example: Using the ClientIDMode property to control the IDs of Data-Bound List Controls
    Data-bound list/grid controls have historically been the hardest to use/style when it comes to working with Web Forms’ automatically generated IDs. Let’s now take a look at a scenario where we’ll customize the IDs rendered using a ListView control with ASP.NET 4. The ListView in this scenario displays the contents of a data-bound collection – in this case, airports – and within our code-behind we dynamically databind a list of airports to the ListView. At runtime this will by default generate a <ul> list of airports. Note that because the <ul> and <li> elements in the ListView’s template are not server controls, no IDs are rendered in our markup.

    Adding Client IDs to Each Row Item
    Now, let’s say that we wanted to add client IDs to the output so that we can programmatically access each <li> via JavaScript. We want these IDs to be unique, predictable, and identifiable. A first approach would be to mark each <li> element within the template as being a server control (by giving it a runat=server attribute) and by giving each one an id of “airport”. By default ASP.NET 4 will then render clean IDs (no ctl00-style IDs are rendered).

    Using the ClientIDRowSuffix Property
    Our template above now generates unique IDs for each <li> element – but if we are going to access them programmatically on the client using JavaScript we might want to instead have the IDs contain the airport code within them to make them easier to reference. The good news is that we can easily do this by taking advantage of the new ClientIDRowSuffix property on databound controls in ASP.NET 4 to better control the IDs of our individual row elements. To do this, we’ll set the ClientIDRowSuffix property to “Code” on our ListView control. This tells the ListView to use the databound “Code” property from our Airport class when generating the ID. And now instead of having row suffixes like “1”, “2”, and “3”, we’ll instead have the Airport.Code value embedded within the IDs (e.g. _CLE, _CAK, _PDX, etc). You can use this ClientIDRowSuffix approach with other databound controls like the GridView as well. It is useful anytime you want to program row elements on the client – and use clean, identifiable IDs to easily reference them from JavaScript code.

    Summary
    ASP.NET 4 enables you to generate much cleaner HTML markup from server controls and from within your Web Forms applications. In today’s post I covered how you can now easily control the client ID values that are rendered by server controls. In upcoming posts I’ll cover some of the other markup improvements that are also coming with the ASP.NET 4 release. Hope this helps, Scott

    Read the article

  • Anti-Forgery Request Recipes For ASP.NET MVC And AJAX

    - by Dixin
    Background
    To secure websites from cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: The server prints tokens to a cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are both sent in the HTTP request; the server validates the tokens. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> This invocation generates a token and writes it inside the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and also writes it into the cookie: __RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, both are sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems were encountered.

    Specify validation on the controller (not on each action)
    The server-side problem is that it would be ideal to declare [ValidateAntiForgeryToken] on the controller, but actually it has to be declared on each POST action. Because POST actions are usually far more numerous than controllers, the work would be a little crazy.

    Problem
    Usually a controller contains actions for HTTP GET and actions for HTTP POST requests, and usually validations are expected only for the HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If the browser sends an HTTP GET request by clicking a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the [ValidateAntiForgeryToken] attribute must be distributed to each POST action:public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... } This is a little bit crazy, because one application can have a lot of POST actions.
    Solution
    To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenWrapperAttribute wrapper class can be helpful, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. } Now one single attribute on the controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute, which would be easy to implement.

    Specifying a non-constant salt at runtime
    By default, the salt must be a compile-time constant, so that it can be used with the [ValidateAntiForgeryToken] or [ValidateAntiForgeryTokenWrapper] attribute.

    Problem
    One Web product might be sold to many clients. If a constant salt is evaluated at compile time, then after the product is built and deployed to many clients, they all have the same salt. Of course, clients do not like this. Some clients might even want to specify a custom salt in configuration. In these scenarios, the salt is required to be a runtime value.

    Solution
    In the above [ValidateAntiForgeryToken] and [ValidateAntiForgeryTokenWrapper] attributes, the salt is passed through the constructor. So one solution is to remove this parameter:public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = AntiForgeryToken.Value }; } // Other members. } But here AntiForgeryToken.Value becomes a hard-coded dependency of the attribute.
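    The post does not show the AntiForgeryToken helper itself, so the following is a minimal sketch of what it might look like – assuming, purely for illustration, that the salt lives under an appSettings key named "AntiForgeryTokenSalt" in web.config (the key name is hypothetical, not from the original article):

        using System.Configuration;

        public static class AntiForgeryToken
        {
            // Reads the salt once from web.config at runtime; the appSettings
            // key name below is an assumption made for this sketch.
            private static readonly string _value =
                ConfigurationManager.AppSettings["AntiForgeryTokenSalt"];

            public static string Value
            {
                get { return _value; }
            }
        }

    With something like this in place, each deployment can configure its own salt without recompiling, which is exactly the per-client requirement described above.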
    So the other solution is moving the validation code into the controller to work around the limitation of attributes:public abstract class AntiForgeryControllerBase : Controller { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; protected AntiForgeryControllerBase(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } protected override void OnAuthorization(AuthorizationContext filterContext) { base.OnAuthorization(filterContext); string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } Then make controller classes inherit from this AntiForgeryControllerBase class. Now the salt is no longer required to be a compile-time constant.

    Submitting the token via AJAX
    On the browser side, once the server turns on anti-forgery validation for HTTP POST, all AJAX POST requests will fail by default.

    Problem
    In AJAX scenarios, the HTTP POST request is not sent by a form. Take jQuery as an example:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); These AJAX POST requests will always be invalid, because the server-side code cannot see the token in the posted data.

    Solution
    Basically, the tokens must be printed to the browser and then sent back to the server. So first of all, HtmlHelper.AntiForgeryToken() needs to be called somewhere. Now the browser has the token in both the HTML and the cookie. Then jQuery must find the printed token in the HTML, and append the token to the data before sending:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated into a tiny jQuery plugin:/// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. var tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> from the specified window. // var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type).
    $.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery); In most scenarios, it is OK to just replace the $.post() invocation with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. There might be scenarios with a custom token, where $.appendAntiForgeryToken() is useful:data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent by an iframe, while the token is in the parent window. Here, the token's container window can be specified for $.getAntiForgeryToken():data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); If you have a better solution, please do tell me.

    Read the article

  • CodePlex Daily Summary for Monday, December 20, 2010

    CodePlex Daily Summary for Monday, December 20, 2010Popular ReleasesLINQ to Twitter: LINQ to Twitter Beta v2.0.18: Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4.3: Helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: Improvements for confirm, popup, popup form RenderView controller extension the user experience for crud in live demo has been substantially improved + added search all the features are shown in the live demoSQL Monitor: SQL Monitor 3.0 alpha 6: 1. add alert target with regexp support 2. fix actitivites not getting current server 3. allow to change connection in query 4. fix problem with open data tableGanttPlanner: GanttPlanner V1.0: GanttPlanner V1.0 include GanttPlanner.dll and also a Demo application.EnhSim: EnhSim 2.2.5 ALPHA: 2.2.5 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the new...HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.2: A Couple of small changes required to add support for XBMC which requires -trailer be appended to trailer filenames 2. Changed Version Number to 1.2. Leaving it a beta just in case. 3. added assembly.directives.dll (needed that to successfully compile the application. Not sure if it has to be in the distribution package. 4. Leaving Version 1.1N2 CMS: 2.1 release candidate 3: * Web platform installer support available N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) A bunch of bugs were fixed File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the templ...TweetSharp: TweetSharp v2.0.0.0 - Preview 6: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 6 ChangesMaintenance release with user reported fixes Preview 5 ChangesMaintenance release with user reported fixes Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numer...Team Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1, is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010, or Team Foundation Server 2010.Hacker Passwords: HackerPasswords.zip: Source code, executable and documentationSubtitleTools: SubtitleTools 1.3: - Added .srt FileAssociation & Win7 ShowRecentCategory feature. 
- Applied UnifiedYeKe to fix Persian search problems. - Reduced file size of Persian subtitles for uploading @OSDB.Facebook C# SDK: 4.1.0: - Lots of bug fixes - Removed Dynamic Runtime Language dependencies from non-dynamic platforms. - Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample - Changed internal serialization to use Json.net - BREAKING CHANGE: Canvas Session is no longer support. Use Signed Request instead. Canvas Session has been deprecated by Facebook. - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple issues relating to updating packages to newer versions. NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on codeplex Here is what is new in this release. WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic Support for JsonValue change notification events for databinding and other purposes Support for going between JsonValue and CLR types WCF HTTP - create HTTP / REST based web services. This is a minor release which contains fixe...LiveChat Starter Kit: LCSK v1.0: This is a working version of the LCSK for Visual Studio 2010, ASP.NET MVC 3 (using Razor View Engine). this is still provider based (with 1 provider Sql) and this is still using WebService and Windows Forms operator console. The solution is cleaner, with an installer to create tables etc. You can also install it via nuget (Install-Package lcsk) Let me know your feedbackOrchard Project: Orchard 0.9: Orchard Release Notes Build: 0.9.253 Published: 12/16/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap a full list of changes in this release.MSBuild Extension Pack: December 2010: Release Blog Post The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010New Projects.NET Micro Framework Binary Parsing and Processing Extensions: Prototype library and test cases for parsing binary data in the .NET Micro Framework. The goal is to improve processing speed, reduce GC work and simplify code.CarolLib Framework: A common library by Lance Zhang and Carol Xiong Include Helpers, Extensions, Configurations and Windows Service...Copy Document URL to Clipboard: This is a SharePoint 2010 Feature that adds a button in the Ribbon, allowing users to copy document urls (links) to clipboard.exTWEET download: <exTWEET> Download link for application to be downloaded & used from here unless it becomes publicly available. Then, this link will be removed. HandleSpy: Get Object information by its Handle.ibuy: helloLoA: PL: Podstawa gry bez tekstur, modeli, map i skryptów. EN: ---MP3Tunes Locker Windows Phone API: A simple REST client for connecting to your mp3tunes.com locker on your windows mobile phone. You can browse by artist and album and shuffle your songs.Netduino Library: Various useful classes to help develop for netduino.NReports: NReports is a reporting library using RDL file format, aiming at interoperatibility with SQL Reporting Services. It has began as a fork of the latest version 4.1 of fyiReporting (now defunct). Supported platforms are .NET 3.5 (Windows) and Mono 2.6 (Linux).qlink Framework: Qlink FrameWork development application that auto-generates a data layer object model based on your database schema and ...Rapid Image Editor: This is image editor which allows to edit images very rapidly and comfortably. Therefore it's named as Rapid.Serial-test with SHLCN: ?????????????????????????SharePoint Selbstbedienung: This is a project for all Microsoft SharePoint users. Whether you are a developer, administrator or end user, I want to provide you, complete NO-Deployment-solutions to create nice application for SharePoint 2010 or Office 365 plattform easily. TestRepo: test projectUmbraco Live At Education Calendar: Umbraco Live At Education CalendarUSS: Urban Statistical SystemVLBA Web Services: This is our seminar project to demonstrate the implementation and use of web services.Windows Phone 7 Isolated Storage Explorer: Isolated Storage Explorer for Windows Phone 7 emulator or device.Windows Phone 7 Logging Framework: Logging Framework for Windows Phone 7

    Read the article

  • CodePlex Daily Summary for Sunday, December 19, 2010

    CodePlex Daily Summary for Sunday, December 19, 2010Popular ReleasesTeam Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1, is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010, or Team Foundation Server 2010.SQL Monitor: SQL Monitor 3.0 alpha 5: 1. fix a problem with not expanding nodes in object explorer, sorryHacker Passwords: HackerPasswords.zip: Source code, executable and documentationWatchersNET.SiteMap: WatchersNET.SiteMap 01.03.03: Whats NewSkin Object: You can now filter by Terms for Example use: <object id="dnnSITEMAPSL" codetype="dotnetnuke/server" codebase="SITEMAPSL"> <param name="TaxMode" value="terms" /> <param name="TaxTerms" value="TermName1,TermName2" /> </object> changes Tax Term Filter should work correct nowSubtitleTools: SubtitleTools 1.3: - Added .srt FileAssociation & Win7 ShowRecentCategory feature. - Applied UnifiedYeKe to fix Persian search problems. - Reduced file size of Persian subtitles for uploading @OSDB.EnhSim: EnhSim 2.2.3 ALPHA: 2.2.3 ALPHAThis release adds in the changes for 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in th...Facebook C# SDK: 4.1.0: - Lots of bug fixes - Removed Dynamic Runtime Language dependencies from non-dynamic platforms. - Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample - Changed internal serialization to use Json.net - BREAKING CHANGE: Canvas Session is no longer support. Use Signed Request instead. Canvas Session has been deprecated by Facebook. - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple issues relating to updating packages to newer versions. NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on codeplex Here is what is new in this release. WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic Support for JsonValue change notification events for databinding and other purposes Support for going between JsonValue and CLR types WCF HTTP - create HTTP / REST based web services. 
This is a minor release which contains fixe...Orchard Project: Orchard 0.9: Orchard Release Notes Build: 0.9.253 Published: 12/16/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...Pyxis 2: Beta 2.2: An issue stopping the Checkbox from working properly has been resolved (gradient background has also been added) A TabDialog control has been added NumericUpDown control added A driver for the VS1053 has been added (not yet in use) PyxisAPI.OpenFile now works with all its features PyxisAPI.SaveFile now works with all features Settings window is now available from the Pyxis menu You can now connect using Static IP or DHCP You can now set your system time from the Settings windo...sgMotion Animation Library: sgMotion v1.3 for SunBurn 2.0.9 [Includes Sample]: sgMotion 1.3 release for use with SunBurn 2.0.9 (all editions). Includes example project with assets, and full Windows and Xbox support. (tested on all platforms)DotSpatial: DotSpatial 12-15-2010: This release contains a few minor bug fixes and hopefully the GDAL libraries for the 3.5 x86 build actually built to the correct directory this time.DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap a full list of changes in this release.MSBuild Extension Pack: December 2010: Release Blog Post The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...TweetSharp: TweetSharp v2.0.0.0 - Preview 5: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 5 ChangesMaintenance release with user reported fixes Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numerous fixes reported by preview users Preview 3 ChangesNumerous ...Silverlight Contrib: Silverlight Contrib 2010.1.0: 2010.1.0 New FeaturesCompatibility Release for Silverlight 4 and Visual Studio 2010FlickrNet API Library: 3.1.4000: Newest release. Now contains dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation now as standard.mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. 
To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010New ProjectsANLP - Another .NET Lexer Parser: This project aims to have a lexer/parser working in Silverlight and help people to write their own grammar and make the lexer/parser available in Silverlight.Better App Tab Shortcuts Firefox Extension: An extension for Firefox 4.0 which improves the shortcuts for selecting tabs.BizTalk Map Converter: Converts BizTalk Maps into ExcelCampo Minado: Campo minado em SilverlightCSharpHASH: Hash project is an college project conserning business software development. Written in C#.DS_HW1: hw1DynamicViewModel: MVVM using POCOs with .NET 4.0: This project aims to provide a way to implement the Model View ViewModel (MVVM) architectural pattern using Plain Old CLR Objects (POCOs) while taking full advantage of .NET 4.0 DynamicObject Class.Evolution Game: Evolution game that uses genetic algorithms to visualize life of different species.Fluent Parser: Fluent Parser is a library allowing to build a syntaxic analyser in C#. The grammar is built using only .NET objects into the form of a Parsing Expression Grammar (PEG).GoogleMusic Player: Choose and play music from newly launched google Music http://www.google.co.in/music create and save playlists Using C#Gracefully update SharePoint 2010 document metadata: Users want to update the metadata for a document in a document library, for which they don’t have access or if they wanted to update some read-only fields. Hacker Passwords: Hacker Passwords generator - free open sourceHTML5 Canvas FlowView: FlowView is a simple to use API for creating/implementing a flowview style element in HTML5/Canvas. The FlowView looks much like something from Apple's cover view.L#: A F# program to provide similar functionalities offered by Prolog for F# oriented applications.MapInfo Tools: Utilities for working with MapInfo files.Notepad X: Notepad X is an alternative open source text editor for Microsoft Windows, with a lot of customization options created to help users managing text documents, featuring tab navigation.OpenFlyClient: An educational open source of a client written in C# for FlyFF. This is for educational and evaluation purposes only. Using this for private servers is not allowed.Orkut API Library: Orkut API Library (more of a wrapper) for .NET makes it easier for .NET developers creating web and/or desktop applications to leverage Orkut OpenSocial API from within the familiar Visual Studio environment. PasteHtml.NET: PasteHtml.NET is a .NET API for using the PasteHtml.com service.Photon: Silverlight MVVM Framework RSATasync: Running analyses in RiskSpectrum PSA Professional software is a time-consuming task. RSATasync is a middleware, which uses .NET4 Parallel Extensions to parallelize analyses sent by the editor to the analyzer in the RiskSpectrum software.The Killie01 Project: all killie01 projects (opensource)United Nations News for Windows Phone 7: Open source project for United Nations News for Windows Phone 7. WCF SIP Stack: WCF SIPWordpress .NET: WPDotnet is a library to ease the Wordpress integration with ASP.NET. Some Server controls are implemented. If you don't want to use them, you can just use the classes.

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft’s offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when Richard was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will. Find a course and register. Download this syllabus. Download the Scrum Guide.

    What is the Professional Scrum Developer course all about?
    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?
    This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?
    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:
    - Form effective teams
    - Explore and understand legacy “Brownfield” architecture
    - Define quality attributes, acceptance criteria, and “done”
    - Create automated builds
    - How to handle software hotfixes
    - Verify that bugs are identified and eliminated
    - Plan releases and sprints
    - Estimate product backlog items
    - Create and manage a sprint backlog
    - Hold an effective sprint review
    - Improve your process by using retrospectives
    - Use emergent architecture to avoid technical debt
    - Use Test Driven Development as a design tool
    - Set up and leverage continuous integration
    - Use Test Impact Analysis to decrease testing times
    - Manage SQL Server development in an Agile way
    - Use .NET and T-SQL refactoring effectively
    - Build, deploy, and test SQL Server databases
    - Create and manage test plans and cases
    - Create, run, record, and play back manual tests
    - Set up a branching strategy and branch code
    - Write more maintainable code
    - Identify and eliminate people and process dysfunctions
    - Inspect and improve your team’s software development process

    What does the week look like?
    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.

    The Sprints
    Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½-day sprints will roughly follow this schedule (component: description, minutes):
    - Instruction: Presentation and demonstration of new and relevant tools & practices, 60
    - Sprint planning meeting: Product owner presents backlog; each team commits to delivering functionality, 10
    - Sprint planning meeting: Each team determines how to build the functionality, 10
    - The Sprint: The team self-organizes and self-manages to complete their tasks, 120
    - Sprint Review meeting: Each team will present their increment of functionality to the other teams, 30
    - Sprint Retrospective: A group retrospective meeting will be held to inspect and adapt, 10
    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.

    Module 1: INTRODUCTION
    This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day-by-day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Topics: Trainer and student introductions; Professional Scrum Developer program; Agenda; Logistics; Team formation; Retrospective.

    Module 2: SCRUMDAMENTALS
    This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Topics: Scrum overview; Scrum roles; Scrum timeboxes (ceremonies); Scrum artifacts; Simulation; Retrospective. It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course.

    Module 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010
    This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Topics: Mapping Scrum to Visual Studio 2010; User Story work items; Task work items; Bug work items; Demonstration; Simulation; Retrospective.

    Module 4: THE CASE STUDY
    In this module the team is introduced to their problem domain for the week.
    A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and the what of the work in the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Topics: Introduction to the case study; Download the source code, build, and explore the application; Define the quality attributes for the project; Define “done”; How to file effective bugs in Visual Studio 2010; Retrospective.

    Module 5: HOTFIX
    This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. Topics: How to use Architecture Explorer to visualize and explore; Create a unit test to validate the existence of a bug; Find and fix the bug; Validate and close the bug; Retrospective.

    Module 6: PLANNING
    This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Topics: Release vs. Sprint planning; Release planning and the Product Backlog; Product Backlog prioritization; Acceptance criteria and tests; Sprint planning and the Sprint Backlog; Creating and linking Sprint tasks; Retrospective. At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.

    Module 7: EMERGENT ARCHITECTURE
    This module introduces the architectural practices and tools a team can use to develop a valid design on which to build new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Topics: Architecture and Scrum; Emergent architecture; Principles, patterns, and practices; Visual Studio 2010 modeling tools; UML and layer diagrams. SPRINT 1. Retrospective.

    Module 8: TEST DRIVEN DEVELOPMENT
    This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Topics: Continuous integration; Team Foundation Build; Test Driven Development (TDD); Refactoring; Test Impact Analysis. SPRINT 2. Retrospective.

    Module 9: AGILE DATABASE DEVELOPMENT
    This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle.
    Topics: Agile database development; Visual Studio database projects; Importing schema and scripts; Building and deploying; Generating data; Unit testing. SPRINT 3. Retrospective.

    Module 10: SHIP IT
    Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Topics: Acceptance criteria; Testing in Visual Studio 2010; Microsoft Test Manager; Writing and running manual tests; Branching. SPRINT 4. Retrospective.

    Module 11: OVERCOMING DYSFUNCTION
    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Topics: Scrum-butts and flaccid Scrum; Best practices working as a team; Team challenges; ScrumMaster challenges; Product Owner challenges; Stakeholder challenges; Course Retrospective.

    What will be expected of you and your team?
    This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:
    - Pay attention to all lectures and demonstrations
    - Participate in team and group discussions
    - Work collaboratively with other team members
    - Obey the timebox for each activity
    - Commit to work and do your best to deliver
    All teams should have these skills:
    - Understanding of Scrum
    - Familiarity with Visual Studio 2010
    - C#, .NET 4.0 & ASP.NET 4.0 experience*
    - SQL Server 2008 development experience
    - Software testing experience
    * Check with the instructor ahead of time for the exact technologies

    Self-organising teams
    Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!

    Who should NOT take this course?
    Because of the nature of this course, as explained above, certain types of people should probably not attend this course:
    - Students requiring command-and-control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course
    - Students who are unwilling to work within a timebox
    - Students who are unwilling to work collaboratively on a team
    - Students who don’t have any skill in any of the software development disciplines
    - Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience
    Find a course and register. Download this syllabus. Download the Scrum Guide. Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Udev webcam rule read, but not respected?

    - by user89305
    I have two USB webcams on the machine, but at boot they sometimes switch /dev/video numbers. The solution to this problem seems to be a new udev rule. I have added these rules in /etc/udev/rules.d/jj-video.rules:
    # Fix webcam 1
    KERNEL=="video1", SUBSYSTEM=="video4linux", SUBSYSTEMS=="usb", ATTRS{idVendor}=="1d6b", ATTRS{idProduct}=="0001", SYMLINK+="webcam1"
    # Fix webcam 2
    KERNEL=="video2", SUBSYSTEM=="video4linux", ATTR{name}=="Logitech QuickCam Pro 3000", KERNELS=="0000:00:1d.0", SUBSYSTEMS=="pci", DRIVERS=="uhci_hcd", ATTRS{vendor}=="0x8086", ATTRS{device}=="0x2658", SYMLINK+="webcam2"
    but the symlinks are not created. I have tried many different combinations in this file. The present ones are just my latest attempts. I found the parameters in: jjk@eee-old:~$ udevadm info -a -p $(udevadm info -q path -p /class/video4linux/video1) Udevadm info starts with the device specified by the devpath and then walks up the chain of parent devices. It prints for every device found, all possible attributes in the udev rules key format. A rule to match, can be composed by the attributes of the device and the attributes from one single parent device. looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1': KERNEL=="video1" SUBSYSTEM=="video4linux" DRIVER=="" ATTR{name}=="Logitech QuickCam Pro 3000" ATTR{index}=="0" ATTR{button}=="0" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0': KERNELS=="2-2:1.0" SUBSYSTEMS=="usb" DRIVERS=="Philips webcam" ATTRS{bInterfaceNumber}=="00" ATTRS{bAlternateSetting}==" 9" ATTRS{bNumEndpoints}=="02" ATTRS{bInterfaceClass}=="0a" ATTRS{bInterfaceSubClass}=="ff" ATTRS{bInterfaceProtocol}=="00" ATTRS{supports_autosuspend}=="0" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2': KERNELS=="2-2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{configuration}=="" ATTRS{bNumInterfaces}==" 3" ATTRS{bConfigurationValue}=="1" ATTRS{bmAttributes}=="a0" ATTRS{bMaxPower}=="500mA" ATTRS{urbnum}=="371076" ATTRS{idVendor}=="046d" ATTRS{idProduct}=="08b0" ATTRS{bcdDevice}=="0002" ATTRS{bDeviceClass}=="00" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumConfigurations}=="1" ATTRS{bMaxPacketSize0}=="8" ATTRS{speed}=="12" ATTRS{busnum}=="2" ATTRS{devnum}=="2" ATTRS{devpath}=="2" ATTRS{version}==" 1.10" ATTRS{maxchild}=="0" ATTRS{quirks}=="0x0" ATTRS{avoid_reset_quirk}=="0" ATTRS{authorized}=="1" ATTRS{serial}=="01402100A5000000" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2': KERNELS=="usb2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{configuration}=="" ATTRS{bNumInterfaces}==" 1" ATTRS{bConfigurationValue}=="1" ATTRS{bmAttributes}=="e0" ATTRS{bMaxPower}==" 0mA" ATTRS{urbnum}=="34" ATTRS{idVendor}=="1d6b" ATTRS{idProduct}=="0001" ATTRS{bcdDevice}=="0302" ATTRS{bDeviceClass}=="09" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumConfigurations}=="1" ATTRS{bMaxPacketSize0}=="64" ATTRS{speed}=="12" ATTRS{busnum}=="2" ATTRS{devnum}=="1" ATTRS{devpath}=="0" ATTRS{version}==" 1.10" ATTRS{maxchild}=="2" ATTRS{quirks}=="0x0" ATTRS{avoid_reset_quirk}=="0" ATTRS{authorized}=="1" ATTRS{manufacturer}=="Linux 3.2.0-29-generic uhci_hcd" ATTRS{product}=="UHCI Host Controller" ATTRS{serial}=="0000:00:1d.0" ATTRS{authorized_default}=="1" looking at parent device '/devices/pci0000:00/0000:00:1d.0': KERNELS=="0000:00:1d.0" SUBSYSTEMS=="pci" DRIVERS=="uhci_hcd" ATTRS{vendor}=="0x8086" ATTRS{device}=="0x2658" ATTRS{subsystem_vendor}=="0x1043" ATTRS{subsystem_device}=="0x82d8" ATTRS{class}=="0x0c0300"
ATTRS{irq}=="23" ATTRS{local_cpus}=="ff" ATTRS{local_cpulist}=="0-7" ATTRS{dma_mask_bits}=="32" ATTRS{consistent_dma_mask_bits}=="32" ATTRS{broken_parity_status}=="0" ATTRS{msi_bus}=="" looking at parent device '/devices/pci0000:00': KERNELS=="pci0000:00" SUBSYSTEMS=="" DRIVERS=="" jjk@eee-old:~$ And tested the setup: sudo udevadm --debug test /sys/class/video4linux/video1 main: runtime dir '/run/udev' run_command: calling: test adm_test: version 175 This program is for debugging only, it does not run any program, specified by a RUN key. It may show incorrect results, because some values may be different, or not available at a simulation run. parse_file: reading '/lib/udev/rules.d/40-crda.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-fuse.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-gnupg.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-hplip.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-ia64.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-inputattach.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-libgphoto2-2.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-libsane.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-ppc.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-usb_modeswitch.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-xserver-xorg-video-intel.rules' as rules file parse_file: reading '/lib/udev/rules.d/42-qemu-usb.rules' as rules file parse_file: reading '/lib/udev/rules.d/50-firmware.rules' as rules file parse_file: reading '/lib/udev/rules.d/50-udev-default.rules' as rules file parse_file: reading '/lib/udev/rules.d/55-dm.rules' as rules file parse_file: reading '/lib/udev/rules.d/56-hpmud_support.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-cdrom_id.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-pcmcia.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-alsa.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-input.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-serial.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage-dm.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage-tape.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-v4l.rules' as rules file parse_file: reading '/lib/udev/rules.d/61-accelerometer.rules' as rules file parse_file: reading '/lib/udev/rules.d/64-xorg-xkb.rules' as rules file parse_file: reading '/lib/udev/rules.d/66-xorg-synaptics-quirks.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-cd-sensors.rules' as rules file add_rule: IMPORT found builtin 'usb_id', replacing /lib/udev/rules.d/69-cd-sensors.rules:76 parse_file: reading '/lib/udev/rules.d/69-libmtp.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-xorg-vmmouse.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-xserver-xorg-input-wacom.rules' as rules file parse_file: reading '/etc/udev/rules.d/70-persistent-cd.rules' as rules file parse_file: reading '/etc/udev/rules.d/70-persistent-net.rules' as rules file parse_file: reading '/lib/udev/rules.d/70-printers.rules' as rules file parse_file: reading '/lib/udev/rules.d/70-udev-acl.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-cd-aliases-generator.rules' as rules file parse_file: 
reading '/lib/udev/rules.d/75-net-description.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-persistent-net-generator.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-probe_mtd.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-tty-description.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-ericsson-mbm.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-longcheer-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-nokia-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-pcmcia-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-platform-serial-whitelist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-qdl-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-simtech-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-usb-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-x22x-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-zte-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-nm-olpc-mesh.rules' as rules file parse_file: reading '/lib/udev/rules.d/78-graphics-card.rules' as rules file parse_file: reading '/lib/udev/rules.d/78-sound-card.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-drivers.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-mm-candidate.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-udisks.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-brltty.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-hdparm.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-hplj10xx.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-keyboard-configuration.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-regulatory.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-usbmuxd.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-alsa-restore.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-alsa-ucm.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-libgpod.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-pulseaudio.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-cd-devices.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-keyboard-force-release.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-keymap.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-udev-late.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-dell.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-fujitsu.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-gateway.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-ibm.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-lenovo.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-toshiba.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-csr.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-hid.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-wup.rules' as rules file parse_file: reading '/lib/udev/rules.d/97-bluetooth-hid2hci.rules' as rules file parse_file: reading '/etc/udev/rules.d/jj-video.rules' as 
rules file udev_rules_new: rules use 259284 bytes tokens (21607 * 12 bytes), 37913 bytes buffer udev_rules_new: temporary index used 67520 bytes (3376 * 20 bytes) udev_device_new_from_syspath: device 0x215103e0 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' udev_device_new_from_syspath: device 0x21510758 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' udev_device_read_db: device 0x21510758 filled with db file data udev_device_new_from_syspath: device 0x21510e10 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0' udev_device_new_from_syspath: device 0x21511b10 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2' udev_device_new_from_syspath: device 0x215132f8 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2' udev_device_new_from_syspath: device 0x21513650 has devpath '/devices/pci0000:00/0000:00:1d.0' udev_device_new_from_syspath: device 0x21513980 has devpath '/devices/pci0000:00' udev_rules_apply_to_event: GROUP 44 /lib/udev/rules.d/50-udev-default.rules:29 udev_rules_apply_to_event: IMPORT 'v4l_id /dev/video1' /lib/udev/rules.d/60-persistent-v4l.rules:7 udev_event_spawn: starting 'v4l_id /dev/video1' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_VERSION=2' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_PRODUCT=Logitech QuickCam Pro 3000' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_CAPABILITIES=:capture:' spawn_wait: 'v4l_id /dev/video1' [2609] exit with return code 0 udev_rules_apply_to_event: IMPORT builtin 'usb_id' /lib/udev/rules.d/60-persistent-v4l.rules:9 builtin_usb_id: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0: if_class 10 protocol 0 udev_builtin_add_property: ID_VENDOR=046d udev_builtin_add_property: ID_VENDOR_ENC=046d udev_builtin_add_property: ID_VENDOR_ID=046d udev_builtin_add_property: ID_MODEL=08b0 udev_builtin_add_property: ID_MODEL_ENC=08b0 udev_builtin_add_property: ID_MODEL_ID=08b0 udev_builtin_add_property: ID_REVISION=0002 udev_builtin_add_property: ID_SERIAL=046d_08b0_01402100A5000000 udev_builtin_add_property: ID_SERIAL_SHORT=01402100A5000000 udev_builtin_add_property: ID_TYPE=generic udev_builtin_add_property: ID_BUS=usb udev_builtin_add_property: ID_USB_INTERFACES=:0aff00:010100:010200: udev_builtin_add_property: ID_USB_INTERFACE_NUM=00 udev_builtin_add_property: ID_USB_DRIVER=Philips webcam udev_rules_apply_to_event: LINK 'v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' /lib/udev/rules.d/60-persistent-v4l.rules:10 udev_rules_apply_to_event: IMPORT builtin 'path_id' /lib/udev/rules.d/60-persistent-v4l.rules:16 udev_builtin_add_property: ID_PATH=pci-0000:00:1d.0-usb-0:2:1.0 udev_builtin_add_property: ID_PATH_TAG=pci-0000_00_1d_0-usb-0_2_1_0 udev_rules_apply_to_event: LINK 'v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' /lib/udev/rules.d/60-persistent-v4l.rules:17 udev_rules_apply_to_event: RUN 'udev-acl --action=$env{ACTION} --device=$env{DEVNAME}' /lib/udev/rules.d/70-udev-acl.rules:74 udev_rules_apply_to_event: LINK 'webcam1' /etc/udev/rules.d/jj-video.rules:2 udev_event_execute_rules: no node name set, will use kernel supplied name 'video1' udev_node_add: creating device node '/dev/video1', devnum=81:1, mode=0660, uid=0, gid=44 udev_node_mknod: preserve file '/dev/video1', because it has correct dev_t udev_node_mknod: preserve permissions /dev/video1, 020660, uid=0, gid=44 node_symlink: preserve already existing symlink '/dev/char/81:1' to '../video1' link_find_prioritized: found 'c81:2' claiming 
'/run/udev/links/v4l\x2fby-id\x2fusb-046d_08b0_01402100A5000000-video-index0' udev_device_new_from_syspath: device 0x21516748 has devpath '/devices/pci0000:00/0000:00:1d.1/usb3/3-2/3-2:1.0/video4linux/video2' udev_device_read_db: device 0x21516748 filled with db file data link_find_prioritized: found 'c81:1' claiming '/run/udev/links/v4l\x2fby-id\x2fusb-046d_08b0_01402100A5000000-video-index0' link_update: creating link '/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' to '/dev/video1' node_symlink: atomically replace '/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' link_find_prioritized: found 'c81:1' claiming '/run/udev/links/v4l\x2fby-path\x2fpci-0000:00:1d.0-usb-0:2:1.0-video-index0' link_update: creating link '/dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' to '/dev/video1' node_symlink: preserve already existing symlink '/dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' to '../../video1' link_find_prioritized: found 'c81:1' claiming '/run/udev/links/webcam1' link_update: creating link '/dev/webcam1' to '/dev/video1' node_symlink: preserve already existing symlink '/dev/webcam1' to 'video1' udev_device_update_db: created db file '/run/udev/data/c81:1' for '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' ACTION=add COLORD_DEVICE=1 COLORD_KIND=camera DEVLINKS=/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0 /dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0 /dev/webcam1 DEVNAME=/dev/video1 DEVPATH=/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1 ID_BUS=usb ID_MODEL=08b0 ID_MODEL_ENC=08b0 ID_MODEL_ID=08b0 ID_PATH=pci-0000:00:1d.0-usb-0:2:1.0 ID_PATH_TAG=pci-0000_00_1d_0-usb-0_2_1_0 ID_REVISION=0002 ID_SERIAL=046d_08b0_01402100A5000000 ID_SERIAL_SHORT=01402100A5000000 ID_TYPE=generic ID_USB_DRIVER=Philips webcam ID_USB_INTERFACES=:0aff00:010100:010200: ID_USB_INTERFACE_NUM=00 ID_V4L_CAPABILITIES=:capture: ID_V4L_PRODUCT=Logitech QuickCam Pro 3000 ID_V4L_VERSION=2 ID_VENDOR=046d ID_VENDOR_ENC=046d ID_VENDOR_ID=046d MAJOR=81 MINOR=1 SUBSYSTEM=video4linux TAGS=:udev-acl: UDEV_LOG=6 USEC_INITIALIZED=18213768 run: 'udev-acl --action=add --device=/dev/video1' jjk@eee-old:~$ (and correspondingly for video2) It looks to me like my rules are read, but not respected. What am I doing wrong?
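For context, the custom rules file being tested is /etc/udev/rules.d/jj-video.rules; its second line is what emits the "LINK 'webcam1'" event in the transcript above. The file itself is not quoted in the question, so the following is only a plausible sketch of such a rules file; the match keys and the second camera's ID_PATH value are assumptions, not the actual file contents:

# /etc/udev/rules.d/jj-video.rules (hypothetical reconstruction)
# Give each identical webcam a stable name based on the USB port it is plugged into.
# ID_PATH is imported earlier by 60-persistent-v4l.rules, so it is available here.
KERNEL=="video*", SUBSYSTEM=="video4linux", ENV{ID_PATH}=="pci-0000:00:1d.0-usb-0:2:1.0", SYMLINK+="webcam1"
KERNEL=="video*", SUBSYSTEM=="video4linux", ENV{ID_PATH}=="pci-0000:00:1d.1-usb-0:2:1.0", SYMLINK+="webcam2"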

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx

Microsoft just released a bunch of new features for Azure on the 22nd, and the one I was most interested in is DocumentDB, a document NoSQL database service in the cloud.

Quick Look at DocumentDB

We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB account, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In the resource group section we can select the region where our DocumentDB will be located. As with other Azure services, choose the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB will be ready. Click the KEYS button and we can find the URI and primary key, which will be used when connecting.

Now let's open Visual Studio and try to use the DocumentDB we have just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window since this library has not yet been released.

Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, then creates a database and a collection. It also prints the database and collection link strings, which will be used later to insert and query documents.

static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);
    Run(client).Wait();

    Console.WriteLine("done");
    Console.ReadKey();
}

static async Task Run(DocumentClient client)
{
    var database = new Database() { Id = "testdb" };
    database = await client.CreateDatabaseAsync(database);
    Console.WriteLine("database link = {0}", database.SelfLink);

    var collection = new DocumentCollection() { Id = "testcol" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
    Console.WriteLine("collection link = {0}", collection.SelfLink);
}

Below is the result from the console window. We need to copy the collection link string for future use. Now if we go back to the portal we will find a database listed with the name we specified in the code.

Next we will insert a document into the database and collection we have just created. In the code below we pasted the collection link copied in the previous step and created a dynamic object with several properties defined. As you can see, we can add normal properties containing strings and integers, and we can also add complex properties, for example an array, a dictionary or an object reference, as long as they can be serialized to JSON.

static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);

    // collection link pasted from the result in previous demo
    var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

    // document we are going to insert to database
    dynamic doc = new ExpandoObject();
    doc.firstName = "Shaun";
    doc.lastName = "Xu";
    doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

    // insert the document
    InsertADoc(client, collectionLink, doc).Wait();

    Console.WriteLine("done");
    Console.ReadKey();
}

The insert code is very simple, as below; just provide the collection link and the object we are going to insert.

static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
{
    var document = await client.CreateDocumentAsync(collectionLink, doc);
    Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
}

Below is the result after the object has been inserted.

Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us retrieve all documents in it (a filtered variant is sketched after this post).

static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);

    var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

    SelectDocs(client, collectionLink);

    Console.WriteLine("done");
    Console.ReadKey();
}

static void SelectDocs(DocumentClient client, string collectionLink)
{
    var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
    foreach (var doc in docs)
    {
        Console.WriteLine(doc);
    }
}

Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attaches some properties we didn't specify, such as "_rid", "_ts", "_self" etc., which are controlled by the service.

DocumentDB Benefits

DocumentDB is a document NoSQL database service. Unlike a traditional relational database, a document database is truly schema-free. In a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub-documents are retrieved at the same time. This means you don't need to join other tables, as you would with a traditional database. A document database is very useful when building a high-performance system with a hierarchical data structure. For example, assume we need to build a blog system: there will be many blog posts, and each of them contains the content and comments. A comment can be commented on as well. If we were using a traditional database, say SQL Server, the database schema might be defined as below. When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if we were using DocumentDB, all we need to do is save the post as a document with a list containing all comments. Under a comment, all sub-comments are a list within it. When we display this post we just need to query the post document, and the content and all comments will be loaded in the proper structure.

{
    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
    "title": "xxxxx",
    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
    "postedOn": "08/25/2014 13:55",
    "comments":
    [
        {
            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
            "commentedOn": "08/25/2014 14:00",
            "commentedBy": "xxx"
        },
        {
            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
            "commentedOn": "08/25/2014 14:10",
            "commentedBy": "xxx",
            "comments":
            [
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:18",
                    "commentedBy": "xxx",
                    "comments":
                    [
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 18:22",
                            "commentedBy": "xxx"
                        }
                    ]
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 15:02",
                    "commentedBy": "xxx"
                }
            ]
        },
        {
            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
            "commentedOn": "08/25/2014 14:30",
            "commentedBy": "xxx"
        }
    ]
}

DocumentDB vs. Table Storage

DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage?". Here are some ideas from me and some MVPs. First of all, they are different kinds of NoSQL database: DocumentDB is a document database while Table Storage is a key-value database. Second, Table Storage is cheaper. DocumentDB supports scaling out from one capacity unit to five during the preview period, and each capacity unit provides 10 GB of local SSD storage. The price is $0.73/day, including a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of DocumentDB. Third, Table Storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB doesn't. Fourth, there is a local emulator for Table Storage but none for DocumentDB; we have to connect to the DocumentDB in the cloud when developing locally. But DocumentDB supports some cool features that Table Storage doesn't have: it supports stored procedures, triggers and user-defined functions; it supports rich indexing, while Table Storage only supports indexing on the partition key and row key; and it supports transactions (Table Storage does as well, but restricted to the Entity Group Transaction scope). And last, Table Storage is GA while DocumentDB is still in preview.

Summary

In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to work with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. I then explained the concept and benefits of using a document database and compared it with Table Storage.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
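As a follow-up to the query demo above, here is a sketch of a filtered query against the same collection. The SQL-style grammar and the CreateDocumentQuery overload taking a query string are assumptions about the preview SDK, not something shown in the post:

static void SelectByFirstName(DocumentClient client, string collectionLink)
{
    // "root" aliases the document in DocumentDB's SQL-style query grammar (assumed).
    var docs = client.CreateDocumentQuery(
        collectionLink + "docs/",
        "SELECT * FROM root r WHERE r.firstName = 'Shaun'").ToList();
    foreach (var doc in docs)
    {
        Console.WriteLine(doc);
    }
}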

    Read the article

  • Metro: Dynamically Switching Templates with a WinJS ListView

    - by Stephen.Walther
    Imagine that you want to display a list of products using the WinJS ListView control. Imagine, furthermore, that you want to use different templates to display different products. In particular, when a product is on sale, you want to display the product using a special “On Sale” template. In this blog entry, I explain how you can switch templates dynamically when displaying items with a ListView control. In other words, you learn how to use more than one template when displaying items with a ListView control. Creating the Data Source Let’s start by creating the data source for the ListView. Nothing special here – our data source is a list of products. Two of the products, Oranges and Apples, are on sale. (function () { "use strict"; var products = new WinJS.Binding.List([ { name: "Milk", price: 2.44 }, { name: "Oranges", price: 1.99, onSale: true }, { name: "Wine", price: 8.55 }, { name: "Apples", price: 2.44, onSale: true }, { name: "Steak", price: 1.99 }, { name: "Eggs", price: 2.44 }, { name: "Mushrooms", price: 1.99 }, { name: "Yogurt", price: 2.44 }, { name: "Soup", price: 1.99 }, { name: "Cereal", price: 2.44 }, { name: "Pepsi", price: 1.99 } ]); WinJS.Namespace.define("ListViewDemos", { products: products }); })(); The file above is saved with the name products.js and referenced by the default.html page described below. Declaring the Templates and ListView Control Next, we need to declare the ListView control and the two Template controls which we will use to display template items. The markup below appears in the default.html file: <!-- Templates --> <div id="productItemTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> <div id="productOnSaleTemplate" data-win-control="WinJS.Binding.Template"> <div class="product onSale"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> (On Sale!) </div> </div> <!-- ListView --> <div id="productsListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: ListViewDemos.products.dataSource, layout: { type: WinJS.UI.ListLayout } }"> </div> In the markup above, two Template controls are declared. The first template is used when rendering a normal product and the second template is used when rendering a product which is on sale. The second template, unlike the first template, includes the text “(On Sale!)”. The ListView control is bound to the data source which we created in the previous section. The ListView itemDataSource property is set to the value ListViewDemos.products.dataSource. Notice that we do not set the ListView itemTemplate property. We set this property in the default.js file. Switching Between Templates All of the magic happens in the default.js file. The default.js file contains the JavaScript code used to switch templates dynamically. 
Here’s the entire contents of the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.UI.processAll().then(function () { var productsListView = document.getElementById("productsListView"); productsListView.winControl.itemTemplate = itemTemplateFunction; });; } }; function itemTemplateFunction(itemPromise) { return itemPromise.then(function (item) { // Select either normal product template or on sale template var itemTemplate = document.getElementById("productItemTemplate"); if (item.data.onSale) { itemTemplate = document.getElementById("productOnSaleTemplate"); }; // Render selected template to DIV container var container = document.createElement("div"); itemTemplate.winControl.render(item.data, container); return container; }); } app.start(); })(); In the code above, a function is assigned to the ListView itemTemplate property with the following line of code: productsListView.winControl.itemTemplate = itemTemplateFunction;   The itemTemplateFunction returns a DOM element which is used for the template item. Depending on the value of the product onSale property, the DOM element is generated from either the productItemTemplate or the productOnSaleTemplate template. Using Binding Converters instead of Multiple Templates In the previous sections, I explained how you can use different templates to render normal products and on sale products. There is an alternative approach to displaying different markup for normal products and on sale products. Instead of creating two templates, you can create a single template which contains separate DIV elements for a normal product and an on sale product. The following default.html file contains a single item template and a ListView control bound to the template. <!-- Template --> <div id="productItemTemplate" data-win-control="WinJS.Binding.Template"> <div class="product" data-win-bind="style.display: onSale ListViewDemos.displayNormalProduct"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> <div class="product onSale" data-win-bind="style.display: onSale ListViewDemos.displayOnSaleProduct"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> (On Sale!) </div> </div> <!-- ListView --> <div id="productsListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: ListViewDemos.products.dataSource, itemTemplate: select('#productItemTemplate'), layout: { type: WinJS.UI.ListLayout } }"> </div> The first DIV element is used to render a normal product: <div class="product" data-win-bind="style.display: onSale ListViewDemos.displayNormalProduct"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> The second DIV element is used to render an “on sale” product: <div class="product onSale" data-win-bind="style.display: onSale ListViewDemos.displayOnSaleProduct"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> (On Sale!) </div> Notice that both templates include a data-win-bind attribute. These data-win-bind attributes are used to show the “normal” template when a product is not on sale and show the “on sale” template when a product is on sale. These attributes set the Cascading Style Sheet display attribute to either “none” or “block”. The data-win-bind attributes take advantage of binding converters. 
The binding converters are defined in the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.UI.processAll(); } }; WinJS.Namespace.define("ListViewDemos", { displayNormalProduct: WinJS.Binding.converter(function (onSale) { return onSale ? "none" : "block"; }), displayOnSaleProduct: WinJS.Binding.converter(function (onSale) { return onSale ? "block" : "none"; }) }); app.start(); })(); The ListViewDemos.displayNormalProduct binding converter converts the value true or false to the value “none” or “block”. The ListViewDemos.displayOnSaleProduct binding converter does the opposite; it converts the value true or false to the value “block” or “none” (Sadly, you cannot simply place a NOT operator before the onSale property in the binding expression – you need to create both converters). The end result is that you can display different markup depending on the value of the product onSale property. Either the contents of the first or second DIV element are displayed: Summary In this blog entry, I’ve explored two approaches to displaying different markup in a ListView depending on the value of a data item property. The bulk of this blog entry was devoted to explaining how you can assign a function to the ListView itemTemplate property which returns different templates. We created both a productItemTemplate and productOnSaleTemplate and displayed both templates with the same ListView control. We also discussed how you can create a single template and display different markup by using binding converters. The binding converters are used to set a DIV element’s display property to either “none” or “block”. We created a binding converter which displays normal products and a binding converter which displays “on sale” products.
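As a closing aside, the two converters above are mirror images of each other, so if the duplication bothers you, both can be derived from a single factory. This is only a sketch built on the same WinJS.Binding.converter API used above; makeDisplayConverter is a name invented here:

(function () {
    "use strict";

    // Returns a converter that maps the boolean onSale property to a CSS display
    // value, showing the element only when onSale matches showWhenOnSale.
    function makeDisplayConverter(showWhenOnSale) {
        return WinJS.Binding.converter(function (onSale) {
            return (!!onSale === showWhenOnSale) ? "block" : "none";
        });
    }

    WinJS.Namespace.define("ListViewDemos", {
        displayNormalProduct: makeDisplayConverter(false),
        displayOnSaleProduct: makeDisplayConverter(true)
    });
})();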

    Read the article

  • 'Content' is NOT 'Text' in XAML

    - by psheriff
One of the key concepts in XAML is that the Content property of a XAML control like a Button or ComboBoxItem does not have to contain just textual data. In fact, Content can be almost any other XAML that you want. To illustrate, here is a simple example of how to spruce up your Button controls in Silverlight. Here is some very simple XAML that consists of two Button controls within a StackPanel on a Silverlight User Control.

<StackPanel>
  <Button Name="btnHome"
          HorizontalAlignment="Left"
          Content="Home" />
  <Button Name="btnLog"
          HorizontalAlignment="Left"
          Content="Logs" />
</StackPanel>

The XAML listed above will produce a Silverlight control within a browser that looks like Figure 1.

Figure 1: Normal button controls are quite boring.

With just a little bit of refactoring to move the button attributes into Styles, we can make the buttons look a little better. I am a big believer in Styles, so I typically create a Resources section within my user control where I can factor out the common attribute settings for a particular set of controls. Here is a Resources section that I added to my Silverlight user control.

<UserControl.Resources>
  <Style TargetType="Button"
         x:Key="NormalButton">
    <Setter Property="HorizontalAlignment"
            Value="Left" />
    <Setter Property="MinWidth"
            Value="50" />
    <Setter Property="Margin"
            Value="10" />
  </Style>
</UserControl.Resources>

Now back in the XAML within the Grid control I update the Button controls to use the Style attribute and have each button use the StaticResource called NormalButton.

<StackPanel>
  <Button Name="btnHome"
          Style="{StaticResource NormalButton}"
          Content="Home" />
  <Button Name="btnLog"
          Style="{StaticResource NormalButton}"
          Content="Logs" />
</StackPanel>

With the additional attributes set in the Resources section on the Button, the above XAML will now display the two buttons as shown in Figure 2.

Figure 2: Use Resources to make buttons more consistent.

Now let's redesign these buttons even more. Instead of using words for each button, let's replace the Content property with a picture. As they say… a picture is worth a thousand words, so let's take advantage of that. Modify each of the Button controls, eliminate the Content attribute and instead insert an <Image> control between the <Button> and </Button> tags. Add a ToolTip to still display the words you had before in the Content and you will now have better-looking buttons, as shown in Figure 3.

Figure 3: Using pictures instead of words can be an effective method of communication.

The XAML to produce Figure 3 is shown in the following listing:

<StackPanel>
  <Button Name="btnHome"
          ToolTipService.ToolTip="Home"
          Style="{StaticResource NormalButton}">
    <Image Style="{StaticResource NormalImage}"
           Source="Images/Home.jpg" />
  </Button>
  <Button Name="btnLog"
          ToolTipService.ToolTip="Logs"
          Style="{StaticResource NormalButton}">
    <Image Style="{StaticResource NormalImage}"
           Source="Images/Log.jpg" />
  </Button>
</StackPanel>

You will also need to add the following XAML to the User Control's Resources section.

<Style TargetType="Image"
       x:Key="NormalImage">
  <Setter Property="Width"
          Value="50" />
</Style>

Add Multiple Controls to Content

Now, since the Content can be whatever we want, you could also modify the Content of each button to be a StackPanel control. Then you can have an image and text within the button.

<StackPanel>
  <Button Name="btnHome"
          ToolTipService.ToolTip="Home"
          Style="{StaticResource NormalButton}">
    <StackPanel>
      <Image Style="{StaticResource NormalImage}"
             Source="Images/Home.jpg" />
      <TextBlock Text="Home"
                 Style="{StaticResource NormalTextBlock}" />
    </StackPanel>
  </Button>
  <Button Name="btnLog"
          ToolTipService.ToolTip="Logs"
          Style="{StaticResource NormalButton}">
    <StackPanel>
      <Image Style="{StaticResource NormalImage}"
             Source="Images/Log.jpg" />
      <TextBlock Text="Logs"
                 Style="{StaticResource NormalTextBlock}" />
    </StackPanel>
  </Button>
</StackPanel>

You will need to add one more resource for the TextBlock control too.

<Style TargetType="TextBlock"
       x:Key="NormalTextBlock">
  <Setter Property="HorizontalAlignment"
          Value="Center" />
</Style>

All of the above will now produce the following:

Figure 4: Add multiple controls to the content to make your buttons even more interesting.

Summary

While this is a simple example, you can see how XAML Content has great flexibility. You could add a MediaElement control as the content of a Button and play a video within the Button (a rough sketch follows this post). Not that you would necessarily do this, but it does work. What is nice about adding different content within the Button control is that you still get the Click event and other attributes of a button, but it does not necessarily look like a normal button.

Good luck with your coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled "Silverlight XAML for the Complete Novice - Part 1."
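To make the MediaElement remark in the summary concrete, here is a rough sketch; the video path is a placeholder and btnVideo is a name invented for this example. The point is that arbitrary content still leaves the Button's Click event intact:

<Button Name="btnVideo"
        ToolTipService.ToolTip="Play Video"
        Style="{StaticResource NormalButton}">
  <!-- Any UIElement can be Button content; here a video plays inside the button. -->
  <MediaElement Source="Videos/Demo.wmv"
                Width="120"
                AutoPlay="True" />
</Button>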

    Read the article

  • CodePlex Daily Summary for Saturday, January 08, 2011

    CodePlex Daily Summary for Saturday, January 08, 2011Popular ReleasesExtended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release?BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and SourcePlease note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...sNPCedit: sNPCedit v0.9d: added elementclient coordinate catcher to catch coordinates select a target (ingame) i.e. your char, npc or monster than click the button and coordinates+direction will be transfered to the selected row in the table corrected labels from Rot to Direction (because it is a vector)AutoLoL: AutoLoL v1.5.2: Implemented the Auto Updater Fix: Your settings will no longer be cleared with new releases of AutoLoL The mastery Editor and Browser now have their own tabs instead of nested tabs The Browser tab will only show the masteries matching ALL filters instead of just one Added a 'Browse' button in the Mastery Editor tab to open the Masteries Directory The Browser tab now shows a message when there are no mastery files in the Masteries Directory Fix: Fixed the Save As dialog again, for ...Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.SSH.NET Library: 2011.1.6: Fixes CommandTimeout default value is fixed to infinite. Port Forwarding feature improvements Memory leaks fixes New Features Add ErrorOccurred event to handle errors that occurred on different thread New and improve SFTP features SftpFile now has more attributes and some operations Most standard operations now available Allow specify encoding for command execution KeyboardInteractiveConnectionInfo class added for "keyboard-interactive" authentication. Add ability to specify bo....NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. 
object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...VidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch....NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buffEnhSim: EnhSim 2.2.9 BETA: 2.2.9 BETAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the Gobl...xUnit.net - Unit Testing for .NET: xUnit.net 1.7 Beta: xUnit.net release 1.7 betaBuild #1533 Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3 Added Assert.Equal(double expected, double actual, int precision)...Json.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project New feature - Added dynamic support to LINQ to JSON New feature - Added dynamic support to serializer New feature - Added INotifyCollectionChanged to JContainer in .NET 4 build New feature - Added ReadAsDateTimeOffset to JsonReader New feature - Added ReadAsDecimal to JsonReader New feature - Added covariance to IJEnumerable type parameter New feature - Added XmlSerializer style Specified property support New feature - Added ...DbDocument: DbDoc Initial Version: DbDoc Initial versionASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes Atomic CMS installation guide N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. 
Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) All-round improvements and bugfixes File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open the proj...Mobile Device Detection and Redirection: 0.1.11.10: IMPORTANT CHANGESThis release changes the way some WURFL capabilities and attributes are exposed to .NET developers. If you cast MobileCapabilities to return some values then please read the Release Note before implementing this release. The following code snippet can be used to access any WURFL capability. For instance, if the device is a tablet: string capability = Request.Browser["is_tablet"]; SummaryNew attributes have been added to the redirect section: originalUrlAsQueryString If se...Wii Backup Fusion: Wii Backup Fusion 1.0: - Norwegian translation - French translation - German translation - WBFS dump for analysis - Scalable full HQ cover - Support for log file - Load game images improved - Support for image splitting - Diff for images after transfer - Support for scrubbing modes - Search functionality for log - Recurse depth for Files/Load - Show progress while downloading game cover - Supports more databases for cover download - Game cover loading routines improvedBlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...New Projects2011 Scripting Games: Track Changes for the 2011 Scripting Games to be hosted on PoshCode by the Microsoft Scripting Guys and the PowerShell community. ADO.NET Unit Testable Repository Generator: Generates a strongly-typed Object Context and POCO Entities from your EDMX file. Additionally a Repository and an abstract Unit Test for the Repository is created (with a mock for the ObjectContext and mock for the ObjectSets). Bases on the "ADO.NET C# POCO Entity Generator".AshpazKhone: ???AVAilable Rooms: AVAilable Rooms Can be used to show and book rooms on an exchange serverContentManager: Content Manger for SAP Kpro exports the data pool of a content-server by PHIOS-Lists (txt-files) into local binary files. Afterwards this files can be imported in a SAP content-repository . The whole List of the PHIOS Keys can also be downloaded an splitted in practical units.CSLauncher: CSLauncher is a small application providing a launching interface similar to Counter Strike's weapons selection menu. You can organize your favorite applications in CSLauncher in the order you wish, then launch them by remembering their numbers.DbMetadata: A simple lightweight library used to retrieve database metadata.DotNetSiemensPLCToolBoxLibrary: DotNetSiemensPLCToolBoxLibrary A Library for working with Siemens Step5 and Step7 Projects, connecting to S5 or S7 PLC's.EmptyX: EmptyX is a game developped with c# and xna for the XNA CHALLENGE in AlgeriaERASMUS: Erasmus šitFaceDeBouc: Simplebook, le nouveau Facebook CPE! 
Technlogies .NET Serveur: - BDD SQLServer - Logique métier - Web services Clients: - client lourd WPF - client léger ASP.NETFunction resolver: Application using .NET Expression trees to resolve a function for a set of numbersMLRobotic: Programme de gestion MLRobotic 2011 Sissa ProjectMS CRM Multiple Attachment Download Application: Microsoft Dynamics CRM 4.0 currently does not have a built in way for CRM Administrators to Export multiple attachments from the database and store it on a local directory on the Server or the local PC. This application allows you to do so.Open Data Publisher: Open Data Publisher is a C# MVC application that allows you to quickly and easily publish data from SQL Server tables and views using the OData protocol, and present data from OData and static sources to the public for download and exploration.OttawaZagreb1: Test projectPrague: ????? ?? ???? ??????PSSplunk: PSSplunk is a set of Powershell cmdlets that use the RESTFUL API provided by www.splunk.com The goal is provide access to all of Splunks configuration and search features from the CLI without having to have Splunk installed locally.Reactive C#: Reactive language extensions for C#SemanaWebcasts: Projeto para semana de webcasts.Steward: Personal StewardSVNUG.Tracker: SVNUG.Tracker is an example task tracking application used as a presentation project for the Sangamon Valley .NET User Group (svnug.com). The purpose of SVNUG.Tracker is to be a common problem domain for exercising new .NET technology. SVNUG.Tracker is written in C# 4.The Silverlight Cookbook - A Reference Application for Silverlight 4 LOB Apps: The Silverlight Cookbook is a .NET example application that demonstrates how to build a full-blown line-of-business app using the reference architecture explained at: http://www.dennisdoomen.net/search/label/Silverlight Vkontakte Wpf Client: This is a simple client for social network Vkontakte. It can send message to your friends, download audio files or play it,list user video. Also you can view user status and change yours.Windows Phone Multilingual Keyboard: Multilingual keyboard for Windows Phone 7WPF Controls & MVVM Practices: WPF Controls contains various controls that can be used in applications. Currently Featured * FilteredListBox (filter and selected items notification) * IsDirty functionality for a ViewModel model

    Read the article

  • CodePlex Daily Summary for Tuesday, December 21, 2010

    CodePlex Daily Summary for Tuesday, December 21, 2010Popular ReleasesSQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 7: 1. added script save/load in user query window 2. fixed problem with connection dialog when choosing windows auth but still ask for user name 3. auto open user table when double click one table node 4. improved alert message, added log only methodOpen NFe: Open NFe 2.0 (Beta): Última versão antes da versão final a ser lançada nos próximos dias.EnhSim: EnhSim 2.2.6 ALPHA: 2.2.6 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Fixing up some r...LINQ to Twitter: LINQ to Twitter Beta v2.0.18: Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4.3: Helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: Improvements for confirm, popup, popup form RenderView controller extension the user experience for crud in live demo has been substantially improved + added search all the features are shown in the live demoGanttPlanner: GanttPlanner V1.0: GanttPlanner V1.0 include GanttPlanner.dll and also a Demo application.N2 CMS: 2.1 release candidate 3: * Web platform installer support available N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) A bunch of bugs were fixed File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the templ...TweetSharp: TweetSharp v2.0.0.0 - Preview 6: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 6 ChangesMaintenance release with user reported fixes Preview 5 ChangesMaintenance release with user reported fixes Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numer...Team Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1, is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010, or Team Foundation Server 2010.Hacker Passwords: HackerPasswords.zip: Source code, executable and documentationSubtitleTools: SubtitleTools 1.3: - Added .srt FileAssociation & Win7 ShowRecentCategory feature. - Applied UnifiedYeKe to fix Persian search problems. 
- Reduced file size of Persian subtitles for uploading @OSDB.Facebook C# SDK: 4.1.0: - Lots of bug fixes - Removed Dynamic Runtime Language dependencies from non-dynamic platforms. - Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample - Changed internal serialization to use Json.net - BREAKING CHANGE: Canvas Session is no longer support. Use Signed Request instead. Canvas Session has been deprecated by Facebook. - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple issues relating to updating packages to newer versions. NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on codeplex Here is what is new in this release. WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic Support for JsonValue change notification events for databinding and other purposes Support for going between JsonValue and CLR types WCF HTTP - create HTTP / REST based web services. This is a minor release which contains fixe...LiveChat Starter Kit: LCSK v1.0: This is a working version of the LCSK for Visual Studio 2010, ASP.NET MVC 3 (using Razor View Engine). this is still provider based (with 1 provider Sql) and this is still using WebService and Windows Forms operator console. The solution is cleaner, with an installer to create tables etc. You can also install it via nuget (Install-Package lcsk) Let me know your feedbackOrchard Project: Orchard 0.9: Orchard Release Notes Build: 0.9.253 Published: 12/16/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...DotSpatial: DotSpatial 12-15-2010: This release contains a few minor bug fixes and hopefully the GDAL libraries for the 3.5 x86 build actually built to the correct directory this time.DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap a full list of changes in this release.MSBuild Extension Pack: December 2010: Release Blog Post The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...Silverlight Contrib: Silverlight Contrib 2010.1.0: 2010.1.0 New FeaturesCompatibility Release for Silverlight 4 and Visual Studio 2010New Projectsaqq2: aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2aqq2Cameleon CMS: Caméléon, le CMS open-source le plus simple d'utilisation sur le marché.Dail Emulator: Dail Emulator is a software that emulates the server software of Aeonsofts game FlyFF. Specifically made for the now defunct version 6 of the game.Dmitry Lyalin's WP7 Sample Application: A project where im putting all my public samples in a single WP7 client test applicationElixir: DotNetNuke module test framework. Use this DLL to write concise, to-the-point tests for your modules that can target multiple DNN platforms and multiple browsers.FlightTableEditor: This edits the Flight and Healing Tables of Pokémon Fire Red and Pokémon Leaf Green.Genetico UNLaM: Algoritmo Genetico UNLaMHelpCentral HelpDesk Ticket Application: An open-ended framework and client for a helpdesk ticket application and bug tracker. Built to be as extensible and available as possible, the central system is a WCF webservice which can be accessed by any client application.Import Media for Umbraco: This package allows you to import multiple or large files that have been uploaded to the server via FTP or other means. A new media item is created in the media tree and the media is moved from the upload location to the new directory for the media item.LargeCMS full sample (How to call CryptMsg API in streaming mode): LargeCms implements SignedLargeCMS and EnvelopedLargeCMS classes, my own version of .NET SignedCms and EnvelopedCMS to work with large data. It shows a way to call CryptMsg API in streaming mode to overcome the limitations of current .NET classesLinq.Autocompiler: Adds runtime, on demand compilation of Linq queries (in Linq to SQL), which creates CompiledQuery instances and stores them in a cache. Mathetris: it's a game.MonoWiimote: Managed c# library for using Wii Remote device under Mono/Linux enviroment. It requires libcwiimote-0.4 (sudo apt-get install libcwiimote-0.4).Movie Captor: Concorrendo no Desafio .NETNSpotifyLib: A .NET library contains wrapper classes for the Spotify Metadata API. http://developer.spotify.com/en/metadata-api/overview/ This library supports searching for artists, albums and tracks by using the Spotify Metadata API. In the future I'll add support for Lookups as well.Silverhawk: It's developed in C# programming language.SimpleSpy: Detecting object handle on cursor move and get its information. Basically this is a bare bone program for detecting object handle at cursor and add your own logic to detect for further information. SunburnJitter: JitterSunburn contains helper classes and methods to use the jitter physics engine easier in your sunburn projects. Simple component based shapes will provide easy editing in editor. 
It's developed in C#.Sunrod: Sunrod is a .NET wrapper for the Obsidian Portal API.TouchStack - A MonoTouch ServiceStack.NET client: TouchStack makes it easier for MonoTouch developers to consume Web services created and exposed by the brilliant ServiceStack.NET webservice framework. TouchStack is written in C# and uses the MonoTouch library.VNTViewer: Viewer for notes/memos (VNT files)VpNet: Official .NET Wrapper for Virtual Paradise.xlGUI: Streamlet GUI Framework

    Read the article

  • Metro: Dynamically Switching Templates with a WinJS ListView

    - by Stephen.Walther
    Imagine that you want to display a list of products using the WinJS ListView control. Imagine, furthermore, that you want to use different templates to display different products. In particular, when a product is on sale, you want to display the product using a special “On Sale” template. In this blog entry, I explain how you can switch templates dynamically when displaying items with a ListView control. In other words, you learn how to use more than one template when displaying items with a ListView control. Creating the Data Source Let’s start by creating the data source for the ListView. Nothing special here – our data source is a list of products. Two of the products, Oranges and Apples, are on sale. (function () { "use strict"; var products = new WinJS.Binding.List([ { name: "Milk", price: 2.44 }, { name: "Oranges", price: 1.99, onSale: true }, { name: "Wine", price: 8.55 }, { name: "Apples", price: 2.44, onSale: true }, { name: "Steak", price: 1.99 }, { name: "Eggs", price: 2.44 }, { name: "Mushrooms", price: 1.99 }, { name: "Yogurt", price: 2.44 }, { name: "Soup", price: 1.99 }, { name: "Cereal", price: 2.44 }, { name: "Pepsi", price: 1.99 } ]); WinJS.Namespace.define("ListViewDemos", { products: products }); })(); The file above is saved with the name products.js and referenced by the default.html page described below. Declaring the Templates and ListView Control Next, we need to declare the ListView control and the two Template controls which we will use to display template items. The markup below appears in the default.html file: <!-- Templates --> <div id="productItemTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> <div id="productOnSaleTemplate" data-win-control="WinJS.Binding.Template"> <div class="product onSale"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> (On Sale!) </div> </div> <!-- ListView --> <div id="productsListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: ListViewDemos.products.dataSource, layout: { type: WinJS.UI.ListLayout } }"> </div> In the markup above, two Template controls are declared. The first template is used when rendering a normal product and the second template is used when rendering a product which is on sale. The second template, unlike the first template, includes the text “(On Sale!)”. The ListView control is bound to the data source which we created in the previous section. The ListView itemDataSource property is set to the value ListViewDemos.products.dataSource. Notice that we do not set the ListView itemTemplate property. We set this property in the default.js file. Switching Between Templates All of the magic happens in the default.js file. The default.js file contains the JavaScript code used to switch templates dynamically. 
    Declaring the Templates and ListView Control

    Next, we need to declare the ListView control and the two Template controls which we will use to display template items. The markup below appears in the default.html file:

        <!-- Templates -->
        <div id="productItemTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
            </div>
        </div>

        <div id="productOnSaleTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product onSale">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
                (On Sale!)
            </div>
        </div>

        <!-- ListView -->
        <div id="productsListView"
            data-win-control="WinJS.UI.ListView"
            data-win-options="{
                itemDataSource: ListViewDemos.products.dataSource,
                layout: { type: WinJS.UI.ListLayout }
            }">
        </div>

    In the markup above, two Template controls are declared. The first template is used when rendering a normal product and the second template is used when rendering a product which is on sale. The second template, unlike the first, includes the text “(On Sale!)”.

    The ListView control is bound to the data source which we created in the previous section: its itemDataSource property is set to the value ListViewDemos.products.dataSource. Notice that we do not set the ListView itemTemplate property here; we set that property in the default.js file.

    Switching Between Templates

    All of the magic happens in the default.js file, which contains the JavaScript code used to switch templates dynamically. Here’s the entire contents of the default.js file:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    WinJS.UI.processAll().then(function () {
                        var productsListView = document.getElementById("productsListView");
                        productsListView.winControl.itemTemplate = itemTemplateFunction;
                    });
                }
            };

            function itemTemplateFunction(itemPromise) {
                return itemPromise.then(function (item) {
                    // Select either the normal product template or the on-sale template
                    var itemTemplate = document.getElementById("productItemTemplate");
                    if (item.data.onSale) {
                        itemTemplate = document.getElementById("productOnSaleTemplate");
                    }

                    // Render the selected template into a DIV container
                    var container = document.createElement("div");
                    itemTemplate.winControl.render(item.data, container);
                    return container;
                });
            }

            app.start();

        })();

    In the code above, a function is assigned to the ListView itemTemplate property with the following line of code:

        productsListView.winControl.itemTemplate = itemTemplateFunction;

    The itemTemplateFunction returns a DOM element which is used for the template item. Depending on the value of the product onSale property, the DOM element is generated from either the productItemTemplate or the productOnSaleTemplate template.
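    If you end up with more than two templates, the selection logic can be pulled out into a small factory so that adding a template means adding a rule rather than editing the function. The sketch below is my own generalisation of the article's itemTemplateFunction, under the assumption that each templateId names a WinJS.Binding.Template declared in default.html:

        // A sketch, not from the original article: build an itemTemplate function
        // from a list of { match, templateId } rules. The first rule whose match
        // predicate returns true wins; otherwise the default template is used.
        function makeItemTemplateSelector(rules, defaultTemplateId) {
            return function (itemPromise) {
                return itemPromise.then(function (item) {
                    var templateId = defaultTemplateId;
                    rules.some(function (rule) {
                        if (rule.match(item.data)) {
                            templateId = rule.templateId;
                            return true; // stop scanning at the first match
                        }
                        return false;
                    });
                    // Render the chosen template into a container DIV (assumes
                    // WinJS.UI.processAll has already instantiated the templates)
                    var container = document.createElement("div");
                    document.getElementById(templateId).winControl.render(item.data, container);
                    return container;
                });
            };
        }

        // Hypothetical usage, mirroring the two templates above:
        // productsListView.winControl.itemTemplate = makeItemTemplateSelector(
        //     [{ match: function (p) { return !!p.onSale; }, templateId: "productOnSaleTemplate" }],
        //     "productItemTemplate");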
    Using Binding Converters instead of Multiple Templates

    In the previous sections, I explained how you can use different templates to render normal products and on-sale products. There is an alternative approach: instead of creating two templates, you can create a single template which contains separate DIV elements for a normal product and an on-sale product. The following default.html file contains a single item template and a ListView control bound to the template:

        <!-- Template -->
        <div id="productItemTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product"
                data-win-bind="style.display: onSale ListViewDemos.displayNormalProduct">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
            </div>
            <div class="product onSale"
                data-win-bind="style.display: onSale ListViewDemos.displayOnSaleProduct">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
                (On Sale!)
            </div>
        </div>

        <!-- ListView -->
        <div id="productsListView"
            data-win-control="WinJS.UI.ListView"
            data-win-options="{
                itemDataSource: ListViewDemos.products.dataSource,
                itemTemplate: select('#productItemTemplate'),
                layout: { type: WinJS.UI.ListLayout }
            }">
        </div>

    The first DIV element is used to render a normal product and the second DIV element is used to render an “on sale” product. Notice that both DIV elements include a data-win-bind attribute. These attributes show the “normal” markup when a product is not on sale and the “on sale” markup when a product is on sale, by setting the Cascading Style Sheets display property to either “none” or “block”. The data-win-bind attributes take advantage of binding converters, which are defined in the default.js file:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    WinJS.UI.processAll();
                }
            };

            WinJS.Namespace.define("ListViewDemos", {
                displayNormalProduct: WinJS.Binding.converter(function (onSale) {
                    return onSale ? "none" : "block";
                }),
                displayOnSaleProduct: WinJS.Binding.converter(function (onSale) {
                    return onSale ? "block" : "none";
                })
            });

            app.start();

        })();

    The ListViewDemos.displayNormalProduct binding converter converts the value true or false to the value “none” or “block”. The ListViewDemos.displayOnSaleProduct binding converter does the opposite; it converts the value true or false to the value “block” or “none”. (Sadly, you cannot simply place a NOT operator before the onSale property in the binding expression; you need to create both converters.) The end result is that you can display different markup depending on the value of the product onSale property: either the contents of the first or the second DIV element are displayed.
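    Since the two converters are mirror images of each other, you could also generate them from a single factory. The sketch below is my own variation, not code from the article; booleanToDisplay is a hypothetical helper name:

        // A sketch: one factory producing both display converters.
        (function () {
            "use strict";

            // Returns a WinJS binding initializer that maps a boolean
            // to a CSS display value.
            function booleanToDisplay(displayWhenTrue, displayWhenFalse) {
                return WinJS.Binding.converter(function (value) {
                    return value ? displayWhenTrue : displayWhenFalse;
                });
            }

            WinJS.Namespace.define("ListViewDemos", {
                displayNormalProduct: booleanToDisplay("none", "block"),
                displayOnSaleProduct: booleanToDisplay("block", "none")
            });

        })();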
    Summary

    In this blog entry, I explored two approaches to displaying different markup in a ListView depending on the value of a data item property. The bulk of this entry was devoted to explaining how you can assign a function to the ListView itemTemplate property which returns different templates: we created both a productItemTemplate and a productOnSaleTemplate and displayed both templates with the same ListView control. We also discussed how you can create a single template and display different markup by using binding converters which set a DIV element’s display property to either “none” or “block”: one converter displays normal products and the other displays “on sale” products.

    Read the article

  • Advanced reporting in Oracle Service Bus

    - by [email protected]
    Reporting in OSB is useful: it allows you to audit messages going through OSB, and the Service Bus console allows you to view the content that you reported. To report data you simply use the Report action in your proxy. The action itself is rather straightforward: you specify the content to report ($body for example), an optional key for easier searching (for example the id of the record), and that's it.

    Sometimes, though, what you want to do is a bit more complicated. I recently had a case where the key was built from the message type (XML) and the id of the message. That seems simple enough, but the id could be any element anywhere in the message, depending on its type. This could be handled by 'if' statements, but adding new cases would mean changing the proxy service, and if you have lots of message types this gets tedious. I wanted the solution to be as dynamic as possible (read: "just change a configuration file and that's it"). The following entry details how you can make this dynamic in your proxy by using XQuery/XSLT.

    First step: the XQuery

    We're going to use an XQuery to map each XML message type to the location of the identifier within it. We assume here that the message type is the first node of the input XML and use a rather simple XPath to find the identifier. The XQuery looks like this for two messages:

        <reportmapping>
            <row>
                <logical>messageType1</logical>
                <type>MT1</type>
                <reportingreferencelocation>//customID</reportingreferencelocation>
            </row>
            <row>
                <logical>messageType2</logical>
                <type>MT2</type>
                <reportingreferencelocation>//theOtherIDLocation</reportingreferencelocation>
            </row>
        </reportmapping>

    Second step: the XSLT

    To get the identifier value at the dynamic path, we're going to use an XSLT transformation. This XSLT takes an XML parameter as input which contains our XPath (coming from the previous XQuery). The XSLT looks like this:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xalan="http://xml.apache.org/xalan">
            <xsl:param name="PathToNode"/>
            <xsl:template match="/">
                <IDVALUE>
                    <xsl:value-of select="xalan:evaluate($PathToNode/reportingreferencelocation)"/>
                </IDVALUE>
            </xsl:template>
        </xsl:stylesheet>

    (Note the use of a Xalan extension function here; Xalan is the XSLT processor used in WebLogic Server.)
    Last step: the proxy service

    We now wire everything together in the proxy service:

    - First we assign the XQuery to a variable.
    - We then get the entry in the mapping corresponding to the message we're processing.
    - We extract the id of the message using the XSLT transformation.
    - A final assign builds the variable that will be used as the reporting key (for example, combining the message type with the extracted id).
    - The Report action is then called with this variable.

    Everything is set up; we're now ready to test.

    Testing the solution

    Using the test console, we send our first XML...

        <messageType1>
            <sender>test console 1</sender>
            <customID>ID12345</customID>
            <content>
                <field1>value of field 1</field1>
            </content>
        </messageType1>

    ... and a second one of another supported type:

        <messageType2>
            <header>
                <theOtherIDLocation>ID67890</theOtherIDLocation>
            </header>
            <body>
                <data>Test data</data>
            </body>
        </messageType2>

    The reporting result (shown in the original post as a screenshot of the Service Bus console) lists both messages with the expected keys.

    Conclusion

    The report is done as expected, and if a new message type must be supported we only have to modify the XQuery; nothing changes at the proxy service level.

    A sample project is attached to this entry: sbconfig-dynamicReport.jar

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >