Search Results

Search found 2036 results on 82 pages for 'jim smith'.


  • Truncate text to fit table cell without wrapping using css or jquery

    - by Tauren
    I want the text in one of the columns of a table to not wrap, but to just truncate so that it fits within the current size of the table cell. I don't want the table cell to change size, as I need the table to be exactly 100% the width of the container. This is because the table with 100% width is inside of a positioned div with overflow: auto (it's actually inside of a jquery UI.Layout panel). I tried both overflow: hidden and the text still wrapped. I tried white-space: nowrap, but it stretched the table wider than 100% and added a horizontal scroll bar. div.container { position: absolute; overflow: auto; /* user can slide resize bars to change the width & height */ width: 600px; height: 300px; } table { width: 100% } td.nowrap { overflow: hidden; white-space: nowrap; } <div class="container"> <table> <tr> <td>From</td> <td>Subject</td> <td>Date</td> </tr> <tr> <td>Bob Smith</td> <td class="nowrap"> <strong>Message subject</strong> <span>This is a preview of the message body and could be long.</span> </td> <td>2010-03-30 02:18AM</td> </tr> </table> </div> Is there a way using css to solve this? If I had a fixed table cell size, then overflow:hidden would truncate anything that flows over, but I can't used a fixed size as I want the table to stretch with the UI.Layout panel size. If not, then how would I solve this with jquery? My use case is similar to the gmail interface, where an email subject is bolded and the beginning of the message body is shown, but then truncated to fit.
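
    A purely CSS route that usually works here is to switch the table to the fixed layout algorithm so the cell can no longer stretch, then clip the overflow; a minimal sketch against the markup above:

        table {
            width: 100%;
            table-layout: fixed;     /* column widths stop growing to fit content */
        }
        td.nowrap {
            overflow: hidden;        /* clip whatever doesn't fit the cell */
            white-space: nowrap;
            text-overflow: ellipsis; /* optional: show "..." at the cut-off point */
        }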

    Read the article

  • Linq to List and IEnumerable issues

    - by Otaku
    I am querying an HTML file with Linq. It looks something like this: <html> <body> <div class="Players"> <div class="role">Goalies</div> <div class="name">John Smith</div> <div class="name">Shawn Xie</div> <div class="role">Right Wings</div> <div class="name">Jack Davis</div> <div class="name">Carl Yuns</div> <div class="name">Wayne Gortonia</div> <div class="role">Centers</div> <div class="name">Lutz Gaspy</div> <div class="name">John Jacobs</div> </div </html> </body> What I'm trying to do is create a list of these folks like in a list of a structure called Players: Structure Players Public Name As String Public Position As String End Structure But I've quickly found out I don't really know what I'm doing when it comes to Linq. I've got this far my my queries: Dim goalieList = From d In player.Elements _ Where d.Value = "Goalies" _ Select From g In d.ElementsAfterSelf _ Take While (g.@class <> "role") _ Select New Players With {.Position = "Goalie", _ .Name = g.Value} Dim centersList = From d In player.Elements _ Where d.Value = "Centers" _ Select From g In d.ElementsAfterSelf _ Take While (g.@class <> "role") _ Select New Players With {.Position = "Centers", _ .Name = g.Value} Which gets me down to the the players by position, but then I can't do much with this afterwards the result type is System.Collections.Generic.IEnumerable(Of System.Collections.Generic.IEnumerable(Of Player)) What I want to do is add these two results to a new list, like: Dim playersList As List(Of Players) = Nothing playersList.AddRange(centersList) playersList.AddRange(goalieList) So that I can then query the list and use it. But it kicks the error: Unable to cast object of type 'WhereSelectEnumerableIterator2[System.Xml.Linq.XElement,System.Collections.Generic.IEnumerable1[Players]]' to type 'System.Collections.Generic.IEnumerable`1[Players]' As you can see, I may really have no idea how to work with all these objects/classes. Does anyone have any insight on what I may be doing wrong and how I can resolve it? RESOLVED: The Linq query needs to return a single iEnumerable, like this: Dim goalieList = From l In _ (From d In players.Elements _ Where d.Value = "Goalies" _ Select d.ElementsAfterSelf.TakeWhile(Function(f) f.@class <> "role")) _ Select New Players With {.Position = "Goalie", .Name = l.Value} and then use goalieList.ToList
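
    For the final merge step, a minimal sketch (assuming the corrected queries above, which each return IEnumerable(Of Players)) is:

        ' the list must be instantiated, not left as Nothing, before AddRange is called
        Dim playersList As New List(Of Players)()
        playersList.AddRange(goalieList.ToList())
        playersList.AddRange(centersList.ToList())

        ' the combined list can then be queried normally (requires Imports System.Linq)
        Dim goalieCount = playersList.Where(Function(p) p.Position = "Goalie").Count()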

    Read the article

  • Using XMLDecoder to cast Encoded XML to List<>

    - by Ender
    I am writing an application that reads in a large number of basic user details in the following format; once read in it then allows the user to search for a user's details using their email: NAME ROLE EMAIL --------------------------------------------------- Joe Bloggs Manager [email protected] John Smith Consultant [email protected] Alan Wright Tester [email protected] ... The problem I am suffering is that I need to store a large number of details of all people that have worked at the company. The file containing these details will be written on a yearly basis simply for reporting purposes, but the program will need to be able to access these details quickly. The way I aim to access these files is to have a program that asks the user for the name of the unique email of the member of staff and for the program to then return the name and the role from that line of the file. I've played around with text files, but am struggling with how I would handle multiple columns of data when it comes to searching this large file. What is the best format to store such data in? A text file? XML? The size doesn't bother me, but I'd like to be able to search it as quickly as possible. The file will need to contain a lot of entries, probably over the 10K mark over time. EDIT: I've decided to go with the XML serialisation method. I've managed to get the code for Encoding working perfectly, but the Decoding code below does not work. XMLDecoder d = new XMLDecoder( new BufferedInputStream(new FileInputStream("data.xml"))); List<Employee> list = (List<Employee>) d.readObject(); d.close(); for(Employee x : list) { if(x.getEmail().equals(userInput)) { // do stuff } } When the program hits List<Employee> list = (List<Employee>) d.readObject(); an exception is thrown claiming that "Employee cannot be cast to java.util.List". I've added a bounty to this and anyone that can help me solve this problem once and for all will get lots of lovely points. EDIT 2: I've looked a bit more into the problem and have come across Serialization as a potential answer. If anyone can look into this for me as I've no experience with Serialization or Deserialization I'd be very grateful. It can provide an Object with no problems whatsoever, but I really need to return it in the same format as it went in (List). EDIT 3: Ugh, this problem is really starting to drive me crazy and to be honest I'm starting to think that it's an unsolvable problem. If possible, could someone take a look at the code and help provide a solution for me?
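
    One common cause of that ClassCastException is that each Employee was written with its own writeObject call, so the first object read back is a single Employee rather than a List. A sketch of the symmetric pattern (assuming Employee is a JavaBean with a public no-arg constructor and getters/setters) writes the whole List as one object and reads it back the same way:

        import java.beans.XMLDecoder;
        import java.beans.XMLEncoder;
        import java.io.*;
        import java.util.ArrayList;
        import java.util.List;

        public class EmployeeStore {

            // write the entire list as a single object so it can be cast back to List
            public static void save(List<Employee> employees) throws FileNotFoundException {
                XMLEncoder e = new XMLEncoder(
                        new BufferedOutputStream(new FileOutputStream("data.xml")));
                e.writeObject(new ArrayList<Employee>(employees));
                e.close();
            }

            @SuppressWarnings("unchecked")
            public static List<Employee> load() throws FileNotFoundException {
                XMLDecoder d = new XMLDecoder(
                        new BufferedInputStream(new FileInputStream("data.xml")));
                List<Employee> list = (List<Employee>) d.readObject();
                d.close();
                return list;
            }
        }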

    Read the article

  • C# and NpgsqlDataAdapter returning a single string instead of a data table

    - by tme321
    I have a postgresql db and a C# application to access it. I'm having a strange error with values I return from a NpgsqlDataAdapter.Fill command into a DataSet. I've got this code: NpgsqlCommand n = new NpgsqlCommand(); n.Connection = connector; // a class member NpgsqlConnection DataSet ds = new DataSet(); DataTable dt = new DataTable(); // DBTablesRef are just constants declared for // the db table names and columns ArrayList cols = new ArrayList(); cols.Add(DBTablesRef.all); //all is just * ArrayList idCol = new ArrayList(); idCol.Add(DBTablesRef.revIssID); ArrayList idVal = new ArrayList(); idVal.Add(idNum); // a function parameter // Select builder and Where builder are just small // functions that return an sql statement based // on the parameters. n is passed to the where // builder because the builder uses named // parameters and sets them in the NpgsqlCommand // passed in String select = SelectBuilder(DBTablesRef.revTableName, cols) + WhereBuilder(n,idCol, idVal); n.CommandText = select; try { NpgsqlDataAdapter da = new NpgsqlDataAdapter(n); ds.Reset(); // filling DataSet with result from NpgsqlDataAdapter da.Fill(ds); // C# DataSet takes multiple tables, but only the first is used here dt = ds.Tables[0]; } catch (Exception e) { Console.WriteLine(e.ToString()); } So my problem is this: the above code works perfectly, just like I want it to. However, if instead of doing a select on all (*) if I try to name individual columns to return from the query I get the information I asked for, but rather than being split up into seperate entries in the data table I get a string in the first index of the data table that looked something like: "(0,5,false,Bob Smith,7)" And the data is correct, I would be expecting 0, then 5, then a boolean, then some text etc. But I would (obviously) prefer it to not be returned as just one big string. Anyone know why if I do a select on * I get a datatable as expected, but if I do a select on specific columns I get a data table with one entry that is the string of the values I'm asking for?
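
    The output "(0,5,false,Bob Smith,7)" is exactly how PostgreSQL renders a row (composite) value, which suggests the generated SQL wraps the named column list in parentheses. A sketch of the difference (table and column names are illustrative):

        -- one composite column per row; Npgsql hands it back as a single string
        SELECT (seq_, status, approved, reviewer, revision) FROM revisions WHERE seq_ = 42;

        -- separate columns, which NpgsqlDataAdapter.Fill maps to separate DataTable columns
        SELECT seq_, status, approved, reviewer, revision FROM revisions WHERE seq_ = 42;

    If that is the cause, the place to look is how SelectBuilder joins the cols list when it is not the single "*" entry.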

    Read the article

  • jquery data selector

    - by Tauren
    I need to select elements based on values stored in an element's .data() object. At a minimum, I'd like to select top-level data properties using selectors, perhaps like this: $('a').data("category","music"); $('a:data(category=music)'); Or perhaps the selector would be in regular attribute selector format: $('a[category=music]'); Or in attribute format, but with a specifier to indicate it is in .data(): $('a[:category=music]'); I've found James Padolsey's implementation to look simple, yet good. The selector formats above mirror methods shown on that page. There is also this Sizzle patch. For some reason, I recall reading a while back that jQuery 1.4 would include support for selectors on values in the jquery .data() object. However, now that I'm looking for it, I can't find it. Maybe it was just a feature request that I saw. Is there support for this and I'm just not seeing it? Ideally, I'd like to support sub-properties in data() using dot notation. Like this: $('a').data("user",{name: {first:"Tom",last:"Smith"},username: "tomsmith"}); $('a[:user.name.first=Tom]'); I also would like to support multiple data selectors, where only elements with ALL specified data selectors are found. The regular jquery multiple selector does an OR operation. For instance, $('a.big, a.small') selects a tags with either class big or small). I'm looking for an AND, perhaps like this: $('a').data("artist",{id: 3281, name: "Madonna"}); $('a').data("category","music"); $('a[:category=music && :artist.name=Madonna]'); Lastly, it would be great if comparison operators and regex features were available on data selectors. So $(a[:artist.id>5000]) would be possible. I realize I could probably do much of this using filter(), but it would be nice to have a simple selector format. What solutions are available to do this? Is Jame's Padolsey's the best solution at this time? My concern is primarily in regards to performance, but also in the extra features like sub-property dot-notation and multiple data selectors. Are there other implementations that support these things or are better in some way?
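
    Until a data-selector plugin covers all of those cases, the same AND, sub-property, and comparison logic can be expressed with filter(); a minimal sketch against the data() values set above:

        // elements whose .data() satisfies ALL of the conditions
        var $matches = $('a').filter(function () {
            var $el = $(this);
            var artist = $el.data('artist');
            return $el.data('category') === 'music' &&
                   artist && artist.name === 'Madonna' &&
                   artist.id > 3000;   // comparisons are just ordinary JavaScript here
        });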

    Read the article

  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and general best practice. In short, the body of my webpage will show data from my database in a table like format, with each table row showing similar data. For example: Name Age Position Date Joined Jon Smith 23 Striker 18th Mar 2005 John Doe 38 Defender 3rd Jan 1988 In terms of functionality, primarily I’d like to give the user the ability to edit the data and, after the edit, commit the edit to the database and refresh the view.The reason I want to refresh the view is because the data is date ordered and I will need to re-sort if the user edits a date field. My main question is what architecture / tools would be best suited to this fulfil my requirements at a high level? From the research I have done so far my initial conclusions were: ADO.NET for data retrieval. This is something I have used before and feel comfortable with. I like the look of LINQ to SQL but don’t want to make the learning curve any steeper for my first outing into MVC land just yet. Partial Views to create a template and then iterate through a datatable that I have pulled back from my database model. jQuery to allow the user to edit data in the table, error check edited data entries etc. Also, my intial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date sorted). Any thoughts on this? Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thoughts are that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, ability to sort by column using sort glyph in column headers etc .) and I don’t really see any benefit to this over and above the solution I have outlined above. Again, is there reason to reconsider this view? Finally, when a user edits a date , I will need to refresh the view. In order to do this I had been reading about Html.RenderAction and this seemed like it may be a better option than using Partial Views as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction or have I misunderstood its usage? Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. Partial View vs. Html.RenderAction, when to use jQury datagrid plug-in) but it feels like these issues are so intertwined that they need to be dealt with in contect of each other. Thanks
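
    For the refresh-after-edit piece, one sketch (the action, view, and repository names here are hypothetical) is a controller action that commits the change and returns the re-sorted rows as a partial view, which the jQuery handler then swaps into the table body:

        // hypothetical names throughout; illustrates "commit, re-sort, re-render" in one round trip
        public class PlayersController : Controller
        {
            private readonly IPlayerRepository repository;   // assumed data-access abstraction

            public PlayersController(IPlayerRepository repository)
            {
                this.repository = repository;
            }

            [HttpPost]
            public ActionResult UpdateDateJoined(int id, DateTime dateJoined)
            {
                repository.UpdateDateJoined(id, dateJoined);               // commit immediately
                var players = repository.GetAll().OrderBy(p => p.DateJoined);
                return PartialView("PlayerRows", players);                 // re-sorted rows only
            }
        }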

    Read the article

  • soap client not working in php

    - by Jin Yong
    I tried to write a code in php to call a web server to add a client details, however, it's seem not working for me and display the following error: Fatal error: Uncaught SoapFault exception: [HTTP] Could not connect to host in D:\www\web_server.php:15 Stack trace: #0 [internal function]: SoapClient-_doRequest('_call('AddClient', Array) #2 D:\www\web_server.php(15): SoapClient-AddClientArray) #3 {main} thrown in D:\www\web_server.php on line 15 Refer below for the code that I wrote in php: <s:element name="AddClient"> - <s:complexType> - <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="username" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="password" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="clientRequest" type="tns:ClientRequest"/> </s:sequence> </s:complexType> </s:element> - <s:complexType name="ClientRequest"> - <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="customerCode" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="customerFullName" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="ref" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="phoneNumber" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="Date" type="s:string"/> </s:sequence> </s:complexType> <s:element name="AddClientResponse"> - <s:complexType> - <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="AddClientResult" type="tns:clientResponse"/> <s:element minOccurs="0" maxOccurs="1" name="response" type="tns:ServiceResponse"/> </s:sequence> </s:complexType> </s:element> - <s:complexType name="ClientResponse"> - <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="testNumber" type="s:string"/> </s:sequence> </s:complexType> <?php $client = new SoapClient($url); $result = $client->AddClient(array('username' => 'test','password'=>'testing','clientRequest'=>array('customerCode'=>'18743','customerFullName'=>'Gaby Smith','ref'=>'','phoneNumber'=>'0413496525','Date'=>'12/04/2013'))); echo $result->AddClientResponse; ?> Does anyone where I gone wrong for this code?
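
    The "Could not connect to host" fault is raised before any SOAP body is built, so the first thing to check is the endpoint held in $url (it is not shown above). A minimal sketch with a placeholder WSDL address that also turns on tracing so the raw request/response can be inspected:

        <?php
        // placeholder WSDL address; it must be reachable from the web server itself
        $url = 'http://example.com/ClientService.asmx?WSDL';

        $client = new SoapClient($url, array(
            'trace'              => 1,      // keep the last request/response for debugging
            'exceptions'         => true,
            'connection_timeout' => 15,
        ));

        try {
            $result = $client->AddClient(array(/* same parameters as above */));
            var_dump($result);
        } catch (SoapFault $f) {
            echo $f->getMessage(), "\n";
            echo $client->__getLastRequest(), "\n";   // null if the connection itself failed
        }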

    Read the article

  • WPF MVVM: Convention over Configuration for ResourceDictionary ?

    - by Jeffrey Knight
    Update In the wiki spirit of StackOverflow, here's an update: I spiked Joe White's IValueConverter suggestion below. It works like a charm. I've written a "quickstart" example of this that automates the mapping of ViewModels-Views using some cheap string replacement. If no View is found to represent the ViewModel, it defaults to an "Under Construction" page. I'm dubbing this approach "WPF MVVM White" since it was Joe White's idea. Here are a couple screenshots. The first image is a case of "[SomeControlName]ViewModel" has a corresponding "[SomeControlName]View", based on pure naming convention. The second is a case where the ModelView doesn't have any views to represent it. No more ResourceDictionaries with long ViewModel to View mappings. It's pure naming convention now. I'm hosting a download of the project here: http://rootsilver.com/files/Mvvm.White.Quickstart.zip I'll follow up with a longer blog post walk through. Original Post I read Josh Smith's fantastic MSDN article on WPF MVVM over the weekend. It's destined to be a cult classic. It took me a while to wrap my head around the magic of asking WPF to render the ViewModel. It's like saying "Here's a class, WPF. Go figure out which UI to use to present it." For those who missed this magic, WPF can do this by looking up the View for ModelView in the ResourceDictionary mapping and pulling out the corresponding View. (Scroll down to Figure 10 Supplying a View ). The first thing that jumps out at me immediately is that there's already a strong naming convention of: classNameView ("View" suffix) classNameViewModel ("ViewModel" suffix) My question is: Since the ResourceDictionary can be manipulated programatically, I"m wondering if anyone has managed to Regex.Replace the whole thing away, so the lookup is automatic, and any new View/ViewModels get resolved by virtue of their naming convention? [Edit] What I'm imagining is a hook/interception into ResourceDictionary. ... Also considering a method at startup that uses interop to pull out *View$ and *ViewModel$ class names to build the DataTemplate dictionary in code: //build list foreach .... String.Format("<DataTemplate DataType=\"{x:Type vm:{0} }\"><v:{1} /></DataTemplate>", ...)
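
    The converter at the heart of that approach can be sketched roughly as follows (UnderConstructionView is a hypothetical fallback); it maps a ViewModel instance to a View instance purely by string replacement on the type name, and a ContentControl binding that uses this converter then displays whatever view comes back:

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class ViewModelToViewConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value == null)
                    return null;

                // FooViewModel -> FooView, looked up in the same assembly as the ViewModel
                var viewModelType = value.GetType();
                var viewTypeName = viewModelType.FullName.Replace("ViewModel", "View");
                var viewType = viewModelType.Assembly.GetType(viewTypeName);

                return viewType != null
                    ? Activator.CreateInstance(viewType)
                    : new UnderConstructionView();   // hypothetical "Under Construction" view
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }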

    Read the article

  • What are the Tags Around Default iPhone Address Book People Phone Number Labels?

    - by rnistuk
    My question concerns markup that surrounds some of the default phone number labels in the Person entries of the Contact list on the iPhone. I have created an iPhone contact list address book entry for a person, "John Smith" with the following phone number entries: Mobile (604) 123-4567 iPhone (778) 123-4567 Home (604) 789-4561 Work (604) 456-7891 Main (604) 789-1234 megaphone (234) 567-8990 Note that the first five labels are default labels provided by the Contacts application and the last label, "megaphone", is a custom label. I wrote the following method to retrieve and display the labels and phone numbers for each person in the address book: -(void)displayPhoneNumbersForAddressBook { ABAddressBookRef book = ABAddressBookCreate(); CFArrayRef people = ABAddressBookCopyArrayOfAllPeople(book); ABRecordRef record = CFArrayGetValueAtIndex(people, 0); ABMultiValueRef multi = ABRecordCopyValue(record, kABPersonPhoneProperty); NSLog(@"---------" ); NSLog(@"displayPhoneNumbersForAddressBook" ); CFStringRef label, phone; for (CFIndex i = 0; i < ABMultiValueGetCount(multi); ++i) { label = ABMultiValueCopyLabelAtIndex(multi, i); phone = ABMultiValueCopyValueAtIndex(multi, i); NSLog(@"label: \"%@\" number: \"%@\"", (NSString*)label, (NSString*)phone); CFRelease(label); CFRelease(phone); } NSLog(@"---------" ); CFRelease(multi); CFRelease(people); CFRelease(book); } and here is the output for the address book entry that I entered: 2010-03-08 13:24:28.789 test2m[2479:207] --------- 2010-03-08 13:24:28.789 test2m[2479:207] displayPhoneNumbersForAddressBook 2010-03-08 13:24:28.790 test2m[2479:207] label: "_$!<Mobile>!$_" number: "(604) 123-4567" 2010-03-08 13:24:28.790 test2m[2479:207] label: "iPhone" number: "(778) 123-4567" 2010-03-08 13:24:28.791 test2m[2479:207] label: "_$!<Home>!$_" number: "(604) 789-4561" 2010-03-08 13:24:28.791 test2m[2479:207] label: "_$!<Work>!$_" number: "(604) 456-7891" 2010-03-08 13:24:28.792 test2m[2479:207] label: "_$!<Main>!$_" number: "(604) 789-1234" 2010-03-08 13:24:28.792 test2m[2479:207] label: "megaphone" number: "(234) 567-8990" 2010-03-08 13:24:28.793 test2m[2479:207] --------- What are the markup characters _$!< and >!$_ surrounding most, save for iPhone, of the default labels for? Can you point me to where in the "Address Book Programming Guide for iPhone OS" I can find the information? Thank you for your help.
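
    Those markers are simply how the framework's predefined label constants (kABPersonPhoneMobileLabel, kABHomeLabel, kABWorkLabel and so on) are stored; custom labels like "megaphone" are kept as plain strings. To display them the way the Contacts app does, pass each label through ABAddressBookCopyLocalizedLabel inside the existing loop, roughly like this:

        CFStringRef localized = ABAddressBookCopyLocalizedLabel(label);   // "_$!<Mobile>!$_" -> "mobile"
        NSLog(@"label: \"%@\" number: \"%@\"", (NSString *)localized, (NSString *)phone);
        if (localized) CFRelease(localized);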

    Read the article

  • xslt help - my transform is not rendering correctly

    - by Hcabnettek
    Hi All, I'm trying to apply an xlst transformation using the following file. This is very basic, but I wanted to build off of this when I get it working correctly. <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl"> <xsl:template match="/"> <div> <h2>Station Inventory</h2> <hr/> <xsl:apply-templates/> </div> </xsl:template> Here is some xml I'm using for the source. <StationInventoryList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dummy-tmdd-address"> <StationInventory> <station-id>9940</station-id> <station-name>Zone 9940-SEB</station-name> <station-travel-direction>SEB</station-travel-direction> <detector-list> <detector> <detector-id>2910</detector-id> <detector-name>1999 West Smith Exit SEB</detector-name> </detector> <detector> <detector-id>9205</detector-id> <detector-name>CR-155 Exit SEB</detector-name> </detector> <detector> <detector-id>9710</detector-id> <detector-name>Pt of View SEB</detector-name> </detector> </detector-list> </StationInventory> </StationInventoryList> Any ideas what I'm doing wrong? The simple intent here is to make a list of station, then make a list of detectors at a station. This is a small piece of the XML. It would have multiple StationInventory elements. I'm using the data as the source for an asp:xml control and the xslt file as the transformsource. var service = new InternalService(); var result = service.StationInventory(); invXml.DocumentContent = result; invXml.TransformSource = "StationInventory.xslt"; invXml.DataBind(); Any tips are of course appreciated. Have a terrific weekend. Cheers, ~ck
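
    The most likely culprit is the default namespace on StationInventoryList: the match="/" template still runs, but any pattern that names StationInventory or detector elements matches nothing because those elements live in that namespace, so all you get is the raw text via the built-in rules. Binding the namespace to a prefix and using it in every match/select should fix it; a sketch:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:tmdd="http://www.dummy-tmdd-address"
            exclude-result-prefixes="tmdd">

          <xsl:template match="/">
            <div>
              <h2>Station Inventory</h2>
              <hr/>
              <xsl:apply-templates select="tmdd:StationInventoryList/tmdd:StationInventory"/>
            </div>
          </xsl:template>

          <xsl:template match="tmdd:StationInventory">
            <h3><xsl:value-of select="tmdd:station-name"/></h3>
            <ul>
              <xsl:for-each select="tmdd:detector-list/tmdd:detector">
                <li><xsl:value-of select="tmdd:detector-name"/></li>
              </xsl:for-each>
            </ul>
          </xsl:template>

        </xsl:stylesheet>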

    Read the article

  • XSLT: Display unique rows of filtered XML recordset

    - by Chris G.
    I've got a recordset that I'm filtering on a particular field (i.e. Manager ="Frannklin"). Now I'd like to group the results of that filtered recordset based on another field (Client). I can't seem to get Muenchian grouping to work right. Any thoughts? TIA! CG My filter looks like this: <xsl:key name="k1" match="Row" use="@Manager"/> <xsl:param name="dvt_filterval">Frannklin</xsl:param> <xsl:variable name="Rows" select="/dsQueryResponse/Rows/Row" /> <xsl:variable name="FilteredRowsAttr" select="$Rows[normalize-space(@*[name()=$FieldNameNoAtSign])=$dvt_filterval ]" /> Templates <xsl:apply-templates select="$FilteredRowsAttr[generate-id() = generate-id(key('k1',@Manager))]" mode="g1000a"> </xsl:apply-templates> <xsl:template match="Row" mode="g1000a"> Client: <xsl:value-of select="@Client"/> </xsl:template> Results I'm getting Client: Beta Client: Beta Client: Beta Client: Gamma Client: Delta Results I want Client: Beta Client: Gamma Client: Delta Sample recordset <dsQueryResponse> <Rows> <Row Manager="Smith" Client="Alpha " Project_x0020_Name="Annapolis" PM_x0023_="00123" /> <Row Manager="Ford" Client="Alpha " Project_x0020_Name="Brown" PM_x0023_="00124" /> <Row Manager="Cronkite" Client="Beta " Project_x0020_Name="Gannon" PM_x0023_="00129" /> <Row Manager="Clinton, Bill" Client="Beta " Project_x0020_Name="Harvard" PM_x0023_="00130" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Irving" PM_x0023_="00131" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Jakarta" PM_x0023_="00132" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Vassar" PM_x0023_="00135" /> <Row Manager="Jefferson" Client="Gamma " Project_x0020_Name="Stamford" PM_x0023_="00141" /> <Row Manager="Cronkite" Client="Gamma " Project_x0020_Name="Tufts" PM_x0023_="00142" /> <Row Manager="Frannklin" Client="Gamma " Project_x0020_Name="UCLA" PM_x0023_="00143" /> <Row Manager="Jefferson" Client="Gamma " Project_x0020_Name="Villanova" PM_x0023_="00144" /> <Row Manager="Carter" Client="Delta " Project_x0020_Name="Drexel" PM_x0023_="00150" /> <Row Manager="Clinton" Client="Delta " Project_x0020_Name="Iona" PM_x0023_="00151" /> <Row Manager="Frannklin" Client="Delta " Project_x0020_Name="Temple" PM_x0023_="00152" /> <Row Manager="Ford" Client="Epsilon " Project_x0020_Name="UNC" PM_x0023_="00157" /> <Row Manager="Clinton" Client="Epsilon " Project_x0020_Name="Berkley" PM_x0023_="00158" /> </Rows> </dsQueryResponse>
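
    Because the grouping is by Client within an already-filtered Manager, the key has to identify Manager+Client pairs rather than just Managers; a sketch of the adjusted key and apply-templates:

        <xsl:key name="rows-by-manager-client" match="Row" use="concat(@Manager, '|', @Client)"/>

        <xsl:apply-templates mode="g1000a"
            select="$FilteredRowsAttr[generate-id() =
                    generate-id(key('rows-by-manager-client', concat(@Manager, '|', @Client))[1])]"/>

        <xsl:template match="Row" mode="g1000a">
          Client: <xsl:value-of select="@Client"/>
        </xsl:template>

    Every row returned by the key shares the current row's Manager value, so the first row per Client is guaranteed to sit inside the filtered set and each Client prints exactly once.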

    Read the article

  • ASP NET MVC (loading data from database)

    - by rah.deex
    hi experts, its me again... i have some code like this.. using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace MvcGridSample.Models { public class CustomerService { private List<SVC> Customers { get { List<SVC> customers; if (HttpContext.Current.Session["Customers"] != null) { customers = (List<SVC>) HttpContext.Current.Session["Customers"]; } else { //Create customer data store and save in session customers = new List<SVC>(); InitCustomerData(customers); HttpContext.Current.Session["Customers"] = customers; } return customers; } } public SVC GetByID(int customerID) { return this.Customers.AsQueryable().First(customer => customer.seq_ == customerID); } public IQueryable<SVC> GetQueryable() { return this.Customers.AsQueryable(); } public void Add(SVC customer) { this.Customers.Add(customer); } public void Update(SVC customer) { } public void Delete(int customerID) { this.Customers.RemoveAll(customer => customer.seq_ == customerID); } private void InitCustomerData(List<SVC> customers) { customers.Add(new SVC { ID = 1, FirstName = "John", LastName = "Doe", Phone = "1111111111", Email = "[email protected]", OrdersPlaced = 5, DateOfLastOrder = DateTime.Parse("5/3/2007") }); customers.Add(new SVC { ID = 2, FirstName = "Jane", LastName = "Doe", Phone = "2222222222", Email = "[email protected]", OrdersPlaced = 3, DateOfLastOrder = DateTime.Parse("4/5/2008") }); customers.Add(new SVC { ID = 3, FirstName = "John", LastName = "Smith", Phone = "3333333333", Email = "[email protected]", OrdersPlaced = 25, DateOfLastOrder = DateTime.Parse("4/5/2000") }); customers.Add(new SVC { ID = 4, FirstName = "Eddie", LastName = "Murphy", Phone = "4444444444", Email = "[email protected]", OrdersPlaced = 1, DateOfLastOrder = DateTime.Parse("4/5/2003") }); customers.Add(new SVC { ID = 5, FirstName = "Ziggie", LastName = "Ziggler", Phone = null, Email = "[email protected]", OrdersPlaced = 0, DateOfLastOrder = null }); customers.Add(new SVC { ID = 6, FirstName = "Michael", LastName = "J", Phone = "666666666", Email = "[email protected]", OrdersPlaced = 5, DateOfLastOrder = DateTime.Parse("12/3/2007") }); } } } those codes is an example that i've got from the internet.. in that case, the data is created and saved in session before its shown.. the things that i want to ask is how if i want to load the data from table? i'am a newbie here.. please help :) thank b4 for advance..
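
    To pull the rows from a table instead of the hard-coded list, the smallest change is to swap the body of InitCustomerData for a plain ADO.NET read; a sketch with a hypothetical connection string name, table name, and column order (add the System.Data.SqlClient and System.Configuration namespaces):

        private void InitCustomerData(List<SVC> customers)
        {
            // hypothetical connection string name, table name, and column order
            var connectionString = ConfigurationManager
                .ConnectionStrings["Default"].ConnectionString;

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT ID, FirstName, LastName, Phone, Email, OrdersPlaced, DateOfLastOrder " +
                "FROM Customers", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        customers.Add(new SVC
                        {
                            ID = reader.GetInt32(0),
                            FirstName = reader.GetString(1),
                            LastName = reader.GetString(2),
                            Phone = reader.IsDBNull(3) ? null : reader.GetString(3),
                            Email = reader.GetString(4),
                            OrdersPlaced = reader.GetInt32(5),
                            DateOfLastOrder = reader.IsDBNull(6) ? (DateTime?)null : reader.GetDateTime(6)
                        });
                    }
                }
            }
        }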

    Read the article

  • help merging perl code routines together for file processing

    - by jdamae
    I need some perl help in putting these (2) processes/code to work together. I was able to get them working individually to test, but I need help bringing them together especially with using the loop constructs. I'm not sure if I should go with foreach..anyways the code is below. Also, any best practices would be great too as I'm learning this language. Thanks for your help. Here's the process flow I am looking for: -read a directory -look for a particular file -use the file name to strip out some key information to create a newly processed file -process the input file -create the newly processed file for each input file read (if i read in 10, I create 10 new files) Sample Recs: col1,col2,col3,col4,col5 [email protected],[email protected],8,2009-09-24 21:00:46,1 [email protected],[email protected],16,2007-08-18 22:53:12,33 [email protected],[email protected],16,2007-08-18 23:41:23,33 Here's my test code: Target Filetype: `/backups/test/foo101.name.aue-foo_p002.20110124.csv` Part 1: my $target_dir = "/backups/test/"; opendir my $dh, $target_dir or die "can't opendir $target_dir: $!"; while (defined(my $file = readdir($dh))) { next if ($file =~ /^\.+$/); #Get filename attributes if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) { print "$1\n"; print "$2\n"; print "$3\n"; } print "$file\n"; } Part 2: use strict; use Digest::MD5 qw(md5_hex); #Create new file open (NEWFILE, ">/backups/processed/foo$1.name.$2-foo_p$3.out") || die "cannot create file"; my $data = ''; my $line1 = <>; chomp $line1; my @heading = split /,/, $line1; my ($sep1, $sep2, $eorec) = ( "^A", "^E", "^D"); while (<>) { my $digest = md5_hex($data); chomp; my (@values) = split /,/; my $extra = "__mykey__$sep1$digest$sep2" ; $extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0..scalar(@values)); $data .= "$extra$eorec"; print NEWFILE "$data"; } #print $data; close (NEWFILE);
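
    A rough way to stitch the two parts together (paths and die messages are illustrative, and this sketch digests each input line rather than the accumulating $data buffer) is to move the Part 2 logic into a subroutine and call it from inside the readdir loop with the captured filename pieces:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Digest::MD5 qw(md5_hex);

        my $target_dir = "/backups/test/";
        opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";

        while (defined(my $file = readdir($dh))) {
            next if $file =~ /^\.+$/;
            # capture the filename attributes, then process that one file
            if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+\.csv$/) {
                process_file("$target_dir$file", $1, $2, $3);
            }
        }
        closedir $dh;

        sub process_file {
            my ($path, $num, $code, $part) = @_;
            open my $in,  '<', $path or die "cannot open $path: $!";
            open my $out, '>', "/backups/processed/foo$num.name.$code-foo_p$part.out"
                or die "cannot create output file: $!";

            my ($sep1, $sep2, $eorec) = ("^A", "^E", "^D");
            chomp(my $line1 = <$in>);
            my @heading = split /,/, $line1;

            while (my $line = <$in>) {
                chomp $line;
                my @values = split /,/, $line;
                my $digest = md5_hex($line);
                my $extra  = "__mykey__$sep1$digest$sep2";
                $extra    .= "$heading[$_]$sep1$values[$_]$sep2" for (0 .. $#values);
                print {$out} "$extra$eorec";
            }
            close $out;
            close $in;
        }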

    Read the article

  • due at midnight - program compiles but has logic error(s)

    - by Leslie Laraia
    not sure why this program isn't working. it compiles, but doesn't provide the expected output. the input file is basically just this: Smith 80000 Jones 100000 Scott 75000 Washington 110000 Duffy 125000 Jacobs 67000 Here is the program: import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner; /** * * @author Leslie */ public class Election { /** * @param args the command line arguments */ public static void main(String[] args) throws FileNotFoundException { // TODO code application logic here File inputFile = new File("C:\\Users\\Leslie\\Desktop\\votes.txt"); Scanner in = new Scanner(inputFile); int x = 0; String line = ""; Scanner lineScanner = new Scanner(line); line = in.nextLine(); while (in.hasNextLine()) { line = in.nextLine(); x++; } String[] senatorName = new String[x]; int[] votenumber = new int[x]; double[] votepercent = new double[x]; System.out.printf("%44s", "Election Results for State Senator"); System.out.println(); System.out.printf("%-22s", "Candidate"); //Prints the column headings to the screen System.out.printf("%22s", "Votes Received"); System.out.printf("%22s", "%of Total Votes"); int i; for(i=0; i<x; i++) { while(in.hasNextLine()) { line = in.nextLine(); String candidateName = lineScanner.next(); String candidate = candidateName.trim(); senatorName[i] = candidate; int votevalue = lineScanner.nextInt(); votenumber[i] = votevalue; } } votepercent = percentages(votenumber, x); for (i = 0; i < x; i++) { System.out.println(); System.out.printf("%-22s", senatorName[i]); System.out.printf("%22d", votenumber[i]); System.out.printf("%22.2f", votepercent[i]); System.out.println(); } } public static double [] percentages(int[] votenumber, int z) { double [] percentage = new double [z]; double total = 0; for (double element : votenumber) { total = total + element; } for(int i=0; i < votenumber.length; i++) { int y = votenumber[i]; percentage[i] = (y/total) * 100; } return percentage; } }
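
    The central logic problems are that the first Scanner is exhausted by the counting loop (and the count misses one line, because the first nextLine() happens before the loop), and that lineScanner is built over an empty string, so it never yields any data. A sketch of a corrected read phase that counts, then re-opens the file and parses each "name votes" line with a fresh Scanner:

        // count the lines first
        int x = 0;
        Scanner counter = new Scanner(inputFile);
        while (counter.hasNextLine()) {
            counter.nextLine();
            x++;
        }
        counter.close();

        String[] senatorName = new String[x];
        int[] votenumber = new int[x];

        // re-open the file and parse one candidate per line
        Scanner in = new Scanner(inputFile);
        for (int i = 0; i < x && in.hasNextLine(); i++) {
            Scanner lineScanner = new Scanner(in.nextLine());
            senatorName[i] = lineScanner.next();
            votenumber[i] = lineScanner.nextInt();
            lineScanner.close();
        }
        in.close();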

    Read the article

  • .NET Extension Objects with XSLT -- how to iterate over a collection?

    - by Pandincus
    Help me, Stackoverflow! I have a simple .NET 3.5 console app that reads some data and sends emails. I'm representing the email format in an XSLT stylesheet so that we can easily change the wording of the email without needing to recompile the app. We're using Extension Objects to pass data to the XSLT when we apply the transformation: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl" xmlns:EmailNotification="ext:EmailNotification"> -- this way, we can have statements like: <p> Dear <xsl:value-of select="EmailNotification:get_FullName()" />: </p> The above works fine. I pass the object via code like this (some irrelevant code omitted for brevity): // purely an example structure public struct EmailNotification { public string FullName { get; set; } } // Somewhere in some method ... var notification = new Notification("John Smith"); // ... XsltArgumentList xslArgs = new XsltArgumentList(); xslArgs.AddExtensionObject("ext:EmailNotification", notification); // ... // The part where it breaks! (This is where we do the transformation) xslt.Transform(fakeXMLDocument.CreateNavigator(), xslArgs, XmlWriter.Create(transformedXMLString)); So, all of the above code works. However, I wanted to get a little fancy (always my downfall) and pass a collection, so that I could do something like this: <p>The following accounts need to be verified:</p> <xsl:for-each select="EmailNotification:get_SomeCollection()"> <ul> <li> <xsl:value-of select="@SomeAttribute" /> </li> </ul> <xsl:for-each> When I pass the collection in the extension object and attempt to transform, I get the following error: "Extension function parameters or return values which have Clr type 'String[]' are not supported." or List, or IEnumerable, or whatever I try to pass in. So, my questions are: How can I pass in a collection to my XSLT? What do I put for the xsl:value-of select="" inside the xsl:for-each ? Is what I am trying to do impossible?
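
    Extension-object methods cannot hand back arbitrary CLR collections, but they can return an XPathNodeIterator, which xsl:for-each iterates over as ordinary nodes. A sketch (the element names and the AccountsToVerify collection are placeholders) that keeps the existing get_SomeCollection()/@SomeAttribute stylesheet syntax working by exposing the data as a property:

        // requires using System.Xml; and using System.Xml.XPath;
        public XPathNodeIterator SomeCollection
        {
            get
            {
                // build a small XML fragment from the CLR collection, then return a node iterator over it
                var doc = new XmlDocument();
                var root = doc.CreateElement("accounts");
                doc.AppendChild(root);

                foreach (var account in this.AccountsToVerify)   // placeholder string collection
                {
                    var item = doc.CreateElement("account");
                    item.SetAttribute("SomeAttribute", account);
                    root.AppendChild(item);
                }

                return doc.CreateNavigator().Select("/accounts/account");
            }
        }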

    Read the article

  • jquery .get/.post not working on ie 7 or 8, works fine in ff

    - by Samutz
    I have basically this on a page: <script type="text/javascript"> function refresh_context() { $("#ajax-context").html("Searching..."); $.get("/ajax/ldap_search.php", {cn: $("#username").val()}, function(xml) { $("#ajax-context").html($("display", xml).text()); $("#context").val($("context", xml).text()); }, 'xml'); } $(document).ready(function() { $("#username").blur(refresh_context); }); </script> <input type="text" name="username" id="username" maxlength="255" value="" /> <input type="hidden" name="context" id="context" value=""/> <div id="ajax-context"></div> What it should do (and does fine on Firefox) is when you type a username in to the #username field, it will run /ajax/ldap_search.php?cn=$username, which searches our company's ldap for the username and returns it's raw context and a formatted version of the context like this: <result> <display>Staff -&gt; Accounting -&gt; John Smith</display> <context>cn=jsmith,ou=Accounting,ou=Staff,ou=Users,o=MyOrg</context> </result> The formatted version (display) goes to the div #ajax-context and goes to the hidden input #context. (Also, the - are actually - "& g t ;" (without spaces)). However, on IE the div stays stuck on "Searching..." and the hidden input value stays blank. I've tried both .get and .post and neither work. I'm sure it's failing on the .get because if I try this, I don't even get the alert: $.get("/ajax/ldap_search.php", {cn: $("#username").val()}, function() { alert("Check"); }); Also, IE doesn't give me any script errors. Edit: Added "$(document).ready(function() {", the .blur was already in it in my code, but I forgot to include that in my post. Edit 2: The request is being sent and apache2 is receiving it: 10.135.128.96 - - [01/May/2009:10:04:27 -0500] "GET /ajax/ldap_search.php?cn=i_typed_this_in_IE HTTP/1.1" 200 69
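
    IE aggressively caches GET requests to the same URL, so after the first lookup it can keep serving a stale (or empty) cached response while Firefox re-fetches every time. A sketch that disables caching; it is also worth confirming that ldap_search.php sends a Content-Type of text/xml, since older IE versions will not hand parsed XML to the callback otherwise:

        function refresh_context() {
            $("#ajax-context").html("Searching...");
            $.ajax({
                type: "GET",
                url: "/ajax/ldap_search.php",
                data: { cn: $("#username").val() },
                dataType: "xml",
                cache: false,   // appends a timestamp parameter so IE cannot serve a cached copy
                success: function (xml) {
                    $("#ajax-context").html($("display", xml).text());
                    $("#context").val($("context", xml).text());
                },
                error: function (xhr, status) {
                    $("#ajax-context").html("Lookup failed: " + status);
                }
            });
        }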

    Read the article

  • How to align Definition Lists in IE6 ?

    - by ellander
    I'm having a major headache trying to align some and elements in ie6. It looks fine in ie7 and firefox but the dt elements don't appear in ie6. can anyone help? here is the code.. <div id="listMembers"> <h3>Members</h3> <dl class="myDL"> <dt>Name</dt> <dd>John Smith</dd> <dt>Address</dt> <dd>the street</dd> ... </dl> <div id="listOptions"> <div> <table>...</table> </div> </div> <div> and the css:- DL.myDL { BORDER-RIGHT: black 2px outset; PADDING-RIGHT: 2px; BORDER-TOP: black 2px outset; DISPLAY: block; PADDING-LEFT: 2px; BACKGROUND: #ccbe99; PADDING-BOTTOM: 2px; BORDER-LEFT: black 2px outset; WIDTH: auto; PADDING-TOP: 2px; BORDER-BOTTOM: black 2px outset; FONT-FAMILY: "Trebuchet MS", Arial, sans-serif } DL.myDL DT { CLEAR: both; PADDING-RIGHT: 3px; DISPLAY: inline; FLOAT: left; WIDTH: 250px; TEXT-ALIGN: right } I basically want the dt text aligned to the right and the dd on the right hand side with left align text. I reset the margin on all elements to be 0 before anything else in the css and the elements are within a dive with position relative.
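
    One common IE6 workaround for floated dt/dd pairs (a sketch, not guaranteed against every IE6 quirk) is to force "layout" on the list with zoom, float the dt without display: inline, and push the dd over with a left margin equal to the dt width:

        dl.myDL {
            zoom: 1;             /* gives the list hasLayout in IE6 so the floats are contained */
        }
        dl.myDL dt {
            float: left;
            clear: left;
            width: 250px;
            padding-right: 3px;
            text-align: right;
        }
        dl.myDL dd {
            margin-left: 260px;  /* dt width + padding, keeps each definition on the right */
            text-align: left;
        }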

    Read the article

  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    Sharepoint 2007 MOSS with AD imported users. All servers are 2008. ***UPDATE More details in testing. This Sharepoint is in an AD Child domain (clients.mycompany.local), which is sub to the root of the AD tree (mycompany.local). The user is in the parent tree (as are half of the other functional users. I have elevated the user rights to Domain. In looking at the logs, it seems that the Sharepoint server is trying to authenticate them by querying the DC for the clients domain (which is the way it normally works and still works for all existing identically configured users). I think if I could force it to authenticate up to the top domain DC then it would be ok?? I have around 50 users, over the past 2 months, I have had a handful of the users suddenly unable to login to Sharepoint. When they login, they either get a blank screen or they are repropmted. These users are using accounts that have been used for many months, sometimes the problem originates with a password change. In all cases, the users account works on every other Active Directory authenticated resource (domain, exchange, LDAP). In the most recent case, last night I was forced deleted a user ("John smith") because of corruption. The orifinal account name was jsmith. I deleted him from active directory, then deleted him from the profile list in Sharepoint Shared Services. I could not find a way to delete him from the Sharepoint user list, but I reran the import after recreating his account (renamed it too just to be sure to "smithj"). At first, this did not wor, the user could still access all other resources but Sharepoint. then, some 30 minutes later it inexplicably started working. This morning, the user changed passwords, which immediatly broke the login on Sharepoint again. Logs by request from matt b Office SharePoint Server Date: 4/13/2010 2:00:00 PM Event ID: 7888 Task Category: Office Server General Level: Error Keywords: Classic User: N/A Computer: nb-portal-01.clients.netboundary.local Description: A runtime exception was detected. Details follow. Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) – TrevJen 19 hours ago Techinal Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex) at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML) at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed) – TrevJen 19 hours ago at Microsoft.SharePoint.SPField.Update() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn) at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch() at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock) – TrevJen 19 hours ago Log Name: Application Source: Office SharePoint Server Date: 4/13/2010 2:00:00 PM Event ID: 5553 Task Category: User Profiles Level: Error Keywords: Classic User: N/A Computer: nb-portal-01.clients.netboundary.local Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. 
(Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)). – TrevJen 18 hours ago

    Read the article

  • jQuery $.ajax response empty, but only in Chrome

    - by roguepixel
    I've exhausted every avenue of research to solve this one so hopefully someone else will think of something I just didn't. Relatively straight forward setup, I have a html page with some javascript that makes an ajax request to a URL (in the same domain) the java web app in the background does its stuff and returns a partial html page (no html, head or body tags, just the content) which should be inserted at a particular point in the page. All sounds pretty easy and the code I have works in IE, Firefox and Safari, but not in Chrome. In Chrome the target element just ends up empty and if I look at the resource request in Chromes developer tools the response content is also empty. All very confusing, I've tried a myriad of things to solve it and I'm just out of ideas. Any help would be greatly appreciated. var container = $('#container'); $.ajax({ type: 'GET', url: '/path/to/local/url', data: data('parameters=value&another=value2'), dataType: 'html', cache: false, beforeSend: requestBefore, complete: requestComplete, success: requestSuccess, error: requestError }); function data(parameters) { var dictionary = {}; var pairs = parameters.split('&'); for (var i = 0; i < pairs.length; i++) { var keyValuePair = pairs[i].split('='); dictionary[keyValuePair[0]] = keyValuePair[1]; } return dictionary; } function requestBefore() { container.find('.message.error').hide(); container.prepend('<div class="modal"><div class="indicator">Loading...</div></div>'); } function requestComplete() { container.find('.modal').remove(); } function requestSuccess(response) { container.empty(); container.html(response); } function requestError(response) { if (response.status == 200 && response.responseText == 'OK') { requestSuccess(response); } else { container.find('.message.error').fadeIn('slow'); } } All of this is executed in a $(document).ready(function() {}); Cheers, Jim @Oleg - Additional information requested, an example of the response that the ajax call might receive. <p class="message error hidden">An unknown error occured while trying to retrieve data, please try again shortly.</p> <div class="timeline"> <a class="icon shuttle-previous" rel="max_id=16470650733&page=1&q=something">Newer Data</a> <a class="icon shuttle-next" rel="max_id=16470650733&page=3&q=something">Older Data</a> </div> <ol class="social"> <li class="even"> <div class="avatar"> <img src="sphere_normal.gif"/> </div> <p> Some Content<br/> <span class="published">Jun 18, 2010 11:29:05 AM</span> - <a target="_blank" href="">Direct Link</a> </p> </li> <li class="odd"> <div class="avatar"> <img src="sphere_normal.gif"/> </div> <p> Some Content<br/> <span class="published">Jun 18, 2010 11:29:05 AM</span> - <a target="_blank" href="">Direct Link</a> </p> </li> </ol> <div class="timeline"> <a class="icon shuttle-previous" rel="max_id=16470650733&page=1&q=something">Newer Data</a> <a class="icon shuttle-next" rel="max_id=16470650733&page=3&q=something">Older Data</a> </div>
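
    A hedged first step for narrowing this down is to instrument the complete callback with what Chrome actually received; a status of 0 together with an empty body generally means the request was aborted or blocked rather than answered by the server:

        function requestComplete(jqXHR, textStatus) {
            container.find('.modal').remove();
            // temporary diagnostics: inspect exactly what came back in Chrome
            console.log("status:", jqXHR.status, "(" + textStatus + ")");
            console.log("content-type:", jqXHR.getResponseHeader("Content-Type"));
            console.log("body length:", (jqXHR.responseText || "").length);
        }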

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I’ve loved MSBuild and didn’t quite understand the reasons for this change. In fact, given that I’ve been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I’m starting to become a believer. I’m also starting to see some of the benefits with Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild such as parallel tasks, better control flow constructs, and a slightly better customization story. The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I’d recommend reading Mike Fourie’s well known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don’t use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync. To do this in TFS 2010, the best post I’ve found has been Jim Lamb’s post of building a custom TFS 2010 workflow activity. Overall, this post is excellent but the primary issue I have with it is that the assembly version numbers produced are based in a date and look like this: “2010.5.15.1”. This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the “1.1 release” or “1.2 release” – which would have an assembly version number of “1.1.317.0” for example. In this post, I’ll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb’s post to suit my needs. I’ll also be combining this with the concepts of Fourie’s post – particularly with regards to the standards around how to version the assemblies. The first thing I’ll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this: 1: using System; 2: using System.Reflection; 3: [assembly: AssemblyVersion("1.1.0.0")] 4: [assembly: AssemblyFileVersion("1.1.0.0")] I’ll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, “Add – Existing Item…” then when I click the SolutionAssemblyVersionInfo.cs file, making sure I “Add As Link”: Now the Solution Explorer will show our file. We can see that it’s a “link” file because of the black arrow in the icon within all our projects. Of course you’ll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid the duplicate attributes since they now leave in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit. At this point, we’re ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor and then I want the CI process to add the third number with a unique incremental number. We’ll leave the fourth position always “0” for now – it’s held in reserve in case the day ever comes where we need to do an emergency patch to Production based on a branched version.   
Writing the Custom Workflow Activity Similar to Lamb’s post, I’m going to write two custom workflow activities. The “outer” activity (a xaml activity) will be pretty straight forward. It will check if the solution version file exists in the solution root and, if so, delegate the replacement of version to the AssemblyVersionInfo activity which is a CodeActivity highlighted in red below:   Notice that the arguments of this activity are the “solutionVersionFile” and “tfsBuildNumber” which will be passed in. The tfsBuildNumber passed in will look something like this: “CI_MyApplication.4” and we’ll need to grab the “4” (i.e., the incremental revision number) and put that in the third position. Then we’ll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had “1.1.0.0” for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want to resulting file to have “1.1.4.0”. Before we do anything, let’s put together a unit test for all this so we can know if we get it right: 1: [TestMethod] 2: public void Assembly_version_should_be_parsed_correctly_from_build_name() 3: { 4: // arrange 5: const string versionFile = "SolutionAssemblyVersionInfo.cs"; 6: WriteTestVersionFile(versionFile); 7: var activity = new VersionAssemblies(); 8: var arguments = new Dictionary<string, object> { 9: { "tfsBuildNumber", "CI_MyApplication.4"}, 10: { "solutionVersionFile", versionFile} 11: }; 12:   13: // act 14: var result = WorkflowInvoker.Invoke(activity, arguments); 15:   16: // assert 17: Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]); 18: var lines = File.ReadAllLines(versionFile); 19: Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]")); 20: Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]")); 21: } 22: 23: private void WriteTestVersionFile(string versionFile) 24: { 25: var fileContents = "using System.Reflection;\n" + 26: "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" + 27: "[assembly: AssemblyFileVersion(\"1.2.0.0\")]"; 28: File.WriteAllText(versionFile, fileContents); 29: }   At this point, the code for our AssemblyVersion activity is pretty straight forward: 1: [BuildActivity(HostEnvironmentOption.Agent)] 2: public class AssemblyVersionInfo : CodeActivity 3: { 4: [RequiredArgument] 5: public InArgument<string> FileName { get; set; } 6:   7: [RequiredArgument] 8: public InArgument<string> TfsBuildNumber { get; set; } 9:   10: public OutArgument<string> NewAssemblyFileVersion { get; set; } 11:   12: protected override void Execute(CodeActivityContext context) 13: { 14: var solutionVersionFile = this.FileName.Get(context); 15: 16: // Ensure that the file is writeable 17: var fileAttributes = File.GetAttributes(solutionVersionFile); 18: File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly); 19:   20: // Prepare assembly versions 21: var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile); 22: var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context)); 23: var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2); 24: var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber); 25: this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion); 26:   27: // Perform the actual replacement 28: var contents = this.GetFileContents(newAssemblyVersion, 
newAssemblyFileVersion); 29: File.WriteAllText(solutionVersionFile, contents); 30:   31: // Restore the file's original attributes 32: File.SetAttributes(solutionVersionFile, fileAttributes); 33: } 34:   35: #region Private Methods 36:   37: private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion) 38: { 39: var cs = new StringBuilder(); 40: cs.AppendLine("using System.Reflection;"); 41: cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion); 42: cs.AppendLine(); 43: cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion); 44: return cs.ToString(); 45: } 46:   47: private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath) 48: { 49: var lines = File.ReadAllLines(filePath); 50: var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault(); 51:   52: if (versionLine == null) 53: { 54: throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute"); 55: } 56:   57: return ExtractMajorMinor(versionLine); 58: } 59:   60: private static Tuple<string, string> ExtractMajorMinor(string versionLine) 61: { 62: var firstQuote = versionLine.IndexOf('"') + 1; 63: var secondQuote = versionLine.IndexOf('"', firstQuote); 64: var version = versionLine.Substring(firstQuote, secondQuote - firstQuote); 65: var versionParts = version.Split('.'); 66: return new Tuple<string, string>(versionParts[0], versionParts[1]); 67: } 68:   69: private string GetNewBuildNumber(string buildName) 70: { 71: return buildName.Substring(buildName.LastIndexOf(".") + 1); 72: } 73:   74: #endregion 75: }   At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTempate.xaml – we’ll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to “BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion since the newAssemblyFileVersion was produced by our activity.   Configuring CI Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we’ll change the Process tab to reflect a different build number format and choose our custom build process file:   When the build completes, we’ll see the name of our project with the unique revision number:   If we look at the detailed build log for the latest build, we’ll see the label being created with our custom task:     We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow):   Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties:   Full Traceability We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I’ve done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. 
The new features that TFS 2010 gives you to make these types of customizations in your build process are quite easy to use once you get over the initial learning curve.
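
    For reference, the "different build number format" set on the Process tab is presumably along the lines of the following, which yields names like CI_MyApplication.4 that GetNewBuildNumber then parses for the incremental revision:

        $(BuildDefinitionName)$(Rev:.r)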

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
    Originally posted in the Oracle Profit Magazine, November 2010 Edition. When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual. Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now. When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities. Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense. "Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term." For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology. Game Changer #1: New Standard for InnovationChange is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date. 
For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine. Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.
Game Changer #2: New Standard for Work
Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand. "We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains. So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote. The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training. This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand. 
"Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain." Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval. Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access. A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps. Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers." Game Changer #3: New Standard for Technology AdoptionAs IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features. Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application. 
Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before." For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems. And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs. It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time."
For More Information: User Input Key to the Success of Oracle Fusion Applications | Transforming Coexistence into Strategic Value | Under the Hood | Oracle Fusion Applications | Oracle Service-Oriented Architecture  

    Read the article

  • Life Technologies: Making Life Easier to Manage

    - by Michael Snow
When we’re thinking about customer engagement, we’re acutely aware of all the forces at play competing for our customer’s attention. Solutions that make life easier for our customers draw attention to themselves. We tend to engage more when there is a distinct benefit and we can take a deep breath and accept that there is hope in the world and everything isn’t designed to frustrate us and make our lives miserable. (sigh…) When products are designed to automate processes that were consuming hours of our time with no relief in sight, they deserve to be recognized. One of our recent Oracle Fusion Middleware Innovation Award Winners in the WebCenter category, Life Technologies, has recently posted a video promoting their “award winning” solution. The Oracle Innovation Awards are part of the overall Oracle Excellence awards given to customers for innovation with Oracle products. More info here. Their award nomination included this description: Life Technologies delivered the My Life Service Portal as part of a larger Digital Hub strategy. This Portal is the first of its kind in the biotechnology service-providing industry. The Portal provides access to Life Technologies’ cloud-based service monitoring system, where all customer-deployed instruments can be remotely monitored and proactively repaired. The portal provides alerts from these cloud-based monitoring services directly to the customer and to Life Technologies Field Engineers. The Portal provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs. This portal not only provides benefits for Life Technologies’ internal sales and service teams but gives customers a central place to track all pertinent instrument information, including:
- instrument service history
- instrument status and previous activities
- instrument performance analytics
- planned service visits
- warranty/contract information
- discussion forums
- social networks for lab management and collaboration
- alerts and notifications on all of the above
- team scheduling for instrument usage
- promotion of optional reagents required to keep instruments performing
From their website: “The Life Technologies Instruments & Services Portal Helps You Save Time and Gain Peace of Mind. Introducing the new, award-winning, free online tool that enables easier management of your instrument use and care, faster response to requests for service or service quotes, and instant sharing of key instrument and service information with your colleagues.” Now – this in itself is obviously beneficial for their customers, who were previously burdened with having to do all of these tasks separately, manually, and inconsistently by nature. Now – all in one place and free to their customers – a portal that ties it all together. 
They now have built the platform to give their customers yet another reason to do business with them – Their headline on their product page says it all: “Life is now easier to manage - All your instrument use and care in one place – the no-cost, no-hassle Instruments and Services Portal.” Of course – it’s very convenient that the company name includes “Life” and now can also promote to their clients and prospects that doing business with them is easy and their sophisticated lab equipment is easy to manage. In an industry full of PhD’s – “Easy” isn’t usually the first word that comes to mind, but Life Technologies has now tied the word to their brand in a very eloquent way. Between our work lives and family or personal lives, getting any mono-focused minutes of dedicated attention has become such a rare occurrence in our current era of multi-tasking that those moments of focus are highly prized. So – when something is done really well – so well that it becomes captivating and urges sharing impulses – I take notice and dig deeper and most of the time I discover other gems not so hidden below the surface. And then I share with those I know would enjoy and understand. In the spirit of full disclosure, I must admit here that the first person I shared the videos below with was my daughter. She’s in her senior year of high school in the midst of her college search. She’s passionate about her academics and has already decided that she wants to study Neuroscience in college and like her mother will be in for the long haul to a PhD eventually. In a summer science program at Smith College 2 summers ago – she sent the family famous text to me – “I just dissected a sheep’s brain – wicked cool!” – This was followed by an equally memorable text this past summer in a research mentorship in Neuroscience at UConn – “Just sliced up some rat brain. Reminded me of a deli slicer at the supermarket… sorry I forgot to call last night…” So… needless to say – I knew I had an audience that would enjoy and understand these videos below and are now being shared among her science classmates and faculty. And evidently - so does Life Technologies! They’ve done a great job on these making them fun and something that will easily be shared among their customers social networks. They’ve created a neuro-archetypal character, “Ph.Diddy” and know that their world of clients in academics, research, and other institutions would understand and enjoy the “edutainment” value in this series of videos on their YouTube channel that pokes fun at the stereotypes while also promoting their products at the same time. They use their Facebook page for additional engagement with their clients and as another venue to promote these videos. Enjoy this one as well! More to be found here: http://www.youtube.com/lifetechnologies Stay tuned to this Oracle WebCenter blog channel. Tomorrow we'll be taking a look at another winner of the Innovation Awards, LADWP - helping to keep the citizens of Los Angeles engaged with their Water and Power provider.

    Read the article

  • Microsoft Channel 9 Interviews Mei Liang to Introduce Sample Browser Extension for Visual Studio 2012 and 2010

    - by Jialiang
    This morning, Microsoft Channel 9 interviewed Mei Liang - Group Manager of Microsoft All-In-One Code Framework - to introduce the newest Sample Browser extension for Visual Studio 2012 &2010.   This extension provides a way for developers to search and download more than 4500 code samples from within Visual Studio, including over 700 Windows 8 samples and more than 1000 All-In-One Code Framework customer-driven code samples. Mei shows us not only the extension, but also the standalone version of the Sample Browser.   http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Sample-Browser-Visual-Studio-Extension   Microsoft All-In-One Code Framework, working in close partnership with the Visual Studio product team and MSDN Samples Gallery, developed the Sample Browser extension for both Visual Studio 2012 and Visual Studio 2010.  As an effort to evolve the code sample use experience and improve developers' productivity, the Sample Browser allows programmers to search, download and open over 4500 code samples from within Visual Studio with just a few simple clicks.  If no existing code sample can meet the needs, developers can even request a code sample easily from Microsoft thanks to the free “Sample Request Service” offered by Microsoft All-In-One Code Framework.  Through innovations, the teams hope to put the power of tens of thousands of code samples at developers’ fingertips. In short 3 months, the Sample Browser Visual Studio Extension has been installed by 100K global users.  It is also selected as one of the six most highly regarded and commonly used tools for Visual Studio that will make your programming experience feel like never before.   Got to love the All-In-One Code Framework team! You guys know this is THE go to source for code samples. Get this extension and you'll never need to leave VS2012 (well except for bathroom trips, but that's TMI anyway... ;) Read More... From: Greg Duncan (Author of CoolThingOfTheDay) 9/6/2011 12:00 AM The one software design pattern that I have used in just about every application I’ve written is “cut-and-paste,” so the new “Sample Browser” – read sample as a noun not an adjective – is a great boon to my productivity. Read More... From: Jim O'Neil (Microsoft Developer Evangelist) 9/28/2011 12:00 AM Install: http://aka.ms/samplebrowservsx Microsoft All-In-One Code Framework also offers the standalone version of Sample Browser.   The standalone version is particularly useful to Visual Studio Express edition or Visual Studio 2008 users, who cannot install the Sample Browser Visual Studio extension.   From Grassroots’ Passion for Developers to the Innovation of Sample Browser This Sample Browser has come a very long way improving the code sample use experience.  The history can be traced back to a grass-root innovation three years ago.   In early 2009, a few MSDN forum support engineers observed that lots of developers were struggling to work in Visual Studio without adequate code samples. Programming tasks seem harder than they should be when you only read through the documentation.  Just a couple of lines of sample code could answer a lot of questions.   They had a brilliant idea: What if we produce code samples based on developers’ frequently asked programming tasks in forums, social networks and support incidents, and then aggregate all our sample code in a one-stop library to benefit developers?  And what if developers can request code samples directly from Microsoft, free of charge?  
This small group of grassroots at Microsoft devoted their nights and weekends to prototyping such a customer-driven code sample library.  This simple idea eventually turned into “Microsoft All-In-One Code Framework”, aka. OneCode.  With the support from more and more passionate developers at Microsoft and the leaders in the Community and Online Support team and Microsoft Commercial Technical Services (CTS), the idea has become a continually growing library with over 1000 customer-driven code samples covering almost all Microsoft development technologies.  These code samples originated from developers’ common pains and needs should be able to help many developers.  However, if developers cannot easily discover the code samples, the effort would still be in vain.  So in early 2010, the team started the idea of Sample Browser to ease the discovery and access of these samples.  In just two months, the first version of Sample Browser was finished and released by a passionate developer.  It was a very simple application, only supporting the basic sample offline search.  Users had to download the whole 100MB sample package containing all samples first, and run the Sample Browser to search locally.   Though developers could not search and download samples on-demand, this simple application laid a solid foundation for the team’s continuous innovations of Sample Browsing experience. In 2011, MSDN Samples Gallery had a big refresh.  The online sample experience was brought to a new level thanks to its PM Steven Wilssens and the gallery team’s effort.  Microsoft All-In-One Code Framework Team saw the opportunity to realize the “on-demand” sample search and download feature with the new gallery.  The two teams formed a strong partnership to upload all the customer-driven code samples to MSDN Samples Gallery, and released the new version of Sample Browser to support “on-demand” sample downloading in April, 2011.  Mei Liang, the Group Manager of Microsoft All-In-One Code Framework, was interviewed by Channel 9 to demo the Sample Browser.  Customers love the effort and the innovation!!  This can be clearly seen from the user comments in the publishing page.   It was very encouraging to the team of All-In-One Code Framework. The team continues innovating and evolving the Sample Browser.  They found the Visual Studio product team this time, and integrated the Sample Browsing experience into the latest Visual Studio 2012.  The newly released Sample Browser Visual Studio extension makes good use of Visual Studio 2012 IDE such as the new Quick Launch bar, the code editor, the toolbar and menus to offer easy access to thousands of code samples from within the development environment.   The Visual Studio Senior Program Manager Lead - Anthony Cangialosi, the Program Manager - Murali Krishna Hosabettu Kamalesha, the MSDN Samples Gallery PM – Steven Wilssens, and the Visual Studio Senior Escalation Engineer - Ed Dore shared lots of insightful suggestions with the team.  Thanks to the brilliant cross-group collaboration inside Microsoft, tens of new features including “Local Language Support” and “Favorite Samples”, as well as a face-lifted user interface, were added to further enhance the user experience. Since the new Sample Browser Visual Studio extension was released, it has received over 100 thousand downloads and five-star ratings.  A customer told the team that he officially falls in LOVE with Microsoft All-In-One Code Framework.   The Sample Browser Innovation for Developers Never Stops! 
The teams will never stop improving the Sample Browser to make developers' lives easier. The Microsoft All-In-One Code Framework, Visual Studio, and MSDN Samples Gallery teams are working closely to develop the next version of Sample Browser. Here are the key functions in development or under discussion. We hope to hear your feedback on the effort. You can submit your suggestions to the official Visual Studio UserVoice site. We look forward to hearing from you!
1) Offline Sample Search
This is one of the top feature requests that we have received for Sample Browser. The Sample Browser will support an offline search mode so that developers can search downloaded code samples when they do not have internet access. This is particularly useful to developers in enterprises with strict proxy settings.
2) Code Snippet Support and Visual Studio Editor Integration
Today, the Sample Browser supports downloading and opening sample projects. However, when developers are searching for code samples, a better user experience would be to see the code snippets in the search results first. Developers can quickly decide whether a code snippet is relevant. They can also drag and drop the code snippet into the Visual Studio editor to solve simple programming tasks. If developers want to learn more about the sample, they can then choose to download the sample project and open it in Visual Studio.
3) Enterprise Sample Sharing and Searching
Large enterprises have many code samples for their own internal tools and APIs that are not appropriate to share publicly in MSDN Samples Gallery. In that case, today's Sample Browser and MSDN Samples Gallery cannot help these enterprise developers. The idea is to create a Code Sample Repository in TFS, and provide an additional Visual Studio extension for enterprise developers to quickly share code samples to TFS. The Sample Browser can be configured to connect to the TFS Code Sample Repository to search for and download code samples. This would potentially make enterprise developers more productive.
4) Windows Store Sample Browser
With the upcoming release of Windows RT and Microsoft Surface, developers are facing a completely new application platform. Unlike a laptop, a Microsoft Surface is often used while commuting or traveling, when internet access may not be available. Today's Visual Studio cannot be installed and run on Windows RT; however, our enthusiastic developers would hope to spend every minute on code. They love code! The idea is to create a Windows Store version of Sample Browser that would:
- search and download samples from the online Samples Gallery when the user has internet access;
- let developers browse the sample code files and documentation of downloaded samples with or without internet access;
- in addition to the "browse" function, possibly support "bookmark", "learning notes", "code review", and "quick social sharing";
- make full use of the new touch and Windows Store app UI to give developers a "relaxing" code browsing and learning experience, anytime, anywhere.
With the Windows Store Sample Browser, developers could enjoy:
A new, relaxing, and enjoyable experience for learning code samples: you would not have to sit at a desk and formally open Visual Studio to read code samples; many developers' health suffers from long stretches in front of a desk. With Windows RT, Microsoft Surface, and the Windows Store Sample Browser combined with the online MSDN Samples Gallery, developers could sit on a sofa, hold the tablet, and comfortably study their beloved sample code with detailed documentation.
Anytime, anywhere: whether you have internet access or not, and whether you are at home, in the office, or commuting or on an airplane, you can always access and browse the sample code.
Lightweight and fast: particularly for learning a small sample project, the Windows Store Sample Browser would be more lightweight and faster at opening and browsing the sample code.
Please submit your feedback and suggestions to Visual Studio UserVoice. We look forward to hearing from you and to delivering a better and better sample experience. Happy coding! Special thanks to the people working behind the latest release of the Sample Browser Visual Studio extension, and to the great partnerships!

    Read the article

  • Android - Create a custom multi-line ListView bound to an ArrayList

    - by Bill Osuch
The Android HelloListView tutorial shows how to bind a ListView to an array of string objects, but you'll probably outgrow that pretty quickly. This post will show you how to bind the ListView to an ArrayList of custom objects, as well as create a multi-line ListView. Let's say you have some sort of search functionality that returns a list of people, along with addresses and phone numbers. We're going to display that data in three formatted lines for each result, and make it clickable. First, create your new Android project, and create two layout files. Main.xml will probably already be created by default, so paste this in:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <TextView
        android:layout_height="wrap_content"
        android:text="Custom ListView Contents"
        android:gravity="center_vertical|center_horizontal"
        android:layout_width="fill_parent" />
    <ListView
        android:id="@+id/ListView01"
        android:layout_height="wrap_content"
        android:layout_width="fill_parent"/>
</LinearLayout>

Next, create a layout file called custom_row_view.xml. This layout will be the template for each individual row in the ListView. You can use pretty much any type of layout - Relative, Table, etc., but for this we'll just use Linear:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <TextView android:id="@+id/name"
        android:textSize="14sp"
        android:textStyle="bold"
        android:textColor="#FFFF00"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>
    <TextView android:id="@+id/cityState"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>
    <TextView android:id="@+id/phone"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>
</LinearLayout>

Now, add an object called SearchResults. Paste this code in:

public class SearchResults {
    private String name = "";
    private String cityState = "";
    private String phone = "";

    public void setName(String name) {
        this.name = name;
    }
    public String getName() {
        return name;
    }
    public void setCityState(String cityState) {
        this.cityState = cityState;
    }
    public String getCityState() {
        return cityState;
    }
    public void setPhone(String phone) {
        this.phone = phone;
    }
    public String getPhone() {
        return phone;
    }
}

This is the class that we'll be filling with our data, and loading into an ArrayList. Next, you'll need a custom adapter. This one just extends the BaseAdapter, but you could extend the ArrayAdapter if you prefer. 
public class MyCustomBaseAdapter extends BaseAdapter {
    private static ArrayList<SearchResults> searchArrayList;
    private LayoutInflater mInflater;

    public MyCustomBaseAdapter(Context context, ArrayList<SearchResults> results) {
        searchArrayList = results;
        mInflater = LayoutInflater.from(context);
    }

    public int getCount() {
        return searchArrayList.size();
    }

    public Object getItem(int position) {
        return searchArrayList.get(position);
    }

    public long getItemId(int position) {
        return position;
    }

    public View getView(int position, View convertView, ViewGroup parent) {
        ViewHolder holder;
        if (convertView == null) {
            // Inflate a new row and cache its child views in a ViewHolder
            convertView = mInflater.inflate(R.layout.custom_row_view, null);
            holder = new ViewHolder();
            holder.txtName = (TextView) convertView.findViewById(R.id.name);
            holder.txtCityState = (TextView) convertView.findViewById(R.id.cityState);
            holder.txtPhone = (TextView) convertView.findViewById(R.id.phone);
            convertView.setTag(holder);
        } else {
            // Reuse the recycled row's ViewHolder
            holder = (ViewHolder) convertView.getTag();
        }

        holder.txtName.setText(searchArrayList.get(position).getName());
        holder.txtCityState.setText(searchArrayList.get(position).getCityState());
        holder.txtPhone.setText(searchArrayList.get(position).getPhone());
        return convertView;
    }

    static class ViewHolder {
        TextView txtName;
        TextView txtCityState;
        TextView txtPhone;
    }
}

(This is basically the same as the List14.java API demo.) Finally, we'll wire it all up in the main class file:

public class CustomListView extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        ArrayList<SearchResults> searchResults = GetSearchResults();

        final ListView lv1 = (ListView) findViewById(R.id.ListView01);
        lv1.setAdapter(new MyCustomBaseAdapter(this, searchResults));

        lv1.setOnItemClickListener(new OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> a, View v, int position, long id) {
                Object o = lv1.getItemAtPosition(position);
                SearchResults fullObject = (SearchResults) o;
                Toast.makeText(CustomListView.this, "You have chosen: " + fullObject.getName(), Toast.LENGTH_LONG).show();
            }
        });
    }

    private ArrayList<SearchResults> GetSearchResults() {
        ArrayList<SearchResults> results = new ArrayList<SearchResults>();

        SearchResults sr1 = new SearchResults();
        sr1.setName("John Smith");
        sr1.setCityState("Dallas, TX");
        sr1.setPhone("214-555-1234");
        results.add(sr1);

        sr1 = new SearchResults();
        sr1.setName("Jane Doe");
        sr1.setCityState("Atlanta, GA");
        sr1.setPhone("469-555-2587");
        results.add(sr1);

        sr1 = new SearchResults();
        sr1.setName("Steve Young");
        sr1.setCityState("Miami, FL");
        sr1.setPhone("305-555-7895");
        results.add(sr1);

        sr1 = new SearchResults();
        sr1.setName("Fred Jones");
        sr1.setCityState("Las Vegas, NV");
        sr1.setPhone("612-555-8214");
        results.add(sr1);

        return results;
    }
}

Notice that we first get an ArrayList of SearchResults objects (normally this would come from an external data source...), pass it to the custom adapter, then set up a click listener. The listener gets the item that was clicked, converts it back to a SearchResults object, and does whatever it needs to do. 
Fire it up in the emulator, and you should wind up with a clickable list in which each person appears on three formatted lines: the name in bold, then the city and state, then the phone number.
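If the result set changes at runtime (say, after the user runs a new search), the ListView has to be told to rebind its rows. Here is a minimal sketch of one way to do that; it assumes you keep the adapter and the backing ArrayList as fields of the activity, and the refreshResults() helper is hypothetical rather than part of the original tutorial:

    // Assumed fields, set in onCreate(), e.g.:
    //   adapter = new MyCustomBaseAdapter(this, searchResults);
    //   lv1.setAdapter(adapter);
    private MyCustomBaseAdapter adapter;
    private ArrayList<SearchResults> searchResults;

    // Hypothetical helper: call on the UI thread after a new search completes.
    private void refreshResults(ArrayList<SearchResults> newResults) {
        // Mutate the list the adapter already holds rather than swapping in a
        // new instance, then notify the adapter so visible rows are rebound.
        searchResults.clear();
        searchResults.addAll(newResults);
        adapter.notifyDataSetChanged();
    }

Mutating the existing list keeps the adapter and the activity looking at the same data; if you replace the list object instead, the adapter would keep rendering the stale one.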

    Read the article

  • T4 Performance Counters explained

    - by user13346607
Now that T4 has been out for a few months, some people might have wondered what details of the new pipeline you can monitor. A "cpustat -h" lists a lot of events that can be monitored, and only very few are self-explanatory. I will try to give some insight on all of them; some of these "PIC events" require an in-depth knowledge of the T4 pipeline. Over time I will try to explain these; for the time being such events should simply be ignored. (Side note: some counters changed from tape-out 1.1 (*only* used in the T4 beta program) to tape-out 1.2 (used in the systems shipping today). The table below only lists the tape-out 1.2 counters.)

pic name (cpustat) -- prose comment

Sel-pipe-drain-cycles, Sel-0-[wait|ready], Sel-[1,2] -- Sel-0-wait counts cycles a strand waits to be selected. Some reasons can be counted in detail; these are: Sel-0-ready: cycles a strand was ready but not selected, which can signal pipeline oversubscription. Sel-1: cycles only one instruction or µop was selected. Sel-2: cycles two instructions or µops were selected. Sel-pipe-drain-cycles: cf. PRM footnote 8 to table 10.2.
Pick-any, Pick-[0|1|2|3] -- Cycles one, two, three, no, or at least one instruction or µop is picked.
Instr_FGU_crypto -- Number of FGU or crypto instructions executed on that vcpu.
Instr_ld -- dto. for loads.
Instr_st -- dto. for stores.
SPR_ring_ops -- dto. for SPR ring ops.
Instr_other -- dto. for all other instructions not listed above; PRM footnote 7 to table 10.2 lists the instructions.
Instr_all -- Total number of instructions executed on that vcpu.
Sw_count_intr -- Nr of S/W count instructions on that vcpu (sethi %hi(fc000),%g0).
Atomics -- Nr of atomic ops, which are LDSTUB/A, CASA/CASXA, and SWAP/SWAPA.
SW_prefetch -- Nr of PREFETCH or PREFETCHA instructions.
Block_ld_st -- Block loads or stores on that vcpu.
IC_miss_nospec, IC_miss_[L2_or_L3|local|remote]_hit_nospec -- Various I$ misses, distinguished by where they hit. All of these count per thread, but only primary events: T4 counts only the first occurrence of an I$ miss on a core for a certain instruction. If one strand misses in I$ that miss is counted, but if a second strand on the same core misses while the first miss is being resolved, that second miss is not counted. This flavour of I$ misses counts only misses caused by instructions that really commit (note the "_nospec").
BTC_miss -- Branch target cache miss.
ITLB_miss -- ITLB misses (synchronously counted).
ITLB_miss_asynch -- dto. but asynchronously.
[I|D]TLB_fill_[8KB|64KB|4MB|256MB|2GB|trap] -- H/W tablewalk events that fill ITLB or DTLB with a translation for the corresponding page size. The "_trap" event occurs if the HWTW was not able to fill the corresponding TLB.
IC_mtag_miss, IC_mtag_miss_[ptag_hit|ptag_miss|ptag_hit_way_mismatch] -- I$ micro tag misses, with some options for drill-down.
Fetch-0, Fetch-0-all -- Fetch-0 counts nr of cycles nothing was fetched for this particular strand; Fetch-0-all counts cycles nothing was fetched for all strands on a core.
Instr_buffer_full -- Cycles the instruction buffer for a strand was full, thereby preventing any fetch.
BTC_targ_incorrect -- Counts all occurrences of wrongly predicted branch targets from the BTC.
[PQ|ROB|LB|ROB_LB|SB|ROB_SB|LB_SB|RB_LB_SB|DTLB_miss]_tag_wait -- ST_q_tag_wait is listed under sl=20. These counters monitor pipeline behaviour and are therefore not strand specific: PQ_...: cycles the Rename stage waits for a Pick Queue tag (might signal a memory-bound workload in single-thread mode, cf. mail from Richard Smith). ROB_...: cycles the Select stage waits for a ROB (ReOrderBuffer) tag. LB_...: cycles the Select stage waits for a Load Buffer tag. SB_...: cycles the Select stage waits for a Store Buffer tag. Combinations of the above are allowed; although some of these events can overlap, the counter will only be incremented once per cycle if any of them occur. DTLB_...: cycles load or store instructions wait at the Pick stage for a DTLB miss tag.
[ID]TLB_HWTW_[L2_hit|L3_hit|L3_miss|all] -- Counters for HWTW accesses caused by either DTLB or ITLB misses. Can be further detailed by where they hit.
IC_miss_L2_L3_hit, IC_miss_local_remote_remL3_hit, IC_miss -- I$ prefetches that were dropped because they miss in either L2$ or L3$. This variant counts misses regardless of whether the causing instruction commits or not.
DC_miss_nospec, DC_miss_[L2_L3|local|remote_L3]_hit_nospec -- D$ misses, either in general or detailed by where they hit; cf. the explanation for IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters.
DTLB_miss_asynch -- Counts all DTLB misses asynchronously; there is no way to count them synchronously.
DC_pref_drop_DC_hit, SW_pref_drop_[DC_hit|buffer_full] -- L1-D$ h/w prefetches that were dropped because of a D$ hit, counted per core. The others count software prefetches per strand.
[Full|Partial]_RAW_hit_st_[buf|q] -- Count events where a load wants data that has not yet been stored, i.e. it is still inside the pipeline. The data might be either still in the store buffer or in the store queue. If the load's data matches in the SB and in the store queue, the data in the buffer takes precedence of course, since it is younger.
[IC|DC]_evict_invalid, [IC|DC|L1]_snoop_invalid, [IC|DC|L1]_invalid_all -- Counters for invalidated cache evictions per core.
St_q_tag_wait -- Number of cycles the pipeline waits for a store queue tag, of course counted per core.
Data_pref_[drop_L2|drop_L3|hit_L2|hit_L3|hit_local|hit_remote] -- Data prefetches that can be further detailed by either why they were dropped or where they hit.
St_hit_[L2|L3], St_L2_[local|remote]_C2C, St_local, St_remote -- Store events distinguished by where they hit or where they cause an L2 cache-to-cache transfer, i.e. either a transfer from another L2$ on the same die or from a different die.
DC_miss, DC_miss_[L2_L3|local|remote]_hit -- D$ misses, either in general or detailed by where they hit; cf. the explanation for IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters.
L2_[clean|dirty]_evict -- Per-core clean or dirty L2$ evictions.
L2_fill_buf_full, L2_wb_buf_full, L2_miss_buf_full -- Per-core L2$ buffer events; all count the number of cycles that this state was present.
L2_pipe_stall -- Per-core cycles the pipeline stalled because of L2$.
Branches -- Counts branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
Br_taken -- Counts taken branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
Br_mispred, Br_dir_mispred, Br_trg_mispred, Br_trg_mispred_[far_tbl|indir_tbl|ret_stk] -- Counters for various branch misprediction events.
Cycles_user -- Counts cycles; the attribute settings hpriv, nouser, and sys control the address space to count in.
Commit-[0|1|2], Commit-0-all, Commit-1-or-2 -- Number of times either no, one, or two µops commit for a strand. Commit-0-all counts the number of times no µop commits for the whole core; cf. footnote 11 to table 10.2 in the PRM for a more detailed explanation of how this counter interacts with the privilege levels.
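To actually sample a few of these counters, the cpustat tool mentioned at the top of this post is the usual entry point. A minimal sketch follows; the exact event-spec syntax and the number of events that can be combined per invocation vary by Solaris release, so treat the command line as an assumption and verify the event names against "cpustat -h" first:

    # Sample total instructions and D$ misses once per second, ten times
    # (event names are the pic names from the table above; see cpustat(1M)).
    cpustat -c Instr_all,DC_miss 1 10

On a busy system, comparing Instr_all against a stall-oriented counter such as one of the _tag_wait events over the same interval is a quick way to see whether a strand is doing work or mostly waiting.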

    Read the article

< Previous Page | 76 77 78 79 80 81 82  | Next Page >