Search Results

Search found 2368 results on 95 pages for 'jeff smith'.

Page 92/95

  • C# and NpgsqlDataAdapter returning a single string instead of a data table

    - by tme321
    I have a postgresql db and a C# application to access it. I'm having a strange error with values I return from a NpgsqlDataAdapter.Fill command into a DataSet. I've got this code: NpgsqlCommand n = new NpgsqlCommand(); n.Connection = connector; // a class member NpgsqlConnection DataSet ds = new DataSet(); DataTable dt = new DataTable(); // DBTablesRef are just constants declared for // the db table names and columns ArrayList cols = new ArrayList(); cols.Add(DBTablesRef.all); //all is just * ArrayList idCol = new ArrayList(); idCol.Add(DBTablesRef.revIssID); ArrayList idVal = new ArrayList(); idVal.Add(idNum); // a function parameter // Select builder and Where builder are just small // functions that return an sql statement based // on the parameters. n is passed to the where // builder because the builder uses named // parameters and sets them in the NpgsqlCommand // passed in String select = SelectBuilder(DBTablesRef.revTableName, cols) + WhereBuilder(n,idCol, idVal); n.CommandText = select; try { NpgsqlDataAdapter da = new NpgsqlDataAdapter(n); ds.Reset(); // filling DataSet with result from NpgsqlDataAdapter da.Fill(ds); // C# DataSet takes multiple tables, but only the first is used here dt = ds.Tables[0]; } catch (Exception e) { Console.WriteLine(e.ToString()); } So my problem is this: the above code works perfectly, just like I want it to. However, if instead of doing a select on all (*) I try to name individual columns to return from the query, I get the information I asked for, but rather than being split up into separate entries in the data table I get a string in the first index of the data table that looks something like: "(0,5,false,Bob Smith,7)" And the data is correct: I would be expecting 0, then 5, then a boolean, then some text etc. But I would (obviously) prefer it to not be returned as just one big string. Anyone know why if I do a select on * I get a datatable as expected, but if I do a select on specific columns I get a data table with one entry that is the string of the values I'm asking for?
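    One plausible explanation (an assumption, since SelectBuilder itself isn't shown): if the generated SQL wraps the named columns in parentheses, e.g. SELECT (col1, col2, col3) FROM ..., PostgreSQL treats the parenthesised list as a single composite (row) value, which comes back as one string column — exactly the "(0,5,false,Bob Smith,7)" symptom — whereas SELECT * yields one DataColumn per field. A minimal sketch of a builder that avoids this:

        // Hypothetical SelectBuilder sketch: join the column names with commas and no
        // surrounding parentheses, so PostgreSQL returns one DataColumn per column
        // instead of a single composite (row) value rendered as "(0,5,false,...)".
        using System.Collections;

        public static class SelectBuilderSketch
        {
            public static string SelectBuilder(string tableName, ArrayList cols)
            {
                // e.g. cols = { "id", "count", "approved" } -> "SELECT id, count, approved FROM revisions "
                string columnList = string.Join(", ", (string[])cols.ToArray(typeof(string)));
                return "SELECT " + columnList + " FROM " + tableName + " ";
            }
        }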

    Read the article

  • Haskell: Left-biased/short-circuiting function

    - by user2967411
    Two classes ago, our professor presented to us a Parser module. Here is the code: module Parser (Parser,parser,runParser,satisfy,char,string,many,many1,(+++)) where import Data.Char import Control.Monad import Control.Monad.State type Parser = StateT String [] runParser :: Parser a -> String -> [(a,String)] runParser = runStateT parser :: (String -> [(a,String)]) -> Parser a parser = StateT satisfy :: (Char -> Bool) -> Parser Char satisfy f = parser $ \s -> case s of [] -> [] a:as -> [(a,as) | f a] char :: Char -> Parser Char char = satisfy . (==) alpha,digit :: Parser Char alpha = satisfy isAlpha digit = satisfy isDigit string :: String -> Parser String string = mapM char infixr 5 +++ (+++) :: Parser a -> Parser a -> Parser a (+++) = mplus many, many1 :: Parser a -> Parser [a] many p = return [] +++ many1 p many1 p = liftM2 (:) p (many p) Today he gave us an assignment to introduce "a left-biased, or short-circuiting version of (+++)", called (<++). His hint was for us to consider the original implementation of (+++). When he first introduced +++ to us, this was the code he wrote, which I am going to call the original implementation: infixr 5 +++ (+++) :: Parser a -> Parser a -> Parser a p +++ q = Parser $ \s -> runParser p s ++ runParser q s I have been having tons of trouble since we were introduced to parsing and so it continues. I have tried/am considering two approaches. 1) Use the "original" implementation, as in p +++ q = Parser $ \s -> runParser p s ++ runParser q s 2) Use the final implementation, as in (+++) = mplus Here are my questions: 1) The module will not compile if I use the original implementation. The error: Not in scope: data constructor 'Parser'. It compiles fine using (+++) = mplus. What is wrong with using the original implementation that is avoided by using the final implementation? 2) How do I check if the first Parser returns anything? Is something like (not (isNothing (Parser $ \s -> runParser p s) on the right track? It seems like it should be easy but I have no idea. 3) Once I figure out how to check if the first Parser returns anything, if I am to base my code on the final implementation, would it be as easy as this?: -- if p returns something then p <++ q = mplus (Parser $ \s -> runParser p s) mzero -- else (<++) = mplus Best, Jeff

    Read the article

  • jquery data selector

    - by Tauren
    I need to select elements based on values stored in an element's .data() object. At a minimum, I'd like to select top-level data properties using selectors, perhaps like this: $('a').data("category","music"); $('a:data(category=music)'); Or perhaps the selector would be in regular attribute selector format: $('a[category=music]'); Or in attribute format, but with a specifier to indicate it is in .data(): $('a[:category=music]'); I've found James Padolsey's implementation to look simple, yet good. The selector formats above mirror methods shown on that page. There is also this Sizzle patch. For some reason, I recall reading a while back that jQuery 1.4 would include support for selectors on values in the jquery .data() object. However, now that I'm looking for it, I can't find it. Maybe it was just a feature request that I saw. Is there support for this and I'm just not seeing it? Ideally, I'd like to support sub-properties in data() using dot notation. Like this: $('a').data("user",{name: {first:"Tom",last:"Smith"},username: "tomsmith"}); $('a[:user.name.first=Tom]'); I also would like to support multiple data selectors, where only elements with ALL specified data selectors are found. The regular jquery multiple selector does an OR operation. For instance, $('a.big, a.small') selects a tags with either class big or small. I'm looking for an AND, perhaps like this: $('a').data("artist",{id: 3281, name: "Madonna"}); $('a').data("category","music"); $('a[:category=music && :artist.name=Madonna]'); Lastly, it would be great if comparison operators and regex features were available on data selectors. So $(a[:artist.id>5000]) would be possible. I realize I could probably do much of this using filter(), but it would be nice to have a simple selector format. What solutions are available to do this? Is James Padolsey's the best solution at this time? My concern is primarily in regards to performance, but also in the extra features like sub-property dot-notation and multiple data selectors. Are there other implementations that support these things or are better in some way?

    Read the article

  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and general best practice. In short, the body of my webpage will show data from my database in a table like format, with each table row showing similar data. For example: Name Age Position Date Joined Jon Smith 23 Striker 18th Mar 2005 John Doe 38 Defender 3rd Jan 1988 In terms of functionality, primarily I'd like to give the user the ability to edit the data and, after the edit, commit the edit to the database and refresh the view. The reason I want to refresh the view is because the data is date ordered and I will need to re-sort if the user edits a date field. My main question is what architecture / tools would be best suited to fulfil my requirements at a high level? From the research I have done so far my initial conclusions were: ADO.NET for data retrieval. This is something I have used before and feel comfortable with. I like the look of LINQ to SQL but don't want to make the learning curve any steeper for my first outing into MVC land just yet. Partial Views to create a template and then iterate through a datatable that I have pulled back from my database model. jQuery to allow the user to edit data in the table, error check edited data entries etc. Also, my initial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date sorted). Any thoughts on this? Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thoughts are that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, ability to sort by column using sort glyph in column headers etc.) and I don't really see any benefit to this over and above the solution I have outlined above. Again, is there reason to reconsider this view? Finally, when a user edits a date, I will need to refresh the view. In order to do this I had been reading about Html.RenderAction and this seemed like it may be a better option than using Partial Views as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction or have I misunderstood its usage? Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. Partial View vs. Html.RenderAction, when to use jQuery datagrid plug-in) but it feels like these issues are so intertwined that they need to be dealt with in context of each other. Thanks
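    To make the "edit a date, commit, then refresh the re-sorted view" flow concrete, here is a minimal controller sketch matching the partial-view-plus-jQuery approach described above. The model, repository interface, action name and partial view name are assumptions, not part of the original post:

        // Hypothetical controller action: jQuery posts the edited date, the action
        // commits it immediately and returns the re-sorted rows as a partial view
        // that the client swaps into the table.
        using System;
        using System.Linq;
        using System.Web.Mvc;

        public class Player
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public DateTime DateJoined { get; set; }
        }

        public interface IPlayerRepository // assumed ADO.NET-backed repository
        {
            void UpdateDateJoined(int id, DateTime dateJoined);
            IQueryable<Player> GetAll();
        }

        public class PlayersController : Controller
        {
            private readonly IPlayerRepository _repository;

            public PlayersController(IPlayerRepository repository)
            {
                _repository = repository;
            }

            [HttpPost]
            public ActionResult UpdateDateJoined(int id, DateTime dateJoined)
            {
                _repository.UpdateDateJoined(id, dateJoined);        // commit the edit straight away
                var rows = _repository.GetAll()
                                      .OrderBy(p => p.DateJoined);   // keep the date ordering
                return PartialView("_PlayerRows", rows);             // re-render just the table rows
            }
        }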

    Read the article

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data s

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. Requirements are: each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of flashcard files they want to subscribe to, URLs and imported flashcards being saved in IsolatedStorage the flashcards.txt file has the following simple format: title, then blocks of question/answers: Jim Smith's flashcards from English class 53-222, winter semester 2009 ==fla Das kann nicht sein. That can't be. ==fla Es sei denn, er kommt nicht. Unless he doesn't come. The user then makes public the URL to his flashcard file and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish and distribute the URL. The flashcard readers will then recognize it is a google document and perform the necessary screen scraping to get at the raw text. I have two technical questions about this approach: What is the best way to plan now for scalability issues: e.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published google docs? Each reader will have the ability to allow the person to test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the important flashcards themselves. What is a good pattern to allow these readers to share and synchronize this meta data, e.g. so when you are looking at a flashcard you can see that 5 other people have made corrections to it. The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard, the best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks. The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of database, lack of unique ids, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips can you share on the above two points?
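    On the first question (avoiding a 2MB download just to detect changes), one common trick is a HEAD request that only reads the Last-Modified header. The sketch below uses the plain .NET HttpWebRequest and is only an illustration of the idea: Silverlight's in-browser networking restricts HTTP verbs and headers, so in practice it may need the client HTTP stack or a small server-side proxy, and a published Google Doc may not expose a useful Last-Modified at all.

        // Sketch: ask the server for headers only and compare Last-Modified before
        // downloading the full flashcard file.
        using System;
        using System.Net;

        public static class FeedFreshness
        {
            public static DateTime? GetLastModified(Uri flashcardFileUrl)
            {
                var request = (HttpWebRequest)WebRequest.Create(flashcardFileUrl);
                request.Method = "HEAD"; // headers only, no body download

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    string header = response.Headers["Last-Modified"];
                    DateTime lastModified;
                    return DateTime.TryParse(header, out lastModified)
                        ? lastModified
                        : (DateTime?)null; // header missing or unparsable
                }
            }
        }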

    Read the article

  • soap client not working in php

    - by Jin Yong
    I tried to write code in php to call a web server to add client details; however, it doesn't seem to be working for me and displays the following error: Fatal error: Uncaught SoapFault exception: [HTTP] Could not connect to host in D:\www\web_server.php:15 Stack trace: #0 [internal function]: SoapClient->__doRequest(...) #1 [internal function]: SoapClient->__call('AddClient', Array) #2 D:\www\web_server.php(15): SoapClient->AddClient(Array) #3 {main} thrown in D:\www\web_server.php on line 15 Refer below for the code that I wrote in php: <s:element name="AddClient"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="username" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="password" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="clientRequest" type="tns:ClientRequest"/> </s:sequence> </s:complexType> </s:element> <s:complexType name="ClientRequest"> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="customerCode" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="customerFullName" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="ref" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="phoneNumber" type="s:string"/> <s:element minOccurs="0" maxOccurs="1" name="Date" type="s:string"/> </s:sequence> </s:complexType> <s:element name="AddClientResponse"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="AddClientResult" type="tns:clientResponse"/> <s:element minOccurs="0" maxOccurs="1" name="response" type="tns:ServiceResponse"/> </s:sequence> </s:complexType> </s:element> <s:complexType name="ClientResponse"> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="testNumber" type="s:string"/> </s:sequence> </s:complexType> <?php $client = new SoapClient($url); $result = $client->AddClient(array('username' => 'test','password'=>'testing','clientRequest'=>array('customerCode'=>'18743','customerFullName'=>'Gaby Smith','ref'=>'','phoneNumber'=>'0413496525','Date'=>'12/04/2013'))); echo $result->AddClientResponse; ?> Does anyone know where I went wrong in this code?

    Read the article

  • WPF MVVM: Convention over Configuration for ResourceDictionary ?

    - by Jeffrey Knight
    Update In the wiki spirit of StackOverflow, here's an update: I spiked Joe White's IValueConverter suggestion below. It works like a charm. I've written a "quickstart" example of this that automates the mapping of ViewModels-Views using some cheap string replacement. If no View is found to represent the ViewModel, it defaults to an "Under Construction" page. I'm dubbing this approach "WPF MVVM White" since it was Joe White's idea. Here are a couple of screenshots. The first image is a case where "[SomeControlName]ViewModel" has a corresponding "[SomeControlName]View", based on pure naming convention. The second is a case where the ViewModel doesn't have any views to represent it. No more ResourceDictionaries with long ViewModel to View mappings. It's pure naming convention now. I'm hosting a download of the project here: http://rootsilver.com/files/Mvvm.White.Quickstart.zip I'll follow up with a longer blog post walk through. Original Post I read Josh Smith's fantastic MSDN article on WPF MVVM over the weekend. It's destined to be a cult classic. It took me a while to wrap my head around the magic of asking WPF to render the ViewModel. It's like saying "Here's a class, WPF. Go figure out which UI to use to present it." For those who missed this magic, WPF can do this by looking up the View for the ViewModel in the ResourceDictionary mapping and pulling out the corresponding View. (Scroll down to Figure 10, Supplying a View.) The first thing that jumps out at me immediately is that there's already a strong naming convention of: classNameView ("View" suffix) classNameViewModel ("ViewModel" suffix) My question is: Since the ResourceDictionary can be manipulated programmatically, I'm wondering if anyone has managed to Regex.Replace the whole thing away, so the lookup is automatic, and any new View/ViewModels get resolved by virtue of their naming convention? [Edit] What I'm imagining is a hook/interception into ResourceDictionary. ... Also considering a method at startup that uses interop to pull out *View$ and *ViewModel$ class names to build the DataTemplate dictionary in code: //build list foreach .... String.Format("<DataTemplate DataType=\"{x:Type vm:{0} }\"><v:{1} /></DataTemplate>", ...)
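    For reference, a minimal sketch of the IValueConverter flavour of this idea (type and namespace names below are assumptions, not the actual "MVVM White" download): resolve the View type from the ViewModel type by string replacement, and fall back to an "under construction" element when nothing matches. A ContentControl in the shell can then bind its Content through this converter.

        // Hypothetical convention-based ViewModel -> View converter.
        using System;
        using System.Globalization;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;

        public class ViewModelToViewConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value == null) return null;

                // "MyApp.ViewModels.CustomerViewModel" -> "MyApp.Views.CustomerView"
                string viewTypeName = value.GetType().FullName
                    .Replace(".ViewModels.", ".Views.")
                    .Replace("ViewModel", "View");

                Type viewType = Type.GetType(viewTypeName);
                FrameworkElement view = viewType != null
                    ? (FrameworkElement)Activator.CreateInstance(viewType)
                    : new TextBlock { Text = "Under construction" };

                view.DataContext = value; // the ViewModel still drives the View
                return view;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }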

    Read the article

  • What are the Tags Around Default iPhone Address Book People Phone Number Labels?

    - by rnistuk
    My question concerns markup that surrounds some of the default phone number labels in the Person entries of the Contact list on the iPhone. I have created an iPhone contact list address book entry for a person, "John Smith" with the following phone number entries: Mobile (604) 123-4567 iPhone (778) 123-4567 Home (604) 789-4561 Work (604) 456-7891 Main (604) 789-1234 megaphone (234) 567-8990 Note that the first five labels are default labels provided by the Contacts application and the last label, "megaphone", is a custom label. I wrote the following method to retrieve and display the labels and phone numbers for each person in the address book: -(void)displayPhoneNumbersForAddressBook { ABAddressBookRef book = ABAddressBookCreate(); CFArrayRef people = ABAddressBookCopyArrayOfAllPeople(book); ABRecordRef record = CFArrayGetValueAtIndex(people, 0); ABMultiValueRef multi = ABRecordCopyValue(record, kABPersonPhoneProperty); NSLog(@"---------" ); NSLog(@"displayPhoneNumbersForAddressBook" ); CFStringRef label, phone; for (CFIndex i = 0; i < ABMultiValueGetCount(multi); ++i) { label = ABMultiValueCopyLabelAtIndex(multi, i); phone = ABMultiValueCopyValueAtIndex(multi, i); NSLog(@"label: \"%@\" number: \"%@\"", (NSString*)label, (NSString*)phone); CFRelease(label); CFRelease(phone); } NSLog(@"---------" ); CFRelease(multi); CFRelease(people); CFRelease(book); } and here is the output for the address book entry that I entered: 2010-03-08 13:24:28.789 test2m[2479:207] --------- 2010-03-08 13:24:28.789 test2m[2479:207] displayPhoneNumbersForAddressBook 2010-03-08 13:24:28.790 test2m[2479:207] label: "_$!<Mobile>!$_" number: "(604) 123-4567" 2010-03-08 13:24:28.790 test2m[2479:207] label: "iPhone" number: "(778) 123-4567" 2010-03-08 13:24:28.791 test2m[2479:207] label: "_$!<Home>!$_" number: "(604) 789-4561" 2010-03-08 13:24:28.791 test2m[2479:207] label: "_$!<Work>!$_" number: "(604) 456-7891" 2010-03-08 13:24:28.792 test2m[2479:207] label: "_$!<Main>!$_" number: "(604) 789-1234" 2010-03-08 13:24:28.792 test2m[2479:207] label: "megaphone" number: "(234) 567-8990" 2010-03-08 13:24:28.793 test2m[2479:207] --------- What are the markup characters _$!< and >!$_ surrounding most, save for iPhone, of the default labels for? Can you point me to where in the "Address Book Programming Guide for iPhone OS" I can find the information? Thank you for your help.

    Read the article

  • xslt help - my transform is not rendering correctly

    - by Hcabnettek
    Hi All, I'm trying to apply an xslt transformation using the following file. This is very basic, but I wanted to build off of this when I get it working correctly. <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl"> <xsl:template match="/"> <div> <h2>Station Inventory</h2> <hr/> <xsl:apply-templates/> </div> </xsl:template> Here is some xml I'm using for the source. <StationInventoryList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dummy-tmdd-address"> <StationInventory> <station-id>9940</station-id> <station-name>Zone 9940-SEB</station-name> <station-travel-direction>SEB</station-travel-direction> <detector-list> <detector> <detector-id>2910</detector-id> <detector-name>1999 West Smith Exit SEB</detector-name> </detector> <detector> <detector-id>9205</detector-id> <detector-name>CR-155 Exit SEB</detector-name> </detector> <detector> <detector-id>9710</detector-id> <detector-name>Pt of View SEB</detector-name> </detector> </detector-list> </StationInventory> </StationInventoryList> Any ideas what I'm doing wrong? The simple intent here is to make a list of stations, then make a list of detectors at a station. This is a small piece of the XML. It would have multiple StationInventory elements. I'm using the data as the source for an asp:xml control and the xslt file as the transformsource. var service = new InternalService(); var result = service.StationInventory(); invXml.DocumentContent = result; invXml.TransformSource = "StationInventory.xslt"; invXml.DataBind(); Any tips are of course appreciated. Have a terrific weekend. Cheers, ~ck

    Read the article

  • Why is two-way binding in silverlight not working?

    - by Edward Tanguay
    According to how Silverlight TwoWay binding works, when I change the data in the FirstName field, it should change the value in CheckFirstName field. Why is this not the case? ANSWER: Thank you Jeff, that was it, for others: here is the full solution with downloadable code. XAML: <StackPanel> <Grid x:Name="GridCustomerDetails"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="*"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="300"/> </Grid.ColumnDefinitions> <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="0" Grid.Column="0">First Name:</TextBlock> <TextBox Margin="10" Grid.Row="0" Grid.Column="1" Text="{Binding FirstName, Mode=TwoWay}"/> <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="1" Grid.Column="0">Last Name:</TextBlock> <TextBox Margin="10" Grid.Row="1" Grid.Column="1" Text="{Binding LastName}"/> <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="2" Grid.Column="0">Address:</TextBlock> <TextBox Margin="10" Grid.Row="2" Grid.Column="1" Text="{Binding Address}"/> </Grid> <Border Background="Tan" Margin="10"> <TextBlock x:Name="CheckFirstName"/> </Border> </StackPanel> Code behind: public Page() { InitializeComponent(); Customer customer = new Customer(); customer.FirstName = "Jim"; customer.LastName = "Taylor"; customer.Address = "72384 South Northern Blvd."; GridCustomerDetails.DataContext = customer; Customer customerOutput = (Customer)GridCustomerDetails.DataContext; CheckFirstName.Text = customer.FirstName; }
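    For context, the usual reason a TwoWay binding updates the object but nothing on screen follows is that the source class raises no change notifications. Below is a sketch of Customer with INotifyPropertyChanged, assuming the linked solution follows this standard pattern and that CheckFirstName is bound to FirstName rather than assigned once in the constructor:

        // Customer that notifies bound controls when FirstName changes.
        using System.ComponentModel;

        public class Customer : INotifyPropertyChanged
        {
            private string _firstName;

            public string FirstName
            {
                get { return _firstName; }
                set
                {
                    if (_firstName == value) return;
                    _firstName = value;
                    OnPropertyChanged("FirstName"); // tells the bound TextBox/TextBlock to refresh
                }
            }

            // LastName and Address would follow the same pattern if they need to update the UI.
            public string LastName { get; set; }
            public string Address { get; set; }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }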

    Read the article

  • XSLT: Display unique rows of filtered XML recordset

    - by Chris G.
    I've got a recordset that I'm filtering on a particular field (i.e. Manager ="Frannklin"). Now I'd like to group the results of that filtered recordset based on another field (Client). I can't seem to get Muenchian grouping to work right. Any thoughts? TIA! CG My filter looks like this: <xsl:key name="k1" match="Row" use="@Manager"/> <xsl:param name="dvt_filterval">Frannklin</xsl:param> <xsl:variable name="Rows" select="/dsQueryResponse/Rows/Row" /> <xsl:variable name="FilteredRowsAttr" select="$Rows[normalize-space(@*[name()=$FieldNameNoAtSign])=$dvt_filterval ]" /> Templates <xsl:apply-templates select="$FilteredRowsAttr[generate-id() = generate-id(key('k1',@Manager))]" mode="g1000a"> </xsl:apply-templates> <xsl:template match="Row" mode="g1000a"> Client: <xsl:value-of select="@Client"/> </xsl:template> Results I'm getting Client: Beta Client: Beta Client: Beta Client: Gamma Client: Delta Results I want Client: Beta Client: Gamma Client: Delta Sample recordset <dsQueryResponse> <Rows> <Row Manager="Smith" Client="Alpha " Project_x0020_Name="Annapolis" PM_x0023_="00123" /> <Row Manager="Ford" Client="Alpha " Project_x0020_Name="Brown" PM_x0023_="00124" /> <Row Manager="Cronkite" Client="Beta " Project_x0020_Name="Gannon" PM_x0023_="00129" /> <Row Manager="Clinton, Bill" Client="Beta " Project_x0020_Name="Harvard" PM_x0023_="00130" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Irving" PM_x0023_="00131" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Jakarta" PM_x0023_="00132" /> <Row Manager="Frannklin" Client="Beta " Project_x0020_Name="Vassar" PM_x0023_="00135" /> <Row Manager="Jefferson" Client="Gamma " Project_x0020_Name="Stamford" PM_x0023_="00141" /> <Row Manager="Cronkite" Client="Gamma " Project_x0020_Name="Tufts" PM_x0023_="00142" /> <Row Manager="Frannklin" Client="Gamma " Project_x0020_Name="UCLA" PM_x0023_="00143" /> <Row Manager="Jefferson" Client="Gamma " Project_x0020_Name="Villanova" PM_x0023_="00144" /> <Row Manager="Carter" Client="Delta " Project_x0020_Name="Drexel" PM_x0023_="00150" /> <Row Manager="Clinton" Client="Delta " Project_x0020_Name="Iona" PM_x0023_="00151" /> <Row Manager="Frannklin" Client="Delta " Project_x0020_Name="Temple" PM_x0023_="00152" /> <Row Manager="Ford" Client="Epsilon " Project_x0020_Name="UNC" PM_x0023_="00157" /> <Row Manager="Clinton" Client="Epsilon " Project_x0020_Name="Berkley" PM_x0023_="00158" /> </Rows> </dsQueryResponse>

    Read the article

  • ASP NET MVC (loading data from database)

    - by rah.deex
    Hi experts, it's me again... I have some code like this: using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace MvcGridSample.Models { public class CustomerService { private List<SVC> Customers { get { List<SVC> customers; if (HttpContext.Current.Session["Customers"] != null) { customers = (List<SVC>) HttpContext.Current.Session["Customers"]; } else { //Create customer data store and save in session customers = new List<SVC>(); InitCustomerData(customers); HttpContext.Current.Session["Customers"] = customers; } return customers; } } public SVC GetByID(int customerID) { return this.Customers.AsQueryable().First(customer => customer.seq_ == customerID); } public IQueryable<SVC> GetQueryable() { return this.Customers.AsQueryable(); } public void Add(SVC customer) { this.Customers.Add(customer); } public void Update(SVC customer) { } public void Delete(int customerID) { this.Customers.RemoveAll(customer => customer.seq_ == customerID); } private void InitCustomerData(List<SVC> customers) { customers.Add(new SVC { ID = 1, FirstName = "John", LastName = "Doe", Phone = "1111111111", Email = "[email protected]", OrdersPlaced = 5, DateOfLastOrder = DateTime.Parse("5/3/2007") }); customers.Add(new SVC { ID = 2, FirstName = "Jane", LastName = "Doe", Phone = "2222222222", Email = "[email protected]", OrdersPlaced = 3, DateOfLastOrder = DateTime.Parse("4/5/2008") }); customers.Add(new SVC { ID = 3, FirstName = "John", LastName = "Smith", Phone = "3333333333", Email = "[email protected]", OrdersPlaced = 25, DateOfLastOrder = DateTime.Parse("4/5/2000") }); customers.Add(new SVC { ID = 4, FirstName = "Eddie", LastName = "Murphy", Phone = "4444444444", Email = "[email protected]", OrdersPlaced = 1, DateOfLastOrder = DateTime.Parse("4/5/2003") }); customers.Add(new SVC { ID = 5, FirstName = "Ziggie", LastName = "Ziggler", Phone = null, Email = "[email protected]", OrdersPlaced = 0, DateOfLastOrder = null }); customers.Add(new SVC { ID = 6, FirstName = "Michael", LastName = "J", Phone = "666666666", Email = "[email protected]", OrdersPlaced = 5, DateOfLastOrder = DateTime.Parse("12/3/2007") }); } } } This code is an example I got from the internet. In that case, the data is created and saved in the session before it is shown. What I want to ask is: how do I load the data from a database table instead? I'm a newbie here, please help :) Thanks in advance.
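    A minimal sketch of the part being asked about: replace InitCustomerData with an ADO.NET query so the list comes from a database table. The connection string name, table name and column names below are assumptions; adjust them to the real schema.

        // Loads customers from a table instead of hard-coding them.
        using System.Collections.Generic;
        using System.Configuration;
        using System.Data.SqlClient;

        public class CustomerRepository
        {
            public List<SVC> LoadCustomers()
            {
                var customers = new List<SVC>();
                string connectionString =
                    ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT ID, FirstName, LastName, Phone, Email FROM Customers", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            customers.Add(new SVC
                            {
                                ID = reader.GetInt32(0),
                                FirstName = reader.GetString(1),
                                LastName = reader.GetString(2),
                                Phone = reader.IsDBNull(3) ? null : reader.GetString(3),
                                Email = reader.GetString(4)
                            });
                        }
                    }
                }
                return customers;
            }
        }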

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?

    Read the article

  • help merging perl code routines together for file processing

    - by jdamae
    I need some perl help in putting these (2) processes/code to work together. I was able to get them working individually to test, but I need help bringing them together especially with using the loop constructs. I'm not sure if I should go with foreach..anyways the code is below. Also, any best practices would be great too as I'm learning this language. Thanks for your help. Here's the process flow I am looking for: -read a directory -look for a particular file -use the file name to strip out some key information to create a newly processed file -process the input file -create the newly processed file for each input file read (if i read in 10, I create 10 new files) Sample Recs: col1,col2,col3,col4,col5 [email protected],[email protected],8,2009-09-24 21:00:46,1 [email protected],[email protected],16,2007-08-18 22:53:12,33 [email protected],[email protected],16,2007-08-18 23:41:23,33 Here's my test code: Target Filetype: `/backups/test/foo101.name.aue-foo_p002.20110124.csv` Part 1: my $target_dir = "/backups/test/"; opendir my $dh, $target_dir or die "can't opendir $target_dir: $!"; while (defined(my $file = readdir($dh))) { next if ($file =~ /^\.+$/); #Get filename attributes if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) { print "$1\n"; print "$2\n"; print "$3\n"; } print "$file\n"; } Part 2: use strict; use Digest::MD5 qw(md5_hex); #Create new file open (NEWFILE, ">/backups/processed/foo$1.name.$2-foo_p$3.out") || die "cannot create file"; my $data = ''; my $line1 = <>; chomp $line1; my @heading = split /,/, $line1; my ($sep1, $sep2, $eorec) = ( "^A", "^E", "^D"); while (<>) { my $digest = md5_hex($data); chomp; my (@values) = split /,/; my $extra = "__mykey__$sep1$digest$sep2" ; $extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0..scalar(@values)); $data .= "$extra$eorec"; print NEWFILE "$data"; } #print $data; close (NEWFILE);

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an html input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of wikipedia page titles. For this service obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It currently is running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32 bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations? Edits: @Gandalf: For our use case it is important the the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation to this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read on the ternary search trees to see if it could be of use.

    Read the article

  • Why does the following Java Script fail to load XML?

    - by Pavitar
    I have taken an example taught to us in class,wherein a javascript is used to retrieve data from the XML,but it doesn't work.Please help I have also added the XML file below. <html> <head> <title>Customer Info</title> <script language="javascript"> var xmlDoc = 0; var xmlObj = 0; function loadCustomers(){ xmlDoc = new ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async = "false"; xmlDoc.onreadystatechange = displayCustomers; xmlDoc.load("customers.xml"); } function displayCustomers(){ if(xmlDoc.readyState == 4){ xmlObj = xmlDoc.documentElement; var len = xmlObj.childNodes.length; for(i = 0; i < len; i++){ var nodeElement = xmlObj.childNodes[i]; document.write(nodeElement.attributes[0].value); for(j = 0; j < nodeElement.childNodes.length; j++){ document.write(" " + nodeElement.childNodes[j].firstChild.nodeValue); } document.write("<br/>"); } } } </script> </head> <body> <form> <input type="button" value="Load XML" onClick="loadCustomers()"> </form> </body> </html> XML(customers.xml) <?xml version="1.0" encoding="UTF-8"?> <customers> <customer custid="CU101"> <pwd>PW101</pwd> <email>[email protected]</email> </customer> <customer custid="CU102"> <pwd>PW102</pwd> <email>[email protected]</email> </customer> <customer custid="CU103"> <pwd>PW103</pwd> <email>[email protected]</email> </customer> <customer custid="CU104"> <pwd>PW104</pwd> <email>[email protected]</email> </customer> </customers>

    Read the article

  • due at midnight - program compiles but has logic error(s)

    - by Leslie Laraia
    not sure why this program isn't working. it compiles, but doesn't provide the expected output. the input file is basically just this: Smith 80000 Jones 100000 Scott 75000 Washington 110000 Duffy 125000 Jacobs 67000 Here is the program: import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner; /** * * @author Leslie */ public class Election { /** * @param args the command line arguments */ public static void main(String[] args) throws FileNotFoundException { // TODO code application logic here File inputFile = new File("C:\\Users\\Leslie\\Desktop\\votes.txt"); Scanner in = new Scanner(inputFile); int x = 0; String line = ""; Scanner lineScanner = new Scanner(line); line = in.nextLine(); while (in.hasNextLine()) { line = in.nextLine(); x++; } String[] senatorName = new String[x]; int[] votenumber = new int[x]; double[] votepercent = new double[x]; System.out.printf("%44s", "Election Results for State Senator"); System.out.println(); System.out.printf("%-22s", "Candidate"); //Prints the column headings to the screen System.out.printf("%22s", "Votes Received"); System.out.printf("%22s", "%of Total Votes"); int i; for(i=0; i<x; i++) { while(in.hasNextLine()) { line = in.nextLine(); String candidateName = lineScanner.next(); String candidate = candidateName.trim(); senatorName[i] = candidate; int votevalue = lineScanner.nextInt(); votenumber[i] = votevalue; } } votepercent = percentages(votenumber, x); for (i = 0; i < x; i++) { System.out.println(); System.out.printf("%-22s", senatorName[i]); System.out.printf("%22d", votenumber[i]); System.out.printf("%22.2f", votepercent[i]); System.out.println(); } } public static double [] percentages(int[] votenumber, int z) { double [] percentage = new double [z]; double total = 0; for (double element : votenumber) { total = total + element; } for(int i=0; i < votenumber.length; i++) { int y = votenumber[i]; percentage[i] = (y/total) * 100; } return percentage; } }

    Read the article

  • .NET Extension Objects with XSLT -- how to iterate over a collection?

    - by Pandincus
    Help me, Stackoverflow! I have a simple .NET 3.5 console app that reads some data and sends emails. I'm representing the email format in an XSLT stylesheet so that we can easily change the wording of the email without needing to recompile the app. We're using Extension Objects to pass data to the XSLT when we apply the transformation: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl" xmlns:EmailNotification="ext:EmailNotification"> -- this way, we can have statements like: <p> Dear <xsl:value-of select="EmailNotification:get_FullName()" />: </p> The above works fine. I pass the object via code like this (some irrelevant code omitted for brevity): // purely an example structure public struct EmailNotification { public string FullName { get; set; } } // Somewhere in some method ... var notification = new Notification("John Smith"); // ... XsltArgumentList xslArgs = new XsltArgumentList(); xslArgs.AddExtensionObject("ext:EmailNotification", notification); // ... // The part where it breaks! (This is where we do the transformation) xslt.Transform(fakeXMLDocument.CreateNavigator(), xslArgs, XmlWriter.Create(transformedXMLString)); So, all of the above code works. However, I wanted to get a little fancy (always my downfall) and pass a collection, so that I could do something like this: <p>The following accounts need to be verified:</p> <xsl:for-each select="EmailNotification:get_SomeCollection()"> <ul> <li> <xsl:value-of select="@SomeAttribute" /> </li> </ul> <xsl:for-each> When I pass the collection in the extension object and attempt to transform, I get the following error: "Extension function parameters or return values which have Clr type 'String[]' are not supported." or List, or IEnumerable, or whatever I try to pass in. So, my questions are: How can I pass in a collection to my XSLT? What do I put for the xsl:value-of select="" inside the xsl:for-each ? Is what I am trying to do impossible?
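    The short version of the answer being hinted at: the XSLT processor cannot marshal a CLR String[] or List<T> return value, but it will accept an XPathNodeIterator, which behaves as a node-set inside the stylesheet. A hedged sketch follows; the element and attribute names are assumptions chosen to match the XSLT fragment above.

        // Expose the collection to XSLT as a node-set instead of a CLR array.
        using System.Xml;
        using System.Xml.XPath;

        public struct EmailNotification
        {
            public string FullName { get; set; }
            public string[] Accounts { get; set; }

            // Callable from the stylesheet as EmailNotification:GetSomeCollection()
            public XPathNodeIterator GetSomeCollection()
            {
                var doc = new XmlDocument();
                XmlElement root = doc.CreateElement("accounts");
                doc.AppendChild(root);

                foreach (string account in Accounts)
                {
                    XmlElement item = doc.CreateElement("account");
                    item.SetAttribute("SomeAttribute", account);
                    root.AppendChild(item);
                }

                // The stylesheet can then do:
                //   <xsl:for-each select="EmailNotification:GetSomeCollection()">
                //     <xsl:value-of select="@SomeAttribute" />
                //   </xsl:for-each>
                return doc.CreateNavigator().Select("/accounts/account");
            }
        }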

    Read the article

  • Multiple column subselect in mysql 5 (5.1.42)

    - by rubber boots
    This one seems to be a simple problem, but I can't make it work in a single select or nested select. Retrieve the authors and (if any) advisers of a paper (article) into one row. In order to explain the problem, here are the two data tables (pseudo) papers (id, title, c_year) persons (id, firstname, lastname) plus a link table w/one extra attribute (pseudo): paper_person_roles( paper_id person_id act_role ENUM ('AUTHOR', 'ADVISER') ) This is basically a list of written papers (table: papers) and a list of staff and/or students (table: persons) An article may have (1,N) authors. An article may have (0,N) advisers. A person can be in 'AUTHOR' or 'ADVISER' role (but not at the same time). The application eventually puts out table rows containing the following entries: TH: || Paper_ID | Author(s) | Title | Adviser(s) | TD: || 21334 |John Doe, Jeff Tucker|Why the moon looks yellow|Brown, Rayleigh| ... My first approach was like: select/extract a full list of articles into the application, eg.SELECT q.id, q.title FROM papers AS q ORDER BY q.c_year and save the results of the query into an array (in the application). After this step, loop over the array of the returned information and retrieve authors and advisers (if any), via prepared statement (? is the paper's id) from the link table like:APPLICATION_LOOP(paper_ids in array) SELECT p.lastname, p.firstname, r.act_role FROM persons AS p, paper_person_roles AS r WHERE p.id=r.person_id AND r.paper_id = ? # The application does further processing from here (pseudo): foreach record from resulting records if record.act_role eq 'AUTHOR' then join to author_column if record.act_role eq 'ADVISER' then join to avdiser_column end print id, author_column, title, adviser_column APPLICATION_LOOP This works so far and gives the desired output. Would it make sense to put the computation back into the DB? I'm not very proficient in nontrivial SQL and can't find a solution with a single (combined or nested) select call. I tried sth. like SELECT q.title (CONCAT_WS(' ', (SELECT p.firstname, p.lastname AS aunames FROM persons AS p, paper_person_roles AS r WHERE q.id=r.paper_id AND r.act_role='AUTHOR') ) ) AS aulist FROM papers AS q, persons AS p, paper_person_roles AS r in several variations, but no luck ... Maybe there is some chance? Thanks in advance r.b.

    Read the article

  • jquery .get/.post not working on ie 7 or 8, works fine in ff

    - by Samutz
    I have basically this on a page: <script type="text/javascript"> function refresh_context() { $("#ajax-context").html("Searching..."); $.get("/ajax/ldap_search.php", {cn: $("#username").val()}, function(xml) { $("#ajax-context").html($("display", xml).text()); $("#context").val($("context", xml).text()); }, 'xml'); } $(document).ready(function() { $("#username").blur(refresh_context); }); </script> <input type="text" name="username" id="username" maxlength="255" value="" /> <input type="hidden" name="context" id="context" value=""/> <div id="ajax-context"></div> What it should do (and does fine on Firefox) is when you type a username in to the #username field, it will run /ajax/ldap_search.php?cn=$username, which searches our company's ldap for the username and returns it's raw context and a formatted version of the context like this: <result> <display>Staff -&gt; Accounting -&gt; John Smith</display> <context>cn=jsmith,ou=Accounting,ou=Staff,ou=Users,o=MyOrg</context> </result> The formatted version (display) goes to the div #ajax-context and goes to the hidden input #context. (Also, the - are actually - "& g t ;" (without spaces)). However, on IE the div stays stuck on "Searching..." and the hidden input value stays blank. I've tried both .get and .post and neither work. I'm sure it's failing on the .get because if I try this, I don't even get the alert: $.get("/ajax/ldap_search.php", {cn: $("#username").val()}, function() { alert("Check"); }); Also, IE doesn't give me any script errors. Edit: Added "$(document).ready(function() {", the .blur was already in it in my code, but I forgot to include that in my post. Edit 2: The request is being sent and apache2 is receiving it: 10.135.128.96 - - [01/May/2009:10:04:27 -0500] "GET /ajax/ldap_search.php?cn=i_typed_this_in_IE HTTP/1.1" 200 69

    Read the article

  • How to align Definition Lists in IE6 ?

    - by ellander
    I'm having a major headache trying to align some <dt> and <dd> elements in ie6. It looks fine in ie7 and firefox but the dt elements don't appear in ie6. Can anyone help? Here is the code.. <div id="listMembers"> <h3>Members</h3> <dl class="myDL"> <dt>Name</dt> <dd>John Smith</dd> <dt>Address</dt> <dd>the street</dd> ... </dl> <div id="listOptions"> <div> <table>...</table> </div> </div> </div> and the css:- DL.myDL { BORDER-RIGHT: black 2px outset; PADDING-RIGHT: 2px; BORDER-TOP: black 2px outset; DISPLAY: block; PADDING-LEFT: 2px; BACKGROUND: #ccbe99; PADDING-BOTTOM: 2px; BORDER-LEFT: black 2px outset; WIDTH: auto; PADDING-TOP: 2px; BORDER-BOTTOM: black 2px outset; FONT-FAMILY: "Trebuchet MS", Arial, sans-serif } DL.myDL DT { CLEAR: both; PADDING-RIGHT: 3px; DISPLAY: inline; FLOAT: left; WIDTH: 250px; TEXT-ALIGN: right } I basically want the dt text aligned to the right and the dd on the right hand side with left-aligned text. I reset the margin on all elements to be 0 before anything else in the css and the elements are within a div with position relative.

    Read the article

  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    Sharepoint 2007 MOSS with AD imported users. All servers are 2008. ***UPDATE More details in testing. This Sharepoint is in an AD Child domain (clients.mycompany.local), which is sub to the root of the AD tree (mycompany.local). The user is in the parent tree (as are half of the other functional users). I have elevated the user rights to Domain. In looking at the logs, it seems that the Sharepoint server is trying to authenticate them by querying the DC for the clients domain (which is the way it normally works and still works for all existing identically configured users). I think if I could force it to authenticate up to the top domain DC then it would be ok?? I have around 50 users; over the past 2 months, I have had a handful of the users suddenly unable to login to Sharepoint. When they login, they either get a blank screen or they are reprompted. These users are using accounts that have been used for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, exchange, LDAP). In the most recent case, last night I was forced to delete a user ("John smith") because of corruption. The original account name was jsmith. I deleted him from active directory, then deleted him from the profile list in Sharepoint Shared Services. I could not find a way to delete him from the Sharepoint user list, but I reran the import after recreating his account (renamed it too just to be sure, to "smithj"). At first, this did not work; the user could still access all other resources but Sharepoint. Then, some 30 minutes later it inexplicably started working. This morning, the user changed passwords, which immediately broke the login on Sharepoint again. Logs by request from matt b: Office SharePoint Server Date: 4/13/2010 2:00:00 PM Event ID: 7888 Task Category: Office Server General Level: Error Keywords: Classic User: N/A Computer: nb-portal-01.clients.netboundary.local Description: A runtime exception was detected. Details follow. Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) Technical Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex) at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML) at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed) at Microsoft.SharePoint.SPField.Update() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn) at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch() at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock) Log Name: Application Source: Office SharePoint Server Date: 4/13/2010 2:00:00 PM Event ID: 5553 Task Category: User Profiles Level: Error Keywords: Classic User: N/A Computer: nb-portal-01.clients.netboundary.local Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure: Getting Started/Overview A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts, I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage. (Automatic) Installation and the Image Packaging System (IPS) The brand new Image Packaging System (IPS) and the Automatic Installer (IPS), together with numerous other install/packaging/boot/patching features are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you need to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will tech you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first. Networking One of the first things to do after installing Solaris 11 (or any operating system for that matter), is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post: Solaris 11 manual IPv4 & IPv6 configuration right after the launch ceremony. Thanks, Tschokko! 
And Milek points out a long awaited networking feature in Solaris 11 called Solaris 11 - hostmodel, which I know for a fact that many customers have looked forward to: How to "bind" a Solaris 11 system to a specific gateway for specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you in: DTracing TCP Congestion Control how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts! DTrace And there we are: DTrace, the king of all observability tools. Long time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which contributed by Brendan himself! Security Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the Cloud is Security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one major entry point less to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: First by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: The Solaris X86 AESNI OpenSSL Engine will make sure you'll use your Intel's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: It comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is now excuse not to encrypt any more... Developers Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring! Desktop and Graphics Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released! Performance Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof? 
Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11! Send Me More Solaris 11 Launch Articles! These are just a few of the more interesting blog articles that came out around the Solaris 11 launch; I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far, and share your enthusiasm for Solaris 11! *Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!
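    As promised above, here is a minimal sketch of the two security features mentioned in Darren's posts. Everything here is a made-up example (the zone name, dataset, and paths are not from his articles), but the two key knobs are real: the ZFS encryption property and the zone's file-mac-profile setting.

    $ pfexec zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/securezones   # encrypted dataset; prompts for a passphrase
    $ pfexec zonecfg -z webzone
    zonecfg:webzone> create
    zonecfg:webzone> set zonepath=/export/securezones/webzone
    zonecfg:webzone> set file-mac-profile=fixed-configuration   # makes the running zone immutable
    zonecfg:webzone> commit
    zonecfg:webzone> exit
    $ pfexec zoneadm -z webzone install

    With file-mac-profile set to fixed-configuration, even the zone's own root user cannot modify system binaries or configuration while the zone is running, and the zone's data lives on an encrypted dataset.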

    Read the article

  • Interesting articles and blogs on SPARC T4

    - by mv
    Interesting articles and blogs on SPARC T4 processor. I have consolidated all the interesting information I could get on the SPARC T4 processor and its hardware cryptographic capabilities. Hope it's useful. 1. Advantages of the SPARC T4 processor. The most important points in this T4 announcement are: "The SPARC T4 processor was designed from the ground up for high speed security and has a cryptographic stream processing unit (SPU) integrated directly into each processor core. These accelerators support 16 industry standard security ciphers and enable high speed encryption at rates 3 to 5 times that of competing processors. By integrating encryption capabilities directly inside the instruction pipeline, the SPARC T4 processor eliminates the performance and cost barriers typically associated with secure computing and makes it possible to deliver high security levels without impacting the user experience." The data sheet has more details on these: "New on-chip Encryption Instruction Accelerators with direct non-privileged support for 16 industry-standard cryptographic algorithms plus random number generation in each of the eight cores: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, Kasumi, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512" I ran the "isainfo -v" command on a Solaris 11 SPARC T4-1 system. It shows the new instructions as expected:
    $ isainfo -v
    64-bit sparcv9 applications
            crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
    32-bit sparc applications
            crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32
    2. Dan Anderson's blog has some interesting points about how these can be used: "New T4 crypto instructions include: aes_kexpand0, aes_kexpand1, aes_kexpand2, aes_eround01, aes_eround23, aes_eround01_l, aes_eround_23_l, aes_dround01, aes_dround23, aes_dround01_l, aes_dround_23_l. Having SPARC T4 hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris on a SPARC T4. It is used internally in the kernel through kernel crypto modules. It is available in user space through the PKCS#11 library." 3. Dan's blog post Where's the Crypto Libraries? Although it was written in 2009, it is still very useful: "Here's a brief tour of the major crypto libraries shown in the digraph: The libpkcs11 library contains the PKCS#11 API (C_*() functions, such as C_Initialize()). That in turn calls library pkcs11_softtoken or pkcs11_kernel, for userland or kernel crypto providers. The latter is used mostly for hardware-assisted cryptography (such as n2cp for Niagara2 SPARC processors), as that is performed more efficiently in kernel space with the "kCF" module (Kernel Crypto Framework). Additionally, for Solaris 10, strong crypto algorithms were split off in separate libraries, pkcs11_softtoken_extra. libcryptoutil contains low-level utility functions to help implement cryptography. libsoftcrypto (OpenSolaris and Solaris Nevada only) implements several symmetric-key crypto algorithms in software, such as AES, RC4, and DES3, and the bignum library (used for RSA). libmd implements MD5, SHA, and SHA2 message digest algorithms." 4. Difference between T3 and T4. The diagram in this blog is good and self-explanatory. 
Jeff's blog also highlights the differences: "The T4 servers have improved crypto acceleration, described at https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine. It is "just built in" so administrators no longer have to assign crypto accelerator units to domains - it "just happens". Every physical or virtual CPU on a SPARC-T4 has full access to hardware based crypto acceleration at all times. .... For completeness sake, it's worth noting that the T4 adds more crypto algorithms, and accelerates Camelia, CRC32c, and more SHA-x." 5. About performance counters. In this blog, performance counters are explained: "Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp, there is no visibility of crypto hardware with kstats or cryptoadm. T4 does provide hardware counters for crypto operations. You can see these using cpustat: cpustat -c pic0=Instr_FGU_crypto 5 You can check the general crypto support of the hardware and OS with the command "isainfo -v". Since T4 crypto's implementation now allows direct userland access, there are no "crypto units" visible to cryptoadm." For more details, refer to Martin's blog as well. 6. How to turn off SPARC T4 or Intel AES-NI crypto acceleration. I found this interesting blog from Darren about how to turn off SPARC T4 or Intel AES-NI crypto acceleration: "One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternate to this is having the application coded to call the getisax(2) system call and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so). The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present. This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. For SPARC T4: export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul" ... For Intel systems with AES-NI support: export LD_HWCAP="-aes"" Note that LD_HWCAP is explained in http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html: "LD_HWCAP, LD_HWCAP_32, and LD_HWCAP_64 - Identifies an alternative hardware capabilities value... A “-” prefix results in the capabilities that follow being removed from the alternative capabilities." 7. Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing. This whitepaper explains more details. It has DTrace scripts which may come in handy: "To ensure the hardware-assisted cryptographic acceleration is configured to use and working with the security scenarios, it is recommended to use the following Solaris DTrace script." 
#!/usr/sbin/dtrace -s

pid$1:libsoftcrypto:yf*:entry,
pid$1:libsoftcrypto:rsa*:entry,
pid$1:libmd:yf*:entry
{
        @ops[probefunc] = count();
}

tick-1sec
{
        printa(@ops);
        trunc(@ops);
}

Note that I have slightly modified the D script: it also covers RSA ("libsoftcrypto:rsa*:entry"), as per recommendations from Chi-Chang Lin, and it now uses one consistent aggregation name and pid argument, so the script takes the process ID to trace as its first argument (dtrace -s script.d <pid>). A short end-to-end verification sketch follows after the references. 8. References http://www.oracle.com/us/corporate/features/sparc-t4-announcement-494846.html http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-1-ds-487858.pdf https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine https://blogs.oracle.com/DanX/entry/where_s_the_crypto_libraries https://blogs.oracle.com/darren/entry/howto_turn_off_sparc_t4 http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html https://blogs.oracle.com/hardware/entry/unleash_the_power_of_cryptography https://blogs.oracle.com/cmt/entry/t4_crypto_cheat_sheet https://blogs.oracle.com/martinm/entry/t4_performance_counters_explained https://blogs.oracle.com/jsavit/entry/no_mau_required_on_a http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-business-wp-524472.pdf
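    To tie items 5, 6, and 7 together, here is a hedged sketch of how one might verify that T4 hardware acceleration is actually being used. The file name and sizes are arbitrary, and it assumes digest(1) goes through libmd/libsoftcrypto as described in items 3 and 6; treat it as an illustration rather than a benchmarking recipe:

    $ mkfile 256m /tmp/testfile                                          # create some test input
    $ ptime digest -a sha256 /tmp/testfile                               # default run: hardware-assisted SHA-256 on a T4
    $ LD_HWCAP="-sha256 -sha512" ptime digest -a sha256 /tmp/testfile    # same run with the hardware capability masked out (item 6)
    $ pfexec cpustat -c pic0=Instr_FGU_crypto 1 10                       # the crypto instruction counter (item 5) should only tick during the first run

    The difference in the ptime output between the two digest runs gives a rough feel for the speed-up, and the D script from item 7 can be pointed at any long-running process (dtrace -s script.d <pid>) to confirm which yf*/rsa* routines are firing.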

    Read the article

  • Life Technologies: Making Life Easier to Manage

    - by Michael Snow
    When we’re thinking about customer engagement, we’re acutely aware of all the forces at play competing for our customer’s attention. Solutions that make life easier for our customers draw attention to themselves. We tend to engage more when there is a distinct benefit and we can take a deep breath and accept that there is hope in the world and everything isn’t designed to frustrate us and make our lives miserable. (sigh…) When products are designed to automate processes that were consuming hours of our time with no relief in sight, they deserve to be recognized. One of our recent Oracle Fusion Middleware Innovation Award winners in the WebCenter category, Life Technologies, has posted a video promoting their “award winning” solution. The Oracle Innovation Awards are part of the overall Oracle Excellence awards given to customers for innovation with Oracle products. More info here. Their award nomination included this description: Life Technologies delivered the My Life Service Portal as part of a larger Digital Hub strategy. This Portal is the first of its kind in the biotechnology service providing industry. The Portal provides access to Life Technologies' cloud-based service monitoring system where all customer-deployed instruments can be remotely monitored and proactively repaired. The portal provides alerts from these cloud-based monitoring services directly to the customer and to Life Technologies Field Engineers. The Portal provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs. This portal not only provides benefits for Life Technologies' internal sales and service teams but also gives customers a central place to track all pertinent instrument information, including: instrument service history, instrument status and previous activities, instrument performance analytics, planned service visits, warranty/contract information, discussion forums, social networks for lab management and collaboration, alerts and notifications on all of the above, team scheduling for instrument usage, and promotion of optional reagents required to keep instruments performing. From their website: The Life Technologies Instruments & Services Portal Helps You Save Time and Gain Peace of Mind. Introducing the new, award-winning, free online tool that enables easier management of your instrument use and care, faster response to requests for service or service quotes, and instant sharing of key instrument and service information with your colleagues. Now – this in itself is obviously beneficial for their customers, who were previously burdened with having to do all of these tasks separately, manually and inconsistently. Now – all in one place and free to their customers – a portal that ties it all together. 
They have now built the platform to give their customers yet another reason to do business with them – the headline on their product page says it all: “Life is now easier to manage - All your instrument use and care in one place – the no-cost, no-hassle Instruments and Services Portal.” Of course – it’s very convenient that the company name includes “Life” and they can now also promote to their clients and prospects that doing business with them is easy and their sophisticated lab equipment is easy to manage. In an industry full of PhD’s – “Easy” isn’t usually the first word that comes to mind, but Life Technologies has now tied the word to their brand in a very eloquent way. Between our work lives and family or personal lives, getting any mono-focused minutes of dedicated attention has become such a rare occurrence in our current era of multi-tasking that those moments of focus are highly prized. So – when something is done really well – so well that it becomes captivating and urges us to share it – I take notice and dig deeper, and most of the time I discover other gems not so hidden below the surface. And then I share with those I know would enjoy and understand. In the spirit of full disclosure, I must admit here that the first person I shared the videos below with was my daughter. She’s in her senior year of high school in the midst of her college search. She’s passionate about her academics and has already decided that she wants to study Neuroscience in college and, like her mother, will be in for the long haul to a PhD eventually. In a summer science program at Smith College two summers ago she sent me the now-famous family text: “I just dissected a sheep’s brain – wicked cool!” This was followed by an equally memorable text this past summer during a research mentorship in Neuroscience at UConn: “Just sliced up some rat brain. Reminded me of a deli slicer at the supermarket… sorry I forgot to call last night…” So… needless to say – I knew I had an audience that would enjoy and understand these videos, which are now being shared among her science classmates and faculty. And evidently, so does Life Technologies! They’ve done a great job on these, making them fun and something that will easily be shared among their customers’ social networks. They’ve created a neuro-archetypal character, “Ph.Diddy”, and know that their world of clients in academics, research, and other institutions would understand and enjoy the “edutainment” value in this series of videos on their YouTube channel, which pokes fun at the stereotypes while also promoting their products. They use their Facebook page for additional engagement with their clients and as another venue to promote these videos. Enjoy this one as well! More to be found here: http://www.youtube.com/lifetechnologies Stay tuned to this Oracle WebCenter blog channel. Tomorrow we'll be taking a look at another winner of the Innovation Awards, LADWP - helping to keep the citizens of Los Angeles engaged with their Water and Power provider.

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95  | Next Page >