Search Results

Search found 1168 results on 47 pages for 'jim fred'.


  • What exactly is the danger of an uninitialized pointer in C?

    - by akh2103
    I am trying to get a handle on C as I work my way through Trevor Jim's "Cyclone: A safe dialect of C" for a PL class. Jim and his co-authors are trying to make a safe version of C, so they eliminate uninitialized pointers in their language. Googling around a bit, it seems that uninitialized pointers point to random locations in memory, and that this alone makes them unsafe: if you dereference an uninitialized pointer, you jump to an unsafe part of memory. Period. But the way the paper talks about them seems to imply that it is more complex. It cites the following code and explains that when the function FrmGetObjectIndex dereferences f, it isn't accessing a valid pointer, but rather an unpredictable address: whatever was on the stack when the space for f was allocated. What does "whatever was on the stack when the space for f was allocated" mean? Are uninitialized pointers initialized to random locations in memory by default? Or does their "random" behavior come from the memory allocated for these pointers being filled with strange values (which are then dereferenced) left behind by earlier activity on the stack?

        Form *f;
        switch (event->eType) {
            case frmOpenEvent:
                f = FrmGetActiveForm();
                ...
            case ctlSelectEvent:
                i = FrmGetObjectIndex(f, field);
                ...
        }
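
    The second reading is the right one, and a minimal C sketch (my own illustration, not the Palm OS code from the paper) makes it concrete: a local pointer occupies a slot on the stack, and until it is assigned, it simply holds whatever bytes a previous call left in that slot.

        #include <stdio.h>

        static void leave_garbage(void) {
            /* Write a recognizable value into a stack slot, then return.
               The bytes stay behind in the dead stack frame. */
            volatile long x = 0x12345678;
            (void)x;
        }

        static void read_uninitialized(void) {
            int *p; /* never assigned: p contains whatever bytes happen to
                       occupy its stack slot, possibly leave_garbage's */
            printf("p = %p\n", (void *)p); /* undefined behavior: even
                       reading an indeterminate pointer value is UB */
        }

        int main(void) {
            leave_garbage();
            read_uninitialized();
            return 0;
        }

    So nothing "initializes" the pointer to a random location; the randomness is just leftover stack contents, which is exactly what makes dereferencing it unpredictable.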

    Read the article

  • Virgin STI Help

    - by Mutuelinvestor
    I am working on a horse racing application and I'm trying to utilize STI to model a horse's connections. A horse's connections comprise his owner, trainer, and jockey. Over time, connections can change for a variety of reasons: the horse is sold to another owner; the owner switches trainers or jockeys; the horse is claimed by a new owner. As it stands now, I have modeled this with the following tables: horses, connections (join table), and stakeholders (stakeholder has three subclasses: jockey, trainer & owner). Here are my classes and associations:

        class Horse < ActiveRecord::Base
          has_one :connection
          has_one :owner_stakeholder, :through => :connection
          has_one :jockey_stakeholder, :through => :connection
          has_one :trainer_stakeholder, :through => :connection
        end

        class Connection < ActiveRecord::Base
          belongs_to :horse
          belongs_to :owner_stakeholder
          belongs_to :jockey_stakeholder
          belongs_to :trainer_stakeholder
        end

        class Stakeholder < ActiveRecord::Base
          has_many :connections
          has_many :horses, :through => :connections
        end

        class Owner < Stakeholder
          # Owner specific code goes here.
        end

        class Jockey < Stakeholder
          # Jockey specific code goes here.
        end

        class Trainer < Stakeholder
          # Trainer specific code goes here.
        end

    On the database end, I have inserted a Type column in the connections table. Have I modeled this correctly? Is there a better/more elegant approach? Thanks in advance for your feedback. Jim
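
    As a side note on wiring the join model to the STI subclasses, here is a hedged Ruby sketch (column names like owner_id are assumptions, not from the question) that makes each association's target class explicit, so Rails knows which subclass each foreign key refers to:

        class Connection < ActiveRecord::Base
          belongs_to :horse
          # :class_name points each association at its STI subclass
          belongs_to :owner,   :class_name => 'Owner',   :foreign_key => 'owner_id'
          belongs_to :trainer, :class_name => 'Trainer', :foreign_key => 'trainer_id'
          belongs_to :jockey,  :class_name => 'Jockey',  :foreign_key => 'jockey_id'
        end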

    Read the article

  • Select * from 'many to many' SQL relationship

    - by Rampant Creative Group
    I'm still learning SQL and my brain is having a hard time with this one. Say I have 3 tables: teams, players, and teams_players as my link table. All I want to do is run a query to get each team and the players on them. I tried this:

        SELECT *
        FROM teams
        INNER JOIN teams_players ON teams.id = teams_players.team_id
        INNER JOIN players ON teams_players.player_id = players.id

    But it returned a separate row for each player on each team. Is JOIN the right way to do it, or should I be doing something else?

    Edit: Ok, so from what I'm hearing, this isn't necessarily a bad way to do it. I'll just have to group the data by team while I'm doing my loop. I have not yet tried the modified SQL statements provided, but I will today and get back to you. To answer the question about structure: I guess I wasn't thinking about the returned row structure, which is part of what led to my confusion. In this particular case, each team is limited to 4 players (or fewer), so the structure that would be helpful to me is something like the following:

        teams.id, teams.name, players.id, players.name, players.id, players.name, players.id, players.name, players.id, players.name
        1, Team ABC, 1, Jim, 2, Bob, 3, Ned, 4, Roy
        2, Team XYZ, 2, Bob, 3, Ned, 5, Ralph, 6, Tom
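
    For what it's worth, if the goal is literally one row per team, here is a hedged sketch (MySQL-specific, since GROUP_CONCAT is a MySQL function) that collapses each team's roster into a single column rather than repeating id/name pairs:

        -- One row per team; roster is a comma-separated list of player names.
        SELECT teams.id,
               teams.name,
               GROUP_CONCAT(players.name ORDER BY players.name SEPARATOR ', ') AS roster
        FROM teams
        INNER JOIN teams_players ON teams.id = teams_players.team_id
        INNER JOIN players ON teams_players.player_id = players.id
        GROUP BY teams.id, teams.name;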

    Read the article

  • Have to log in twice. PHP sessions and login troubles with Chrome and Opera.

    - by Robert
    The problem I am encountering is that for my login form I have to log in twice for the session to register properly, but only in Chrome (my version is 4.0.249.89) and Opera (my version is 10.10). Here is the stripped-down code that I am testing on:

    Login Page:

        session_start();
        $_SESSION['user_id'] = 8;
        $_SESSION['user_name'] = 'Jim';
        session_write_close();
        header('Location: http://www.my-domain-name.com/');
        exit();

    Home Page:

        session_start();
        if ( isset($_SESSION['user_id']) ) {
            echo "You are logged in!";
        } else {
            echo "You are NOT logged in!";
        }

    Logout Page:

        session_start();
        session_unset();
        session_destroy();
        header('Location: http://www.my-domain-name.com/');
        exit();

    Currently, under a fresh load with no cookies, if I go to my-domain-name.com/login/ it will redirect to the home page and say "You are NOT logged in!", but if I go there again it will say "You are logged in!". Any ideas? Thanks for your help.
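
    One detail worth noting: the login URL above uses the bare host (my-domain-name.com/login/) while the redirect goes to www.my-domain-name.com, so the session cookie set on one host may never be sent to the other. As a hedged sketch of one possible fix (the domain string is from the question; whether this mismatch is the actual cause here is an assumption), the cookie can be pinned to the parent domain before the session starts:

        <?php
        // Make the session cookie valid for both my-domain-name.com and
        // www.my-domain-name.com (lifetime 0 = browser-session cookie, path '/').
        session_set_cookie_params(0, '/', '.my-domain-name.com');
        session_start();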

    Read the article

  • WCF throttling and emptying the listenBacklog quickly

    - by user1161625
    I'm sort of a WCF newbie (inheriting tasks from someone) and I'm trying to better throttle my application. We have a service that sends print jobs to the Windows print queue, and it seems that rather than dumping all of the jobs into the queue, it holds on to them in the listenBacklog and slowly releases them to the Windows print queue. Here is my throttling behavior:

        <behavior name="throttling">
          <serviceThrottling maxConcurrentCalls="144" maxConcurrentSessions="1168" />
          <serviceDebug httpHelpPageEnabled="true" includeExceptionDetailInFaults="true"/>
        </behavior>

    I'm also using a custom binding for this service, which is here:

        <customBinding>
          <binding name="oneWayPrintBinding" receiveTimeout="00:30:00">
            <reliableSession maxPendingChannels="16" inactivityTimeout="01:30:00" />
            <binaryMessageEncoding>
              <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" />
            </binaryMessageEncoding>
            <tcpTransport maxReceivedMessageSize="67108864" listenBacklog="1024" />
          </binding>
        </customBinding>

    Any recommendations would be very helpful. Thank you! -Jim
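
    Two knobs in the configuration above are worth a hedged look (whether either is the actual bottleneck here is an assumption to test, and the values below are illustrative only): reliableSession maxPendingChannels="16" caps how many new sessions can wait to be accepted, and serviceThrottling maxConcurrentCalls="144" caps how many queued messages are dispatched at once. A sketch of loosening both:

        <reliableSession maxPendingChannels="128" inactivityTimeout="01:30:00" />
        ...
        <serviceThrottling maxConcurrentCalls="512" maxConcurrentSessions="1168" />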

    Read the article

  • Populate tableView with more than one array

    - by Ewoods
    The short version: Is there a way to populate one specific row in a tableView with one value from one array, then populate another row in that same tableView with one value from a different array? For example, cell 1 would have the first value from Array A, cell 2 would have the first value from Array B, cell 3 would have the first value from Array C, etc.

    The long version: I hope this isn't too confusing. I've got an array of names, and then three more arrays with actions associated with those people. For example, the names array has Jim, Bob, and Sue, and then there's an array each for eating, reading, and sleeping that records every time each person does one of these things (all of these arrays are populated from a MySQL database). The names array is used to populate a root tableView. Tapping on one of the names brings up a detail view controller that has another tableView with only three rows. This part is all working fine. What I want to happen is when I tap on a name, it moves to the detail view, and the three cells then show the last event for that person for each of the three activities. Tapping on one of those three events then moves to a new view controller with a tableView that shows every event for that category. For example, if I tapped on Bob, the second page would show the last time Bob ate, read, and slept. Tapping on the first row would bring up a table that showed every time Bob has eaten. So far I've only been able to populate the second tableView with all of the rows from one of the arrays. I need it the other way around (one row from each of the arrays).
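
    A hedged Objective-C sketch of the detail table (ivar names like eatingEvents, readingEvents, sleepingEvents, and selectedPersonIndex are hypothetical placeholders, not from the question): switch on indexPath.row so each of the three rows pulls its single value from a different array:

        // Assumed ivars: NSArray *eatingEvents, *readingEvents, *sleepingEvents,
        // each holding the latest event string per person, and NSUInteger
        // selectedPersonIndex set before this controller is pushed.
        - (UITableViewCell *)tableView:(UITableView *)tableView
                 cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *cellId = @"DetailCell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellId];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:cellId] autorelease];
            }
            // One row per activity array: row 0 eating, row 1 reading, row 2 sleeping.
            switch (indexPath.row) {
                case 0: cell.textLabel.text = [eatingEvents objectAtIndex:selectedPersonIndex]; break;
                case 1: cell.textLabel.text = [readingEvents objectAtIndex:selectedPersonIndex]; break;
                case 2: cell.textLabel.text = [sleepingEvents objectAtIndex:selectedPersonIndex]; break;
            }
            return cell;
        }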

    Read the article

  • Delivering activity feed items in a moderately scalable way

    - by sotangochips
    The application I'm working on has an activity feed where each user can see their friends' activity (much like Facebook). I'm looking for a moderately scalable way to show a given user's activity stream on the fly. I say 'moderately' because I'm looking to do this with just a database (PostgreSQL) and maybe memcached. For instance, I want this solution to scale to 200k users, each with 100 friends. Currently, there is a master activity table that stores the rendered HTML for a given activity (Jim added a friend, George installed an application, etc.). This master activity table keeps the source user, the HTML, and a timestamp. Then, there's a separate ('join') table that simply keeps a pointer to the person who should see this activity in their friend feed, and a pointer to the object in the main activity table. So, if I have 100 friends and I do 3 activities, then the join table grows by 300 items. Clearly this table will grow very quickly. It has the nice property, though, that fetching the activity to show to a user takes a single (relatively) inexpensive query. The other option is to just keep the main activity table and query it by saying something like:

        select * from activity where source_user in (1, 2, 44, 2423, ... my friend list)

    This has the disadvantage that you're querying for users who may never be active, and as your friend list grows, this query gets slower and slower. I see the pros and the cons of both sides, but I'm wondering if some SO folks might help me weigh the options and suggest one way or the other. I'm also open to other solutions, though I'd like to keep it simple and not install something like CouchDB, etc. Many thanks!
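
    For concreteness, here is a hedged SQL sketch of the fan-out ('join' table) read path described above (table and column names are illustrative assumptions):

        -- Fetch the 20 newest feed items for user 42 with one indexed lookup.
        SELECT a.html, a.created_at
        FROM activity_recipients r
        JOIN activities a ON a.id = r.activity_id
        WHERE r.recipient_id = 42
        ORDER BY a.created_at DESC
        LIMIT 20;

    An index on activity_recipients (recipient_id, activity_id) keeps this lookup cheap even as the table grows, which is the property the question is paying the 300-rows-per-3-activities storage cost to get.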

    Read the article

  • Drupal & Regular PHP Integration

    - by user333128
    I'm building a new website which has one core application and many content pages. The content pages are mostly dynamic, and I require a way to manage this dynamic content on a regular basis. The core application's main functionality is a 3-step process of reading user data (input page), reading data from MySQL (product page), and submitting an application to an email address (application page). Ideally I would like to build the core application in regular PHP and leverage Drupal for its content management capabilities. Can Drupal and regular PHP be integrated as easily as I suggest? My feeling is that coding the core application as Drupal modules will add layers of complexity that could be difficult to code from the outset and maintain later on as the system matures, so I would really like to just use regular PHP. Let me explain where dynamic content (managed by the CMS) intersects with the core application. Dynamic content such as FAQ data is used both on the 'normal' help pages and also within a mini-feed displayed within core application pages down a right-hand column. In this column, 3 random questions are pulled from the database and displayed as a feed. When users click on an FAQ question they are not taken away from the core application product page but are instead shown the data in a pop-up window displaying the question and answer. In addition, users can browse other questions and answers through a simple navigation menu within this popup. There are three such feeds that I require on the core application product page. So, what is the ideal solution here in terms of 'keeping things simple' for both the management of dynamic content and the ease of coding the core application? Can 'regular PHP' and Drupal co-exist 'peacefully'? If so, how is this technically possible? Because there is some content managed by Drupal contained within core application pages, can the core application still be coded in regular PHP? Any advice / suggestions? Thank you! Jim.
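
    A hedged PHP sketch of the usual Drupal 6/7-era integration trick (the install path is an assumption, and whether this fits the site's architecture is a judgment call): a standalone PHP page can bootstrap Drupal and then call its APIs, so the core application stays regular PHP while still pulling CMS-managed content such as FAQ nodes:

        <?php
        // Bootstrap Drupal from a standalone script so its functions become
        // available to regular PHP pages (Drupal 7-style bootstrap shown).
        define('DRUPAL_ROOT', '/var/www/drupal');   // assumption: install path
        require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
        drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

        // Now Drupal APIs work here, e.g. load an FAQ node for the mini-feed.
        $faq = node_load(123);   // 123 is a hypothetical node id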

    Read the article

  • jQuery: accessing dynamic class selectors

    - by dinotom
    Given the following HTML rendering:

        <fieldset id="fld_Rye">
          <legend>7 Main St.</legend>
          <div>
            <table>
              <tr>
                <th class="wideCol"><b><i>Service Description</i></b></th>
                <th class="wideCol"><b><i>Service Name</i></b></th>
                <th class="normalPlusWidth"><b><i>Contact Name</i></b></th>
              </tr>
              <ItemTemplate>
                <tr class="">
                  <td class="servDesc">&nbsp;<b>Plumbing Services</b></td>
                  <td class="servName">&nbsp;<b>Flynnsters's Plumbing</b></td>
                  <td class="servContact">&nbsp;<b>Jim Flynnster</b></td>
                </tr>
              </ItemTemplate>

    I am trying to access all the td's with a class of servDesc, get their widths into an array, and get the max width from that array to reset the CSS width of that class using jQuery on page load. I can't seem to get the right selector for those tags, and I've tried at least 50 variations. My latest try:

        var maxTdWidth;
        var servDescCols = [];
        $("#fld_Rye td#servDesc").each(function () {
            alert("found one");
            servDescCols.push(this.width());
        });
        maxTdWidth = Math.max.apply(Math, servDescCols);
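
    Two things stand out in that last attempt (offered as a hedged sketch of a fix, not a guaranteed diagnosis): servDesc is a class, so the selector needs a dot rather than a hash, and inside each() the raw DOM element has to be wrapped before calling jQuery's width() method:

        var servDescCols = [];
        // '.servDesc' (class selector), not '#servDesc' (id selector).
        $("#fld_Rye td.servDesc").each(function () {
            servDescCols.push($(this).width()); // $(this), not this
        });
        var maxTdWidth = Math.max.apply(Math, servDescCols);
        $("#fld_Rye td.servDesc").width(maxTdWidth); // apply the max width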

    Read the article

  • Can a lambda be used to change a List's values in place (without creating a new list)?

    - by Saint Hill
    I am trying to determine the correct way of transforming all the values in a List using the new lambdas feature in the upcoming release of Java 8, without creating a new List. This pertains to times when a List is passed in by a caller and needs to have a function applied to change all the contents to a new value, the way Collections.sort(list) changes a list in place. What is the easiest way, given this transforming function and this starting list:

        String function(String s) {
            return [some change made to value of s];
        }

        List<String> list = Arrays.asList("Bob", "Steve", "Jim", "Arbby");

    The usual way of applying a change to all the values in place was this:

        for (int i = 0; i < list.size(); i++) {
            list.set(i, function(list.get(i)));
        }

    Do lambdas and Java 8 offer an easier and more expressive way, and a way to do this without setting up all the scaffolding of the for(..) loop?
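
    For the record, Java 8 did ship exactly this. A hedged sketch using List.replaceAll(UnaryOperator), which mutates the list in place with no explicit loop (this reflects the final Java 8 API; the question predates the release):

        import java.util.Arrays;
        import java.util.List;

        public class ReplaceAllDemo {
            static String function(String s) {
                return s.toUpperCase(); // stand-in for "some change made to s"
            }

            public static void main(String[] args) {
                List<String> list = Arrays.asList("Bob", "Steve", "Jim", "Arbby");
                list.replaceAll(ReplaceAllDemo::function); // in place, no new list
                System.out.println(list); // [BOB, STEVE, JIM, ARBBY]
            }
        }

    Note that replaceAll works even on the fixed-size list returned by Arrays.asList, because it only calls set() and never changes the list's length.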

    Read the article

  • Testing for optional node - Delphi XE

    - by Seti Net
    What is the proper way to test for the existence of an optional node? A snippet of my XML is:

        <Antenna>
          <Mount Model="text" Manufacture="text">
            <BirdBathMount/>
          </Mount>
        </Antenna>

    But it could also be:

        <Antenna>
          <Mount Model="text" Manufacture="text">
            <AzEl/>
          </Mount>
        </Antenna>

    The child of Mount can be either BirdBathMount or AzEl, but not both. In Delphi XE I have tried:

        if (MountNode.ChildNodes.Nodes['AzEl'] <> unassigned) then         // Does not work
        if (MountNode.ChildNodes['BirdBathMount'].NodeValue <> null) then  // Does not work
        if (MountNode.BirdBathMount.NodeValue <> null) then                // Does not work

    I use XMLSpy to create the schema and the example XML, and they parse correctly. I use Delphi XE to create the bindings, and it works fine on most other combinations. This must have a simple answer that I have just overlooked - but what? Thanks...... Jim
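
    A hedged Delphi sketch (standard XMLIntf usage; whether it cooperates with the XMLSpy-generated bindings here is an assumption, and the Handle* routines are hypothetical): ChildNodes.FindNode returns nil rather than raising or auto-creating when the element is absent, so it is the usual way to probe for an optional node:

        var
          Child: IXMLNode;
        begin
          Child := MountNode.ChildNodes.FindNode('AzEl');
          if Assigned(Child) then
            HandleAzEl(Child)                   // <AzEl/> is present
          else
          begin
            Child := MountNode.ChildNodes.FindNode('BirdBathMount');
            if Assigned(Child) then
              HandleBirdBathMount(Child);       // <BirdBathMount/> is present
          end;
        end;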

    Read the article

  • StumbleUpon-type query...

    - by Chris Denman
    Wow, makes your head spin! I am about to start a project, and although my MySQL is OK, I can't get my head around what's required for this. I have a table of web addresses:

        id,url
        1,http://www.url1.com
        2,http://www.url2.com
        3,http://www.url3.com
        4,http://www.url4.com

    I have a table of users:

        id,name
        1,fred bloggs
        2,john bloggs
        3,amy bloggs

    I have a table of categories:

        id,name
        1,science
        2,tech
        3,adult
        4,stackoverflow

    I have a table of the categories a user likes, as numerical refs relating to the category's unique ref. For example:

        user,category
        1,4
        1,6
        1,7
        1,10
        2,3
        2,4
        3,5

    I have a table of scores relating to each web address. When a user visits one of these sites and says they like it, it's stored like so:

        url_ref,category
        4,2
        4,3
        4,6
        4,2
        4,3
        5,2
        5,3

    So based on the above data, URL 4 would score (in its own right) as follows: 2=2, 3=2, 6=1. What I was hoping to do was pick out a random URL from over 2,000,000 records based on the current user's interests. So if the logged-in user likes categories 1, 2, 3 then I would like to ORDER BY a score generated based on their interest. If the logged-in user likes categories 2, 3 and 6 then the total score would be 5. However, if the current logged-in user only likes categories 2 and 6, the URL score would be 3. So the ORDER BY would be in the context of the logged-in user's interests. Think of StumbleUpon. I was thinking of using a set of VIEWS to help with subqueries. I'm guessing that all 2,000,000 records will need to be looked at, and based on the id of the URL it will look to see what scores it has based on each selected category of the current user. So we need to know the user ID, and this gets passed into the query as a constant from the start. Ain't got a clue! Chris Denman
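
    A hedged SQL sketch of the scoring idea (the table names scores and user_categories are illustrative assumptions, and performance over 2,000,000 rows would need testing): join the per-URL score rows against the current user's liked categories, sum the matches per URL, and break ties randomly:

        -- :user_id is the logged-in user, passed in as a constant.
        SELECT s.url_ref, COUNT(*) AS score
        FROM scores s
        JOIN user_categories uc
          ON uc.category = s.category
         AND uc.user = :user_id
        GROUP BY s.url_ref
        ORDER BY score DESC, RAND()   -- MySQL; other databases use RANDOM()
        LIMIT 1;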

    Read the article

  • Operator== in derived class never gets called.

    - by Robin Welch
    Can someone please put me out of my misery with this? I'm trying to figure out why a derived operator== never gets called in a loop. To simplify the example, here are my Base and Derived classes:

        class Base {
            // ... snipped
            bool operator==( const Base& other ) const { return name_ == other.name_; }
        };

        class Derived : public Base {
            // ... snipped
            bool operator==( const Derived& other ) const {
                return ( static_cast<const Base&>( *this ) == static_cast<const Base&>( other )
                         ? age_ == other.age_ : false );
            }
        };

    Now when I instantiate and compare like this ...

        Derived p1("Sarah", 42);
        Derived p2("Sarah", 42);
        bool z = ( p1 == p2 );

    ... all is fine. Here the operator== from Derived gets called. But when I loop over a list, comparing items in a list of pointers to Base objects ...

        list<Base*> coll;
        coll.push_back( new Base("fred") );
        coll.push_back( new Derived("sarah", 42) );
        // ... snipped

        // Get two items from the list.
        Base& obj1 = **itr;
        Base& obj2 = **itr2;
        cout << obj1.asString() << " "
             << ( ( obj1 == obj2 ) ? "==" : "!=" ) << " "
             << obj2.asString() << endl;

    Here asString() (which is virtual, and not shown here for brevity) works fine, but obj1 == obj2 always calls the Base operator==, even if the two objects are Derived. I know I'm going to kick myself when I find out what's wrong, but if someone could let me down gently it would be much appreciated.
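
    The short reason is that operator== here is a non-virtual member function, so the call is resolved at compile time against the static type of the operands (Base&), and is never dispatched to Derived. A hedged C++ sketch of the usual workaround (a virtual comparison hook plus a dynamic_cast; the name equals is illustrative):

        #include <string>

        class Base {
        public:
            virtual ~Base() {}
            // Non-virtual entry point delegates to a virtual hook.
            bool operator==(const Base& other) const { return equals(other); }
        protected:
            virtual bool equals(const Base& other) const {
                return name_ == other.name_;
            }
            std::string name_;
        };

        class Derived : public Base {
        protected:
            bool equals(const Base& other) const /* override */ {
                // Run-time check: only equal if the other side is Derived too.
                const Derived* d = dynamic_cast<const Derived*>(&other);
                return d && Base::equals(other) && age_ == d->age_;
            }
            int age_;
        };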

    Read the article

  • protobuf-net NOT faster than binary serialization?

    - by Ashish Gupta
    I wrote a program to serialize a 'Person' class using XmlSerializer, BinaryFormatter and protobuf-net. I thought protobuf-net should be faster than the other two. Protobuf serialization was faster than XML serialization but much slower than binary serialization. Is my understanding incorrect? Please help me understand this. Thank you for the help. Following is the output:

        Person got created using protocol buffer in 347 milliseconds
        Person got created using XML in 1462 milliseconds
        Person got created using binary in 2 milliseconds

    Code below:

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;
        using ProtoBuf;

        namespace ProtocolBuffers
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string XMLSerializedFileName = "PersonXMLSerialized.xml";
                    string ProtocolBufferFileName = "PersonProtocalBuffer.bin";
                    string BinarySerializedFileName = "PersonBinary.bin";

                    var person = new Person
                    {
                        Id = 12345,
                        Name = "Fred",
                        Address = new Address { Line1 = "Flat 1", Line2 = "The Meadows" }
                    };

                    Stopwatch watch = Stopwatch.StartNew();
                    using (var file = File.Create(ProtocolBufferFileName))
                    {
                        Serializer.Serialize(file, person);
                    }
                    watch.Stop();
                    Console.WriteLine("Person got created using protocol buffer in " + watch.ElapsedMilliseconds + " milliseconds");

                    watch.Reset();
                    watch.Start();
                    var x = new System.Xml.Serialization.XmlSerializer(person.GetType());
                    using (TextWriter w = new StreamWriter(XMLSerializedFileName))
                    {
                        x.Serialize(w, person);
                    }
                    watch.Stop();
                    Console.WriteLine("Person got created using XML in " + watch.ElapsedMilliseconds + " milliseconds");

                    watch.Reset();
                    watch.Start();
                    using (Stream stream = File.Open(BinarySerializedFileName, FileMode.Create))
                    {
                        var bformatter = new BinaryFormatter();
                        bformatter.Serialize(stream, person);
                    }
                    watch.Stop();
                    Console.WriteLine("Person got created using binary in " + watch.ElapsedMilliseconds + " milliseconds");

                    Console.ReadLine();
                }
            }

            [ProtoContract]
            [Serializable]
            public class Person
            {
                [ProtoMember(1)] public int Id { get; set; }
                [ProtoMember(2)] public string Name { get; set; }
                [ProtoMember(3)] public Address Address { get; set; }
            }

            [ProtoContract]
            [Serializable]
            public class Address
            {
                [ProtoMember(1)] public string Line1 { get; set; }
                [ProtoMember(2)] public string Line2 { get; set; }
            }
        }
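
    One measurement detail worth flagging (a hedged observation about the benchmark, not a verdict on protobuf-net): each serializer is timed exactly once, so whichever framework runs first pays its one-time JIT and model-preparation cost inside the stopwatch window. A sketch of a fairer loop:

        // Warm up the serializer once outside the timed region, then time
        // many iterations so any fixed startup cost is amortized away.
        Serializer.Serialize(Stream.Null, person);          // warm-up pass
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < 10000; i++)
        {
            using (var ms = new MemoryStream())
                Serializer.Serialize(ms, person);
        }
        watch.Stop();
        Console.WriteLine("protobuf-net x10000: " + watch.ElapsedMilliseconds + " ms");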

    Read the article

  • How do I scroll through 100 photos in UIScrollView on iPhone

    - by mwangima
    I'm trying to scroll through images being downloaded from a user's online album (like in the Facebook iPhone app). Since I can't load all the images into memory, I'm loading 3 at a time (prev, current & next), then removing image (prev-1) & image (next+1) from the UIScrollView subviews. My logic works fine in the simulator but fails on the device with this error:

        [CALayer retain]: message sent to deallocated instance

    What could be the problem? Below is my code sample:

        - (void)scrollViewDidEndDecelerating:(UIScrollView *)_scrollView {
            pageControlIsChangingPage = NO;
            CGFloat pageWidth = _scrollView.frame.size.width;
            int page = floor((_scrollView.contentOffset.x - pageWidth / 2) / pageWidth) + 1;
            if (page > 1 && page <= (pageControl.numberOfPages - 3)) {
                [self removeThisView:(page - 2)];
                [self removeThisView:(page + 2)];
            }
            if (page > 0) {
                NSLog(@"<< PREVIOUS");
                [self showPhoto:(page - 1)];
            }
            [self showPhoto:page];
            if (page < (pageControl.numberOfPages - 1)) {
                //NSLog(@"NEXT ");
                [self showPhoto:page + 1];
                NSLog(@"FINISHED LOADING NEXT ");
            }
        }

        - (void)showPhoto:(NSInteger)index {
            CGFloat cx = scrollView.frame.size.width * index;
            CGFloat cy = 40;
            CGRect rect = CGRectMake(0, 0, 320, 480);
            rect.origin.x = cx;
            rect.origin.y = cy;
            AsyncImageView *asyncImage = [[AsyncImageView alloc] initWithFrame:rect];
            asyncImage.tag = 999;
            NSURL *url = [NSURL URLWithString:[pics objectAtIndex:index]];
            [asyncImage loadImageFromURL:url place:CGRectMake(150, 190, 30, 30) member:memberid isSlide:@"Yes" picId:[picsIds objectAtIndex:index]];
            [scrollView addSubview:asyncImage];
            [asyncImage release];
        }

        - (void)removeThisView:(NSInteger)index {
            if (index < [[scrollView subviews] count] && [[scrollView subviews] objectAtIndex:index] != nil) {
                if ([[[scrollView subviews] objectAtIndex:index] isKindOfClass:[AsyncImageView class]]
                    || [[[scrollView subviews] objectAtIndex:index] isKindOfClass:[UIImageView class]]) {
                    [[[scrollView subviews] objectAtIndex:index] removeFromSuperview];
                }
            }
        }

    For the record, it works OK in the simulator, but not on the iPhone device itself. Any ideas will be appreciated. Cheers, fred.

    Read the article

  • Non-blocking TCP acceptor not reading from socket

    - by Abruzzo Forte e Gentile
    I have the code below implementing a non-blocking TCP acceptor. Clients are able to connect without any problem and the writing seems to occur as well, but the acceptor doesn't read anything from the socket, and the call to read() blocks indefinitely. Am I using a wrong setting for the acceptor? Kind Regards, AFG

        int main(){
            create_programming_socket();
            poll_programming_connect();
            while(1){
                poll_programming_read();
            }
        }

        int create_programming_socket(){
            int cnt = 0;
            p_listen_socket = socket( AF_INET, SOCK_STREAM, 0 );
            if( p_listen_socket < 0 ){
                return 1;
            }
            int flags = fcntl( p_listen_socket, F_GETFL, 0 );
            if( fcntl( p_listen_socket, F_SETFL, flags | O_NONBLOCK ) == -1 ){
                return 1;
            }
            bzero( (char*)&p_serv_addr, sizeof(p_serv_addr) );
            p_serv_addr.sin_family = AF_INET;
            p_serv_addr.sin_addr.s_addr = INADDR_ANY;
            p_serv_addr.sin_port = htons( p_port );
            if( bind( p_listen_socket, (struct sockaddr*)&p_serv_addr, sizeof(p_serv_addr) ) < 0 ) {
                return 1;
            }
            listen( p_listen_socket, 5 );
            return 0;
        }

        int poll_programming_connect(){
            int retval = 0;
            static socklen_t p_clilen = sizeof(p_cli_addr);
            int res = accept( p_listen_socket, (struct sockaddr*)&p_cli_addr, &p_clilen );
            if( res > 0 ){
                p_conn_socket = res;
                int flags = fcntl( p_conn_socket, F_GETFL, 0 );
                if( fcntl( p_conn_socket, F_SETFL, flags | O_NONBLOCK ) == -1 ){
                    retval = 1;
                }else{
                    p_connected = true;
                }
            }else if( res == -1 && ( errno == EWOULDBLOCK || errno == EAGAIN ) ) {
                //printf( "poll_sock(): accept(c_listen_socket) would block\n");
            }else{
                retval = 1;
            }
            return retval;
        }

        int poll_programming_read(){
            int retval = 0;
            bzero( p_buffer, 256 );
            int numbytes = read( p_conn_socket, p_buffer, 255 );
            if( numbytes > 0 ) {
                fprintf( stderr, "poll_sock(): read() read %d bytes\n", numbytes );
                pkt_struct2_t tx_buf;
                int fred;
                int i;
            }
            else if( numbytes == -1 && ( errno == EWOULDBLOCK || errno == EAGAIN ) ) {
                //printf( "poll_sock(): read() would block\n");
            }
            else {
                close( p_conn_socket );
                p_connected = false;
                retval = 1;
            }
            return retval;
        }
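
    A hedged observation with a sketch: poll_programming_connect() is called exactly once, before any client has had a chance to connect, so the non-blocking accept() almost certainly returns EWOULDBLOCK and p_conn_socket is never assigned; the subsequent read() calls then operate on an unconnected descriptor. One fix, reusing the p_connected flag the code already maintains, is to keep retrying the accept inside the loop:

        #include <unistd.h>  /* for usleep */

        int main(void) {
            create_programming_socket();
            while (1) {
                if (!p_connected) {
                    poll_programming_connect();  /* retry accept() until a client arrives */
                } else {
                    poll_programming_read();     /* only read once connected */
                }
                usleep(10000);                   /* avoid spinning at 100% CPU */
            }
        }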

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: With this post you create a scenario that is unsupported by Microsoft. It will break your Microsoft support for this server, so handle with care. I am the administrator of a TFS environment with a lot of Project Collections. In the supported TFS 2010 configuration you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed on one server. Jim Lamb wrote a post on how you can modify your system to change this behaviour. But since I have so many Project Collections, I automated this with the TFS API. When you install a new build server via the UI, you perform the following steps:

        1. Register the build service (with this you hook the Windows server into the build server environment)
        2. Add a new build controller
        3. Add a new build agent

    So in pseudo code, the code would look like:

        foreach (projectCollection in GetAllProjectCollections)
        {
            CreateNewWindowsService();
            RegisterService();
            AddNewController();
            AddNewAgent();
        }

    The following code fragments show you the most important parts of the method implementations. Attached is the full project.

    CreateNewWindowsService: We create a new Windows service with the SC command via the Diagnostics.Process class:

        var pi = new ProcessStartInfo("sc.exe")
        {
            Arguments = string.Format(
                "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",
                serviceHostName, tpcName)
        };
        Process.Start(pi);

        pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);
        Process.Start(pi);

    RegisterService: The trick in this method is that we set the NamedInstance static property. This property is internal, so we need to set it through reflection. (To get information on these internals you need nice Microsoft friends and the .NET Reflector.)

        // Indicate which build service host instance we are using
        typeof(BuildServiceHostUtilities).Assembly
            .GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess")
            .InvokeMember("NamedInstance",
                System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static,
                null, null, new object[] { serviceName });

        // Create the build service host
        serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);
        serviceHost.Save();

        // Register the build service host
        BuildServiceHostUtilities.Register(serviceHost, user, password);

    AddNewController and AddNewAgent: Once you have the BuildServiceHost, the rest is pretty straightforward. There are methods on the BuildServiceHost to modify the controllers and the agents:

        controller = serviceHost.CreateBuildController(controllerName);
        agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);
        controller.AddBuildAgent(agent);

    You have now seen the highlights of the application. If you need it and want sample information when you work in this area, download the app TFS2010_RegisterBuildServerToTPCs.

    Read the article

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides I was pretty busy both days: Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended; the rooms were full, with people spilling into the hallways. Some new things were presented, like KernelShark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK; I don't recall exactly what version of GTK, though. Dominique Toupin from Ericsson presented something about user requirements for tracing. Mostly, though, it was about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work; see their slides. The interesting thing to me was of course the new version of uprobes without underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give, except, jokingly, that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, this "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app, for instance allowing multiple tracers, and not relying on SIGCHLD, etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel that interfaces to utrace and is controlled via the gdb remote protocol also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, with a clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further. On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc and toolchain community. The first talk was by Mark Mitchell, about GCC and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc, and are usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (needed due to the new gcc plugin architecture and the availability of the intermediate results of compilation, which is a new thing). I will not try to explain, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.
    Afterwards, we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance, Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker; it doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the groundwork was done by Kris Van Hees), but this needs to be constantly tested and monitored, because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code. Finally, Tom Tromey presented on gdb and the Archer project. Archer is a development branch of gdb, mostly worked on by Red Hat, focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline. In general it was a good conference. I missed most of the first day, because that's when I flew in, but I caught a couple of talks. Nothing earth-shattering, except for Google giving each registered person a free Android phone. Yay.

    Read the article

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate. Last Sunday I went on a walkabout. That's when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination. I favor "social trails", the unmapped routes pioneered by both animal and human explorers. These tracks are usually more challenging than established, marked routes, and you can't be 100% sure of where you're going to end up. But I've found the rewards to be much greater. For a while, I pondered how, depending upon your perspective, the current economic situation worldwide could be viewed as either a classic "the glass is half empty" or a "the glass is half full" scenario. Midsize companies buy Oracle to grow, and so I'm continually amazed and fascinated by the success stories our customers relate to me. Oracle's successful midsize companies are growing via innovation, agility, and opportunity. For them, the glass isn't half full; it's overflowing. Growing Midsize Companies are Thankful for: Innovation. The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May. You might not recognize the name but, chances are, your local evening weather report relies on this company's weather observation, monitoring and measurement products. For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end-to-end product and service solutions based on the needs of distinctly different customer groups across the industrial, public sector, and defense sectors. Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch. I cut it down. The name of that customer's company was stamped on the housing. "It's a radiosonde from a high altitude weather balloon," he told me the next day. "Keep it as a souvenir." It sits on my fireplace mantel and elicits many questions from guests. Growing Midsize Companies are Thankful for: Agility. In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets. They are best known for their epic outdoor theme parks. However, their primary growth today is coming from a chain of indoor amusement centers in the USA, where billiards, bowling, and laser tag take the place of roller coasters, kiddy rides, and wave pools. With mountains and rivers right out my front door, I'm not much for theme parks, but I'll take a spirited game of laser tag any day. This company has grown dramatically since first implementing Oracle ERP more than a decade ago. Their profitable expansion into a completely foreign market derives from the ability to replicate proven and efficient best business practices across diverse operating environments. They recently went live on Oracle's Fusion HCM and Taleo. Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices. Growing Midsize Companies are Thankful for: Opportunity. I have three GPS apps on my iPhone. I use them mainly to keep track of my stats: distance, time, and vertical gain.
    However, every once in a while I need to find the most efficient route back home before dark from my current location (notice I didn't use the word "lost"). In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions (the integrated use of telecommunications and informatics) for managing the mobile workforce. These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery. Forgive the overused metaphor, but this is route optimization on steroids. The company's executive team saw an opportunity in this emerging market and went "all in". Consequently, they are being rewarded with tremendous growth and market domination by giving their clients the ability to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity. This Thanksgiving, I'm thankful for health, family, friends, and a career with an innovative company that helps companies leverage top-tier software to drive and manage growth. And I'm thankful to have learned the lesson that good things happen when you get off the beaten path, both when hiking and when forging new routes through a complex world economy. Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an unobstructed view of 14,265 ft Mt Evans just a few miles to the west. There, nowhere near a house or a trail, someone had placed a wooden lounge chair. Its wood was worn and faded, but it was sturdy. I had lunch and a cold drink in my pack. Opportunity knocked and I seized it. Happy Thanksgiving.

    Read the article

  • 5 Things I Learned About the IT Labor Shortage

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein | Sr. Principal Product Marketing Director | Oracle Midsize Programs | @JimLein

    A gentle autumn breeze is nudging the last golden leaves off the aspen trees. It's time to wrap up the series that I started back in April, "The Growing IT Labor Shortage: Are You Feeling It?" Even in a time of relatively high unemployment, labor shortages exist depending on many factors, including location, industry, IT requirements, and company size. According to Manpower Group's 2013 Talent Shortage Survey, 35% of hiring managers globally are having difficulty filling jobs. Their top three challenges in filling jobs are: 1. lack of technical competencies (hard skills), 2. lack of available applicants, and 3. lack of experience. The same report listed technicians as the most difficult position to fill in the United States. For most companies, Human Capital and Talent Management have never been more strategic, and they are striving for ways to streamline processes, reduce turnover, and lower costs (see this Oracle whitepaper, "Simplify Workforce Management and Increase Global Agility"). Everyone I spoke to (partner, customer, and Oracle experts) agreed that it can be extremely challenging to hire and retain IT talent in today's labor market. And they generally agreed on the causes: (a) IT is so pervasive that there are myriad moving parts requiring support and expertise; (b) thus, it's hard for university graduates to step in and contribute immediately without experience and specialization; (c) big IT companies generally aren't the talent incubators that they were in the freewheeling '90s, due to bottom-line pressures that require hiring talent that can hit the ground running; and (d) it's often too expensive for resource-strapped midsize companies to invest the time and money required to get graduates up to speed. Here are my top lessons learned from my conversations with the experts. 1. A Better Title Would Have Been "The Challenges of Finding and Retaining IT Talent That Matches Your Requirements". There are more applicants than jobs, but it's getting tougher and tougher to find individuals that perfectly fit each and every role. Top-performing companies are increasingly looking to hire the "almost ready", striving to keep their existing talent more engaged, and leveraging their employees' social and professional networks to quickly narrow down candidate searches (here's another whitepaper, "A Strategic Approach to Talent Management"). 2. Size Matters, But So Does Location. Midsize companies must strive to build cultures that compete favorably with what large enterprises can offer, especially when they aren't within commuting distance of IT talent strongholds. They can't always match the compensation and benefits offered by large enterprises, so it's paramount to offer candidates a high quality of life and opportunities to build their resumes in alignment with their long-term career aspirations.
    3. Get By With a Little Help From Your Friends. It doesn't always make sense to invest time and money in training an employee on a task they will not perform frequently, or to get into a bidding war for talent with skills that are rare and in high demand. Many midsize companies are finding that it makes good economic sense to contract with partners for remote support rather than trying to divvy up each and every role amongst their lean staff. Internal staff can then be assigned to the roles that will have the highest positive impact on achieving organizational goals. 4. It's Actually Both "What You Know" AND "Who You Know". If I were hiring someone today I would absolutely leverage the social and professional networks of my co-workers. Period. Most research shows that hiring in this manner is less expensive and time-consuming AND produces better results. There is also some evidence suggesting that new hires from employees' networks have higher job performance and retention rates. 5. I Have New Respect for Recruiters and Hiring Managers. My hat's off to them; it's not easy hiring and retaining top talent with today's challenges. Check out the infographic, "A New Day: Taking HR from Chaos to Control", on Oracle's Human Capital Management solutions home page. You can also explore all of Oracle's HCM solutions from that page based on your role. You can read all the posts in this series by clicking on the links in the right sidebar. Stay tuned... we'll continue to post thought leadership on HCM and Talent Management topics.

    Read the article

  • Time Tracking on an Agile Team

    - by Stephen.Walther
    What's the best way to handle time-tracking on an Agile team? Your gut reaction to this question might be to resist any type of time-tracking at all. After all, one of the values of the Agile Manifesto is "Individuals and interactions over processes and tools". Forcing the developers on your team to track the amount of time that they devote to completing stories or tasks might seem like useless bureaucratic red tape: an impediment to getting real work done. I completely understand this reaction. I've been required to use time-tracking software in the past to account for each hour of my workday. It made me feel like Fred Flintstone punching in at the quarry mine, and not like a professional.

    Why You Really Do Need Time-Tracking. There are, however, legitimate reasons to track time spent on stories even when you are a member of an Agile team. First, if you are working with an outside client, you might need to track the number of hours spent on different stories for the purposes of billing. There might be no way to avoid time-tracking if you want to get paid. Second, the Product Owner needs to know when the work on a story has gone over the time originally estimated for the story. The Product Owner is concerned with return on investment. If the team has gone massively over time on a story, then the Product Owner has a legitimate reason to halt work on the story and reconsider the story's business value. Finally, you might want to track how much time your team spends on different types of stories or tasks. For example, if your team is spending 75% of its time doing testing, then you might need to bring in more testers. Or, if 10% of your team's time is expended performing a software build at the end of each iteration, then it is time to consider better ways of automating the build process.

    Time-Tracking in SonicAgile. For these reasons, we added time-tracking as a feature to SonicAgile, which is our free Agile project management tool. We were heavily influenced by Jeff Sutherland (one of the founders of Scrum) in the way that we implemented time-tracking (see his article http://scrum.jeffsutherland.com/2007/03/time-tracking-is-anti-scrum-what-do-you.html). In SonicAgile, time-tracking is disabled by default. If you want to use this feature, then the project owner must enable time-tracking in Project Settings. You can choose to estimate using either days or hours. If you are estimating at the level of stories, it makes more sense to choose days; if you are estimating at the level of tasks, it makes more sense to use hours. After you enable time-tracking, you can assign three estimates to a story:

        Original Estimate – the estimate that you enter when you first create a story. You don't change this estimate.
        Time Spent – the amount of time that you have already devoted to the story. You update the time spent on each story during your daily standup meeting.
        Time Left – the amount of time remaining to complete the story. Again, you update the time left during your daily standup meeting.

    So when you first create a story, you enter an original estimate that becomes the time left. During each daily standup meeting, you update the time spent and time left for each story on the Kanban. If you had perfect predictive power, then the original estimate would always be the same as the sum of the time spent and the time left. For example, if you predict that a story will take 5 days to complete, then on day 3 the story should have 3 days spent and 2 days left.
    Unfortunately, never in the history of mankind has anyone accurately predicted the exact amount of time that it takes to complete a story. For this reason, SonicAgile does not update the time spent and time left automatically. Each day, during the daily standup, your team should update the time spent and time left for each story. For example, the following table shows the history of the time estimates for a story that was originally estimated to take 3 days but, eventually, takes 5 days to complete:

        Day     Original Estimate    Time Spent    Time Left
        Day 1   3 days               0 days        3 days
        Day 2   3 days               1 day         2 days
        Day 3   3 days               2 days        2 days
        Day 4   3 days               3 days        2 days
        Day 5   3 days               4 days        0 days

    In the table above, everything goes as predicted until you reach day 3. On day 3, the team realizes that the work will require an additional two days. The situation does not improve on day 4. All of a sudden, on day 5, all of the remaining work gets done. Real work often follows this pattern: there are long periods when nothing gets done, punctuated by occasional and unpredictable bursts of progress. We designed SonicAgile to make it as easy as possible to track the time spent and time left on a story.

    Detecting when a Story Goes Over the Original Estimate. Sometimes stories take much longer than originally estimated. There's a surprise: for example, you discover that a new software component is incompatible with existing software components, or you discover that you have to go through a month-long certification process to finish a story. In those cases, the Product Owner has a legitimate reason to halt work on a story and re-evaluate the story's business value. For example, if the Product Owner discovers that a story will require weeks to implement instead of days, then the story might not be worth the expense. SonicAgile displays a warning on both the Backlog and the Kanban when the time spent on a story goes over the original estimate: an icon of a clock is displayed.

    Time-Tracking and Tasks. Another optional feature of SonicAgile is tasks. If you enable tasks in Project Settings, then you can break stories into one or more tasks, and you can perform time-tracking at the level of a story or at the level of a task. If you don't break a story into tasks, then you can enter the time left and time spent for the story. As soon as you break a story into tasks, you can no longer enter the time left and time spent at the level of the story; instead, the time left and time spent for a story are rolled up from its tasks. On the Kanban, you can see how the time left and time spent for each task get rolled up into each story. The progress bar for the story is rolled up from the progress bars for each task. The original estimate is never rolled up, even when you break a story into tasks: a story's original estimate is entered separately from the original estimates of each of the story's tasks.

    Summary. Not every Agile team can avoid time-tracking. You might be forced to track time to get paid, to detect when you are spending too much time on a particular story, or to track the amount of time that you are devoting to different types of tasks. We designed time-tracking in SonicAgile to require the least amount of work to track the information that you need. Time-tracking is an optional feature; if you enable it, you can track the original estimate, time left, and time spent for each story and task. You can use time-tracking with SonicAgile for free. Register at http://SonicAgile.com.

    Read the article

  • Android - Create a custom multi-line ListView bound to an ArrayList

    - by Bill Osuch
    The Android HelloListView tutorial shows how to bind a ListView to an array of string objects, but you'll probably outgrow that pretty quickly. This post will show you how to bind the ListView to an ArrayList of custom objects, as well as create a multi-line ListView. Let's say you have some sort of search functionality that returns a list of people, along with addresses and phone numbers. We're going to display that data in three formatted lines for each result, and make it clickable. First, create your new Android project, and create two layout files. Main.xml will probably already be created by default, so paste this in:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
          android:orientation="vertical"
          android:layout_width="fill_parent"
          android:layout_height="fill_parent">
          <TextView
            android:layout_height="wrap_content"
            android:text="Custom ListView Contents"
            android:gravity="center_vertical|center_horizontal"
            android:layout_width="fill_parent" />
          <ListView
            android:id="@+id/ListView01"
            android:layout_height="wrap_content"
            android:layout_width="fill_parent"/>
        </LinearLayout>

    Next, create a layout file called custom_row_view.xml. This layout will be the template for each individual row in the ListView. You can use pretty much any type of layout - Relative, Table, etc., but for this we'll just use Linear:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
          android:orientation="vertical"
          android:layout_width="fill_parent"
          android:layout_height="fill_parent">
          <TextView android:id="@+id/name"
            android:textSize="14sp"
            android:textStyle="bold"
            android:textColor="#FFFF00"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"/>
          <TextView android:id="@+id/cityState"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"/>
          <TextView android:id="@+id/phone"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"/>
        </LinearLayout>

    Now, add an object called SearchResults. Paste this code in:

        public class SearchResults {
            private String name = "";
            private String cityState = "";
            private String phone = "";

            public void setName(String name) { this.name = name; }
            public String getName() { return name; }
            public void setCityState(String cityState) { this.cityState = cityState; }
            public String getCityState() { return cityState; }
            public void setPhone(String phone) { this.phone = phone; }
            public String getPhone() { return phone; }
        }

    This is the class that we'll be filling with our data and loading into an ArrayList. Next, you'll need a custom adapter. This one just extends BaseAdapter, but you could extend ArrayAdapter if you prefer.
        public class MyCustomBaseAdapter extends BaseAdapter {
            private static ArrayList<SearchResults> searchArrayList;
            private LayoutInflater mInflater;

            public MyCustomBaseAdapter(Context context, ArrayList<SearchResults> results) {
                searchArrayList = results;
                mInflater = LayoutInflater.from(context);
            }

            public int getCount() {
                return searchArrayList.size();
            }

            public Object getItem(int position) {
                return searchArrayList.get(position);
            }

            public long getItemId(int position) {
                return position;
            }

            public View getView(int position, View convertView, ViewGroup parent) {
                ViewHolder holder;
                if (convertView == null) {
                    convertView = mInflater.inflate(R.layout.custom_row_view, null);
                    holder = new ViewHolder();
                    holder.txtName = (TextView) convertView.findViewById(R.id.name);
                    holder.txtCityState = (TextView) convertView.findViewById(R.id.cityState);
                    holder.txtPhone = (TextView) convertView.findViewById(R.id.phone);
                    convertView.setTag(holder);
                } else {
                    holder = (ViewHolder) convertView.getTag();
                }
                holder.txtName.setText(searchArrayList.get(position).getName());
                holder.txtCityState.setText(searchArrayList.get(position).getCityState());
                holder.txtPhone.setText(searchArrayList.get(position).getPhone());
                return convertView;
            }

            static class ViewHolder {
                TextView txtName;
                TextView txtCityState;
                TextView txtPhone;
            }
        }

    (This is basically the same as the List14.java API demo.) Finally, we'll wire it all up in the main class file:

        public class CustomListView extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                ArrayList<SearchResults> searchResults = GetSearchResults();

                final ListView lv1 = (ListView) findViewById(R.id.ListView01);
                lv1.setAdapter(new MyCustomBaseAdapter(this, searchResults));

                lv1.setOnItemClickListener(new OnItemClickListener() {
                    @Override
                    public void onItemClick(AdapterView<?> a, View v, int position, long id) {
                        Object o = lv1.getItemAtPosition(position);
                        SearchResults fullObject = (SearchResults) o;
                        Toast.makeText(CustomListView.this,
                                "You have chosen: " + " " + fullObject.getName(),
                                Toast.LENGTH_LONG).show();
                    }
                });
            }

            private ArrayList<SearchResults> GetSearchResults() {
                ArrayList<SearchResults> results = new ArrayList<SearchResults>();

                SearchResults sr1 = new SearchResults();
                sr1.setName("John Smith");
                sr1.setCityState("Dallas, TX");
                sr1.setPhone("214-555-1234");
                results.add(sr1);

                sr1 = new SearchResults();
                sr1.setName("Jane Doe");
                sr1.setCityState("Atlanta, GA");
                sr1.setPhone("469-555-2587");
                results.add(sr1);

                sr1 = new SearchResults();
                sr1.setName("Steve Young");
                sr1.setCityState("Miami, FL");
                sr1.setPhone("305-555-7895");
                results.add(sr1);

                sr1 = new SearchResults();
                sr1.setName("Fred Jones");
                sr1.setCityState("Las Vegas, NV");
                sr1.setPhone("612-555-8214");
                results.add(sr1);

                return results;
            }
        }

    Notice that we first get an ArrayList of SearchResults objects (normally this would come from an external data source...), pass it to the custom adapter, then set up a click listener. The listener gets the item that was clicked, converts it back to a SearchResults object, and does whatever it needs to do.
    Fire it up in the emulator, and you should wind up with a scrollable, clickable list of the four sample results, each displayed on three formatted lines. [Screenshot of the finished list omitted from this excerpt.]

    Read the article

  • Validation firing in ASP.NET MVC

    - by rkrauter
    I am lost on this MVC project I am working on. I also read Brad Wilson's article: http://bradwilson.typepad.com/blog/2010/01/input-validation-vs-model-validation-in-aspnet-mvc.html

    I have this:

    public class Employee
    {
        [Required]
        public int ID { get; set; }
        [Required]
        public string FirstName { get; set; }
        [Required]
        public string LastName { get; set; }
    }

    and these in a controller:

    public ActionResult Edit(int id)
    {
        var emp = GetEmployee();
        return View(emp);
    }

    [HttpPost]
    public ActionResult Edit(int id, Employee empBack)
    {
        var emp = GetEmployee();
        if (TryUpdateModel(emp, new string[] { "LastName" }))
        {
            Response.Write("success");
        }
        return View(emp);
    }

    public Employee GetEmployee()
    {
        return new Employee { FirstName = "Tom", LastName = "Jim", ID = 3 };
    }

    and my view has the following:

    <% using (Html.BeginForm()) {%>
        <%= Html.ValidationSummary() %>
        <fieldset>
            <legend>Fields</legend>
            <div class="editor-label">
                <%= Html.LabelFor(model => model.FirstName) %>
            </div>
            <div class="editor-field">
                <%= Html.DisplayFor(model => model.FirstName) %>
            </div>
            <div class="editor-label">
                <%= Html.LabelFor(model => model.LastName) %>
            </div>
            <div class="editor-field">
                <%= Html.TextBoxOrLabelFor(model => model.LastName, true)%>
                <%= Html.ValidationMessageFor(model => model.LastName) %>
            </div>
            <p>
                <input type="submit" value="Save" />
            </p>
        </fieldset>
    <% } %>

    Note that the only editable field is LastName. When I post back, I get back the original employee and try to update it with only the LastName property. But what I see on the page is the following error:

    • The FirstName field is required.

    From what I understand, this is because TryUpdateModel failed. But why? I told it to update only the LastName property. I am using MVC2 RTM. Thanks in advance.

    Read the article

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data s

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. Requirements are:

    - each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt
    - other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of the flashcard files they want to subscribe to, with the URLs and imported flashcards being saved in IsolatedStorage
    - the flashcards.txt file has the following simple format: a title, then blocks of question/answer pairs:

    Jim Smith's flashcards from English class 53-222, winter semester 2009
    ==fla
    Das kann nicht sein.
    That can't be.
    ==fla
    Es sei denn, er kommt nicht.
    Unless he doesn't come.

    The user then makes the URL to his flashcard file public, and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish and whose URL they distribute. The flashcard readers will then recognize that it is a Google document and perform the necessary screen scraping to get at the raw text.

    I have two technical questions about this approach:

    1. What is the best way to plan now for scalability issues? E.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published Google docs? (See the sketch at the end of this question.)
    2. Each reader will have the ability to let the person test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the imported flashcards themselves. What is a good pattern to allow these readers to share and synchronize this meta data, e.g. so that when you are looking at a flashcard you can see that 5 other people have made corrections to it? The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard; the best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks.

    The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a Google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of a database, lack of unique ids, etc.

    For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips can you share on the above two points?
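    A side note on the change-detection part of question 1, the sketch referenced there: when a flashcard file is served as a plain static file, HTTP can usually report its last update time without transferring the body, via a HEAD request or a conditional GET with an If-Modified-Since header. Here is a minimal sketch of the idea, written in Java purely for illustration; a Silverlight client would do the equivalent through HttpWebRequest/WebClient, and published Google Docs are generated pages that may not send a usable Last-Modified header at all:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FlashcardChangeCheck {

        // Returns the server-reported Last-Modified time in millis since the
        // epoch, or 0 if the header is absent (0 is what URLConnection
        // reports when the server doesn't send it).
        static long lastModified(String fileUrl) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
            conn.setRequestMethod("HEAD"); // headers only, no 200K body download
            try {
                return conn.getLastModified();
            } finally {
                conn.disconnect();
            }
        }

        public static void main(String[] args) throws Exception {
            // URL taken from the example in the question, assumed static.
            long stamp = lastModified("http://www.mywebserver.com/flashcards.txt");
            System.out.println(stamp == 0
                    ? "No Last-Modified header; fall back to a full download"
                    : "Last modified at " + stamp);
        }
    }

    Where Last-Modified isn't available (Google Docs being the likely case), a small sidecar file containing just a version number or hash, published next to the flashcard file, would give the reader the same one-request change check.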

    Read the article

  • Event Listener in Google Charts API

    - by DeanGrobler
    I'm busy using Google Charts in one of my projects to display data in a table. Everything is working great, except that I need to see which line a user selected once they click a button. This would obviously be done with JavaScript, but I've been struggling for days now to no avail. Below I've pasted code for a simple example of the table, and the JavaScript function that I want to use (that doesn't work).

    <html>
    <head>
    <script type='text/javascript' src='https://www.google.com/jsapi'></script>
    <script type='text/javascript'>
        google.load('visualization', '1', {packages:['table']});
        google.setOnLoadCallback(drawTable);
        var table = "";

        function drawTable() {
            var data = new google.visualization.DataTable();
            data.addColumn('string', 'Name');
            data.addColumn('number', 'Salary');
            data.addColumn('boolean', 'Full Time Employee');
            data.addRows(4);
            data.setCell(0, 0, 'Mike');
            data.setCell(0, 1, 10000, '$10,000');
            data.setCell(0, 2, true);
            data.setCell(1, 0, 'Jim');
            data.setCell(1, 1, 8000, '$8,000');
            data.setCell(1, 2, false);
            data.setCell(2, 0, 'Alice');
            data.setCell(2, 1, 12500, '$12,500');
            data.setCell(2, 2, true);
            data.setCell(3, 0, 'Bob');
            data.setCell(3, 1, 7000, '$7,000');
            data.setCell(3, 2, true);

            table = new google.visualization.Table(document.getElementById('table_div'));
            table.draw(data, {showRowNumber: true});
        }

        function selectionHandler() {
            selectedData = table.getSelection();
            row = selectedData[0].row;
            item = table.getValue(row, 0);
            alert("You selected: " + item);
        }
    </script>
    </head>
    <body>
        <div id='table_div'></div>
        <input type="button" value="Select" onClick="selectionHandler()">
    </body>
    </html>

    Thanks in advance to anyone taking the time to look at this. I've honestly tried my best with this; I hope someone out there can help me out a bit.

    Read the article
