Search Results

Search found 1000 results on 40 pages for 'jim ward'.

Page 39 of 40

  • Why can't I get a TRUE return in this prepared statement?

    - by Cortopasta
    I can't seem to get this to do anything but return false. My best guess is that the prepared statement isn't executing, but I have no idea why.

        private function check_credentials($plain_username, $md5_password) {
            global $dbcon;
            $ac = new ac();
            $ac->dbconnect();
            $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
            $userid->bindParam(':username', $plain_username);
            $userid->bindParam(':password', $md5_password);
            $userid->execute();
            $id = $userid->fetch();
            return $id;
        }

    EDIT: I've even tried hard-coding the username and password into the function itself to isolate the problem, like this:

        private function check_credentials($plain_username, $md5_password) {
            global $dbcon;
            $plain_username = "jim";
            $md5_username = "waffles";
            $ac = new ac();
            $ac->dbconnect();
            $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
            $userid->bindParam(':username', $plain_username);
            $userid->bindParam(':password', $md5_password);
            $userid->execute();
            print_r($dbcon->errorInfo());
            $id = $userid->fetch();
            return $id;
        }

    Still nothing :-/
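    Two things worth checking here. First, fetch() never returns TRUE - it returns the fetched row (an array) or false - so a boolean test against the return value can mislead. Second, the hard-coded test assigns $md5_username rather than $md5_password, so the literal "waffles" is never actually bound. Beyond that, PDO's default error mode fails silently; a minimal sketch (assuming $dbcon is a PDO instance) that makes prepare/execute failures impossible to miss:

        $dbcon->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        try {
            $stmt = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
            $stmt->execute(array(':username' => $plain_username, ':password' => $md5_password));
            $row = $stmt->fetch(PDO::FETCH_ASSOC);   // false when no row matched
            return $row ? $row['id'] : false;
        } catch (PDOException $e) {
            error_log($e->getMessage());             // surfaces the real failure
            return false;
        }

    With ERRMODE_EXCEPTION set, a failing prepare() or execute() throws instead of quietly returning false, which usually pinpoints problems like a wrong table name or an unconnected handle immediately.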

  • Can I create template-based library objects in Dreamweaver CS5?

    - by Danjah
    At work we need two 'streams' of template. The first are general layout templates, like the ones already available in the MX through CS5 packages (except we'd have our own customised ones). The second are more granular objects, some of which are functional. In both cases, I don't want Jimmy to be able to wreak havoc inside anything other than the 'editable regions' which make up the templates. Now this is fine if I stick with the first scenario (layout templates), where there's simply a big chunk of editable region for good ole Jim to sprawl into - think of this as the 'body content' area. But I really do need these granular library (or snippet) objects to work in the same way. Unfortunately, with my attempts so far they don't work as I'd have thought - perhaps for good reason? When I create a blank template and throw in my chunk of HTML (unobtrusive JS and external CSS use selectors in this HTML to provide style and function) and save it as a new library item or snippet, all looks well. Then I create a new document based on a layout template and save it as a plain HTML file (still all good so far). Next I drop in my custom library item... still all good... but then I go to save the document and it only allows me to save it as a new template! I expected it would just allow me to save it as HTML and have it simply respect the defined editable regions, as happens in the containing page's 'body content' editable region. Apologies if that got specific and technical quite quickly, but it is quite particular. If you want some example files, let me know and I'll zip some up. Many thanks :) P.S. It is not a requirement that library objects somehow inject their dependency files into the newly created page - I already know what they'll be. Also, I know I must 'detach from original' once I drop a library item into a document, which then allows customisation of the library object.

  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months now, and a major sticking point in our team's development process has been dealing with rifts between Visual C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished up at this point, but I ran into a moderate bug just today that had me wondering whether Visual C++ is easier on newbies (like me) by design. In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following:

        int main() {
            string a, b, c;
            // Do stuff with a and b.
            c = get_string(a, b);
        }

        string get_string(string a, string b) {
            const char * a_ch, b_ch;
            a_ch = strtok(a.c_str(), ",");
            b_ch = strtok(b.c_str(), ",");
        }

    strtok is infamous for being great at tokenizing, but equally great at destroying the original string being tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separators had been completely removed from the strings. Here's an example in case I'm unclear: if I set a = "Jim,Bob,Mary" and b = "Grace,Soo,Hyun", they ended up as a = "JimBobMary" and b = "GraceSooHyun" instead of staying the same like I wanted. However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory to the strings and copying them the "standard" way, but the only way that worked was using malloc() and free(), which I hear is discouraged in C++. While I'm curious about that, the real question I have is this: why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts that I experienced while trying to make the code cross-platform.) Thanks in advance! -Carlos Nunez
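    A plausible explanation, though it would take the exact toolchain versions to confirm: gcc's libstdc++ used a reference-counted (copy-on-write) std::string at the time, so the by-value parameters inside get_string could share one buffer with a and b in main, and strtok's writes through the pointer from c_str() showed up in the caller; Visual C++'s std::string always deep-copies, which hid the damage. Either way, writing through the const char* returned by c_str() is undefined behavior, so both outcomes are "allowed". A minimal sketch of a portable, non-destructive alternative that needs no malloc()/free():

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Tokenize by reading from a stream built on a copy of the input;
        // the caller's string is never touched.
        std::vector<std::string> tokenize(const std::string& s, char sep)
        {
            std::vector<std::string> out;
            std::istringstream in(s);
            std::string tok;
            while (std::getline(in, tok, sep))
                out.push_back(tok);
            return out;
        }

        int main()
        {
            std::string a = "Jim,Bob,Mary";
            std::vector<std::string> parts = tokenize(a, ',');
            for (std::size_t i = 0; i < parts.size(); ++i)
                std::cout << parts[i] << '\n';
            std::cout << a << '\n';  // still "Jim,Bob,Mary" on every compiler
        }

    Because nothing here depends on the internals of the string implementation, the behavior is identical under gcc and Visual C++.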

  • What exactly is the danger of an uninitialized pointer in C?

    - by akh2103
    I am trying to get a handle on C as I work my way through Trevor Jim's "Cyclone: A safe dialect of C" for a PL class. Jim and his co-authors are trying to make a safe version of C, so they eliminate uninitialized pointers in their language. Googling around a bit on uninitialized pointers, it seems like uninitialized pointers point to random locations in memory. It seems like this alone makes them unsafe: if you dereference an uninitialized pointer, you jump to an unsafe part of memory. Period. But the way Jim talks about them seems to imply that it is more complex. He cites the following code, and explains that when the function FrmGetObjectIndex dereferences f, it isn't accessing a valid pointer, but rather an unpredictable address - whatever was on the stack when the space for f was allocated. What does Jim mean by "whatever was on the stack when the space for f was allocated"? Are "uninitialized" pointers initialized to random locations in memory by default? Or does their "random" behavior come from the memory allocated for these pointers being filled with leftover values (that are then dereferenced) from earlier activity on the stack?

        Form *f;
        switch (event->eType) {
            case frmOpenEvent:
                f = FrmGetActiveForm();
                ...
            case ctlSelectEvent:
                i = FrmGetObjectIndex(f, field);
                ...
        }
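    The second reading is the correct one: nothing initializes the pointer to anything, randomly or otherwise. A local variable simply occupies a slot on the stack, and until it is assigned it holds whatever bytes the previous use of that slot left behind - in the cited code, an event sequence that reaches ctlSelectEvent without first passing through frmOpenEvent reads f before it was ever assigned. A minimal sketch that tends to make the leftover-bytes behavior visible (the values are illustrative, and the read itself is undefined behavior, which is exactly what Cyclone is closing off):

        #include <stdio.h>

        static void leave_garbage(void)
        {
            void *x = (void *)0xDEADBEEF;  /* writes a recognizable value into this stack frame */
            printf("wrote %p\n", x);
        }

        int main(void)
        {
            leave_garbage();
            {
                void *p;                  /* uninitialized: just reuses old stack bytes    */
                printf("read  %p\n", p);  /* undefined behavior: often prints 0xdeadbeef,  */
            }                             /* but may print anything, or trip a checker     */
            return 0;                     /* such as Valgrind or ASan                      */
        }

    So an uninitialized pointer is dangerous not because it is set to a random location, but because it aliases stale data; dereferencing it can read or, worse, write through an address that merely happens to look valid.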

  • Virgin STI Help

    - by Mutuelinvestor
    I am working on a horse racing application and I'm trying to utilize STI to model a horse's connections. A horse's connections comprise his owner, trainer and jockey. Over time, connections can change for a variety of reasons:

        - The horse is sold to another owner
        - The owner switches trainers or jockeys
        - The horse is claimed by a new owner

    As it stands now, I have modeled this with the following tables: horses, connections (join table), and stakeholders (stakeholder has three subclasses: jockey, trainer & owner). Here are my classes and associations:

        class Horse < ActiveRecord::Base
          has_one :connection
          has_one :owner_stakeholder, :through => :connection
          has_one :jockey_stakeholder, :through => :connection
          has_one :trainer_stakeholder, :through => :connection
        end

        class Connection < ActiveRecord::Base
          belongs_to :horse
          belongs_to :owner_stakeholder
          belongs_to :jockey_stakeholder
          belongs_to :trainer_stakeholder
        end

        class Stakeholder < ActiveRecord::Base
          has_many :connections
          has_many :horses, :through => :connections
        end

        class Owner < Stakeholder
          # Owner specific code goes here.
        end

        class Jockey < Stakeholder
          # Jockey specific code goes here.
        end

        class Trainer < Stakeholder
          # Trainer specific code goes here.
        end

    On the database end, I have inserted a Type column in the connections table. Have I modeled this correctly? Is there a better/more elegant approach? Thanks in advance for your feedback. Jim
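    One thing to double-check: for STI the type column belongs on the table that backs the inherited models - stakeholders here - not on the connections join table, which is a plain model. Also, the belongs_to declarations only resolve if connections carries a matching foreign key for each role. A minimal sketch (column names assumed) that spells the mapping out and gives Horse natural accessors:

        # connections:  horse_id, owner_id, trainer_id, jockey_id
        # stakeholders: type ('Owner' | 'Trainer' | 'Jockey'), name, ...
        class Connection < ActiveRecord::Base
          belongs_to :horse
          belongs_to :owner,   :class_name => 'Owner'    # owner_id   -> stakeholders.id
          belongs_to :trainer, :class_name => 'Trainer'  # trainer_id -> stakeholders.id
          belongs_to :jockey,  :class_name => 'Jockey'   # jockey_id  -> stakeholders.id
        end

        class Horse < ActiveRecord::Base
          has_one :connection
          has_one :owner,   :through => :connection
          has_one :trainer, :through => :connection
          has_one :jockey,  :through => :connection
        end

        # usage: horse.owner, horse.trainer, horse.jockey

    If the history of past connections matters (sales, claims, jockey changes), switching to has_many :connections with an effective-date column, plus a has_one scoped to the current row, keeps old rows queryable.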

  • Select * from 'many to many' SQL relationship

    - by Rampant Creative Group
    I'm still learning SQL and my brain is having a hard time with this one. Say I have 3 tables: teams, players, and teams_players as my link table. All I want to do is run a query to get each team and the players on them. I tried this:

        SELECT *
        FROM teams
        INNER JOIN teams_players ON teams.id = teams_players.team_id
        INNER JOIN players ON teams_players.player_id = players.id

    But it returned a separate row for each player on each team. Is JOIN the right way to do it or should I be doing something else?

    EDIT: Ok, so from what I'm hearing, this isn't necessarily a bad way to do it. I'll just have to group the data by team while I'm doing my loop. I have not yet tried the modified SQL statements provided, but I will today and get back to you. To answer the question about structure - I guess I wasn't thinking about the returned row structure, which is part of what led to my confusion. In this particular case, each team is limited to 4 players (or fewer), so the structure that would be helpful to me is something like the following:

        teams.id | teams.name | players.id | players.name | players.id | players.name | players.id | players.name | players.id | players.name
        1        | Team ABC   | 1          | Jim          | 2          | Bob          | 3          | Ned          | 4          | Roy
        2        | Team XYZ   | 2          | Bob          | 3          | Ned          | 5          | Ralph        | 6          | Tom
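    If one row per team really is the goal, the join is still right; the grouping can be pushed into SQL. A minimal sketch (MySQL syntax assumed, since GROUP_CONCAT is MySQL-specific):

        SELECT t.id,
               t.name,
               GROUP_CONCAT(p.name ORDER BY p.name SEPARATOR ', ') AS roster
        FROM teams t
        INNER JOIN teams_players tp ON tp.team_id = t.id
        INNER JOIN players p        ON p.id = tp.player_id
        GROUP BY t.id, t.name;

    That returns one row per team with the players collapsed into a single roster column; getting the players back as separate columns (a pivot) is possible but clumsy in plain SQL, which is why looping and grouping in application code is the usual answer.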

  • Have to log in twice: PHP sessions and login troubles with Chrome and Opera

    - by Robert
    The problem I am encountering is that with my login form I have to log in twice for the session to register properly, but only in Chrome (my version is 4.0.249.89) and Opera (my version is 10.10). Here is the stripped-down code that I am testing on:

    Login page:

        session_start();
        $_SESSION['user_id'] = 8;
        $_SESSION['user_name'] = 'Jim';
        session_write_close();
        header('Location: http://www.my-domain-name.com/');
        exit();

    Home page:

        session_start();
        if ( isset($_SESSION['user_id']) ) {
            echo "You are logged in!";
        } else {
            echo "You are NOT logged in!";
        }

    Logout page:

        session_start();
        session_unset();
        session_destroy();
        header('Location: http://www.my-domain-name.com/');
        exit();

    Currently, under a fresh load with no cookies, if I go to my-domain-name.com/login/ it will redirect to the home page and say "You are NOT logged in!" but if I go there again it will say "You are logged in!". Any ideas? Thanks for your help.
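    One classic cause of this exact symptom, offered only as a hypothesis: the login page is reached at my-domain-name.com (no www) but the redirect lands on www.my-domain-name.com, so the session cookie set for one host isn't sent to the other on the first round trip - and browsers differ in how they handle that. A minimal sketch of the usual fix, pinning the cookie to the whole domain before every session_start():

        // Note the leading dot: the cookie is then valid for both
        // my-domain-name.com and www.my-domain-name.com.
        session_set_cookie_params(0, '/', '.my-domain-name.com');
        session_start();

    Alternatively, redirecting to the same host the form was submitted to sidesteps the issue entirely.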

  • WCF throttling and emptying the listenBacklog quickly

    - by user1161625
    I'm sort of a WCF newbie (inheriting tasks from someone) and I'm trying to better throttle my application. We have a service that sends print jobs to the Windows print queue and it seems that rather than dumping all of the jobs into the queue it holds on to them in the listenBacklog and slowly releases them to the Windows print queue. Here is my throttling behavior:

        <behavior name="throttling">
          <serviceThrottling maxConcurrentCalls="144" maxConcurrentSessions="1168" />
          <serviceDebug httpHelpPageEnabled="true" includeExceptionDetailInFaults="true"/>
        </behavior>

    I'm using a custom binding for this service also, which is here:

        <customBinding>
          <binding name="oneWayPrintBinding" receiveTimeout="00:30:00">
            <reliableSession maxPendingChannels="16" inactivityTimeout="01:30:00" />
            <binaryMessageEncoding>
              <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" />
            </binaryMessageEncoding>
            <tcpTransport maxReceivedMessageSize="67108864" listenBacklog="1024" />
          </binding>
        </customBinding>

    Any recommendations would be very helpful. Thank you! -Jim

  • Populate tableView with more than one array

    - by Ewoods
    The short version: Is there a way to populate one specific row in a tableView with one value from one array, then populate another row in that same tableView with one value from a different array? For example, cell 1 would have the first value from Array A, cell 2 would have the first value from Array B, cell 3 would have the first value from Array C, etc. The long version: I hope this isn't too confusing. I've got an array of names, and then three more arrays with actions associated with those people. For example, the names array has Jim, Bob, and Sue, and then there's an array for eating, reading, and sleeping that records every time each person does one of these things (all of these arrays are populated from a MySQL database). The names array is used to populate a root tableView. Tapping on one of the names brings up a detail view controller that has another tableView that only has three rows. This part is all working fine. What I want to happen is when I tap on a name, it moves to the detail view and the three cells would then show the last event for that person for each of the three activities. Tapping on one of those three events then moves to a new view controller with a tableView that shows every event for that category. For example, if I tapped on Bob, the second page would show the last time Bob ate, read, and slept. Tapping on the first row would bring up a table that showed every time Bob has eaten. So far I've only been able to populate the second tableView with all of the rows from one of the arrays. I need it the other way around (one row from all of the arrays).
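    A pattern that fits the three-row detail table is to let the row index choose the source array in the data source callbacks. A minimal sketch in Objective-C (the property names are assumed, and this would live in the detail view controller):

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return 3;  // eating, reading, sleeping
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView
                 cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"EventCell"];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:@"EventCell"] autorelease];
            }
            NSArray *source;
            switch (indexPath.row) {                // the row index picks the array
                case 0:  source = self.eatingEvents;   break;
                case 1:  source = self.readingEvents;  break;
                default: source = self.sleepingEvents; break;
            }
            cell.textLabel.text = [source lastObject];  // most recent event only
            return cell;
        }

    didSelectRowAtIndexPath can then hand the full chosen array to the third controller, which shows every element of just that one array.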

  • Delivering activity feed items in a moderately scalable way

    - by sotangochips
    The application I'm working on has an activity feed where each user can see their friends' activity (much like Facebook). I'm looking for a moderately scalable way to show a given user's activity stream on the fly. I say 'moderately' because I'm looking to do this with just a database (PostgreSQL) and maybe memcached. For instance, I want this solution to scale to 200k users, each with 100 friends. Currently, there is a master activity table that stores the rendered HTML for the given activity (Jim added a friend, George installed an application, etc.). This master activity table keeps the source user, the HTML, and a timestamp. Then, there's a separate ('join') table that simply keeps a pointer to the person who should see this activity in their friend feed, and a pointer to the object in the main activity table. So, if I have 100 friends and I do 3 activities, then the join table will grow to 300 items. Clearly this table will grow very quickly. It has the nice property, though, that fetching activity to show to a user takes a single (relatively) inexpensive query. The other option is to just keep the main activity table and query it by saying something like:

        select * from activity where source_user in (1, 2, 44, 2423, ... my friend list)

    This has the disadvantage that you're querying for users who may never be active, and as your friend list grows, this query can get slower and slower. I see the pros and the cons of both sides, but I'm wondering if some SO folks might help me weigh the options and suggest one way or the other. I'm also open to other solutions, though I'd like to keep it simple and not install something like CouchDB, etc. Many thanks!
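    For reference, the read path that makes the first (fan-out-on-write) design cheap is a single indexed lookup; a minimal sketch with assumed table and column names:

        -- one row per (recipient, activity), written at post time
        SELECT a.html, a.created_at
        FROM activity_recipients r
        JOIN activities a ON a.id = r.activity_id
        WHERE r.recipient_id = :user_id           -- placeholder parameter
        ORDER BY a.created_at DESC
        LIMIT 20;

        -- kept fast by an index such as:
        -- CREATE INDEX idx_feed ON activity_recipients (recipient_id, activity_id);

    The join table does grow as users x friends x activity, but old rows can be pruned aggressively (a feed rarely needs more than the last few hundred items per user), which keeps the trade-off manageable at the 200k-user scale described.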

  • Drupal & Regular PHP Integration

    - by user333128
    I'm building a new website which has one core application and many content pages. Content pages are mostly dynamic and I require a way to manage this dynamic content on a regular basis. The core application's main functionality is a 3-step process of reading user data (input page), reading data from MySQL (product page) and submitting an application to an email address (application page). Ideally I would like to build the core application in regular PHP and leverage Drupal for its content management capabilities. Can Drupal and regular PHP be integrated as easily as I suggest? My feeling is that coding the core application as Drupal module(s) will add layers of complexity that could be difficult to code from the outset and maintain later on as the system matures - so I would really like to just use regular PHP. Let me explain where dynamic content (managed by the CMS) intersects with the core application: dynamic content such as FAQ data is used both on the 'normal' help pages and also within a mini-feed displayed within core application pages, down a right-hand-side column. In this column, 3 random questions are pulled from the database and displayed as a feed. When users click on an FAQ question they are not taken away from the core application product page but are instead shown the data in a pop-up window displaying the question and answer. In addition, users can browse other questions and answers through a simple navigation menu within this pop-up. There are 3 feeds like this that I require on the core application product page. So, what is the ideal solution here in terms of 'keeping things simple' for both the management of dynamic content and the ease of coding the core application? Can 'regular PHP' and Drupal co-exist 'peacefully'? If so, how is this technically possible? Because there is some content managed by Drupal contained within core application pages, can the core application still be coded in regular PHP? Any advice / suggestions? Thank you! Jim.
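    On the technical feasibility: yes, a standalone PHP application can use Drupal-managed content by bootstrapping Drupal at the top of its scripts, which is the standard bridge in the Drupal 6/7 era. A minimal sketch (the path is hypothetical):

        // Bootstrap Drupal from a plain-PHP script so its APIs and database
        // layer become available to the surrounding application.
        define('DRUPAL_ROOT', '/var/www/site/drupal');
        require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
        drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

        // From here, core functions such as node_load() can pull FAQ nodes
        // for the right-hand mini-feed on the core application's pages.

    This keeps the application in regular PHP while the FAQ feeds, pop-up content, and other managed fragments stay editable in Drupal.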

  • jQuery: accessing dynamic class selectors

    - by dinotom
    Given the following HTML rendering:

        <fieldset id="fld_Rye">
          <legend>7 Main St.</legend>
          <div>
            <table>
              <tr>
                <th class="wideCol"><b><i>Service Description</i></b></th>
                <th class="wideCol"><b><i>Service Name</i></b></th>
                <th class="normalPlusWidth"><b><i>Contact Name</i></b></th>
              </tr>
              <ItemTemplate>
                <tr class="">
                  <td class="servDesc">&nbsp;<b>Plumbing Services</b></td>
                  <td class="servName">&nbsp;<b>Flynnsters's Plumbing</b></td>
                  <td class="servContact">&nbsp;<b>Jim Flynnster</b></td>
                </tr>
              </ItemTemplate>

    I am trying to access all the td's with a class of servDesc, get their widths into an array, and get the max width from that array to reset the CSS width of that class using jQuery on page load. I can't seem to get the right selector for those tags, and I've tried at least 50 variations. My latest try:

        var maxTdWidth;
        var servDescCols = [];
        $("#fld_Rye td#servDesc").each(function () {
            alert("found one");
            servDescCols.push(this.width());
        });
        maxTdWidth = Math.max.apply(Math, servDescCols);
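    Two details are likely blocking this: servDesc is a class, so the selector needs td.servDesc rather than td#servDesc, and inside each() the this is a raw DOM element, which has no width() method until it is wrapped. A minimal sketch of a corrected version:

        $(function () {
            // collect the width of every matching cell
            var widths = $("#fld_Rye td.servDesc").map(function () {
                return $(this).width();     // wrap the DOM node first
            }).get();                       // .get() turns the jQuery map into a plain array

            var maxTdWidth = Math.max.apply(Math, widths);
            $("td.servDesc").css("width", maxTdWidth);  // apply the max back to the class
        });

    The same two fixes (class selector, $(this) wrapping) apply to the servName and servContact columns as well.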

  • Can a lambda be used to change a List's values in-place (without creating a new list)?

    - by Saint Hill
    I am trying to determine the correct way of transforming all the values in a List using the new lambdas feature in the upcoming release of Java 8, without creating a new List. This pertains to times when a List is passed in by a caller and needs to have a function applied to change all the contents to a new value - for example, the way Collections.sort(list) changes a list in-place. What is the easiest way given this transforming function and this starting list:

        String function(String s) {
            return [some change made to value of s];
        }

        List<String> list = Arrays.asList("Bob", "Steve", "Jim", "Arbby");

    The usual way of applying a change to all the values in-place was this:

        for (int i = 0; i < list.size(); i++) {
            list.set(i, function(list.get(i)));
        }

    Do lambdas and Java 8 offer an easier and more expressive way - a way to do this without setting up all the scaffolding of the for(..) loop?
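    Java 8 does add exactly this: List.replaceAll(UnaryOperator<E>) applies the operator to every element in place, creating no new list. It even works on the fixed-size list returned by Arrays.asList, because that list supports set() (only add/remove are rejected). A minimal sketch:

        import java.util.Arrays;
        import java.util.List;

        public class InPlaceDemo {
            // stand-in for the question's function(String s)
            static String transform(String s) {
                return s.toUpperCase();  // whatever change is needed
            }

            public static void main(String[] args) {
                List<String> list = Arrays.asList("Bob", "Steve", "Jim", "Arbby");
                list.replaceAll(InPlaceDemo::transform);  // or: s -> transform(s)
                System.out.println(list);                 // [BOB, STEVE, JIM, ARBBY]
            }
        }

    The method reference form keeps the call site as terse as the Collections.sort(list) example.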

  • Testing for optional node - Delphi XE

    - by Seti Net
    What is the proper way to test for the existence of an optional node? A snippet of my XML is:

        <Antenna>
          <Mount Model="text" Manufacture="text">
            <BirdBathMount/>
          </Mount>
        </Antenna>

    But it could also be:

        <Antenna>
          <Mount Model="text" Manufacture="text">
            <AzEl/>
          </Mount>
        </Antenna>

    The child of Mount could be either BirdBathMount or AzEl, but not both... In Delphi XE I have tried:

        if (MountNode.ChildNodes.Nodes['AzEl'] <> unassigned) then            // does not work
        if (MountNode.ChildNodes['BirdBathMount'].NodeValue <> null) then     // does not work
        if (MountNode.BirdBathMount.NodeValue <> null) then                   // does not work

    I use XMLSpy to create the schema and the example XML, and they parse correctly. I use Delphi XE to create the bindings and it works on most other combinations fine. This must have a simple answer that I have just overlooked - but what? Thanks...... Jim
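    One approach worth trying: the Nodes['...'] indexer either raises or auto-creates a missing child, depending on the document's options, whereas IXMLNodeList.FindNode simply returns nil when the child is absent. A minimal sketch (assuming MountNode is an IXMLNode from XMLIntf; the handler names are hypothetical):

        var
          Child: IXMLNode;
        begin
          Child := MountNode.ChildNodes.FindNode('AzEl');
          if Child <> nil then
            HandleAzEl(Child)          // <AzEl/> is present
          else
          begin
            Child := MountNode.ChildNodes.FindNode('BirdBathMount');
            if Child <> nil then
              HandleBirdBath(Child);   // <BirdBathMount/> is present
          end;
        end;

    Since the schema allows exactly one of the two children, checking for one and falling back to the other covers both documents.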

  • How to fire server-side methods with jQuery

    - by Nasser Hajloo
    I have a large application and I'm going to enable shortcut keys for it. I found 2 jQuery plug-ins (demo plug-in 1 - demo plug-in 2) that do this for me; you can find both of them in this post on StackOverflow. My application is already complete and I'm going to add some functionality to it, so I don't want to write the code again. Since a shortcut just catches a key combination, I'm wondering how I can call the server methods which a shortcut key should fire. So how do I use either of these plug-ins by just calling the methods I'd written before? Actually, how do I fire server methods with jQuery? You can also find a good article here, by Dave Ward.

    Update: here is the scenario. When the user presses CTRL+Del, the GridView1_OnDeleteCommand should fire, so I have this:

        protected void grdDocumentRows_DeleteCommand(object source, System.Web.UI.WebControls.DataGridCommandEventArgs e)
        {
            try
            {
                DeleteRow(grdDocumentRows.DataKeys[e.Item.ItemIndex].ToString());
                clearControls();
                cmdSaveTrans.Text = Hajloo.Portal.Common.Constants.Accounting.Documents.InsertClickText;
                btnDelete.Visible = false;
                grdDocumentRows.EditItemIndex = -1;
                BindGrid();
            }
            catch (Exception ex)
            {
                Page.AddMessage(GetLocalResourceObject("AProblemAccuredTryAgain").ToString(), MessageControl.TypeEnum.Error);
            }
        }

        private void BindGrid()
        {
            RefreshPage();
            grdDocumentRows.DataSource = ((DataSet)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.TRANSACTIONS_TABLE];
            grdDocumentRows.DataBind();
        }

        private void RefreshPage()
        {
            Creditors = (decimal)((AccDocument)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.ACCDOCUMENT_TABLE].Rows[0][AccDocument.ACCDOCUMENT_CREDITORS_SUM_FIELD];
            Debtors = (decimal)((AccDocument)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.ACCDOCUMENT_TABLE].Rows[0][AccDocument.ACCDOCUMENT_DEBTORS_SUM_FIELD];

            if ((Creditors - Debtors) != 0)
                labBalance.InnerText = GetLocalResourceObject("Differentiate").ToString() + "?" + (Creditors - Debtors).ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF) + "?";
            else
                labBalance.InnerText = GetLocalResourceObject("Balance").ToString();

            lblSumDebit.Text = Debtors.ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF);
            lblSumCredit.Text = Creditors.ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF);

            if (grdDocumentRows.EditItemIndex == -1)
                clearControls();
        }

    The other scenarios are the same. How can I enable shortcuts for this kind of code (using session, NHibernate, etc.)?
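    For the "fire a server method from script" half, the pattern Dave Ward's article describes is a static page method (or web service method) invoked via $.ajax. A minimal sketch with hypothetical names, assuming the chosen plug-in registers handlers with something like shortcut.add(), as the openjs shortcut library does:

        // Code-behind: a static method callable from script. Being static, it
        // reaches session state through HttpContext.Current rather than Page.
        [System.Web.Services.WebMethod(EnableSession = true)]
        public static string DeleteDocumentRow(string rowKey)
        {
            // server-side delete logic, NHibernate calls, etc. go here
            return "ok";
        }

        // Script: bind the shortcut and call the page method.
        shortcut.add("Ctrl+Del", function () {
            $.ajax({
                type: "POST",
                url: "Documents.aspx/DeleteDocumentRow",   // page name is hypothetical
                data: '{"rowKey":"42"}',
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (msg) { /* refresh the grid here */ }
            });
        });

    The catch for this particular grid: page methods are static, so they cannot touch grdDocumentRows directly. The usual compromise is to do the data work in the method and re-bind on the client, or fall back to __doPostBack from the shortcut handler to trigger the existing server-side event.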

  • I am using a constructor for my array, but why is it saying I need a return type?

    - by JB
        using System;
        using System.IO;
        using System.Data;
        using System.Text;
        using System.Drawing;
        using System.Data.OleDb;
        using System.Collections;
        using System.ComponentModel;
        using System.Windows.Forms;
        using System.Drawing.Printing;
        using System.Collections.Generic;

        namespace Eagle_Eye_Class_Finder
        {
            public class GetSchedule
            {
                public class IDnumber
                {
                    public string Name { get; set; }
                    public string ID { get; set; }
                    public string year { get; set; }
                    public string class1 { get; set; }
                    public string class2 { get; set; }
                    public string class3 { get; set; }
                    public string class4 { get; set; }

                    IDnumber[] IDnumbers = new IDnumber[3];

                    public GetSchedule() // here I get an error saying the method must have a return type???
                    {
                        IDnumbers[0] = new IDnumber() { Name = "Joshua Banks", ID = "900456317", year = "Senior", class1 = "TEET 4090", class2 = "TEET 3020", class3 = "TEET 3090", class4 = "TEET 4290" };
                        IDnumbers[1] = new IDnumber() { Name = "Sean Ward", ID = "900456318", year = "Junior", class1 = "ENGNR 4090", class2 = "ENGNR 3020", class3 = "ENGNR 3090", class4 = "ENGNR 4290" };
                        IDnumbers[2] = new IDnumber() { Name = "Terrell Johnson", ID = "900456319", year = "Sophomore", class1 = "BUS 4090", class2 = "BUS 3020", class3 = "BUS 3090", class4 = "BUS 4290" };
                    }

                    static void ProcessNumber(IDnumber myNum)
                    {
                        StringBuilder myData = new StringBuilder();
                        myData.AppendLine(IDnumber.Name);
                        myData.AppendLine(": ");
                        myData.AppendLine(IDnumber.ID);
                        myData.AppendLine(IDnumber.year);
                        myData.AppendLine(IDnumber.class1);
                        myData.AppendLine(IDnumber.class2);
                        myData.AppendLine(IDnumber.class3);
                        myData.AppendLine(IDnumber.class4);
                        MessageBox.Show(myData);
                    }

                    public string GetDataFromNumber(string ID)
                    {
                        foreach (IDnumber idCandidateMatch in IDnumbers)
                        {
                            if (IDCandidateMatch.ID == ID)
                            {
                                StringBuilder myData = new StringBuilder();
                                myData.AppendLine(IDnumber.Name);
                                myData.AppendLine(": ");
                                myData.AppendLine(IDnumber.ID);
                                myData.AppendLine(IDnumber.year);
                                myData.AppendLine(IDnumber.class1);
                                myData.AppendLine(IDnumber.class2);
                                myData.AppendLine(IDnumber.class3);
                                myData.AppendLine(IDnumber.class4);
                                return myData;
                            }
                        }
                        return "";
                    }
                }
            }
        }
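    The error message is the clue: the nested IDnumber class is never closed before public GetSchedule() appears, so the compiler sees GetSchedule() as an ordinary method of IDnumber - and a member whose name doesn't match its enclosing class must declare a return type. A sketch of the likely fix, moving the array and constructor out to the class whose name they match:

        public class GetSchedule
        {
            public class IDnumber
            {
                public string Name { get; set; }
                public string ID { get; set; }
                // ... remaining properties as before
            }   // <-- close the data class here

            IDnumber[] IDnumbers = new IDnumber[3];

            public GetSchedule()   // now a genuine constructor: no return type needed
            {
                IDnumbers[0] = new IDnumber { Name = "Joshua Banks", ID = "900456317" /* ... */ };
                // ... and so on for the other two entries
            }
        }

    A few follow-on fixes the compiler will then point out: myData is a StringBuilder, so MessageBox.Show and the string return need myData.ToString(); the AppendLine calls should read from the instance (myNum or idCandidateMatch), not from the type name IDnumber; and C# is case-sensitive, so idCandidateMatch and IDCandidateMatch must match.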

  • Visiting the Emtel Data Centre

    Back in February, at the first event of the Emtel Knowledge Series (EKS), I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting there for a selected group of our community members. Well, let's say it like this... my first approach wasn't that promising and far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen...

    Setting up an appointment and prerequisites

    The breakthrough came during a Boot Camp for Windows Phone 8.1 app development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at Emtel World, Ebene. Apart from learning bits and pieces about Universal Apps, I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre, you are asked to provide the full name and national ID of everyone visiting. It should also be noted that only a limited number of seats were available. Anyway, armed with this information I posted through the usual social media channels. Responses came in very quickly and, on a first-come, first-served (FCFS) basis, I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was!

    Arriving at the Emtel Data Centre

    As I was coming from Flic en Flac towards the north, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean might eventually be living in another time zone compared to the rest of us. Anyway, after an extended stop our group was complete and we arrived just in time in Arsenal to meet and greet Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite a good laugh at the mixture of old and new ID cards, it was straightforward processing. The guard was very helpful and guided us to the waiting area at the entrance of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations for our visit: no photography, no touching the buttons, and following his instructions throughout. Of course!

    Inside the data centre

    Next, he explained the multi-factor authentication system, which combines biometric data, like a fingerprint reader, with a "classic" PIN panel. The Emtel data centre provides multiple services: next to co-location for your own hardware, they also offer storage options for your backup and archive data in their massive, fire-resistant vault. It was very impressive to learn about the considerations that went into choosing the right location and setting up the premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings.

    [Image: Strengths of the Emtel TIER 3 Data Centre, Mauritius]

    Finally, we were guided into the first server room. And wow, the whole setup is cleverly planned and laid out in the architecture: from the false floors and ceilings providing optimum air flow, to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions for analysing and watching changes in temperature, smoke detection and other parameters. Not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres worldwide, and Emtel's has been designed as a TIER 4 building; due to the lack of an alternative power supplier on the island, however, it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there will be a second energy supplier next to CEB... time will tell. Luckily, the data centre is integrated into the national fibre-optic gigabit ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius, and through several other providers for onward connectivity.

    [Image: The data centre is part of the National Fibre Optic Gigabit Ring and has redundant internet connectivity onwards.]

    Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions about the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights: seeing their wall of screens monitoring activity on the data lines, the managed servers, and the activity in and around the building was great. Even though I have been using a multi-head setup for years, I cannot keep up with that setup... ;-) But I did get a couple of ideas on how to improve my workspaces at the office.

    Clear advantages of hosting your e-commerce and mobile backends locally

    After the completely isolated NOC area, we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. First of all, there is lots of space for full-sized racks and therefore co-location of your own hardware. Given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here. This would provide an improved user experience in terms of site performance and latency, especially for upcoming e-commerce solutions for two of my local clients. Later on, we started a conversation about additional services that could be a catalyst for the local market, attracting more small and medium companies to include the data centre in their evaluations of online activities. To date, Emtel does not provide virtualised server environments, but there may be plans to cover this field in the future. Emtel is a mobile operator and internet connectivity provider first of all; entering the market of managed and virtualised server infrastructures, including cloud storage and computing capacity, is rather new, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... And I appreciate Emtel's approach: build a solid foundation first, then add new services on top of it. Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs.

    [Video: Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius]

    More details are thoroughly described in Emtel's brochure on their data centre. Check out their PDF document here.

    Thanks for this opportunity

    Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC), I would like to thank everyone at Emtel involved in making it happen, especially Arvin Lockee and Kamlesh Bokhoree, for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank you!

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
    Does your CHECKDB hurt, Argenis?

    There is a classic blog series by Paul Randal [blog|twitter] called "CHECKDB From Every Angle" which is pretty much mandatory reading for anybody who's even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform - makes my fingers hurt just from typing it). Of particular interest is the post "Consistency Options for a VLDB" - in it, Paul provides solid, timeless advice (I use the word "timeless" because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used - SAS, SATA or even SSD... and I actually didn't pay much attention to how long it was taking, or even bother to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes - I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption - and potentially large data loss. In the last two weeks I paid more attention to this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand-new all-flash storage in the SAN! I couldn't really explain it, and was almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O - around 450MB/sec - and then it would settle at a very slow rate of 10MB/sec or so. "Hmm", I thought - "CHECKDB is just not pushing the I/O subsystem hard enough". Perfmon confirmed the vendor's observations.

    Dreaded @BlobEater

    What was CHECKDB doing all that time while doing so little I/O? Eating blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it's just a matter of time before we deal with it. But the reality today is that CHECKDB comes to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via Twitter. But alas, our pathological table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine for internal synchronization - but how could I help speed this up? After all, this is stuff that doesn't have a lot of knobs to tweak. (There's a fantastic level-500 talk by Bob Ward from Microsoft CSS [blog|twitter] called "Inside SQL Server Latches" given at PASS 2010 - you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob's talk!)

    Failed Hypotheses

    Earlier this week I flew down to Palo Alto, CA, to visit our headquarters - and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called "Masterclass: A Day in the Life of a Database Transaction", where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late - that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier in the week, I thought, "How about I set the database to read-only?" I remembered one of the things Maciej had (jokingly) said in his talk: "if you don't want locking and blocking, set the database to read-only" (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10MB/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]).

    ...Eureka?

    This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I'll blog about separately, I sat down for a figurative smackdown with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn't possibly be experiencing the same scenario that Paul blogged about back in March, showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did: one filtered non-clustered index, with the sparse column as the index key (and only column). To prove that this was the problem, I went and set up a test.

    Yup, that'll do it

    The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column:

        CREATE DATABASE SparseColTest;
        GO

        USE SparseColTest;
        GO

        CREATE TABLE testTable (testCol smalldatetime SPARSE NULL);
        GO

        INSERT INTO testTable (testCol)
        VALUES (NULL);
        GO 1000000

    That's 1 million rows, and even though you're inserting NULLs, it's going to take a while. On my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database:

        DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    This runs extremely fast, at least on my test rig - 198 milliseconds. Now let's create a filtered non-clustered index on the sparse column:

        CREATE NONCLUSTERED INDEX [badBadIndex]
            ON testTable (testCol)
            WHERE testCol IS NOT NULL;

    With the index in place, let's run DBCC CHECKDB one more time:

        DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    On my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the "Argenis is not impressed" meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any such indexes - they're likely causing your consistency checks to run very, very slowly. Happy CHECKDBing, -Argenis

    P.S.: I plan to file a Connect item for this issue - I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature, and it makes a lot of sense to use them together. Watch this space and my Twitter timeline for a link.

  • Need personal advice on how to get out of a company

    - by SOfan
    Hi, I have been an SO user for the past 6 months, and this is the first time I am turning to SO for personal help. I have asked technical questions before with my real ID. I have been stuck inside a service-based IT company for the past year and haven't been able to decide whether to leave it, when to leave it, and how to leave it. I took 2 weeks of LWP (leave without pay) on medical grounds roughly at the end of my first year, and soon after reporting back I applied for 2 more months of LWP (on medical/personal grounds) with the intention of working on my health, taking up a hobby class to ward off depression and pessimism, having some fun in life, and looking for a job which I would really be excited about - one that interests me and matches my strengths. My leave starts this Monday. So in any case, I had set my mind on leaving the company after rejoining, hopefully with a job offer already in hand (after figuring out what I want to do). I can't stand the past project, past colleagues, the company, HR, or the pathetically low salary. But if I really listen to my heart, I don't want to have to go back to that office after my sabbatical and see those people again. I will have to resign after my sabbatical ends. The HR people perhaps won't like that; they may even accuse me, to my face or behind my back, of lying about the medical and personal reasons when the primary purpose of my leave must have been to hunt for a better job. Also, if they get nasty, they may force me to serve the 2-month notice period. There is no way I see myself resuming the old project, or starting new work, after the sabbatical - it would be a pain. Since they have already approved the 2 months of leave, ideally they should be able to relieve me the day after I rejoin, if they wanted to. But if they get nasty, I don't know whether they will mention my 2-month sabbatical in my experience letter or, more scary, the words "medical/personal reasons". My experience here is hard-earned; I have worked against my will and mostly it has been painful slogging, because I realize the importance of work experience in the IT industry. I am not greedy to have those 2 extra months included in my experience letter, but I don't want my experience letter messed up in a way that makes my next employer ask questions, get suspicious, or be wary of an ongoing medical problem. Being an emotional, moody person - somebody who can't stay in an environment once my mind and heart start hating it - I think it is perhaps best if I resign on Monday itself, telling them (politely) something like: look, I took the sabbatical for a reason, but I don't want to resume working in the company after it ends, so please accept my resignation; now tell me what you want to do about my leave request and my notice period, and when you are willing to relieve me. What should I write, and how?

    Some background: I work at an IT company in India. I am overqualified for the company: it is grossly underpaying me, and my educational qualifications exceed anyone's in the whole company, as I am both a CS undergrad and a CS grad. I joined this company after finishing grad school. I had self-doubts about my skills and interest as a programmer. I like doing research-oriented work, though I didn't have any particular success during grad school. My life here has been very hectic. The project, containing many, many sub-projects, has kept me on my toes, and I have never really liked the work. I have been playing against my strengths. There is also the company's strict internet usage policy: you can't read Gmail, and you can't browse any non-work-related sites, not even news. When working for a client, we can't even check company email from our machines; for that, one has to go to a small room with 5 kiosk-like machines. Most of the time those machines are not available, so it was not unusual to keep making rounds to these kiosks to check and browse company email. So it was not easy to keep in touch with basic company affairs for a not particularly careful person. Things like this, which are new to me, make me feel restricted. I am an indecisive person with a sense of failure, of self-doubt, of not meeting unrealistic expectations. Somewhere in the back of my mind, I envy my classmates who make smooth transitions from company to company without any gaps in their resumes. I, on the other hand, have gaps in my resume. I get tired after working in a place for some time. I have problems with colleagues in general: I am not particularly great with people, have few friends, am not known for a fun nature - rather serious, scholarly. I am not a typical, conventional female. I think females are usually more disciplined, but I am not: I reach the office late (though after informing my manager). I don't want to blame them entirely, because going by my past, it is not unusual for me to be indecisive about things. I also had doubts about my ability as a researcher and my chances of succeeding there; about building relationships in a group; about having something to talk about beyond work. I get cut off from people. Peer pressure. I make blunders in coding and lose patience. Consciously or unconsciously, I feel contempt for the people here, the work here, the environment here. I have doubts about whether I should go to a place which does innovation and research-oriented work - the big product companies with great, motivated, competent people passionate about the products they are building - but then I also doubt my ability to survive there. I have identified that an ideal job for me would be a 4-days-a-week, high-salary job. When I am among people in the company or team, I can't think much; I need some time at home to read good, authoritative books, written in good style, on the work I am doing, so that I am comfortable with my understanding of it. I crack easily under deadline pressure and need the fifth day to cool off. I took the 2-week leave because each day was hell for me. Maybe the depressive phase of bipolar is on; partially it could also be that, being a work-centred person who derives happiness and self-esteem from work, I haven't been enjoying it and have been working solely to prove stability and the ability to stick with something against all odds, facing whatever challenges come, bonding with people, spotting opportunities to learn in a given task, and so on. I have been averaging one day of LWP every week or 10 days. Or maybe it is my nature: ADD, not being able to switch context, being out of touch with the news, not having a circle of friends I enjoy, and having little general knowledge to talk about beyond some technical stuff. Anyway, for emotional and practical reasons, I wanted to be very sure before leaving. So my leave starts from Monday, and I should feel happy about it. I have taken the leave for a few purposes: to take care of my health through regular yoga and exercise (with a project on, I just can't do anything regularly), to reassess what I want to try next and which work I might like, to look for the next job, and to take up a hobby I like, say singing.

    I am not clear about my career and job aspirations. I have tried my hand at research. During this year's appraisal yesterday, I even had a conflict with my last manager. In one-on-one meetings with me he would say all nice things about me, but in his feedback to my new manager he hasn't given any excellent ratings - it is all only "good". I am angry at this old manager. The new manager also scolded me, because I didn't agree with his appraisal and waited to hear it myself from the old manager; he more or less scolded me for wasting his time. Am I being unethical somewhere? I am always very conscious of whether I am cheating anywhere. What advice am I seeking? How to resign, and what to write in the resignation letter.

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: with this post you create a scenario that is unsupported by Microsoft. It will break your Microsoft support for this server, so handle with care.

    I am the administrator of a TFS environment with a lot of Project Collections. In the supported configuration of TFS 2010 you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed on one machine. Jim Lamb wrote a post on how you can modify your system to change this behaviour. But since I have so many Project Collections, I automated this with the TFS API.

    When you install a new build server via the UI, you take the following steps:

    1. Register the build service (this hooks the Windows server into the build server environment)
    2. Add a new build controller
    3. Add a new build agent

    So in pseudo code, the automation looks like:

        foreach (projectCollection in GetAllProjectCollections)
        {
            CreateNewWindowsService();
            RegisterService();
            AddNewController();
            AddNewAgent();
        }

    The following code fragments show the most important parts of the method implementations. Attached is the full project.

    CreateNewWindowsService

    We create a new Windows service with the sc.exe command via the System.Diagnostics.Process class:

        var pi = new ProcessStartInfo("sc.exe")
        {
            Arguments = string.Format(
                "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",
                serviceHostName, tpcName)
        };
        Process.Start(pi);

        pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);
        Process.Start(pi);

    RegisterService

    The trick in this method is that we set the NamedInstance static property. This property is internal, so we need to set it through reflection. (To find information like this, you need nice Microsoft friends and the .NET Reflector.)

        // Indicate which build service host instance we are using
        typeof(BuildServiceHostUtilities).Assembly
            .GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess")
            .InvokeMember("NamedInstance",
                System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static,
                null, null, new object[] { serviceName });

        // Create the build service host
        serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);
        serviceHost.Save();

        // Register the build service host
        BuildServiceHostUtilities.Register(serviceHost, user, password);

    AddNewController and AddNewAgent

    Once you have the BuildServiceHost, the rest is pretty straightforward. There are methods on the BuildServiceHost to modify the controllers and the agents:

        controller = serviceHost.CreateBuildController(controllerName);

        agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);
        controller.AddBuildAgent(agent);

    You have now seen the highlights of the application. If you need it, and want sample code for working in this area, download the app: TFS2010_RegisterBuildServerToTPCs

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides

    I was pretty busy both days: Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK. I don't recall exactly which version of GTK, though. Dominique Toupin from Ericsson presented something about user requirements for tracing - mostly, though, about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work; see their slides. The interesting thing to me was of course the new version of uprobes without the underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice anybody could give, except, jokingly, we decided that the only way in is to make it part of perf events. There needs to be another refactoring, but most of all, the "killer app" that would be enabled by utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app - for instance, allowing multiple tracers, and not relying on SIGCHLD, etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel, interfacing with utrace and controlled via the gdb remote protocol, also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative about how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, with a clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.

    On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about gcc and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc, and are usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (needed because of the new gcc plugin architecture and the availability of the intermediate results of compilation, which is a new thing). I will not try to explain it, but basically you cannot take the result of the preprocessing and use it in your own proprietary compiler.

    After that, we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance, Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker; it doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the groundwork was done by Kris Van Hees), but this needs to be constantly tested/monitored because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and of the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code. Finally, Tom Tromey presented about gdb and the Archer project. Archer is a development branch of gdb mostly worked on by Red Hat, focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline.

    In general it was a good conference. I missed most of the first day, because that's when I flew in, but I caught a couple of talks. Nothing earth-shattering, except for Google giving each registered attendee a free Android phone. Yay.

    Read the article

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate

    Last Sunday I went on a walkabout. That's when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination. I favor "social trails", the unmapped routes pioneered by both animal and human explorers. These tracks are usually more challenging than established, marked routes, and you can't be 100% sure of where you're going to end up. But I've found the rewards to be much greater.

    For a while, I pondered how—depending upon your perspective—the current economic situation worldwide could be viewed as either a classic "the glass is half empty" or a "the glass is half full" scenario. Midsize companies buy Oracle to grow, and so I'm continually amazed and fascinated by the success stories our customers relate to me. Oracle's successful midsize companies are growing via innovation, agility, and opportunity. For them, the glass isn't half full—it's overflowing.

    Growing Midsize Companies are Thankful for: Innovation

    The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May. You might not recognize the name but, chances are, your local evening weather report relies on this company's weather observation, monitoring, and measurement products. For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end-to-end product and service solutions based on the needs of distinctly different customer groups across the industrial, public sector, and defense sectors. Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch. I cut it down. The name of that customer's company was stamped on the housing. "It's a radiosonde from a high altitude weather balloon," he told me the next day. "Keep it as a souvenir." It sits on my fireplace mantle and elicits many questions from guests.

    Growing Midsize Companies are Thankful for: Agility

    In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets. They are best known for their epic outdoor theme parks. However, their primary growth today is coming from a chain of indoor amusement centers in the USA, where billiards, bowling, and laser tag take the place of roller coasters, kiddie rides, and wave pools. With mountains and rivers right out my front door, I'm not much for theme parks, but I'll take a spirited game of laser tag any day. This company has grown dramatically since first implementing Oracle ERP more than a decade ago. Their profitable expansion into a completely foreign market is derived from the ability to replicate proven and efficient best business practices across diverse operating environments. They recently went live on Oracle's Fusion HCM and Taleo. Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices.

    Growing Midsize Companies are Thankful for: Opportunity

    I have three GPS apps on my iPhone. I use them mainly to keep track of my stats—distance, time, and vertical gain. However, every once in a while I need to find the most efficient route back home before dark from my current location (notice I didn't use the word "lost"). In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions—the integrated use of telecommunications and informatics—for managing the mobile workforce. These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery. Forgive the overused metaphor, but this is route optimization on steroids. The company's executive team saw an opportunity in this emerging market and went "all in". Consequently, they are being rewarded with tremendous growth and market domination, providing their clients the ability to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity.

    This Thanksgiving, I'm thankful for health, family, friends, and a career with an innovative company that helps companies leverage top-tier software to drive and manage growth. And I'm thankful to have learned the lesson that good things happen when you get off the beaten path—both when hiking and when forging new routes through a complex world economy.

    Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an unobstructed view of 14,265 ft Mt Evans just a few miles to the west. There, nowhere near a house or a trail, someone had placed a wooden lounge chair. Its wood was worn and faded but it was sturdy. I had lunch and a cold drink in my pack. Opportunity knocked and I seized it. Happy Thanksgiving.

    Read the article

  • 5 Things I Learned About the IT Labor Shortage

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein | Sr. Principal Product Marketing Director | Oracle Midsize Programs | @JimLein

    5 Things I Learned About the IT Labor Shortage

    A gentle autumn breeze is nudging the last golden leaves off the aspen trees. It's time to wrap up the series that I started back in April, "The Growing IT Labor Shortage: Are You Feeling It?"

    Even in a time of relatively high unemployment, labor shortages exist depending on many factors, including location, industry, IT requirements, and company size. According to Manpower Group's 2013 Talent Shortage Survey, 35% of hiring managers globally are having difficulty filling jobs. Their top three challenges in filling jobs are:

    1. Lack of technical competencies (hard skills)
    2. Lack of available applicants
    3. Lack of experience

    The same report listed technicians as the most difficult position to fill in the United States.

    For most companies, human capital and talent management have never been more strategic, and they are striving for ways to streamline processes, reduce turnover, and lower costs (see this Oracle whitepaper, "Simplify Workforce Management and Increase Global Agility"). Everyone I spoke to—partner, customer, and Oracle experts—agreed that it can be extremely challenging to hire and retain IT talent in today's labor market. And they generally agreed on the causes:

    a. IT is so pervasive that there are myriad moving parts requiring support and expertise,
    b. thus, it's hard for university graduates to step in and contribute immediately without experience and specialization,
    c. big IT companies generally aren't the talent incubators that they were in the freewheeling 90's, due to bottom-line pressures that require hiring talent that can hit the ground running, and
    d. it's often too expensive for resource-strapped midsize companies to invest the time and money required to get graduates up to speed.

    Here are my top lessons learned from my conversations with the experts.

    1. A Better Title Would Have Been, "The Challenges of Finding and Retaining IT Talent That Matches Your Requirements"

    There are more applicants than jobs, but it's getting tougher and tougher to find individuals that perfectly fit each and every role. Top-performing companies are increasingly looking to hire the "almost ready", striving to keep their existing talent more engaged, and leveraging their employees' social and professional networks to quickly narrow down candidate searches (here's another whitepaper, "A Strategic Approach to Talent Management").

    2. Size Matters—But So Does Location

    Midsize companies must strive to build cultures that compete favorably with what large enterprises can offer, especially when they aren't within commuting distance of IT talent strongholds. They can't always match the compensation and benefits offered by large enterprises, so it's paramount to offer candidates high quality of life and opportunities to build their resumes in alignment with their long-term career aspirations.

    3. Get By With a Little Help From Your Friends

    It doesn't always make sense to invest time and money in training an employee on a task they will not perform frequently, or to get into a bidding war for talent with skills that are rare and in high demand. Many midsize companies are finding that it makes good economic sense to contract with partners for remote support rather than trying to divvy up each and every role amongst their lean staff. Internal staff can be assigned to roles that will have the highest positive impact on achieving organizational goals.

    4. It's Actually Both "What You Know" AND "Who You Know"

    If I were hiring someone today I would absolutely leverage the social and professional networks of my co-workers. Period. Most research shows that hiring in this manner is less expensive and less time-consuming AND produces better results. There is also some evidence that suggests new hires from employees' networks have higher job performance and retention rates.

    5. I Have New Respect for Recruiters and Hiring Managers

    My hat's off to them—it's not easy hiring and retaining top talent with today's challenges.

    Check out the infographic, "A New Day: Taking HR from Chaos to Control", on Oracle's Human Capital Management solutions home page. You can also explore all of Oracle's HCM solutions from that page based on your role. You can read all the posts in this series by clicking on the links in the right sidebar. Stay tuned…we'll continue to post thought leadership on HCM and Talent Management topics.

    Read the article

  • Validation firing in ASP.NET MVC

    - by rkrauter
    I am lost on this MVC project I am working on. I also read Brad Wilson's article: http://bradwilson.typepad.com/blog/2010/01/input-validation-vs-model-validation-in-aspnet-mvc.html

    I have this:

    public class Employee
    {
        [Required]
        public int ID { get; set; }

        [Required]
        public string FirstName { get; set; }

        [Required]
        public string LastName { get; set; }
    }

    and these in a controller:

    public ActionResult Edit(int id)
    {
        var emp = GetEmployee();
        return View(emp);
    }

    [HttpPost]
    public ActionResult Edit(int id, Employee empBack)
    {
        var emp = GetEmployee();
        if (TryUpdateModel(emp, new string[] { "LastName" }))
        {
            Response.Write("success");
        }
        return View(emp);
    }

    public Employee GetEmployee()
    {
        return new Employee { FirstName = "Tom", LastName = "Jim", ID = 3 };
    }

    and my view has the following:

    <% using (Html.BeginForm()) {%>
        <%= Html.ValidationSummary() %>
        <fieldset>
            <legend>Fields</legend>
            <div class="editor-label">
                <%= Html.LabelFor(model => model.FirstName) %>
            </div>
            <div class="editor-field">
                <%= Html.DisplayFor(model => model.FirstName) %>
            </div>
            <div class="editor-label">
                <%= Html.LabelFor(model => model.LastName) %>
            </div>
            <div class="editor-field">
                <%= Html.TextBoxOrLabelFor(model => model.LastName, true)%>
                <%= Html.ValidationMessageFor(model => model.LastName) %>
            </div>
            <p>
                <input type="submit" value="Save" />
            </p>
        </fieldset>
    <% } %>

    Note that the only editable field is LastName. When I post back, I get back the original employee and try to update it with only the LastName property. But what I see on the page is the following error:

    • The FirstName field is required.

    This, from what I understand, is because the TryUpdateModel failed. But why? I told it to update only the LastName property. I am using MVC 2 RTM. Thanks in advance.
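
    For what it's worth, here is a minimal sketch, untested, of the usual cause and one way around it. The typed "Employee empBack" parameter is itself model-bound and validated before the action body runs; since the form never posts FirstName (it is rendered with DisplayFor), that binding leaves empBack.FirstName null and adds "The FirstName field is required." to ModelState, so TryUpdateModel reports failure even though its own whitelisted bind succeeds. Dropping the typed parameter avoids the extra bind:

    [HttpPost]
    public ActionResult Edit(int id, FormCollection form)
    {
        var emp = GetEmployee();

        // Only LastName is bound from the form; emp.FirstName ("Tom")
        // already satisfies [Required], so whole-model validation can pass.
        if (TryUpdateModel(emp, new string[] { "LastName" }))
        {
            Response.Write("success");
        }
        return View(emp);
    }

    An alternative with roughly the same effect would be to keep the typed parameter but call ModelState.Clear() before TryUpdateModel, at the cost of discarding any legitimate binding errors from the first bind.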

    Read the article

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data s

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. Requirements are:

    - each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt
    - other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of flashcard files they want to subscribe to, with URLs and imported flashcards being saved in IsolatedStorage
    - the flashcards.txt file has the following simple format: a title, then blocks of question/answer pairs:

    Jim Smith's flashcards from English class 53-222, winter semester 2009
    ==fla
    Das kann nicht sein.
    That can't be.
    ==fla
    Es sei denn, er kommt nicht.
    Unless he doesn't come.

    The user then makes public the URL to his flashcard file, and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish, distributing the URL. The flashcard readers will then recognize that it is a Google document and perform the necessary screen scraping to get at the raw text.

    I have two technical questions about this approach:

    1. What is the best way to plan now for scalability issues? E.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published Google docs?

    2. Each reader will have the ability to allow the person to test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the imported flashcards themselves. What is a good pattern to allow these readers to share and synchronize this metadata, e.g. so when you are looking at a flashcard you can see that 5 other people have made corrections to it? The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard. The best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks.

    The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a Google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of a database, lack of unique IDs, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience, or best-practice tips can you share on the above two points?
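
    On the first question, a conditional GET is one cheap option. Below is a minimal sketch, untested and written against the plain .NET HttpWebRequest API, assuming the hosting server honors If-Modified-Since; a published Google Doc may not, so those would still need a full fetch. Note also that Silverlight's in-browser HTTP stack restricts request headers, so something like this would likely require the client HTTP stack (WebRequestCreator.ClientHttp) plus the usual cross-domain policy file on the server:

    using System;
    using System.Net;

    public static class FlashcardSync
    {
        // Ask the server for the file only if it changed since lastSync,
        // so an unchanged 200K flashcard file costs a 304 with no body.
        public static void CheckForUpdate(Uri flashcardUrl, DateTime lastSync,
                                          Action<bool> onChanged)
        {
            var request = (HttpWebRequest)WebRequest.Create(flashcardUrl);
            request.IfModifiedSince = lastSync; // sends If-Modified-Since

            request.BeginGetResponse(ar =>
            {
                try
                {
                    using (var response = (HttpWebResponse)request.EndGetResponse(ar))
                    {
                        // 200 OK: the file changed; download and re-parse here.
                        onChanged(true);
                    }
                }
                catch (WebException)
                {
                    // 304 Not Modified surfaces as a WebException in .NET;
                    // treat it (and transient errors) as "nothing new".
                    onChanged(false);
                }
            }, null);
        }
    }

    On the second question, one idea: hashing the normalized question and answer text (so the ID is URL + content hash) would give each card an identifier that survives reordering within a file. The trade-off is that an edited card looks brand new, so a correction could carry the previous hash along to keep its history connected.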

    Read the article

< Previous Page | 35 36 37 38 39 40  | Next Page >