Search Results

Search found 947 results on 38 pages for 'jim buck'.

  • Drupal & Regular PHP Integration

    - by user333128
    I'm building a new website which has one core application and many content pages. The content pages are mostly dynamic, and I require a way to manage this dynamic content on a regular basis. The core application's main functionality is a 3-step process of reading user data (input page), reading data from MySQL (product page), and submitting an application to an email address (application page). Ideally I would like to build the core application in regular PHP and leverage Drupal for its content management capabilities. Can Drupal and regular PHP be integrated as I suggest easily? My feeling is that coding the core application as a Drupal module (or modules) would add layers of complexity that could be difficult to code from the outset and to maintain later on as the system matures - so I would really like to just use regular PHP.

    Let me explain where dynamic content (managed by the CMS) intersects with the core application. Dynamic content such as FAQ data is used both on the 'normal' help pages and also within a mini-feed displayed in a right-hand column of the core application pages. In this column, 3 random questions are pulled from the database and displayed as a feed. When users click on an FAQ question they are not taken away from the core application product page; instead the question and answer are shown in a pop-up window. In addition, users can browse other questions and answers through a simple navigation menu within this popup. There are 3 feeds like this that I require on the core application product page.

    So, what is the ideal solution here in terms of 'keeping things simple' for both the management of dynamic content and the ease of coding the core application? Can 'regular PHP' and Drupal co-exist 'peacefully'? If so, how is this technically possible? Given that some Drupal-managed content appears within core application pages, can the core application still be coded in regular PHP? Any advice / suggestions? Thank you! Jim.
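
    For reference, a minimal sketch of the kind of co-existence being asked about, assuming a Drupal 7-style layout (Drupal 6 is similar but uses a relative include); the DRUPAL_ROOT path and the node ID are placeholders: a standalone "regular PHP" page can bootstrap Drupal and then call its API to pull CMS-managed content such as FAQ nodes.

    <?php
    // Standalone PHP page that bootstraps Drupal to reach CMS-managed content.
    // DRUPAL_ROOT and the node ID below are placeholders, not from the post.
    define('DRUPAL_ROOT', '/var/www/drupal');
    require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
    drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

    // After a full bootstrap, the Drupal API is available to plain PHP,
    // e.g. loading an FAQ node and printing its title safely:
    $faq = node_load(42);
    print check_plain($faq->title);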

    Read the article

  • jquery accessing dynamic class selectors

    - by dinotom
    Given the following HTML rendering:

    <fieldset id="fld_Rye">
      <legend>7 Main St.</legend>
      <div>
        <table>
          <tr>
            <th class="wideCol"><b><i>Service Description</i></b></th>
            <th class="wideCol"><b><i>Service Name</i></b></th>
            <th class="normalPlusWidth"><b><i>Contact Name</i></b></th>
          </tr>
          <ItemTemplate>
            <tr class="">
              <td class="servDesc">&nbsp;<b>Plumbing Services</b></td>
              <td class="servName">&nbsp;<b>Flynnsters's Plumbing</b></td>
              <td class="servContact">&nbsp;<b>Jim Flynnster</b></td>
            </tr>
          </ItemTemplate>

    I am trying to access all the td's with a class of servDesc, get their widths into an array, and get the max width from that array to reset the CSS width of that class using jQuery on page load. I can't seem to get the right selector for those tags, and I've tried at least 50 variations. My latest try:

    var maxTdWidth;
    var servDescCols = [];
    $("#fld_Rye td#servDesc").each(function () {
        alert("found one");
        servDescCols.push(this.width());
    });
    maxTdWidth = Math.max.apply(Math, servDescCols);
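
    A sketch of a working version, given the markup above: servDesc is a class, so the selector needs td.servDesc rather than td#servDesc, and this must be wrapped in $() before width() is available.

    // Select by class (td.servDesc), not by id (td#servDesc),
    // and wrap `this` in $() before calling .width()
    var servDescCols = [];
    $("#fld_Rye td.servDesc").each(function () {
        servDescCols.push($(this).width());
    });
    var maxTdWidth = Math.max.apply(Math, servDescCols);
    $("#fld_Rye td.servDesc").width(maxTdWidth); // reset the width on page load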

    Read the article

  • Can a lambda be used to change a List's values in-place (without creating a new list)?

    - by Saint Hill
    I am trying to determine the correct way to transform all the values in a List using the new lambdas feature in the upcoming release of Java 8, without creating a new List. This pertains to times when a List is passed in by a caller and needs to have a function applied to change all the contents to new values - for example, the way Collections.sort(list) changes a list in-place. What is the easiest way given this transforming function and this starting list:

    String function(String s) {
        return [some change made to value of s];
    }

    List<String> list = Arrays.asList("Bob", "Steve", "Jim", "Arbby");

    The usual way of applying a change to all the values in-place was this:

    for (int i = 0; i < list.size(); i++) {
        list.set(i, function(list.get(i)));
    }

    Do lambdas and Java 8 offer an easier and more expressive way? A way to do this without setting up all the scaffolding of the for(..) loop?
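
    For reference, the replaceAll default method that Java 8 ultimately added to java.util.List does exactly this in-place transformation; a minimal sketch, with toUpperCase standing in for the transforming function:

    import java.util.Arrays;
    import java.util.List;

    public class InPlaceTransform {
        public static void main(String[] args) {
            List<String> list = Arrays.asList("Bob", "Steve", "Jim", "Arbby");
            // Applies the operator to every element in place -- no new List,
            // and no index-based for(..) scaffolding.
            list.replaceAll(s -> s.toUpperCase());
            System.out.println(list); // [BOB, STEVE, JIM, ARBBY]
        }
    }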

    Read the article

  • Testing for optional node - Delphi XE

    - by Seti Net
    What is the proper way to test for the existence of an optional node? A snippet of my XML is:

    <Antenna>
      <Mount Model="text" Manufacture="text">
        <BirdBathMount/>
      </Mount>
    </Antenna>

    But it could also be:

    <Antenna>
      <Mount Model="text" Manufacture="text">
        <AzEl/>
      </Mount>
    </Antenna>

    The child of Mount can be either BirdBathMount or AzEl, but not both... In Delphi XE I have tried:

    if (MountNode.ChildNodes.Nodes['AzEl'] <> unassigned) then // Does not work
    if (MountNode.ChildNodes['BirdBathMount'].NodeValue <> null) then // Does not work
    if (MountNode.BirdBathMount.NodeValue <> null) then // Does not work

    I use XMLSpy to create the schema and the example XML, and they parse correctly. I use Delphi XE to create the bindings, and it works fine on most other combinations. This must have a simple answer that I have just overlooked - but what? Thanks...... Jim
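
    For reference, a sketch of one approach the standard XMLIntf bindings typically support: ChildNodes.FindNode returns nil when the named child is absent, so it can probe for an optional node without touching NodeValue (the Handle* calls below are placeholders):

    // Sketch, assuming MountNode is an IXMLNode (XMLIntf);
    // FindNode returns nil if the named child does not exist.
    if MountNode.ChildNodes.FindNode('AzEl') <> nil then
      HandleAzEl                 // placeholder for the AzEl case
    else if MountNode.ChildNodes.FindNode('BirdBathMount') <> nil then
      HandleBirdBathMount;       // placeholder for the BirdBathMount case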

    Read the article

  • Smarter, Faster, Cheaper: The Insurance Industry’s Dream

    - by Jenna Danko
    On June 3rd, I saw the Gaylord Resort Centre in Washington D.C. become the hub for C-level executives and managers of insurance carriers at the IASA 2013 Conference. Insurance accounting/regulation and technology sessions took the focus, but there were plenty of tertiary sessions for career development, which complemented the conference's overall strong networking side. As an exhibitor, Oracle, along with several hundred other product providers, welcomed the opportunity to display and demonstrate our solutions, and we were encouraged by the hustle and bustle of the exhibition floor. The IASA organizers had pre-arranged fast-track tours whereby interested conference delegates could sign up for a series of like-themed presentations from vendors, giving them a level of 'speed dating' introductions to possible solutions and services. Oracle participated in a number of these, which were very well subscribed. Clearly, the conference had a strong business focus; however, attendees saw technology as a key enabler to get their processes done smarter, faster and cheaper. As we navigated the exhibition, it became clear from the inquiries that came to us that insurance carriers are gravitating to a number of focus areas:

    • Navigating the maze of upcoming regulatory reporting changes. For US carriers with European holdings, Solvency II carries a myriad of rules and reporting requirements. Alignment across the globe of the Own Risk and Solvency Assessment (ORSA) processes brings to the fore the National Association of Insurance Commissioners' (NAIC) recent guidance manual publication.

    • Doing more with less, and certainly expecting more from technology for fewer dollars. The overall cost of IT, in particular hardware, has dropped in real terms (though the appetite for more has risen: more CPU, more RAM, more storage), but software has seen less change. Clearly, customers expect either to pay less or to get a lot more from their software solutions for the same buck.

    • Doing things smarter - a recognition that, with the advance of technology, to stand still no longer means holding steady; it means going backwards. Technology, and in particular technology's interaction with human business processes, has undergone incredible change over the past 5 years. Consumer usage (iPhones, etc.) has been at the forefront, but ever more effective technology exploitation is now beginning to take place at the enterprise level. Data, and in particular the knowledge gleaned from data, is refining and improving business processes. Organizations are now consuming more data than ever before, and it is set to grow exponentially for some time to come. Amassing large volumes of data is one thing, but effectively analyzing that data is another. It is the results of such analysis that lead to improvements both in insurance product offerings and in the processes that support them.

    • Regulatory compliance: damned if you do and damned if you don't! Clearly, around the globe a lot is changing from a regulatory perspective, and it is evident that while there is greater convergence across jurisdictions bringing uniformity, there is also a lot of work to be done in the next 5 years. Just as with big data, behind effective regulatory compliance there often lie golden nuggets that can yield competitive advantages.
    From Oracle's perspective, our Rating Engine, Billing, Document Management and Insurance Analytics solutions on display served to strike up good conversations and, as is always the case at conferences, it was a great opportunity to meet and speak with existing Oracle customers whom we might not otherwise have caught up with for a while. Fortunately, I was able to catch a few sessions at the close of the exhibition. The speaker quality was high and the audience asked challenging, but pertinent, questions. During Dr. Jackie Freiberg's keynote, "Bye Bye Business as Usual," the author discussed 8 strategies to help leaders create a culture where teams consistently deliver innovative ideas by disrupting the status quo. The very first strategy: get wired for innovation. Freiberg admitted that folks in the insurance and financial services industry understand and know innovation is important, but oftentimes they are slow adopters. Today, technology and innovation go hand in hand. In speaking with delegates during and after the conference, a high degree of satisfaction could be measured from their positive comments about the speaker sessions and the exhibitors. I suspect many will be back in 2014, with Indianapolis as the conference location. Did you attend the IASA Conference in Washington D.C.? If so, I would love to hear your comments. Andrew Collins is the Director, Solvency II, of Oracle Financial Services. He can be reached at andrew.collins AT oracle.com.

    Read the article

  • Understanding each other in web development

    - by Pete Hotchkin
    During my career I have been lucky enough to work in several different roles within web development with many extremely talented people, from incredible designers who were passionate about the placement of every pixel right through to server administrators and DBAs who were always measuring the improvements to their queries in the smallest possible unit. The problem I always faced was that, more often than not, I was stuck in the middle trying to mediate between these different functions and enable each side to understand the other's point of view.

    In my experience, the main areas of contention between these functional groups arise at two key points: during the build phase, and then when there is a problem post-build. At both of these times it is often easier for someone to pass the buck to someone else than to spend the time understanding the other person's perspective. Below is a quick look at two upcoming tools that will not only speed up the build phase for each function, but also help when it comes to the issues faced once a site has been pushed live.

    In my experience a web project goes through several phases of development. The first of these is design, generally handled as Photoshop files which are then passed on to a front-end developer. This is the first point at which heated discussions can arise. One problem I've seen several times is that the designer doesn't fully understand the platform constraints that need to be considered, and as a result has designed something that does not translate very well or is simply not possible. Working at Red Gate, I am lucky enough to meet some amazing people, and this happened just the other day when I was introduced to Neil Kinnish and Pete Nelson, the creators of what I believe could be a great asset in this designer-developer relationship: Mixture.

    Mixture allows the front-end developer to quickly prototype a web page with built-in frameworks such as Bootstrap. It's not an IDE, however; it just sits in the background and monitors the project files, so every time you save a file from your favorite IDE it will compile things like LESS, compact your JavaScript, and then automatically refresh your test browser so you can see the changes instantly. I think one of the best parts, though, is a single button that pushes the changed files up to the web so the designer can instantly see how far the developer has got and the problem he is facing at that time, without the need to spend time setting up a remote server. I can see this being a real asset to remote teams where there needs to be a compromise between the designer and the front-end developer, or just a way to let the designer see how the build is progressing and suggest small alterations.

    Once the design has been built into the front end, the designer's job is generally done and there are no other points of contention between the designer and the other functions involved in building these web projects. As the project moves into the stage of integrating it into the back end and deploying it to the production server, other functions start to be pulled in and other issues arise, such as the back-end developer understanding the frameworks they are using - the routes in place in an MVC application, for instance, or the number of database calls the ORM layer is actually making.
    There are many tools out there that can help with these problems, such as MiniProfiler, which gives you a quick snapshot of what is going on directly in the browser. For a slightly more in-depth look at what is happening, and to gain a deeper understanding of an application you may be working on, you may want to consider Glimpse. Created by Nik and Anthony, it is an application (installed via NuGet) that sits at the bottom of your browser and can show you information about how your application is pieced together and how the information on screen is being delivered as it happens. With a wealth of community-built plugins, such as ones for NHibernate and LINQ to SQL (the full list of plugins is on NuGet), it can be customized directly to your own setup to truly delve into the code and see what is happening, and it can help reduce the number of confusing moments wondering whether it is your code that is going wrong or whether there is something more sinister happening directly on the server.

    All the tools that I have mentioned in this post help to do one thing above all: ease the barrier of understanding between the different functions that are involved in building and maintaining a web application. In my experience it is very easy to say "Well, that's not my problem", simply because the two functions involved don't truly understand the other's point of view. Software should not only be seen as a way to streamline our own working process or as a debugging tool, but also as a communication aid to improve the entire lifecycle of a web project.

    Glimpse is actually the project that I am the designer on, and I would love to get your feedback if you do decide to try it out. If you would like to share your own experiences of working on web projects, please fill in your details at https://www.surveymk.com/s/joinGlimpse or add a comment below and I will get in touch with you.

    Read the article

  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
    Tier One Defined
    By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager

    I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing, and what does that have to do with ERP? Well, if you grew up in the USA during the 50's, 60's and maybe a bit of the early 70's, there was a unifying medium of culture known as the comic book. I was a big Iron Man fan. I always liked the troubled-hero aspect of Tony Stark, and hey, he was a technologist. This is going somewhere, just hold on.

    Of course comic books, like most media, contained advertisements. Ninety-pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum and all assortments of Hostess goodies were on display. The best ad was for the "Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!" These ads set the standard for exaggeration and half-truth: "…they love attention…so eager to please, they can even be trained…" The cartoon picture on the ad is of a family of royal-looking sea creatures – daddy, mommy, son and little sis – sea monkey? There was a disclaimer at the bottom in fine print: "Caricatures shown not intended to depict Artemia." OK, what ten-year-old knows what the heck artemia is? Well, you grow up fast once you've been separated from your buck twenty-five plus postage just to discover that it is brine shrimp. Really dumb brine shrimp that don't take commands or do tricks.

    Unfortunately the technology industry is full of sea monkey sales and marketing. Yes, believe it or not, in some cases there is subterfuge and obfuscation used to secure contracts. Hey, I get it; the picture on the box might not be the actual size. Make up what you want about your own product, but here is what I don't like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff?

    So here is the latest one – "Oracle's JD Edwards is NOT tier one". Really? Definition, please! Well, a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh, and part of the world's largest business software and hardware corporation? Check and check: JD Edwards has that and that. Well known, enjoying national and international recognition? Oracle's JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries that support some of the world's largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after 2 years, and not from vacations. So if you don't mind, I am going to mark national and international recognition in the "got it" column.

    So what else is there? Well, let me offer a few criteria:

    • Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here

    • Vision & innovation – JD Edwards is the first full-suite ERP to run on the iPad, as just one example

    • Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables, including major releases, point releases, new apps modules, tool releases, integrations…
    • Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals, all delivered on a well-defined independent tools layer that helps enable you to scale your business without an ERP reimplementation

    • A continuation plan – Oracle's JD Edwards offers our customers a 6-year roadmap as well as interoperability with Oracle's next generation of applications

    Oh, I almost forgot that the expert sources agree on one additional thing: a tier one vendor may be a preferred vendor that offers products and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – the tier one solution with the lowest TCO. Oh, and if you get an offer to buy an ERP for no license charge, remember: the picture on the box might not be the actual size.

    Read the article

  • C#/.NET Little Wonders: Static Char Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Oftentimes in our code we deal with the bigger classes and types in the BCL, and occasionally forget that there are some nice methods on the primitive types as well. Today we will discuss some of the handy static methods that exist on the char (the C# alias of System.Char) type.

    The Background

    I was examining a piece of code this week where I saw the following:

    // need to get the 5th (offset 4) character in upper case
    var type = symbol.Substring(4, 1).ToUpper();

    // test to see if the type is P
    if (type == "P")
    {
        // ... do something with P type...
    }

    Is there really any error in this code? No, but it still struck me wrong because it is allocating two very short-lived throw-away strings, just to store and manipulate a single char: the call to Substring() generates a new string of length 1, and the call to ToUpper() generates a new upper-case version of the string from step 1. In my mind this is similar to using ToUpper() to do a case-insensitive compare: it isn't wrong, it's just much heavier than it needs to be (for more info on case-insensitive compares, see #2 in 5 More Little Wonders).

    One of my favorite books is C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Sutter and Alexandrescu. True, it's about C++ standards, but there's also some great general programming advice in there, including two rules I love:

    8. Don't Optimize Prematurely
    9. Don't Pessimize Prematurely

    We all know what #8 means: don't optimize when there is no immediate need, especially at the expense of readability and maintainability. I firmly believe this and in the axiom: it's easier to make correct code fast than to make fast code correct. Optimizing code to the point that it becomes difficult to maintain often gains little and gives you little bang for the buck.

    But what about #9? Well, for that they state: "All other things being equal, notably code complexity and readability, certain efficient design patterns and coding idioms should just flow naturally from your fingertips and are no harder to write than the pessimized alternatives. This is not premature optimization; it is avoiding gratuitous pessimization." Or, if I may paraphrase: "where it doesn't increase the code complexity and readability, prefer the more efficient option".

    The example code above was one of those times where I feel we are violating a tacit C# coding idiom: avoid creating unnecessary temporary strings. The code creates temporary strings to hold one char, which is just unnecessary. I think the original coder thought he had to do this because ToUpper() is an instance method on string but not on char. What he didn't know, however, is that ToUpper() does exist on char, it's just a static method instead (though you could write an extension method to make it look instance-ish).

    This leads me (in a long-winded way) to my Little Wonders for the day...

    Static Methods of System.Char

    So let's look at some of these handy, and often overlooked, static methods on the char type:

    • IsDigit(), IsLetter(), IsLetterOrDigit(), IsPunctuation(), IsWhiteSpace() – methods that tell you whether a char (or position in a string) belongs to a category of characters.
    • IsLower(), IsUpper() – methods that check whether a char (or position in a string) is lower or upper case.

    • ToLower(), ToUpper() – methods that convert a single char to the lower or upper equivalent.

    For example, if you wanted to see if a string contained any lower case characters, you could do the following:

    if (symbol.Any(c => char.IsLower(c)))
    {
        // ...
    }

    Which, incidentally, we could use a method group to shorten to:

    if (symbol.Any(char.IsLower))
    {
        // ...
    }

    Or, if you wanted to verify that all of the characters in a string are digits:

    if (symbol.All(char.IsDigit))
    {
        // ...
    }

    Also, for the IsXxx() methods, there are overloads that take either a char, or a string and an index, which means these two calls are logically identical:

    // check given a character
    if (char.IsUpper(symbol[0])) { ... }

    // check given a string and index
    if (char.IsUpper(symbol, 0)) { ... }

    Obviously, if you just have a char, then you'd just use the first form. But if you have a string you can use either form equally well.

    As a side note, care should be taken when examining all the available static methods on the System.Char type, as some seem to be redundant but actually have very different purposes. For example, there are IsDigit() and IsNumber() methods, which sound the same on the surface, but give you different results: IsDigit() returns true if it is a base-10 digit character ('0', '1', ... '9'), whereas IsNumber() returns true if it's any numeric character, including the characters for ½, ¼, etc.

    Summary

    To come full circle back to our opening example, I would have preferred the code be written like this:

    // grab 5th char and take upper case version of it
    var type = char.ToUpper(symbol[4]);

    if (type == 'P')
    {
        // ... do something with P type...
    }

    Not only is it just as readable (if not more so), but it performs over 3x faster on my machine:

    1,000,000 iterations of char method took: 30 ms, 0.000050 ms/item.
    1,000,000 iterations of string method took: 101 ms, 0.000101 ms/item.

    It's not only immediately faster because we don't allocate temporary strings, but as an added bonus there is less garbage to collect later as well. To me this qualifies as a case where we are using a common C# performance idiom (don't create unnecessary temporary strings) to make our code better.

    Technorati Tags: C#,CSharp,.NET,Little Wonders,char,string

    Read the article

  • Walmart and Fusion Apps

    - by ultan o'broin
    Photograph: Misha Vaughan

    I attended Fusion Apps (yes, I know I am supposed to say "Oracle Fusion Applications", but stuffy old style guides are a turn-off in interwebs conversations) User Experience Advocate (FXA) training in Long Beach, California last week; a suitable location, as ODTUG KSCOPE 11 was kicking off and key players were in the area. As a member of Oracle's Apps-UX team I know the Fusion Apps messaging, natch, and have done some other Fusion Apps go-to-market content work too. For the messaging details themselves, see Lonneke Dikmans' (@lonnekedikmans) great blog, by the way.

    However, I wanted some 'formal' training combined with the opportunity to meet and learn from people already out there delivering those messages. The idea in reaching out to Misha Vaughan, Apps-UX FXA maven, to get me onto this training was that, in addition to my UX knowledge, I could leverage my location in EMEA and hit up customer events more quickly and easily. Those local user groups do like to hear the voice of locals too, you know (so I need to work on that mid-Atlantic accent). I'm looking forward to such opportunities.

    The training was all smashing stuff, just the right level of detail, delivered professionally and with great style and humor. I was especially honored to be paired off for my, er, coaching with Debra Lilley (@debralilley), who shared with everyone all kinds of tips and insights from her experiences of delivering the message and demo. For me, that was the real power of the FXA event--the communal, conversational aspect--the meeting up with people who had done all this for real, the sharing in their experiences, while learning along with other newbies. Sorry, but that all-important social aspect doesn't work so well with remote meetings.

    Katie Candland (Apps-UX) gave us a great tour of the Fusion Apps demo and included some useful presentational tips too (any excuse to buy that iPad). It's clear to me that the Fusion Apps messaging and demos really come alive with real-world examples that local application users will recognize, and I picked up some "yes, that's my job made easier" scene-stealers from Debra and Karen Brownfield too, to add to the great ones already provided. This power of examples shouldn't surprise anyone; they've long been a mainstay of applications user assistance, popular with users. We'll offer customers different types of example topics in the Fusion Apps online help too (stay tuned), and we know from research how important those 3 S's (stories, scenarios, and simulations) are to users when they consume and apply information. Well, we've got the simulation; now it's time for more stories and scenarios.

    If you get a chance to participate in an FXA event (whether you are an Oracle employee or otherwise), I'd encourage it. It's committing your time and energy for sure, but I got real bang for the buck from it for my everyday job too. Listening to the room's feedback on the application demo really brought our internal design work to life, and I picked up on some things that I need to follow up on (like how you alphabetically sort stuff in other languages). User experience is, after all, about users.

    What will I be doing next, and what would I like to see happen? Obviously, I need to develop my story-telling links with the people I met in Long Beach, do some practicing with the materials, and then get out there and deliver them at a suitable location.
    The demo is what it is right now, and that's a super-rich demo that I know everyone will want to see and ask questions about. Then, as mentioned by attendees at the FXA event, follow up on those translated and localized messages for EMEA (and APAC) that deal with different statutory or reporting requirements of the target markets. Given my background I would say that, wouldn't I? However, language is part of the UX, and international revenue is greater than US-only revenue for Oracle, so yes dear, we all need to get over the fact that enterprise apps users don't all speak, or want to speak, American English.

    Most importantly perhaps, I'd like to see the continued development of a strong messaging community between Oracle and partners and customers, where we can swap and share those FXA messaging stories and scenarios about Fusion Apps in a conversational way. The more the better, a combination of online and face-to-face meetings.

    I must also mention the great dinner after the event at Parker's Lighthouse, and the fun myself and Andrew Gilmour (Apps-UX) had at our end of the table talking about just about everything except Fusion Apps with Ronald Van Luttikhuizen and Ben Prusinski (who now understands the difference between Cork and Dublin people. I hope). Thanks to all the Apps-UXers who helped bring the FXA training to town, and to Debra and all the others, whom I am too jetlagged to mention right now, who were instrumental in making it happen for me. Here's to the next one.

    And the Walmart angle? That was me doing my Robert Scoble (ScO'bilizer?)-style guerrilla smart phone research in Walmart in Long Beach, before the FXA event. It's all about stories for me. You can read more about it on the appslab blog (see the comments).

    Read the article

  • Soft lockup after upgrade - cannot install from live CD

    - by nbm
    I dual-boot a MacIntel Core 2 Duo with nVidia graphics. I ran the upgrade from Ubuntu 13.10 to 14.04 (64-bit). On restart I ran into:

    {numbers} Bug: soft lockup - CPU#0 stuck for 22s! [swapper/0:1]

    Tried loading an earlier kernel: same problem.
    Tried re-installing Ubuntu from a live CD that has worked in the past (version 13.04): same problem.
    Tried re-partitioning the hard drive using Mac OS X Disk Utility and then installing Ubuntu 14.04 LTS from a live CD: same problem. It is not possible to verify the live CD disk (that produces the same "soft lockup" bug).
    Tried installing from the live CD with version 13.04 that I know works (that's how I got Ubuntu on this machine in the first place): same problem.

    I know this is not a hardware problem, as OS X works just fine; I am using it right now on the same machine. I have been using various versions of Ubuntu for 2 years.

    Things I cannot do:
    - Open a terminal
    - Verify the CD image
    - Start Ubuntu from CD (same soft lockup problem)

    This problem is similar to some other questions, none of which have been satisfactorily answered:
    - Ubuntu 14.04 soft lockup on Vostro 3500
    - Cannot do fresh install of Ubuntu 13.04 while booting from DVD: "soft lockup" bug
    - Live CD stalls when installing Ubuntu 13.10

    UPDATE 6/11/14: Following some much-appreciated advice from bain (see below), I burned a 12.04 LTS disk and started with the kernel parameters noapic, nolapic, acpi=off, nomodeset, elevator=deadline, and clocksource=jiffies. With all of these parameters I was able to load the 12.04 LTS CD ("Try without installing"). It worked fine. However, as soon as I tried to install Ubuntu from the CD, my wired ethernet (eth0) connection would hang. There are already various askubuntu questions and bug reports about this problem, none of which had answers for me. (E.g., dhclient eth0 does nothing, none of the various reset commands does anything, manually setting the IP etc. does nothing. I could reliably kill the ethernet connection by clicking "Install Ubuntu" every single time.) I could go ahead and install 12.04 without an internet connection, but the install would freeze after mostly completing (I tried several times). There were some relevant error messages in the details of the install output script that, IIRC, had to do with searching for missing files and not being able to access eth0 (the internet) to get them. To be honest, I gave up at that point and I'm not sure I wrote those down. If I find some notes I will post them.

    At this point I no longer have Ubuntu on my system. I wiped the partitions and am using OS X exclusively. I am leaving this question in case it helps anyone else with similar problems. I love open source and I love Linux, and the next machine I get I will probably just build from Arch. At the moment I miss the repositories and a lot of other things about Ubuntu, but the OS X terminal is 'nix, I can pretty much use all the open source apps I like, and while I am not a fan of the Apple software it gets the job done for me. Unlike Ubuntu, which can't even install.

    I realize this isn't necessarily the place for a soapbox speech, but when I first installed 12.04 several years ago there were already people in the community complaining that Canonical was going too "commercial". But I loved it. Several years later, all I've seen is Canonical adding more not-so-useful bells and whistles to Ubuntu while continually failing to fix basic problems on upgrades. With a dual-boot (and sometimes triple-boot) system it always took me some tweaking to get an upgrade to work, and to some extent that is okay.
    But at this point I feel like Canonical ought to just put a price tag on Ubuntu. All I see is more commercialism, advertising and product tie-ins, while ongoing problems do not get fixed. I am a big fan of open-source, not-for-profit enterprise. I am also a big fan of for-profit enterprise, which certainly has its place and usefulness. I am not a fan of companies who pretend to be in favor of open source but really are just out to make a buck, and IMNSHO that is what Canonical has become. This is a great community and I wish you all the best, but my next install of Linux will not be Ubuntu.

    Read the article

  • What to leave when you're leaving

    - by BuckWoody
    There's already a post on this topic - sort of. I read this entry, where the author did a good job on a few steps, but I found that a few other tips might be useful, so if you want to check that one out and then this post, you might be able to put together your own plan for when you leave your job.

    I once took over as the system administrator (a role which covered the Oracle and SQL Server servers, among other things) at a mid-sized firm. The outgoing administrator had about a two-week-long scheduled overlap with me, but was angry at the company and told me "hey, I know this is going to be hard on you, but I want them to know how important I was. I'm not telling you where anything is or what the passwords are. Good luck!" He then quit that day. It took me about three days to find all of the servers and crack the passwords. Yes, the company tried to take legal action against the guy and all that, but he moved back to his home country and so largely got away with it.

    Obviously, this isn't the way to leave a job. Many of us have changed jobs in the past, and most of us try to be very professional about the transition to a new team, regardless of our feelings about a particular company. I've been treated badly at a firm, but that is no reason to leave a mess for someone else. So here's what you should put into place at a minimum before you go. Most of this is common sense - which of course isn't very common these days - and another good rule is just to ask yourself "what would I want to know?"

    The article I referenced at the top of this post focuses on a lot of documentation of the systems. I think that's fine, but in actuality I really don't need that. Even with this kind of documentation, I still perform a full audit on the systems, so in the end I create my own system documentation. There are actually only four big things I need to know to get started with the systems:

    1. Where is everything/everybody?
    The first thing I need to know is where all of the systems are. I mean not only the street address, but the closet or room, the rack number, the unit position in the rack, the SAN LUNs, all that. A picture here is worth a thousand words, which is why I really like Visio. It combines nice graphics, full text and all that. But use whatever you have to tell someone the physical locations of the boxes. Also, tell them the physical location of the folks in charge of those boxes (in case you aren't) or who share that responsibility. And by "where" in this case, I mean names and phone numbers.

    2. What do they do?
    For both the servers and the people, tell them what they do. If it's a database server, detail what each database does, what application goes with it, and who "owns" that application. In my mind, this is one of the most important things a Data Professional needs to know. In the case of the other administrators or co-owners, document each person's responsibilities.

    3. What are the credentials?
    Logging on/in and gaining access to the buildings are things that the new Data Professional will need to do to successfully complete their job. This means service accounts, certificates, all of that. The first thing they should do, of course, is change the passwords on all that, but the first thing they need is the ability to do that!

    4. What is out of the ordinary?
    This is the trickiest, and perhaps the next most important thing to know. Did you have to use a "special" driver for that video card on server X?
    Is the person who co-owns an application with you mentally unstable (like me), or do they have special needs, like "don't talk to Buck before he's had coffee; nothing will make any sense"? Do you have service pack requirements for a specific setup? Write all that down. Anything that took you a day or longer to make work is probably a candidate here.

    This is my short list - anything you care to add?

    Read the article

  • GWT & HTML5 Video in Mobile Safari

    - by KevMo
    I'm trying to code a site in GWT that plays videos with HTML5. Everything works great on the desktop, but mobile Safari on both the iPhone and the iPad does not play the video. I can play a video using Video for Everybody. I've even copied the code to my own plain HTML page, and it works flawlessly. If I serve that same code via a GWT widget, mobile Safari will not play the video. On the iPhone I see a gray box with a prohibitory sign around the play button, and on the iPad it shows up as a black box. I've made sure my doctype is <!DOCTYPE html>, but I don't know where else to start debugging. Perhaps it is because the code is injected via JavaScript? Any pointers on where to start looking would be greatly appreciated. Here is the exact code I am using for the video:

    <!-- "Video For Everybody" by Kroc Camen. see <camendesign.com/code/video_for_everybody> for documented code -->
    <video width="640" height="360" poster="poster.jpg" controls autoplay>
      <source src="http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4" type="video/mp4"></source>
      <source src="http://clips.vorwaerts-gmbh.de/big_buck_bunny.ogv" type="video/ogg"></source>
      <!--[if gt IE 6]>
      <object width="640" height="375" classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B"><! [endif]-->
      <!--[if !IE]><!-->
      <object width="640" height="375" type="video/quicktime" data="http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4">
      <!--<![endif]-->
        <param name="src" value="http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4" />
        <param name="autoplay" value="true" />
        <param name="showlogo" value="false" />
        <object width="640" height="384" type="application/x-shockwave-flash" data="player.swf?autostart=true&amp;image=poster.jpg&amp;file=http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4">
          <param name="movie" value="player.swf?autostart=true&amp;image=poster.jpg&amp;file=http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4" />
          <!-- fallback image -->
          <img src="poster.jpg" width="640" height="360" alt="Big Buck Bunny" title="No video playback capabilities, please download the video below" />
        </object><!--[if gt IE 6]><!-->
      </object><!--<![endif]-->
    </video>
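
    One avenue worth trying (a sketch, not a confirmed fix): iOS Safari is often reluctant to start playback on <video> elements inserted dynamically via JavaScript, so explicitly calling load() on the element after the widget attaches may help. VideoWidget and the element lookup below are illustrative names, not an established API:

    import com.google.gwt.dom.client.Element;
    import com.google.gwt.user.client.ui.HTML;

    // Sketch: wrap the video markup in a widget and nudge mobile Safari to
    // (re)load the dynamically injected <video> once it is in the DOM.
    public class VideoWidget extends HTML {
        public VideoWidget(String videoMarkup) {
            super(videoMarkup);
        }

        @Override
        protected void onLoad() {
            super.onLoad();
            Element video = getElement().getElementsByTagName("video").getItem(0);
            if (video != null) {
                load(video); // explicit load() call for iOS Safari
            }
        }

        private native void load(Element video) /*-{
            video.load();
        }-*/;
    }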

    Read the article

  • Silverlight Player Blank When Changing ism file

    - by Al Katawazi
    I am trying to get Silverlight Smooth Streaming going on a site I am building, and it works fine with the Big Buck Bunny sample code, which looks like this:

    <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%" id="Object2">
      <param name="source" value="SmoothStreamingBlackGlass.xap"/>
      <param name="onerror" value="onSilverlightError" />
      <param name="initparams" value='autoplay=False,muted=False,stretchmode=0,displaytimecode=False,
        playlist=<playList><playListItems><playListItem title="Big%20Buck%20Bunny" description="" mediaSource="Big%20Buck%20Bunny.ism/Manifest" adaptiveStreaming="True" thumbSource="Big%20Buck%20Bunny_Thumb.jpg" frameRate="24.0000384000614" ></playListItem></playListItems></playList>' />
      <a href="http://go2.microsoft.com/fwlink/?LinkID=124807" style="text-decoration: none;"><img src="http://go2.microsoft.com/fwlink/?LinkId=108181" alt="Get Microsoft Silverlight" style="border-style: none" /></a>
    </object>
    <iframe style="visibility:hidden;height:0;width:0;border:0px"></iframe>

    But if I change the code like this, I only get a blank area when the page is rendered instead of the movie clip:

    <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%" id="Object2">
      <param name="source" value="SmoothStreamingBlackGlass.xap"/>
      <param name="onerror" value="onSilverlightError" />
      <param name="initparams" value='autoplay=False,muted=False,stretchmode=0,displaytimecode=False,
        playlist=<playList><playListItems><playListItem title="Robotica_1080" description="" mediaSource="Robotica_1080.ism/Manifest" adaptiveStreaming="True" thumbSource="Robotica_1080_Thumb.jpg" frameRate="24.0000384000614" ></playListItem></playListItems></playList>' />
      <a href="http://go2.microsoft.com/fwlink/?LinkID=124807" style="text-decoration: none;"><img src="http://go2.microsoft.com/fwlink/?LinkId=108181" alt="Get Microsoft Silverlight" style="border-style: none" /></a>
    </object>
    <iframe style="visibility:hidden;height:0;width:0;border:0px"></iframe>

    Any ideas? I am using Encoder 3 to do the encoding, set to Microsoft Smooth Streaming for 720p with all the default settings.

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: with this post you create a scenario that is unsupported by Microsoft. It will break your Microsoft support for this server. So handle with care.

    I am the administrator of a TFS environment with a lot of Project Collections. In Microsoft's supported configuration for TFS 2010 you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed on one machine. Jim Lamb created a post describing how you can modify your system to change this behaviour. But since I have so many Project Collections, I automated this with the TFS API.

    When you install a new build server via the UI, you do the following steps:

    1. Register the build service (with this you hook the Windows server into the build server environment)
    2. Add a new build controller
    3. Add a new build agent

    So in pseudocode, the automation looks like:

    foreach (projectCollection in GetAllProjectCollections)
    {
        CreateNewWindowsService();
        RegisterService();
        AddNewController();
        AddNewAgent();
    }

    The following code fragments show the most important parts of the method implementations. Attached is the full project.

    CreateNewWindowsService

    We create a new Windows service with the SC command via the Diagnostics.Process class:

    var pi = new ProcessStartInfo("sc.exe")
    {
        Arguments = string.Format(
            "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",
            serviceHostName, tpcName)
    };
    Process.Start(pi);

    pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);
    Process.Start(pi);

    RegisterService

    The trick in this method is that we set the NamedInstance static property. This property is internal, so we need to set it through reflection. To get information on these internals you need nice Microsoft friends and the .NET Reflector.

    // Indicate which build service host instance we are using
    typeof(BuildServiceHostUtilities).Assembly.GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess").InvokeMember("NamedInstance", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static, null, null, new object[] { serviceName });

    // Create the build service host
    serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);
    serviceHost.Save();

    // Register the build service host
    BuildServiceHostUtilities.Register(serviceHost, user, password);

    AddNewController and AddNewAgent

    Once you have the BuildServiceHost, the rest is pretty straightforward. There are methods on the BuildServiceHost to modify the controllers and the agents:

    controller = serviceHost.CreateBuildController(controllerName);
    agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);
    controller.AddBuildAgent(agent);

    You have now seen the highlights of the application. If you need it and want sample code for working in this area, download the app TFS2010_RegisterBuildServerToTPCs.

    Read the article

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides

    I was pretty busy both days: Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like KernelShark by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK; I don't recall exactly what version of GTK though. Dominique Toupin from Ericsson presented something about user requirements for tracing - mostly, though, about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work; see their slides.

    The most interesting thing to me was of course the new version of uprobes without the underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give; except, jokingly, we decided that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, the "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app - for instance, allowing multiple tracers and not relying on SIGCHLD, etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel that interfaces to utrace and is controlled via the gdb remote protocol also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (the name was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.

    On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about gcc and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc and are usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (needed due to the new gcc plugin architecture and the availability of the intermediate results of compilation, which is a new thing). I will not try to explain it fully, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.
    After that, we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance, Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker; it doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the groundwork was done by Kris Van Hees), but this needs to be constantly tested/monitored because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code.

    Finally, Tom Tromey presented about gdb and the Archer project. Archer is a development branch of gdb, mostly done by Red Hat, where they are focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline.

    In general it was a good conference. I did miss most of the first day, because that's when I flew in, but I caught a couple of talks. Nothing earth-shattering, except for Google giving each registered attendee a free Android phone. Yey.

    Read the article

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate

    Last Sunday I went on a walkabout. That's when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination. I favor "social trails", the unmapped routes pioneered by both animal and human explorers. These tracks are usually more challenging than established, marked routes, and you can't be 100% sure of where you're going to end up. But I've found the rewards to be much greater.

    For a while, I pondered how—depending upon your perspective—the current economic situation worldwide could be viewed as either a classic "the glass is half empty" or a "the glass is half full" scenario. Midsize companies buy Oracle to grow, and so I'm continually amazed and fascinated by the success stories our customers relate to me. Oracle's successful midsize companies are growing via innovation, agility, and opportunity. For them, the glass isn't half full—it's overflowing.

    Growing Midsize Companies are Thankful for: Innovation

    The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May. You might not recognize the name but, chances are, your local evening weather report relies on this company's weather observation, monitoring and measurement products. For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end-to-end product and service solutions based on the needs of distinctly different customer groups across the industrial, public sector, and defense sectors. Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch. I cut it down. The name of that customer's company was stamped on the housing. "It's a radiosonde from a high-altitude weather balloon," he told me the next day. "Keep it as a souvenir." It sits on my fireplace mantle and elicits many questions from guests.

    Growing Midsize Companies are Thankful for: Agility

    In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets. They are best known for their epic outdoor theme parks. However, their primary growth today is coming from a chain of indoor amusement centers in the USA, where billiards, bowling, and laser tag take the place of roller coasters, kiddy rides, and wave pools. With mountains and rivers right out my front door, I'm not much for theme parks, but I'll take a spirited game of laser tag any day. This company has grown dramatically since first implementing Oracle ERP more than a decade ago. Their profitable expansion into a completely foreign market derives from the ability to replicate proven and efficient best business practices across diverse operating environments. They recently went live on Oracle's Fusion HCM and Taleo. Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices.

    Growing Midsize Companies are Thankful for: Opportunity

    I have three GPS apps on my iPhone. I use them mainly to keep track of my stats: distance, time, and vertical gain.
    However, every once in a while I need to find the most efficient route back home before dark from my current location (notice I didn't use the word "lost"). In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions—the integrated use of telecommunications and informatics—for managing the mobile workforce. These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery. Forgive the overused metaphor, but this is route optimization on steroids. The company's executive team saw an opportunity in this emerging market and went "all in". Consequently, they are being rewarded with tremendous growth and market domination by giving their clients the ability to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity.

    This Thanksgiving, I'm thankful for health, family, friends, and a career with an innovative company that helps companies leverage top-tier software to drive and manage growth. And I'm thankful to have learned the lesson that good things happen when you get off the beaten path—both when hiking and when forging new routes through a complex world economy.

    Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an unobstructed view of 14,265 ft Mt. Evans just a few miles to the west. There, nowhere near a house or a trail, someone had placed a wooden lounge chair. Its wood was worn and faded, but it was sturdy. I had lunch and a cold drink in my pack. Opportunity knocked and I seized it. Happy Thanksgiving.

    Read the article

  • 5 Things I Learned About the IT Labor Shortage

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein | Sr. Principal Product Marketing Director | Oracle Midsize Programs | @JimLein

A gentle autumn breeze is nudging the last golden leaves off the aspen trees. It's time to wrap up the series that I started back in April, "The Growing IT Labor Shortage: Are You Feeling It?"

Even in a time of relatively high unemployment, labor shortages exist depending on many factors, including location, industry, IT requirements, and company size. According to Manpower Group's 2013 Talent Shortage Survey, 35% of hiring managers globally are having difficulty filling jobs. Their top three challenges in filling jobs are:

1. Lack of technical competencies (hard skills)
2. Lack of available applicants
3. Lack of experience

The same report listed technicians as the most difficult position to fill in the United States.

For most companies, Human Capital and Talent Management have never been more strategic, and they are striving for ways to streamline processes, reduce turnover, and lower costs (see this Oracle whitepaper, "Simplify Workforce Management and Increase Global Agility"). Everyone I spoke to (partner, customer, and Oracle experts) agreed that it can be extremely challenging to hire and retain IT talent in today's labor market. And they generally agreed on the causes:

a. IT is so pervasive that there are myriad moving parts requiring support and expertise,
b. thus, it's hard for university graduates to step in and contribute immediately without experience and specialization,
c. big IT companies generally aren't the talent incubators that they were in the freewheeling 90's due to bottom line pressures that require hiring talent that can hit the ground running, and
d. it's often too expensive for resource-strapped midsize companies to invest the time and money required to get graduates up to speed.

Here are my top lessons learned from my conversations with the experts.

1. A Better Title Would Have Been, "The Challenges of Finding and Retaining IT Talent That Matches Your Requirements"

There are more applicants than jobs, but it's getting tougher and tougher to find individuals that perfectly fit each and every role. Top performing companies are increasingly looking to hire the "almost ready", striving to keep their existing talent more engaged, and leveraging their employees' social and professional networks to quickly narrow down candidate searches (here's another whitepaper, "A Strategic Approach to Talent Management").

2. Size Matters, But So Does Location

Midsize companies must strive to build cultures that compete favorably with what large enterprises can offer, especially when they aren't within commuting distance of IT talent strongholds. They can't always match the compensation and benefits offered by large enterprises, so it's paramount to offer candidates a high quality of life and opportunities to build their resumes in alignment with their long term career aspirations.

3. Get By With a Little Help From Your Friends

It doesn't always make sense to invest time and money in training an employee on a task they will not perform frequently, or to get into a bidding war for talent with skills that are rare and in high demand. Many midsize companies are finding that it makes good economic sense to contract with partners for remote support rather than trying to divvy up each and every role amongst their lean staff. Internal staff can then be assigned to the roles that will have the highest positive impact on achieving organizational goals.

4. It's Actually Both "What You Know" AND "Who You Know"

If I were hiring someone today I would absolutely leverage the social and professional networks of my co-workers. Period. Most research shows that hiring in this manner is less expensive and time consuming AND produces better results. There is also some evidence that suggests new hires from employees' networks have higher job performance and retention rates.

5. I Have New Respect for Recruiters and Hiring Managers

My hat's off to them; it's not easy hiring and retaining top talent with today's challenges. Check out the infographic, "A New Day: Taking HR from Chaos to Control", on Oracle's Human Capital Management solutions home page. You can also explore all of Oracle's HCM solutions from that page based on your role.

You can read all the posts in this series by clicking on the links in the right sidebar. Stay tuned… we'll continue to post thought leadership on HCM and Talent Management topics.

    Read the article

  • Internal drives vs USB-3 with external SSD or eSATA with external SSD

    - by normstorm
    I have a need to carry VMware virtual machines with me for work. These are very large files (each VM is 20GB or more) and I carry around about 40 to 50 VMs to simulate different software configurations for different client needs. Key point: they won't fit on the internal hard drive of my current laptop.

I currently execute the VMs from an external 7200 RPM 2.5" USB-2 drive and keep copies of the VMs on other 5400 RPM external USB-2 drives. The VMs work from this drive, but they are slow, costing me much time and frustration. It can take upwards of 30 minutes just to make a copy of one of the VMs. They can take upwards of 10-15 minutes to fully launch and then they operate sluggishly.

I am buying a new laptop (Core i7, 8GB RAM, and other high-end specs). I intend to buy an SSD for the O/S volume (C:). This SSD will not be large enough to hold the VMs. I have always wanted a second internal hard drive to operate the VMs. To have two hard drives, though, I am finding that I will have to go to a 17" laptop, which would be bulky and heavy. I am instead considering purchasing a 15" laptop with either an eSATA port or USB-3 ports and then purchasing two external drives: one might be an external SSD (maybe OCZ brand) for operating the VMs, and the other a 7200 RPM 1TB hard drive for carrying around the VMs not currently in use.

The question is which option would give me the biggest bang for the buck and the least weight:

1) 2nd internal SSD. This would mean buying a 17" laptop with two drive "bays". The first bay would hold an SSD for the C: drive. I would leave the second bay empty from the manufacturer and then purchase/install an aftermarket SSD. This second SSD would have to be very large (256 GB), which would be expensive. I would still also need another external hard drive for carrying around the VMs not in use.

2) 2nd internal 7200 RPM hard drive. Again, a 17" laptop would be required, but there are models available with an SSD for the C: drive and a second 7200 RPM hard drive. The second drive could probably be large enough to hold the VMs in use as well as those not in use. But would it be fast enough to drive the VMs?

3) USB-3 with external SSD. I could buy a 15" laptop with an SSD for the C: drive and a second hard drive for general files. I would operate the VMs from an external USB-3 SSD and have a third USB-3 external 7200 RPM drive for holding the VMs not in use.

4) eSATA with external SSD. Ditto, just eSATA instead of USB-3.

5) USB-3 with external 7200 RPM drive. Ditto, but the drive running the VMs would be a USB-3 attached 7200 RPM drive rather than an SSD.

6) eSATA with external 7200 RPM drive. Ditto, but the drive running the VMs would be an eSATA attached 7200 RPM drive rather than an SSD.

Any thoughts on this and any creative solutions?

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
    The W8 preview is now installed and I am enjoying it. I remember the learning curve of my first unix machine back in the eighties, this ain't that. It is normal for me to do the first os install with a keyboard and low end monitor...you never know what you'll encounter out in the field. The OS took to it like a fish to water. I used a low end INTEL motherboard dp55w I gathered on the cheap, an 1156 i5 from the used bin, a pair of 6 gig ddr3 sticks, a Rosewill 550 watt power supply, a cheap used twenty buck sub 200g wd sata drive, a half working dvd burner and an asus fanless nvidia vid card, not a great one but sub 50.00 on newey eggey...I did have to hunt the ms forums for a key and of course to activate the thing; if dos would have needed this outmoded ritual, we would still be on cpm and osborne would be a household name. Of course little do people know that this ritual was common as far back as the seventies on att unix installs....not, but it was possible. I used to joke, back when I ran a bbs, about what hell would have been wrought had dos 3.2 machines been required to dial into my bbs to send fido mail to ms and wait for an acknowledgement.

All in all the thing was pushing a seven on the ms richter scale, not including the vid card; sadly it came in at just a tad over three....I wanted to evaluate it for a possible replacement on critical machines that in the past went down due to a vid card fan failure....you have no idea what a customer thinks when you show them a failed vid card fan..."you mean that little plastic piece of junk caused all this!!??!!!"...yea man. Some production machines don't need any sort of vid, so I will at least keep it on the maybe list for those. MTBF is a very important factor; some big box stores should put percentage of failure rate within 24 month estimates on the outside of the carton for sure. And a warning that the power supplies are already at their limit. Let's face it, today even 550w can be iffy.

A few neat eye candy improvements over the earlier windows are nice. The metro screen is nice; anyone who has used a newer phone recently will intuitively drag their fingers across the screen....lot of good that was with no mouse or touch screen though. Lucky me, I have been using windows since day one, I still have a copy of win 2.0 (and every other version) for no good reason. Still, the old 'ix collection of disks is much larger; recompiling any kernel is another silly ritual, same machine, different day, same recompile...argh. RH is my all time fav, Mandrake was always missing something, like it rewrote the init file or something, Novell is ok as long as you stay on the beaten path, and of course Ubuntu normally recompiles with the same errors consistently....makes life easy that way....no errors on windows eight, just a screen that did not match the installed hardware. Naturally I alt tabbed right out of it, then hit the flag key to find the start menu....no start button. I miss the start button already.

Keyboard cowboy funnin', I was browsing the hard drive, nothing stunning there, I like that, means I can find stuff. Only I can't find what I want, the start button....the start menu is that first screen for touch tablets. No biggie for users, that is where they will want to be, I can see that. Admins won't want to be there; it is easy enough to get to the control panel a bazillion other ways though, just not the start button. (see a pattern here?). Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts.

Bottom line, I love seven and I'll love eight even more!...very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions said "oh I get it, so every piece you put in there is basically a hundred bucks, right?")...ok, sure, pretty much, more or less, well, ya dude. It will be WAY past october till I get a real touch screen but I did pick up a pair of cheap tatungs so I can try the NEW main start screen. I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on mars. Ok, fine, they are way smallish, and I don't expect multitouch to work, but we are talking a few percent of the cost of a new 21 inch viewsonic touch screen.

Will this OS be a game changer? I don't know. Bottom line, with all the pads and droids in the world, it is more of a catch up move at first glance. Not something ms is used to. An app store? I can see ms's motivation, the others have it. I gather there will not be gadgets there; go ahead and see what ms did to the once populated gadget page...go ahead, google gadgets and take a gander. There used to be hundreds of gadgets; they are already gone. They replaced gadgets? Sort of. I'll drop that, it's a bit of a sore point for me.

More of interest was what happened when I downloaded stuff off codeplex and some other normal programs that I like, like orbitron, top o' my list!!...cardware it is...anyways, click on the exe, get a screen, normal for windows, this one indicated that I was not running a normal windows program and had a button to exit the install. Naw, I hit details, a hidden run program anyways came into view....great, my path to the normal windows has detected a program tha.....yea ok, acl is on, fine. Moving along, I got orbitron installed in record time and was tracking the iss on the newest Microsoft OS, beta of course. Felt like the first time I set up bsd all those years ago...FUN!! I suppose I gotta start to think about budgeting for the real os when it comes out in october; by then I should have a raspberry pi and be done with fedora remixed. Of course that sounds like fun too!!

I would use this OS on a tablet or phone. I don't like the idea of being herded to an app store, don't like that on anything; we are americans and want real choices, not marketed hype, unless you are younger with opm (other people's money). This os would be neat on a zune, but I suspect the zune is a goner. I am rooting for microsoft, after all their default password is not admin anymore, nor alpine, it's blank. Others force a password; my first fawn password was so long I could not even log into it with the password in front of me. Who the heck uses %$# anyways, and if I was writing a brute force attack what the heck kinda impasse is that anyways at .00001 microseconds of a code execution cycle (just a non qualified number, not a real clock speed)....AI is where it will be before too long, and MS is on that path. Perhaps soon someone will sit down and write an app for the kinect that watches your eyes while you scan the new main start screen, clicking on the big E icon when you blink.....boy is that going to be fun!!!! Sure. Blink, dammit, blink, dammit...... OPM no doubt.

I like windows eight, we are moving forwards, but better keep a close eye on ubuntu. The real clincher comes when open source becomes paid source......don't blink, I already see plenty of very expensive 'ix apps, some even in app stores already. More to come.......

    Read the article

  • How I Work: Staying Productive Whilst Traveling

    - by BuckWoody
    I travel a lot. Not like some folks that are gone every week, mind you, although in the last month I've been to: Cambridge, UK; Anchorage, AK; San Jose, CA; Copenhagen, DK; Boston, MA; and I'm currently en-route to Anaheim, CA. While this many places in a month is a bit unusual for me, I would say I travel frequently. I've travelled most of my 28+ years in IT, and at one time was a consultant traveling weekly.

With that much time away from my primary work location, I have to find ways to stay productive. Some might say "just rest – take a nap!" – but I'm not able to do that. For one thing, I'm a very light sleeper and I've never slept on a plane - even a 30+ hour trip to New Zealand in Business Class - so that just isn't an option. I also am not always in the plane, of course. There's the hotel, the taxi/bus/train, the airport and then all that over again when I arrive. Since my regular jobs have many demands, I have to get work done.

Note: No, I'm not always focused on work. I need downtime just like everyone else. Sometimes I just think, watch a movie or listen to tunes – and I give myself permission to do that anytime – sometimes the whole trip. I have too few heartbeats left in life to only focus on work – it's just not that important, and neither am I. Some of these tasks are letters to friends and family, or other personal things. What I'm talking about here is a plan, not some task list I have to follow. When I get to the location I'm traveling to, I always build in as much time as I can to ensure I enjoy those sights and the people I'm with. I would find traveling to be a waste if not for that.

The Unrealistic Expectation

As I would evaluate the trip I was taking – say a 6-8 hour flight – I would expect to get 10-12 hours of work done. After all, there's the time at the airport, the taxi and so on, and then of course the time in the air with all of the room, power, internet and everything else I needed to get my work done. I would pile up tasks at home, pack my bags, and head happily to the magical land of the TSA.

Right. On return from the trip, I had accomplished little, had more e-mails and other work that had piled up, and I was tired, hungry, and unorganized. This had to change. So, I decided to do three things:

Segment my work
Set realistic expectations
Plan accordingly

Segmenting By Available Resources

The first task was to decide what kind of work I could do in each location – if any. I found that I was dependent on a few things to get work done, such as power, the Internet, and a place to sit down. Before I fly, I take some time at home to get all of the work I'd like to accomplish while away segmented into these areas, and print that out on paper, which goes in my suit-coat pocket along with a mechanical pencil. I print my tickets, and I'm all set for the adventure ahead. Then I simply do each kind of work whenever I'm in that situation.

No power

There are certain times when I don't have power available. But not only that, I might not even be able to use most of my electronics. So I now schedule as many phone calls as I can for the taxi/bus/train ride and the airports. I have a paper notebook (Moleskine, of course) and a pencil and I print out any notes or numbers I need prior to the trip. Once I'm airborne or at the airport, I work on my laptop. I check and respond to e-mails, create slides, write code, do architecture, whatever I can. If I can't use any electronics, or once the power runs out, I schedule time for reading. I can read at the airport or anywhere, actually, even in-flight or any other transport. I "read with a pencil", meaning I take a lot of notes, which I like to put in OneNote, but since in most cases I don't have power, I use the Moleskine to do that. Speaking of which, sometimes as I'm thinking I come up with new topics, ideas, blog posts, or things to teach in my classes. Once again I take out the notebook and write it down. All of these notes get a check-mark when I get back to the office and transfer the writing to OneNote. I've tried those "smart pens" and so on to automate this, but it just never works out. Pencil and paper are just fine. As I mentioned, sometimes I just need to think. I'll do nothing, and let my mind wander, thinking of nothing in particular, or some math problem or science question I'm interested in. My only issue with this is that I communicate to think, and I don't want to drive people crazy by being that guy that won't shut up, so I think in a different way.

Power, but no Internet or Phone

If I have power but no Internet or phone, I focus on the laptop and the tablet as before, and I also recharge my other gadgets.

Power, Internet, Phone and a Place to Work

At first I thought that when I arrived at the hotel or event I could get the same amount of work done that I do at the office. Not so. There are simply too many distractions, things you need, or other issues that don't allow this. Of course, I can work on any device, read, think, write or whatever, but I am simply not as productive as I am in my home office. So I plan for about 25-50% as much work getting done in this environment as I think I could really do. I've done some measurements, and this holds out to be true almost every time. The key is that I re-set my expectations (and my co-workers' expectations as well) that this is the case. I use Out-Of-Office notices to let people know that I'm just not going to be 100% at this time – it's hard for everyone, but it's more honest and realistic, and I'd rather they know that – and that I realize that – than to let them think I'm totally available. Because I'm not – I'm traveling. I don't tend to put in too much detail, because after all I don't necessarily want to let people know when I'm not home :) but I do think it's important to let people that depend on me know that I'll get back with them later.

I hope this helps you think through your own methodology of staying productive when you travel. Or perhaps you just go offline, and don't worry about any of this – good for you! That's completely valid as well.

(Oh, and yes, I wrote this at 35K feet, on Alaska Airlines on a trip. :) Practice what you preach, Buck.)

    Read the article

  • Validation firing in ASP.NET MVC

    - by rkrauter
    I am lost on this MVC project I am working on. I also read Brad Wilson's article: http://bradwilson.typepad.com/blog/2010/01/input-validation-vs-model-validation-in-aspnet-mvc.html

I have this:

public class Employee
{
    [Required]
    public int ID { get; set; }

    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }
}

and these in a controller:

public ActionResult Edit(int id)
{
    var emp = GetEmployee();
    return View(emp);
}

[HttpPost]
public ActionResult Edit(int id, Employee empBack)
{
    var emp = GetEmployee();
    if (TryUpdateModel(emp, new string[] { "LastName" }))
    {
        Response.Write("success");
    }
    return View(emp);
}

public Employee GetEmployee()
{
    return new Employee { FirstName = "Tom", LastName = "Jim", ID = 3 };
}

and my view has the following:

<% using (Html.BeginForm()) {%>
    <%= Html.ValidationSummary() %>
    <fieldset>
        <legend>Fields</legend>
        <div class="editor-label">
            <%= Html.LabelFor(model => model.FirstName) %>
        </div>
        <div class="editor-field">
            <%= Html.DisplayFor(model => model.FirstName) %>
        </div>
        <div class="editor-label">
            <%= Html.LabelFor(model => model.LastName) %>
        </div>
        <div class="editor-field">
            <%= Html.TextBoxOrLabelFor(model => model.LastName, true)%>
            <%= Html.ValidationMessageFor(model => model.LastName) %>
        </div>
        <p>
            <input type="submit" value="Save" />
        </p>
    </fieldset>
<% } %>

Note that the only editable field is LastName. When I post back, I get back the original employee and try to update it with only the LastName property. But what I see on the page is the following error:

• The FirstName field is required.

This, from what I understand, is because TryUpdateModel failed. But why? I told it to update only the LastName property. I am using MVC2 RTM. Thanks in advance.
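For what it's worth, a hedged reading consistent with the input-vs-model validation distinction in the linked article: the error is likely not produced by TryUpdateModel at all. The DefaultModelBinder builds and validates the Employee empBack action parameter before the action body runs, and because FirstName is rendered with DisplayFor (so it is never posted), its [Required] rule fails and that error is already sitting in ModelState when the view re-renders. A minimal sketch of one common workaround, assuming MVC 2's standard FormCollection.ToValueProvider() API, is to drop the typed parameter so nothing validates the whole model:

[HttpPost]
public ActionResult Edit(int id, FormCollection form)
{
    // No Employee parameter here, so the model binder never validates the
    // entire posted model and no spurious FirstName error enters ModelState.
    var emp = GetEmployee();

    // Bind and validate only the whitelisted LastName property.
    if (TryUpdateModel(emp, new string[] { "LastName" }, form.ToValueProvider()))
    {
        Response.Write("success");
    }
    return View(emp);
}

Calling ModelState.Remove("FirstName") before TryUpdateModel is another route, though the FormCollection version keeps validation scoped to what the form actually edits.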

    Read the article

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data sources

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. Requirements are:

- each person publishes his own flashcards in a file on a web server, e.g. http://www.mywebserver.com/flashcards.txt
- other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of flashcard files they want to subscribe to, with URLs and imported flashcards being saved in IsolatedStorage
- the flashcards.txt file has the following simple format: a title, then blocks of question/answer pairs:

Jim Smith's flashcards from English class 53-222, winter semester 2009
==fla
Das kann nicht sein.
That can't be.
==fla
Es sei denn, er kommt nicht.
Unless he doesn't come.

The user then makes public the URL to his flashcard file and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish and distribute the URL of. The flashcard readers will then recognize that it is a Google Document and perform the necessary screen scraping to get at the raw text.

I have two technical questions about this approach:

1. What is the best way to plan now for scalability issues? E.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published Google Docs?

2. Each reader will have the ability to allow the person to test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the imported flashcards themselves. What is a good pattern to allow these readers to share and synchronize this meta data, e.g. so when you are looking at a flashcard you can see that 5 other people have made corrections to it? The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard. The best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks.

The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a Google Document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of a database, lack of unique IDs, etc.

For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips can you share on the above two points?
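On question 1, a hedged sketch: HTTP already exposes last-modified metadata, so a conditional GET with If-Modified-Since can answer "did anything change?" without downloading the full 200K per subscription. The sketch below uses the plain .NET HttpWebRequest API for clarity; Silverlight's browser HTTP stack restricts which headers and verbs are available, so in practice this check may need the client HTTP stack or a small server-side proxy, and published Google Docs may not honor the header at all, in which case the fallback is a full download compared against a stored hash. The StableId helper is one assumed answer to question 2's identity problem, with the drawback the question already notes (editing the text changes the identity):

using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

public static class FlashcardSync
{
    // Returns true if the server reports content newer than 'lastSeen'.
    // Relies on the server honoring If-Modified-Since (a 304 means unchanged).
    public static bool HasChanged(string url, DateTime lastSeen)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.IfModifiedSince = lastSeen;
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode == HttpStatusCode.OK; // 200 = new content
            }
        }
        catch (WebException ex)
        {
            var response = ex.Response as HttpWebResponse;
            if (response != null && response.StatusCode == HttpStatusCode.NotModified)
            {
                return false; // 304 = nothing new to download
            }
            throw;
        }
    }

    // A stable ID for a flashcard: hash of source URL + question + answer,
    // so cards keep their identity even when other cards shift position in the file.
    public static string StableId(string sourceUrl, string question, string answer)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] bytes = Encoding.UTF8.GetBytes(sourceUrl + "\n" + question + "\n" + answer);
            return BitConverter.ToString(sha1.ComputeHash(bytes)).Replace("-", "");
        }
    }
}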

    Read the article

  • Event Listener in Google Charts API

    - by DeanGrobler
    I'm busy using Google Charts in one of my projects to display data in a table. Everything is working great, except that I need to see which row a user selected once they click a button. This would obviously be done with JavaScript, but I've been struggling for days now to no avail. Below I've pasted code for a simple example of the table, and the JavaScript function that I want to use (which doesn't work).

<html>
<head>
<script type='text/javascript' src='https://www.google.com/jsapi'></script>
<script type='text/javascript'>
google.load('visualization', '1', {packages:['table']});
google.setOnLoadCallback(drawTable);

var table = "";

function drawTable() {
  var data = new google.visualization.DataTable();
  data.addColumn('string', 'Name');
  data.addColumn('number', 'Salary');
  data.addColumn('boolean', 'Full Time Employee');
  data.addRows(4);
  data.setCell(0, 0, 'Mike');
  data.setCell(0, 1, 10000, '$10,000');
  data.setCell(0, 2, true);
  data.setCell(1, 0, 'Jim');
  data.setCell(1, 1, 8000, '$8,000');
  data.setCell(1, 2, false);
  data.setCell(2, 0, 'Alice');
  data.setCell(2, 1, 12500, '$12,500');
  data.setCell(2, 2, true);
  data.setCell(3, 0, 'Bob');
  data.setCell(3, 1, 7000, '$7,000');
  data.setCell(3, 2, true);

  table = new google.visualization.Table(document.getElementById('table_div'));
  table.draw(data, {showRowNumber: true});
}

function selectionHandler() {
  selectedData = table.getSelection();
  row = selectedData[0].row;
  item = table.getValue(row, 0);
  alert("You selected :" + item);
}
</script>
</head>
<body>
<div id='table_div'></div>
<input type="button" value="Select" onClick="selectionHandler()">
</body>
</html>

Thanks in advance to anyone taking the time to look at this. I've honestly tried my best with this; I hope someone out there can help me out a bit.

    Read the article

  • how do i add images from drawable folder instead of url in this code

    - by hayya anam
    I used the following source in my application: https://github.com/jgilfelt/android-mapviewballoons. It shows images in the balloons by loading them from a URL. How do I use images from my own drawable folder instead of a URL?

public class CustomMap extends MapActivity {
    MapView mapView;
    List<Overlay> mapOverlays;
    Drawable drawable;
    Drawable drawable2;
    CustomItemizedOverlay<CustomOverlayItem> itemizedOverlay;
    CustomItemizedOverlay<CustomOverlayItem> itemizedOverlay2;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        mapView = (MapView) findViewById(R.id.mapview);
        mapView.setBuiltInZoomControls(true);
        mapOverlays = mapView.getOverlays();

        // first overlay
        drawable = getResources().getDrawable(R.drawable.marker);
        itemizedOverlay = new CustomItemizedOverlay<CustomOverlayItem>(drawable, mapView);

        GeoPoint point = new GeoPoint((int)(51.5174723*1E6), (int)(-0.0899537*1E6));
        CustomOverlayItem overlayItem = new CustomOverlayItem(point, "Tomorrow Never Dies (1997)",
            "(M gives Bond his mission in Daimler car)",
            "http://ia.media-imdb.com/images/M/MV5BMTM1MTk2ODQxNV5BMl5BanBnXkFtZTcwOTY5MDg0NA@@._V1._SX40_CR0,0,40,54_.jpg");
        itemizedOverlay.addOverlay(overlayItem);

        GeoPoint point2 = new GeoPoint((int)(51.515259*1E6), (int)(-0.086623*1E6));
        CustomOverlayItem overlayItem2 = new CustomOverlayItem(point2, "GoldenEye (1995)",
            "(Interiors Russian defence ministry council chambers in St Petersburg)",
            "http://ia.media-imdb.com/images/M/MV5BMzk2OTg4MTk1NF5BMl5BanBnXkFtZTcwNjExNTgzNA@@._V1._SX40_CR0,0,40,54_.jpg");
        itemizedOverlay.addOverlay(overlayItem2);
        mapOverlays.add(itemizedOverlay);

        // second overlay
        drawable2 = getResources().getDrawable(R.drawable.marker2);
        itemizedOverlay2 = new CustomItemizedOverlay<CustomOverlayItem>(drawable2, mapView);

        GeoPoint point3 = new GeoPoint((int)(51.513329*1E6), (int)(-0.08896*1E6));
        CustomOverlayItem overlayItem3 = new CustomOverlayItem(point3, "Sliding Doors (1998)", "(interiors)", null);
        itemizedOverlay2.addOverlay(overlayItem3);

        GeoPoint point4 = new GeoPoint((int)(51.51738*1E6), (int)(-0.08186*1E6));
        CustomOverlayItem overlayItem4 = new CustomOverlayItem(point4, "Mission: Impossible (1996)",
            "(Ethan & Jim cafe meeting)",
            "http://ia.media-imdb.com/images/M/MV5BMjAyNjk5Njk0MV5BMl5BanBnXkFtZTcwOTA4MjIyMQ@@._V1._SX40_CR0,0,40,54_.jpg");
        itemizedOverlay2.addOverlay(overlayItem4);
        mapOverlays.add(itemizedOverlay2);

        final MapController mc = mapView.getController();
        mc.animateTo(point2);
        mc.setZoom(16);
    }

    @Override
    protected boolean isRouteDisplayed() {
        return false;
    }
}

    Read the article

  • Why is two-way binding in silverlight not working?

    - by Edward Tanguay
    According to how Silverlight TwoWay binding works, when I change the data in the FirstName field, it should change the value in the CheckFirstName field. Why is this not the case?

ANSWER: Thank you Jeff, that was it. For others: here is the full solution with downloadable code.

XAML:

<StackPanel>
    <Grid x:Name="GridCustomerDetails">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto"/>
            <ColumnDefinition Width="300"/>
        </Grid.ColumnDefinitions>
        <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="0" Grid.Column="0">First Name:</TextBlock>
        <TextBox Margin="10" Grid.Row="0" Grid.Column="1" Text="{Binding FirstName, Mode=TwoWay}"/>
        <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="1" Grid.Column="0">Last Name:</TextBlock>
        <TextBox Margin="10" Grid.Row="1" Grid.Column="1" Text="{Binding LastName}"/>
        <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="2" Grid.Column="0">Address:</TextBlock>
        <TextBox Margin="10" Grid.Row="2" Grid.Column="1" Text="{Binding Address}"/>
    </Grid>
    <Border Background="Tan" Margin="10">
        <TextBlock x:Name="CheckFirstName"/>
    </Border>
</StackPanel>

Code behind:

public Page()
{
    InitializeComponent();

    Customer customer = new Customer();
    customer.FirstName = "Jim";
    customer.LastName = "Taylor";
    customer.Address = "72384 South Northern Blvd.";

    GridCustomerDetails.DataContext = customer;

    Customer customerOutput = (Customer)GridCustomerDetails.DataContext;
    CheckFirstName.Text = customer.FirstName;
}
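The fix Jeff pointed to isn't quoted above, so here is a hedged sketch of the usual explanation and remedy: the TwoWay binding does write the TextBox edits back into the Customer object, but CheckFirstName.Text is assigned exactly once in the constructor and nothing ever tells the TextBlock to re-read the value. Making Customer raise change notifications and binding the TextBlock instead of setting its Text in code (e.g. <TextBlock x:Name="CheckFirstName" Text="{Binding FirstName}"/>, with the Border inheriting the same DataContext) closes the loop. The Customer class below is an assumption about its shape; only the notifying property is essential:

using System.ComponentModel;

public class Customer : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string firstName;
    public string FirstName
    {
        get { return firstName; }
        set { firstName = value; OnPropertyChanged("FirstName"); }
    }

    // LastName and Address would follow the same pattern if they
    // also need to push changes back out to the UI.
    public string LastName { get; set; }
    public string Address { get; set; }

    private void OnPropertyChanged(string propertyName)
    {
        // Raising PropertyChanged is what prompts the binding engine
        // to refresh any elements bound to this property.
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}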

    Read the article

< Previous Page | 33 34 35 36 37 38  | Next Page >