Search Results

Search found 1218 results on 49 pages for 'jeffrey kevin pry'.

Page 47/49

  • Holiday Approval / Tracking

    - by nav
    Hi, has anyone implemented a holiday approval/tracking workflow list in MOSS SharePoint 2007? Can anyone suggest other solutions? The solution below works fine, but I am specifically looking for a way to look up the manager of the user who created the holiday request list item in the workflow. I have followed this link http://www.u2u.info/Blogs/Kevin/Lists/Posts/Post.aspx?ID=39 which shows you how to create a custom workflow approval. Below are the steps outlined by the link: 1) user adds a new holiday item to the list; 2) the workflow kicks off; 3) the workflow has the manager hardcoded (I need a way to look this up, maybe from AD?) and creates a task for them to review the request, which can include an email notification of the task if desired; 4) the manager reviews, adds comments and approves/denies the request; 5) the user is notified of the completed request. Many thanks, Naveen
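
    A possible way to remove the hardcoded manager is to read the "manager" attribute of the requesting user from Active Directory. Below is a minimal LDAP lookup sketch, written in Java (JNDI) to match the other examples on this page; the server URL, service account, credentials and search base are all hypothetical placeholders, and inside a MOSS workflow the equivalent query would go through System.DirectoryServices in C#.

    ```java
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.Attribute;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class ManagerLookup {
        // Returns the distinguished name of the user's manager, or null if none is set.
        public static String findManagerDn(String samAccountName) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://dc.example.com:389");      // hypothetical domain controller
            env.put(Context.SECURITY_PRINCIPAL, "svc-workflow@example.com"); // hypothetical service account
            env.put(Context.SECURITY_CREDENTIALS, "secret");

            DirContext ctx = new InitialDirContext(env);
            try {
                SearchControls controls = new SearchControls();
                controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
                controls.setReturningAttributes(new String[] { "manager" });

                // AD stores the manager as the DN of another user object.
                NamingEnumeration<SearchResult> results = ctx.search(
                        "DC=example,DC=com",
                        "(&(objectClass=user)(sAMAccountName={0}))",
                        new Object[] { samAccountName },
                        controls);
                if (results.hasMore()) {
                    Attribute manager = results.next().getAttributes().get("manager");
                    return manager != null ? (String) manager.get() : null;
                }
                return null;
            } finally {
                ctx.close();
            }
        }
    }
    ```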

    Read the article

  • JAXB entity print out as XML

    - by Cristian Boariu
    Hi, I have a class, let's say User, annotated with @XmlRootElement, with some properties (name, surname, etc.). I use this class for REST operations, as application/xml. The client will POST the User class, so I want to keep the values in the log. Is there any method in JAX-WS to print out this object as XML? For instance: log.info("Customer sent a User" + user.whichMethod()); should log: Customer sent a User <user> <name>cristi</name> <surname>kevin</surname> </user> Thanks.
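
    One simple option, independent of any particular JAX-WS stack, is to marshal the entity into a StringWriter with a plain JAXB Marshaller and log the result. A minimal sketch (the XmlLog helper name is illustrative, not a standard API):

    ```java
    import java.io.StringWriter;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBException;
    import javax.xml.bind.Marshaller;

    public final class XmlLog {
        // Marshals any @XmlRootElement-annotated object to an XML string for logging.
        public static String toXml(Object entity) {
            try {
                JAXBContext ctx = JAXBContext.newInstance(entity.getClass());
                Marshaller m = ctx.createMarshaller();
                m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
                StringWriter out = new StringWriter();
                m.marshal(entity, out);
                return out.toString();
            } catch (JAXBException e) {
                return "<unmarshallable: " + e.getMessage() + ">";
            }
        }
    }
    ```

    Logging then becomes log.info("Customer sent a User\n" + XmlLog.toXml(user)). Note that JAXBContext.newInstance is relatively expensive, so a production version would cache one context per entity class.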

    Read the article

  • Nepotism In The SQL Family

    - by Rob Farley
    There’s a bunch of sayings about nepotism. It’s unpopular, unless you’re the family member who is getting the opportunity. But of course, so much in life (and career) is about who you know. From the perspective of the person who doesn’t get promoted (when the family member is), nepotism is simply unfair; even more so when the promoted one seems less than qualified, or incompetent in some way. We definitely get a bit miffed about that. But let’s also look at it from the other side of the fence – the person who did the promoting. To them, their son/daughter/nephew/whoever is just another candidate, but one in whom they have more faith. They’ve spent longer getting to know that person. They know their weaknesses and their strengths, and have seen them in all kinds of situations. They expect them to stay around in the company longer. And yes, they may have plans for that person to inherit one day. Sure, they have a vested interest, because they’d like their family members to have strong careers, but it’s not just about that – it’s often best for the company as well. I’m not announcing that the next LobsterPot employee is one of my sons (although I wouldn’t be opposed to the idea of getting them involved), but actually, admitting that almost all the LobsterPot employees are SQLFamily members… …which makes this post good for T-SQL Tuesday, this month hosted by Jeffrey Verheul (@DevJef). You see, SQLFamily is the concept that the people in the SQL Server community are close. We have something in common that goes beyond ordinary friendship. We might only see each other a few times a year, at events like the PASS Summit and SQLSaturdays, but the bonds that are formed are strong, going far beyond typical professional relationships. And these are the people that I am prepared to hire. People that I have got to know. I get to know their skill level, how well they explain things, how confident people are in their expertise, and what their values are. Of course there are people that I wouldn’t hire, but I’m a lot more comfortable hiring someone that I’ve already developed a feel for. I need to trust the LobsterPot brand to people, and that means they need to have a similar value system to me. They need to have a passion for helping people and doing what they can to make a difference. Above all, they need to have integrity. Therefore, I believe in nepotism. All the people I’ve hired so far are people from the SQL community. I don’t know whether I’ll always be able to hire that way, but I have no qualms admitting that the things I look for in an employee are things that I can recognise best in those that are referred to as SQLFamily. …like Ted Krueger (@onpnt), LobsterPot’s newest employee and the guy who is representing our brand in America. I’m completely proud of this guy. He’s everything I want in an employee. He’s an experienced consultant (even wrote a book on it!), loving husband and father, genuine expert, and incredibly respected by his peers. It’s not favouritism, it’s just choosing someone I’ve been interviewing for years. @rob_farley

    Read the article

  • Windows Phone 7 Design using Expression Blend - Resources

    - by Nikita Polyakov
    I’ve been doing a series of talks across Florida regarding Windows Phone 7 design using Microsoft Expression Blend 4. I discuss the WP7 phone and application experience, and show how to use the Expression Blend toolset to effectively design such apps. The next presentation, on 5/4/2010 at 6:30 PM EST, will be a webcast over LiveMeeting with the Ft. Lauderdale Online group. Registration and the LiveMeeting link are both here: http://www.fladotnet.com/Reg.aspx?EventID=459 [I will post a link if it’s recorded]   Here are the resources from my presentations: The biggest source is the Windows Phone UI and Design Language video from MIX10; the Windows Phone 7 Design Guide as it’s found on the WP7 Dev Home Page; and the Silverlight Mobile Tutorials on the official Silverlight website. I will be blogging a separate entry for a new demo app that will showcase the elements I presented. I suggest you actually watch all of the MIX videos about SL and design as a great primer to get you thinking the WP7 way.   A lot is happening with WP7Dev and it’s just the beginning! So watch these Twitter accounts and blogs: @Ckindel - Charlie Kindel - WP7 Dev Head http://blogs.msdn.com/ckindel; @WP7Dev - official dev Twitter; @WP7 - official WP7 Twitter; Peter Torr - http://blogs.msdn.com/ptorr; Mike Harsh - http://blogs.msdn.com/mharsh; Shawn Oster - http://www.shawnoster.com   Other worthwhile mentions are my local friends speaking and blogging about Windows Phone 7: Bill Reiss is doing great presentations on building games with XNA for Windows Phone 7, so be on the lookout for those around Florida; Bill is a Silverlight MVP and has a legacy of XNA and Silverlight games, see his site. Kevin Wolf (aka ByteMaster) is a Device Application Developer MVP with tremendous experience building mobile applications; he has developed WinMo-GF, a multi-platform gaming framework. Get these tools and get creating! You will need the following components installed in this order: Expression Blend 4 Beta; Windows Phone Developer Tools; Microsoft Expression Blend Add-in Preview for Windows Phone; Microsoft Expression Blend SDK Preview for Windows Phone. Want more training? Don’t forget that Channel 9 has complete walkthroughs of their WP7 Training Kit posted online. PS: To continue with all this design talk, check out Microsoft .toolbox: “Learn to create Silverlight applications using Expression Studio and to apply fundamental design principles.” A great website with a lot of design tutorials set up as a wonderful full course on design, all for free, including a great forum community and neat little avatars you can build yourself.

    Read the article

  • Informal Interviews: Just Relax (or Should I?)

    - by david.talamelli
    I was in our St Kilda Rd office last week and had the chance to meet up with Dan and David from GradConnection. I love what these guys are doing; their business has been around for two years, and I really like how they have taken their own experiences from university, found a niche in their market and chased it. These guys are always networking. Whenever they come to Melbourne they send me a tweet to catch up, and even though we often miss each other they are persistent. It sounds like their business is going from strength to strength, and I have to think that success comes from their hard work and enthusiasm for their business. Anyway, before my meeting with GradConnection I noticed a tweet from Kevin Wheeler who was saying it was his last day in Melbourne - I sent him a message and we met up that afternoon for a coffee (I am getting to the point, I promise). On my way back to the office after my meeting I was on a tram, sitting beside a lady who was talking to her friend on her mobile. She had just come back from an interview and was telling her friend how laid back the meeting was, and how she wasn't too sure of the next steps of the process as it was a really informal meeting. The recurring themes from this phone call were that 1) she and the interviewer got along really well and had a lot in common, and 2) the meeting was very informal and relaxed. I wasn't at the interview so I cannot say for certain, but in my experience, regardless of the type of interview that is happening - whether it is a relaxed interview at a coffee shop or a behavioural interview in an office setting - one thing is consistent: the employer is assessing your ability to perform the role and fit into the company. Different interviewers have different interviewing styles. For example, some interviewers may create a very relaxed environment, thinking this will draw out less practiced answers and give a more realistic view of the person and their abilities, while other interviewers may put the candidate "under the pump" to see how they react in a stressful situation. There are as many interviewing styles as there are interviewers. I think candidates, regardless of the type of interview, need to be professional and honest about their skills, experience, abilities and career plans (if you know what they are). Even though an interview may be informal, you shouldn't slip into complacency, and you should not forget the end goal of the interview, which is to get a job. Business happens outside of the office walls, and while you may meet someone for a coffee it is still a business meeting no matter how relaxed the setting. You don't need to be a stick in the mud and never let your personality shine through, but that first impression you make may play a big part in how far in the interview process you go. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • CRMIT Solutions' CRM++ Asterisk Telephony Connector Achieves Oracle Validated Integration with Oracle Sales Cloud

    - by Richard Lefebvre
    To achieve Oracle Validated Integration, Oracle partners are required to meet a stringent set of requirements that are based on the needs and priorities of customers. Based on a Telephony Application Programming Interface (TAPI) framework, the CRM++ Asterisk Telephony Connector integrates Asterisk telephony solutions with Oracle® Sales Cloud. "The CRM++ Asterisk Telephony Connector for Oracle® Sales Cloud showcases CRMIT Solutions' focus and commitment to extend the Customer Experience (CX) expertise to our existing and potential customers," said Vinod Reddy, Founder & CEO, CRMIT Solutions. "Oracle® Validated Integration applies a rigorous technical review and test process," said Kevin O’Brien, senior director, ISV and SaaS Strategy, Oracle®. "Achieving Oracle® Validated Integration through Oracle® PartnerNetwork gives our customers confidence that the CRM++ Asterisk Telephony Connector for Oracle® Sales Cloud has been validated and that the products work together as designed. This helps reduce deployment risk and improves the user experience for our joint customers." CRM++ is a suite of native Customer Experience solutions for Oracle® CRM On Demand, Oracle® Sales Cloud and Oracle® RightNow Cloud Service. With over 3,000 users, the CRM++ framework helps extend the Customer Experience (CX) and the power of Customer Relationship Management features including Email WorkBench, Self Service Portal, Mobile CRM, Social CRM and Computer Telephony Integration. About CRMIT Solutions CRMIT Solutions is a pioneer in delivering SaaS-based customer experience (CX) consulting and solutions. With more than 200 certified customer relationship management (CRM) consultants and more than 175 successful CRM deployments globally, CRMIT Solutions offers a range of CRM++ applications for accelerated deployments including various rapid implementation and migration utilities for Oracle® Sales Cloud, Oracle® CRM On Demand, Oracle® Eloqua, Oracle® Social Relationship Management and Oracle® RightNow Cloud Service. About Oracle Validated Integration Oracle Validated Integration, available through the Oracle PartnerNetwork (OPN), gives customers confidence that the integration of complementary partner software products with Oracle Applications and specific Oracle Fusion Middleware solutions has been validated, and the products work together as designed. This can help customers reduce risk, improve system implementation cycles, and provide for smoother upgrades and simpler maintenance. Oracle Validated Integration applies a rigorous technical process to review partner integrations. Partners who have successfully completed the program are authorized to use the “Oracle Validated Integration” logo. For more information, please visit Oracle.com at http://www.oracle.com/us/partnerships/solutions/index.html.

    Read the article

  • OrbitFX: JavaFX 8 3D & NetBeans Platform in Space!

    - by Geertjan
    Here is a collection of screenshots from a proof of concept tool being developed by Nickolas Sabey and Sean Phillips from a.i. solutions. Before going further, read a great new article here written on java.net by Kevin Farnham, in light of the Duke's Choice Award (DCA) recently received at JavaOne 2013 by the a.i. solutions team. Here's Sean receiving the award on behalf of the a.i. solutions team, surrounded by the DCA selection committee and other officials: They won the DCA for helping facilitate and deploy the 2014 launch of NASA's Magnetospheric Multiscale mission, using JDK 7, the NetBeans Platform, and JavaFX to create the GEONS Ground Support System, helping reduce software development time by approximately 35%. The prototype tool that Nickolas and Sean are now working on uses JavaFX 3D with the NetBeans Platform and is nicknamed OrbitFX. Much of the early development is being done to experiment with different patterns, so accuracy is currently not the goal. For example, you'll notice in the screenshots that the Earth is really close to the Sun, which is obviously not correct. The screenshots are generated using Java 8 build 111, together with NetBeans Platform 7.4. Inspired by various JavaOne demos using JavaFX 3D, Nick began development integrating them into their existing NetBeans Platform infrastructure. The 3D scene showing the Sun and Earth objects is all JavaFX 8 3D, demonstrating the use of Phong Material support, along with multiple light and camera objects. Each JavaFX component extends a JFXPanel type, so that each can easily be added to NetBeans Platform TopComponents. Right-clicking an item in the explorer view offers a context menu that animates and centers the 3D scene on the selected celestial body.  With each JavaFX scene component wrapped in a JFXPanel, the components can easily be integrated into a NetBeans Platform Visual Library scene.  In this case, Nick and Sean are using an instance of their custom Slipstream PinGraphScene, which is an extension of the NetBeans Platform VMDGraphScene. Now, via the NetBeans Platform Visual Library, the OrbitFX celestial body viewer can be used in the same space as a WorldWind viewer, which is provided by a previously developed plugin. "This is a clear demonstration of the power of the NetBeans Platform as an application development framework," says Sean Phillips. "How else could you have so much rich application support placed literally side by side so easily?"
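
    For readers who have not used the JFXPanel bridge mentioned above, here is a minimal, self-contained sketch of embedding a JavaFX 8 3D scene (one Phong-shaded sphere) in a Swing container, the same mechanism a NetBeans Platform TopComponent would use. It illustrates the pattern only and is not OrbitFX code:

    ```java
    import javafx.application.Platform;
    import javafx.embed.swing.JFXPanel;
    import javafx.scene.Group;
    import javafx.scene.PerspectiveCamera;
    import javafx.scene.Scene;
    import javafx.scene.paint.Color;
    import javafx.scene.paint.PhongMaterial;
    import javafx.scene.shape.Sphere;
    import javax.swing.JFrame;
    import javax.swing.SwingUtilities;

    public class CelestialPanelDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("JFXPanel embedding sketch");
                JFXPanel fxPanel = new JFXPanel(); // constructing this also starts the FX runtime
                frame.add(fxPanel);
                frame.setSize(600, 400);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);

                // All scene-graph work must happen on the JavaFX application thread.
                Platform.runLater(() -> {
                    Sphere planet = new Sphere(100);
                    planet.setMaterial(new PhongMaterial(Color.CORNFLOWERBLUE));
                    planet.setTranslateX(300); // center the sphere in the 600x400 scene
                    planet.setTranslateY(200);

                    Scene scene = new Scene(new Group(planet), 600, 400, true);
                    scene.setFill(Color.BLACK);
                    scene.setCamera(new PerspectiveCamera());
                    fxPanel.setScene(scene);
                });
            });
        }
    }
    ```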

    Read the article

  • AdventureWorks 2014 Sample Databases Are Now Available

    - by aspiringgeek
    Where in the World is AdventureWorks? Recently, SQL Community feedback on Twitter prompted me to look in vain for SQL Server 2014 versions of the AdventureWorks sample databases we’ve all grown to know & love. I searched Codeplex, then used the bing & even the google in an effort to locate them, yet all I could find were samples on different sites highlighting specific technologies, an incomplete collection inconsistent with the experience we users had learned to expect.  I began pinging internally & learned that an update to AdventureWorks wasn’t even on the road map.  Fortunately, SQL Marketing manager Luis Daniel Soto Maldonado (t) lent a sympathetic ear & got the update ball rolling; his direct report Darmodi Komo recently announced the release of the shiny new sample databases for OLTP, DW, Tabular, and Multidimensional models to supplement the extant In-Memory OLTP sample DB.  What Success Looks Like In my correspondence with the team, here’s how I defined success:
    1. Sample AdventureWorks DBs hosted on Codeplex showcasing SQL Server 2014’s latest-&-greatest features, including In-Memory OLTP (aka Hekaton), Clustered Columnstore, Online Operations, and Resource Governor IO
    2. Where it makes sense to do so, consolidated DBs (e.g., showcasing Columnstore likely involves a separate DW DB)
    3. Documentation to support experimenting with these features
    As Microsoft Senior SDE Bonnie Feinberg (b) stated, “I think it would be great to see an AdventureWorks for SQL 2014.  It would be super helpful for third-party book authors and trainers.  It also provides a common way to share examples in blog posts and forum discussions, for example.”  Exactly.  We’ve established a rich & robust tradition of sample databases on Codeplex.  This is what our community & our customers expect.  The prompt response achieves what we all aim to do, i.e., manifests the Service Design Engineering mantra of “delighting the customer”.  Kudos to Luis’s team in SQL Server Marketing & Kevin Liu’s team in SQL Server Engineering for doing so. Download AdventureWorks 2014 Download your copies of the SQL Server 2014 AdventureWorks sample databases here.

    Read the article

  • ArchBeat Top 10 for November 18-24, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook page for the week of November 18-24, 2012. One-Stop Shop for over 200 On-Demand Oracle Webcasts Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content. Oracle SOA Suite 11g PS 5 introduces BPEL with conditional correlation for aggregation scenarios | Lucas Jellema An extensive, detailed technical post from Oracle ACE Director Lucas Jellema. Oracle Utilities Application Framework V4.2.0.0.0 Released | Anthony Shorten Principal Product Manager Anthony Shorten shares an overview of the changes implemented in the new release. Fault Handling and Prevention - Part 1 | Guido Schmutz and Ronald van Luttikhuizen In this technical article, part one of a four-part series, Oracle ACE Directors Guido Schmutz and Ronald van Luttikhuizen guide you through an introduction to fault handling in a service-oriented environment using Oracle SOA Suite and Oracle Service Bus. Oracle BPM Process Accelerators and process excellence | Andrew Richards "Process Accelerators are ready-to-deploy solutions based on best practices to simplify process management requirements," says Capgemini's Andrew Richards. "They are considered to be 'product grade,' meaning they have been designed, engineered, documented and tested by Oracle themselves to a level that they can be deployed as-is for a solution to a problem or extended as appropriate for a particular scenario." Videos: Getting Started with Java Embedded | The Java Source Interested in Java Embedded? You'll want to check out these videos provided by Tori Weildt, including interviews with Oracle's James Allen and Kevin Smith, recorded at ARM TechCon. JPA SQL and Fetching tuning (EclipseLink) | Edwin Biemond Oracle ACE Edwin Biemond's post illustrates how to "use the department and employee entity of the HR Oracle demo schema to explain the JPA options you have to control the SQL statements and the JPA relation Fetching." Devoxx 2012 Trip Report - clouds and sunshine | Markus Eisele Oracle ACE Director Markus Eisele shares an extensive and entertaining account of his experience at Devoxx 2012. Towards Ultra-Reusability for ADF - Adaptive Bindings | Duncan Mills "The task flow mechanism embodies one of the key value propositions of the ADF Framework," says Duncan Mills. "However, what if we could do more? How could we make task flows even more re-usable than they are today?" As you might expect, Duncan has answers for those questions. Java Specification Requests in Numbers | Markus Eisele Oracle ACE Director Markus Eisele shares some interesting data culled from the Java Community Process site. Thought for the Day "You can't have great software without a great team, and most software teams behave like dysfunctional families." — Jim McCarthy Source: SoftwareQuotes.com

    Read the article

  • ArchBeat Link-o-Rama for November 16, 2012

    - by Bob Rhubart
    X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil Abhijit Patil's article focuses on how to use X.509 Certificate Revocation Checking Functionality with the OCSP protocol to validate in-bound certificates. Although this article focuses on inbound validation using OCSP, Oracle WebLogic Server 12c also supports outbound OCSP validation. Leveraging Oracle Scorecard and Strategy Management for Everyday BI Needs "Oracle Scorecard and Strategy Management (OSSM) is built upon the premise that a scorecard system should not be separate from the BI system, like many comparable tools are today," says author Kevin McGinley. "Instead of a separate application with its own data, its own data definitions, and its own front-end, Oracle made the choice to integrate OSSM directly into OBIEE." Applying BI for personal productivity recognition and gamification | Capgemini Oracle Blog "It is quite obvious that if you want people to participate you need an appealing and intuitive user interface," says Capgemini's Henk Vermeulen in this interesting exploration of gamification in the enterprise. Build and release OSB projects with Maven | Edwin Biemond "With Maven we are able to build and deploy OSB projects," says Oracle ACE Edwin Biemond. "The artifacts generated by Maven, called snapshots and releases, can be automatically uploaded to a software repository. These versioned OSB jars can then be downloaded by the OSB Servers and deployed." Biemond shows you how in this detailed technical post. ADF Generator for Dynamic ADF BC and ADF UI | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis' post is an extension of his OOW12 presentation, "Oracle ADF Implementations Around the Globe: Best Practices," and includes the sample application he promised to share. Service-oriented organizations have a head start in the cloud race | ZDNet ZDNet SOA blogger Joe McKendrick offers a snapshot of a recent report by Forrester analyst James Staten. Oracle Fusion Middleware Security: X509 Fallback to Form | Debasish Bhattacharya Oracle Fusion Middleware A-Team architect Debasish Bhattacharya shares a solution that resulted from brainstorming with colleagues Chris Johnson and Brian Eidelman. "The solution is not very difficult," says Bhattacharya, "though it needs some additional configurations and coding." It's all presented in this detailed post. Agile Architecture | David Sprott "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post. Thought for the Day "Operating systems are like underwear — nobody really wants to look at them." — Bill Joy Source: SoftwareQuotes.com

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar. Aspect ratio is very important to home video.
    What is aspect ratio? The ratio of the image's width to its height. 2.35:1 means the image is 2.35 times as wide as it is high; Pixar uses this for half of our movies. This is called a widescreen image. When the picture is modified to fit your television screen, they cut it to fit the box of your screen, and when a comparison is made, huge chunks of the picture are missing. It is harder to tell what is going on when those pieces are missing; the whole is greater than the pieces themselves. If you are missing pieces, you are missing the movie. The soul and the mood are in the film shots; by cutting it to fit a screen, you are losing 30% of the movie.
    Why different aspect ratios? Film before the 1950s was 1.33:1, the Academy Standard, although images came in all sorts of aspects; there was no standard. Thomas Edison developed projecting images onto a wall/screen but didn't patent it, as he saw no value in it. Then 1.37:1 came about to add a strip of sound; this is the same size as 35mm film. Around 1952, TV comes along, and NTSC television followed the Academy Standard (4x3). Once TV came out, movie theater attendance plummeted, so film brought forth color to combat this, along with early 3D and widescreen (CinemaScope). Studios at the time made movies bigger and bigger; there was a Napoleon movie that was actually 4x1 ... really wide. Today almost all movies are made in one of two aspect ratios: 1.85:1 (Academy Flat) or 2.35:1 (Anamorphic Scope, aka Panavision/CinemaScope). Pixar has done half in one and half in the other.
    Why choose one over the other? Artist choice: it is part of the story the director wants to tell.
    Can we preserve the story outside of the theaters? TVs before 1998 were very square; now TVs are very wide. Historical options: Toy Story was released as it was, and people cut it in a way that wasn't liked by the studio. Pan and scan is another option: cut, and then scan left or right depending on where the action is. Frame height: Pixar can go back and animate more picture to account for the bottom/top bars, but you end up with more sky and more ground, the characters seem to get lost in the picture, and you lose what the director originally intended. Re-staging: for animated movies, you can move characters around and restage the scene, producing a completely different version of the film. This is the best possible option that Pixar came up with, though they have mostly stopped doing it as the demand has pretty much dropped off.
    Why not 1.33 today? There has been an evolution of taste and demands. VHS was a linear medium; the focus was portability, not quality. Most releases were pan and scan, and the quality was so bad that people didn't notice. DVD was introduced in 1996; you could have more content, including two versions of the film, the widescreen version and the 1.33 version, and people realized that they were seeing more of the movie with widescreen. High-definition televisions (16x9 monitors) were introduced in 2005, and Blu-ray Disc in 2006; this is all widescreen. You cannot find a square TV anymore, and TVs are roughly 1.85:1 aspect ratio. There is a change in demand: users are used to black bars, used to widescreen, and educated now.
    What's next for in-flight entertainment? High-def IFE, personal electronic devices, and 3D in flight.

    Read the article

  • Stuff I learned at Innovate 2011

    - by David Dorf
    After returning from the NRF Innovate 2011 conference, I picked up a few nuggets I thought I'd share here.  These thoughts are a bit random, but I hope they're useful nonetheless. Kevin Kelly opened the conference with six verbs that represent the future.  They were Screening, Interacting, Sharing, Accessing, Flowing, and Generating.  It struck me that these are all ways in which we merge the digital and physical worlds.  The internet of things continues to gain momentum. Some buzzwords:  deal economy, subscription commerce, discovery (instead of search), curation. That last one, curation, came up over and over.  Retailers, especially those in fashion, are finding value in helping their customers organize and present their own collections.  Social media has made sharing such collections easy, and mobile lets them take those ideas into the stores.  Mannequins are becoming less relevant. I heard from both HauteLook and Gilt Groupe (flash sale retailers) that a large percentage of their visits come from mobile devices, and most of those are iOS devices.  I find it interesting that even though Android has passed iPhone in units shipped (and will eventually pass iOS as a whole), it's still the Apple crowd that leads the way. RadioShack mentioned their Holiday Heroes campaign was very successful.  They asked their Foursquare users to check in at a gym, coffee shop, and transportation hub as part of being a hero.  For this feat, customers were awarded a special badge that was worth 20% off at their next store visit. They claim a 3.5x increase in ticket size vs. regular check-in customers, and a 5x increase vs. those that don't check in at all. I also learned of RadioShack's #28 campaign, which is apparently one of the largest Twitter trends ever.  Their partnership with LIVESTRONG has gotten them followers, impressions, and credit for supporting the fight against cancer. The guys at Invodo showed the importance of video to e-commerce.  They gave compelling examples of how video can show customers the value of products better than just words. The highlight of the show was Guy Kawasaki's talk on innovation, which was not only informative but also peppered with humor and personality.  Back in the early days of the internet boom, Guy turned down the CEO position at Yahoo! because the commute was too long.  By his calculation, that was a $2B mistake. There are other good accounts of the conference at the NRF Blog.

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business intelligence to help you meet your big data business intelligence challenges. Register now! Sessions Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships Tom Plunkett, Lead Author of the Oracle Big Data Handbook What does it take to build an award-winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins] Getting Value from Big Data Variety Richard Tomlinson, Director, Product Management, Oracle Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins] How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture Oracle ACE Director Kevin McGinley More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins] Oracle Data Integrator 12c As Your Big Data Data Integration Hub Oracle ACE Director Mark Rittman Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir, and on the techniques and limitations of performing ETL on big data sources. [90 mins] Register now!

    Read the article

  • ArchBeat Link-o-Rama for 10-24-2012

    - by Bob Rhubart
    Play Oracle Vanquisher Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind. Book: DevOps for Developers | The Java Source The subject of DevOps has come up in a couple of recent OTN ArchBeat Podcasts, so it's somewhat serendipitous that Tori Weildt's recent blog post offers an overview of Java Champion Michael Hutterman's new book, DevOps for Developers, now available from Apress. Bring Your Own Device (BYOD): Context is everything… | The ORACLE-BASE Blog BYOD is a factor in the evolution of IT, but in what context? "The real IT work in companies is still being done on PCs," says Oracle ACE Director Tim Hall. "Yes, you can use a cloud service on your phone, but look around the office and you will see those cloud services are actually being used by people on PCs." Oracle in the Cloud: Oracle EBusiness Suite sizing | Tom Laszewski Cloud expert Tom Laszewski shares several technical resources that will be helpful for the sizing of Oracle EBusiness Suite. Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster Author and expert Yuli Vasiliev shows you how to take advantage of multiple Oracle WebLogic Server instances grouped into a cluster to maximize scalability and availability. Webcast: Reduce Costs with Oracle's Database Storage Management Watch this! Join Oracle experts Kevin Jernigan and Margaret Hamburger for an interactive webcast in which you'll learn how Oracle's Database Storage Management can reduce storage costs and management complexity while improving query performance to meet service-level agreements and compliance requirements. Event Date: Tuesday, November 6, 2012 Event Time: 10 a.m. PT/1 p.m. ET Thought for the Day "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves." — Alan Kay Source: softwarequotes.com

    Read the article

  • Concurrency and Coordination Runtime (CCR) Learning Resources

    - by Harry
    I have recently been learning the ins and outs of the Concurrency and Coordination Runtime (CCR). Finding good learning resources for this relatively new technology has been quite difficult. (A quick Google search brings up "Creedence Clearwater Revival" as the top result!) Some of the resources I have found: Free e-book chapter from WROX on the Robotics Developer Studio Good Article/post on InfoQ Robotic's Member blog Very active MSDN CCR Forum - Got plenty of help from here! Great MSDN Magazine article by Jeffrey Richter Official CCR User Guide - Didn't find this very helpful Great blogging series on CCR iodyner CCR Related Blog - Update: Moved to here Eight or so videos on Channel9.msdn.com CCR Patterns page on MS Robotics Studio - I haven't read this yet 4 x CCR questions on Stack Overflow - Most of the questions have been mine! LOL The CCR and DSS toolkit has now been released to MSDN members Do you have any good learning resources for the CCR? I really hope that Microsoft will publish more material; so far it has been too Robotics-specific. I believe that MS needs to acknowledge that most people are using the CCR in isolation from the DSS and Robotics Studio. Update The Mix 2010 conference had a presentation by MySpace about how they have used the CCR framework in their middle tier. They also open-sourced the code base. MySpace DataRelay Mix Video Presentation

    Read the article

  • How to structurally display a multi-dimensional array in PHP?

    - by Jaime Cross
    How can I display the contents of an array as follows:
    Company Name
    - Username1
    - Username2
    Another Company Name
    - Username3
    The array I have created is as follows: $array[1]['company_id'] = '12'; $array[1]['company_name'] = 'ABC Company'; $array[1]['company_type'] = 'default'; $array[1]['user_id'] = '23'; $array[1]['user_name'] = 'Andrew'; $array[2]['company_id'] = '12'; $array[2]['company_name'] = 'ABC Company'; $array[2]['company_type'] = 'default'; $array[2]['user_id'] = '27'; $array[2]['user_name'] = 'Jeffrey'; $array[3]['company_id'] = '1'; $array[3]['company_name'] = 'Some Company'; $array[3]['company_type'] = 'default'; $array[3]['user_id'] = '29'; $array[3]['user_name'] = 'William'; $array[4]['company_id'] = '51'; $array[4]['company_name'] = 'My Company'; $array[4]['company_type'] = 'default'; $array[4]['user_id'] = '20'; $array[4]['user_name'] = 'Jaime';
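
    The usual approach is one pass over the rows, grouping user names under their company. Here is that grouping logic sketched in Java, to keep one language across the examples on this page; a PHP version would build an associative array keyed by company_name in exactly the same way:

    ```java
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class GroupByCompany {
        public static void main(String[] args) {
            // Each row mirrors one $array[i] entry: {company_name, user_name}.
            String[][] rows = {
                {"ABC Company", "Andrew"},
                {"ABC Company", "Jeffrey"},
                {"Some Company", "William"},
                {"My Company", "Jaime"},
            };

            // LinkedHashMap keeps companies in first-seen order.
            Map<String, List<String>> byCompany = new LinkedHashMap<>();
            for (String[] row : rows) {
                byCompany.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
            }

            for (Map.Entry<String, List<String>> company : byCompany.entrySet()) {
                System.out.println(company.getKey());
                for (String user : company.getValue()) {
                    System.out.println("- " + user);
                }
            }
        }
    }
    ```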

    Read the article

  • Custom view transition in OpenGL ES

    - by melfar
    I'm trying to create a custom transition, to serve as a replacement for a default transition you would get here, for example: [self.navigationController pushViewController:someController animated:YES]; I have prepared an OpenGL-based view that performs an effect on some static texture mapped to a plane (let's say it's a copy of the flip effect in Core Animation). What I don't know how to do is: 1) grab the current view content and make a texture out of it (I remember seeing a function that does just that, but can't find it); 2) do the same for the view that is currently offscreen and is going to replace the current view; 3) find out whether there are some APIs I can hook into in order to make my transition class as native as possible (make it a kind of Core Animation effect). Any thoughts or links are greatly appreciated! UPDATE Jeffrey Forbes's answer works great as a solution to capture the content of a view. What I haven't figured out yet is how to capture the content of the view I want to transition to, which should be invisible until the transition is done. Also, which method should I use to present the OpenGL view? For demonstration purposes I used pushViewController. That affects the navbar, though, which I actually want to go one item back, with animation - check this vid for an explanation: http://vimeo.com/4649397. Another option would be to go with presentViewController, but that shows fullscreen. Do you think maybe creating another window (or view?) could be useful?

    Read the article

  • Should try...catch go inside or outside a loop?

    - by mmyers
    I have a loop that looks something like this: for (int i = 0; i < max; i++) { String myString = ...; float myNum = Float.parseFloat(myString); myFloats[i] = myNum; } This is the main content of a method whose sole purpose is to return the array of floats. I want this method to return null if there is an error, so I put the loop inside a try...catch block, like this: try { for (int i = 0; i < max; i++) { String myString = ...; float myNum = Float.parseFloat(myString); myFloats[i] = myNum; } } catch (NumberFormatException ex) { return null; } But then I also thought of putting the try...catch block inside the loop, like this: for (int i = 0; i < max; i++) { String myString = ...; float myNum; try { myNum = Float.parseFloat(myString); } catch (NumberFormatException ex) { return null; } myFloats[i] = myNum; } So my question is: is there any reason, performance or otherwise, to prefer one over the other? EDIT: The consensus seems to be that it is cleaner to put the loop inside the try/catch, possibly inside its own method. However, there is still debate on which is faster. Can someone test this and come back with a unified answer? (EDIT: did it myself, but voted up Jeffrey and Ray's answers)
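
    A sketch of the consensus refactor: give the loop its own small method with the try wrapped around it, so the failure path is one early return (the names here are illustrative):

    ```java
    // Parses every string into a float; returns null if any entry is malformed.
    static float[] parseAll(String[] inputs) {
        float[] result = new float[inputs.length];
        try {
            for (int i = 0; i < inputs.length; i++) {
                result[i] = Float.parseFloat(inputs[i]);
            }
        } catch (NumberFormatException ex) {
            return null; // one bad value invalidates the whole array
        }
        return result;
    }
    ```

    On the performance point: on modern JVMs, entering a try block costs essentially nothing; the price is paid only when an exception is actually thrown, which is why the two variants benchmark nearly identically in the non-failing case.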

    Read the article

  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL servers, or at least testing your specific workload before enabling it on your servers. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom on this issue? Does this advice change at all with SQL 2005 64-bit, or SQL 2008, or Windows Server 2008? Ideally, this should be tested in advance in a staging environment, but what about for servers that have already made it into production with HT enabled? How can I tell if performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance? Edit: This is especially attractive because of the potential for an across-the-board improvement for some of my high-CPU servers, but the client is going to want to see something concrete that helps me identify which servers really could benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes any little bit helps.

    Read the article

  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
    If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week – it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, do anything – we also read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on. Skills/Equipment/Manpower We Possessed I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a county for more than 20 years – so lots and lots of practical experience on hand. We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water and cold beer. Why We Did This Our deck was relatively young – it was built in 2005. However, the pressure treated boards must not have been adequately maintained before we bought the house. I had powerwashed the deck every other year and had it stained a few times. The boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance. More bad deck boards The deck boards were in bad shape Things We Learned The two most important things: The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official TREX YouTube video helped immensely, and we should have watched that first. When pre-drilling holes for the boards that need to be screwed down, DO NOT pre-drill through the underlying framing wood; ONLY pre-drill through the TREX itself. Otherwise the screw won’t seat in the board properly: instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the place that sold me the screws to find this out. So about a third of our screws look like crap. If it doesn’t look or feel right – stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. The Process How much time did it take? Well I spent about 8 hours taking the deck apart. And then the 3 of us spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later. Now for some pictures… This 6”+ pry bar made the destruction of the old deck much easier Most of the joists, once exposed, were OK. This joist wasn’t sitting on ANYTHING before. We think a lazy gas person cut the board to sneak a gas line in.
    Awesome… These monster lag bolts had to be accounted for when putting in the additional framing. The border pattern Sheri wanted to put in required a lot more framing. These were the first boards to go down – we screwed them in as there was no way to attach clips. I sat, kicked in the boards, and then drilled these clips in – but my wife was able to go MUCH faster by using her hands to lock the boards in and drill on her knees. I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight. The first board took FOREVER to go in, but then when we got rolling, we were able to put in a 20′ board in less than 10 minutes. This was the end of construction day #2 – we got much further than we thought we would. Ah, the dreaded last 10% – what to do here? Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too. My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed with the proper clearance and all that jazz (a worked example of that arithmetic follows below). The stairs with stringers and toe kicks – this was worth the effort. It started raining on us as I screwed down the steps – but we managed to get our shade tent up on the deck to protect us from the rain too. The stairs, finished Finished, mostly Good corner shot The top of the stairs Stairs, looking down Celebratory beer In Summary There are a few things we’re not happy with. I think we can fix them up – but later. I have a few things left to finish: rewire the lighting, get the gas grille put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would have never done it myself. But I’m glad that I did have that help and did do that project. It’s not often you get to spend that kind of quality time with family and building cool stuff.
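
    For the curious, the stringer layout is straightforward arithmetic once you pick a maximum riser height. A small sketch with invented numbers (the post doesn't give the deck's actual rise):

    ```java
    public class StringerMath {
        public static void main(String[] args) {
            double totalRise = 30.0;   // inches from deck surface to landing (assumed)
            double maxRiser  = 7.75;   // code-style maximum riser height (assumed)

            int risers = (int) Math.ceil(totalRise / maxRiser); // 4 risers
            double riserHeight = totalRise / risers;            // 7.5 in each, all equal
            double treadRun = 10.5;                             // chosen tread depth
            double totalRun = treadRun * (risers - 1);          // 31.5 in; the deck acts as the top tread

            System.out.printf("%d risers of %.2f in, total run %.1f in%n",
                    risers, riserHeight, totalRun);
        }
    }
    ```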

    Read the article

  • XenServer 5.5 running WHS... trying to add a local or network printer/scanner/copier

    - by ProstheticHead
    Hi guys. Just wondering if anyone has prior experience in sharing a multifunction printer across a (mostly Windows) network? My situation at the minute is complicated. I DID have the printer attached directly to a Windows Home Server box and was able to scan to a share and print across the network from my other computers. Compatibility problems have forced me to virtualize WHS on XenServer 5.5... This is actually quite useful because I can now run other things on the same box, but the problem is that now WHS doesn't get direct hardware access, so it doesn't see a USB-attached PSC... Grrrr! So now I have a choice to make. I've read somewhere that I can buy an add-in PCI USB controller and somehow set up a passthrough to one VM at a time. To me, this sounds complicated, but if it's likely to work reliably I'd prefer this method. I've read about another approach, which I'm not sure about either, but I guess sounds plausible: a network USB server (NOT a print server) that can somehow make a USB port accessible across a network. My worry here is that it likely needs some kind of third-party software to work... so not ideal. If there are any other methods you can suggest I'd be happy to hear them... I need your help guys. I'm also in the market for a PCI Express SATA controller, nothing flashy, just need up to 8 ports, JBOD and 100% compatibility. Any suggestions? Regards Kevin

    Read the article

  • Is 2 GB of RAM better than 2.5 GB?

    - by pibboater
    My laptop has two slots for RAM, and currently has two 512 MB chips, for 1 GB. Windows XP is running terribly slow on it, so I want to upgrade the RAM. I could buy two 1 GB chips to replace both of the current 512 MB chips, to give me 2 GB of RAM. Or, the price is the same to buy one 2 GB chip, to replace just one of the 512 MB chips, and give me 2.5 GB total. The RAM it takes is PC2-4200 533MHz DDR2. What do you think would be better: buying two 1 GB chips so it can take advantage of dual-channel operation, or buying one 2 GB chip to end up with more total RAM but not dual-channel operation? Like I said, price is the same, so performance is the only consideration. I'm not doing anything especially intensive like video or photo editing -- just having multiple Office programs open, playing music, browsers, etc., but currently even opening the first application takes forever. If it matters, the laptop is a Toshiba Qosmio G25-AV513 running Windows XP Media Center SP3. Thanks! Kevin

    Read the article

  • Is 1GB + 1GB RAM better than 2GB + 0.5GB?

    - by pibboater
    My laptop has two slots for RAM, and currently has two 512 MB chips, for 1 GB. Windows XP is running terribly slow on it, so I want to upgrade the RAM. I could buy two 1 GB chips to replace both of the current 512 MB chips, to give me 2 GB of RAM. Or, the price is the same to buy one 2 GB chip, to replace just one of the 512 MB chips, and give me 2.5 GB total. The RAM it takes is PC2-4200 533MHz DDR2. What do you think would be better: buying two 1 GB chips so it can take advantage of dual-channel operation, or buying one 2 GB chip to end up with more total RAM but not dual-channel operation? Like I said, price is the same, so performance is the only consideration. I'm not doing anything especially intensive like video or photo editing -- just having multiple Office programs open, playing music, browsers, etc., but currently even opening the first application takes forever. If it matters, the laptop is a Toshiba Qosmio G25-AV513 running Windows XP Media Center SP3. Thanks! Kevin

    Read the article

  • How do I get .NET to garbage collect aggressively?

    - by mmr
    I have an application that is used in image processing, and I find myself typically allocating arrays in the 4000x4000 ushort size, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out of memory error. 32 MB is not a huge declaration, but if .NET is fragmenting memory, then it's very possible that such large contiguous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there's the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling DLL routines that use native code a lot, but I'm not sure. I've gone over that C++ code, and made sure that any memory I declare I delete, but still I get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call-- is that possible? If so, can I turn that functionality off? EDIT: Here is some very specific code that will cause the crash. According to this SO question, I do not need to be disposing of the BitmapSource objects here. Here is the naive version, with no GC.Collects in it. It generally crashes on iteration 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the BitmapSource because of the limitations I explained in my answer to @dthorpe below as well as the requirements listed in this SO question. public partial class Window1 : Window { public Window1() { InitializeComponent(); //Attempts to create an OOM crash //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops int theRows = 4000, currRows; int theColumns = 4000, currCols; int theMaxChange = 30; int i; List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack byte[] displayBuffer = null;//the buffer used as a bitmap source BitmapSource theSource = null; for (i = 0; i < theMaxChange; i++) { currRows = theRows - i; currCols = theColumns - i; theList.Add(new ushort[(theRows - i) * (theColumns - i)]); displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create(currCols, currRows, 96, 96, PixelFormats.Gray8, null, displayBuffer, (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8); System.Console.WriteLine("Got to change " + i.ToString()); System.Threading.Thread.Sleep(100); } //should get here. If not, then theMaxChange is too large. //Now, go back up the undo stack. for (i = theMaxChange - 1; i >= 0; i--) { displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96, PixelFormats.Gray8, null, displayBuffer, ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8); System.Console.WriteLine("Got to undo change " + i.ToString()); System.Threading.Thread.Sleep(100); } } } Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash.
    For me, this tends to happen around x = 50 or so: public partial class Window1 : Window { public Window1() { InitializeComponent(); //Attempts to create an OOM crash //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops for (int x = 0; x < 1000; x++){ int theRows = 4000, currRows; int theColumns = 4000, currCols; int theMaxChange = 30; int i; List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack byte[] displayBuffer = null;//the buffer used as a bitmap source BitmapSource theSource = null; for (i = 0; i < theMaxChange; i++) { currRows = theRows - i; currCols = theColumns - i; theList.Add(new ushort[(theRows - i) * (theColumns - i)]); displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create(currCols, currRows, 96, 96, PixelFormats.Gray8, null, displayBuffer, (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8); } //should get here. If not, then theMaxChange is too large. //Now, go back up the undo stack. for (i = theMaxChange - 1; i >= 0; i--) { displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96, PixelFormats.Gray8, null, displayBuffer, ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8); GC.WaitForPendingFinalizers();//force gc to collect, because we're in scenario 2, lots of large random changes GC.Collect(); } System.Console.WriteLine("Got to changelist " + x.ToString()); System.Threading.Thread.Sleep(100); } } } If I'm mishandling memory in either scenario, or if there's something I should spot with a profiler, let me know. That's a pretty simple routine there. Unfortunately, it looks like @Kevin's answer is right-- this is a bug in .NET and how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could PowerPoint be rewritten in .NET with this kind of limitation, or any of the other Office suite applications? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that uses so-called 'large' allocations frequently would become unstable within a matter of days to weeks when using .NET. EDIT: It looks like Kevin is right, this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, depending on creation time (moved from gen0 to gen1 to gen2, etc). Objects larger than 85k get placed on the LOH. The LOH is never compacted, so eventually allocations of the type I'm doing will cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 does help the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier-- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java does not exhibit this behavior with its GC.

    Read the article

  • Video games, content strategy, and failure - oh my.

    - by Roger Hart
    Last night was the CS London group's event Content Strategy, Manhattan Style. Yes, it's a terrible title, feeling like a self-conscious grasp for chic, sadly commensurate with the venue. Fortunately, this was not commensurate with the event itself, which was lively, relevant, and engaging. Although mostly if you're a consultant. This is a strong strain in current content strategy discourse, and I think we're going to see it remedied quite soon. Not least in Paris on Friday. A lot of the bloggers, speakers, and commentators in the sphere are consultants, or part of agencies and other consulting organisations. A lot of the talk is about how you sell content strategy to your clients. This is completely acceptable. Of course it is. And it's actually useful if that's something you regularly have to do. To an extent, it's even portable to those of us who have to sell content strategy within an organisation. We're still competing for credibility and resource. What we're doing less is living in the beginning of a project. This was touched on by Jeffrey MacIntyre (albeit in a your-clients kind of a way) who described "the day two problem". Companies, he suggested, build websites for launch day, and forget about the need for them to be ongoing entities. Consultants, agencies, or even internal folks on short projects will live through Day Two quite often: the trainwreck moment where somebody realises that even if the content is right (which it often isn't), and on time (which it often isn't), it'll be redundant, outdated, or inaccurate by the end of the week/month/fickle social media attention cycle. The thing about living through a lot of Day Two is that you see a lot of failure. Nothing succeeds like failure? Failure is good. When it's structured right, it's an awesome tool for learning - that's kind of how video games work. I'm chewing over a whole blog post about this, but basically in game-like learning, you try, fail, go round the loop again. Success eventually yields joy. It's a relatively well-known phenomenon. It works best when that failing step is acutely felt, but extremely inexpensive. Dying in Portal is highly frustrating and surprisingly characterful, but the save-points are well designed and the reload unintrusive. The barrier to re-entry into the loop is very low, as is the cost of your failure out in meatspace. So it's easy (and fun) to learn. Yeah, spot the difference with business failure. As an external content strategist, you get to rock up with a big old folder full of other companies' Day Two (and ongoing day two hundred) failures. You can't send the client round the learning loop - although you may well be there because they've been round it once - but you can show other people's round trip. It's not as compelling, but it's not bad. What about internal content strategists? We can still point to things that are wrong, and there are some very compelling tools at our disposal - content inventories, user testing, and analytics, for instance. But if we're picking up big organically sprawling legacy content, Day Two may well be a distant memory, and the felt experience of web content failure is unlikely to be immediate to many people in the organisation. What to do? My hunch here is that the first task is to create something immediate and felt, but that it probably needs to be a success. Something quickly doable and visible - a content problem solved with a measurable business result. Now, that's a tall order; but scrape off the "quickly" and it's the whole reason we're here.
At Red Gate, I've started with the text book fear and passion introduction to content strategy. In fact, I just typo'd that as "contempt strategy", and it isn't a bad description. Yelling "look at this, our website is rubbish!" gets you the initial attention, but it doesn't make you many friends. And if you don't produce something pretty sharp-ish, it's easy to lose the momentum you built up for change. The first thing I've done - after the visual content inventory - is to delete a bunch of stuff. About 70% of the SQL Compare web content has gone, in fact. This is a really, really cheap operation. It's visible, and it's powerful. It's cheap because you don't have to create any new content. It's not free, however, because you do have to validate your deletions. This means analytics, actually reading that content, and talking to people whose business purposes that content has to serve. If nobody outside the company uses it, and nobody inside the company thinks they ought to, that's a no-brainer for the delete list. The payoff here is twofold. There's the nebulous hard-to-illustrate "bad content does user experience and brand damage" argument; and there's the "nobody has to spend time (money) maintaining this now" argument. One or both are easily felt, and the second at least should be measurable. But that's just one approach, and I'd be interested to hear from any other internal content strategy folks about how they get buy-in, maintain momentum, and generally get things done.

    Read the article
