Search Results

Search found 5245 results on 210 pages for 'william hand'.


  • Uninstall, Disable, or Remove Windows 7 Media Center

    - by Mysticgeek
    Although Windows 7 Media Center has improved a lot over previous versions of Windows, you might want to disable it for different reasons. Here we take a look at a couple of methods to get rid of it. There are a variety of reasons you might want to disable Windows 7 Media Center. Maybe you own a business and don’t want it to run on the machines. Or perhaps you don’t use it at all and just don’t want it around. Turn Off WMC Using Programs and Features Probably the easiest way to get rid of it on all versions of Windows 7 is to open Control Panel and select Programs and Features. This method is similar to disabling Internet Explorer 8 in Windows 7. On the left-hand panel click on Turn Windows Features on or off. Scroll down to Media Features and expand the folder. Then uncheck Windows Media Center… You’ll get a verification message making sure you want to disable it; click Yes. Then the box next to Windows Media Center will be empty…click OK. Wait while WMC is disabled… To complete the process a reboot is required. After getting back from the restart, the WMC icon will be gone and there won’t be any way to launch it. Re-enable WMC If you want to re-enable it, just go back in and recheck it. Again you’ll need to wait while it’s configured, but when it’s done, a restart is not required. Disable Media Center Using Group Policy Note: This process uses Group Policy Editor, which is not available in Home versions of Windows 7. Click on the Start menu, type gpedit.msc into the Search box and hit Enter. Now navigate to User Configuration \ Administrative Templates \ Windows Components \ Windows Media Center. Double-click on Do not allow Windows Media Center to run. Then select the radio button next to Enabled, click OK and close out of Group Policy Editor. Now if a user tries to launch WMC they will get the following message. Conclusion If you’re not a fan of Windows Media Center or want to disable it for whatever reason, the process is simple and there are a couple of ways you can do it. WMC is not included in Starter or Home Basic versions of Windows 7. If you’re new to Windows 7 Media Center, you might want to check out our guide on getting started and setting up live TV.

    Read the article

  • JDeveloper 11g R1 (11.1.1.4.0) - New Features on ADF Desktop Integration Explained

    - by juan.ruiz
    One of the areas that introduced many new features in the latest release (11.1.1.4.0) of JDeveloper 11g R1 is ADF Desktop Integration - in this article I’ll provide an overview of these new features. New ADF Desktop Integration Ribbon in Excel - After installing the ADF Desktop Integration add-in, and depending on the mode in which you open the desktop integration workbook, the ADF Desktop Integration ribbons for design time and runtime are displayed as a separate tab within Excel. In previous versions the ADF Desktop Integration environment was placed inside the Add-Ins tab. Above you can see both the design time ribbon and the runtime ribbon. On the design time ribbon you can manage the workbook and worksheet properties, worksheet component properties, diagnostics, and execution and publication of the workbook. The runtime version of the ribbon is totally customizable and represents what used to be the runtime menu on the spreadsheet; in this ribbon you can include all the operations and actions that can be executed by the end user while working with the spreadsheet data. Diagnostics - A very important aspect for developers is how to debug or verify the interactions of the client with the server, and for that ADF Desktop Integration has provided a series of diagnostics tools since day one. In this release the diagnostics tools are more visible and are really easy to configure. You can access the client console while testing the workbook, or you can simply dump all the messages to a log file – with the ability to set the output level for both. Security - There are a number of enhancements on security, but the one with the most impact for developers is that security is now optional when using ADF Desktop Integration. Until this version, every time you wanted to work with ADFdi the application had to be secured first. In this release security is optional, which means that if you have previously defined security on your application, then you must secure the ADFdi servlet as explained in one of my previous (ADD LINK) posts. On the other hand, if by the time you start working with ADFdi you have not defined security, you can test and publish your workbooks without adding security. Support for Continuous Integration - In this release we have added tooling for continuous integration builds. In the ADF Desktop Integration space, the concept translates to adding functionality that developers can use to publish ADFdi workbooks as part of their entire application build. For that purpose, we have a publish tool that can easily be invoked from an ANT task, such that all the design time workbooks are re-published into the latest version of the application build process. Key Column - At runtime, on any worksheet containing editable tables you will notice a new additional column called the key column. The purpose of this column is to make the end user aware that all rows on the table need to be selected at the time of sorting. The users cannot alter the value of this column. From the developer's point of view there are no steps required in order to have the key column included in the worksheets. Installation and Creation of New Workbooks - Both use cases can now be executed directly from JDeveloper. As part of the Tools menu options the developer can install the ADF Desktop Integration designer. Also, creating new workbooks, which previously was done through the convert tool shipped with JDeveloper, is now done automatically from the New Gallery.
Creating a new ADFdi workbook adds metadata information to the Excel workbook so you can work in design time. Other Enhancements - Support for Excel 2010, and ADF components with read-only enabled no longer allow their value to be changed – the cell in Excel is automatically protected, which could cause confusion among customers of previous releases.

    Read the article

  • SQLAuthority News – Getting Ready to Learn SQL Server

    - by pinaldave
    If you have read my earlier blog post you must be aware of how I am always eager to learn new things. I have signed up for a three-day learning course at Koenig Solutions for End to End SQL Server Business Intelligence. You may wonder why I signed up for a course on SQL Server when it seems that I know a lot about it. Well, the belief that I know a lot is incorrect. I think there are plenty of things which I have been dreaming of learning. Why am I learning SQL Server? First of all – I do not know everything, and second, it is always a good idea to learn more. No matter how old we get or how much we think we know – there are always details which we can learn and a few concepts we can refresh. Learning is a never-ending process philosophically, but it is true in reality as well. SQL Server 2012 was released earlier this year and there are plenty of enhancements in it. Recently I was going over the list of all the new features and enhancements and I realized that there are a few things about SQL Server 2008 R2 I never got a chance to get hands-on experience with, and we have already entered the era of SQL Server 2012. I feel a bit bad about it and I decided to make it a priority for me to learn all the missing experiences. Quick Action – Registration The very same day I called up my friend who owns Koenig Solutions and expressed my concern and requested his help. During my early career when I was a SQL Server Trainer, we had some good synergy between us, and now they are a very successful offshore training company with physical locations in Delhi, Goa, Dehradun, Shimla and Bangalore. I quickly visited their Bangalore center and paid my fees for the SQL Server Business Intelligence course. The very next second I got a call from my friend suggesting that I take this course in Delhi instead of Bangalore. As per him, I should travel to Delhi and learn the course the way other students are learning it – “Away from Home”. This made sense, as I stay in Bangalore and if I return home after a long day of learning, I will not be able to practice for the next day as there will be the “sweet distraction” of the family. Well, I opted for Delhi. What the Registration Fees Included I learned from the registration process that the following were included in the fees. 3 meals every day (hearty breakfast, lunch from premium restaurants and home-cooked-style dinner) Airport pick up and drop Hotel Stay Internet at hotel and at learning institute Unlimited coffee and snacks at learning institute Printed Learning Material Certification Fees (if applicable) Learning material … And of course classroom training I thought the registration process was over when I paid the fees. Well, I was in for a very nice surprise. Registration Experience – Bliss! Within a few hours I received emails from the Center Manager in Delhi with all the necessary details I needed to know about my learning experience. The email contained the following information in detail and it blew me away. Details of the pick up from the airport – driver information Details of Delhi and important information List of all the important people and emergency contact details Internet connection details Details of the trainer and all the training details, and lots of other relevant information Well, so far everything looks great. Tomorrow I will reach Delhi and I will share how things go. Any suggestions for things to do in Delhi? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure its performance or view the SSRS execution log or catalog in detail.  Here are a few simple queries that will give you insight into the system that you never had before.   ACTIVE REPORTS:  Have you ever seen your SQL Server performance take a nose dive due to a long-running report?  If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server.  Running this query will show you which reports are executing at a given time, and WHO is executing them.   USE ReportServerNative SELECT runningjobs.computername, runningjobs.requestname, runningjobs.startdate, users.username, Datediff(s, runningjobs.startdate, Getdate()) / 60 AS 'Active Minutes' FROM runningjobs INNER JOIN users ON runningjobs.userid = users.userid ORDER BY runningjobs.startdate   SSRS CATALOG:  We have all asked “What was the last thing that changed”, or better yet, “Who in the world did that!”.  Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by whom.   USE ReportServerNative SELECT DISTINCT catalog.PATH, catalog.name, users.username AS [Created By], catalog.creationdate, users_1.username AS [Modified By], catalog.modifieddate FROM catalog INNER JOIN users ON catalog.createdbyid = users.userid INNER JOIN users AS users_1 ON catalog.modifiedbyid = users_1.userid INNER JOIN executionlogstorage ON catalog.itemid = executionlogstorage.reportid WHERE ( catalog.name <> '' )   SSRS EXECUTION LOG:  Sometimes we need to know what was happening on the SSRS report server at a given time in the past.  This query will help you do just that.  You will need to set the timestart and timeend in the WHERE clause to suit your needs.   USE ReportServerNative SELECT catalog.name AS report, e.username AS [User], e.timestart, e.timeend, Datediff(mi, e.timestart, e.timeend) AS 'Time In Minutes', catalog.modifieddate AS [Report Last Modified], users.username FROM catalog (nolock) INNER JOIN executionlogstorage e (nolock) ON catalog.itemid = e.reportid INNER JOIN users (nolock) ON catalog.modifiedbyid = users.userid WHERE e.timestart >= Dateadd(s, -1, '03/31/2012') AND e.timeend <= Dateadd(DAY, 1, '04/02/2012')   LONG RUNNING REPORTS:  This query will show the longest running reports over a given time period.  Note that the “>5” in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set.  Adjust the threshold and start/end times to your liking.  With this information in hand, you can better optimize your system by tweaking the longest running reports first.
USE ReportServerNative SELECT e.instancename, catalog.PATH, catalog.name, e.username, e.timestart, e.timeend, Datediff(mi, e.timestart, e.timeend) AS 'Minutes', e.timedataretrieval, e.timeprocessing, e.timerendering, e.[RowCount], users_1.username AS createdby, CONVERT(VARCHAR(10), catalog.creationdate, 101) AS 'Creation Date', users.username AS modifiedby, CONVERT(VARCHAR(10), catalog.modifieddate, 101) AS 'Modified Date' FROM executionlogstorage e INNER JOIN catalog ON e.reportid = catalog.itemid INNER JOIN users ON catalog.modifiedbyid = users.userid INNER JOIN users AS users_1 ON catalog.createdbyid = users_1.userid WHERE ( e.timestart > '03/31/2012' ) AND ( e.timestart <= '04/02/2012' ) AND Datediff(mi, e.timestart, e.timeend) > 5 AND catalog.name <> '' ORDER BY [Minutes] DESC   I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings.  I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own.  For instance, you may want a query to determine which reports are using which shared data sources.  Work smarter, not harder!

    Read the article

  • AppKata - Enter the next level of programming exercises

    - by Ralf Westphal
    Doing CodeKatas is all the rage lately. That’s great, since widely accepted exercises are important to further the art. They provide a means of communication across platforms and allow you to compare results, which is part of any deliberate practice. But CodeKatas suffer from their size. They are intentionally small, so they can be done again and again. Repetition helps to build habit and to dig deeper. Over time, ever new nuances of the problem or of one’s approach become visible. On the other hand, though, their small size limits the methods, techniques, and technologies that can be applied. To improve your TDD skills, doing CodeKatas might be enough. But what about other skills? Developing software in a team, designing larger pieces of software, iteratively releasing software… all this and more is kinda hard to train using the tiny CodeKata problems. That’s why I’d like to present here another kind of kata I call Application Kata (or just AppKata). AppKatas are larger programming problems. They require the development of “whole” applications, i.e. not just one class or method, but bunches of classes accessible through a user interface. Also, AppKata problems are always split into iterations. To get the most out of them, just look at the requirements of one iteration at a time. This way you’re closer to reality, where requirements evolve in unexpected ways. So if you’re looking for more of a challenge for your software development skills, check out these AppKatas – or invent your own. AppKatas are platform independent like CodeKatas. Use whatever programming language and IDE you like. Also use whatever approach to software development you like. Just be sensitive to how easy it is to evolve your code across iterations. Reflect on what went well and what didn’t. Compare your solutions with others. Or – for even more challenge – go for the “Coding Carousel” (see below). CSV Viewer An application to view CSV files. Sounds easy, but watch out! Requirements sometimes drastically change if the customer is happy with what you delivered. Iteration 1 Iteration 2 Iteration 3 Iteration 4 Iteration 5 (to come) Questionnaire If you like GUI programming, this AppKata might be for you. It’s about an app to let people fill out questionnaires. This problem might also be interesting for you if you’re into DDD. Iteration 1 Iteration 2 (to come) Iteration 3 (to come) Iteration 4 (to come) Tic Tac Toe For developers who like game programming. Although Tic Tac Toe is a trivial game, this AppKata poses some interesting infrastructure challenges. The GUI, however, stays simple; leave any 3D ambitions at home ;-) Iteration 1 Iteration 2 (to come) Iteration 3 (to come) Iteration 4 (to come) Iteration 5 (to come) Coding Carousel There are many ways you can do AppKatas. Work on them alone or in a team, pitch several devs against each other in an AppKata contest – or go around in a Coding Carousel. For the Coding Carousel you need at least 3 dev teams (regardless of size). All teams work on the same iteration at the same time. But here’s the trick: After each iteration the teams swap their code. Whatever they did for iteration n will be the basis for changes another team has to apply in iteration n+1. The code goes around the teams like in a carousel. I promise you, that’s gonna be fun! :-)

    Read the article

  • What Counts For a DBA: Imagination

    - by drsql
    "Imagination…One little spark, of inspiration… is at the heart, of all creation." – From the song "One Little Spark", by the Sherman Brothers I have a confession to make. Despite my great enthusiasm for databases and programming, it occurs to me that every database system I've ever worked on has been, in terms of its inputs and outputs, downright dull. Most have been glorified e-spreadsheets, many replacing manual systems built on actual spreadsheets. I've created a lot of database-driven software whose main job was to "count stuff"; phone calls, web visitors, payments, donations, pieces of equipment and so on. Sometimes, instead of counting stuff, the database recorded values from other stuff, such as data from sensors or networking devices. Yee hah! So how do we, as DBAs, maintain high standards and high spirits when we realize that so much of our work would fail to raise the pulse of even the most easily excitable soul? The answer lies in our imagination. To understand what I mean by this, consider a role that, in terms of its output, offers an extreme counterpoint to that of the DBA: the Disney Imagineer. Their job is to design Disney's Theme Parks, of which I'm a huge fan. To me this has always seemed like a fascinating and exciting job. What must an Imagineer do, every day, to inspire the feats of creativity that are so clearly evident in those spectacular rides and shows? Here, if ever there was one, is a role where "dull moments" must be rare indeed, surely? I wanted to find out, and so parted with a considerable sum of money for my wife and I to have lunch with one; I reasoned that if I found one small way to apply their secrets to my own career, it would be money well spent. Early in the conversation with our Imagineer (Cindy Cote), the job did indeed sound magical. However, as talk turned to management meetings, budget-wrangling and insane deadlines, I came to the strange realization that, in fact, her job was a lot more like mine than I would ever have guessed. Much like databases, all those spectacular Disney rides bring with them a vast array of complex plumbing, lighting, safety features, and all manner of other "boring bits", kept well out of sight of the end user, but vital for creating the desired experience; and, of course, it is these "boring bits" that take up much of the Imagineer's time. Naturally, there is still a vital part of their job that is spent testing out new ideas, putting themselves in the place of a park visitor, from a 9-year-old boy to a 90-year-old grandmother, and trying to imagine what experiences they'd like to have. It is these small, but vital, sparks of imagination and creativity that have the biggest impact. The real feat of a successful Imagineer is clearly to never to lose sight of this fact, in among all the rote tasks. It is the same for a DBA. Not matter how seemingly dull is the task at hand, try to put yourself in the shoes of the end user, and imagine how your input will affect the experience he or she will have with the database you're building, and how that may affect the world beyond the bits stored in your database. Then, despite the inevitable rush to be "done", find time to go the extra mile and hone the design so that it delivers something as close to that imagined experience as you can get. OK, our output still can't and won't reach the same spectacular heights as the "Journey into The Imagination" ride at EPCOT Theme Park in Orlando, where I first heard "One Little Spark". 
However, our imaginative sparks and efforts can, and will, make a difference to the user who now feels slightly more at home with a database application, or to the manager holding a report presented with enough clarity to drive an interesting decision or two. They are small victories, but worth having, and appreciated, or at least that's how I imagine it.

    Read the article

  • Normalisation and 'Anima notitia copia' (Soul of the Database)

    - by Phil Factor
    (A Guest Editorial for Simple-Talk) The other day, I was staring at the sys.syslanguages table in SQL Server with slightly-raised eyebrows. I’d just been reading Chris Date’s interesting book ‘SQL and Relational Theory’. He’d made the point that you’re not necessarily doing relational database operations by using a SQL Database product. The same general point was recently made by Dino Esposito about ASP.NET MVC. The use of ASP.NET MVC doesn’t guarantee you a good application design: it merely makes it possible to test it. The way I’d describe the sentiment in both cases is ‘you can hit someone over the head with a frying-pan but you can’t call it cooking’. SQL enables you to create relational databases. However, even if it smells bad, it is no crime to do hideously un-relational things with a SQL Database just so long as it’s necessary and you can tell the difference; not only that, but also only if you’re aware of the risks and implications. Naturally, I’ve never knowingly created a database that Codd would have frowned at, but around the edges are interfaces and data feeds I’ve written that have caused hissy fits amongst the Normalisation fundamentalists. Part of the problem for those who agonise about such things is the misinterpretation of Atomicity. An atomic value is one for which, in the strange virtual universe you are creating in your database, you don’t have any interest in any of its component parts. If you aren’t interested in the electrons, neutrinos, muons, or taus, then an atom is ..er.. atomic. In the same way, if you are passed a JSON string or XML, and required to store it in a database, then all you need to do is to ask yourself, in your role as Anima notitia copia (Soul of the database), ‘have I any interest in the contents of this item of information?’. If the answer is ‘No!’, or ‘nequequam!’, then it is an atomic value, however complex it may be. After all, you would never have the urge to store the pixels of images individually, under the misguided idea that these are the atomic values, would you? I would, of course, ask the ‘Anima notitia copia’ rather than the application developers, since there may be more than one application, and the application developers may be designing the application in the absence of full domain knowledge (or ‘by the seat of the pants’, as the technical term used to be). If, on the other hand, the answer is ‘sure, and we want to index the XML column’, then we may be in for some heavy XML-shredding sessions to get to store the ‘atomic’ values and ensure future harmony as the application develops. I went back to looking at the sys.syslanguages table. It has a months column with the months in a delimited list: January,February,March,April,May,June,July,August,September,October,November,December. This is an ordered list. Wicked? I seem to remember that this value, like shortmonths and days, is treated as a ‘thing’. It is merely passed off to an external C++ routine in order to format a date in a particular language, and never accessed directly within the database. As far as the database is concerned, it is an atomic value. There is more to normalisation than meets the eye.

    Read the article

  • Upgrades in 5 Easy Pieces

    - by Anne R.
    Even though there are a few select tasks that I have to do once or twice a year, I can’t remember how to do them! Or where to find the bits and pieces to complete the task. So I love it when someone consolidates everything under one spot. That’s what the CRM On Demand team has done with the upgrade information. Specifically, they have: Provided a “one-stop” area for managing upgrades at your company. Broken down the upgrade process into 5 (yes, 5) steps. Explained when and how to perform each step with dates specific to your pod. Included details about each step, visible by expanding the step. Translated the steps into 11 languages. Added a list of release-specific resources with links from the page. Now, just head for the Training and Support portal, click the Release Info tab, and walk through the “5 Essential Steps to a Successful Upgrade.” Before you continue, though, select your language from the drop-down list on the Release Info page. CRM On Demand now has the upgrade steps translated into 11 languages. On the Step page, you can expand each section in sequence and follow the more detailed instructions that appear. This will ensure that you’ve covered all your bases for each upgrade. Here’s a shortened version of the information that you’ll find: 1. Verify your Primary Contact Information. Have you checked your primary contact information to make sure you’re being notified of all upgrade information? Or do you want more users to receive upgrade announcements? This section provides you with the navigation path to do that in CRM On Demand. 2. Review your Key Upgrade Dates. If you expand this step, a nice table appears with your critical dates for the various milestones. IMPORTANT: When your CRM On Demand pod has been officially added to the upgrade schedule, closer to the release date itself, this table will display your specific timetable. 3. Migrate your Customizations from the Staging Environment before the Snapshot Date. Oracle refreshes the Staging data with a copy of your Production data made on the Production Snapshot Date. So this section lists considerations relevant to this step. It also reminds you of the 2-week period when you should not be making any changes in your Staging environment.   4. Conduct your Upgrade Validation on the Staging Environment. When the Customer Validation Testing period begins, you need to log in to your Staging Environment to validate that your key business processes and customizations continue to behave as expected. If your company utilizes Web Services, Web Links, Web Applets or Workflow, focus on testing these first. You generally have about two weeks for testing. If you run into problems during this time, follow the instructions shown in this section for logging a service request. It describes exactly how to fill out the fields in the SR for the fastest resolution. 5. Conduct "White Glove" Testing in your Upgraded Production Environment. Before users start using the upgrade, you should access a few tabs and reports. Doing this actually warms up the cache so that frequently used pages and reports will come up at normal speed on Monday morning, when users log in to the upgraded system. Resources listed under this step help you in further preparing for the upgrade. Now there’s also a new Documentation section on the right with links to these release-specific resources.   
Very nice, I commented, when discussing these improvements with the “responsible party.” She confirmed that, yes, they tried to consolidate the upgrade information, translate it for better communication, simplify it into 5 easy pieces, and drive admins responsible for handling upgrades to this one site instead of sending out elaborate emails. Yes, I just love it when someone practically reaches out and holds my hand through a process. Next best thing to a wizard!

    Read the article

  • Strange Happenings

    - by MOSSLover
    There are weeks we go about our life thinking nothing is going to change and nothing will happen.  Then there are other weeks when a billion things happen at once.  Friday started off very weird for me.  I flew into Atlanta and I met some cool people for another SharePoint event.  I had some good conversations.  Saturday then hit me and my virtual machine bombed in my presentation after the auto updater ran.  I was writing code on the board and describing everything in Notepad.  I would say as presentations go it was the best and the worst presentation all wrapped into one.  The next day I was in Baltimore and I hung out with my aunt, which was relatively uneventful and great.  Then Monday hit and half my presentations failed or succeeded, and my screen froze so I started describing the code.  I was on top of my game until Monday night.  On top of the world.  I'm exhausted, I get into Raleigh, and one of the craziest stories of my life happens.  So my boss has been renting cars through Priceline; this week I got a different company than the other weeks. The company gives me a Ford Focus and I plug in the coordinates on my iPhone for where I want to go.  I head out and then I get to the destination hotel (or I thought I did). I go inside; it's the wrong hotel, the other one is a few miles away.  I walk outside, hop into the car, and it sounds like a gunshot.  Nothing is starting... Am I doing something wrong?  No I'm not, the car is completely dead in the water.  I call the rental car facility and they tell me to call roadside because they are closing for the night.  Roadside says they can't give me a new car, but they can get me a jump, and then I have to take it up with the facility.  They send me a tow truck to give me a jump; the guy can't jump the car.  He tells me this vehicle was towed about an hour ago.  He shows me a copy of a slip from when he towed it.  We also notice the rental car company left one of their price scanning guns in the vehicle.  I call up roadside and now they are interested in getting me a car because I need to be onsite tomorrow.  They get the manager of the facility on the phone; he apologizes profusely and he says he'll be there in 10 minutes.  About 30 minutes pass and he and another dude show up with a Ford Escape with leather interior.  At this point I hand him the gun, tell him someone left it in the vehicle, and that I'm not so happy with them.  I ask them to comp my rental; they can't due to Priceline, however if I call him again this week he can get me a voucher.  It's about 2 am and I'm ready to get to the hotel; I don't make it in the next morning until 10 am.  I would say this was a crazy week; all forms of technology are trying to tell me something.  What, I have no idea, but we'll see the outcome soon.  I feel so weird; tons of change is about to happen.  I don't know if it's good or bad.  I think this week is some form of omen.

    Read the article

  • Copying A Slide From One Presentation To Another

    - by Tim Murphy
    There are many ways to generate a PowerPoint presentation using Open XML.  The first way is to build it by hand strictly using the SDK.  Alternatively you can modify a copy of a base presentation in place.  The third approach to generate a presentation is to build a new presentation from the parts of an existing presentation by copying slides as needed.  This post will focus on the third option. In order to make this solution a little more elegant I am going to create a VSTO add-in as I did in my previous post.  This one is going to insert Tags to identify slides instead of NonVisualDrawingProperties which I used to identify charts, tables and images.  The code itself is fairly short. SlideNameForm dialog = new SlideNameForm(); Selection selection = Globals.ThisAddIn.Application.ActiveWindow.Selection;   if(dialog.ShowDialog() == DialogResult.OK) { selection.SlideRange.Tags.Add(dialog.slideName,dialog.slideName); } Zeyad Rajabi has a good post here on combining slides from two presentations.  The example he gives is great if you are doing a straight merge.  But what if you want to use your source file as almost a supermarket where you pick and choose slides and may even insert them repeatedly?  The following code uses the tags we created in the previous step to pick a particular slide and copy it to a destination file. using (PresentationDocument newDocument = PresentationDocument.Open(OutputFileText.Text,true)) { PresentationDocument templateDocument = PresentationDocument.Open(FileNameText.Text, false);   uniqueId = GetMaxIdFromChild(newDocument.PresentationPart.Presentation.SlideMasterIdList); uint maxId = GetMaxIdFromChild(newDocument.PresentationPart.Presentation.SlideIdList);   SlidePart oldPart = GetSlidePartByTagName(templateDocument, SlideToCopyText.Text);   SlidePart newPart = newDocument.PresentationPart.AddPart<SlidePart>(oldPart, "sourceId1");   SlideMasterPart newMasterPart = newDocument.PresentationPart.AddPart(newPart.SlideLayoutPart.SlideMasterPart);   SlideIdList idList = newDocument.PresentationPart.Presentation.SlideIdList;   // create new slide ID maxId++; SlideId newId = new SlideId(); newId.Id = maxId; newId.RelationshipId = "sourceId1"; idList.Append(newId);   // Create new master slide ID uniqueId++; SlideMasterId newMasterId = new SlideMasterId(); newMasterId.Id = uniqueId; newMasterId.RelationshipId = newDocument.PresentationPart.GetIdOfPart(newMasterPart); newDocument.PresentationPart.Presentation.SlideMasterIdList.Append(newMasterId);   // change slide layout ID FixSlideLayoutIds(newDocument.PresentationPart);     //newPart.Slide.Save(); newDocument.PresentationPart.Presentation.Save(); } The GetMaxIdFromChild and FixSlideLayoutIds methods are borrowed from Zeyad’s article.  The GetSlidePartByTagName method is listed below.  It is really one LINQ query that finds SlideParts with child Tags that have the requested Name. private SlidePart GetSlidePartByTagName(PresentationDocument templateDocument, string tagName) { return (from p in templateDocument.PresentationPart.SlideParts where p.UserDefinedTagsParts.First().TagList.Descendants<DocumentFormat.OpenXml.Presentation.Tag>().First().Name == tagName.ToUpper() select p).First(); } This is what really makes the difference from what Zeyad posted.  The most powerful thing you can have when generating documents from templates is a consistent way of naming items to be manipulated.  I will show more approaches like this in upcoming posts.
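    As a quick companion to the code above, here is a minimal, self-contained sketch (the file path is just a placeholder, not from the original post) that lists the Tag names stored on each slide of a template deck, which is handy for knowing what values can be passed to GetSlidePartByTagName:

    using System;
    using System.Linq;
    using DocumentFormat.OpenXml.Packaging;
    using P = DocumentFormat.OpenXml.Presentation;

    class ListSlideTags
    {
        static void Main()
        {
            // Open the template read-only and walk every slide part.
            using (PresentationDocument template =
                PresentationDocument.Open(@"C:\Decks\Template.pptx", false))
            {
                foreach (SlidePart slide in template.PresentationPart.SlideParts)
                {
                    // Slides that were never tagged simply have no UserDefinedTagsPart.
                    var tags = slide.UserDefinedTagsParts
                                    .SelectMany(t => t.TagList.Descendants<P.Tag>());

                    foreach (P.Tag tag in tags)
                    {
                        Console.WriteLine("{0} = {1}", tag.Name, tag.Val);
                    }
                }
            }
        }
    }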

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.   Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.   Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets; project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc. Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting, analysis, organize daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more. The User Defined Attribute Framework provides the needed infrastructure to add single row attributes groups like well base attributes (well IDs, well type, well structure and key characterizing measures, and more) and well geometry, and multi row attribute groups like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, treatments, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partner's sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships. Site Hub can store any type of unstructured data associated to a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipments at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time. 
Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can just be associated to sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site location specific information, like for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric area specific information, like for example economical, political, risk, weather, logistic, traffic information and more. Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context  

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction -- I think. But the problem is a little more deep -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like: VERTEX: uniform mat4 LightModelViewProjectionMatrix; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { Normal = normalize(gl_NormalMatrix * gl_Normal); LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz); LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex; LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5; gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; } FRAGMENT: uniform sampler2D DiffuseMap; uniform sampler2D ShadowMap; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0])); // Directional lighting //Build ambient lighting vec4 AmbientElement = gl_LightSource[0].ambient; //Build diffuse lighting float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0); vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert ); vec4 LightingColor = ( DiffuseElement + AmbientElement ); LightingColor.r = min(LightingColor.r, 1.0); LightingColor.g = min(LightingColor.g, 1.0); LightingColor.b = min(LightingColor.b, 1.0); LightingColor.a = min(LightingColor.a, 1.0); LightingColor *= Texel; //Everything up to this point is PERFECT // Shadow mapping // ------------------------------ vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w; float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z; float DepthBias = 0.001; float ShadowFactor = 1.0; if( LightCoordinate.w > 0.0 ) { ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0; } LightingColor.rgb *= ShadowFactor; //gl_FragColor = LightingColor; //Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st ); } I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing else is going in unusual. 
When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I do the correct command at the end of the GLSL: That's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • Ubuntu 13.04 installation issues: unable to handle kernel paging request error

    - by user173944
    I wish I could say that I’ve done more for the Linux community recently, but I am very VERY new to all of this and I feel very much in over my head. I figured I would install Ubuntu on my computer and then I would learn and contribute to the community simultaneously. I will try to be as detailed as I can, please ask questions if you need clarification. I installed Ubuntu 13.04 (64-bit) on my Dell Inspiron 1501. This has an AMD Turion 64-bit TL-56 1.8 GHz mobile processor. It is a dual core. It has an ATI Radeon Xpress 1150 chipset in it as well. As of right now I only have a total of 2 GB of RAM, however I was planning on upgrading that in the near future so I opted for the 64-bit Ubuntu 13.04. I first tried the live CD and everything seemed to be functioning correctly other than the wireless (but that's not the issue at hand, there are plenty of guides on the internet on how to get that functioning). The internet worked just fine when it was plugged in so no issues there. However, once I went from that to installing 13.04 (just 13.04, no dual partitioning... I want this computer to run strictly Ubuntu) it did not work. It took me into a shell that I could not type anything into. In this shell it said "Bug: unable to handle kernel paging request" and then it called a bunch of traces and froze up. I had to hard reset the laptop. I tried the boot-repair program multiple times with many different settings, and typically after starting up the laptop would say something along the lines of "recursive errors, will attempt to fix" and then it would attempt to fix a couple of things, and then the computer would freeze up after the text said end trace... so I had to hard reset it again. I'm not an impatient person either; when I say it would freeze up it would be for a period of at least 15 minutes each time before I decided to hard reset. I attempted to install 12.10 on it instead and I got the same exact message, and when I ran boot-repair it did the same exact thing as before. I am currently in the process of running memtest64+ on the computer's memory, though I really don't believe that it, or any of the hardware, is at fault due to the fact that it was still running Windows Vista perfectly when I decided to switch over to Ubuntu. So far the memtest has come back just fine without any errors, but I’ve only been running it for approximately an hour. So this is the situation I’m in. I did notice when I was using the live disk that the video driver needed to be updated, so I performed that, though I’m fairly certain that has nothing to do with this. I have also attempted (though I’m not certain that my attempt was successful in accomplishing what I had planned) to manually edit the grub settings by setting acpi=0 along with adding nomodeset to the boot commands. Like I said, I’m not sure I did that correctly though, but I’m fairly certain I did. If anyone needs any more information I will be more than happy to provide it, and I will post back once I get the full results of the memtest. I very much appreciate any ideas anyone else has, I’ve been at this for a few days to no avail... thank you

    Read the article

  • Too Clever for My Own Good

    - by AjarnMark
    Yesterday I caught myself being a little too clever for my own good with some ASP.NET code.  It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me.  Here’s the scenario… In order to improve the User Interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like Processing… or Reloading… while they are cycling through a postback.  (Yes, I understand this could be improved by using AJAX / Callbacks and so on, but even then, you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol’ Postbacks.)  It’s a very simple trick, really.  All I want to do is, when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message.  This lets the user know, “Hey, we’re doing something, don’t click another link or scroll and try to take action right now.” The first places I hooked this up were easy.  Most common cause of a postback:  Buttons.  And when you’re writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property which is designed for just this purpose…to run client-side JavaScript before the postback occurs.  This is distinguished from the OnClick property which tells the control what Server-side code to run.  Great!  Done!  Easy! But then there are other controls like DropDownLists and CheckBoxes that we use on our pages with the AutoPostback=True setting which cause postbacks.  And these don’t have OnClientClick or OnClientSelectedIndexChanged properties.  So I started getting creative, using an ASP:CustomValidator control in conjunction with setting the CausesValidation and ValidationGroup settings on these controls, which basically caused the action on the control to fire the Custom Validator, which was defined with a Client Side validation function which then did the hide content/show message code (and returned a meaningless IsValid setting).  This also caused me to define a different ValidationGroup setting for my real data entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just my show/hide trick. For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight in the DropDownList and CheckBox controls’ declarative syntax.  Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this.  All that I really needed to do was to put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add("onClick", "myJavaScriptFunctionName()") for the checkboxes, and for the DropDownLists (which become select tags) use "onChange" instead of "onClick".  This is exactly the type of thing that the Attributes collection is there for…so you can add attributes to be rendered with the control that you would have otherwise stuck right into the HTML markup if you had been doing this by hand in the first place. Ugh!
A few hours wasted on clever tricks that I ended up completely removing, but I did learn a lot more about custom validators and validation groups in the process.  And got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today.  Oh, and one more thing…don’t get lulled into too much reliance on the whiz-bang tool to do it for you.  After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.
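    For reference, here is a minimal sketch of that final approach (the control IDs and the showWaitMessage JavaScript function are hypothetical, not from the original page):

    using System;
    using System.Web.UI;

    public partial class ReportPage : Page
    {
        protected void Page_Init(object sender, EventArgs e)
        {
            // Buttons can do this declaratively via OnClientClick. For auto-postback
            // controls that lack a client-side property, add the attribute by hand.
            // showWaitMessage() is assumed to hide the GridView and show "Processing…".
            StatusDropDownList.Attributes.Add("onChange", "showWaitMessage();");
            IncludeArchivedCheckBox.Attributes.Add("onClick", "showWaitMessage();");
        }
    }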

    Read the article

  • C# 5 Async, Part 2: Asynchrony Today

    - by Reed
    The .NET Framework has always supported asynchronous operations.  However, different mechanisms for supporting exist throughout the framework.  While there are at least three separate asynchronous patterns used through the framework, only the latest is directly usable with the new Visual Studio Async CTP.  Before delving into details on the new features, I will talk about existing asynchronous code, and demonstrate how to adapt it for use with the new pattern. The first asynchronous pattern used in the .NET framework was the Asynchronous Programming Model (APM).  This pattern was based around callbacks.  A method is used to start the operation.  It typically is named as BeginSomeOperation.  This method is passed a callback defined as an AsyncCallback, and returns an object that implements IAsyncResult.  Later, the IAsyncResult is used in a call to a method named EndSomeOperation, which blocks until completion and returns the value normally directly returned from the synchronous version of the operation.  Often, the EndSomeOperation call would be called from the callback function passed, which allows you to write code that never blocks. While this pattern works perfectly to prevent blocking, it can make quite confusing code, and be difficult to implement.  For example, the sample code provided for FileStream’s BeginRead/EndRead methods is not simple to understand.  In addition, implementing your own asynchronous methods requires creating an entire class just to implement the IAsyncResult. Given the complexity of the APM, other options have been introduced in later versions of the framework.  The next major pattern introduced was the Event-based Asynchronous Pattern (EAP).  This provides a simpler pattern for asynchronous operations.  It works by providing a method typically named SomeOperationAsync, which signals its completion via an event typically named SomeOperationCompleted. The EAP provides a simpler model for asynchronous programming.  It is much easier to understand and use, and far simpler to implement.  Instead of requiring a custom class and callbacks, the standard event mechanism in C# is used directly.  For example, the WebClient class uses this extensively.  A method is used, such as DownloadDataAsync, and the results are returned via the DownloadDataCompleted event. While the EAP is far simpler to understand and use than the APM, it is still not ideal.  By separating your code into method calls and event handlers, the logic of your program gets more complex.  It also typically loses the ability to block until the result is received, which is often useful.  Blocking often requires writing the code to block by hand, which is error prone and adds complexity. As a result, .NET 4 introduced a third major pattern for asynchronous programming.  The Task<T> class introduced a new, simpler concept for asynchrony.  Task and Task<T> effectively represent an operation that will complete at some point in the future.  This is a perfect model for thinking about asynchronous code, and is the preferred model for all new code going forward.  Task and Task<T> provide all of the advantages of both the APM and the EAP models – you have the ability to block on results (via Task.Wait() or Task<T>.Result), and you can stay completely asynchronous via the use of Task Continuations.  In addition, the Task class provides a new model for task composition and error and cancelation handling.  This is a far superior option to the previous asynchronous patterns. 
The Visual Studio Async CTP extends the Task-based asynchronous model, allowing it to be used in a much simpler manner.  However, it requires the use of Task and Task<T> for all operations.
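    To make the adaptation concrete, here is a minimal sketch of wrapping both older patterns as tasks; the stream name, buffer size, and URL below are illustrative assumptions, not code from the article:

        // Requires System, System.IO, System.Net, System.Threading.Tasks
        // Wrapping an APM pair (FileStream.BeginRead/EndRead) as a Task<int>:
        FileStream stream = File.OpenRead("data.bin");   // hypothetical file
        byte[] buffer = new byte[4096];
        Task<int> readTask = Task<int>.Factory.FromAsync(
            stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);

        // Wrapping an EAP operation (WebClient.DownloadDataAsync) via TaskCompletionSource<T>:
        var client = new WebClient();
        var tcs = new TaskCompletionSource<byte[]>();
        client.DownloadDataCompleted += (sender, e) =>
        {
            if (e.Error != null) tcs.SetException(e.Error);
            else if (e.Cancelled) tcs.SetCanceled();
            else tcs.SetResult(e.Result);
        };
        client.DownloadDataAsync(new Uri("http://example.com/")); // hypothetical URL
        Task<byte[]> downloadTask = tcs.Task;

        // Either task can now be blocked on (readTask.Result), continued from
        // (downloadTask.ContinueWith(...)), or awaited under the Async CTP.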

    Read the article

  • Conference networking for the socially awkward

    - by Melanie Townsend
    Do you approach a room full of strangers with excitement at all the new people you’re going to chat to over coffee and a muffin as you swap tales of how you convinced your manager to give you the day “off”? Or, do you find rooms full of strangers intimidating and begin by scouting out a place you can stand quietly and not be in someone’s way until the next session begins? If you’re on the train to extrovert city, that’s great, well done, move along. If, on the other hand, a room full of strangers who all seem to inexplicably know each other already is more challenge than opportunity, then making those connections with other professionals can be more difficult. So, here’s some advice: some gleaned from other things I’ve read online when trying to overcome my own discomfort in large groups (hopefully minus the infuriating condescension), others are just things I’ve found helpful over the years.
    Start small
    Smaller groups are less intimidating, and, now that you’ve taken the plunge to show up, it’s harder to remain inconspicuous. I find it’s easier to speak to new people once the option NOT to has been taken away. You’re there now, smile through the awkward and you’ll be forever grateful when the three people you’ve met and gotten to know here are also at that gigantic conference later on (ideally, introducing you to other people).
    Smile, or at the very least, stop scowling
    You probably don’t even know you’re doing it. If your resting face doesn’t come across as manically happy, tinge that with some social anxiety and you become one great ball of unapproachable. Normally, I wouldn’t suggest this as a problem that needs fixing; I have personally honed this face to use while travelling alone all the time. However, if you are indeed hoping to meet some useful people and get the most out of this conference, you may need to remind yourself to smile.
    Prepare some ice breakers
    This is going to sound stupid, like “no one does this right?” stupid, but, just, trust me a minute. It’s okay to prepare. You don’t need to write word-for-word questions to ask people and practice them in a mirror – that would be strange. I’m suggesting you just have an arsenal of questions to ask people if you get stuck: what session has been your favorite, which ones are you most looking forward to, have you heard X presenter speak before, what did you think of them? Even just thinking about these things in advance can help, and, as a bonus, while the other person is answering it gives you a moment to tamp down that panic, I mean breathe, I mean get to know them.
    You’re not alone (in the least creepy way possible)
    See that person in the corner clutching their phone with a mild deer-in-the-headlights look? That is potentially your new conference buddy. Starting with something along the lines of “I don’t know about you, the sessions here are great but I find the crowds a little tough to deal with. Mind if I park here for a second?” is a decent opener. Just walking around and looking at exhibitors (if applicable) is fine, but it’s a little too easy to wander about and not actually speak to anyone if that’s all you’re doing. If joining a group of people talking is too much to start with, one-on-one can be easier.
    Have goals
    Are there people in particular you wanted to speak to? Did you have a personal goal of speaking to at least “x” new people? Are you trying to get a contact in a specific company because you want to work with them on something? Does the business have vague goals as well that you may or may not be judged on later? Making specific goals you can accomplish lets you know whether you’ve actually succeeded in your “networking pursuits” or what you need to work on more for next time. Everyone’s got their own coping technique. Some people are able to remind themselves that “humans are fundamentally social creatures” and somehow that helps them, others drink, which is not really something I recommend for professional conferences, but to each their own, and some focus on the fact that networking can play a big role in their career path. Just do what works for you, and if there are any tricks you’ve found helpful over the years, please share ’em.

    Read the article

  • How to run RCU from the command line

    - by Kevin Smith
    When I was trying to figure out how to run RCU on 64-bit Linux I found this post. It shows how to run RCU from the command line. It didn't actually work for me, so you can see my post on how to run RCU on 64-bit Linux. But seeing how to run RCU from the command line got me started thinking about running RCU from the command line to create the schema for WebCenter Content. That post got me part of the way there, since it shows how to run RCU silently from the command line, but to do this you need to know the name of the RCU component for WebCenter Content. I poked around in the RCU files and found the component name for WCC is CONTENTSERVER11. There is a contentserver11 directory in rcuHome/rcu/integration, and when you look at the contentserver11.xml file you will see:
    <RepositoryConfig COMP_ID="CONTENTSERVER11">
    With the component name for WCC in hand I was able to use this command line to run RCU and create the schema for WCC:
    .../rcuHome/bin/rcu -silent -createRepository -databaseType ORACLE -connectString localhost:1521:orcl1 -dbUser sys -dbRole sysdba -schemaPrefix TEST -component CONTENTSERVER11 -f < rcu_passwords.txt
    To make the silent part work and not have it prompt you for the passwords needed (the sys password and the password for each schema), you use the -f option and specify a file containing the passwords, one per line, in the order the components are listed in the -component argument. Here is the output from RCU when I ran the above command:
    Processing command line ....
    Repository Creation Utility - Checking Prerequisites
    Checking Global Prerequisites
    Repository Creation Utility - Checking Prerequisites
    Checking Component Prerequisites
    Repository Creation Utility - Creating Tablespaces
    Validating and Creating Tablespaces
    Repository Creation Utility - Create
    Repository Create in progress.
    Percent Complete: 0
    ...
    Percent Complete: 100
    Repository Creation Utility: Create - Completion Summary
    Database details:
    Host Name              : localhost
    Port                   : 1521
    Service Name           : ORCL1
    Connected As           : sys
    Prefix for (prefixable) Schema Owners : TEST
    RCU Logfile            : /u01/app/oracle/logdir.2012-09-26_07-53/rcu.log
    Component schemas created:
    Component                            Status  Logfile
    Oracle Content Server 11g - Complete Success /u01/app/oracle/logdir.2012-09-26_07-53/contentserver11.log
    Repository Creation Utility - Create : Operation Completed
    This works fine if you want to use the default tablespace sizes and options, but there does not seem to be a way to specify the tablespace options on the command line. You can specify the name of the tablespace and temp tablespace, but they must already exist in the database before running RCU. I guess you can always create the tablespaces first using your desired sizes and options and then run RCU and specify the tablespaces you created. When looking up the command line options in the RCU doc I found it has the list of components for each product that it supports. See Appendix B in the RCU User's Guide.
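    As a minimal illustration of that passwords file, a sketch might look like the following; the values are placeholders (not anything from the original post), with the sys password first and then one password per schema in -component order:

        # rcu_passwords.txt -- hypothetical values
        MySysPassword1
        MyWccSchemaPassword1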

    Read the article

  • JavaOne Latin America 2012 Trip Report

    - by reza_rahman
    JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil on December 4-6. The conference was a resounding success with a great vibe, excellent technical content and numerous world class speakers. Some notable local and international speakers included Bruno Souza, Yara Senger, Mattias Karlsson, Vinicius Senger, Heather Vancura, Tori Wieldt, Arun Gupta, Jim Weaver, Stephen Chin, Simon Ritter and Henrik Stahl. Topics covered included the JCP/JUGs, Java SE 7, HTML 5/WebSocket, CDI, Java EE 6, Java EE 7, JSF 2.2, JMS 2, JAX-RS 2, Arquillian and JavaFX. Bruno Borges and I manned the GlassFish booth at the Java Pavilion on Tuesday and Wednesday. The booth traffic was decent and not too hectic. We met a number of GlassFish adopters, including perhaps one of the largest GlassFish deployments in Brazil, as well as some folks migrating to Java EE from Spring. We invited them to share their stories with us. We also talked with some key members of the local Java community. Tuesday evening we had the GlassFish party at the Tribeca Pub. The party was definitely a hit and we could have used a larger venue (this was the first time we had the GlassFish party in Brazil). Along with GlassFish enthusiasts, a number of Java community leaders were there. We met some of the same folks again at the JUG leaders' party on Wednesday evening. On Thursday Arun Gupta, Bruno Borges and I ran a hands-on lab on JAX-RS, WebSocket and Server-Sent Events (SSE) titled "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". This is the same Java EE 7 lab run at JavaOne San Francisco. The lab gives developers a first-hand glimpse of what an HTML 5 powered Java EE application might look like. We had an overflow crowd for the lab (at one point we had about twenty people standing) and the lab went very well. The slides for the lab are here: Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket from Reza Rahman. The actual contents for the lab are available here. Give me a shout if you need help getting it up and running. I gave two solo talks following the lab. The first was on JMS 2, titled "What’s New in Java Message Service 2". This was essentially the same talk given by JMS 2 specification lead Nigel Deakin at JavaOne San Francisco. I talked about the JMS 2 simplified API, JMSContext injection, delivery delays, asynchronous send, JMS resource definition in Java EE 7, standardized configuration for JMS MDBs in EJB 3.2, mandatory JCA pluggability and the like. The session went very well, there was good Q & A, and someone even told me this was the best session of the conference! The slides for the talk are here: What’s New in Java Message Service 2 from Reza Rahman. My last talk for the conference was on JAX-RS 2 in the keynote hall. Titled "JAX-RS 2: New and Noteworthy in the RESTful Web Services API", this was basically the same talk given by the specification leads Santiago Pericas-Geertsen and Marek Potociar at JavaOne San Francisco. I talked about the JAX-RS 2 client API, asynchronous processing, filters/interceptors, hypermedia support, server-side content negotiation and the like. The talk went very well and I got a few very kind compliments afterwards.
The slides for the talk are here: JAX-RS 2: New and Noteworthy in the RESTful Web Services API from Reza Rahman. On a more personal note, Sao Paulo has always had a special place in my heart as the incubating city for Sepultura and Soulfly -- two of my most favorite heavy metal musical groups of all time! Consequently, the city has a perpetually alive and kicking metal scene pretty much any given day of the week. This time I got to check out a solid performance by local metal act Republica at the legendary Manifesto Bar. I also wanted to see a Dio Tribute at the Blackmore but ran out of time and energy... Overall I enjoyed the conference/Sao Paulo and look forward to going to Brazil again next year!
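    To give a flavor of the JMS 2 simplified API and fluent producer mentioned above, here is a minimal, hypothetical sketch; the destination lookup name and payload are assumptions, not material from the session:

        // Requires javax.inject.Inject, javax.annotation.Resource,
        // javax.jms.JMSContext and javax.jms.Queue (Java EE 7 / JMS 2)
        @Inject
        private JMSContext context;                       // container-managed, no Connection/Session boilerplate

        @Resource(lookup = "java:global/jms/DemoQueue")   // hypothetical destination
        private Queue queue;

        public void notifyListeners() {
            // Fluent JMSProducer: a delivery delay plus a simple text payload
            context.createProducer()
                   .setDeliveryDelay(5000)
                   .send(queue, "order processed");
        }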

    Read the article

  • wifi hardware switch doesn't work on a Dell 1018

    - by user42566
    I have a problem with my Dell Inspiron 1018. I can't switch the wifi on through the key on the keyboard. I think it's a driver problem since Ubuntu 11.10. These are the versions I tried:
    Ubuntu 10.04 / 10.10 - It's possible to install the driver by hand:
    sudo add-apt-repository ppa:lexical/hwe-wireless
    sudo apt-get update
    sudo apt-get install rtl8192ce-dkms
    Ubuntu 11.04 - It works out of the box.
    Ubuntu 11.10 / 12.04 - I haven't found any solution for these versions. The "ppa:lexical/hwe-wireless" doesn't work for these versions; it says "Can not find package rtl8192ce-dkms". The window of additional drivers is empty, so I can't install the driver. The wired network works fine. Here is some information:
    0: dell-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    1: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: yes
    sudo lshw -class network
      *-network
           description: Ethernet interface
           product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:05:00.0
           logical name: eth0
           version: 05
           serial: 5c:26:0a:0d:20:10
           size: 10Mbit/s
           capacity: 100Mbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl_nic/rtl8105e-1.fw latency=0 link=no multicast=yes port=MII speed=10Mbit/s
           resources: irq:43 ioport:2000(size=256) memory:f0f2c000-f0f2cfff memory:f0f18000-f0f1bfff
      *-network
           description: Wireless interface
           product: RTL8188CE 802.11b/g/n WiFi Adapter
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:07:00.0
           logical name: wlan0
           version: 01
           serial: 70:f1:a1:fe:15:bd
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=rtl8192ce driverversion=3.2.0-22-generic-pae firmware=N/A ip=192.168.1.76 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
           resources: irq:17 ioport:3000(size=256) memory:f0100000-f0103fff
    mark@mark-Inspiron-1018:~$
    mark@mark-Inspiron-1018:~$ sudo lspci -nn
    00:00.0 Host bridge [0600]: Intel Corporation N10 Family DMI Bridge [8086:a010]
    00:02.0 VGA compatible controller [0300]: Intel Corporation N10 Family Integrated Graphics Controller [8086:a011]
    00:02.1 Display controller [0380]: Intel Corporation N10 Family Integrated Graphics Controller [8086:a012]
    00:1b.0 Audio device [0403]: Intel Corporation N10/ICH 7 Family High Definition Audio Controller [8086:27d8] (rev 02)
    00:1c.0 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 1 [8086:27d0] (rev 02)
    00:1c.1 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 2 [8086:27d2] (rev 02)
    00:1d.0 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 [8086:27c8] (rev 02)
    00:1d.1 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 [8086:27c9] (rev 02)
    00:1d.2 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 [8086:27ca] (rev 02)
    00:1d.3 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 [8086:27cb] (rev 02)
    00:1d.7 USB controller [0c03]: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller [8086:27cc] (rev 02)
    00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e2)
    00:1f.0 ISA bridge [0601]: Intel Corporation NM10 Family LPC Controller [8086:27bc] (rev 02)
    00:1f.2 SATA controller [0106]: Intel Corporation N10/ICH7 Family SATA Controller [AHCI mode] [8086:27c1] (rev 02)
    00:1f.3 SMBus [0c05]: Intel Corporation N10/ICH 7 Family SMBus Controller [8086:27da] (rev 02)
    05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)
    07:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01)
    mark@mark-Inspiron-1018:~$
    mark@mark-Inspiron-1018:~$ lsusb
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 002: ID 174f:1127 Syntek
    mark@mark-Inspiron-1018:~$

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into a forward, side, and up vector suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking and not to the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudo code for my rendering loop:
    pMatrix = new Matrix();
    pMatrix = makePerspective(...)
    mvMatrix = new Matrix()
    camera.apply(mvMatrix); // Calls gluLookAt
    // Move the object into position.
    mvMatrix.translatev(position);
    mvMatrix.rotatef(rotation.x, 1, 0, 0);
    mvMatrix.rotatef(rotation.y, 0, 1, 0);
    mvMatrix.rotatef(rotation.z, 0, 0, 1);
    var nMatrix = new Matrix();
    nMatrix.set(mvMatrix.get().getInverse().getTranspose());
    // Set vertex shader uniforms.
    gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
    gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
    gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));
    // ...
    gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);
    And the corresponding vertex shader:
    // Attributes
    attribute vec3 aVertexPosition;
    attribute vec4 aVertexColor;
    attribute vec3 aVertexNormal;
    // Uniforms
    uniform mat4 uMVMatrix;
    uniform mat4 uNMatrix;
    uniform mat4 uPMatrix;
    // Varyings
    varying vec4 vColor;
    // Constants
    const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
    const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);
    float ComputeLighting() {
        vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
        transformedNormal = uNMatrix * transformedNormal;
        float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
        return max(base, 0.0);
    }
    void main(void) {
        gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
        float lightWeight = ComputeLighting();
        vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
    }
    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL if it is missing would be appreciated.

    Read the article

  • Lease Accounting Closed for Comment

    - by Theresa Hickman
    December 15, 2010 marked the last day to send public comments to FASB and IASB on lease accounting. June 2011 is the deadline for the final consideration of the Leases Exposure Draft that will be given to standard setters in order to create a new lease accounting standard. Landlords, lessees, retailers, the airline industry, etc. are all worried right now about the changes to lease accounting. They feel the changes will be too costly and complex without adding significant improvement to the quality and relevance of financial statements. In a nutshell, IASB and FASB want to abolish operating leases where the lessee records the periodic payments as an expense over time. The proposed changes will mean that the accounting for leases will move from the P&L and hit both the lessee's and lessor's balance sheets. For companies that occupy a lot of property, this could significantly increase their liabilities, not to mention front-load much of the costs that they were able to spread out over time before. Why are IASB and FASB doing this? Their goal is to have consistent accounting for both lessees and lessors with higher quality financial statements. Leasing is one of four major projects being undertaken by the IASB and FASB in order to complete convergence between US GAAP and IFRS. I spoke to our resident accounting expert Seamus Moran about this to better understand how this might impact accounting software. He reminded me that the proposed changes to both US GAAP and IFRS with respect to leases are "proposed." It is still inappropriate to account for leases the way they are being proposed, and we still need to account for them in accordance with the current regulations, which is what current accounting software programs, such as E-Business Suite Release 12.1 and prior and PeopleSoft Enterprise, support. The FASB (US GAAP) and IASB (IFRS) exposure drafts (EDs) that outline the proposal were published. The FASB edition was published on August 17th, with comments due by December 15th. The IASB edition was published on the same date, and comments were due in London on the same date. Exposure drafts are the method both the FASB and the IASB use to solicit General Acceptance, the "GA" in GAAP. Both Boards will consider the input they have received, and perhaps revise the proposal. The proposal has come in for some criticism, both from the finance houses and the users of the leased assets. There is, given the opposition to it, an excellent chance that the Leasing proposal will be modified or rewritten. We will know this in about six months, the usual time it takes for the FASB and IASB to digest the comments they receive. If they feel the proposal has General Acceptance, they will issue the final Standard at that time; if not, they will issue a revised proposal with another year of comment and drafting. Oracle participates in the standard setting process and is fully aware of the leasing proposal. We have designs that would reflect the proposal in hand. These designs will be finalized when the proposal is finalized. It is likely that customers will develop new financial arrangements if the proposal is finalized, and we are working with customers and partners to stay in touch with people's business responses to the proposal. The IASB and FASB are aware that ERP companies will have to revise their software, and that the companies filing results under IFRS or under US GAAP will have to implement such software.
The form and timing of the release of the updated software will depend on the schedule of the take up of the new standard, the complexity of the standard, and the releases supported at the time the standard becomes effective.

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background
    One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware. On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads: some designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior. We're very aware of these challenges in Solaris, and have been working to provide the best out-of-the-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers to both address issues caused by contention over shared hardware resources and explicitly take advantage of features such as T4's dynamic threading.
    What it is
    The basic idea is to allow performance critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4 based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available.
    How does it work?
    Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime.
If the system is fully committed, the scheduler will put all the available CPUs to work.
    Best practices
    If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-the-box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.
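    As a rough illustration of assigning those priorities from the shell (the PID and command name below are made-up examples, and the exact flags you need may differ for your setup), something along these lines places a producer into the FX class at priority 60:

        # Move an already-running producer (hypothetical PID 1234) into the
        # Fixed Priority class at priority 60
        priocntl -s -c FX -m 60 -p 60 -i pid 1234

        # Or launch it in the FX class at priority 60 from the start
        priocntl -e -c FX -m 60 -p 60 ./producer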

    Read the article

  • CSM shadow errors when models are split

    - by KaiserJohaan
    I'm getting closer to fixing CSM, but there seems to be one more issue at hand. At certain angles, the models will be caught/split between two shadow map cascades, like below.
    first depth split
    second depth split - here you can see the model is caught between the splits
    How does one fix this? Increase the overlapping boundaries between the splits? Or is the frustum erroneous?
    CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix, Mat4& outFrustrumMat)
    {
        CameraFrustrum ret = {
            Vec4(1.0f, -1.0f, 0.0f, 1.0f), Vec4(1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, 1.0f, 0.0f, 1.0f), Vec4(-1.0f, -1.0f, 0.0f, 1.0f),
            Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f),
        };

        const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist);
        const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix);
        outFrustrumMat = invMVP;

        for (Vec4& corner : ret)
        {
            corner = invMVP * corner;
            corner /= corner.w;
        }

        return ret;
    }

    Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir)
    {
        Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f));

        Vec4 transf = lightViewMatrix * cameraFrustrum[0];
        float maxZ = transf.z, minZ = transf.z;
        float maxX = transf.x, minX = transf.x;
        float maxY = transf.y, minY = transf.y;
        for (uint32_t i = 1; i < 8; i++)
        {
            transf = lightViewMatrix * cameraFrustrum[i];
            if (transf.z > maxZ) maxZ = transf.z;
            if (transf.z < minZ) minZ = transf.z;
            if (transf.x > maxX) maxX = transf.x;
            if (transf.x < minX) minX = transf.x;
            if (transf.y > maxY) maxY = transf.y;
            if (transf.y < minY) minY = transf.y;
        }

        Mat4 viewMatrix(lightViewMatrix);
        viewMatrix[3][0] = -(minX + maxX) * 0.5f;
        viewMatrix[3][1] = -(minY + maxY) * 0.5f;
        viewMatrix[3][2] = -(minZ + maxZ) * 0.5f;
        viewMatrix[0][3] = 0.0f;
        viewMatrix[1][3] = 0.0f;
        viewMatrix[2][3] = 0.0f;
        viewMatrix[3][3] = 1.0f;

        Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5);

        return OrthographicMatrix(-halfExtents.x, halfExtents.x, halfExtents.y, -halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix;
    }

    Read the article

  • How to Waste Your Marketing Budget

    - by Mike Stiles
    Philosophers have long said if you find out where a man’s money is, you’ll know where his heart is. Find out where money in a marketing budget is allocated, and you’ll know how adaptive and ready that company is for the near future. Marketing spends are an investment. Not unlike buying stock, the money is placed in areas the marketer feels will yield the highest return. Good stock pickers know the lay of the land, the sectors, the companies, and trends. Likewise, good marketers should know the media available to them, their audience, what they like & want, what they want their marketing to achieve…and trends. So what are they doing? And how are they doing? A recent eTail report shows nearly half of retailers planned on focusing on SEO, SEM, and site research technologies in the coming months. On the surface, that’s smart. You want people to find you. And you’re willing to let the SEO tail wag the dog and dictate the quality (or lack thereof) of your content such as blogs to make that happen. So search is prioritized well ahead of social, multi-channel initiatives, email, even mobile - despite the undisputed explosive growth and adoption of it by the public. 13% of retailers plan to focus on online video in the next 3 months. 29% said they’d look at it in 6 months. Buying SEO trickery is easy. Attracting and holding an audience with wanted, relevant content…that’s the hard part. So marketers continue to kick the content can down the road. Pretty risky, since content can draw and bind customers to you. Asked to look a year ahead, retailers started thinking about CRM systems, customer segmentation, and loyalty (again well ahead of online video, social and site personalization). What these investors are missing is that social is spreading across every function of the enterprise and will be a part of CRM, personalization, loyalty programs, etc. They’re using social for engagement but not for PR, customer service, and sales. Mistake. Allocations are being made seemingly blind to the trends. Even more peculiar are the results of an analysis Mary Meeker of Kleiner Perkins made. She looked at how much time people spend with media types and how marketers are investing in those media. 26% of media consumption is online, yet marketers spend 22% of their ad budgets there. 10% of media time is spent with mobile, but marketers are spending 1% of their ad budgets there. 7% of media time is spent with print, but (get this) marketers spend 25% of their ad budgets there. It’s like being on Superman’s Bizarro World. Mary adds that of the online spending, most goes to search, while spends on content, even ad content, stayed flat. Stock pickers know to buy low and sell high. It means peering with info in hand into the likely future of a stock and making the investment in it before it peaks. Either marketers aren’t believing the data and trends they’re seeing, or they can’t convince higher-ups to acknowledge change and adjust their portfolios accordingly. Follow @mikestiles. Image via stock.xchng

    Read the article

  • Two Candidates + One Job = Two Different Outcomes

    - by david.talamelli
    Recruiters have always headhunted (sidenote: I do not like this word; in general I think the type of people who use the phrase “headhunting” are the ones who are trying to sound more important than they likely are). Any serious Recruiter engages in direct recruiting activity; it is part and parcel of the business, not something unique. With the uptake of Social Media over the past 4-5 years, we have seen an increase in the number of Recruiters proactively reaching out to people about job opportunities. We have also seen this activity increase across all levels of hire, from help desk roles to C-Level Executives. While getting approached about a role can be a nice boost to a person’s ego, do not let it give you an inflated sense of entitlement. The way that people handle themselves during these calls and subsequent interviews will have a large impact on their potential to land that job. Last week I spoke to two very different candidates, both about the same position and both with very different outcomes. On paper, Candidate #1 looked fantastic; they ticked many of the boxes that we were looking for. The person is working at a global IT company in a role similar to the one we were hiring for, but not as senior as the role we had. This role would have been the perfect step for the person to get involved in more complex work. Candidate #2 had less polished IT experience, ticked some of the boxes we were looking for, and on paper was not as close a fit as Candidate #1. It seemed like I was comparing apples and oranges. After speaking to both candidates it turns out I was comparing apples and oranges, except the person better suited for our role was not the one I expected. The first candidate looked great on paper – they had the experience we were looking for and appeared to be just right for the role, but after talking to them, they gave me the impression that they thought the world owed them. The impression I was left with was that they did not equate success with hard work; they seemed more interested in “what is in it for me”. Rather than having a proper conversation with me, they often cut me off and asked me to hurry up when I was explaining our business, what we are doing, etc. This person seemed more interested in the job title and money than in thinking about ways to make the role successful. Candidate #2, who had limited experience, made up for any perceived lack of experience and then some with a demonstrated motivation to succeed and do the things needed to make that happen. Candidate #2 made a great first impression; they did not seem afraid of hard work and demonstrated a “team player” attitude. In talking to them they kept me engaged, listened and asked thoughtful questions that made me think this is the type of person who creates their own luck and who would thrive in a place like Oracle. Skills, capabilities, experience and a good resume can certainly get your foot in the door, but the wrong attitude or approach to work can close those opportunities just as easily. On the other hand, hard work, effort and a genuine work ethic may help open those doors that would otherwise be closed to you. A resume with all the credentials gets you in the front door, but that is just the beginning of the process. It is not how we start the race that is important, it’s how things end that matters most.

    Read the article

< Previous Page | 151 152 153 154 155 156 157 158 159 160 161 162  | Next Page >