Search Results

Search found 7637 results on 306 pages for 'cd drives'.


  • Can Microsoft Build Appliances?

    - by andrewbrust
    Billy Hollis, my Visual Studio Live! colleague and fellow Microsoft Regional Director, said recently (and I am paraphrasing) that the computing world, especially on the consumer side, has shifted from one of building hardware and software that make things possible to do, to one of building products and technologies that make things easy to do. Billy crystallized things perfectly, as he often does.

    In this new world of "easy to do," Apple has done very well and Microsoft has struggled. In the old world, customers wanted a Swiss Army Knife, with the most gimmicks and gadgets possible. In the new world, people want elegant cutlery. They may want cake cutters and utility knives too, but they don't want one device that handles all three tasks. People don't want tools, they want utensils. People don't want machines. They want appliances.

    Microsoft Appliances: They Do Exist
    Microsoft has built a few appliance-like devices. I would say Xbox 360 is an appliance. It's versatile, mind you, but it's the kind of thing you plug in, turn on and use, as opposed to set up, tune, and open up to upgrade the internals. Windows Phone 7 is an appliance too. It's a true smartphone, unlike Windows Mobile, which was a handheld computer with a radio stack. Zune is an appliance too, and a nice one. It hasn't attained much traction in the market, but that's probably because the seminal consumer computing appliance -- the iPod -- got there so much more quickly.

    In the embedded world, Mediaroom, Microsoft's set-top product for the cable industry (used by AT&T U-Verse and others), is an appliance. So is Microsoft's Sync technology, used in Ford automobiles. Even on the enterprise side, Microsoft has an appliance: SQL Server Parallel Data Warehouse Edition (PDW) combines Microsoft software with select OEMs' server, networking and storage hardware. You buy the appliance units from the OEMs, plug them in, connect them and go.

    I would even say that Bing is an appliance. Not in the hardware sense, mind you. But from the software perspective, it's a single-purpose product that you visit or run, use and then move on. You don't have to install it (except for the iOS and Android native apps, where it's pretty straightforward), you don't have to customize it, you don't have to program it. Basically, you just use it.

    Microsoft Appliances that Should Exist
    But Microsoft builds a bunch of things that are not appliances. Media Center is not an appliance, and it most certainly should be. Instead, it's an app that runs on Windows 7. It runs full-screen, and you can use this configuration to conceal the fact that Windows is under it, but eventually something will cause you to abandon that masquerade (like Patch Tuesday).

    The next version of Windows Home Server won't, in my opinion, be an appliance either. Now that the Drive Extender technology is gone, and users can't just add and remove drives into and from a single storage pool, the product is much more like an IT server and less like an appliance. Much has been written about this decision by Microsoft. I'll just sum it up in one word: pity.

    Microsoft doesn't have anything remotely appliance-like in the tablet category, either. Until it does, it likely won't have much market share in that space either. And of course, the bulk of Microsoft's product catalog on the business side is geared to enterprise machines and not personal appliances.

    Appliance DNA: They Gotta Have It
    The consumerization of IT is real, because businesspeople are consumers too.
    They appreciate the fit and finish of appliances at home, and they increasingly feel entitled to have it at work too. Secure and reliable push email on a smartphone is necessary, but it isn't enough. People want great apps and a pleasurable user experience too. The full Microsoft Office product is needed at work, but a PC with a keyboard and mouse, or maybe a touch screen that uses a stylus (or requires really small fingers), to run Office isn't enough either. People want a flawless touch experience available for the times they want to read and take quick notes. Until Microsoft realizes this fully and internalizes it, it will suffer defeats in the consumer market and even setbacks in the business market. Think about how slow the Office upgrade cycle is... now imagine if the next version of Office had a first-class alternate touch UI, and consider the possible acceleration in adoption rates.

    Can Microsoft make the appliance switch? Can the appliance mentality become pervasive at the company? Can Microsoft hasten its release cycles dramatically and shed the "some assembly required" paradigm upon which many of its products are based? Let's face it: the chances that Microsoft won't make this transition are significant.

    But there are also encouraging signs, and they should not be ignored. The appliances we have already discussed, especially Xbox, Zune and Windows Phone 7, are the most obvious in this regard. The fact that SQL Server has an appliance SKU now is a more subtle but perhaps also more significant outcome, because that product sits smack in the middle of Microsoft's enterprise stack. Bing is encouraging too, especially given its integrated travel, maps and augmented reality capabilities. As Bing gains market share, Microsoft has tangible proof that it can transform and win, even when everyone outside the company, and many within it, would bet otherwise.

    That Great Big Appliance in the Sky
    Perhaps the most promising (and evolving) proof points toward the appliance mentality, though, are Microsoft's cloud offerings -- Azure and BPOS/Office 365. While the cloud does not represent a physical appliance (quite the opposite, in fact), its ability to make acquisition, deployment and use of technology simple for the user is absolutely an embodiment of the appliance mentality and spirit. Azure is primarily a platform-as-a-service offering; it doesn't just provide infrastructure. SQL Azure does likewise for databases. And Office 365 does likewise for SharePoint, Exchange and Lync. You don't administer, tune and manage servers; instead, you create databases or site collections or mailboxes and start using them. Upgrades come automatically, and it seems like releases will come more frequently. Fault tolerance and content distribution are just there. No muss. No fuss. You use these services; you don't have to set them up and think about them. That's how appliances work.

    To me, these signs point out that Microsoft has the full capability of transforming itself. But there's a lot of work ahead. Microsoft may say it's "all in" on the cloud, but the majority of the company is still oriented around its old products and models. There needs to be a wholesale cultural transformation in Redmond. It can happen, but product management, program management, the field and the executive ranks must unify in the effort. So must partners, and even customers. New leaders must rise up, and Microsoft must be able to see itself as a winner.
    If Microsoft does this, it could lock in decades of new success, and become a standard business school case study for doing so. If not, the company will have missed an opportunity, and may see its undoing.

    Read the article

  • NVIDIA X server - "sudo nvidia-xconfig" does not generate a working 'xorg.conf'

    - by Mike
    I am over 18 hours deep on this challenge. I got to this point and am stuck. Very stuck. Maybe you can figure it out?

    Ubuntu version: 12.04 LTS with all the updates installed.

    Problem: The default settings in "/etc/X11/xorg.conf" that are generated by the "nvidia-xconfig" tool do not allow the NVIDIA X server to connect to the driver shown in my "System Settings > Additional Drivers" window (that's how I understand it; lots of information below).

    Symptoms of the problem:
    1. The "System Settings > Additional Drivers" window has drivers, but the NVIDIA X server cannot connect to or utilize any of the 4 drivers. The drivers are activated, but not in use.
    2. When I go to "System Tools > Administration > NVIDIA X Server Settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screen shot below).
    3. This is the message the terminal gives after running a "sudo nvidia-xconfig" command for the first time. It seems that the file generated by the tool I just ran is a bad/unusable one. If I run the "sudo nvidia-xconfig" command again, I won't get an error the second time. However, when I reboot, the default file that is generated (/etc/X11/xorg.conf) simply puts the screen resolution at 800 x 600 (or something like that). When I try to go to NVIDIA X Server Settings I am greeted with the same screen as in symptom 2 (no option to change the resolution). If I try to go to "System Settings > Displays" there are no other resolutions to choose from. At this point I must delete the newly minted "xorg.conf" and reinstate the original in its place.

    Here are the contents of the "xorg.conf" that is generated first (the one missing required information):

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Unknown"
            HorizSync       28.0 - 33.0
            VertRefresh     43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Hardware: I ran "lspci | grep VGA". The results are:

        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1)

    More hardware info: RAM: 16GB. CPU: Intel Core i7-2720QM @ 2.2GHz x 8. Other: 64 bit. This is a triple-boot computer and not a VM.

    Attempts with no success on my end:
    1) Tried to append the "xorg.conf" with what I perceive is the missing information, and obviously it didn't fly.
    2) All the other stuff I tried got me to this point.
    3) See if this link is helpful to you (I barely get it, but I get enough to know that a smarter person might find it useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html
    4) I am completely new to Linux (40 hours over the past week), but not to programming. However, I am very serious about changing over to Linux. When you respond (I hope someone responds...) please respond in a way that a person new to Linux can understand.
    5) By the way, the reason I am in this mess is because I MUST have a second monitor running from my laptop, and "System Settings > Displays" doesn't recognize my second display. I know it is possible to make the second display work on my system, because when I boot from the install CD, I perform work on the native laptop monitor, but the second monitor shows a purple screen with Ubuntu in the middle, so I know the VGA port is sending a signal out. If this is too much for you to tackle, please suggest an alternative method to get a second display. I don't want to go to Windows, but I cannot have a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike.

    ********************** EDIT #1 **********************
    More details about the graphics card. I was asked "which brand of nvidia-card do you have exactly?" Here is what I did to provide more info (maybe relevant, maybe not, but here is everything):
    1) Took my Lenovo W520 right apart to see if there is an identifier on the actual card. However, I realized that if I get deep enough to take a look, the laptop "won't like it", so I put it back together. Figuring out the card this way is not an option for me right now.
    2) (My computer is triple boot.) I logged into Win7 and ran the 'dxdiag' command. Here is the screen shot:
    3) I tried to look on the Lenovo website for more details... but no luck. I took a look at my receipts and here is the info from the receipt: System Unit: W520 NVIDIA Quadro 1000M 2GB
    4) In Win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest update for my card. I tried the same with Ubuntu but I can't get the applet to run. Here is the recommended driver from the NVIDIA applet for my card for Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319, Version: 320.00 WHQL, Release Date: 3.5.2013
    5) Also, I went on the NVIDIA driver search and looked through every possible combination of product type + product series + product to find all the combinations that yield a 1000M card. My card is: Product Type: Quadro; Product Series: Quadro Series (Notebooks); Product: 1000M

    *********************** EDIT #2 ***********************
    Additional symptoms. Another question that generated more symptoms I previously didn't mention was: "After generating xorg.conf by nvidia-xconfig, go to additional drivers, do you see nvidia-304?"
    1) I took a screen shot of the "Additional Drivers" window right after generating xorg.conf with nvidia-xconfig. Here it is:
    2) Then I did a reboot. Now Ubuntu is at 600 x 800 resolution. When I logged in after the computer came up I got an error (which I always get after generating xorg.conf with nvidia-xconfig and rebooting).
    3) To finally answer the question - no, there is no "nvidia-304" driver. Screen shot of Additional Drivers after generating xorg.conf with nvidia-xconfig and rebooting:
    At this point I revert to the original xorg.conf and delete the xorg.conf generated by nvidia-xconfig.
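
    Editor's note for readers hitting the same wall: the W520 is an Optimus laptop (Intel integrated plus NVIDIA discrete, as the lspci output above shows), and a bare nvidia-xconfig run often produces an xorg.conf without a BusID, so X cannot bind the nvidia driver. Below is a minimal sketch of one commonly suggested check, assuming the discrete GPU sits at PCI bus 1:0:0 (taken from the poster's own lspci output) and that the proprietary driver is already installed. It is illustrative only, not a confirmed fix for this thread; back up your config first.

        # Back up the current X configuration before experimenting
        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.backup

        # Regenerate a config and pin the discrete GPU explicitly.
        # "PCI:1:0:0" corresponds to the 01:00.0 entry from `lspci | grep VGA`.
        sudo nvidia-xconfig --busid=PCI:1:0:0

        # Reboot (or restart the display manager), then check which driver X actually loaded
        grep -iE "nvidia|intel" /var/log/Xorg.0.log | head

    If X still comes up on the Intel GPU, the more reliable route on this particular laptop is reportedly to switch the BIOS graphics setting to discrete-only (or to use a Bumblebee-style setup); the sketch above only addresses the missing BusID.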

    Read the article

  • The Top 5 MDM Sessions You Can’t Miss at OpenWorld

    - by Mala Narasimharajan
    Sessions, demo pods, hands-on labs, and much more - but where should you focus? MDM has some excellent sessions planned for OOW; here is a top 5 list to identify the sessions you just can't afford to miss.

    October 3, 2012, 1:15 PM - 2:15 PM, Moscone West - 3002/3004
    What's There to Know About Oracle's Master Data Management Portfolio and Roadmap?
    Hear about product strategy, our vision for the future, and how Oracle MDM is positioned to excel in helping organizations make the most of their customer, partner, supplier or product data.

    October 3, 2012, 5:00 PM - 6:00 PM, Westin San Francisco - Metropolitan I
    Oracle Customer MDM Applications: Implementation Best Practices, Data Governance, and ROI
    Customer successes provide solid examples of technology at work and how organizations derive value from it. Attend this session and hear from our customers how they built a business case, established governance and are realizing the benefits of Oracle Customer Hub.

    October 2, 2012, 10:15 AM - 11:15 AM, Moscone West - 3001
    Mastering Product Data: Strategies for Effective Product Information Management
    Product data is vital for any enterprise to be able to provide a consolidated representation of products to its partners, customers and suppliers. Hear how our customers leverage product information to lead in their respective areas and how Oracle is critical to achieving this.

    October 2, 2012, 11:45 AM - 12:45 PM, Moscone West - 2022
    Enabling Trusted Enterprise Product Data with Oracle Fusion Product Hub
    Learn how Oracle Fusion Product Hub is paving the way for providing organizations with trusted product data, as well as helping organizations make the most of the information and infrastructure they already possess.

    October 1, 2012, 4:45 PM - 5:45 PM, InterContinental - Ballroom A
    Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation
    Hear how Data Relationship Management drives enterprise transformation and why any organization embarking on a master data management initiative needs it, plus hear best practices and lessons learned from our customers.

    Check out the Master Data Management Focus On document for all our sessions at OpenWorld 2012.

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes, and they require significant amounts of data to function correctly. This includes data about the objects that are involved in transactions, as well as the transaction data itself. For example, when a customer buys a product, the transaction is managed by a sales application. The objects of the transaction are the Customer and the Product. The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale.

    Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction, however, also have a large number of attributes. Customer, for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to', 'service at', etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross-application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate.

    Parties
    Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences. And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real-world operations. The model logically separates people and organizations from their relationships and accounts. This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence.

    Sites
    Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc. Fully understanding all attributes of a site is key to many business processes. Attributes such as the 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen, drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets. This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise.
    Products and Services
    The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide the ability to capture different views of a product, such as engineering, manufacturing, and service views, which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing a requirements BOM, a features BOM, and a packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes.

    Business Processes Utilizing Linked Data Entities
    Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data. When all the key business entities used by an organization's processes are so consolidated, the advantages are multiplied. The primary cause of business process breakdowns (i.e. data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated. I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product and Supplier master data must all be correct and consistent across these applications. What's more, the data relationships between customer and product, and between product and suppliers, must be right. This is the minimum quality needed to ensure the business process flows without error.

    But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point-of-contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help ensure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process. In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'master data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution requires a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model.
    Oracle, with its deep understanding of application data, is the logical choice for managing all your master data within the enterprise, whether or not your organization actually runs any Oracle Applications.

    Read the article

  • How to Never Use iTunes With Your iPhone, iPad, or iPod Touch

    - by Chris Hoffman
    iTunes isn't an amazing program on Windows. There was a time when Apple device users had to plug their devices into their PCs or Macs and use iTunes for device activation, updates, and syncing, but iTunes is no longer necessary. Apple still allows you to use iTunes for these things, but you don't have to. Your iOS device can function independently from iTunes, so you should never be forced to plug it into a PC or Mac.

    Device Activation
    When the iPad first came out, it was touted as a device that could replace full PCs and Macs for people who only needed to perform light computing tasks. Yet, to set up a new iPad, users had to plug it into a PC or Mac running iTunes and use iTunes to activate the device. This is no longer necessary. With new iPads, iPhones, and iPod Touches, you can simply go through the setup process after turning on your new device without ever having to plug it into iTunes. Just connect to a Wi-Fi or cellular data network and log in with your Apple ID when prompted. You'll still see an option that allows you to activate the device via iTunes, but this should only be necessary if you don't have a wireless Internet connection available for your device.

    Operating System Updates
    You no longer have to use Apple's iTunes software to update to a new version of Apple's iOS operating system, either. Just open the Settings app on your device, select the General category, and tap Software Update. You'll be able to update right from your device without ever opening iTunes.

    Purchased iTunes Media
    Apple allows you to easily access content you've purchased from the iTunes Store on any device. You don't have to connect your device to your computer and sync via iTunes. For example, you can purchase a movie from the iTunes Store. Then, without any syncing, you can open the iTunes Store app on any of your iOS devices, tap the Purchased section, and see the stuff you've downloaded. You can download the content right from the store to your device. This also works for apps — apps you purchase from the App Store can be accessed in the Purchased section of the App Store on your device later. You don't have to sync apps from iTunes to your device, although iTunes still allows you to. You can even set up automatic downloads from the iTunes & App Store settings screen. This allows you to purchase content on one device and have it automatically download to your other devices without any hassle.

    Music
    Apple allows you to re-download purchased music from the iTunes Store in the same way. However, there's a good chance you have your own music that you didn't purchase from iTunes. Maybe you spent time ripping it all from your old CDs and you've been syncing it to your devices via iTunes ever since. Apple's solution for this is named iTunes Match. This feature isn't free, but it's not a bad deal at all. For $25 per year, Apple allows you to upload all your music to your iCloud account. You can then access all your music from any iPhone, iPad, or iPod Touch. You can stream all your music — perfect if you have a huge library and little storage on your device — and choose which songs you want to download to your device for offline use. When you add additional music to your computer, iTunes will notice it and upload it using iTunes Match, making it available for streaming and downloading directly from your iOS devices without any syncing. This feature is named iTunes Match because it doesn't just upload music — if Apple already has a song you upload, it will "match" your song with Apple's copy.
    This means you may get higher-quality versions of your songs if you ripped them from CD at a lower bitrate.

    Podcasts
    You don't have to use iTunes to subscribe to podcasts and sync them to your devices. Even if you have a lowly iPod Touch, you can install Apple's Podcasts app from the App Store. Use it to subscribe to podcasts and configure them to automatically download directly to your device. You can use other podcast apps for this, too.

    Backups
    You can continue backing up your device's data through iTunes, generating local backups that are stored on your computer. However, new iOS devices are configured to automatically back up their data to iCloud. This happens automatically in the background without you even having to think about it, and you can restore such backups when setting up a device simply by logging in with your Apple ID.

    Personal Data
    In the days of PalmPilots, people would use desktop programs like iTunes to sync their email, contacts, and calendar events with their mobile devices. You probably shouldn't have to sync this data from your computer. Just sign into your email account — for example, a Gmail account — on your device, and iOS will automatically pull your email, contacts, and calendar events from your associated account.

    Photos
    Rather than connecting your iOS device to your computer and syncing photos from it, you can use an app that automatically uploads your photos to a web service. Dropbox, Google+, and even Flickr all have this feature in their apps. You'll be able to access your photos from any computer and have a backup copy without any syncing required.

    You may still need to use iTunes if you want to sync local music without paying for iTunes Match or copy local video files to your device. Copying large local files over is the only real scenario where you'd need iTunes. If you don't need to copy such files over, you can go ahead and uninstall iTunes from your Windows PC if you like. You shouldn't need it.

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last two weeks immersing myself in a number of Windows Azure and SQL Azure technologies. And in setting up a new business (I'll speak more about that in the future), I have also become a customer of Microsoft's BPOS (Business Productivity Online Services). In short, it has been a fortnight of Microsoft cloud computing.

    On the Azure side, I've looked, of course, at Web Roles and Worker Roles. But I've also looked at Azure Storage's REST API (including coding to it directly), I've looked at Azure Drive and the new VM Role; I've looked quite a bit at SQL Azure (including the project "Houston" Silverlight UI) and I've looked at SQL Azure Labs' OData service too. I've also looked at DataMarket and its integration with both PowerPivot and native Excel. Then there's AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials. And to round it out with some user stuff, on the BPOS side, I've been working with Exchange Online, SharePoint Online and LiveMeeting.

    I have to say I like a lot of what I've been seeing. Azure's not perfect, and BPOS certainly isn't either. But there's good stuff in all these products, and there's a lot of value.

    Azure Goes Deep
    Most people know that Web and Worker Roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now. The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon's EC2), though it takes away the platform's self-managing attributes. It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable. Whether with Azure Storage or SQL Azure, Azure does data. And OData is everywhere. Azure Table Storage supports an OData interface. So does SQL Azure, and so does DataMarket (the former project "Dallas"). That means that Azure data repositories aren't just straightforward to provision and configure... they're also easy to program against, from just about any programming environment, in a RESTful manner. And for more .NET-centric implementations, Azure AppFabric Caching takes the technology formerly known as "Velocity" and throws it up into the cloud, speeding data access even more.

    Snapping in Place
    Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand. I wasn't expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond's innovation right now. The products bear this out, and so do my observations of the product teams' motivation and high morale. It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so. With Azure, both requirements are in place.

    BPOS: Bad Acronym, Easy Setup
    BPOS is about products you already know: Exchange, SharePoint, Live Meeting and Office Communications Server. As such, it's hard not to be underwhelmed by BPOS. Until you realize how easy it makes it to get all that stuff set up.
    I would say that from sign-up to productive use took me about 45 minutes... and that included the time necessary to wrestle with my DNS provider, set up Outlook and my smartphone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.

    What I Want from My Azure Christmas Next Year
    Not everything about Microsoft's cloud is good. I close this post with a list of things I'd like to see addressed:
    - BPOS offerings are still based on the 2007 wave of Microsoft server technologies. We need to get to 2010, and fast. Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones. Office 365 can't come fast enough.
    - Azure's Internet tooling and domain naming is scattered and confusing. Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure Storage work off windows.net. The Azure portal and Project Houston are at azure.com. Then there's appfabriclabs.com and sqlazurelabs.com. There is a new Silverlight portal that replaces most, but not all, of the HTML ones. And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling.
    - Microsoft is the king of tooling. They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces, and do so much CTRL-C/CTRL-V work. I'd like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup. Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast.
    - Beta programs, if they're open, should onboard me quickly. I know the process is difficult and everyone's going as fast as they can. But I don't know why it's so difficult or why it takes so long. Getting developers up to speed on new features quickly helps popularize the platform. Make this a priority.
    - Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch. Support .NET 4 now. Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier. Have LightSwitch work with SQL Azure and not require SQL Express. LightSwitch has some promising Azure integration now, but we need more. WebMatrix has none, and that's just silly now that the Extra Small Instance is being introduced.
    - The Windows Azure Platform Training Kit is great. But I want Microsoft to make it even better and I want them to evangelize it much more aggressively. There's a lot of good material on Azure development out there, but it's scattered in the same way that the platform is. The Training Kit ties a lot of disparate stuff together nicely. Make it known.

    Should Old Acquaintance Be Forgot
    All in all, diving deep into Azure was a good way to end the year. Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.
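
    Editor's note: since the post calls out OData as the common thread across Azure Table Storage, SQL Azure and DataMarket, here is a minimal sketch of what "easy to program against in a RESTful manner" looks like from any environment with an HTTP client. The service URL below is hypothetical; only the query options ($top, $filter, $select) and the JSON Accept header are standard OData conventions of that era.

        # Hypothetical OData service root; substitute a real feed (e.g. one exposed
        # by SQL Azure Labs or DataMarket).
        SVC="https://example.odata.test/Northwind.svc"

        # List the first 5 customers as JSON; no client library required.
        curl -s -H "Accept: application/json" "$SVC/Customers?\$top=5"

        # Server-side filtering and projection with standard query options.
        curl -s -H "Accept: application/json" \
            "$SVC/Customers?\$filter=Country%20eq%20'Germany'&\$select=CustomerID,CompanyName"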

    Read the article

  • Spacewalk 2.0 provided to manage Oracle Linux systems

    - by wcoekaer
    Oracle Linux customers have a few options to manage and provision their servers. We provide a license to use Oracle Enterprise Manager's Linux OS management, monitoring and provisioning features without additional cost for every server that has an Oracle Linux support subscription. So there is no additional pack to license and no additional per-server cost; it's all included in our Basic, Premier and Systems support subscriptions. The nice thing with Oracle Enterprise Manager is that you end up with a single management product that can manage all aspects of your software stack. You have complete insight into the applications running, you have roles and responsibilities, you have third-party connectors for storage or other products, and it makes it very easy and convenient to correlate data and events when something happens. If you use Oracle VM as well, you end up with a complete cloud portal with self-service, chargeback, etc.

    Another, much simpler option is just using yum. It is very easy to take a server, create directories and expose these through Apache as repositories. You can have a simple yum config on each server pointing to a few specific repositories. It requires some manual effort in terms of creating directories, downloading packages and creating local repo files, but it's easy to do and for many people a preferred solution. There are also a good number of customers that just connect their servers directly to ULN or to our free update server, public-yum. Just to re-iterate, our public-yum servers have all the errata and updates available for free.

    Now we have added another option. Many of our customers have switched from a competing Linux vendor and they had familiarity with that vendor's management tools. Switching to Oracle for support is very easy since we don't require changes to the installed servers, but we also want to make sure there is a very easy and almost transparent switch for the management tools as well. While Oracle Enterprise Manager is our preferred way of managing systems, we are now offering Spacewalk 2.0 to our customers. The community project can be found here. We have made a few changes to ensure easy and complete support for Oracle Linux, tested it with public-yum, etc. You can find the rpms in our public-yum repos at http://public-yum.oracle.com/repo/OracleLinux/OL6/. There are repositories for the Spacewalk server, and then for each version (OL5, OL6) and architecture (x86 and x86-64) we have the client repositories as well. Spacewalk itself is only made available for OL6 x86-64. Documentation can be found here.

    I set it up myself and here are some quick steps on how you can get going in just a matter of minutes:

    Spacewalk Server Installation

    1) Installing an Oracle Database
    Use an existing Oracle Database or install a new Oracle Database (Standard or Enterprise Edition) [at this time use 11g; we will add support for 12c in the near future]. This database can be installed on the Spacewalk server or on a separate remote server. While Oracle XE might work to create a small sample POC, we do not support the use of Oracle XE; Spacewalk repositories can become large and create a significant database workload. Customers can use their existing database licenses, they can download the database with a trial license from http://edelivery.oracle.com, or Oracle Linux subscribers (customers) will be allowed to use the Oracle Database as a Spacewalk repository as part of their Oracle Linux subscription at no additional cost.
    NOTE: Spacewalk requires the database to be configured with the UTF8 character set. Installation will fail if your database does not use UTF8. To verify that your database is configured correctly, run the following command in sqlplus:

        select value from nls_database_parameters where parameter='NLS_CHARACTERSET';

    This should return 'AL32UTF8'.

    2) Configure the database schema for Spacewalk
    Ideally, create a tablespace in the database to hold the Spacewalk schema tables/data:

        create tablespace spacewalk datafile '/u01/app/oracle/oradata/orcl/spacewalk.dbf' size 10G autoextend on;

    Create the database user spacewalk (or use some other schema name) in sqlplus. Example:

        create user spacewalk identified by spacewalk;
        grant connect, resource to spacewalk;
        grant create table, create trigger, create synonym, create view, alter session to spacewalk;
        grant unlimited tablespace to spacewalk;
        alter user spacewalk default tablespace spacewalk;

    4) Spacewalk installation and configuration
    The Spacewalk server requires an Oracle Linux 6 x86-64 system. Clients can be Oracle Linux 5 or 6, both 32- and 64-bit. The server is only supported on OL6/64-bit. The easiest way to get started is to do a 'Minimal' install of Oracle Linux on a server and configure the yum repository to include the Spacewalk repo from public-yum. Once you have a system with a minimal install, modify your yum repo to include the Spacewalk repo. Example: edit /etc/yum.repos.d/public-yum-ol.repo and add the following lines at the end of the file:

        [spacewalk]
        name=spacewalk
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/server/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

    Install the following prerequisite packages on your Spacewalk server:

        oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64
        oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64

        rpm -ivh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64
        rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64

    The above RPMs can be found on the Oracle Technology Network website: http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html

    As the root user, configure the library path to include the Oracle Instant Client libraries:

        cd /etc/ld.so.conf.d
        echo /usr/lib/oracle/11.2/client64/lib > oracle-instantclient11.2.conf
        ldconfig

    Install Spacewalk:

        # yum install spacewalk-oracle

    The above yum command should download and install all required packages to run Spacewalk on your local server.

    NOTE: if you did a full, desktop or workstation installation, you have to remove the JTA package BEFORE installing spacewalk-oracle (rpm -e --nodeps jta).

    Once the installation completes, simply run the Spacewalk configuration tool and you are all set (make sure to run the command with the 2 arguments):

        spacewalk-setup --disconnected --external-db

    Answer the questions during the setup, and ensure you provide the current database user (example: spacewalk), password (example: spacewalk) and database server hostname (the standard hostname of the server on which you have deployed the Oracle database). At the end of the setup script, your Spacewalk server should be fully configured and you can log into the web portal. Use your favorite browser to connect to the website: http://[spacewalkserverhostname]

    The very first action will be to create the main admin account.
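
    Editor's note: the post covers the server side; for completeness, here is a rough sketch of registering an Oracle Linux 6 client against the new Spacewalk server. The client repo URL is modeled on the server repo path above but is an assumption, as are the server hostname and the activation key (shown as 1-oraclelinux6); replace them with the values from your own Spacewalk web UI, where the software channels and activation key must already exist.

        # On the client (Oracle Linux 6 in this sketch): add the Spacewalk 2.0 client repo.
        cat >> /etc/yum.repos.d/public-yum-ol.repo <<'EOF'
        [spacewalk-client]
        name=spacewalk client
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/client/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1
        EOF

        # Install the standard Spacewalk client tooling.
        yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

        # Register against the Spacewalk server using an activation key created in the web UI.
        # Assumes the server's CA certificate has already been copied to the path below.
        rhnreg_ks --serverUrl=https://spacewalk.example.com/XMLRPC \
                  --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT \
                  --activationkey=1-oraclelinux6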

    Read the article

  • H1 Visa interview tips–What you must know before attending the interview?

    - by Gopinath
    The USA's H1 visa allows highly qualified professionals from other countries to work in America. Many IT professionals in India aspire to go to the USA on an H1 and work for their clients. Recently I had a chance to study the H1 visa process to help one of my friends, and I would like to share what I learned. With the assumption that your H1 petition is approved and you have an interview scheduled at the US Embassy for your visa stamping, here are tips you must know before attending the interview.

    Dress Code – Formals
    Say no to casuals or any fancy dress when you attend the interview. It's not a party or a friend's home you are visiting. Consider the H1 visa interview as your job interview and dress up in formals. There is no option B for you; you must be in formals. A plain formal shirt with matching pants is suggested for men. A tie and suit would not be required, but if you are a professional at management level you can consider wearing a suit. Women can wear either a formal salwar or formal pants and shirt. Avoid heavy jewellery; wear what is a must as per your tradition or culture.

    Body Language – Smile on your face
    Your body language reflects what you are and what's going on in your mind. Don't be nervous or restless; be relaxed and wear a beautiful smile on your face. A smile is a curve that sets everything straight. When you are called for the interview, greet the interviewer with a beautiful smile. Say Good Morning/Afternoon/Evening depending on the time you are visiting them. Whenever appropriate, say Thank You. Generally American professionals are very friendly people and they reciprocate your greetings. Make sure that you make them comfortable to start the interview.

    Carry original documents in a separate folder
    I don't want to talk much about the documents that are required for your H1B interview, as it's a big subject on its own and requires a separate post. I assume that your consultant or employer helped you in gathering all the required documents, like the petition, DS-160 form, education and job related documents, resume, interview call letters, client letters, etc. For all the documents you are going to submit at the interview, make sure that you have the originals in a separate folder. If required, the interviewer may ask you to show the originals of any of the documents you submitted for visa processing. Don't mix the original documents with the documents you need to submit for the interview; have a separate folder for them. Those who are going for stamping along with their spouse and children need to carry a few extra original documents, like the marriage certificate, marriage photos (30)/album, birth certificates, passports, and education and profession related certificates of the spouse and children.

    Know your role & responsibilities
    The interviewer will ask you questions about your roles and responsibilities at the client location. Be clear about your day-to-day tasks at the client place and be prepared to face detailed questions on the same. When asked, explain clearly, and also make sure what you say is in line with what is mentioned in your petition and client invitation letter. At times they may ask you questions specific to the project/technology you are going to work on, so doing some homework in this area will help you answer the questions easily. Failing to answer basic questions on your role and responsibilities may result in rejection.
    You work for your employer at the client location, but NOT FOR THE CLIENT
    One of the important things to keep in mind is that you work for your employer and you are being deputed to the client location on a work visa. Your employer is going to be solely responsible for your salary, work, promotions, pay hikes or whatsoever during your stay in the USA. Your client will not be responsible for anything. Let's say you are employed with Company X in India and they are applying for an H1B for you to work at your client (for example, Microsoft) in the USA; you must keep in mind that Microsoft is not your employer. Microsoft will not pay your salary or be responsible for any employment related activities. Company X will be solely responsible for all your employment related activities. If you don't get this right and tell the visa interviewer that your client is responsible, then you may get into trouble.

    Know your client
    It's always good to know the clients with whom you are going to work in the USA and their business. If your client is a well-known organisation then you may not get many questions from the interviewer; otherwise you need to be well prepared to provide details like the nature of business, location, size of the organisation, etc. Get to know the basic details about your client and be confident while providing those details to the interviewer. Also make sure that you never talk about any confidential details of your client's projects and business. Revealing confidential details of your client may land your job itself in the soup.

    Make sure that your spouse is also in sync with you
    If you've applied for an H4 visa for your spouse along with your H1, make sure that your spouse is in sync with you. Your spouse also should know the basic details of your job, your employer, your client and the location where you will be travelling. Your spouse should also be prepared to answer questions related to marriage, their profession (if working), kids, education, etc. Interviewers will try to assess your spouse's communication skills, whereabouts while staying in the USA and whether they would prefer to work in the USA or not. H4 is a dependent visa and your spouse is not allowed to work in the USA, so at no point should your spouse show an intention to search for work in the USA.

    Less luggage, more comfort
    You would have definitely heard that there are a lot of restrictions on what you can carry along with you to a US Embassy while attending the interview. To be frank, it's not enough to say there are many restrictions; there are a hell of a lot of restrictions, and they are for the safety of everyone. You are not allowed to carry mobile phones, CDs/DVDs, USB drives, bank cards, cameras, cosmetics, food (except baby food), water, wallets, backpacks, sealed covers, etc. Trust me, most of the things we carry with us regularly every day are not allowed inside. As there are hundreds of restrictions, it is easier to understand what you can carry along with you and carry just that. Ask your employer/consultant to provide you a checklist of items that you can carry. Most of what you would require is:
    - H1B related documents provided by the employer/consultant
    - Photographs
    - All original documents supporting your H1B
    - Passports
    - Some cash for your travel expenses (avoid coins)
    - Any important phone numbers / details written on paper (like your cab driver's number, etc.)
    If you carry restricted items then you will be stopped at the security checks and will have to find people who can safely keep all the restricted items.
    Due to heavy restrictions in and around the US Embassy you will not find any place to keep your luggage, so just carry the bare minimum required so that you feel more comfortable.

    Useful Links
    THE U.S. NON IMMIGRANT VISA APPLICATION PROCESS
    U.S VISA SECURITY REGULATIONS
    GENERAL FAQS

    Hope this information is helpful to you, and best of luck for your interview.

    Creative Commons image credit: Flickr/ alexfrance, vinothchandar, hughelectronic, architratan, striatic

    Read the article

  • How to Recover From a Virus Infection: 3 Things You Need to Do

    - by Chris Hoffman
    If your computer becomes infected with a virus or another piece of malware, removing the malware from your computer is only the first step. There's more you need to do to ensure you're secure. Note that not every antivirus alert is an actual infection. If your antivirus program catches a virus before it ever gets a chance to run on your computer, you're safe. If it catches the malware later, you have a bigger problem.

    Change Your Passwords
    You've probably used your computer to log into your email, online banking websites, and other important accounts. Assuming you had malware on your computer, the malware could have logged your passwords and uploaded them to a malicious third party. With just your email account, the third party could reset your passwords on other websites and gain access to almost any of your online accounts. To prevent this, you'll want to change the passwords for your important accounts — email, online banking, and whatever other important accounts you've logged into from the infected computer. You should probably use another computer that you know is clean to change the passwords, just to be safe. When changing your passwords, consider using a password manager to keep track of strong, unique passwords, and two-factor authentication to prevent people from logging into your important accounts even if they know your password. This will help protect you in the future.

    Ensure the Malware Is Actually Removed
    Once malware gets access to your computer and starts running, it has the ability to do many more nasty things to your computer. For example, some malware may install rootkit software and attempt to hide itself from the system. Many types of Trojans also "open the floodgates" after they're running, downloading many different types of malware from malicious web servers to the local system. In other words, if your computer was infected, you'll want to take extra precautions. You shouldn't assume it's clean just because your antivirus removed what it found. It's probably a good idea to scan your computer with multiple antivirus products to ensure maximum detection. You may also want to run a bootable antivirus program, which runs outside of Windows. Such bootable antivirus programs will be able to detect rootkits that hide themselves from Windows and even from the software running within Windows. avast! offers the ability to quickly create a bootable CD or USB drive for scanning, as do many other antivirus programs. You may also want to reinstall Windows (or use the Refresh feature on Windows 8) to get your computer back to a clean state. This is more time-consuming, especially if you don't have good backups and can't get back up and running quickly, but this is the only way you can have 100% confidence that your Windows system isn't infected. It's all a matter of how paranoid you want to be.

    Figure Out How the Malware Arrived
    If your computer became infected, the malware must have arrived somehow. You'll want to examine your computer's security and your habits to prevent more malware from slipping through in the same way. Windows is complex. For example, there are over 50 different types of potentially dangerous file extensions that can contain malware and that you need to keep track of. We've tried to cover many of the most important security practices you should be following, but here are some of the more important questions to ask:

    Are you using an antivirus? – If you don't have an antivirus installed, you should.
    If you have Microsoft Security Essentials (known as Windows Defender on Windows 8), you may want to switch to a different antivirus like the free version of avast!. Microsoft's antivirus product has been doing very poorly in tests.

    Do you have Java installed? – Java is a huge source of security problems. The majority of computers on the Internet have an out-of-date, vulnerable version of Java installed, which would allow malicious websites to install malware on your computer. If you have Java installed, uninstall it. If you actually need Java for something (like Minecraft), at least disable the Java browser plugin. If you're not sure whether you need Java, you probably don't.

    Are any browser plugins out-of-date? – Visit Mozilla's Plugin Check website (yes, it also works in other browsers, not just Firefox) and see if you have any critically vulnerable plugins installed. If you do, ensure you update them — or uninstall them. You probably don't need older plugins like QuickTime or RealPlayer installed on your computer, although Flash is still widely used.

    Are your web browser and operating system set to automatically update? – You should be installing updates for Windows via Windows Update when they appear. Modern web browsers are set to automatically update, so they should be fine — unless you went out of your way to disable automatic updates. Using out-of-date web browsers and Windows versions is dangerous.

    Are you being careful about what you run? – Watch out when downloading software to ensure you don't accidentally click sketchy advertisements and download harmful software. Avoid pirated software that may be full of malware. Don't run programs from email attachments. Be careful about what you run and where you get it from in general.

    If you can't figure out how the malware arrived because everything looks okay, there's not much more you can do. Just try to follow proper security practices. You may also want to keep an extra-close eye on your credit card statement for a while if you did any online shopping recently. As so much malware is now related to organized crime, credit card numbers are a popular target.

    Read the article

  • Corsair Hackers Reboot

    It wasn't easy for me to attend, but it was absolutely worth going. The Linux User Group of Mauritius (LUGM) organised another get-together for any open source enthusiast here on the island. It is strangely named "Corsair Hackers Reboot", but it stands for a positive cause:

    "Corsair Hackers Reboot Event
    A collaborative activity involving LUGM, UoM Computer Club, Fortune Way Shopping Mall and several geeks from around the island, striving to put FOSS into homes & offices. The public is invited to discover and explore Free Software & Open Source."

    And it was a good opportunity for me and the kids to visit the east coast of Mauritius, too.

    Perfect timing
    It couldn't have been better... Why? Well, for two important reasons (in terms of IT):
    - End of support for Microsoft Windows XP - 08.04.2014
    - Release of Ubuntu 14.04 Long Term Support - 17.04.2014
    Quite funnily, those two IT dates weren't the initial reasons, and only during the weeks of preparations did we put those together. And therefore it was even more positive to promote the use of Linux and open source software in general to a broader audience.

    Getting there ...
    Thanks to the new motorway M3 and all the additional road work which has been completed recently, it was very simple to get across the island in a very quick and relaxed manner. Compared to my trips in the early days of living in Mauritius (and riding on a scooter) it was very smooth, and within less than an hour we hit Centrale de Flacq. Well, being in the city doesn't necessarily mean that one has arrived at the destination. But thanks to modern technology I had a quick look on Google Maps, and we finally managed to get a parking spot behind the huge bus terminal in Flacq. From there it was just a short walk to Fortune Way. The children were trying to count the number of buses... Well, lots and lots of buses - really impressive, actually.

    What was presented?
    There were different areas set up. Right at the entrance, one's attention was directly drawn towards the elevated hackers' stage. Similar to rock stars performing their gig, there was a bunch of computers, laptops and networking equipment in order to provide the right working conditions for the coding/programming challenge(s) on the one hand and for the pen-testing or system hacking competition on the other hand. Personally, I was very impressed that Nitin took care of the pen-testing competition. He hardly started one year back with Linux in general, and Kali Linux specifically. Seeing his personal development from absolute newbie to a decent Linux system administrator within such a short period of time is really impressive. His passion for open source software made him a living.

    Next, seen clockwise, was the Kids' Corner with face-painting as the main attraction. Additionally, there were numerous paper printouts to colour, plus a decent workstation with the educational suite GCompris. Of course, my little ones were into that. They have known GCompris for a while, as they are allowed to use it on an IGEL thin client terminal here at home. To simplify my life, I set up GCompris as a full-screen guest session on the server, and they can pass the login screen without any further obstacles. And because it's a thin client hooked up to an XDMCP remote session, I don't have to worry about the hardware on their desk either.

    The next section was the main attraction of the event:

    BYOD - Bring Your Own Device
    Well, compared to the usual context of BYOD, the corsairs had a completely different intention.
Here, you could bring your own laptop, and a team of knowledgeable experts - read: geeks and so on - offered to fully convert your system to any Linux distribution of your choice. And even though I came later, I was told that the USB pen drives had been in permanent use: from being prepared via the dd command, over launching a LiveCD session, to finally installing a fresh Linux system on bare metal. Most interestingly, I did a similar job myself a couple of months ago, upgrading an existing Windows XP system to Xubuntu 13.10. So far, the female owner is very happy and enjoys her system almost every evening to shop online, check mails, and read the latest news from the Anime world. Back to the Hackers event: Ish told me that they managed approximately 20 conversions during the day. Furthermore, Ajay and others gladly assisted some visitors with some tricky issues, and by the end of the day you could call it a success. While I was around, there was an elderly male visitor who got a full-fledged system conversion to a Linux system running completely in French. A little bit more towards the centre, it was Yasir's turn to demonstrate his Arduino hardware, which he had hooked up with an experimental electrical circuit board connected to an LCD matrix display. That's the real spirit of hacking, and he showed some minor adjustments on the fly while demoing the system. Also very interesting: there was a thermal sensor around. Personally, I think that platforms like the Arduino as well as the Raspberry Pi have great potential at a very affordable price to bring a better understanding of electronics as well as computer programming to a broader audience. It would be great to see more of those experiments during future activities. And last but not least, there were a small number of vendors. Amongst them was Emtel - once again as sponsor of the general internet connectivity - and another hardware supplier from the Riche Terre shopping mall. They had a good collection of Android-related gimmicks, like an autonomous web cam that can convert any TV with an HDMI connector into an online video chat system, given WiFi. It's actually kind of awesome to have a Skype or Google Hangout video session on the big screen rather than on the laptop. Some pictures of the event LUGM: Great conversations on Linux, open source and free software during the Corsair Hackers Reboot LUGM: Educational workstation running the GCompris suite attracted the youngest attendees of the day. Of course, face painting had to be done prior to hacking... LUGM: Nadim demoing some Linux specifics to interested visitors. Everyone was pretty busy during the whole day LUGM: The hacking competition, here pen-testing a wireless connection and access point between multiple machines LUGM: Well prepared workstations to be able to 'upgrade' visitors' machines to any Linux operating system Final thoughts Gratefully, during the preparations for the event I was invited to leave some comments or suggestions, and the team of the LUGM did a great job. The outdoor banner was an eye-catcher, the various flyers and posters for the event were clearly written, and as far as I understood from the quick chats I had with Ish, Nadim, Nitin, Ajay, and of course others, all were very happy about the event execution. Great job, LUGM! And I'm already looking forward to the next Corsair Hackers Reboot event ... Crossing fingers: Very soon and hopefully this year again :) Update: In the media The event had been announced in local media, too. 
L'Express: Salon informatique: Hacking Challenge à Flacq
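For readers who want to reproduce the USB preparation step mentioned above, a minimal dd sketch could look like the following. This is an illustration only, not the exact procedure used at the event; the ISO path and the target device are assumptions, so double-check the device name with lsblk before writing, because dd will happily overwrite the wrong disk.

#!/bin/bash
# Minimal sketch of preparing a bootable Linux USB stick with dd.
# Assumptions: the ISO lives at ~/Downloads/xubuntu-13.10-desktop-amd64.iso
# and the pen drive shows up as /dev/sdX -- verify with `lsblk` first!
ISO=~/Downloads/xubuntu-13.10-desktop-amd64.iso
DEV=/dev/sdX                       # replace X with the real device letter

sudo umount "${DEV}"* 2>/dev/null  # make sure no partition is still mounted
sudo dd if="$ISO" of="$DEV" bs=4M conv=fsync
sync                               # flush buffers before pulling the stick

After that, the stick boots straight into the live session, from which the bare-metal installation can be started.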

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective. 
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
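The "script run from a central server" idea above translates naturally into a scheduled job. The sketch below is an illustration only, not the author's original PowerShell: it polls a list of instances with sqlcmd and prints every database that has no full backup recorded in msdb during the last 24 hours. The instance names and the 24-hour window are assumptions; adjust them to your own RPO.

#!/bin/bash
# Hypothetical central backup check -- a sketch, not the author's script.
# Assumptions: sqlcmd is installed, the listed instances exist, and the
# account running this has read access to msdb on each of them.
SERVERS="sql01 sql02 sql03"        # hypothetical instance names

QUERY=$(cat <<'SQL'
SET NOCOUNT ON;
SELECT d.name AS missing_full_backup
FROM   sys.databases AS d
WHERE  d.name <> 'tempdb'
  AND  NOT EXISTS (SELECT 1
                   FROM   msdb.dbo.backupset AS b
                   WHERE  b.database_name = d.name
                     AND  b.type = 'D'          -- 'D' = full database backup
                     AND  b.backup_finish_date > DATEADD(HOUR, -24, GETDATE()));
SQL
)

for s in $SERVERS; do
  echo "== $s =="
  # add -U/-P or -E here, depending on how you authenticate
  sqlcmd -S "$s" -h -1 -W -Q "$QUERY" || echo "WARNING: could not query $s"
done

Hooking something like this into cron or an agent job gives you the second, independent pair of eyes the post argues for.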

    Read the article

  • Unable to connect to Wireless after installing Ubuntu 12.10

    - by Moulik
    I am using Asus U56E laptop and after installing Ubuntu 12.10 alongside Windows 8, I am unable to connect to the Wireless. I have been trying to solve this problem since two weeks and couldn't solve it. Please help. Any answer would be appreciated. Here are some command-line results. lspci -v | grep -iA 7 network ubuntu@ubuntu:~$ lspci -v | grep -iA 7 network 02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67) Subsystem: Intel Corporation Centrino Wireless-N + WiMAX 6150 BGN Flags: bus master, fast devsel, latency 0, IRQ 52 Memory at de800000 (64-bit, non-prefetchable) [size=8K] Capabilities: <access denied> Kernel driver in use: iwlwifi Kernel modules: iwlwifi lsmod | grep iwlwifi ubuntu@ubuntu:~$ lsmod | grep iwlwifi iwlwifi 386826 0 mac80211 539908 1 iwlwifi cfg80211 206566 2 iwlwifi,mac80211 ubuntu@ubuntu:~$ dmesg | grep iwlwifi [ 57.846261] iwlwifi: Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree: [ 57.846264] iwlwifi: Copyright(c) 2003-2012 Intel Corporation [ 57.846336] iwlwifi 0000:02:00.0: >pci_resource_len = 0x00002000 [ 57.846338] iwlwifi 0000:02:00.0: >pci_resource_base = ffffc90000c7c000 [ 57.846341] iwlwifi 0000:02:00.0: >HW Revision ID = 0x67 [ 57.846438] iwlwifi 0000:02:00.0: >irq 52 for MSI/MSI-X [ 59.558335] iwlwifi 0000:02:00.0: >loaded firmware version 41.28.5.1 build 33926 [ 59.558514] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEBUG disabled [ 59.558516] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEBUGFS enabled [ 59.558517] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEVICE_TRACING enabled [ 59.558519] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEVICE_TESTMODE enabled [ 59.558520] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_P2P disabled [ 59.558522] iwlwifi 0000:02:00.0: >Detected Intel(R) Centrino(R) Wireless-N + WiMAX 6150 BGN, REV=0x84 [ 59.558583] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 59.569083] iwlwifi 0000:02:00.0: >device EEPROM VER=0x557, CALIB=0x6 [ 59.569085] iwlwifi 0000:02:00.0: >Device SKU: 0x150 [ 59.569087] iwlwifi 0000:02:00.0: >Valid Tx ant: 0x1, Valid Rx ant: 0x3 [ 59.569100] iwlwifi 0000:02:00.0: >Tunable channels: 13 802.11bg, 0 802.11a channels [ 70.208469] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 70.208648] iwlwifi 0000:02:00.0: >Radio type=0x1-0x2-0x0 [ 70.366319] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 70.366470] iwlwifi 0000:02:00.0: >Radio type=0x1-0x2-0x0 sudo lshw -c network ubuntu@ubuntu:~$ sudo lshw -c network *-network description: Wireless interface product: Centrino Wireless-N + WiMAX 6150 vendor: Intel Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: wlan0 version: 67 serial: 40:25:c2:84:99:c4 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.5.0-17-generic firmware=41.28.5.1 build 33926 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:52 memory:de800000-de801fff *-network description: Ethernet interface product: AR8151 v2.0 Gigabit Ethernet vendor: Atheros Communications Inc. 
physical id: 0 bus info: pci@0000:04:00.0 logical name: eth0 version: c0 serial: 54:04:a6:2b:6a:ef capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI latency=0 link=no multicast=yes port=twisted pair resources: irq:54 memory:dd400000-dd43ffff ioport:a000(size=128) ifconfig ubuntu@ubuntu:~$ ifconfig eth0 Link encap:Ethernet HWaddr 54:04:a6:2b:6a:ef UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:176 errors:0 dropped:0 overruns:0 frame:0 TX packets:176 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14368 (14.3 KB) TX bytes:14368 (14.3 KB) wlan0 Link encap:Ethernet HWaddr 40:25:c2:84:99:c4 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) iwconfig ubuntu@ubuntu:~$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off iwlist scan ubuntu@ubuntu:~$ iwlist scan eth0 Interface doesn't support scanning. lo Interface doesn't support scanning. wlan0 No scan results nm-tool ubuntu@ubuntu:~$ nm-tool NetworkManager Tool State: disconnected - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: atl1c State: unavailable Default: no HW Address: 54:04:A6:2B:6A:EF Capabilities: Carrier Detect: yes Wired Properties Carrier: off - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: iwlwifi State: disconnected Default: no HW Address: 40:25:C2:84:99:C4 Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points hypeness2: Infra, 00:21:29:DA:08:4F, Freq 2462 MHz, Rate 54 Mb/s, Strength 42 WPA love: Infra, 68:7F:74:17:02:66, Freq 2412 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 DIRECT-MwSCX-3400Pamela: Infra, 02:15:99:A3:3F:AC, Freq 2412 MHz, Rate 54 Mb/s, Strength 22 WPA2 router: Infra, 1C:AF:F7:D6:76:F3, Freq 2417 MHz, Rate 54 Mb/s, Strength 20 WPA2 wing: Infra, E8:40:F2:34:E4:F7, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2 132LINKSYS: Infra, 00:1A:70:80:1F:E9, Freq 2437 MHz, Rate 54 Mb/s, Strength 57 WEP VMITTAL: Infra, E0:46:9A:3C:F0:C4, Freq 2412 MHz, Rate 54 Mb/s, Strength 27 WEP HP-Print-10-LaserJet 1025: Infra, 7C:E9:D3:7E:F8:10, Freq 2437 MHz, Rate 54 Mb/s, Strength 59 ACNBB: Infra, 00:26:75:22:A6:2F, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 SATKAIVAL: Infra, 00:18:E7:CE:69:A6, Freq 2412 MHz, Rate 54 Mb/s, Strength 69 WPA WPA2 hypeness: Infra, B8:E6:25:24:C3:B1, Freq 2437 MHz, Rate 54 Mb/s, Strength 54 WPA WPA2 CSNetwork: Infra, BC:14:01:58:C5:88, Freq 2437 MHz, Rate 54 Mb/s, Strength 25 WPA WPA2 tharma: Infra, BC:14:01:E2:06:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 15 WPA WPA2 Active2.4: Infra, 10:6F:3F:0E:F3:8E, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2 ACNBB: Infra, 
00:26:75:58:4E:7A, Freq 2437 MHz, Rate 54 Mb/s, Strength 85 KO: Infra, BC:14:01:2E:AF:A8, Freq 2452 MHz, Rate 54 Mb/s, Strength 22 WPA WPA2 FEAR: Infra, 00:18:4D:C0:BC:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA Pamela: Infra, BC:14:01:52:F6:F8, Freq 2412 MHz, Rate 54 Mb/s, Strength 24 WPA WPA2 bvrk2: Infra, 78:CD:8E:7B:3C:79, Freq 2457 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 BELL030: Infra, D8:6C:E9:17:AF:09, Freq 2462 MHz, Rate 54 Mb/s, Strength 22 WPA2 Desai: Infra, 00:1D:7E:52:FB:C5, Freq 2437 MHz, Rate 54 Mb/s, Strength 14 WEP Sritharan: Infra, BC:14:01:E5:59:78, Freq 2462 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 PFN: Infra, 00:13:10:8B:CF:45, Freq 2437 MHz, Rate 54 Mb/s, Strength 19 WEP rfkill list all ubuntu@ubuntu:~$ rfkill list all 0: asus-wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: asus-wimax: WiMAX Soft blocked: yes Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no so these are some more results sudo modprobe -r iwlwifi ubuntu@ubuntu:~$ sudo modprobe -r iwlwifi sudo modprobe iwlwifi 11n_disable=1 ubuntu@ubuntu:~$ sudo modprobe iwlwifi 11n_disable=1 echo "blacklist asus_wmi" | sudo tee -a /etcmodprobe.d/blacklist.conf ubuntu@ubuntu:~$ echo "blacklist asus_wmi" | sudo tee -a /etc/modprobe.d/blacklist.conf blacklist asus_wmi echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf ubuntu@ubuntu:~$ echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf options iwlwifi 11n_disable=1 sudo modprobe -rfv iwlwifi ubuntu@ubuntu:~$ sudo modprobe -rfv iwlwifi rmmod /lib/modules/3.5.0-17-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko rmmod /lib/modules/3.5.0-17-generic/kernel/net/mac80211/mac80211.ko rmmod /lib/modules/3.5.0-17-generic/kernel/net/wireless/cfg80211.ko sudo modprobe -v iwlwifi ubuntu@ubuntu:~$ sudo modprobe -v iwlwifi insmod /lib/modules/3.5.0-17-generic/kernel/net/wireless/cfg80211.ko insmod /lib/modules/3.5.0-17-generic/kernel/net/mac80211/mac80211.ko insmod /lib/modules/3.5.0-17-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko 11n_disable=1

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today’s CIO job description: “Align IT infrastructure and solutions with business goals and objectives; AND while doing so reduce costs; BUT ALSO, be innovative, ensure the architectures are adaptable and agile as we need to act today on the changes that we may request tomorrow.” Sound like an unachievable request? The fact is, reality dictates that CIOs are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large Insurance company. This is a role that we see all too infrequently at many of our customers, and it’s a shame. The purpose of this role was to build a bridge, a relationship between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and, more importantly, that objectives were commonly understood - hence services and projects were delivered on time, on budget and actually solved the business problems. In reality IT and the business are already married, but the relationship is most often defined as ‘supplier’ of IT rather than a ‘trusted partner’. To deliver business value they need to understand how to work together effectively to attain this next level of partnership. The Business cannot compete if they do not get a new product to market ahead of the competition, or, for example, act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the Application or Service fails and the Business takes a hit from bad publicity, becomes a trending topic on social media and loses direct revenue from online channels. For this reason alone Business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester’s recent study that found ‘many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations’.  IT Meet the Business; Business Meet IT So what is going on? We talk about aligning the business with IT but the reality is it’s difficult to do. Like any relationship each side has different goals and needs, and language can be a barrier; business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way?  Well now we can with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2! Enterprise Manager’s Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria.  No longer does IT need to be focused solely on speeds and feeds, performance and throughput – now IT can understand IT’s impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar.  Similarly, now the line of business can understand which IT services are most critical for the KPIs they care about. There are plenty of resources on Oracle Technology Network that describe the functionality of these products, so I won’t rehash them here.  What I want to talk about is what you do with these products. What’s next after we meet? Where do you start? Step 1:  Identify the Signature Experience. 
This is THE business process (or set of processes) that is core to the business, the one that drives the economic engine, the process that customers recognise the company brand and reputation for, the heart of the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this, it gets easy: Enterprise Manager 12c makes it easy. Step 2:  Map the Signature Experience to underlying IT.  Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end. Think of it like mapping out a critical path: the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end-to-end customer satisfaction story transparent. Work with the business and define meaningful key performance indicators (KPIs) and thresholds that you can report on and act upon. Step 3:  Observe the data over time.  You now have meaningful insight into every step enabling your signature experience and you understand the implication of that experience on your underlying IT.  Watch it for a few months, see what happens, and reconvene with your business stakeholders to set clear and measurable targets which can re-define service levels.  Step 4:  Change the information about which you and the business communicate.  It’s amazing what happens when you and the business speak the same language.  You’ll be able to make more informed business and IT decisions. From here IT can identify where and how budget is spent, whether on the level of support, performance, capacity, HA, DR, certification, etc. IT SLAs no longer need to be focused on metrics such as % availability but can be structured around business process requirements. The power of this way of thinking doesn’t end here. IT staff get to see and understand how their own role contributes to the business, making them accountable for the business service. Take a step further and appraise your staff on the business competencies that are linked to the service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or showback becomes easier to justify as the ‘cost per day of outage’ can be more easily calculated; the business will be able to relate the cost to the business to the cost/value of the underlying IT that supports it. Used this way, Oracle Enterprise Manager 12c is a key enabler of a harmonious relationship between the end customer, the business and IT to deliver ultimate service and satisfaction. Just engage with the business upfront, make the signature experience visible and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned to enable you to implement this new way of working.  

    Read the article

  • OpenGL loading functions error [on hold]

    - by Ghilliedrone
    I'm new to OpenGL, and I bought a book on it for beginners. I finished writing the sample code for making a context/window. I get an error on this line at the part PFNWGLCREATECONTEXTATTRIBSARBPROC, saying "Error: expected a ')'": typedef HGLRC(APIENTRYP PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*); Replacing it or adding a ")" makes it error, but the error disappears when I use the OpenGL headers included in the books CD, which are OpenGL 3.0. I would like a way to make this work with the newest gl.h/wglext.h and without libraries. Here's the rest of the class if it's needed: #include <ctime> #include <windows.h> #include <iostream> #include <gl\GL.h> #include <gl\wglext.h> #include "Example.h" #include "GLWindow.h" typedef HGLRC(APIENTRYP PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*); PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB = NULL; bool GLWindow::create(int width, int height, int bpp, bool fullscreen) { DWORD dwExStyle; //Window Extended Style DWORD dwStyle; //Window Style m_isFullscreen = fullscreen;//Store the fullscreen flag m_windowRect.left = 0L; m_windowRect.right = (long)width; m_windowRect.top = 0L; m_windowRect.bottom = (long)height;//Set bottom to height // fill out the window class structure m_windowClass.cbSize = sizeof(WNDCLASSEX); m_windowClass.style = CS_HREDRAW | CS_VREDRAW; m_windowClass.lpfnWndProc = GLWindow::StaticWndProc; //We set our static method as the event handler m_windowClass.cbClsExtra = 0; m_windowClass.cbWndExtra = 0; m_windowClass.hInstance = m_hinstance; m_windowClass.hIcon = LoadIcon(NULL, IDI_APPLICATION); // default icon m_windowClass.hCursor = LoadCursor(NULL, IDC_ARROW); // default arrow m_windowClass.hbrBackground = NULL; // don't need background m_windowClass.lpszMenuName = NULL; // no menu m_windowClass.lpszClassName = (LPCWSTR)"GLClass"; m_windowClass.hIconSm = LoadIcon(NULL, IDI_WINLOGO); // windows logo small icon if (!RegisterClassEx(&m_windowClass)) { MessageBox(NULL, (LPCWSTR)"Failed to register window class", NULL, MB_OK); return false; } if (m_isFullscreen)//If we are fullscreen, we need to change the display { DEVMODE dmScreenSettings; //Device mode memset(&dmScreenSettings, 0, sizeof(dmScreenSettings)); dmScreenSettings.dmSize = sizeof(dmScreenSettings); dmScreenSettings.dmPelsWidth = width; //Screen width dmScreenSettings.dmPelsHeight = height; //Screen height dmScreenSettings.dmBitsPerPel = bpp; //Bits per pixel dmScreenSettings.dmFields = DM_BITSPERPEL | DM_PELSWIDTH | DM_PELSHEIGHT; if (ChangeDisplaySettings(&dmScreenSettings, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL) { MessageBox(NULL, (LPCWSTR)"Display mode failed", NULL, MB_OK); m_isFullscreen = false; } } if (m_isFullscreen) //Is it fullscreen? 
{ dwExStyle = WS_EX_APPWINDOW; //Window Extended Style dwStyle = WS_POPUP; //Windows Style ShowCursor(false); //Hide mouse pointer } else { dwExStyle = WS_EX_APPWINDOW | WS_EX_WINDOWEDGE; //Window Exteneded Style dwStyle = WS_OVERLAPPEDWINDOW; //Windows Style } AdjustWindowRectEx(&m_windowRect, dwStyle, false, dwExStyle); //Adjust window to true requested size //Class registered, so now create window m_hwnd = CreateWindowEx(NULL, //Extended Style (LPCWSTR)"GLClass", //Class name (LPCWSTR)"Chapter 2", //App name dwStyle | WS_CLIPCHILDREN | WS_CLIPSIBLINGS, 0, 0, //x, y coordinates m_windowRect.right - m_windowRect.left, m_windowRect.bottom - m_windowRect.top, //Width and height NULL, //Handle to parent NULL, //Handle to menu m_hinstance, //Application instance this); //Pass a pointer to the GLWindow here //Check if window creation failed, hwnd would equal NULL if (!m_hwnd) { return 0; } m_hdc = GetDC(m_hwnd); ShowWindow(m_hwnd, SW_SHOW); UpdateWindow(m_hwnd); m_lastTime = GetTickCount() / 1000.0f; return true; } LRESULT CALLBACK GLWindow::StaticWndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { GLWindow* window = nullptr; //If this is the create message if (uMsg == WM_CREATE) { //Get the pointer we stored during create window = (GLWindow*)((LPCREATESTRUCT)lParam)->lpCreateParams; //Associate the window pointer with the hwnd for the other events to access SetWindowLongPtr(hWnd, GWL_USERDATA, (LONG_PTR)window); } else { //If this is not a creation event, then we should have stored a pointer to the window window = (GLWindow*)GetWindowLongPtr(hWnd, GWL_USERDATA); if (!window) { //Do the default event handling return DefWindowProc(hWnd, uMsg, wParam, lParam); } } //Call our window's member WndProc(allows us to access member variables) return window->WndProc(hWnd, uMsg, wParam, lParam); } LRESULT GLWindow::WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { switch (uMsg) { case WM_CREATE: { m_hdc = GetDC(hWnd); setupPixelFormat(); //Set the version that we want, in this case 3.0 int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 3, WGL_CONTEXT_MINOR_VERSION_ARB, 0, 0}; //Create temporary context so we can get a pointer to the function HGLRC tmpContext = wglCreateContext(m_hdc); //Make the context current wglMakeCurrent(m_hdc, tmpContext); //Get the function pointer wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB"); //If this is NULL then OpenGl 3.0 is not supported if (!wglCreateContextAttribsARB) { MessageBox(NULL, (LPCWSTR)"OpenGL 3.0 is not supported", (LPCWSTR)"An error occured", MB_ICONERROR | MB_OK); DestroyWindow(hWnd); return 0; } //Create an OpenGL 3.0 context using the new function m_hglrc = wglCreateContextAttribsARB(m_hdc, 0, attribs); //Delete the temporary context wglDeleteContext(tmpContext); //Make the GL3 context current wglMakeCurrent(m_hdc, m_hglrc); m_isRunning = true; } break; case WM_DESTROY: //Window destroy case WM_CLOSE: //Windows is closing wglMakeCurrent(m_hdc, NULL); wglDeleteContext(m_hglrc); m_isRunning = false; //Stop the main loop PostQuitMessage(0); break; case WM_SIZE: { int height = HIWORD(lParam); //Get height and width int width = LOWORD(lParam); getAttachedExample()->onResize(width, height); //Call the example's resize method } break; case WM_KEYDOWN: if (wParam == VK_ESCAPE) //If the escape key was pressed { DestroyWindow(m_hwnd); } break; default: break; } return DefWindowProc(hWnd, uMsg, wParam, lParam); } void GLWindow::processEvents() { MSG msg; //While there are messages in 
the queue, store them in msg while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { //Process the messages TranslateMessage(&msg); DispatchMessage(&msg); } } Here is the header: #pragma once #include <ctime> #include <windows.h> class Example;//Declare our example class class GLWindow { public: GLWindow(HINSTANCE hInstance); //default constructor bool create(int width, int height, int bpp, bool fullscreen); void destroy(); void processEvents(); void attachExample(Example* example); bool isRunning(); //Is the window running? void swapBuffers() { SwapBuffers(m_hdc); } static LRESULT CALLBACK StaticWndProc(HWND wnd, UINT msg, WPARAM wParam, LPARAM lParam); LRESULT CALLBACK WndProc(HWND wnd, UINT msg, WPARAM wParam, LPARAM lParam); float getElapsedSeconds(); private: Example* m_example; //A link to the example program bool m_isRunning; //Is the window still running? bool m_isFullscreen; HWND m_hwnd; //Window handle HGLRC m_hglrc; //Rendering context HDC m_hdc; //Device context RECT m_windowRect; //Window bounds HINSTANCE m_hinstance; //Application instance WNDCLASSEX m_windowClass; void setupPixelFormat(void); Example* getAttachedExample() { return m_example; } float m_lastTime; };

    Read the article

  • Accessing your web server via IPv6

    Being able to run your systems on IPv6, to have automatic address assignment, and to resolve host names are the necessary building blocks in your IPv6 network infrastructure. Now that everything is in place, it is about time to enable another service to respond to IPv6 requests. The following article will guide you through the steps to enable Apache2 httpd to listen and respond to incoming IPv6 requests. This is the fourth article in a series on IPv6 configuration: Configure IPv6 on your Linux system DHCPv6: Provide IPv6 information in your local network Enabling DNS for IPv6 infrastructure Accessing your web server via IPv6 Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system. Surfing the web - IPv6 style Enabling IPv6 connections in Apache 2 is fairly simple. But first let's check whether your system has a running instance of Apache2 or not. You can check this like so: $ service apache2 status Apache2 is running (pid 2680). In case you got a 'service unknown' you have to install Apache to proceed with the following steps: $ sudo apt-get install apache2 Out of the box, Apache binds to all your available network interfaces and listens on TCP port 80. To check this, run the following command: $ sudo netstat -lnptu | grep "apache2\W*$" tcp6       0      0 :::80                   :::*                    LISTEN      28306/apache2 In this case Apache2 is already binding to IPv6 (and implicitly to IPv4). If you only got a tcp output, then your HTTPd is not yet IPv6 enabled. Check your Listen directive; depending on your system this might be in a different location than the default in Ubuntu. $ sudo nano /etc/apache2/ports.conf
# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default
# This is also true if you have upgraded from before 2.2.9-3 (i.e. from
# Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
# README.Debian.gz
NameVirtualHost *:80
Listen 80
<IfModule mod_ssl.c>
    # If you add NameVirtualHost *:443 here, you will also have to change
    # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
    # to <VirtualHost *:443>
    # Server Name Indication for SSL named virtual hosts is currently not
    # supported by MSIE on Windows XP.
    Listen 443
</IfModule>
<IfModule mod_gnutls.c>
    Listen 443
</IfModule>
Just in case you don't have a ports.conf file, look for it like so: $ cd /etc/apache2/ $ fgrep -r -i 'listen' ./* And modify the related file instead of ports.conf, which most probably will be either apache2.conf or httpd.conf anyway. Okay, please bear in mind that Apache can only bind once to the same interface and port. So, eventually, you might be interested in adding another port which explicitly listens to IPv6 only. In that case, you would add the following to your configuration file: Listen 80 Listen [2001:db8:bad:a55::2]:8080 But this is completely optional... Anyway, just to complete all steps, you save the file and then check the syntax like so: $ sudo apache2ctl configtest Syntax OK Ok, now let's apply the modifications to our running Apache2 instance: $ sudo service apache2 reload * Reloading web server config apache2   ...done. 
$ sudo netstat -lnptu | grep "apache2\W*$"                                                                                               tcp6       0      0 2001:db8:bad:a55:::8080 :::*                    LISTEN      5922/apache2    tcp6       0      0 :::80                   :::*                    LISTEN      5922/apache2 There we have our web server listening on two different TCP ports. Now that the basics are in place, it's time to prepare any website to respond to incoming requests on the IPv6 address. Open up any configuration file you have below your sites-enabled folder. $ ls -al /etc/apache2/sites-enabled/... $ sudo nano /etc/apache2/sites-enabled/000-default <VirtualHost *:80 [2001:db8:bad:a55::2]:8080>        ServerAdmin [email protected]        ServerName server.ios.mu        ServerAlias server Here, we have to check and modify the VirtualHost directive and enable it to respond to the IPv6 address and port our web server is listening to. Save your changes, run the configuration test and reload Apache2 in order to apply your modifications. After these successful steps you can launch your favourite browser and navigate to your IPv6 enabled web server. Accessing an IPv6 address in the browser That looks like a successful surgery to me... Note: In case you received a timeout, check whether your client is operating on IPv6, too.
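If you would rather test from a terminal than a browser, a quick check with curl can confirm that the client side really is speaking IPv6. This is a sketch using the address, port and host name from this article; substitute your own values.

# Force IPv6 and request only the response headers (-I).
# -g turns off curl's glob parser so the [] address literal is passed through as-is.
curl -g -6 -I 'http://[2001:db8:bad:a55::2]:8080/'

# Or let DNS resolve the AAAA record of the virtual host:
curl -6 -I http://server.ios.mu/

A timeout here, while the same URL works over IPv4, usually points at the client or an intermediate firewall rather than at Apache.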

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series. Right-Time Revolution Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn’t appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, but it’s also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That’s what “right-time retail” means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24hr society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day, as the dial-up modem could transfer the data. That was soon replaced with trickle-polling, whereby sales transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data. 
Nordstrom has 150 million SKU/Store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos does 1.78 billion calculations executed each day for replenishment planning (AIP). These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400rpm with PCs stepping up to 7,200rpm and servers hitting 15,000rpm. But the platters can only spin so fast, so to squeeze more performance we’ve had to rely on things like disk striping. Then solid state drives (SSDs) were introduced and prices continue to drop. (Augmenting your harddrive with a SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you’d go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston’s Fenway would hold 370,000 fans. The Exa-family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won’t be able to take advantage of the faster networks and storage. Enter what Harvard calls “The Sexiest Job of the 21st Century” – the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad-hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I’ll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • 11gR2 New Feature: Oracle Cluster Health Monitor (CHM)

    - by JaneZhang
       Cluster Health Monitor (CHM) is a tool provided by Oracle that automatically collects operating system resource usage on every cluster node (CPU, memory, SWAP, processes, I/O and network). CHM collects this data every second. The data is very helpful when diagnosing problems such as node reboots and evictions, hangs and performance issues, because it shows how the operating system was behaving at, and shortly before, the time of the incident. CHM is installed by default with the following products:
   Oracle Grid Infrastructure 11.2.0.2 and later for Linux (except Linux Itanium) and Solaris (SPARC 64 and x86-64)
   Oracle Grid Infrastructure 11.2.0.3 and later for AIX and Windows (except Windows Itanium).
   Once installed, the CHM resource (ora.crf) is visible in the cluster:
   $ crsctl stat res -t -init
   --------------------------------------------------------------------------------
   NAME           TARGET  STATE        SERVER                   STATE_DETAILS
   Cluster Resources
   ora.crf        ONLINE  ONLINE       rac1
CHM consists of two services:
   1). System Monitor Service (osysmond): runs on every node. osysmond collects the OS statistics of its own node and sends them to the cluster logger service, which stores the data in the CHM repository.
      $ ps -ef|grep osysmond
      root      7984     1  0 Jun05 ?        01:16:14 /u01/app/11.2.0/grid/bin/osysmond.bin
   2). Cluster Logger Service (ologgerd): within the cluster, one ologgerd acts as the master and another as the standby. The master ologgerd receives the data from all nodes and writes it into the repository; if the master fails, the standby takes over.
    Master:
    $ ps -ef|grep ologgerd
      root      8257     1  0 Jun05 ?        00:38:26 /u01/app/11.2.0/grid/bin/ologgerd -M -d /u01/app/11.2.0/grid/crf/db/rac2
    Standby:
     $ ps -ef|grep ologgerd
      root      8353     1  0 Jun05 ?        00:18:47 /u01/app/11.2.0/grid/bin/ologgerd -m rac2 -r -d /u01/app/11.2.0/grid/crf/db/rac1
CHM Repository: the place where the collected data is stored. By default it lives under the Grid Infrastructure home and requires about 1 GB of disk space; each node consumes roughly 0.5 GB per day. You can use OCLUMON to change the repository location and size (by default about three days of data are retained). To query the current repository path and size:
    $ oclumon manage -get reppath
      CHM Repository Path = /u01/app/11.2.0/grid/crf/db/rac2
      Done
    $ oclumon manage -get repsize
      CHM Repository Size = 68082 <==== in seconds
      Done
    To change the repository path:
    $ oclumon manage -repos reploc /shared/oracle/chm
    To change the repository size:
    $ oclumon manage -repos resize 68083 <== a value between 3600 (one hour) and 259200 (three days)
     rac1 --> retention check successful
     New retention is 68083 and will use 1073750609 bytes of disk space
     CRS-9115-Cluster Health Monitor repository size change completed on all nodes.
     Done
There are two ways to collect the data gathered by CHM:
    1. Use Grid_home/bin/diagcollection.pl:
        1). First, determine which node runs the master cluster logger service:
        $ oclumon manage -get master
        Master = rac2
        2). As root, run the following command on the master node rac2:
        # <Grid_home>/bin/diagcollection.pl -collect -chmos -incidenttime inc_time -incidentduration duration
        where inc_time is the time the problem occurred, in the format MM/DD/YYYY24HH:MM:SS, and duration is the amount of time after the incident for which data should be collected.
        For example: # diagcollection.pl -collect -crshome /u01/app/11.2.0/grid -chmoshome  /u01/app/11.2.0/grid -chmos -incidenttime 06/15/201215:30:00 -incidentduration 00:05
      3). The collected CHM data is written to a file named like chmosData_rac2_20120615_1537.tar.gz.
   2. Alternatively, query the CHM repository directly with oclumon:
       $oclumon dumpnodeview [[-allnodes] | [-n node1 node2] [-last "duration"] | [-s "time_stamp" -e "time_stamp"] [-v] [-warning]] [-h]
       -s specifies the start time and -e the end time.
$ oclumon dumpnodeview -allnodes -v -s "2012-06-15 07:40:00" -e "2012-06-15 07:57:00" > /tmp/chm1.txt
$ oclumon dumpnodeview -n node1 node2 node3 -last "12:00:00" >/tmp/chm1.txt
$ oclumon dumpnodeview -allnodes -last "00:15:00" >/tmp/chm1.txt
The contents of /tmp/chm1.txt look like this:
----------------------------------------
Node: rac1 Clock: '06-15-12 07.40.01' SerialNo:168880
----------------------------------------
SYSTEM:
#cpus: 1 cpu: 17.96 cpuq: 5 physmemfree: 32240 physmemtotal: 2065856 mcache: 1064024 swapfree: 3988376 swaptotal: 4192956 ior: 57 iow: 59 ios: 10 swpin: 0 swpout: 0 pgin: 57 pgout: 59 netr: 65.767 netw: 34.871 procs: 183 rtprocs: 10 #fds: 4902 #sysfdlimit: 6815744 #disks: 4 #nics: 3  nicErrors: 0
TOP CONSUMERS:
topcpu: 'mrtg(32385) 64.70' topprivmem: 'ologgerd(8353) 84068' topshm: 'oracle(8760) 329452' topfd: 'ohasd.bin(6627) 720' topthread: 'crsd.bin(8235) 44'
PROCESSES:
name: 'mrtg' pid: 32385 #procfdlimit: 65536 cpuusage: 64.70 privmem: 1160 shm: 1584 #fd: 5 #threads: 1 priority: 20 nice: 0
name: 'oracle' pid: 32381 #procfdlimit: 65536 cpuusage: 0.29 privmem: 1456 shm: 12444 #fd: 32 #threads: 1 priority: 15 nice: 0
...
name: 'oracle' pid: 8756 #procfdlimit: 65536 cpuusage: 0.0 privmem: 2892 shm: 24356 #fd: 47 #threads: 1 priority: 16 nice: 0
----------------------------------------
Node: rac2 Clock: '06-15-12 07.40.02' SerialNo:168878
----------------------------------------
SYSTEM:
#cpus: 1 cpu: 40.72 cpuq: 8 physmemfree: 34072 physmemtotal: 2065856 mcache: 1005636 swapfree: 3991808 swaptotal: 4192956 ior: 54 iow: 104 ios: 11 swpin: 0 swpout: 0 pgin: 54 pgout: 104 netr: 77.817 netw: 33.008 procs: 178 rtprocs: 10 #fds: 4948 #sysfdlimit: 6815744 #disks: 4 #nics: 4  nicErrors: 0
TOP CONSUMERS:
topcpu: 'orarootagent.bi(8490) 1.59' topprivmem: 'ologgerd(8257) 83108' topshm: 'oracle(8873) 324868' topfd: 'ohasd.bin(6744) 720' topthread: 'crsd.bin(8362) 47'
PROCESSES:
name: 'oracle' pid: 9040 #procfdlimit: 65536 cpuusage: 0.19 privmem: 6040 shm: 121712 #fd: 33 #threads: 1 priority: 16 nice: 0
...
For more information about CHM, please refer to the Oracle documentation:
  http://docs.oracle.com/cd/E11882_01/rac.112/e16794/troubleshoot.htm#CWADD92242
  Oracle® Clusterware Administration and Deployment Guide
  11g Release 2 (11.2)
  Part Number E16794-17
and the My Oracle Support note:
  Cluster Health Monitor (CHM) FAQ (Doc ID 1328466.1)

    Read the article

  • git | error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied [SOLVED]

    - by Corbin Tarrant
    I am having a strange issue that I can't seem to resolve. Here is what happened: I had some log files in a github repository that I didn't want there. I found this script that removes files completely from git history like so: #!/bin/bash set -o errexit # Author: David Underhill # Script to permanently delete files/folders from your git repository. To use # it, cd to your repository's root and then run the script with a list of paths # you want to delete, e.g., git-delete-history path1 path2 if [ $# -eq 0 ]; then exit 0; fi # make sure we're at the root of git repo if [ ! -d .git ]; then echo "Error: must run this script from the root of a git repository" exit 1 fi # remove all paths passed as arguments from the history of the repo files=$@ git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch $files" HEAD # remove the temporary history git-filter-branch otherwise leaves behind for a long time rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune I, of course, made a backup first and then tried it. It seemed to work fine. I then did a git push -f and was greeted with the following messages: error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied error: Cannot update the ref 'refs/remotes/origin/master'. Everything seems to have pushed fine though, because the files seem to be gone from the GitHub repository. If I try to push again I get the same thing: error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied error: Cannot update the ref 'refs/remotes/origin/master'. Everything up-to-date EDIT $ sudo chgrp {user} .git/logs/refs/remotes/origin/master $ sudo chown {user} .git/logs/refs/remotes/origin/master $ git push Everything up-to-date Thanks! EDIT Uh Oh. Problem. I've been working on this project all night and just went to commit my changes: error: Unable to append to .git/logs/refs/heads/master: Permission denied fatal: cannot update HEAD ref So I: sudo chown {user} .git/logs/refs/heads/master sudo chgrp {user} .git/logs/refs/heads/master I try the commit again and I get: error: Unable to append to .git/logs/HEAD: Permission denied fatal: cannot update HEAD ref So I: sudo chown {user} .git/logs/HEAD sudo chgrp {user} .git/logs/HEAD And then I try the commit again: 16 files changed, 499 insertions(+), 284 deletions(-) create mode 100644 logs/DBerrors.xsl delete mode 100644 logs/emptyPHPerrors.php create mode 100644 logs/trimXMLerrors.php rewrite public/codeCore/Classes/php/DatabaseConnection.php (77%) create mode 100644 public/codeSite/php/init.php $ git push Counting objects: 49, done. Delta compression using up to 2 threads. Compressing objects: 100% (27/27), done. Writing objects: 100% (27/27), 7.72 KiB, done. Total 27 (delta 15), reused 0 (delta 0) To [email protected]:IAmCorbin/MooKit.git 59da24e..68b6397 master -> master Hooray. I jump on http://GitHub.com and check out the repository, and my latest commit is nowhere to be found. ::scratch head:: So I push again: Everything up-to-date Umm...it doesn't look like it. I've never had this issue before. Could this be a problem with GitHub, or did I mess something up with my git project? EDIT Never mind, I did a simple: git push origin master and it pushed fine.
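A side note on the permission errors above: rather than chasing ownership file by file, the whole .git directory can usually be repaired in one pass. This is a sketch, assuming the files ended up owned by root because some command was run with sudo; verify with ls -l .git/logs before running it.

# Recursively hand .git back to the current user and their primary group.
# Assumption: root ownership (e.g. from a sudo'ed command) caused the
# "Permission denied" on .git/logs/* -- check with `ls -l .git/logs` first.
sudo chown -R "$USER":"$(id -gn)" .git

After that, git commit and git push no longer need to touch root-owned files.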

    Read the article

  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog, and there is a whole blog devoted to the idea of the "10x productive programmer". In the years since the original study, the general finding that "there are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000). Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article: "The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude." The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11. The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of the entry-level salary. When the starting salary for a just-graduated entry-level programmer, we'll call him Asok (from Dilbert), is $40K, even if the top programmer, we'll call him Linus, makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well? Here is a quote from Stroustrup: "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless, because the rules/culture will constrain the new developer from doing anything significantly new and better." This leads to two questions. I'm excluding self-employed programmers and contractors; if you disagree that's fine, but please include your rationale. It might be that the self-employed or contract programmers are where you find the top-10 earners, but please provide an explanation/story/rationale along with any anecdotes. [EDIT] I thought up some other areas in which talent/ability affects pay: financial traders (commodities, stock, derivatives, etc.), designers (fashion, interior decorators, architects, etc.), professionals (doctor, lawyer, accountant, etc.), sales. Questions: Why aren't the top 1% of programmers paid like A-list movie stars? What would the industry be like if we did pay the "Smart and gets things done" programmers 6, 8, or 10 times what an intern makes? [Footnote: I posted this question after submitting it to the Stackoverflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post, 'Of Rockstars and Bricklayers'.] Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free agents. The competition for their skills is what drives up their earning power. This means they cannot be interchangeable or otherwise treated as a plug-and-play resource.
I liked the example in one answer of a major league baseball team trying to field two first basemen. Also relevant is something Joel mentioned in the Stackoverflow podcast (#77): there are natural dynamics that shrink any extreme performance/pay range between the highs and lows. One is the peer pressure within organizations to pay within a given range; another is the likelihood that the high performer will realize they are undercompensated and seek greener pastures.

    Read the article

  • migrating simple rails database to mysql

    - by joseph-misiti
    I am interested in creating a Rails app with a MySQL database. I am new to Rails and am just trying to start creating something simple: rails -d mysql MyMoviesSQL cd MyMoviesSQL script/generate scaffold Movies title:string rating:integer rake db:migrate I am seeing the following error: rake aborted! NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8' If I do a trace: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:migrate rake aborted! NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:599:in `configure_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:594:in `connect' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate' /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116 /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 Here are my versions: rails - 2.3.5 ruby - 1.8.6 gem list * LOCAL GEMS * actionmailer (2.3.5, 1.3.6) actionpack (2.3.5, 1.13.6) actionwebservice (1.2.6) activerecord (2.3.5, 1.15.6) activeresource (2.3.5) activesupport (2.3.5, 1.4.4) acts_as_ferret (0.4.1) capistrano (2.0.0) cgi_multipart_eof_fix (2.5.0) daemons (1.0.9) dbi (0.4.3) deprecated (2.0.1) dnssd (0.6.0) fastthread (1.0.1) fcgi (0.8.7) ferret (0.11.4) gem_plugin (0.2.3) highline (1.2.9) hpricot (0.6) libxml-ruby (0.9.5, 0.3.8.4) mongrel (1.1.4) needle (1.3.0) net-sftp (1.1.0) net-ssh (1.1.2) rack (1.0.1) rails (2.3.5) rake (0.8.7, 0.7.3) RedCloth (3.0.4) ruby-openid (1.1.4) ruby-yadis (0.3.4) rubygems-update (1.3.6) rubynode (0.1.3) sqlite3-ruby (1.2.1) termios (0.9.4) Also, if I need to add a patch to Fixnum, can someone please tell me which file to add the patch to? Thanks for your help
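    For what it's worth, the `ord' failure usually points at the Rails 2.3.5 MySQL adapter calling Integer#ord, which Ruby 1.8.7 added and 1.8.6 (the version listed above) lacks; upgrading Ruby to 1.8.7 or later is the cleaner route. As a stopgap, a backport can live in any file loaded at boot, for example an initializer. A minimal sketch, with a hypothetical file name:

        # config/initializers/fixnum_ord.rb  -- hypothetical path; any file loaded at boot works
        # Ruby 1.8.7+ defines Integer#ord to return the receiver itself; backport it for 1.8.6.
        class Fixnum
          def ord
            self
          end unless method_defined?(:ord)
        end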

    Read the article

  • becomeFirstResponder not working!!!

    - by vikinara
    In the code below becomeFirstResponder is not working; only resignFirstResponder is working... can anyone please help? - (BOOL)textFieldShouldReturn:(UITextField *)textField { if (textField == txtDate) { [txtDate resignFirstResponder]; [txtTime becomeFirstResponder]; } if (textField == txtTime) { [txtTime resignFirstResponder]; [txtAddress becomeFirstResponder]; } if (textField == txtAddress) { [txtAddress resignFirstResponder]; [txtCity becomeFirstResponder]; } if (textField == txtCity) { [txtCity resignFirstResponder]; [txtState becomeFirstResponder]; } if(textField == txtState) { [txtState resignFirstResponder]; [txtZip becomeFirstResponder]; } if (textField == txtZip) { [txtZip resignFirstResponder]; } return NO; } - (BOOL)textFieldShouldEndEditing:(UITextField *)textField { if(textField == txtDate) { NSString *dateString = txtDate.text; NSString *dateRegex = @"^(1[0-2]|0[1-9])/(3[01]|[12][0-9]|0[1-9])/[0-9]{4}$"; NSPredicate *dateTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", dateRegex]; BOOL validateDate = [dateTest evaluateWithObject:dateString]; if(!validateDate){ UIAlertView *alert2 = [[UIAlertView alloc] initWithTitle:nil message:@"Date Error." delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert2 show]; [alert2 release]; txtDate.text = nil; } } if(textField == txtTime) { NSString *timeString = txtTime.text; NSString *timeRegex = @"^(([0]?[0-5][0-9]|[0-9]):([0-5][0-9]))$"; NSPredicate *timeTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", timeRegex]; BOOL validateTime = [timeTest evaluateWithObject:timeString]; if(!validateTime) { UIAlertView *alert2 = [[UIAlertView alloc] initWithTitle:nil message:@"Incorrect Time Entry." delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert2 show]; [alert2 release]; txtTime.text = nil; } } if(textField == txtAddress) { NSString *addressString = txtAddress.text; NSString *addressRegex = @"^[a-z0-9 ]+$"; NSPredicate *addressTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", addressRegex]; BOOL validateAddress = [addressTest evaluateWithObject:addressString]; if(!validateAddress) { UIAlertView *alert2 = [[UIAlertView alloc] initWithTitle:nil message:@"Incorrect State." delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert2 show]; [alert2 release]; txtAddress.text = nil; } } if(textField == txtState) { NSString *stateString = txtState.text; NSString *stateRegex = @"^(?-i:A[LKSZRAEP]|C[AOT]|D[EC]|F[LM]|G[AU]|HI|I[ADLN]|K[SY]|LA|M[ADEHINOPST]|N[CDEHJMVY]|O[HKR]|P[ARW]|RI|S[CD]|T[NX]|UT|V[AIT]|W[AIVY])$"; NSPredicate *stateTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", stateRegex]; BOOL validateState = [stateTest evaluateWithObject:stateString]; if(!validateState) { UIAlertView *alert2 = [[UIAlertView alloc] initWithTitle:nil message:@"Incorrect State." delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert2 show]; [alert2 release]; txtState.text = nil; } } if(textField == txtCity) { NSString *cityString = txtCity.text; NSString *cityRegex = @"^[a-z ]+$"; NSPredicate *cityTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", cityRegex]; BOOL validateCity = [cityTest evaluateWithObject:cityString]; if(!validateCity) { UIAlertView *alert2 = [[UIAlertView alloc] initWithTitle:nil message:@"Incorrect City."
delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert2 show]; [alert2 release]; txtCity.text = nil; } } if(textField == txtZip) { NSString *zipString = txtZip.text; NSString *zipRegex = @"^[0-9]{5}([- /]?[0-9]{4})?$"; NSPredicate *zipTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", zipRegex]; BOOL validateZip = [zipTest evaluateWithObject:zipString]; if(!validateZip) { UIAlertView *alert2 = [[UIAlertView alloc] initWithTitle:nil message:@"Incorrect Zip." delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert2 show]; [alert2 release]; txtZip.text = nil; } } return NO; }
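    A likely culprit, assuming the two methods above make up the whole delegate: textFieldShouldEndEditing: returns NO unconditionally, and UIKit will not hand first-responder status to the next field while the current one refuses to end editing, so the becomeFirstResponder calls are silently rejected. A hedged sketch of the usual shape (validationPassed is a hypothetical stand-in for the regex checks above):

        // Let editing end; returning NO here blocks both keyboard dismissal and the
        // next field's becomeFirstResponder.
        - (BOOL)textFieldShouldEndEditing:(UITextField *)textField {
            BOOL validationPassed = YES; // run the regex checks above and show the alert on failure
            return validationPassed;     // or simply return YES and validate elsewhere
        }

        - (BOOL)textFieldShouldReturn:(UITextField *)textField {
            // becomeFirstResponder on the next field implicitly resigns the current one,
            // so an explicit resignFirstResponder is only needed on the last field.
            if (textField == txtDate)         { [txtTime becomeFirstResponder]; }
            else if (textField == txtTime)    { [txtAddress becomeFirstResponder]; }
            else if (textField == txtAddress) { [txtCity becomeFirstResponder]; }
            else if (textField == txtCity)    { [txtState becomeFirstResponder]; }
            else if (textField == txtState)   { [txtZip becomeFirstResponder]; }
            else if (textField == txtZip)     { [txtZip resignFirstResponder]; }
            return NO;
        }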

    Read the article

  • Implementing Unit Testing on the iPhone

    - by james.ingham
    Hi, so I've followed this tutorial to set up unit testing on my app, but I got a little stuck. At bullet point 8 in that tutorial it shows an image of what I should be expecting when I build. However, this isn't what I get when I build. I get this error message: Command /bin/sh failed with exit code 1 as well as the error message the unit test has created. Then, when I expand on the first error I get this: PhaseScriptExecution "Run Script" "build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh" cd "/Users/james/Desktop/FYP/3D Pool" setenv ACTION build setenv ALTERNATE_GROUP staff ... setenv XCODE_VERSION_MAJOR 0300 setenv XCODE_VERSION_MINOR 0320 setenv YACC /Developer/usr/bin/yacc /bin/sh -c "\"/Users/james/Desktop/FYP/3D Pool/build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh\"" /Developer/Tools/RunPlatformUnitTests.include:412: note: Started tests for architectures 'i386' /Developer/Tools/RunPlatformUnitTests.include:419: note: Running tests for architecture 'i386' (GC OFF) objc[12589]: GC: forcing GC OFF because OBJC_DISABLE_GC is set Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' started at 2010-01-04 21:05:06 +0000 Test Suite 'LogicTests' started at 2010-01-04 21:05:06 +0000 Test Case '-[LogicTests testFail]' started. /Users/james/Desktop/FYP/3D Pool/LogicTests.m:17: error: -[LogicTests testFail] : Must fail to succeed. Test Case '-[LogicTests testFail]' failed (0.000 seconds). Test Suite 'LogicTests' finished at 2010-01-04 21:05:06 +0000. Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.000) seconds Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' finished at 2010-01-04 21:05:06 +0000. Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.002) seconds /Developer/Tools/RunPlatformUnitTests.include:448: error: Failed tests for architecture 'i386' (GC OFF) /Developer/Tools/RunPlatformUnitTests.include:462: note: Completed tests for architectures 'i386' Command /bin/sh failed with exit code 1 Now this is very odd, as it is running the tests (and succeeding, as you can see my STFail firing), because if I add a different test which passes I get no errors, so the tests are working fine. But why am I getting this extra build failure? It may also be of note that when downloading solutions/templates which should work out of the box, I get the same error. I'm guessing I've set something up wrong here, but I've just followed a tutorial 100% correctly!! If anyone could help I'd be very grateful! Thanks EDIT: According to this blog, this post and a few other websites, I'm not the only one getting this problem. It has been like this since the release of Xcode 3.2, assuming the Apple Dev Center documents and tutorials etc. are pre-3.2 as well. However, some say it's a known issue whereas others seem to think this was intentional. I for one would like both the extended console and in-code messages, and I certainly do not like the "Command /bin/sh..." error and really think they would have documented such an update. Hopefully it will be fixed soon anyway. UPDATE: Here's confirmation that something changed since the release of Xcode 3.2.1. The first screenshot is from my test build using 3.2.1; the second is from an older version (3.1.4). (The project for both was unchanged.)
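    For what it's worth, the extra error appears to be how the RunPlatformUnitTests script reports a failing test bundle in Xcode 3.2: the tutorial's deliberately failing STFail case makes the script exit non-zero, which Xcode surfaces as "Command /bin/sh failed with exit code 1" on top of the per-test message. A quick way to confirm (a sketch, assuming the LogicTests target from the tutorial; the test name is hypothetical):

        // LogicTests.m -- replacing the tutorial's failing case with a passing one
        // should make the /bin/sh error disappear, since the script only fails the
        // build when a test in the bundle fails.
        #import <SenTestingKit/SenTestingKit.h>

        @interface LogicTests : SenTestCase
        @end

        @implementation LogicTests
        - (void)testArithmetic {
            STAssertEquals(2 + 2, 4, @"basic arithmetic should hold");
        }
        @end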

    Read the article

  • HgWebDir push permission denied error

    - by Gregg
    I have a new Fedora 12 server that I am attempting to set up Mercurial on. I have yum-installed Mercurial, and most things seem to work fine. However, after setting up hgwebdir.cgi through Apache, I am unable to do an hg push to the only repo currently being hosted. The error I get back is: searching for changes abort: HTTP Error 500: Permission denied: .hg/store/lock httpd is running as user apache UID PID PPID C STIME TTY TIME CMD root 1691 1 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1694 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1695 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1696 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1697 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1698 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1699 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1700 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd apache 1701 1691 0 13:19 ? 00:00:00 /usr/sbin/httpd and I set permissions so that the apache user owns the whole repo and everything. In a last-ditch attempt, I even made the repo globally writable. [root@builds .hg]# ll total 424K drwxrwxrwx. 3 apache apache 4.0K 2010-04-19 14:43 . drwxrwxrwx. 19 apache apache 4.0K 2010-04-15 13:33 .. -rw-rw-rw-. 2 apache apache 57 2010-04-13 11:42 00changelog.i -rw-rw-rw-. 1 apache apache 93 2010-04-16 15:33 branchheads.cache -rw-rw-rw-. 1 apache apache 192K 2010-04-15 13:33 dirstate -rw-r--r--. 1 apache apache 156 2010-04-19 14:43 hgrc -rw-rw-rw-. 1 apache apache 42 2010-04-15 13:33 last-message.txt -rw-rw-rw-. 2 apache apache 23 2010-04-13 11:42 requires drwxrwxrwx. 4 apache apache 4.0K 2010-04-19 11:26 store -rw-rw-rw-. 1 apache apache 45 2010-04-14 14:08 tags.cache -rw-rw-rw-. 1 apache apache 7 2010-04-16 15:33 undo.branch -rw-rw-rw-. 1 apache apache 192K 2010-04-16 15:33 undo.dirstate [root@builds .hg]# cd store [root@builds store]# ll total 308K drwxrwxrwx. 4 apache apache 4.0K 2010-04-19 11:26 . drwxrwxrwx. 3 apache apache 4.0K 2010-04-19 14:43 .. -rw-rw-rw-. 1 apache apache 20K 2010-04-16 15:33 00changelog.i -rw-rw-rw-. 1 apache apache 81K 2010-04-16 15:33 00manifest.i drwxrwxrwx. 17 apache apache 4.0K 2010-04-13 11:47 data drwxrwxrwx. 3 apache apache 4.0K 2010-04-13 11:43 dh -rw-rw-rw-. 2 apache apache 177K 2010-04-15 11:03 fncache -rw-rw-rw-. 1 apache apache 67 2010-04-16 15:33 undo I have a clone of the repo elsewhere on the machine running as a different user. If I set the default value in the [paths] section of the clone's hgrc file to the local filepath on the server, the push works fine, but if I switch it to use the URL, I get the error every time. Some possible quirks in how I've set this up... hgwebdir.cgi is sitting in /var/www/cgi-bin and the repo is a child of /opt/hg. I turned off suexec as well, and this doesn't seem to clear up the issue. The only line I added in the Apache config to get hgwebdir running is: ScriptAlias /hg "/var/www/cgi-bin/hgwebdir.cgi" The hgweb.config is also in /var/www/cgi-bin and its contents are: [collections] /opt/hg = /opt/hg [trusted] users = * [web] baseurl = /hg push_ssl = false allow_push = * The repo browser is working fine; it's just push that doesn't work. Apache error_log doesn't have anything in it about this error at all.
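    One possibility worth checking on Fedora 12, given that even a world-writable repo is still refused: SELinux, rather than Unix permissions, may be blocking httpd from writing under /opt/hg. A hedged sketch of how to check and adjust the context (assumes the repo path above; the context type is the standard one for Apache-writable content, but verify it against your policy version):

        getenforce                                         # "Enforcing" means SELinux is active
        ls -Z /opt/hg                                      # show the current SELinux context
        sudo chcon -R -t httpd_sys_rw_content_t /opt/hg    # let httpd write to the repo tree
        # Or, purely as a diagnostic, retry the push with SELinux temporarily permissive:
        sudo setenforce 0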

    Read the article

  • Rails 3.0.0.beta and Facebooker: anyone else seeing the following?

    - by nafe
    Hi all, My rails server seems to break after installing the facebooker plugin. Any suggestions on fixing this would be great. I'm using rails 3.0.0.beta and facebooker. Here are the steps and the error that I'm seeing: $ rails -v Rails 3.0.0.beta $ rails break; cd break $ ./script/rails plugin install git://github.com/mmangino/facebooker.git $ vim Rakefile #and add "require 'tasks/facebooker'" $ ./script/rails server => Booting WEBrick => Rails 3.0.0.beta application starting in development on http://0.0.0.0:3000 => Call with -d to detach => Ctrl-C to shutdown server Exiting /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:456:in `load_missing_constant': uninitialized constant ActiveSupport::CoreExtensions (NameError) from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:91:in `const_missing' from /path/break/vendor/plugins/facebooker/lib/facebooker/adapters/adapter_base.rb:6 from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:537:in `new_constants_in' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /path/break/vendor/plugins/facebooker/lib/facebooker.rb:252 from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:537:in `new_constants_in' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /path/break/vendor/plugins/facebooker/rails/../init.rb:5 from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:537:in `new_constants_in' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /path/break/vendor/plugins/facebooker/rails/init.rb:1 from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/plugin.rb:49 from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:25:in `instance_exec' from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:25:in `run' from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:55:in `run_initializers' from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:54:in `each' from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/initializable.rb:54:in `run_initializers' from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/application.rb:71:in `initialize!' 
from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/application.rb:41:in `send' from /Library/Ruby/Gems/1.8/gems/railties-3.0.0.beta/lib/rails/application.rb:41:in `method_missing' from /path/break/config/environment.rb:5 from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:537:in `new_constants_in' from /Library/Ruby/Gems/1.8/gems/activesupport-3.0.0.beta/lib/active_support/dependencies.rb:167:in `require' from config.ru:3 from /Library/Ruby/Gems/1.8/gems/rack-1.1.0/lib/rack/builder.rb:46:in `instance_eval' from /Library/Ruby/Gems/1.8/gems/rack-1.1.0/lib/rack/builder.rb:46:in `initialize' from config.ru:1:in `new' from config.ru:1
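    The failing constant is the clue: ActiveSupport::CoreExtensions is a Rails 2.x namespace that Rails 3 removed (the core extensions now reopen Hash, Array, etc. directly under active_support/core_ext), so the Facebooker plugin at that revision predates the 3.0.0.beta API and a Rails 3-compatible version or fork is needed. A hedged console check illustrating the mismatch (illustrative session; exact output may vary):

        # Inside `rails console` on 3.0.0.beta:
        require 'active_support/core_ext'
        defined?(ActiveSupport::CoreExtensions)   # => nil  -- the Rails 2.x namespace is gone
        {}.respond_to?(:symbolize_keys)           # => true -- the behaviour now lives on Hash itself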

    Read the article
