Search Results

Search found 11687 results on 468 pages for 'drupal installation'.


  • RDA Health Checks for SOA

    - by ShawnBailey
    What is a health check in RDA? A health check evaluates something in your environment to determine whether a change needs to be considered in order to avoid a problem or optimize functionality. Examples of what this 'something' might be are: Configuration Parameters, JVM Options, Runtime Statistics.

    What have we done for SOA? In the latest release of RDA, 4.30, we have added a Rule Set for SOA called 'Oracle SOA 11g (11.1.1) Post Installation (Generic)'. This Rule Set contains 14 SOA related health checks. These checks were all derived from common issues / solutions we see in support of the SOA product. Many of the recommendations come from the product documentation while others are covered in the SOA Knowledge Base. Our goal is that you will be able to easily identify the areas of concern and understand the guidance available from the output of the Rule Set.

    Running the health checks for SOA: The rules that the checks use are installed with RDA and bundled by product or functional area into what are called 'Rule Sets'. To view the available Rule Sets, simply run this command from the RDA home location: rda.cmd (or .sh) -dT hcve. This will bring up a list of the available HCVE (Health Check / Verification Engine) Rule Sets. Each Rule Set contains a group of related rules that are used for evaluation and display of results. A rule can be considered synonymous with a single health check, and each is assigned an ID, Name and Description that can be seen when they are executed. The Rule Set for SOA is option number 11; just enter this selection at the prompt, and the Rule Set will then execute to completion. After running an HCVE Rule Set, the tool writes the output to the RDA_HOME/output folder. The simplest way to view the output is to drag the .htm file to a browser, but of course it can also be uploaded to a Service Request for evaluation by Oracle Support. Many of the Rule Sets will prompt you for information before they can execute their rules, but the SOA Rule Set will identify the SOA domains configured in your RDA setup.cfg file. This means that you don't need to answer all of the questions again about where things are, but it also means that you must have configured RDA for SOA. To run the Rule Set:
    1. Download the latest version of RDA from MOS Doc ID 314422.1
    2. Configure RDA for your SOA domains. Detailed steps can be found here. In its simplest form the command is 'rda.cmd (.sh) -S SOA'
    3. Go to the RDA home location and enter the command 'rda.cmd (or .sh) -dT hcve'
    4. Select option '11'
    It should be noted that this is our first release of a SOA Rule Set, so there will probably be some things we need to clean up or fix. None of these rules will actually modify anything on your system; they are read only and do the evaluations internally. Please let us know if you have any issues with the rules or ideas for new ones so we can make them as useful as possible.

    The Checks: Here is a list of the SOA health checks by ID, Name and Description.
    A00100 SOA Domain Homes - Lists the SOA domains that were identified from the RDA setup.cfg file
    A00200 Coherence Protocol Conflict - Checks to see if you have both Unicast and Multicast configured in the same domain. Checks both the setDomainEnv and config.xml entries (if it exists). We recommend Unicast with fully qualified host names or IP addresses.
    A00210 Coherence Fully Qualified Host - Checks that the host names are fully qualified or that IP addresses are used. Will fail if unqualified host names are detected.
    A00220 Unicast Local Host - Checks that the Coherence localhost is specified for use with Unicast
    A00300 JTA Timeout - Checks that the JTA timeout is configured for the domain and lists the value. The bundled rule will only list the current values of the JTA timeout for each SOA Domain. In the future the rule will fail with a warning if the value is 300 seconds or lower. It is recommended that timeouts follow the pattern 'syncMaxWaitTime' < EJB Timeouts < JTA Timeout. The 300 second value is important because the EJB Timeouts default to 300 seconds. Additional information can be found in MOS Doc ID 880313.1.
    A00310 XA Max Time - Checks that the JTA Maximum XA call time is set for the domain. Fails if it is not explicitly set or if the value is less than or equal to the default of 12000 ms.
    A00320 XA Timeout - Checks that the XA timeout is enabled and that the value is '0' for the SOA Data Source (SOADataSource-jdbc.xml)
    A00330 JDBC Statement Timeout - Checks that the Statement Timeout is set for all SOA Data Sources. Fails if the value is not set or if it is set to the default of -1.
    A00400 XA Driver - Checks that the SOA Data Source is configured to use an XA driver. Fails if it is not.
    A00410 JDBC Capacity Settings - Checks that the minimum and maximum capacity are equal for all SOA Data Sources. Fails if they are not and lists specifically which data sources failed.
    A00500 SOA Roles - Checks that the default SOA roles 'SOAAdmin' and 'SOAOperator' are configured for the soa-infra application in the file system-jazn-data.xml. Fails if they are not.
    A00700 SOA-INFRA Deployment - Checks that the soa-infra application is deployed to either a cluster, all members of a cluster, or a standalone server.
    A00710 SOA Deployments - Checks that the SOA related applications are deployed to the same domain members as soa-infra.
    A00720 SOA Library Deployments - Checks that the SOA related libraries are deployed to the same domain members as soa-infra.
    A00730 Data Source Deployments - Checks that the SOA Data Sources are all targeted to the same domain members as soa-infra
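    For convenience, the steps above condense into a short terminal session. This is just a sketch assuming a Unix-style host with RDA already unpacked in the current directory (on Windows, substitute rda.cmd for rda.sh):

        # one-time: configure RDA for the SOA product (prompts for your domain locations)
        ./rda.sh -S SOA

        # list the HCVE Rule Sets and run one interactively; choose option 11 for
        # 'Oracle SOA 11g (11.1.1) Post Installation (Generic)'
        ./rda.sh -dT hcve

        # the report lands in the RDA home's output folder; open the .htm file in a browser
        ls output/*.htm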

    Read the article

  • Visual Studio &amp; TFS 11 &ndash; List of extensions and upgrades

    - by terje
    This post is a list of the extensions I recommend for use with Visual Studio 11. It’s coming up all the time – what to install, where are the download sites, latest version, etc. – and thus I thought it better to post it here and keep it updated. The basics are Visual Studio 11 connected to a Team Foundation Server 11. Note that we are now at Beta time, and that many also live in a side-by-side environment with Visual Studio 2010. The side-by-side is supported by VS 11. However, if you installed a component supporting VS11 before you installed VS11, then you need to reinstall it. The VSIX installer will understand that it is to apply those only for VS11, and will not touch – nor remove – the same for VS2010. A good example here is the Power Commands. The list is more or less in priority order. The focus is to get a setup which can be used for a complete coding experience for the whole ALM process. The list of course reflects what I use for my work, so it is by no means complete, and for some of the tools there are equally useful alternatives. Many components have not yet arrived with VS11 support. I will add them as they arrive. The components directly associated with Visual Studio from Microsoft should be common, see the Microsoft column. If you still need the VS2010 extensions, here they are: The extensions for VS 2010.

    Components ready for VS 11, both upgrades and new ones (Product / Notes / Latest Version / License / Applicable to / Microsoft):
    TFS Power Tools (1) - Side-by-side with TFS 2010 should work, but remove the Shell Extension from the TFS 2010 power tool first. March 2012 (11.0.50321.0). Free. TFS integration. Yes.
    ReSharper - EAP for Beta 11 (updates very often, nearly daily). 7.0.3.261 as of 16/3/2012. Free as EAP, licensed later. Coding & Quality. No.
    Power Commands (1) - Just reinstall, even if you already have it for VS2010. The reinstall will then apply it to VS 11. 1.0.2.3. Free. Coding. Yes.
    Visualization and Modelling SDK for beta - Info here and here. Another download site and info here. Also download from MSDN Subscription site. Requires VS 11 Beta SDK. Version 11. Free now, otherwise part of MSDN Subscription. Modeling. Yes.
    Visual Studio 11 Beta SDK - Published 16.2.2012. Yes.
    Visual Studio 11 Feedback tool (1) - Use this to really ease the process of sending bugs back to Microsoft. 1.1. Free as prerelease. Visual Studio. Yes.
    (1) Get via Visual Studio’s Tools | Extension Manager (or The Code Gallery). (From Adam: All these are auto updated by the Extension Manager in Visual Studio.)
    (2) Works with Ultimate only.

    Components we wait for, not yet in a VS 11 version (Product / Notes / Latest Version / License / Applicable to / Microsoft):
    (unnamed) - Coding. Yes.
    Inmeta Build Explorer - Free. TFS integration. No.
    Build Manager - Community Build Manager. Info here from Jakob. Free. TFS Integration. No.
    Code Contracts - Coming real soon. Free. Coding & Quality. Yes.
    Code Contracts Editor Extensions - Free. Coding & Quality. Yes.
    Web Std Update - Free. Coding (Web). Yes (MSFT).
    Web Essentials - Free. Coding (Web). Yes (MSFT).
    DotPeek - It says up to .Net 4.0, but some tests indicate it seems to be able to handle 4.5. 1.0.0.7999. Free. Coding/Investigation. No.
    Just Decompile - Also says up to .Net 4.0. Free. Coding/Investigation. No.
    dotTrace - Licensed. Quality. No.
    NDepend - Licensed. Quality. No.
    tangible T4 editor - Lite version free (good enough). Coding (T4 templates). No.
    Pex - Moles are now integrated and improved in VS 11 as a new library called Fakes. Coding & Unit Testing. Yes.

    Components which are now integrated into VS 11 (Product / Notes):
    Productivity Power Tools - Features integrated into VS11, with a few exceptions; I don’t think you will miss those.
    Fakes - Was Moles in 2010. Fakes is improved and made into a product.
    NuGet Manager - Included in the install, but still an extension package. Info here.

    Product installation, upgrades and patches for VS/TFS 11 (Product / Notes / Date / Applicable to):
    Visual Studio 11 & TFS 11 Beta - This is the beta release, and you are free to download and try it out. March 2012. Visual Studio and TFS.
    SQL Server 2008 R2 SP1 Cumulative Update 4 - The TFS 11 requires CU1 at least, but you should go up to at least CU4, since this update solves a ghost record problem that otherwise may cause your TFS database to not release records the way it should when you clean it up; see this post for more information on that issue. Oct 2011. SQL Server 2008 R2 SP1.

    Read the article

  • Manage Your Amazon S3 Account with CloudBerry Explorer

    - by Mysticgeek
    If you have an Amazon S3 account you’re using to back up your data, you might want an easy way to manage it. CloudBerry Explorer is a free app that runs on your desktop and provides an easy way to manage your S3 account.

    Installation and Setup: Just download and install the application with the defaults. When the application launches you’ll be prompted to enter your username and email to get a registration key, or you can continue on by clicking Register later. Now you will want to set up your Amazon S3 account. Click on File \ Amazon S3 Accounts. Double-click on the New Account icon. Next enter your Amazon account Access and Secret keys, select SSL if you want, then click the Test Connection button. Provided everything was entered correctly, you’ll see the Connection Success screen; just close out of it.

    Browse and Manage files: Once you have your account set up through the Explorer, you can start viewing and managing your files on S3. The left pane shows your S3 buckets and stored files, while the right side shows your local computer. This allows you to manage your files in your Amazon S3 buckets directly from your desktop! It’s very easy to use, and you can drag and drop files from your computer to the S3 account or vice versa. There is also the ability to transfer files between Amazon S3 accounts from within the explorer. Go into Tools and Content Types and you can control the file types by adding, removing, or editing them. If you end up messing something up along the way, you can always select Reset to defaults and everything will be back to normal. There is a multiple tabbed view so you can easily keep track of your different accounts and local machine. It allows the ability to create new storage buckets directly in the Explorer, or you can delete buckets as well. Different actions can be accessed from the toolbars or by right-clicking and selecting from the context menu. Here we see a cool option that lets you move your data inside Amazon S3. It is faster, and doesn’t cost money, compared with moving the files to your computer first, then to another account. However, if you want data moved to your local machine first, you have that option as well. Not all features are available in the free version, and if one is not, you’ll be prompted to purchase a license for the Pro version. We will have a comprehensive review of the Pro version in the near future. If you ever need help with CloudBerry Explorer, go to Tools \ Diagnostics. It will run a quick diagnostics check and you can send the information to the CloudBerry team for assistance.

    Delete Files from Amazon S3: To delete a file from your Amazon S3 account, simply highlight the files or folder you want to get rid of, then click Delete on the toolbar. You can also right-click the file and select Delete from the Context Menu. Click Yes in the confirmation dialog box, then you can watch the progress as your files are deleted in the bottom section of the explorer.

    Conclusion: The CloudBerry Explorer free version has several neat features that will allow you easy and basic control over your Amazon S3 account. The free version may be enough for basic users, but power users will want to upgrade to the Pro version, as it includes a lot more features. Using the free version allows you to get a feel for what CloudBerry Explorer has to offer, and is a good starting point. Keep in mind that Amazon S3 is introducing Reduced Redundancy Storage, which will lower the price of stored data from $0.15 per GB to only $0.10 per GB.
    If you’re a Windows Home Server user, check out our review of CloudBerry Online Backup 1.5 for WHS. Download CloudBerry Explorer Free for Amazon S3

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, Oracle/non-Oracle platforms; it is not a mandatory requirement to use Exadata as the base platform.

    Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT cost and complexity, and also the need to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment, and with Exadata already a proven, popular best-in-class consolidation platform, we are now seeing more and more customers evolve from an Exadata based platform into an agile, self service driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry's most comprehensive and integrated solution to transform a typical silo'ed environment into an enterprise class database cloud with self service, rapid elasticity and pay-per-use capabilities.

    In today's post, I'll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are drawn from a recent DBaaS implementation in a real customer engagement:

    1. Project Planning – The first step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network, and security requirements, and delivering the project plan. In a cloud project you plan around technology, business and processes all together, so ensure you engage your actual end users and stakeholders early on in the project, right from the scoping and planning stage.
    2. Setup your EM 12c Cloud Control Site – Once project plan approval and sign-off from stakeholders is achieved, refer to the EM 12c Install guide. Some important tips to follow during the site setup phase:
    - Review the new EM 12c Sizing paper before you get started with the install
    - The Cloud, Chargeback and Trending, and Exadata plug-ins should be selected for deployment during install
    - Refer to the EM 12c Administrator's guide for High Availability, Security, and Network/Firewall best practices and options
    - Your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target Database Cloud is to be set up
    3. Setup Roles and Users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR), and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu –> Security option. For Self Service/SSA users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation (a scripted sketch of this step appears at the end of this post).
    4. Configure Software Library – The Cloud Administrator logs in and configures the software library via the Enterprise menu –> Provisioning and Patching option; the storage location is an OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the 'local store'.
    5. Setup Self Update – Self Update is one of the most innovative and cool new features in the EM 12c framework. Self Update can be accessed via the Setup -> Extensibility option by the Super Administrator and is the unified delivery mechanism to get all new and updated entities (Agent software, plug-ins, connectors, gold images, provisioning bundles etc.) in EM 12c.
    6. Deploy Agents on all compute nodes, and discover Exadata targets – Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of Exadata targets.
    7. Configure Privilege Delegation Settings – This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations.
    8. Provision Grid Infrastructure with RAC Database on compute nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, Grid Infrastructure and RAC Database software is already deployed on compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request databases from the cloud.
    9. Customize the Create Database Deployment Procedure – The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure will be used during service template creation in the next step. This is an important step; make sure you have locked all the required variables marked as locked 'Y' in this table.
    10. Setup Self Service Portal – This step involves setting up zones, user quotas, service templates, and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have the option to customize the SSA login page via steps documented in the EM 12c Cloud Administrator's guide.
    11. Final Checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model; SSA administrators should be familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and the overall EM 12c monitoring framework.
    12. GO LIVE – Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata based database cloud. Congratulations! You just delivered a successful database cloud implementation project!

    In future posts, we will cover these additional useful topics around database cloud:
    - DBaaS implementation tips and tricks – right from setup to self service to managing the cloud lifecycle
    - 'How to' enable real production database copies in DBaaS with rapid provisioning in database cloud
    - A case study of a customer who recently achieved success with their transformational journey from a traditional silo'ed environment onto an Exadata based database cloud using Enterprise Manager 12c

    More Information: Podcast on Database as a Service using Oracle Enterprise Manager 12c | Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide | DBaaS Cookbook | Exadata Discovery Cookbook | Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal | Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
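    As a side note on the roles step (step 3) above, user and role administration can also be scripted with the EM command line interface rather than the Setup menu. The sketch below is illustrative only: verify the verbs and flags against your emcli version, and note that the role and user names are made up:

        # log in to EM CLI (assumes emcli is already configured against your OMS)
        emcli login -username=SYSMAN

        # create a custom SSA role based on EM_SSA_USER, then an SSA user holding that role
        emcli create_role -name="ACME_SSA_USER" -roles="EM_SSA_USER"
        emcli create_user -name="ssa_dev1" -password="Welcome1" -roles="ACME_SSA_USER"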

    Read the article

  • Beginner Guide to User Styles for Firefox

    - by Asian Angel
    While the default styles for most websites are nice, there may be times when you would love to tweak how things look. See how easy it can be to change how websites look with the Stylish Extension for Firefox. Note: Scripts from Userstyles.org can also be added to Greasemonkey if you have it installed.

    Getting Started: After installing the extension you will be presented with a first run page. You may want to keep it open so that you can browse directly to the Userstyles.org website using the link in the upper left corner. In the lower right corner you will have a new Status Bar Icon. If you have used Greasemonkey before, this icon works a little differently. It will be faded out because no user style scripts are active at the moment. You can use either a left or right click to access the Context Menu. The user style script management section is also added into your Add-ons Management Window instead of being separate. When you reach the user style scripts homepage you can choose to either learn more about the extension & scripts or start hunting for lots of user style script goodness. There will be three convenient categories to get you jump-started if you wish. You could also conduct a search if you have something specific in mind. Here is some information directly from the website provided for your benefit. Notice the reference to using these scripts with Greasemonkey. This section shows you how the scripts have been categorized and can give you a better idea of how to search for something more specific.

    Finding & Installing Scripts: For our example we decided to look at the Updated Styles section first. Based on the page number listing at the bottom there are a lot of scripts available to look through. Time to refine our search a little bit. Using the drop-down menu we selected site styles and entered Yahoo in the search blank. Needless to say, 5 pages was a lot easier to look through than 828. We decided to install the Yahoo! Result Number Script. When you do find a script (or scripts) that you like, simply click on the Install with Stylish Button. A small window will pop up giving you the opportunity to preview, proceed with the installation, edit the code, or cancel the process. Note: In our example the Preview Function did not work, but it may be something particular to the script or our browser’s settings. If you decide to do some quick editing, the window shown above will switch over to this one. To return to the previous window and install the user style script, click on the Switch to Install Button. After installing the user style, the green section in the script’s webpage will actually change to this message… Opening up the Add-ons Manager Window shows our new script ready to go. The script worked perfectly when we conducted a search at Yahoo…the Status Bar Icon also changed from faded out to full color (another indicator that everything is running nicely).

    Conclusion: If you prefer a custom look for your favorite websites then you can have a lot of fun experimenting with different user style scripts. Note: See our article here for specialized How-To Geek User Style Scripts that can be added to your browser.

    Links: Download the Stylish Extension (Mozilla Add-ons) | Visit the Userstyles.org Website | Install the Yahoo!
    Result Number User Style

    Read the article

  • Wireless not working on Dell XPS 17 after installing 12.04

    - by user60622
    I (Linux newbie) have a Dell XPS 17 and tried to install Ubuntu 12.04. After installation all nearby WLAN access points are detected, but I cannot connect (although I am able to connect with other computers, as well as with the Dell XPS 17 under Windows). Outputs:

    iwconfig
    lo        no wireless extensions.
    wlan0     IEEE 802.11bg  ESSID:"LerchenPoint"
              Mode:Managed  Frequency:2.412 GHz  Access Point: 58:6D:8F:A0:2D:58
              Bit Rate=1 Mb/s  Tx-Power=14 dBm
              Retry long limit:7  RTS thr:off  Fragment thr:off
              Power Management:off
              Link Quality=70/70  Signal level=-37 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:0  Invalid misc:19  Missed beacon:0
    eth0      no wireless extensions.

    sudo lshw -class network
    *-network
        description: Wireless interface
        product: Centrino Wireless-N 1000
        vendor: Intel Corporation
        physical id: 0
        bus info: pci@0000:04:00.0
        logical name: wlan0
        version: 00
        serial: 00:26:c7:99:98:28
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
        configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-24-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg
        resources: irq:50 memory:f0400000-f0401fff
    *-network
        description: Ethernet interface
        product: RTL8111/8168B PCI Express Gigabit Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:0a:00.0
        logical name: eth0
        version: 06
        serial: f0:4d:a2:56:e3:94
        size: 1Gbit/s
        capacity: 1Gbit/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.0.123 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s
        resources: irq:47 ioport:6000(size=256) memory:f0a04000-f0a04fff memory:f0a00000-f0a03fff

    dmesg | grep iwl
    [   10.157531] iwlwifi 0000:04:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    [   10.157561] iwlwifi 0000:04:00.0: setting latency timer to 64
    [   10.157598] iwlwifi 0000:04:00.0: pci_resource_len = 0x00002000
    [   10.157599] iwlwifi 0000:04:00.0: pci_resource_base = ffffc90011090000
    [   10.157601] iwlwifi 0000:04:00.0: HW Revision ID = 0x0
    [   10.157731] iwlwifi 0000:04:00.0: irq 50 for MSI/MSI-X
    [   10.157834] iwlwifi 0000:04:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C
    [   10.157976] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S
    [   10.179772] iwlwifi 0000:04:00.0: device EEPROM VER=0x15d, CALIB=0x6
    [   10.179775] iwlwifi 0000:04:00.0: Device SKU: 0X50
    [   10.179777] iwlwifi 0000:04:00.0: Valid Tx ant: 0X1, Valid Rx ant: 0X3
    [   10.179796] iwlwifi 0000:04:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels
    [   10.574728] iwlwifi 0000:04:00.0: loaded firmware version 39.31.5.1 build 35138
    [   10.726409] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs'
    [   19.714132] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S
    [   19.777862] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S
    [ 2251.603089] iwlwifi 0000:04:00.0: PCI INT A disabled
    [ 2266.578350] iwlwifi 0000:04:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    [ 2266.578399] iwlwifi 0000:04:00.0: setting latency timer to 64
    [ 2266.578435] iwlwifi 0000:04:00.0: pci_resource_len = 0x00002000
    [ 2266.578437] iwlwifi 0000:04:00.0: pci_resource_base = ffffc90011090000
    [ 2266.578439] iwlwifi 0000:04:00.0: HW Revision ID = 0x0
    [ 2266.578704] iwlwifi 0000:04:00.0: irq 50 for MSI/MSI-X
    [ 2266.578808] iwlwifi 0000:04:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C
    [ 2266.578916] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S
    [ 2266.600709] iwlwifi 0000:04:00.0: device EEPROM VER=0x15d, CALIB=0x6
    [ 2266.600712] iwlwifi 0000:04:00.0: Device SKU: 0X50
    [ 2266.600713] iwlwifi 0000:04:00.0: Valid Tx ant: 0X1, Valid Rx ant: 0X3
    [ 2266.600727] iwlwifi 0000:04:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels
    [ 2266.605978] iwlwifi 0000:04:00.0: loaded firmware version 39.31.5.1 build 35138
    [ 2266.606331] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs'
    [ 2266.614179] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S
    [ 2266.681541] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S

    Solutions I tried:

    rfkill list all
    0: dell-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    2: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no

    echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf
    options iwlwifi 11n_disable=1

    sudo modprobe -rfv iwlwifi
    WARNING: All config files need .conf: /etc/modprobe.d/blacklist, it will be ignored in a future release.
    rmmod /lib/modules/3.2.0-24-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko
    rmmod /lib/modules/3.2.0-24-generic/kernel/net/mac80211/mac80211.ko
    rmmod /lib/modules/3.2.0-24-generic/kernel/net/wireless/cfg80211.ko

    sudo modprobe iwlwifi
    WARNING: All config files need .conf: /etc/modprobe.d/blacklist, it will be ignored in a future release.

    Replacing iwlwifi-1000-5.ucode (current driver) with iwlwifi-1000-3.ucode.

    sudo jockey-gtk:
    (jockey-gtk:2493): Gtk-CRITICAL **: gtk_icon_set_render_icon_pixbuf: assertion icon_set != NULL' failed
    (jockey-gtk:2493): Gtk-CRITICAL **: gtk_icon_set_render_icon_pixbuf: assertion icon_set != NULL' failed
    Nothing is listed in "Additional drivers" (German: "Zusätzliche Treiber").

    gksudo gedit /etc/modprobe.d/blacklist.conf
    add "blacklist acer_wmi"

    Any help would be appreciated very much. Thanks!!

    Read the article

  • OBIEE 11.1.1.5.0 BP2 patch released

    - by THE
    We are happy to announce that the OBIEE 11.1.1.5.0 BP2 patch is released for four platforms: Win64, Linux64, AIX64, and Solaris SPARC 64. The remaining four platforms - Win32, Linux32, HP-Itanium, and Solaris x86-64 - are expected in a few weeks. This is released as patch 13611078 on MOS / http://support.oracle.com. Customers can download this patch directly; there is no password needed. Please note these points:
    - The README contains a list of all bug fixes included in this patch. (Only "new" fixes are listed in the readme of the BP2 patch. The fixes in the BP1 patch (aka PS1 - Patch 13562882) are included in the BP2 patch, even though they are not explicitly listed in the BP2 Readme. The readme is currently under review to reflect this.)
    - This is a (mostly) cumulative bundle patch, and includes all fixes from PS1 (patch 13562882), which was released for the Linux64 platform. Customers who have PS1 applied will get the expected OPatch conflict message. Since BP2 is cumulative, you can safely roll back PS1. You can do this prior to applying BP2, or you can choose to roll back at the time of applying the patch.
    - Likewise, customers who have other one-off patches applied will get the expected OPatch conflict message. If you have questions about this, please review the applied patches and compare them with the list of bug fixes in the READMEs of BP2 and the BP1 Patch 13562882. If all the bug fixes are included, you can continue with patch installation and roll back the applied patches.
    - Please note, this is not a fully cumulative patch on 11.1.1.5.0. This means it does not contain all one-off patches given out so far on top of 11.1.1.5.0. There is a small number of such bug fixes remaining, which will all be included in the BP3 patch. In case you encounter this, please have Support log OOB (one-off backport) requests for the missing bug fixes so they can be included in the BP3 cumulative bundle patch, which is expected to be fully cumulative going forward.
    - This BP2 includes the CPU patch fix from BUG 12830486 - OCT 2011 CPU - UPDATE FOR OBIEE 11.1.1.5.0.
    The BP3 patch is in the planning stage; no ETA is announced yet.
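    For reference, the conflict check and rollback flow mentioned above looks roughly like this with OPatch. This is a generic sketch, not taken from the patch README; it assumes ORACLE_HOME points at the OBIEE Oracle home and the patch has been unzipped into 13611078/:

        # see which patches are currently applied to the home
        $ORACLE_HOME/OPatch/opatch lsinventory

        # roll back BP1/PS1 first if it is applied (safe, since BP2 is cumulative)
        $ORACLE_HOME/OPatch/opatch rollback -id 13562882

        # apply BP2 from its unzipped directory
        cd 13611078
        $ORACLE_HOME/OPatch/opatch apply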

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA Users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the respective service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production scale data, either for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So, we need to populate the database using the post deployment script option, without any additional work for the SSA Users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even have your own method plugged in as part of the post deployment script option.

    Using Data Pump to populate databases
    These are the steps to be followed to implement extending DBaaS using the Data Pump methodology:
    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:
    -- Full export
    expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull
    Figure-1: Full export of database using Data Pump
    2. Create a post deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned:
    -- Full import
    declare
       h1 NUMBER;
    begin
       -- Creating the directory object where the source database dump is backed up.
       execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
       -- Running import
       h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
       dbms_datapump.set_parallel(handle => h1, degree => 1);
       dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
       dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
       dbms_datapump.start_job(handle => h1);
       dbms_datapump.detach(handle => h1);
    end;
    /
    Figure-2: Importing using Data Pump PL/SQL procedures
    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.
    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the Customize "Create Database Deployment Procedure" flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.
    Figure-3: Using the Custom Script option for calling the import SQL
    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
    Using Transportable tablespaces to populate databases
    A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:
    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from Big Endian to Little Endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out if any conversion is required and describes the steps required to convert datafiles and back up the tablespace (a generic sketch also appears at the end of this post).
    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).
    3. Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.
    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be brought in by the post deployment step.
    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.
    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
    More Information: Database-as-a-Service on Exadata Cloud | Podcast on Database as a Service using Oracle Enterprise Manager 12c | Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide | DBaaS Cookbook | Exadata Discovery Cookbook | Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal | Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
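    Returning to step 1 of the transportable tablespace flow above: the production-side backup generally amounts to a read-only export of the tablespace metadata plus an optional endian conversion. A minimal sketch, with an assumed tablespace name APPDATA and illustrative paths (the cookbook scripts linked above remain the authoritative version):

        # in SQL*Plus as SYSDBA, make the tablespace read only before transport:
        #   alter tablespace APPDATA read only;

        # export the tablespace metadata with Data Pump
        expdp system DIRECTORY=data_pump_dir TRANSPORT_TABLESPACES=APPDATA \
              DUMPFILE=appdata_tts.dmp LOGFILE=appdata_tts.log

        # convert the datafiles with RMAN only if source and destination endianness differ
        rman target / <<EOF
        CONVERT TABLESPACE APPDATA
          TO PLATFORM 'Linux x86 64-bit'
          FORMAT '/stage/tts/%U';
        EOF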

    Read the article

  • obiee 10g teradata Solaris deployment

    - by user554629
    I have 3-4 years worth of notes on proper Teradata deployment across multiple operating systems. The topic is too large to cover succinctly in a blog entry. I'm trying something new: document a specific situation, consolidate the facts, document diagnostic procedures, and then clone the structure to cover other obiee deployments (11g and other operating systems). Until the icon below is removed, this blog entry may be revised frequently. No construction between June 6th and June 25th.

    Getting started
    obiee 10g certification: pg 24-25; Teradata V2R5.1.x - V2R6.2, Client 13.10, certified 10.1.3.4.1
    obiee 10g documentation: Deployment Guide, Server Administration, Install/Config Guide
    obiee overview
    teradata connectivity downloads (requires registration)
    solaris odbc drivers: sparc 13.10: choose 13.10.00.04 (ReadMe); sparc 14.00: probably would work, but not certified by Oracle on 10g
    I assume you have obiee 10.1.3.4.1 installed; 10.1.3.4.2 would be a better choice. The Teradata odbc install requires root for Solaris pkgadd. Only one version of Teradata odbc can be installed; symbolic links to the current version are created in /usr/lib at install.

    obiee implementation background
    Database access has two types of implementation: native and odbc. Native drivers use DB vendor client interfaces for access; odbc drivers are provided by the DB vendor for DB access. Teradata is an odbc interface database. odbc drivers require an ODBC Driver Manager; obiee uses the Merant Data Direct driver manager. obiee servers communicate with one another using odbc. The internal odbc driver is implemented by the obiee team and requires the Merant Driver Manager. Teradata supplies a Driver Manager, which is built by Merant, but it should not be used in obiee. The nqsserver shared library deployment looks like this: OBIEE Server <-> DataDirect Manager <-> Teradata Driver <-> Teradata Database

    nqsserver startup
    $ cd $BI/setup
    $ . ./sa-init64.sh
    $ run-sa.sh autorestart64
    The following files are referenced from setup: .variant.sh, user.sh, NQSConfig.INI, DBFeatures.INI, $ODBCINI (odbc.ini), sqlnet.ora

    How does nqsserver connect to Teradata? A teradata DSN is created in the RPD (TD71). setup/odbc.ini contains:
    [ODBC Data Sources]
    TD71=tdata.so
    [TD71]
    Driver=/opt/tdodbc/odbc/drivers/tdata.so
    Description=Teradata V7.1.0
    DBCName=###.##.##.###
    LastUser=
    Username=northwind
    Password=northwind
    Database=
    DefaultDatabase=northwind

    setup/user.sh contains:
    LIBPATH=/opt/tdicu/lib_64:/usr/odbc/lib:/usr/odbc/drivers:/usr/lpp/tdodbc/odbc/drivers:$LIBPATH
    export LIBPATH

    setup/.variant.sh contains:
    if [ "$ANA_SERVER_64" = "1" ]; then
      ANA_BIN_DIR=${SAROOTDIR}/server/Bin64
      ANA_WEB_DIR=${SAROOTDIR}/web/bin64
      ANA_ODBC_DIR=${SAROOTDIR}/odbc/lib64

    setup/sa-run.sh contains:
    . ${ANA_INSTALL_DIR}/setup/.variant.sh
    . ${ANA_INSTALL_DIR}/setup/user.sh
    logfile="${SAROOTDIR}/server/Log/nqsserver.out.log"
    ${ANA_BIN_DIR}/nqsserver -quiet >> ${logfile} 2>&1 &

    nqsserver is running: nqsserver produces $BI/server/nqsserver.log. At startup, the native database drivers connect and record DB versions. tdata.so is not loaded until a Teradata DB connection is attempted.

    Teradata odbc client installation
    Accept all the defaults for pkgadd. Install in /opt.
    $ mkdir odbc
    $ cd odbc
    $ gzip -dc ../tdodbc__solaris_sparc.13.10.00.04.tar.gz | tar -xf -
    $ sudo su
    # pkgadd -d . TeraGSS
    # pkgadd -d . tdicu1310
    # pkgadd -d . tdodbc1310

    Directory Notes:
    /opt/teradata/client/13.10/odbc_64/lib/tdata.so - the 64-bit obiee library loaded by nqsserver
    /opt/teradata/client/13.10/odbc_64/lib is not needed in LD_LIBRARY_PATH
    /opt/teradata/client/13.10/tdicu/lib64 is needed in LD_LIBRARY_PATH
    /usr/odbc should not be referenced; it is a link to 32-bit libraries
    LD_LIBRARY_PATH_64 should not be used.

    Useful bash functions and aliases
    export SAROOTDIR=/export/home/dw_adm/OracleBI
    export TERA_HOME=/opt/teradata/client/13.10
    export ORACLE_HOME=/export/home/oracle/product/10.2.0/client
    export ODBCINI=$SAROOTDIR/setup/odbc.ini
    export TD_ICU_DATA=$TERA_HOME/tdicu/lib64
    alias cds="alias | grep '^alias cd' | sed 's/^alias //' | sort"
    alias cdtd="cd $TERA_HOME; ls"
    alias cdtdodbc="cd $TERA_HOME/odbc_64; ls -l"
    alias cdtdicu="cd $TERA_HOME/tdicu/lib64; ls -l"
    alias cdbi="cd $SAROOTDIR; ls"
    alias cdbiodbc="cd $SAROOTDIR/odbc; ls -l"
    alias cdsetup="cd $SAROOTDIR/setup; ls -ltr"
    alias cdsvr="cd $SAROOTDIR/server; ls"
    alias cdrep="cd $SAROOTDIR/server/Repository; ls -ltr"
    alias cdsvrcfg="cd $SAROOTDIR/server/Config; ls -ltr"
    alias cdsvrlog="cd $SAROOTDIR/server/Log; ls -ltr"
    alias cdweb="cd $SAROOTDIR/web; ls"
    alias cdwebconfig="cd $SAROOTDIR/web/config; ls -ltr"
    alias cdoci="cd $ORACLE_HOME; ls"
    pkgfiles() { pkgchk -l $1 | awk '/^Pathname/ {print $2}'; }
    pkgfind()  { pkginfo | egrep -i $1 ; }
    Examples:
    $ pkgfind td
    $ pkgfiles tdodbc1310 | grep 64
    $ cds
    $ cdtdodbc
    $ cdsetup
    $ cdsvrlog
    $ cdweblog
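    Once the DSN resolves and nqsserver can load tdata.so, a quick end-to-end smoke test is to push a logical SQL statement through the BI Server with nqcmd. A sketch only - the DSN, credentials, and query below are placeholders for your own:

        # exercise the full chain: nqcmd -> BI Server -> DataDirect -> tdata.so -> Teradata
        cd $SAROOTDIR/server/Bin64
        echo 'SELECT "D0 Time"."T05 Per Name Year" FROM "Sample Sales";' > /tmp/smoke.sql
        ./nqcmd -d AnalyticsWeb -u Administrator -p password -s /tmp/smoke.sql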

    Read the article

  • Implications of Java 6 End of Public Updates for EBS Users

    - by Steven Chan (Oracle Development)
    The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap. The latest updates to that page (as of Sept. 19, 2012) state (emphasis added): Java SE 6 End of Public Updates Notice: After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support.

    What does this mean for Oracle E-Business Suite users? EBS users fall under the category of "enterprise users" above. Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates after February 2013. In other words, nothing will change for EBS users after February 2013. EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6. These Java SE 6 updates will be made available to EBS users for the Extended Support periods documented in the Oracle Lifetime Support policy document for Oracle Applications (PDF):
    - EBS 11i Extended Support ends November 2013
    - EBS 12.0 Extended Support ends January 2015
    - EBS 12.1 Extended Support ends December 2018

    Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients? No. This upgrade will be highly recommended but currently remains optional. JRE 6 will be available to Windows users to run with EBS for the duration of your respective EBS Extended Support period. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients. The certification of Oracle E-Business Suite with JRE 7 (for desktop clients accessing EBS Forms-based content) is in its final stages. If you plan to upgrade your EBS desktop clients to JRE 7 when that certification is released, you can get a head-start on that today.

    Coexistence of JRE 6 and JRE 7 on Windows desktops: The upgrade to JRE 7 will be highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290801.1 and 393931.1.

    Applying updates to JRE 6 and JRE 7 on Windows desktops: Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. Auto-update will only keep JRE 7 up-to-date for Windows users with both JRE 6 and 7 installed. JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Java SE CPUs will be available via My Oracle Support. EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2). The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin. An RSS feed is available on that site for those who would like to be kept up-to-date.

    What will Mac users need? Oracle will provide updates to JRE 7 for Mac OS X users. EBS users running Macs will need to upgrade to JRE 7 to receive JRE updates. The certification of Oracle E-Business Suite with JRE 7 for Mac-based desktop clients accessing EBS Forms-based content is underway. Mac users waiting for that certification may find this article useful: How to Reenable Apple Java 6 Plug-in for Mac EBS Users.

    Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers? No. This upgrade will be highly recommended but will be optional for EBS application tier servers running on Windows, Linux, and Solaris. You can choose to remain on JDK 6 for the duration of your respective EBS Extended Support period. If you remain on JDK 6, you will continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6. The certification of Oracle E-Business Suite with JDK 7 for EBS application tier servers on Windows, Linux, and Solaris as well as other platforms such as IBM AIX and HP-UX is planned. Customers running platforms other than Windows, Linux, and Solaris should refer to their Java vendor's sites for more information about their support policies.

    Related Articles: Planning Bulletin for JRE 7: What EBS Customers Can Do Today | EBS 11i and 12.1 Support Timeline Changes | Frequently Asked Questions about Latest EBS Support Changes | Critical Patch Updates During EBS 11i Exception to Sustaining Support Period
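    One small practical step before deciding on the JDK 7 upgrade for an application tier is simply to record what each tier runs today. A sketch with an illustrative path only; EBS tiers reference the JDK configured for the tech stack, so check your own context file for the real location:

        # report the JDK in use on an application tier node (path is an example, not a standard)
        JDK_TOP=/u01/app/ebs/apps/tech_st/10.1.3/appsutil/jdk
        $JDK_TOP/bin/java -version 2>&1 | head -1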

    Read the article

  • HP Pavilion tx2000 - Wifi adapter no longer works after moving from 12.04 to a 12.10 clean install

    - by Marek L.
    I have an HP Pavilion tx2000 that I have been running Ubuntu 12.04 on for a couple of months without any problems (wifi worked great) until yesterday, when my hard drive failed. I replaced the hard drive and decided to install Ubuntu 12.10. Unlike 12.04, the wifi did not work after the installation finished and all the updates were installed (over Ethernet). The network drop down in the top right didn't even show a wireless option. I Googled about for a bit and found some solutions that seemed like they might work. Unfortunately they did not. Here is what I tried:

    sudo apt-get remove bcmwl-kernel-source
    sudo apt-get install b43-fwcutter
    sudo apt-get install firmware-b43-lpphy-installer

    Restart the computer. And the wifi still didn't work. At which point I panicked a bit and tried to undo the previous commands by running:

    sudo apt-get remove b43-fwcutter firmware-b43-lpphy-installer
    sudo apt-get install bcmwl-kernel-source

    Restart the computer. The wifi still doesn't work. This is where I stopped, because I have no idea what I am doing and don't want to mess something up. The network drop down still doesn't show a wireless option, and the hardware wifi switch on the laptop is amber (it turns blue when the wifi is on). Using the hardware switch does not change the color.

    Output from sudo lspci:
    ...
    08:00.0 Network controller: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller (rev 01)
    ...

    Output from sudo lshw -class network:
    *-network UNCLAIMED
        description: Network controller
        product: BCM4322 802.11a/b/g/n Wireless LAN Controller
        vendor: Broadcom Corporation
        physical id: 0
        bus info: pci@0000:08:00.0
        version: 01
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list
        configuration: latency=0
        resources: memory:d1100000-d1103fff
    ...

    Output from sudo rfkill list all:
    0: hp-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: yes

    UPDATE: After writing up this question I tried the following command:

    sudo rfkill unblock all

    At first it didn't do anything, but after running it about four times, sudo rfkill list all now returns:

    0: hp-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: no

    But the network menu still does not have a wireless option and the hardware switch still glows amber. Pushing the hardware switch turns the hard block back on, and I have to run sudo rfkill unblock all multiple times again to turn it off. Any help is appreciated!
    Update 2: Full output from sudo lspci -nn:
    00:00.0 Host bridge [0600]: Advanced Micro Devices [AMD] RS780 Host Bridge [1022:9600]
    00:01.0 PCI bridge [0604]: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (int gfx) [1022:9602]
    00:04.0 PCI bridge [0604]: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 0) [1022:9604]
    00:05.0 PCI bridge [0604]: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 1) [1022:9605]
    00:06.0 PCI bridge [0604]: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) [1022:9606]
    00:11.0 SATA controller [0106]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] [1002:4391]
    00:12.0 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller [1002:4397]
    00:12.1 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller [1002:4398]
    00:12.2 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller [1002:4396]
    00:13.0 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller [1002:4397]
    00:13.1 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller [1002:4398]
    00:13.2 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller [1002:4396]
    00:14.0 SMBus [0c05]: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller [1002:4385] (rev 3a)
    00:14.1 IDE interface [0101]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 IDE Controller [1002:439c]
    00:14.2 Audio device [0403]: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) [1002:4383]
    00:14.3 ISA bridge [0601]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller [1002:439d]
    00:14.4 PCI bridge [0604]: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge [1002:4384]
    00:14.5 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI2 Controller [1002:4399]
    00:18.0 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor HyperTransport Configuration [1022:1300] (rev 40)
    00:18.1 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor Address Map [1022:1301]
    00:18.2 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor DRAM Controller [1022:1302]
    00:18.3 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor Miscellaneous Control [1022:1303]
    00:18.4 Host bridge [0600]: Advanced Micro Devices [AMD] Family 11h Processor Link Control [1022:1304]
    01:05.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI RS780M/RS780MN [Mobility Radeon HD 3200 Graphics] [1002:9612]
    08:00.0 Network controller [0280]: Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b] (rev 01)
    09:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 02)
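    Side note on the repeated-unblock workaround described above: the retry can be scripted so it loops until the hard block actually clears. This only restates the commands already shown in the question:

        # keep unblocking until rfkill stops reporting a hard block
        while sudo rfkill list all | grep -q "Hard blocked: yes"; do
            sudo rfkill unblock all
            sleep 1
        done
        sudo rfkill list all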

    Read the article

  • The Linux powered LAN Gaming House

    - by sachinghalot
    LAN parties offer the enjoyment of head-to-head gaming in a real-life social environment. In general, they are experiencing a decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work. Varda has done his own write-ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer.
The client computers themselves are rack mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio, and USB for mouse and keyboard). Each client computer, built into a 3U rack mount case, is a well-specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive.
Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site: "Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play." Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize, as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients, as they were easier to modify and came with less licensing baggage.
Linux still runs the server, and all of the tools used are open source software. The hardware here is an Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients': in addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SSD.
When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has set up Linux network booting. DHCP pointed the clients to the server, which then supplied PXELINUX using TFTP. Once booted, file access was accomplished through the network block device (NBD). This is a very easy-to-use system that allows you to serve the contents of a file as a block device over the network. The client computer runs a user mode device driver, and the device can be mounted within the file system using the mount command.
One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system, as the server only sees a single file. The advantage is performance, as the client operating system simply sees a block device; and besides, these security issues aren't relevant in this setup.
Unfortunately, Windows 7 can't use NBD, so Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately gPXE came to the rescue, and he bootstraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot. It can also optionally boot from an HTTP server rather than the more traditional TFTP server.
According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, so the disk and network bandwidth required is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios.
At the time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy-on-write feature of LVM so that any writes from a client are added to a differencing image allocated to that client. As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc.
Summary
Overall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences to most thin client projects. The balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.
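To make the storage side of this concrete, here is a hedged sketch of the two building blocks described above: an LVM copy-on-write snapshot per client, served over NBD. All names, sizes and paths are illustrative assumptions, not Varda's actual configuration:
# on the server: carve a per-client differencing image from the master (LVM snapshot = copy-on-write)
lvcreate -s -n client01-delta -L 20G /dev/vg0/master-image
# serve that snapshot over NBD (classic single-export invocation: port, then file/device)
nbd-server 10809 /dev/vg0/client01-delta
# on a Linux client: attach the export as a local block device and mount it
nbd-client gameserver.lan 10809 /dev/nbd0
mount /dev/nbd0 /mnt/root
Windows clients would speak iSCSI to an equivalent target instead, since they cannot consume NBD exports.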

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimball Group. Have a look at the book samples, especially the Sample package for custom SCD handling. All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns will always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method, so it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE and DT_BYTES.
ChecksumAlgorithm Property
The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0).
Original - The original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release.
FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option.
CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation.
The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?; just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window.
Downloads
The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
Checksum Transformation for SQL Server 2005
Checksum Transformation for SQL Server 2008
Checksum Transformation for SQL Server 2012
Version History
SQL Server 2012
Version 3.0.0.27 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010)
SQL Server 2008
Version 2.0.0.27 - Fix for the CRC-32 algorithm that inadvertently made it sort dependent. Fix for a race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010)
Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008)
Version 2.0.0.6 - SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008)
SQL Server 2005
Version 1.5.0.43 - Fix for the CRC-32 algorithm that inadvertently made it sort dependent. Fix for a race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". (19 Oct 2010)
Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008)
Version 1.4.0.0 - Installer refresh only. (22 Dec 2007)
Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006)
Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed custom UI bug with Output column name not persisting. (10 Nov 2005)
Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005)
Version 1.0.0 - Public Release (Beta). (30 Oct 2004)
Screenshot

    Read the article

  • How to move MOSS 2007 to another SharePoint Farm

    - by DipeshBhanani
    It was time for my first onsite client assignment on SharePoint. The client had a one-server production environment. They wanted to upgrade the topology to a completely new SharePoint farm of three servers. So, the task was to move the whole MOSS 2007 stack to the new server environment without impacting data. The last three words, "without impacting data", were actually putting pressure on my head. Moreover, the SSP had to move as well, because additional information had been added for users apart from the AD import.
I thought I had to do only a backup and restore. It appeared pretty easy at first thought. Just because of those three scary words, I decided to check the internet for guidance related to this scenario. I couldn't find anything except the general guidance for moving servers on the Microsoft TechNet site. I promised myself I would start blogging with this post if I succeeded in this task. Well, it took me a long time to write this, but I finally made it. I hope it will be useful to everyone looking at SharePoint server movement.
Before beginning the restoration, make sure that there is no difference in SharePoint versions between the source and destination servers. Also check whether the state of the SharePoint installation at the time of backup and restore is the same (e.g. SharePoint-related service packs and patches, if any).
The main tasks of the server movement are as follows:
1. Back up all the databases.
2. Install and configure SharePoint in the new environment.
3. Deploy all solutions (WSP files) globally to the destination server, for installing features attached to the solutions.
4. Install all the custom features.
5. Deploy/copy custom pages/files which were later added to the "12 hive" folder.
6. Restore the SSP.
7. Restore My Site.
8. Restore the other web applications.
Tasks 3 to 5 are for making sure that we have configured the environment well enough for the web applications to be restored successfully. The main and most complex task was restoring the SSP. I started restoring the SSP through Central Admin. After a while, the restoration status was updated to "unsuccessful". "Damn it, what went wrong?" I thought, looking at the error detail down the page. I can't remember the error message, but I corrected it and restored again.
Actually, once you fail restoring the SSP, unless you clean up all the related stuff properly, your restoration will fail again and again. I wanted to find the actual reason. So I cleaned, restored, cleaned, restored... I tried almost 5-6 times and finally succeeded. I realized how pleasant it is to see the word "Successful" on the screen. Without wasting more of your time, let me write out the detailed steps for restoring the SSP (a consolidated script for the cleanup steps follows at the end of this entry):
1. Delete the SSP with the following STSADM command:
stsadm -o deletessp -title <SSP name> -deletedatabases -force
e.g.: stsadm -o deletessp -title SharedServices1 -deletedatabases -force
2. Check and delete the web application associated with the SSP if it exists.
3. Check and remove the "Alternate Access Mapping" associated with the SSP if it exists.
4. Check and delete the IIS site as well as the application pool associated with the SSP if they exist.
5. Stop the following services:
   - Office SharePoint Server Search
   - Windows SharePoint Services Search
   - Windows SharePoint Services Help Search
6. Delete all the databases associated with or related to the SSP from SQL Server.
7. Reset IIS.
8. Start the following services again:
   - Office SharePoint Server Search
   - Windows SharePoint Services Search
   - Windows SharePoint Services Help Search
9. Restore the new SSP.
After the SSP restoration, everything else completed very smoothly without any more issues. I made a few modifications to the sites for the change of server name, and finally the new environment was ready.
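For reference, the cleanup-and-restart portion of the steps above can be scripted from a command prompt on the MOSS server. This is a hedged sketch only: the SSP title matches the example above, and the Windows service names (OSearch for Office SharePoint Server Search, SPSearch for Windows SharePoint Services Search) are assumptions to verify against services.msc on your farm:
REM step 1: delete the failed SSP and its databases
stsadm -o deletessp -title SharedServices1 -deletedatabases -force
REM step 5: stop the search services (service names are assumptions)
net stop OSearch
net stop SPSearch
REM step 7: reset IIS without dropping active connections
iisreset /noforce
REM step 8: start the search services again
net start OSearch
net start SPSearch
Steps 2-4 and 6 (web application, alternate access mapping, IIS site/app pool and database cleanup) are quicker to verify by hand in Central Administration and SQL Server Management Studio before re-running the restore.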

    Read the article

  • How can I read kindle book under xfce(ubuntu)? (using chromebook)(wine not working)

    - by yshn
    I'm using a Chromebook, dual-booting xfce (Ubuntu) and Chrome OS. The ebook I bought on Amazon is not supported on the Kindle Cloud Reader. Under xfce, I downloaded Wine and tried installing Kindle for PC under Wine; after a couple of tries it always reported an installation error and could not install Kindle, and it's been giving me:
Unhandled exception: unimplemented function msvcp90.dll.??0?$basic_ofstream@DU?$char_traits@D@std@@@std@@QAE@XZ called in 32-bit code (0x7b839cf2).
Register dump: CS:0023 SS:002b DS:002b ES:002b FS:0063 GS:006b EIP:7b839cf2 ESP:0033fcd4 EBP:0033fd38 EFLAGS:00000287( - -- I S - -P-C) EAX:7b826245 EBX:7b894ff4 ECX:00000008 EDX:0033fcf4 ESI:80000100 EDI:00dca568
Stack dump: 0x0033fcd4: 0033fd58 00000008 00000030 80000100 0x0033fce4: 00000001 00000000 7b839cf2 00000002 0x0033fcf4: 7e24b340 7e24f2ca 0000000d 00110000 0x0033fd04: 7bc47a0d 7e1dbff4 7e1417f0 00dca568 0x0033fd14: 0033fd24 7bc65d0b 00110000 00000000 0x0033fd24: 0033fd44 7e141801 7b839caa 7e1dbff4 000c: sel=0067 base=00000000 limit=00000000 16-bit r-x
Backtrace: =0 0x7b839cf2 in kernel32 (+0x29cf2) (0x0033fd38) 1 0x7e24b2a8 in msvcp90 (+0x3b2a7) (0x0033fd68) 2 0x7e216c9d in msvcp90 (+0x6c9c) (0x0033fde8) 3 0x00938fdd in kindle (+0x538fdc) (0x0033fde8) 4 0x0089dc71 in kindle (+0x49dc70) (0x0033fe70) 5 0x7b859cdc call_process_entry+0xb() in kernel32 (0x0033fe88) 6 0x7b85af4f in kernel32 (+0x4af4e) (0x0033fec8) 7 0x7bc71db0 call_thread_func_wrapper+0xb() in ntdll (0x0033fed8) 8 0x7bc7486d call_thread_func+0x7c() in ntdll (0x0033ffa8) 9 0x7bc71d8e RtlRaiseException+0x21() in ntdll (0x0033ffc8) 10 0x7bc49f4e call_dll_entry_point+0x61d() in ntdll (0x0033ffe8) 0x7b839cf2: subl $4,%esp
Modules: Module Address Debug info Name (130 modules) PE 340000- 37d000 Deferred ssleay32 PE 390000- 3ca000 Deferred webcoreviewer PE 3d0000- 3e0000 Deferred pthreadvc2 PE 400000- 1433000 Export kindle PE 1440000- 155c000 Deferred libeay32 PE 1560000- 169f000 Deferred qtscript4 PE 16a0000- 1795000 Deferred libxml2 PE 17a0000- 18c7000 Deferred javascriptcore PE 18d0000- 1974000 Deferred cflite PE 1980000- 2048000 Deferred libwebcore PE 2050000- 208d000 Deferred libjpeg PE 10000000-10a34000 Deferred qtwebkit4 PE 4a800000-4a8eb000 Deferred icuuc46 PE 4a900000-4aa36000 Deferred icuin46 PE 4ad00000-4bb80000 Deferred icudt46 PE 5a4c0000-5a4d4000 Deferred zlib1 PE 61000000-61056000 Deferred qtxml4 PE 62000000-62093000 Deferred qtsql4 PE 64000000-640ef000 Deferred qtnetwork4 PE 65000000-657b8000 Deferred qtgui4 PE 67000000-67228000 Deferred qtcore4 PE 78050000-780b9000 Deferred msvcp100 PE 78aa0000-78b5e000 Deferred msvcr100 ELF 7b800000-7ba15000 Dwarf kernel32 -PE 7b810000-7ba15000 \ kernel32 ELF 7bc00000-7bcc3000 Dwarf ntdll -PE 7bc10000-7bcc3000 \ ntdll ELF 7bf00000-7bf04000 Deferred ELF 7d7f7000-7d800000 Deferred librt.so.1 ELF 7d800000-7d818000 Deferred libresolv.so.2 ELF 7d818000-7d861000 Deferred libdbus-1.so.3 ELF 7d861000-7d873000 Deferred libp11-kit.so.0 ELF 7d873000-7d8f8000 Deferred libgcrypt.so.11 ELF 7d8f8000-7d90a000 Deferred libtasn1.so.3 ELF 7d90a000-7d913000 Deferred libkrb5support.so.0 ELF 7d913000-7d9e2000 Deferred libkrb5.so.3 ELF 7da42000-7da47000 Deferred libgpg-error.so.0 ELF 7da47000-7da6f000 Deferred libk5crypto.so.3 ELF 7da6f000-7da81000 Deferred libavahi-client.so.3 ELF 7da81000-7da8f000 Deferred libavahi-common.so.3 ELF 7da8f000-7db53000 Deferred libgnutls.so.26 ELF 7db53000-7db91000 Deferred libgssapi_krb5.so.2 ELF 7db91000-7dbe4000 Deferred libcups.so.2 ELF 7dc21000-7dc55000 Deferred uxtheme -PE 
7dc30000-7dc55000 \ uxtheme ELF 7dc55000-7dc5b000 Deferred libxfixes.so.3 ELF 7dc5b000-7dc66000 Deferred libxcursor.so.1 ELF 7dc6a000-7dc6e000 Deferred libkeyutils.so.1 ELF 7dc6e000-7dc73000 Deferred libcom_err.so.2 ELF 7dca5000-7dccf000 Deferred libexpat.so.1 ELF 7dccf000-7dd03000 Deferred libfontconfig.so.1 ELF 7dd03000-7dd13000 Deferred libxi.so.6 ELF 7dd13000-7dd17000 Deferred libxcomposite.so.1 ELF 7dd17000-7dd20000 Deferred libxrandr.so.2 ELF 7dd20000-7dd2a000 Deferred libxrender.so.1 ELF 7dd2a000-7dd30000 Deferred libxxf86vm.so.1 ELF 7dd30000-7dd34000 Deferred libxinerama.so.1 ELF 7dd34000-7dd3b000 Deferred libxdmcp.so.6 ELF 7dd3b000-7dd5c000 Deferred libxcb.so.1 ELF 7dd5c000-7dd76000 Deferred libice.so.6 ELF 7dd76000-7deaa000 Deferred libx11.so.6 ELF 7deaa000-7debc000 Deferred libxext.so.6 ELF 7debc000-7dec5000 Deferred libsm.so.6 ELF 7ded4000-7df67000 Deferred winex11 -PE 7dee0000-7df67000 \ winex11 ELF 7df67000-7e001000 Deferred libfreetype.so.6 ELF 7e001000-7e023000 Deferred iphlpapi -PE 7e010000-7e023000 \ iphlpapi ELF 7e023000-7e03e000 Deferred wsock32 -PE 7e030000-7e03e000 \ wsock32 ELF 7e03e000-7e071000 Deferred wintrust -PE 7e040000-7e071000 \ wintrust ELF 7e071000-7e129000 Deferred crypt32 -PE 7e080000-7e129000 \ crypt32 ELF 7e129000-7e158000 Deferred msvcr90 -PE 7e130000-7e158000 \ msvcr90 ELF 7e158000-7e1e5000 Deferred msvcrt -PE 7e170000-7e1e5000 \ msvcrt ELF 7e1e5000-7e2ca000 Dwarf msvcp90 -PE 7e210000-7e2ca000 \ msvcp90 ELF 7e2ca000-7e2ec000 Deferred imm32 -PE 7e2d0000-7e2ec000 \ imm32 ELF 7e2ec000-7e3de000 Deferred oleaut32 -PE 7e300000-7e3de000 \ oleaut32 ELF 7e3de000-7e418000 Deferred winspool -PE 7e3f0000-7e418000 \ winspool ELF 7e418000-7e4f7000 Deferred comdlg32 -PE 7e420000-7e4f7000 \ comdlg32 ELF 7e4f7000-7e51f000 Deferred msacm32 -PE 7e500000-7e51f000 \ msacm32 ELF 7e51f000-7e5cc000 Deferred winmm -PE 7e530000-7e5cc000 \ winmm ELF 7e5cc000-7e641000 Deferred rpcrt4 -PE 7e5e0000-7e641000 \ rpcrt4 ELF 7e641000-7e749000 Deferred ole32 -PE 7e660000-7e749000 \ ole32 ELF 7e749000-7e841000 Deferred comctl32 -PE 7e750000-7e841000 \ comctl32 ELF 7e841000-7ea52000 Deferred shell32 -PE 7e850000-7ea52000 \ shell32 ELF 7ea52000-7eabc000 Deferred shlwapi -PE 7ea60000-7eabc000 \ shlwapi ELF 7eabc000-7ead5000 Deferred version -PE 7eac0000-7ead5000 \ version ELF 7ead5000-7eb35000 Deferred advapi32 -PE 7eae0000-7eb35000 \ advapi32 ELF 7eb35000-7ebf2000 Deferred gdi32 -PE 7eb40000-7ebf2000 \ gdi32 ELF 7ebf2000-7ed32000 Deferred user32 -PE 7ec00000-7ed32000 \ user32 ELF 7ed32000-7ed58000 Deferred mpr -PE 7ed40000-7ed58000 \ mpr ELF 7ed58000-7ed6e000 Deferred libz.so.1 ELF 7ed6e000-7eddd000 Deferred wininet -PE 7ed80000-7eddd000 \ wininet ELF 7eddd000-7ee0f000 Deferred ws2_32 -PE 7ede0000-7ee0f000 \ ws2_32 ELF 7ee0f000-7ee1c000 Deferred libnss_files.so.2 ELF 7ee1c000-7ee28000 Deferred libnss_nis.so.2 ELF 7ee28000-7ee42000 Deferred libnsl.so.1 ELF 7ee42000-7ee4b000 Deferred libnss_compat.so.2 ELF 7efd4000-7f000000 Deferred libm.so.6 ELF f74a3000-f74a7000 Deferred libxau.so.6 ELF f74a8000-f74ad000 Deferred libdl.so.2 ELF f74ad000-f7657000 Deferred libc.so.6 ELF f7658000-f7673000 Deferred libpthread.so.0 ELF f7675000-f767b000 Deferred libuuid.so.1 ELF f7682000-f77c4000 Dwarf libwine.so.1 ELF f77c6000-f77e8000 Deferred ld-linux.so.2 ELF f77e8000-f77e9000 Deferred [vdso].so Threads: process tid prio (all id:s are in hex) 0000000e services.exe 0000001f 0 0000001e 0 00000015 0 00000010 0 0000000f 0 00000012 winedevice.exe 0000001c 0 00000019 0 00000014 0 00000013 0 0000001a plugplay.exe 
00000020 0 0000001d 0 0000001b 0 00000037 explorer.exe 00000038 0 00000042 (D) C:\Program Files (x86)\Amazon\Kindle\Kindle.exe 00000043 0 <== System information: Wine build: wine-1.4 Platform: i386 (WOW64) Host system: Linux Host version: 3.8.11 How can this be fixed?
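The unimplemented function in msvcp90.dll means the installer is hitting Wine's built-in stub of the Visual C++ 2008 runtime. A common workaround is to install Microsoft's native 2008 runtime into the Wine prefix via winetricks; a hedged sketch, assuming winetricks is packaged for your Ubuntu release:
sudo apt-get install winetricks
winetricks vcrun2008   # fetches the native msvcr90/msvcp90 runtime into ~/.wine
wine "C:\Program Files (x86)\Amazon\Kindle\Kindle.exe"
If the installer itself never completed, rerun the Kindle setup program after the vcrun2008 step rather than launching Kindle.exe directly.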

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Modifying the Default Shipped Template

    - by The Old Toxophilist
    Having installed your Exalogic virtual environment, by default you have a single template which can be used to create your vServers. Although this template is suitable for creating simple test or development vServers, it is recommended that you look at creating your own custom vServers that match the environment you wish to build and deploy. This Tea Break Snippet will therefore take you through the simple process of modifying the standard template.
Before You Start
To edit the template you will need the Oracle ModifyJeos utility, which can be downloaded from the eDelivery site. Once the ModifyJeos utility has been downloaded, we can install the rpms onto either an existing vServer or one of the Control vServers:
rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm
rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm
Alternatively, you can install the ModifyJeos packages on a non-Exalogic OEL installation or a VirtualBox image. If you are doing this, assuming OEL 5u8, you will need the following rpms:
rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm
rpm -ivh ovm-el5u2-xvm-jeos-1.0.1-5.el5.i386.rpm
rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm
Base Template
If you have installed the ModifyJeos utility onto a vServer running on the Exalogic, then simply mount /export/common/images from the ZFS storage and you will be able to find the el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz (or similar, depending which version you have) template file. Alternatively, the latest can be downloaded from the eDelivery site. Now that we have the template tgz, we will need to extract it as follows:
tar -zxvf el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz
This will create a directory called BASE which will contain the System.img (vServer image) and vm.cfg (vServer config information). This directory should be renamed to something more meaningful that indicates what we have done to the template, and the simple name / name in the vm.cfg edited for the same reason.
Modifying the Template
Resizing the Root File System
By default the shipped template has a root size of 4 GB, which will leave a vServer created from it running at 90% full on the root disk. We can simply resize the template by executing the following:
modifyjeos -f System.img -T <new size in MB>
For example, to increase the default 4 GB to 40 GB we would execute:
modifyjeos -f System.img -T 40960
Resizing Swap
We can modify the size of the swap space within a template by executing the following:
modifyjeos -f System.img -S <new size in MB>
For example, to increase the swap from the default 512 MB to 4 GB we would execute:
modifyjeos -f System.img -S 4096
Changing RPMs
Adding RPMs
To add RPMs using modifyjeos, complete the following steps (a worked example follows at the end of this entry):
Add the names of the new RPMs in a list file, such as addrpms.lst. In this file, you should list each new RPM on a separate line.
Ensure that all of the new RPMs are in a single directory, such as rpms.
Run the following command to add the new RPMs:
modifyjeos -f System.img -a <path_to_addrpms.lst> -m <path_to_rpms> -nogpg
Where <path_to_addrpms.lst> is the path to the location of the addrpms.lst file, and <path_to_rpms> is the path to the directory that contains the RPMs. The -nogpg option eliminates the signature check on the RPMs.
Removing RPMs
To remove RPMs using modifyjeos, complete the following steps:
Add the names of the RPMs (the ones you want to remove) in a list file, such as removerpms.lst. In this file, you should list each RPM on a separate line. The Oracle Exalogic Elastic Cloud Administrator's Guide provides a list of all RPMs that must not be removed from the vServer.
Run the following command to remove the RPMs:
modifyjeos -f System.img -e <path_to_removerpms.lst>
Where <path_to_removerpms.lst> is the path to the location of the removerpms.lst file.
Mounting the System.img
For all other modifications that are not supported by the modifyjeos command (adding your custom yum repositories, pre-configuring NTP, modifying the default NFSv4 nobody functionality, etc.) we can mount the System.img and access it directly. To facilitate quick and easy mounting/unmounting of the System.img I have put together the simple scripts below. Note that because they rely on exported variables, the scripts should be sourced (e.g. ". ./MountSystemImg.sh") rather than run in a subshell, so that $LOOP and $SYSTEMIMG persist between mount and unmount.
MountSystemImg.sh
#!/bin/sh
# The script assumes it's being run from the directory containing the System.img
# Export for later, i.e. during unmount
export LOOP=`losetup -f`
export SYSTEMIMG=/mnt/elsystem
# Make temp mount directory
mkdir -p $SYSTEMIMG
# Create loop device for the system image
losetup $LOOP System.img
kpartx -a $LOOP
mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
# Change dir into the mounted image
cd $SYSTEMIMG
UnmountSystemImg.sh
#!/bin/sh
# The script assumes it's being run from the directory containing the System.img
# Assumes $LOOP and $SYSTEMIMG still exist from a previous run of MountSystemImg.sh
umount $SYSTEMIMG
kpartx -d $LOOP
losetup -d $LOOP
Packaging the Template
Once you have finished modifying the template, it can simply be repackaged and then imported into EMOC as described in "Exalogic 2.0.1 Tea Break Snippets - Importing Public Server Template". To do this we simply cd to the directory above the one containing the modified files and execute the following:
tar -zcvf <new template name>.tgz <new template directory>
The resulting .tgz file can be copied to the images directory on the ZFS and uploaded using the IB network.
This entry was originally posted on The Old Toxophilist site.
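To make the RPM workflow concrete, here is a hedged end-to-end sketch of adding one package to the image. The package name and paths are illustrative assumptions; check the exact list-file format expected by your modifyjeos version:
# stage the list file and the RPM directory described in the steps above
echo "htop-1.0.1-2.el5.x86_64.rpm" > addrpms.lst
mkdir -p rpms
cp /path/to/htop-1.0.1-2.el5.x86_64.rpm rpms/
# inject the package into the template image, skipping the GPG signature check
modifyjeos -f System.img -a addrpms.lst -m rpms -nogpg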

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation. A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme-performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra-fast InfiniBand switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the InfiniBand switches, etc. It has an InfiniBand data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex.
The system is highly engineered, but it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, operating system versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or systems integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with InfiniBand, myriad ways the components could be interconnected, myriad tunable settings, etc.
But what has really surprised me (and I've been working in this area for 15 years now) is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer:
Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (it mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query InfiniBand's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction. In several daily joint debugging sessions between the Solaris, InfiniBand, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases was just 10 days.
Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support teams, which comprise both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams identified the root cause as a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.
If this had occurred in a "home-grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side followed by the final fix on the Solaris side highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • Updates broke my themes/shell [Ubuntu 12.04 running Gnome 3]

    - by APNW
    I am running gnome-session 3.4.2.1. After the latest updates (listed below) my theme regressed to what looks like Tango; I'm not sure. I am unable to change it using the GNOME Tweak tool or the display settings. I am also unable to change the wallpaper: even though I have selected a wallpaper, it does not actually change. (Screenshots of Synaptic, Chromium, and the wallpaper settings page omitted.) This same problem occurred on my personal computer, and one other computer I have, all running the same software/config. The interesting thing is that while GNOME 3 and Unity are affected, Cinnamon is not. What I've done so far: purged and re-installed both GNOME 3 and Unity; no change noted. So, how do I fix this? Thanks. Here's the installation log:
Start-Date: 2013-11-07 12:01:28
Upgrade: chromium-browser-l10n:i386 (28.0.1500.71-0ubuntu1.12.04.1, 30.0.1599.114-0ubuntu0.12.04.3), libswscale2:i386 (0.8.6-0ubuntu0.12.04.1, 0.8.8-0ubuntu0.12.04.1), chromium-codecs-ffmpeg:i386 (28.0.1500.71-0ubuntu1.12.04.1, 30.0.1599.114-0ubuntu0.12.04.3), chromium-browser:i386 (28.0.1500.71-0ubuntu1.12.04.1, 30.0.1599.114-0ubuntu0.12.04.3), libpostproc52:i386 (0.8.6-0ubuntu0.12.04.1, 0.8.8-0ubuntu0.12.04.1), libavcodec-extra-53:i386 (0.8.6ubuntu0.12.04.1, 0.8.8ubuntu0.12.04.1), libavformat53:i386 (0.8.6-0ubuntu0.12.04.1, 0.8.8-0ubuntu0.12.04.1), libavutil-extra-51:i386 (0.8.6ubuntu0.12.04.1, 0.8.8ubuntu0.12.04.1)
End-Date: 2013-11-07 12:02:00
Start-Date: 2013-11-07 17:32:55
Commandline: aptdaemon role='role-commit-packages' sender=':1.136'
Install: libmusicbrainz5-0:i386 (5.0.1-2~precise2), udisks2:i386 (1.98.0-1~precise1), libclutter-gst-1.0-0:i386 (1.5.4-0ubuntu2), libudisks2-0:i386 (1.98.0-1~precise1), cinnamon-session-common:i386 (2.0.4-20131105043005-precise), librhythmbox-core6:i386 (2.97-1ubuntu1~precise1), gcr:i386 (3.4.1-3~precise1), libcluttergesture-0.0.2-0:i386 (0.0.2.1-2ubuntu3), libmx-1.0-2:i386 (1.4.3-0ubuntu1), guile-2.0-libs:i386 (2.0.5+1-1), libclutter-imcontext-0.1-0:i386 (0.1.4-2build1), libnatpmp1:i386 (20110808-3ubuntu1)
Upgrade: gnome-keyring:i386 (3.2.2-2ubuntu4.1, 3.4.1-4ubuntu1~precise1), cinnamon:i386 (2.0.6-20131026040307-precise, 2.0.10-20131105040309-precise), gir1.2-muffin-3.0:i386 (2.0.3-20131023003029-precise, 2.0.3-20131105003012-precise), gir1.2-totem-1.0:i386 (3.0.1-0ubuntu21.1, 3.4.3-0ubuntu1~precise1), nemo:i386 (2.0.2-20131023010018-precise, 2.0.5-20131105010007-precise), aisleriot:i386 (3.2.3.2-0ubuntu1, 3.4.1-1~precise1), procps:i386 (3.2.8-11ubuntu6.2, 3.2.8-11ubuntu6.3), libcinnamon-desktop0:i386 (2.0.2-20131025011504-precise, 2.0.3-20131105011505-precise), libgck-1-0:i386 (3.2.2-2ubuntu4.1, 3.4.1-3~precise1), totem-plugins:i386 (3.0.1-0ubuntu21.1, 3.4.3-0ubuntu1~precise1), cinnamon-desktop-data:i386 (2.0.2-20131025011504-precise, 2.0.3-20131105011505-precise), rhythmbox:i386 (2.96-0ubuntu4.3, 2.97-1ubuntu1~precise1), libgcr-3-1:i386 (3.2.2-2ubuntu4.1, 3.4.1-3~precise1), seahorse:i386 (3.2.2-0ubuntu2.1, 3.4.1-2~precise1), muffin-common:i386 (2.0.3-20131023003029-precise, 2.0.3-20131105003012-precise), totem-common:i386 (3.0.1-0ubuntu21.1, 3.4.3-0ubuntu1~precise1), libtotem0:i386 (3.0.1-0ubuntu21.1, 3.4.3-0ubuntu1~precise1), rhythmbox-data:i386 (2.96-0ubuntu4.3, 2.97-1ubuntu1~precise1), gir1.2-cinnamondesktop-3.0:i386 (2.0.2-20131025011504-precise, 2.0.3-20131105011505-precise), cinnamon-session:i386 (2.0.1-20131021043004-precise, 2.0.4-20131105043005-precise), rhythmbox-mozilla:i386 (2.96-0ubuntu4.3, 2.97-1ubuntu1~precise1), rhythmbox-plugin-zeitgeist:i386 (2.96-0ubuntu4.3, 2.97-1ubuntu1~precise1), libmuffin0:i386 (2.0.3-20131023003029-precise, 2.0.3-20131105003012-precise), cjs:i386 (2.0.0-20131021020602-precise, 2.0.0-20131105020703-precise), rhythmbox-plugin-cdrecorder:i386 (2.96-0ubuntu4.3, 2.97-1ubuntu1~precise1), cinnamon-common:i386 (2.0.6-20131026040307-precise, 2.0.10-20131105040309-precise), gnome-disk-utility:i386 (3.0.2-2ubuntu7, 3.4.1-0ubuntu1~precise1), nemo-fileroller:i386 (2.0.0-20131021020004-precise, 2.0.0-20131105020003-precise), libnemo-extension1:i386 (2.0.2-20131023010018-precise, 2.0.5-20131105010007-precise), rhythmbox-plugins:i386 (2.96-0ubuntu4.3, 2.97-1ubuntu1~precise1), gimp:i386 (2.8.6-0precise1~ppa, 2.8.8-0precise0~ppa), cinnamon-settings-daemon:i386 (2.0.5-20131026004504-precise, 2.0.6-20131105004505-precise), libgimp2.0:i386 (2.8.6-0precise1~ppa, 2.8.8-0precise0~ppa), gir1.2-rb-3.0:i386 (2.96-0ubuntu4.3, 2.97-1ubuntu1~precise1), wpasupplicant:i386 (0.7.3-6ubuntu2.1, 0.7.3-6ubuntu2.2), libcjs0c:i386 (2.0.0-20131021020602-precise, 2.0.0-20131105020703-precise), nemo-data:i386 (2.0.2-20131023010018-precise, 2.0.5-20131105010007-precise), totem:i386 (3.0.1-0ubuntu21.1, 3.4.3-0ubuntu1~precise1), gimp-data:i386 (2.8.6-0precise1~ppa, 2.8.8-0precise0~ppa), transmission-common:i386 (2.51-0ubuntu1.3, 2.73-0ubuntu1~precise1), cinnamon-translations:i386 (2.0.1-20131021040407-precise, 2.0.1-20131105040807-precise), totem-mozilla:i386 (3.0.1-0ubuntu21.1, 3.4.3-0ubuntu1~precise1), rhythmbox-plugin-magnatune:i386 (2.96-0ubuntu4.3, 2.97-1ubuntu1~precise1), transmission-gtk:i386 (2.51-0ubuntu1.3, 2.73-0ubuntu1~precise1)
End-Date: 2013-11-07 17:34:40
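Before reinstalling anything else, it may be worth resetting the per-user appearance settings, since a theme that "regressed" often just means the dconf keys were clobbered. A hedged sketch for GNOME 3.4 on 12.04 (schema and key names are assumptions to verify with gsettings list-keys):
gsettings reset org.gnome.desktop.interface gtk-theme
gsettings reset org.gnome.desktop.interface icon-theme
gsettings reset org.gnome.desktop.background picture-uri
Then log out and back in; if the settings daemon is what broke, this at least isolates the problem to the session configuration rather than the packages.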

    Read the article

  • Stepping outside Visual Studio IDE [Part 1 of 2] with Eclipse

    - by mbcrump
    "If you're walking down the right path and you're willing to keep walking, eventually you'll make progress." - Barack Obama
In my quest to become a better programmer, I've decided to start the process of learning Java. I will primarily be using the Eclipse IDE. I will not bore you with the history, just what is needed for a .NET developer to get up and running. I will provide links, screenshots and a few brief code tutorials.
Links to documentation: The Official Eclipse FAQs
Links to binaries: Eclipse IDE for Java EE Developers, the Galileo Package (based on Eclipse 3.5 SR2); Sun Developer Network - Java. Eclipse officially recommends Java version 5 (also known as 1.5), although many Eclipse users use the newer version 6 (1.6). That's it; nothing more is required except to compile and run Java.
Installation
Unzip the Eclipse IDE for Java EE Developers and double-click the file named Eclipse.exe. You will probably want to create a link for it on your desktop. Once it's installed and launched, you will have to select a workspace; just accept the defaults.
Let's go ahead and write a simple program. To write a "Hello World" program, follow these steps:
Start Eclipse.
Create a new Java Project: File->New->Project. Select "Java" in the category list, select "Java Project" in the project list, and click "Next". Enter a project name into the Project name field, for example "HW Project". Click "Finish" and allow it to open the Java perspective.
Create a new Java class: Click the "Create a Java Class" button in the toolbar. (This is the icon below "Run" and "Window" with a tooltip that says "New Java Class.") Enter "HW" into the Name field. Click the checkbox indicating that you would like Eclipse to create a "public static void main(String[] args)" method. Click "Finish". A Java editor for HW.java will open.
In the main method enter the following line:
System.out.println("This is my first java program and btw Hello World");
Save using Ctrl-S; this automatically compiles HW.java.
Click the "Run" button in the toolbar (looks like a VCR play button). You will be prompted to create a launch configuration. Select "Java Application", click "New", then click "Run" to run the Hello World program. The console will open and display "This is my first java program and btw Hello World".
You now have your first Java program; let's go ahead and make an applet. Since you already have HW.java open, click inside the window and remove all code, then copy/paste the following code snippet. (Note that the class is named HW so it matches the HW.java file and the HW.class reference in the HTML further down.)
Java code snippet for an applet:
1: import java.applet.Applet;
2: import java.awt.Graphics;
3: import java.awt.Color;
4:
5: @SuppressWarnings("serial")
6: public class HW extends Applet{
7:
8: String text = "I'm a simple applet";
9:
10: public void init() {
11: text = "I'm a simple applet";
12: setBackground(Color.GREEN);
13: }
14:
15: public void start() {
16: System.out.println("starting...");
17: }
18:
19: public void stop() {
20: System.out.println("stopping...");
21: }
22:
23: public void destroy() {
24: System.out.println("preparing to unload...");
25: }
26:
27: public void paint(Graphics g){
28: System.out.println("Paint");
29: g.setColor(Color.blue);
30: g.drawRect(0, 0,
31: getSize().width -1,
32: getSize().height -1);
33: g.setColor(Color.black);
34: g.drawString(text, 15, 25);
35: }
36: }
The Eclipse IDE should now look like the screenshot (omitted here). Click "Run" to run the Hello World applet. Now, let's test our new Java applet. Navigate over to your workspace, for example "C:\Users\mbcrump\workspace\HW Project\bin", and you should see 2 files:
HW.class
java.policy.applet
Create an HTML page with the following code:
1: <HTML>
2: <BODY>
3: <APPLET CODE=HW.class WIDTH=200 HEIGHT=100>
4: </APPLET>
5: </BODY>
6: </HTML>
Open the HTML page in Firefox or IE and you will see your applet running. I hope this brief look at the Eclipse IDE helps someone get acquainted with Java development. Even if your full-time gig is with .NET, it will not hurt to have another language in your tool belt. As always, I welcome any suggestions or comments.
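As a sanity check outside the IDE, the same files can be compiled and exercised with the JDK's command-line tools. A minimal sketch, assuming the JDK's bin directory is on your PATH and the HTML above was saved as page.html next to HW.java:
javac HW.java            # produces HW.class, exactly what Eclipse does on save
appletviewer page.html   # renders the applet without involving a browser
The earlier console version of HW (the one with a main method) can be run the same way with "java HW".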

    Read the article

  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates, I raised the minimum version to 4.3 but still kept the same way of loading the data. Recently, I decided it's time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data. The issue with this, though, is that the old database file is still sitting in the iOS app, so there is no automatic migration to the new document. You basically have to subclass UIManagedDocument with a new model class and override a couple of methods to do it for you. Here's a tutorial on what I did for my app TimeTag.
Step One: Add a new class file in Xcode and subclass "UIManagedDocument"
Go ahead and also add a method to get the managedObjectModel out of this class. It should look like:
@interface TimeTagModel : UIManagedDocument
- (NSManagedObjectModel *)managedObjectModel;
@end
Step Two: Writing the methods in the implementation file (.m)
I first added a shortcut method for applicationDocumentsDirectory, which returns the URL of the app's Documents directory:
- (NSURL *)applicationDocumentsDirectory
{
    return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
}
The next step was to pull in the managed object model file itself (the .momd file). In my project, it's called "minimalTime":
- (NSManagedObjectModel *)managedObjectModel
{
    NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"];
    NSURL *momURL = [NSURL fileURLWithPath:path];
    NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];
    return managedObjectModel;
}
After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method:
- (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL ofType:(NSString *)fileType modelConfiguration:(NSString *)configuration storeOptions:(NSDictionary *)storeOptions error:(NSError **)error
{
    // If a legacy store exists, copy it to the new location
    NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path])
    {
        NSLog(@"Old db exists");
        NSError *thisError = nil;
        [fileManager replaceItemAtURL:storeURL withItemAtURL:legacyPersistentStoreURL backupItemName:nil options:NSFileManagerItemReplacementUsingNewMetadataOnly resultingItemURL:nil error:&thisError];
    }
    return [super configurePersistentStoreCoordinatorForURL:storeURL ofType:fileType modelConfiguration:configuration storeOptions:storeOptions error:error];
}
Basically what's happening above is that it checks for the minimalTime.sqlite file inside the app's Documents directory on the iOS device. If the file exists, it tells you in the console, and then tells the fileManager to replace the storeURL (from the method parameters) with the legacy URL. This basically gives your app access to all the existing data the user has generated (otherwise they would load into a blank app, which would be disastrous). It returns YES if successful (by calling its [super] method).
Final step: Actually load this database
Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method called loadDatabase, which looks like this:
-(void)loadDatabase
{
    static dispatch_once_t onceToken;
    // Only do this once!
    dispatch_once(&onceToken, ^{
        // Get the URL; the minimalTimeDB name is just something I call it
        NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"];
        // Init the TimeTagModel (our custom class we wrote above) with the URL
        self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url];
        // Set up the undo manager if it's nil
        if (self.timeTagDB.undoManager == nil){
            NSUndoManager *undoManager = [[NSUndoManager alloc] init];
            [self.timeTagDB setUndoManager:undoManager];
        }
        // You have to actually check whether it exists already (for some reason you can't just say "open it, and if it's not there, create it")
        if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
            // If it does exist, try to open it, and if it doesn't open, let the user (or at least you) know!
            [self.timeTagDB openWithCompletionHandler:^(BOOL success){
                if (!success) {
                    // Handle the error.
                    NSLog(@"Error opening up the database");
                }
                else{
                    NSLog(@"Opened the file--it already existed");
                    [self refreshData];
                }
            }];
        }
        else {
            // If it doesn't exist, you need to attempt to create it
            [self.timeTagDB saveToURL:url forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success){
                if (!success) {
                    // Handle the error.
                    NSLog(@"Error creating the database");
                }
                else{
                    NSLog(@"Created the file--it did not exist");
                    [self refreshData];
                }
            }];
        }
    });
}
If you're curious what refreshData looks like, it sends out an NSNotification that the database has been loaded:
-(void)refreshData
{
    NSNotification *refreshNotification = [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData object:self.timeTagDB.managedObjectContext userInfo:nil];
    [[NSNotificationCenter defaultCenter] postNotification:refreshNotification];
}
The kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere that keeps track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can have access to it and start passing it around to one another. The reason we do this as a notification is because the open/create step runs in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.

    Read the article

  • Cookbook: SES and UCM setup

    - by George Maggessy
    The purpose of this post is to guide you through setting up the integration between UCM and SES. In my next post I'll show different approaches to integrating WebCenter Portal, UCM and SES based on some common scenarios. Let's get started.
WebCenter Content Configuration
WebCenter Content has a component that adds functionality to the content server to allow it to be searched via Oracle SES. To enable the component installation, go to Administration -> Admin Server and select SESCrawlerExport. Click the update button and restart the UCM_server1 managed server. Once the managed server is back, we'll configure the component. In the menu, under Administration, you should see SESCrawlerExport. Click on the link, and then on Configure SESCrawlerExport. Configure the values below:
Hostname: SES hostname.
Feed Location: Directory where data feeds will be saved.
Metadata List: List of metadata that will be searchable by SES.
After updating the values, click on the Update button. Come back to the SESCrawlerExport Administration UI and click on the Take Snapshot button. It will create the data feeds in the specified Feed Location. To check that the configuration is correct, access the following URL: http://<ucm_server>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default. It should download a config file in the format below:
<?xml version="1.0" encoding="UTF-8"?>
<rsscrawler xmlns="http://xmlns.oracle.com/search/rsscrawlerconfig">
  <feedLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONTROL&source=default]]></feedLocation>
  <errorFileLocation><![CDATA[http://adc6160699.us.oracle.com:16200/cs/idcplg?IdcService=SES_CRAWLER_STATUS&IsJava=1&source=default&StatusFeed=]]></errorFileLocation>
  <feedType>controlFeed</feedType>
  <sourceName>default</sourceName>
  <securityType>attributeBased</securityType>
  <securityAttribute name="Account" grant="true"/>
  <securityAttribute name="DocSecurityGroup" grant="true"/>
  <securityAttribute name="Collab" grant="true"/>
</rsscrawler>
Make sure the Account and DocSecurityGroup values are true.
SES Configuration
Let's start by configuring the Identity Plug-ins in SES. Go to Global Settings -> System -> Identity Management Setup. Select Oracle Content Server and click the Activate button. We'll populate the following values:
HTTP endpoint for authentication: URL to WebCenter Content. Notice that /cs/idcplg was added at the end of the URL.
Admin User: UCM admin user. This user must have access to all CPOE content.
Password: Password for the admin user.
Authentication Type: NATIVE.
Go back to the Home tab and click on Sources on the top left. Select Oracle Content Server on the right and click the Create button.
Configuration URL: URL that points to the configuration file. Example: http://<ucm_hostname>:<port>/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default.
User ID: UCM admin user.
Password: Password for the admin user.
Click on the Authorization tab and add the appropriate values to the fields below. Make sure you see the ACCOUNT and DOCSECURITYGROUP security attributes at the end of the page.
HTTP endpoint for authorization: http://<ucm_hostname>:<port>/cs/idcplg.
Display URL prefix: http://<ucm_hostname>:<port>/cs.
Administrator user: UCM admin user.
Administrator password.
On the Document Types tab, add the documents that should be indexed by SES. As our last step, we'll configure the Federation Trusted Entities under Global Settings:
Entity Name: The user must be present in both the identity management server configured for your WebCenter application and the identity management server configured for Oracle SES. For instance, I used weblogic in my sample.
Password: Entity user password.
Now you are ready to test the integration in the SES UI: http://<ses_hostname>:<port>/search/query/.
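A quick way to verify the crawler endpoint from the command line, before wiring SES to it, is to request the same config URL with curl. A hedged sketch; the host, port and credentials are placeholders for your own environment:
curl -u sysadmin:password "http://ucm-host:16200/cs/idcplg?IdcService=SES_CRAWLER_DOWNLOAD_CONFIG&source=default"
If the component is active, you should get back the rsscrawler XML shown above, with the Account and DocSecurityGroup attributes granted.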

    Read the article

  • SharePoint 2010 Diagnostic Studio Remote Diag

    - by juanlarios
    I have had some time this week to try out some tools that I have been meaning to try out. This week I am trying out the SP 2010 Diagnostic Studio. I installed it successfully and tried it on my development evironment. I was able to build a report and a snapshot of the environment. I decided to turn my attention to my Employer's intranet environment. This would allow me to analyze it and measure it against benchmarks. I didn't want to install the Diagnostic studio on the Production Envorinment, lucky for me, the Diagnostic studio can be run remotely, well...kind of. Issue My development environment is a stand alone, full installation of SharePoint 2010 Server. It has Office 2010, SQL 2008 Enterprise, a DC...well you get the point, it's jammed packed! But more importantly it's a stand alone, self contained VM environment. Well Microsoft has instructions as to how to connect remotely with Diagnostic Studio here. The deciving part of this is that the SP2010DS prompts you for credentails. So I thought I was getting the right account to run the reports. I tried all the Power Shell commands in the link above but I still ended up getting the following errors: 06/28/2011 12:50:18    Connecting to remote server failed with the following error message : The WinRM client cannot process the request...If the SPN exists, but CredSSP cannot use Kerberos to validate the identity of the target computer and you still want to allow the delegation of the user credentials to the target computer, use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Fresh Credentials with NTLM-only Server Authentication.  Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. Try the request again after these changes. For more information, see the about_Remote_Troubleshooting Help topic. 06/28/2011 12:54:47    Access to the path '\\<targetserver>\C$\Users\<account logging in>\AppData\Local\Temp' is denied. You might also get an error message like this: The WinRM client cannot process the request. A computer policy does not allow the delegation of the user credentials to the target computer. Explanation After looking at the event logs on the target environment, I noticed that there were a several Security Exceptions. After looking at the specifics around who was denied access, I was able to see the account that was being denied access, it was the client machine administrator account. Well of course that was never going to work!!! After some quick Googling, the last error message above will lead you to edit the Local Group Policy on the client server. And although there are instructions from microsoft around doing this, it really will not work in this scenario. Notice the Description and how it only applices to authentication mentioned? Resolution I can tell you what I did, but I wish there was a better way but I simply don't know if it's duable any other way. Because my development environment had it's own DC, I didn't really want to mess with Kerberos authentication. I would also not be smart to connect that server to the domain, considering it has it's own DC. I ended up installing SharePoint 2010 Diagnostic Studio on another Windows 7 Dev environment I have, and connected the machien to the domain. 
    I ran all the necessary remote-credential commands mentioned here; those commands add the group policy for you! (A sketch of what they boil down to is shown below.) Once I did this, I was able to authenticate properly and get the reports.

    Conclusion

    You can run SharePoint 2010 Diagnostic Studio remotely, but only under some specific scenarios. A couple of things I should mention: as far as I understand, SP2010DS will install agents on your target environment to run tests and retrieve the data. I was a Farm Administrator and also a server admin on the SharePoint server; I am not 100% sure you need all those permissions, but that's just what I have on my internal intranet. Ideally, I would like a machine with SharePoint 2010 Diagnostic Studio installed that I can run against client environments. It appears I will not be able to do that unless I now enable Kerberos on my Windows 7 machine. If you have it installed the way I would like to have it, please let me know; I'll keep trying to get what I'm after. Hope this helps someone out there doing the same.
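    For reference, a minimal sketch (PowerShell, run elevated) of the CredSSP delegation pattern those commands set up. The server name below is a hypothetical placeholder, and this is the generic WinRM pattern rather than the exact script from the linked article:

        # On the client machine running Diagnostic Studio: allow delegating
        # fresh credentials to the target SharePoint server.
        Enable-WSManCredSSP -Role Client -DelegateComputer "sp2010.contoso.local" -Force

        # On the target SharePoint server: accept delegated credentials.
        Enable-WSManCredSSP -Role Server -Force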

    Read the article

  • Atheros AR2413 wireless not working after shutdown

    - by Chandrasekhar
    I am using Ubuntu 11.04 on an Acer Aspire 3680 laptop and my wifi is not working. I followed the commands below to install the madwifi driver:

        sudo su
        apt-get install subversion
        cd /usr/src
        svn checkout http://madwifi-project.org/svn/madwifi/trunk madwifi
        tar cfvz madwifi.tgz madwifi
        cd madwifi
        make && make install
        echo "blacklist ath5k" >> /etc/modprobe.d/blacklist.conf
        echo "ath_pci" >> /etc/modules
        modprobe ath_pci
        sudo reboot

    After installation I am facing the same problem: my wifi won't work after I shut down. In fact, it didn't work after suspend either, but I fixed that with the following commands (Command 1):

        sudo rmmod -f ath_pci
        sudo rfkill unblock all
        sudo modprobe ath_pci

    along with the line SUSPEND_MODULES=ath_pci added to the file /etc/pm/config.d/madwifi. So if I suspend and then power my laptop back on, the wifi loads fine and causes no problem. But if I shut down my laptop, the wifi never loads again, and each time I have to run an Ubuntu 9.04 live CD to load it. I did try adding Command 1 to /etc/rc.local (see the sketch after the outputs below), but it still doesn't work.

    So my question is: what should I do to make my wireless work without having to run a live CD of Ubuntu 9.04 every time after shutdown? Thanks. Here are the outputs which one might need:

    Output 1

        chandru@chandru-acer:~$ lspci
        00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 02)
        00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 02)
        00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 02)
        00:1c.2 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 3 (rev 02)
        00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 02)
        00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 02)
        00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 02)
        00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 02)
        00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
        00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
        00:1f.2 IDE interface: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA IDE Controller (rev 02)
        00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 02)
        02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8038 PCI-E Fast Ethernet Controller (rev 14)
        0a:03.0 Ethernet controller: Atheros Communications Inc. AR2413 802.11bg NIC (rev 01)
        0a:09.0 CardBus bridge: Texas Instruments PCIxx12 Cardbus Controller
        0a:09.2 Mass storage controller: Texas Instruments 5-in-1 Multimedia Card Reader (SD/MMC/MS/MS PRO/xD)

    Output 2: lsmod

        Module                 Size   Used by
        wlan_tkip              17074  2
        binfmt_misc            13213  1
        parport_pc             32111  0
        ppdev                  12849  0
        snd_hda_codec_si3054   12924  1
        snd_hda_codec_realtek  255882 1
        joydev                 17322  0
        snd_atiixp_modem       18624  0
        snd_via82xx_modem      18305  0
        snd_intel8x0m          18493  0
        snd_ac97_codec         105614 3  snd_atiixp_modem,snd_via82xx_modem,snd_intel8x0m
        snd_hda_intel          24113  2
        ac97_bus               12642  1  snd_ac97_codec
        snd_hda_codec          90901  3  snd_hda_codec_si3054,snd_hda_codec_realtek,snd_hda_intel
        i915                   451053 3
        snd_hwdep              13274  1  snd_hda_codec
        snd_pcm                80042  7  snd_hda_codec_si3054,snd_atiixp_modem,snd_via82xx_modem,snd_intel8x0m,snd_ac97_codec,snd_hda_intel,snd_hda_codec
        snd_seq_midi           13132  0
        snd_rawmidi            25269  1  snd_seq_midi
        drm_kms_helper         40971  1  i915
        snd_seq_midi_event     14475  1  snd_seq_midi
        snd_seq                51291  2  snd_seq_midi,snd_seq_midi_event
        pcmcia                 39671  0
        snd_timer              28659  2  snd_pcm,snd_seq
        snd_seq_device         14110  3  snd_seq_midi,snd_rawmidi,snd_seq
        drm                    184164 4  i915,drm_kms_helper
        yenta_socket           27230  0
        tifm_7xx1              12898  0
        wlan_scan_sta          21945  1
        ath_rate_sample        17279  1
        pcmcia_rsrc            18292  1  yenta_socket
        psmouse                73312  0
        tifm_core              15040  1  tifm_7xx1
        snd                    55295  18 snd_hda_codec_si3054,snd_hda_codec_realtek,snd_atiixp_modem,snd_via82xx_modem,snd_intel8x0m,snd_ac97_codec,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        serio_raw              12990  0
        i2c_algo_bit           13184  1  i915
        soundcore              12600  1  snd
        pcmcia_core            21505  3  pcmcia,yenta_socket,pcmcia_rsrc
        video                  19112  1  i915
        ath_pci                183044 0
        snd_page_alloc         14073  5  snd_atiixp_modem,snd_via82xx_modem,snd_intel8x0m,snd_hda_intel,snd_pcm
        wlan                   224640 5  wlan_tkip,wlan_scan_sta,ath_rate_sample,ath_pci
        ath_hal                398701 3  ath_rate_sample,ath_pci
        lp                     13349  0
        parport                36746  3  parport_pc,ppdev,lp
        usbhid                 41704  0
        hid                    77084  1  usbhid
        sky2                   49172  0

    Output 3

        root@chandru-acer:~# lshw -C network
        PCI (sysfs)
          *-network
               description: Ethernet interface
               product: 88E8038 PCI-E Fast Ethernet Controller
               vendor: Marvell Technology Group Ltd.
               physical id: 0
               bus info: pci@0000:02:00.0
               logical name: eth0
               version: 14
               serial: 00:16:36:fb:aa:64
               capacity: 100Mbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.28 firmware=N/A latency=0 link=no multicast=yes port=twisted pair
               resources: irq:43 memory:44000000-44003fff ioport:2000(size=256)
          *-network
               description: Wireless interface
               product: AR2413 802.11bg NIC
               vendor: Atheros Communications Inc.
               physical id: 3
               bus info: pci@0000:0a:03.0
               logical name: wifi0
               version: 01
               serial: 00:19:7d:d3:0c:fd
               width: 32 bits
               clock: 33MHz
               capabilities: pm bus_master cap_list logical ethernet physical wireless
               configuration: broadcast=yes driver=ath_pci ip=192.168.1.6 latency=96 maxlatency=28 mingnt=10 multicast=yes wireless=IEEE 802.11g
               resources: irq:18 memory:d0000000-d000ffff

    Output 4

        root@chandru-acer:~# lsmod | grep ath_pci
        ath_pci                183044 0
        wlan                   224640 5  wlan_tkip,wlan_scan_sta,ath_rate_sample,ath_pci
        ath_hal                398701 3  ath_rate_sample,ath_pci
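    For reference, here is a minimal sketch of what I mean by adding Command 1 to /etc/rc.local (the module name is ath_pci as above; rc.local runs as root at the end of boot, so no sudo is needed, and the script must end with exit 0):

        #!/bin/sh -e
        # Workaround sketch: force-reload the madwifi driver at boot.
        rmmod -f ath_pci || true
        rfkill unblock all
        modprobe ath_pci
        exit 0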

    Read the article

  • VS2010 crashes when opening a vsp generated using VS 2012

    - by Tarun Arora
    I recently profiled some web applications using Visual Studio 2012; a vsp (Visual Studio Profile) file was generated as a result of the profiling session. I could successfully open the vsp file in Visual Studio 2012 as expected, but when I tried to open the vsp file in Visual Studio 2010, the VS2010 IDE crashed. As a responsible citizen I raised bug # 762202 on the Microsoft Connect site using the Microsoft Visual Studio 2012 Feedback Client. Note – in case you didn't already know, a VSP generated in Visual Studio 2012 is not backward compatible. Please refer below for the steps to reproduce the issue and the resolution of the Connect bug.

    1. Behaviour and Steps to Reproduce the Issue

    Description

    I have generated a vsp file by using the Visual Studio 2012 Standalone Profiler. When I try to open the vsp file in Visual Studio 2010, the IDE crashes. I understand that a vsp generated by VS 2012 cannot be opened in VS 2010, but the IDE crashing is not the behaviour I would expect to see.

    Steps to Reproduce the Issue

    1. Pick up the Standalone Profiler from the VS 2012 installation media. The folder has both x64 and x86 installers; since the machine I am using is 64-bit, I installed the x64 version of the Standalone Profiler.
    2. Configure the system path by adding the profiler's installation folder to the PATH environment variable. In my case this is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools
    3. Create a new environment variable _NT_SYMBOL_PATH and set its value to CACHE*C:\SYMBOLSCACHE;SRV*C:\SYMBOLSCACHE*HTTP://MSDL.MICROSOFT.COM/DOWNLOAD/SYMBOLS;\\FOO\BUILD1234
    4. Open up CMD as an administrator and run 'VSPerfASPNETCmd /tip http://localhost:56180/ /o:C:\Temp\SampleEISK.vsp'
    5. This generates the following message on the cmd:

        Microsoft (R) VSPerf ASP.NET Command, Version 11.0.0.0
        Copyright (C) Microsoft Corporation. All rights reserved.

        Configuring and attaching to ASP.NET process. Please wait.
        Setting up profiling environment.
        Starting monitor.
        Launching ASP.NET service.
        Attaching Monitor to process.
        Launching Internet Explorer.
        The profiler is attached to ASP.net. Please run your application scenario now.
        Press Enter to stop data collection...

    6. I perform certain actions and then come back to the cmd and hit Enter to stop the profiling. Once I do this, the following message is written to the cmd:

        Press Enter to stop data collection...
        Profiling now shut down.
        Report file "C:\Temp\SampleEISK.vsp" was generated.
        Running VsPerfReport, packing symbols into the .VSP.
        Shutting down profiling and restarting ASP.NET. Please wait.
        Restarting w3wp.exe.

    7. I look in the C:\Temp folder and I can see the SampleEISK.vsp file generated. I can successfully open this file in Visual Studio 2012.
    8. When I try to open the vsp file in VS 2010, the VS 2010 IDE crashes. Kaboooom!

    What I would expect to happen: to receive a message such as "VS 2010 does not support the vsp file generated by VS 2012".

    What actually happened: the VS 2010 IDE crashed.

    2. Resolution

    This is a valid bug! However, there isn't much value in releasing a hotfix for this issue. Refer below to the resolution provided by the Visual Studio Profiler Team:

        Thank you for taking the time to report this issue. We completely agree that Visual Studio 2010 should not crash. However, in this particular case this is not a bug we are going to retroactively release a fix to 2010 for at this point. Given that a fix would not unblock the scenario of opening a 2012-created file on Visual Studio 2010, and there is not an active update channel for Visual Studio 2010 other than manually locating and installing hotfixes, we will not be fixing this particular issue.

        Best Regards,
        Visual Studio Profiler Team

    Though it would be great to see the behaviour improved, this is not a defect that will stop you from progressing in any way. It is important to note, however, that VSP files generated by Visual Studio 2012 are not backward compatible, so you should refrain from opening these files in Visual Studio 2010.
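    As an aside, a hedged sketch of the environment setup from steps 2 and 3 as commands (elevated cmd; the paths are the ones from the post, and note that setx stores an expanded copy of PATH, a known setx caveat, so treat this as illustrative rather than the recommended way):

        :: Add the standalone profiler folder to PATH (path from step 2).
        setx PATH "%PATH%;C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools"

        :: Local symbol cache plus the Microsoft public symbol server (values
        :: from step 3; \\FOO\BUILD1234 is the post's own placeholder share).
        setx _NT_SYMBOL_PATH "CACHE*C:\SYMBOLSCACHE;SRV*C:\SYMBOLSCACHE*http://msdl.microsoft.com/download/symbols;\\FOO\BUILD1234"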

    Read the article

  • GRUB is not Booting Correctly

    - by msknapp
    I have a PC with three hard disks. Windows 7 is installed on the first; Ubuntu 14.04 is installed on the third. After I rebooted, it went straight to Windows 7. So I tried explicitly telling my PC to boot from the third hard disk, but that just takes me to the grub rescue prompt. I followed Scott Severence's instructions here to try to recover: essentially, I updated grub, reinstalled grub, and then updated it again. After rebooting, absolutely nothing had changed.

    So instead I tried the boot-repair tool. In the past it had failed for me, saying that I had programs running and it could not unmount drives, when I was running nothing. I never figured out how to solve that problem, but it went away when I bought another hard drive and used that for my Ubuntu installation; I don't know why. In any case, I ran the boot-repair tool and this time it said it was successful. First time for everything, right? I rebooted, only to be taken straight to the grub rescue prompt. So I changed my BIOS settings to boot from the third hard disk. That is the same hard drive where I have Ubuntu and grub installed, and the same one the boot-repair tool told me to use. It still took me straight to the grub rescue prompt. So I went from not being able to boot Ubuntu to not being able to boot either OS installed on my system. Thanks, boot-repair! Boot-repair gave me this URL for future troubleshooting: http://paste.ubuntu.com/8131669

    When I try to boot from the third hard disk, this is my console:

        Loading Operating System ...
        error: attempt to read or write outside of disk 'hd0'.
        Entering rescue mode...
        grub rescue> set
        cmdpath=(hd0)
        prefix=(hd0,gpt2)/boot/grub
        root=hd0,gpt2
        grub rescue> ls
        (hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1) (hd1) (hd2) (hd2,gpt2) (hd2,gpt1) (hd3)

    Those values look correct to me. I have also experimented with changing some of those values, but 'insmod normal' always throws the same error. Somebody please tell me how to fix this. I have tried everything: reinstalling grub and running boot-repair.

    ===========================

    Update: I think the problem might be that the Ubuntu installer did not partition my hard disk correctly. I booted from the live USB, launched gparted, and looked at how it partitioned things. This is what gparted says:

        Partition      File System  Size       Used       Unused     Flags
        /dev/sda1 (!)  unknown      1.00 MiB   ---        ---        bios_grub
        /dev/sda2      ext4         2.71 TiB   47.30 GiB  2.67 TiB
        /dev/sda3      linux-swap   16.00 GiB  0.00 B     16.00 GiB

    So that first line looks problematic. It is supposed to be the /boot partition; however, it was given only 1 MiB? It also says the file system is unknown. (I am assuming that MiB is actually supposed to mean megabyte; no idea why that 'i' is there.) I read the answer by andrew here, and he says he had to do a custom install, explicitly configuring the boot partition. So I think that maybe Ubuntu's installer has a bug in it, where it does not set up the boot partition correctly if you are not installing on the first hard disk in your computer. I am going to try reinstalling with a custom partition scheme. I read elsewhere (askubuntu won't let me post another link) that I don't even need a /boot partition any more. So instead of following Andrew's instructions verbatim, I'm first going to try having just two partitions: one for / and another for my 16 GB swap space, both as primary partitions, with the first formatted as ext4. If that doesn't work, I may try again using /boot.

    ========================

    So I did my custom install with no /boot partition, and it did not work. When I rebooted, I had an error message saying that some address did not exist. So, for the hundredth time, I booted from the live USB and ran boot-repair. Now I get this message:

        GPT detected. Please create a BIOS-Boot partition (>1MB, unformatted filesystem, bios_grub flag). This can be performed via tools such as Gparted. Then try again.

    I feel like I'm running in circles and nobody will help me.
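    For reference, a minimal sketch of what that boot-repair message is asking for, assuming the Ubuntu disk is /dev/sda and the first couple of MiB of the disk are free; the partition number (1 here) is an assumption, so adjust both to your actual layout:

        # Create a small unformatted partition at the start of the GPT disk
        # and flag it bios_grub so grub can embed its core image there.
        sudo parted /dev/sda mkpart primary 1MiB 3MiB
        sudo parted /dev/sda set 1 bios_grub on

    On a GPT disk booted through legacy BIOS, grub stores its core image in this unformatted BIOS-Boot partition, which is also why the installer's 1 MiB bios_grub partition shows an "unknown" file system: that part is expected, not a fault.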

    Read the article
