Search Results

Search found 13537 results on 542 pages for 'installation failure'.


  • Magento installation error: redirects to localhost?!

    - by user177913
    I am unable to log in to the Magento admin. In Magento (in the newest release) a proper domain is required to log in, but how is that possible on a local machine? I found a suggested solution in the Magento forum here: http://www.magentocommerce.com/boards/viewthread/4337/P15/ They ask you to change localhost to http://127.0.0.1, but when I tried it, it still redirects to localhost?! Kindly suggest.
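
    One hedged way to attack this, assuming a default Magento 1.x schema and placeholder database credentials, is to set both base URLs directly in core_config_data and clear the cache so the change takes effect:

        # update the stored base URLs (database name and credentials are placeholders)
        mysql -u root -p magento -e "UPDATE core_config_data \
          SET value='http://127.0.0.1/' \
          WHERE path IN ('web/unsecure/base_url','web/secure/base_url');"
        # Magento caches configuration, so clear it or the old URL keeps winning
        rm -rf var/cache/*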

    Read the article

  • Error 1935. An error occurred during the installation of assembly 'Microsoft.VC90.CRT,version="9.0.30729.4148"'

    - by Milan Aleksic
    I'm having trouble installing the VC runtime libraries, which I need in order to install SQL Server Compact Edition. The same problem causes other apps to fail when installing too, but I chose this one as a good representative example of my problem (and it's provided by Microsoft, so "installation should work"). I gathered what is usually requested as logs/more information and put it on Dropbox at this location: https://www.dropbox.com/s/7zh7ajn50cxz7km/logs.zip It contains 2 logs: the installation log with more info, and a procmon log filtered to non-success and non-"result not found" events captured while performing the installation step. Any idea what could be the cause and how to fix it?

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Installation

    - by StuartBrierley
    As previously detailed, I have completed a single server installation of BizTalk Server 2009 Standard on my development laptop; a MacBook Pro Core2Duo running at 2.16GHz with 2GB of RAM. Following this I also posted on my use of the BizTalk Server Best Practices Analyser and how to configure the BizTalk SQL Server jobs. All of which means that I should have some confidence that I have a decent working BizTalk Server 2009 environment.

    Next I thought that it would be a good idea to get some idea of how this setup performs by carrying out some baseline tests that can then be replicated on the test and live servers. The aim is to allow confident predictions of how any solutions developed on a single-"server" installation may be expected to perform when deployed to multi-server BizTalk Server 2009 Standard installations. The BizTalk Benchmark Wizard would seem to be the perfect tool for the job.

    The BizTalk Benchmark Wizard is a utility that can be used to validate a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. It should be used after BizTalk Server has been installed and before any solutions are deployed to the environment; this ensures that you are getting consistent and clean results. The wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios, performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment: "The executed scenarios may or may not be relative to any realistic scenario, and is only intended for testing. The BizTalk Benchmark Wizard has been developed in relation to the BizTalk Server 2009 Scale Out Testing Study. More information about the study can be found here: http://msdn.microsoft.com/en-us/library/ee377068(BTS.10).aspx"

    After downloading and installing the wizard you will need to set up the hosts, instances and adapter handlers. This is done by running a script file using cscript, as detailed below. Open a command prompt window and navigate to the script folder; assuming the default installation location this would be C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk. In this folder you should find an InstallHosts.vbs file which can be executed using the following parameters:

    NTGroupName - the name of the Windows NT group.
    UserName - the name of the user account running the service instances.
    Password - the password of the user account running the service instances.
    Receive Host - the name of the server where you want to run the receive host instance.
    Send Host - the name of the server where you want to run the send host instance.
    Processing Host - the name of the server where you want to run the process host instance.
    By default the script is set up for 64-bit hosts, so if you are running in a 32-bit environment make sure that you change the following line in the script before continuing:

        from: objHS.IsHost32BitOnly = False
        to:   objHS.IsHost32BitOnly = True

    If you have a single box installation, your script command might look like this:

        cscript InstallHosts.vbs "BizTalk Application Users" "\MyUser" "MyPassword" "BtsServer1" "BtsServer1" "BtsServer1"

    If you have a multi server installation, your script command might look like this:

        cscript InstallHosts.vbs "MyDomain\BizTalk Application Users" "MyDomain\MyUser" "MyPassword" "BtsServer1" "BtsServer2" "BtsServer2"

    Running this script will create:

    Three hosts (BBW_RxHost, BBW_TxHost and BBW_PxHost)
    Three host instances
    One send and one receive adapter handler for the WCF NetTcp adapter

    You will then need to import the BizTalk MSI via the BizTalk Administration Console. Open the BizTalk Administration Console, point to the "Applications" node and import the BizTalk Benchmark Wizard.msi found in the same folder as the script above. This will create a "BizTalk Benchmark Wizard" application along with all ports and orchestrations needed. To finish the installation you will need to run the BizTalk Benchmark Wizard.msi on all BizTalk servers to add the assemblies to the Global Assembly Cache (GAC). Next I will look at running the BizTalk Benchmark Wizard.

    Read the article

  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10AM Sunday morning the main SAN at one of my clients suffered a "partial" failure. Partial means that the SAN was still online and functioning, but the LUNs attached to our two main SQL Servers "failed". Failed means that SQL Server wouldn't start and the MDF and LDF files mostly showed a zero file size. But they were online and responding, and most other LUNs were available. I'm not sure how SANs know to fail at 1AM on a Saturday night, but they seem to. From a personal standpoint this worked out poorly: I was out with friends and had had more than a few drinks. From a work standpoint this was about the best time to fail you could imagine. Everything was running well before Monday morning. But it was a long, long Sunday. I started tipsy, got tired and ended up hung over later in the day. Note to self: try not to go out drinking right before the SAN fails.

    This caught us at an interesting time. We're in the process of migrating to an entirely new set of servers, so some things were partially moved. This made it difficult to follow our procedures as cleanly as we'd like. The benefit was that we had much better documentation of everything on the server. I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible. Following a checklist is much easier than trying to remember at night, under pressure, in a hurry, after a few drinks.

    I had a series of estimates on how long things would take. They were accurate for any single server failure. They weren't accurate for a SAN failure that took two servers down. This wasn't bad, but we should have communicated better.

    Don't forget how many things are outside the database. Logins, linked servers, DTS packages (yikes!), jobs, Service Broker, DTC (especially DTC), database triggers and any objects in the master database are all things you need backed up. We'd done a decent job on this and didn't find significant problems here. That said, this still took a lot of time, and there were many annoyances as a result. Small settings like a login's default database had a big impact on whether an application could run. This is probably the single biggest area of concern when looking to recreate a server. I'd encourage everyone to go through every single node of SSMS and look for user-created objects or settings outside the database.

    Script out your logins with the proper SIDs and already-encrypted passwords, and keep the script updated. This makes life so much easier. I used an approach based on KB246133 that worked well (a sketch follows at the end of this entry). I'll get my scripts posted over the next few days.

    The disaster can cause your DR process to fail in unexpected ways. We have a job that scripts out all logins and role memberships and writes them to a file. This runs on the DR server and pulls from the production server. Upon opening the file I found that the contents were a "server not found" error. Fortunately we had other copies and didn't need to try and restore the master database. This now runs on the production server and pushes the script to the DR site. Soon we'll get it pushed to our version control software.

    One of the biggest challenges is keeping your DR resources up to date. Any server change (new linked server, new SQL Server Agent job, etc.) means that your DR plan (and scripts) is out of date. It helps to automate the generation of these resources if possible.

    Take time now to test your database restore process. We test ours quarterly.
    If you have a large database I'd also encourage you to invest in a compressed backup solution. Restoring backups was the single largest consumer of time during our recovery. And yes, there's a database mirroring solution planned in our new architecture.

    I didn't have much involvement in things outside SQL Server, but this caused many, many things to change in our environment. Many applications today aren't just executables or web sites. They are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems and many other things. These all needed a little bit of attention to make sure they were functioning properly.

    Profiler turned out to be a handy tool. I started a trace for failed logins and kept it running, which let me fix a number of problems before people were able to report them. I also ran traces to capture exceptions, which helped identify problems with linked servers.

    Overall, the thing that gave me the most problems was linked servers. For a linked server to function properly you need to be pointed at the right server, have the proper login information, have the network routes available and have MSDTC configured properly. We have a lot of linked servers, and this created many failure points. Some of the older linked servers used IP addresses and not DNS names, which meant we had to go in and touch all those linked servers when the servers moved.
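
    A minimal sketch of the login-scripting approach mentioned above, assuming the sp_help_revlogin procedure from KB246133 has already been created in master (the server and output file names are placeholders):

        REM dump CREATE LOGIN statements with original SIDs and password hashes to a file
        sqlcmd -S PRODSQL01 -E -Q "EXEC master.dbo.sp_help_revlogin" -o D:\DR\logins.sql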

    Read the article

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of an automated "ubuntu-12.04.1-server-amd64" OS installation on it. There are two HDDs for the OS installation, in a RAID1 relationship configured through the BIOS. The kickstart configuration file looks like this:

        #Generated by Kickstart Configurator
        #platform=AMD64 or Intel EM64T
        #System language
        lang en_US
        #Language modules to install
        langsupport en_US
        #System keyboard
        keyboard us
        #System mouse
        mouse
        #System timezone
        timezone Asia/Dili
        #Root password
        rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
        #Initial user
        user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
        #Reboot after installation
        reboot
        #Use text mode install
        text
        #Install OS instead of upgrade
        install
        #Use Web installation
        url --url my_repo_location
        #System bootloader configuration
        bootloader --location=mbr
        #Clear the Master Boot Record
        zerombr yes
        #Partition clearing information
        clearpart --all --initlabel
        #Disk partitioning information
        part /boot --fstype ext4 --size 100 --ondisk sda
        part / --fstype ext4 --size 10000 --ondisk sda
        part /var --fstype ext4 --size 10000 --ondisk sda
        part swap --size 1024 --ondisk sdb
        #System authorization information
        auth --useshadow --enablemd5
        #Network information
        network --bootproto=dhcp --device=eth0
        #Firewall configuration
        firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
        #X Window System configuration information
        xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME

    But I am getting the error: No root file system is defined. Please suggest whether we need to modify the kickstart configuration file; any help in this regard will be very helpful for us. The automated Ubuntu OS installation succeeds in a virtual machine (VM) with the above ks.cfg, but fails on the physical machine. If possible, please provide a new ks.cfg that resolves the problem. Thanks & Regards, Rajesh Prasad
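
    One possible cause to investigate (a sketch, not a verified fix): if the BIOS RAID1 is a fake-RAID exposed through dmraid, the installer on the physical box may not see plain sda/sdb disks, so none of the part directives match any disk and no root file system gets defined. A hedged adjustment, with a hypothetical device name standing in for whatever the installer actually reports, would be:

        #Point partitioning at the dmraid-exposed array instead of sda/sdb
        #(the mapper/isw_... name below is hypothetical; check what the installer sees)
        part /boot --fstype ext4 --size 100 --ondisk mapper/isw_xxxx_Volume0
        part / --fstype ext4 --size 10000 --ondisk mapper/isw_xxxx_Volume0
        part /var --fstype ext4 --size 10000 --ondisk mapper/isw_xxxx_Volume0
        part swap --size 1024 --ondisk mapper/isw_xxxx_Volume0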

    Read the article

  • Windows setup claims the installation source is not accessible. How do I fix this?

    - by Wil
    I'm trying to install Windows 7 Ultimate over my existing Windows 7 Professional. I downloaded the ISO from Microsoft and burned the install disc at the slowest speed possible (3x). I booted to the DVD, but at the second screen I am already getting an error! That screen asks me to choose between "Upgrade" and "Custom". When I choose "Custom", I get the error: Windows installation encountered an unexpected error. Verify that the installation sources are accessible, and restart the installation. Error code: 0xE0000100
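
    One low-cost check before re-burning (a sketch; the path is a placeholder, and the hash must be compared against the value published at the download source) is to verify that the downloaded ISO itself isn't corrupt:

        REM compute the ISO's SHA1 and compare it with the published checksum
        certutil -hashfile C:\Downloads\Win7_Ultimate_x64.iso SHA1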

    Read the article

  • Default User Id at Login different from User Name in Terminal Shell

    - by Bill
    During the Ubuntu 12.04 LTS installation, I was prompted to enter a user name and password, so that a corresponding account could be created and set up for login. I replaced the one that was provided by default (i.e. '70319', which is the Windows 7 admin id) with a user name/id of my choosing. Now, when I turn on the computer, and choose to enter the Ubuntu operating system, the login id that is displayed is 70319 - that is, the one provided by Windows 7. However, when I open up a Unix/Terminal shell, the user id that is displayed at the prompt is the one I entered during installation. Otherwise, the installation of Ubuntu was a success! Is there some way of changing the user id that is displayed at the Login screen, so that it is consistent with the one I entered during installation? If it's any help, I installed Ubuntu using wubi on an ASUS Eee PC 1011PX running Windows 7, and ASUS Express Gate Cloud. Further details regarding the setup/installation can be found at the following link: Installing Ubuntu on an Eee PC 1011PX
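
    If the login screen is displaying the account's full-name (GECOS) field rather than the username, one hedged fix, with a placeholder account name, is to make the full name match the username chosen during installation:

        # 'myuser' is a placeholder for the account created during installation;
        # the greeter typically displays this full-name field
        sudo chfn -f "myuser" myuser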

    Read the article

  • Freeze after 'init-bottom' script before installation starts on a Lenovo G475

    - by NikitaBaksalyar
    I'm trying to install Ubuntu 11.04 x86 on a Lenovo G475 laptop (based on AMD Fusion), but the installation freezes after loading finishes on the splash screen. I've tried running the installation without the quiet and splash parameters, but the result is the same: the installation freezes after the "init-bottom" script runs. I thought it might be because of BIOS settings, but I've tried changing them several times without any result.
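
    A common first experiment on AMD Fusion graphics (a sketch, not a guaranteed fix) is to boot the installer with kernel modesetting disabled:

        # at the installer boot menu, press F6 (or edit the kernel line) and append:
        nomodeset
        # if it still freezes, radeon.modeset=0 is another commonly tried variant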

    Read the article

  • Announcing the Drive Installation Scope

    On September 12, Google Drive released a new feature of great interest to many Drive web app developers: the installation scope. In this session we'll discuss the benefits of the installation scope, walk through the related documentation, and do a brief demo of how it works. From: GoogleDevelopers

    Read the article

  • Understanding the SQL Server 2008 R2 Installation Center

    - by Enrique Lima
    What is available to us through those links? Have you taken the time to explore and identify what could be useful to you? One of many gems that has come to my attention is the possibility of provisioning SQL Server to work in an image based environment (hint: virtualization template perhaps?!).

    Planning: Includes requirements information, documentation, how-to guides, online documentation installation and other tools. Among the other tools you will find the System Configuration Checker and the Upgrade Advisor, both very important to ensure your deployment and installation will be successful.

    Installation: This section focuses on getting installations going, from standalone to cluster when it comes to new instances; adding new nodes to an existing cluster; and performing upgrades (in this case to SQL Server 2008 R2). Also part of this is the option to find available updates.

    Maintenance: In this section we find options that assist us with tasks like repairing corrupt installations and removing nodes from a cluster. An interesting option (we should discuss its benefits in another post) is the Edition Upgrade, a feature expansion and addition based on your product installation (Developer to Enterprise, for example).

    Tools: From the System Configuration Checker to identify readiness for deployment in a successful manner, to being able to report on installed features, to running upgrades of existing SSIS packages developed in the 2005 offering to the 2008 R2 release.

    Resources: Useful and essential links to gather information and guidance.

    Advanced: Here is where it gets interesting. I break this down into 3 main groups:

    Installation automation: When you install SQL Server, a configuration file gets dropped (ConfigurationFile.ini) that allows you to perform automated installations. There are switches and options that go with this to have the process work, as sketched below.
    Cluster configuration for Sysprep: Create images that are cluster ready; 2 options, start the prep work, then complete once at the final destination.
    Stand-alone configuration for Sysprep: Like the clustering counterpart, 2 options, prep and complete, giving you the option to create standard templates for your SQL Server deployments.

    I find it fitting that the 3 topics listed here should (and will) be additional topics I will discuss.

    Options: Very clear and specific about what this means: select the processor type or the installation media root path.
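
    A minimal sketch of that automated-install idea (the file location is a placeholder; the switches are the standard unattended-setup flags for this release):

        REM run an unattended install driven by a previously captured ConfigurationFile.ini
        setup.exe /ConfigurationFile="C:\SQLConfig\ConfigurationFile.ini" /Q /IAcceptSQLServerLicenseTerms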

    Read the article

  • Mounting Wubi part from another Linux installation

    - by FrankSus
    Hope someone can help. I need to access Ubuntu 11.04 files residing on a second hard drive (dual-boot XP/Ubuntu) from a 10.04 hard drive installation. The 11.04 system co-exists with XP on the second drive, which is HPFS/NTFS. My 10.04 installation mounts and displays the contents of the XP installation and all its files, but the 11.04 system is nowhere to be seen. How can I see and access the Ubuntu partition, please?
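
    If the 11.04 system was installed with Wubi (as the title suggests), there is no separate Ubuntu partition to see: the whole filesystem lives in a root.disk image file inside the NTFS volume. A hedged way to reach it, assuming Wubi's usual layout and a placeholder device name, is a loop mount:

        # mount the NTFS drive first (adjust the device to your second disk)
        sudo mkdir -p /media/windisk /media/wubi
        sudo mount /dev/sdb1 /media/windisk
        # Wubi normally keeps the Ubuntu filesystem at ubuntu/disks/root.disk
        sudo mount -o loop /media/windisk/ubuntu/disks/root.disk /media/wubi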

    Read the article

  • Post raid5 setup reboot shows single hard drive failure on ubuntu 12.10?

    - by junkie
    I just set up RAID 5 on Linux using three HDDs, as per a guide. It all went fine until I rebooted, when I got the following text: http://i.stack.imgur.com/Zsfjk.jpg Does this mean one of my HDDs has failed? How do I check whether any of them are failing? I tried using smartctl and didn't see any issues. Or is it nothing to do with failure and something else altogether? I would like to get the RAID 5 working again but I'm not sure where to go from here. I'm using Ubuntu 12.10, and the three RAID disks each have a GPT partition table with a single full-size partition of filesystem type ext4. Note I only got an error on reboot, not while creating the RAID array, which went fine. Thanks.
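
    Assuming the array was assembled with mdadm (the device names below are placeholders), these are the usual first checks on array and member health:

        # overall array state as the kernel sees it
        cat /proc/mdstat
        # detailed status of the array (replace md0 with your array device)
        sudo mdadm --detail /dev/md0
        # per-member superblock information (replace with your member partitions)
        sudo mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1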

    Read the article

  • SSD becomes hot, disk failure warning

    - by Aegluin
    I have a two-week-old SSD (Kingston SSDNow 64GB). Yesterday the computer shut down twice, and after rebooting I was bombarded with disk failure warnings. I usually take such warnings seriously (and backed up), but I'm skeptical. After cooling down, the laptop boots again, and the only red SMART value was the temperature (Ubuntu did not show the temperature at the moment of failure, only the 29° reading at that time). After refreshing the SMART status and running a self-test, everything is green. Before contacting Kingston support, I would like to know whether it could be a software issue: is it possible that this is a false alarm, and how can I check? I installed Ubuntu 12.04 32-bit and took care of alignment. I assumed Ubuntu was set up with optimal settings for SSDs; how can I check that there was no mistake? The current temperature is around 40-56°. Is such a temperature abnormal for SSDs? Output of sudo smartctl --all /dev/sda: http://pastebin.ubuntu.com/1175940/
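
    Two hedged checks, assuming the device really is /dev/sda: re-read just the temperature attribute, and run the extended self-test rather than the short one:

        # show the temperature line from the SMART attribute table
        sudo smartctl -A /dev/sda | grep -i temperature
        # start the long self-test, then review the log once it completes
        sudo smartctl -t long /dev/sda
        sudo smartctl -l selftest /dev/sda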

    Read the article

  • Unavailable repository

    - by katrina
    I am new to Ubuntu and keep butting up against errors, such as this:

        Package libpng12-dev is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        However the following packages replace it:
          libpng12-0
        E: Unable to locate package subversion
        E: Package 'git-core' has no installation candidate
        E: Package 'build-essential' has no installation candidate
        E: Package 'autoconf' has no installation candidate
        E: Package 'libtool' has no installation candidate
        E: Unable to locate package libxml2-dev
        E: Unable to locate package libgeos-dev
        E: Unable to locate package libpq-dev
        E: Unable to locate package libbz2-dev
        E: Package 'proj' has no installation candidate
        E: Unable to locate package munin-node
        E: Unable to locate package munin
        E: Unable to locate package libprotobuf-c0-dev
        E: Unable to locate package protobuf-c-compiler
        E: Unable to locate package libfreetype6-dev
        E: Package 'libpng12-dev' has no installation candidate
        E: Unable to locate package libtiff4-dev
        E: Unable to locate package libicu-dev
        E: Unable to locate package libboost-all-dev
        E: Unable to locate package libgdal-dev
        E: Unable to locate package libcairo-dev
        E: Unable to locate package libcairomm-1.0-dev
        E: Couldn't find any package by regex 'libcairomm-1.0-dev'
        E: Unable to locate package apache2
        E: Unable to locate package apache2-dev
        E: Unable to locate package libagg-dev

    These appear when I run:

        sudo apt-get install subversion git-core tar unzip wget bzip2 build-essential autoconf libtool libxml2-dev libgeos-dev libpq-dev libbz2-dev proj munin-node munin libprotobuf-c0-dev protobuf-c-compiler libfreetype6-dev libpng12-dev libtiff4-dev libicu-dev libboost-all-dev libgdal-dev libcairo-dev libcairomm-1.0-dev apache2 apache2-dev libagg-dev

    Any help or advice would be greatly appreciated. Or referrals to other questions...
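
    When that many mainstream packages come back "unable to locate", the package index is usually stale or the main/universe components are not enabled. A hedged first step:

        # refresh the package index; most "unable to locate" errors disappear here
        sudo apt-get update
        # confirm that main and universe appear on the active deb lines
        grep -E '^deb ' /etc/apt/sources.list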

    Read the article

  • Recreating OMS instances in a HA environment when instances on all nodes are lost

    - by rnigam
    Oracle highly recommends deploying EM in a HA environment. The best practices for HA deployments, backup and housekeeping of your Enterprise Manager environment are documented in the Oracle Enterprise Manager Advanced Configuration Guide. It is imperative that there is a good disaster recovery plan in place for your EM deployment. In this post I want to talk about a customer who failed to do the correct planning and housekeeping for EM and landed in a situation where all the OMSes would have been blown away, had we not jumped in to help.

    We recently hit an issue at a customer site where we had a two node OMS setup of Enterprise Manager and a RAC database being used as the EM repository. An accidental delete of the OMS Oracle home left us with a single node deployment. While we were trying to figure out a possible path to recover the first node, the second node was rebooted under a maintenance window. What followed was a complete site outage, as the Admin and managed servers would not start on either of the nodes. In my situation there were:

    - No backups of the Oracle homes from any node
    - No OMS configuration snapshots (created using the "emctl exportconfig oms" command), and the instance home was completely lost on node 1, which also had the Admin Server

    We did however have:

    - A copy of the emkey.ora that I found under the OMS_ORACLE_HOME/ of the second node. (NOTE: it is a bad practice to have your emkey present under the OMS Oracle home directory on the same server as the OMS. The backup of the emkey should be maintained on some other server. In this case, however, it was a savior, since there were no backups.)
    - The OMS Oracle home on the second node, but missing a number of files and with a number of changes made to the files in the home.

    There were a number of attempts to start the server by modifying various files based on the WebLogic server logs, to get at least one node up and running, but all of them failed. Here is how you can recover from this scenario.

    STEP 1: Check status of emkey.ora

    Check whether the emkey is present in the EM repository or not by running the following command:

        $OMS_ORACLE_HOME/bin/emctl status emkey

    If the output is something like the below, then you are good to go and the key is present in the repository:

        ./emctl status emkey
        Oracle Enterprise Manager 11g Release 1 Grid Control
        Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
        Enter Enterprise Manager Root (SYSMAN) Password :
        The EMKey is configured properly.

    Here are the messages that you might see as the emctl status emkey output, depending on whether the EM Admin Server is up and whether the key is configured properly:

    Case 1: Admin Server is up, emkey is proper in CredStore and not in repos (same as the output shown above): "The EMKey is configured properly."
    Case 2: Admin Server is up, emkey is proper in CredStore and exists in repos: "The EMKey is configured properly, but is not secure. Secure the EMKey by running 'emctl config emkey -remove_from_repos'."
    Case 3: Admin Server is down or emkey is corrupted in CredStore, and emkey exists in repos: "The EMKey exists in the Management Repository, but is not configured properly or is corrupted in the credential store. Configure the EMKey by running 'emctl config emkey -copy_to_credstore'."
    Case 4: Admin Server is down or emkey is corrupted in CredStore, and emkey does not exist in repos: "The EMKey is not configured properly or is corrupted in the credential store and does not exist in the Management Repository."
    To correct the problem:
    1) Get the backed up emkey.ora file.
    2) Configure the emkey by running "emctl config emkey -copy_to_credstore_from_file".

    If the key was not secured properly, it will have to be put in the repository before proceeding; see step 2 for doing this. There may be cases (like mine) where running emctl gives errors like the following:

        $OMS_ORACLE_HOME/bin/emctl status emkey
        Exception in thread "Main Thread" java.lang.NoClassDefFoundError: oracle/security/pki/OracleWallet
        at oracle.sysman.emctl.config.oms.EMKeyCmds.main (EMKeyCmds.java:658)

    Just move to the next step to put the key back in the repository.

    STEP 2: Put emkey.ora back in the repository

    Skip this step if your emkey.ora is present in the repository. If not, you need to put the key back in the repository. See if you can run the following command (with sample output):

        $OMS_ORACLE_HOME/bin/emctl config emkey -copy_to_repos
        Oracle Enterprise Manager 11g Release 1 Grid Control
        Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
        The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
        After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".

    Typically the key is present under the $OMS_ORACLE_HOME/sysman/config directory before being removed after the install, as a best practice. If you hit any errors while running emctl commands, like the one mentioned in step 1, jump to step 3; we will take care of the emkey.ora in step 7.

    STEP 3: Get the port information

    Check for the existing port information in the emd.properties file under EM_INSTANCE_DIRECTORY (typically the gc_inst directory right above the Middleware home where you have deployed EM; for example /u01/app/oracle/product/gc_inst when your OMS home is /u01/app/oracle/product/Middleware/oms11g). In my case I got the information from the emgc.properties present in the gc_inst on the second node. If you can run emctl, you may want to try the following command as well:

        $OMS_ORACLE_HOME/bin/emctl status oms -details

    Note this information, as it will be used in the next step.

    STEP 4: Perform cleanup on node 1

    Note the Oracle homes of WebLogic and the OMS, get the list of applied patches in the homes (using the opatch lsinventory command), take a backup copy of the home just in case we need it, and then de-install/remove the Oracle homes, update the inventory and clean up processes on the first node.

    STEP 5: Perform a software-only installation of the OMS on node 1

    Perform the WebLogic 10.3.2 installation in exactly the same location as the earlier installation. Then perform a software-only installation of the OMS using the following command, which will not run any configuration assistants and bypasses all user interface validations:

        runInstaller -noconfig -validationaswarnings

    Select the "Additional OMS" option while performing the installation, provide the same paths for the OMS and instance directories as the previous installation, and use the port information collected in step 3. Once the installation is complete, run the allroot.sh script to complete the binary deployment.

    STEP 6: Apply one-off patches

    At this point you can apply any patches that were previously applied to the OMS Oracle home.
    You only need to run opatch to install the patches in the home; there is no need to run the SQLs.

    STEP 7: Copy the EM key

    This step is only required if you were not able to use the emctl command to put the emkey back into the EM repository in step 2. Copy the emkey.ora file of the old installation that you have into the $OMS_ORACLE_HOME/sysman/config directory of the newly installed OMS.

    STEP 8: Configure the Grid Control domain

    Run the following command to configure the EM domain and OMS. Note that you need to use a different GC domain name than what you used earlier; for example, I used GCDOMAIN11 as the new domain name when my previous domain name was GCDOMAIN:

        $OMS_ORACLE_HOME/bin/omsca new -AS_USERNAME weblogic -EM_DOMAIN_NAME GCDOMAIN11 -NM_USER nodemanager -nostart

    This command will prompt for a number of inputs, like Admin Server hostname, port, password, etc. Verify that the defaults shown are correct by pressing enter, or provide a new value.

    STEP 9: Run the add-on configuration assistant

    After this step, run the following add-on configuration assistant. This was used in my case to configure the virtualization add-on:

        $OMS_ORACLE_HOME/addonca -oui -omsonly -name vt -install gc

    STEP 10: Start the OMS

    Now start the OMS using:

        $OMS_ORACLE_HOME/bin/emctl start oms

    In a multi-node setup like mine you would have either a software load balancer or DNS round robin (using a virtual host name that resolves to one of multiple OMS hostnames) being used for load balancing. Secure the OMS against the SLB or DNS virtual hostname using the following:

        $OMS_HOME/bin/emctl secure oms -host slb.example.com -secure_port 1159 -slb_port 1159 -slb_console_port 443

    STEP 11: Configure the agent

    From $AGENT_ORACLE_HOME/bin, run:

        ./agentca -f

    At this point you should have your OMS on node 1 fully recovered. Clean up node 2 and use the normal additional-OMS installation process documented in the official installation guide to add the additional OMS on node 2.

    Summary

    It took us a little over two days to completely recover the environment, with some other non-EM-related issues hitting us along the way as well. In the end, a situation like this could have been completely avoided had proper housekeeping and backups of the Enterprise Manager deployment been done in the first place. That is going to be the topic we cover in the next post. In the meantime, please refer to the Oracle Enterprise Manager Advanced Configuration Guide for planning your EM installation, backup and housekeeping procedures. It can be found here: http://download.oracle.com/docs/cd/E11857_01/index.htm

    Thanks

    This post would not have been possible without Raj Aggarwal, Prasad Chebrolu and Ravikumar Basa, who helped to recover the environment and provided all the support we needed.

    Read the article

  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever I try to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error:

        unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)
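
    Every failing script is choking on the same `triggered' argument, which points at broken postinst scripts rather than at the packages themselves. A hedged recovery sequence using standard dpkg/apt repair steps (not a guaranteed fix):

        # finish any interrupted configuration and repair dependencies
        sudo dpkg --configure -a
        sudo apt-get install -f
        # if one package still fails, reinstalling it refreshes its postinst script
        sudo apt-get install --reinstall mtools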

    Read the article

  • How do I automate OS installation on 500+ machines?

    - by Igor
    My company has to image a large number of machines by the end of the year. Each machine will have hardware RAID 1 and run CentOS 6. What options do I have for automating the OS installation on these systems? I have a little mini desktop I can set up as an install server, and we can get a switch to create an installation network, but I'm not sure how to go about actually performing the automated installs; one sketch of an approach follows.
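
    A common pattern at this scale is PXE boot plus kickstart: the mini desktop serves DHCP, TFTP and HTTP, and every machine boots the CentOS installer pointed at a ks.cfg. A minimal sketch of the PXE menu entry (all addresses and paths are placeholders):

        # /var/lib/tftpboot/pxelinux.cfg/default -- served to every booting machine
        default centos6
        prompt 0
        label centos6
          kernel vmlinuz          # from the CentOS 6 install tree
          append initrd=initrd.img ks=http://192.168.0.1/ks.cfg ksdevice=eth0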

    Read the article

  • How long does a typical Windows 7 64bit installation take?

    - by Borek
    On a machine with an Intel Core i5 or i7 and a standard HDD, roughly how long should an installation of Windows 7 64-bit take? I'm after a rough estimate because my installation seems to be running quite slowly (I've previously installed Win7 32-bit on a laptop and it went much quicker, but maybe there are differences between the 32- and 64-bit editions).

    Read the article

  • Can this USB3 behaviour be anything else than a hardware failure?

    - by Jonas Wielicki
    While my motherboard is half a year old now (ASUS M5A99X EVO), I only recently made use of the USB3 ports (after purchasing a USB3 external hard drive). However, I am encountering issues. I am running Linux 3.6.7-4.fc16.x86_64. Initially the hard drive worked fine with USB3 (an amazing ~160MB/s), but I had some problems after putting the drive to sleep manually after use (backup) with hdparm -Y. After some time, the device disappears from lsusb and I see the following in dmesg:

        [ 1924.091107] xhci_hcd 0000:05:00.0: xHCI host not responding to stop endpoint command.
        [ 1924.091114] xhci_hcd 0000:05:00.0: Assuming host is dying, halting host.
        [ 1924.091147] xhci_hcd 0000:05:00.0: HC died; cleaning up
        [ 1924.091233] usb 11-1: USB disconnect, device number 2
        [ 1924.091272] sd 6:0:0:0: Device offlined - not ready after error recovery

    Testing with my (USB3-capable) notebook, I could not immediately reproduce the behaviour. I put the drive to sleep with hdparm -Y and waited for about an hour, but it was still listed in lsusb and responded after a few seconds' delay when I tried it after the hour of waiting. On the desktop, after an hour the device would usually have vanished.

    Googling for this issue, I came across hints that playing around with IOMMU settings and upgrading the BIOS might help. I upgraded the BIOS and tried both with and without IOMMU enabled, with similar results. Most disturbing is that one of the two USB 3.0 hubs sometimes also disappears from lsusb (or does not show up after boot at all). I've also heard that there are some hardware issues with ASUS USB3 ports. Applying mechanical force to the cable doesn't push the issue to one side or the other. Also, udev seems to re-enumerate all devices if I plug the HDD into the USB 3.0 port without success (I can notice this from my keyboard layout being changed to the default, which I do not normally use). The drive is externally powered and the external power supply is plugged in (it also stays powered when unplugged from USB, although it will spin down then).

    So before I try to return the board, I wanted to find out whether this can be anything else than a failure on the motherboard?
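
    One software-side experiment worth trying before an RMA (a sketch; the PCI address comes from the dmesg output above and may differ on your system): unbind and rebind the dead controller to see whether the "HC died" state is recoverable without a reboot:

        # detach and reattach the xHCI controller at PCI address 0000:05:00.0
        echo 0000:05:00.0 | sudo tee /sys/bus/pci/drivers/xhci_hcd/unbind
        echo 0000:05:00.0 | sudo tee /sys/bus/pci/drivers/xhci_hcd/bind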

    Read the article

  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. It contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

    Contents: Introduction; Downloading the software; Preparing a database; Creating the schema; WebLogic Server installation; Installing Oracle IRM

    Introduction

    Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some tips with Linux to make life a bit easier.

    Use a 64-bit platform; IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service.
    Download and install the latest Java JDK package. Make sure you get the 64-bit version if you are on a 64-bit server.
    Configure Linux to use a good Yum server to simplify installing packages. For Oracle Enterprise Linux we maintain a great public Yum server.
    Have at least 20GB of free disk space on the partition where you intend to install the IRM server. The downloads are big; you then extract them and then install. This quickly consumes disk space, which you can easily recover by deleting the downloaded and extracted files afterwards. But it's nice to have the disk space spare to keep these around in case you need to restart any part of the installation process.

    Downloading the software

    OK, so before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following: Oracle WebLogic Server, Oracle Database, Oracle Repository Creation Utility (RCU) and the Oracle IRM server. You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux.

    Preparing the database

    I'm not going to go through the finer points of installing the database; there are many very good guides on that. However, one thing I would suggest you think about is enabling TDE, network encryption and Database Vault. These Oracle database security technologies are excellent for creating a complete end-to-end security solution. There's no point going to all the effort of securing document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies here. With a database up and running, we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs and associated rights, etc.

    Creating the IRM database schema

    Oracle uses the Repository Creation Utility, which builds your schema. Extract the files from the RCU zip, then in a terminal window:

        cd /oracle/install/rcu/bin
        ./rcu

    This will launch the Repository Creation Utility. Hit next and continue on to the next dialog. You are asked if you are going to be creating a new schema or wish to drop an existing one; you just need to click next at this point to create a new schema.
    The RCU next needs to know where your database is, so you'll need the following details of your database instance. Below, for reference, is the information for my installation:

        Hostname: irm.oracle.demo
        Port: 1521 (the default TCP port for the Oracle Database)
        Service Name: irm.oracle.demo (note this is not the SID, but the service name)
        Username: sys
        Password: ********
        Role: SYSDBA

    Then select next. Because the RCU contains schemas for many Oracle technologies, you now need to select just the Oracle IRM schema. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix, which defaults to "DEV" (for development). I usually change this to something that reflects my own install: PROD for a production system, INT for internal only, etc.

    The next step asks for the passwords for the schema users. We are only creating one schema here, so you just enter one password. Some brave souls store this password in an Excel spreadsheet, which is then secured against the IRM server you're about to install in this guide.

    Nearing the end of the schema creation is the mapping of tablespaces to the schema. Note I had already set up a tablespace encrypted using TDE, and at this point I was able to select that tablespace by clicking in the "Default Tablespace" column. The next dialog confirms your actions, and clicking next causes it to create the schema and default data. After this you are presented with the completion summary.

    WebLogic Server installation

    The database is now ready, and the next step is to install the application server. Oracle IRM 11g is a JEE application currently supported only on Oracle WebLogic Server, so the next step is to get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or, for a 64-bit platform (like mine), run the following command:

        java -d64 -jar wls1033_generic.jar

    In the resulting dialog hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/; then all my software goes into this one folder, all owned by the "oracle" user. The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details, but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are next asked for. Next you are asked for the JDK which will be used for the server. When installing from the generic jar on a 64-bit platform as in this guide, no JDK is bundled with the installer, but it does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken; no changes here, just click next. And finally we are ready to install; hit next, sit back and relax. Typically this takes about 10 minutes. After the install, do not run the quick start; we need to deploy the IRM install itself, from which we will create a new WebLogic domain. For now just hit done and let's move to the final step of the installation process.

    Installing Oracle IRM

    The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves.
    Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run:

        ./runInstaller

    This will launch the installer, which will also ask for the location of the JDK. You should now see the first stage of the IRM installation. The dialog warns that you need to have a WebLogic server installed and have created the schemas, but you've just done all that above (I hope), so we are ready to go. The installer now checks that you have all the required libraries installed and that other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue. Next...

    Now choose where to install the IRM files. You must install into the same Middleware Home as the WebLogic Server installation you just performed; usually the installer already defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install.

    The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, so everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies, otherwise we would have a very large number of installers to QA and manage; again, not fun. Hit Install; time for another drink, maybe a piece of cake or a donut... on a half-decent system this part of the install took under 10 minutes.

    Finally the installation of your IRM server is complete. Click finish; the next phase is to create the WebLogic domain and start configuring your server. Now move on to the next article in this guide, configuring your IRM server ready to seal your first document.

    Read the article

  • Can't update Nvidia driver: error near the end of the installation

    - by user94843
    I just got Ubuntu (first-timer to Ubuntu, so be very descriptive). I think there's a problem with my Nvidia update: it won't let me update. The name of the update in Update Manager is "NVIDIA binary Xorg driver, kernel module and VDPAU library". When I attempt to install it, it starts out fine, but near the end I get a window titled "Package operation failed" with this under the details:

        installArchives() failed: Setting up nvidia-current (295.40-0ubuntu1) ...
        update-initramfs: deferring update (trigger activated)
        INFO:Enable nvidia-current
        DEBUG:Parsing /usr/share/nvidia-common/quirks/put_your_quirks_here
        DEBUG:Parsing /usr/share/nvidia-common/quirks/dell_latitude
        DEBUG:Parsing /usr/share/nvidia-common/quirks/lenovo_thinkpad
        DEBUG:Processing quirk Latitude E6530
        DEBUG:Failure to match Gigabyte Technology Co., Ltd. with Dell Inc.
        DEBUG:Quirk doesn't match
        DEBUG:Processing quirk ThinkPad T420s
        DEBUG:Failure to match Gigabyte Technology Co., Ltd. with LENOVO
        DEBUG:Quirk doesn't match
        Removing old nvidia-current-295.40 DKMS files...
        Loading new nvidia-current-295.40 DKMS files...
        Error! DKMS tree already contains: nvidia-current-295.40
        You cannot add the same module/version combo more than once.
        dpkg: error processing nvidia-current (--configure):
         subprocess installed post-installation script returned error exit status 3
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-31-generic
        Warning: No support for locale: en_US.utf8
        Errors were encountered while processing:
         nvidia-current

    (The "Error in function" section that follows repeats the same output verbatim.)
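
    The key line is "DKMS tree already contains: nvidia-current-295.40", so a hedged cleanup (standard DKMS/dpkg commands, with the version taken from the log above) is to drop the stale module entry and re-run configuration:

        # remove the stale DKMS entry that blocks the postinst, then retry
        sudo dkms remove nvidia-current/295.40 --all
        sudo dpkg --configure -a
        sudo apt-get install -f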

    Read the article

  • package manager cannot create users? installation script fails?

    - by RapidWebs
    It seems that when trying to install any package which requires the creation of a system user and group, the installer fails with the error:

        install: invalid user `username'

    When it happened the first time, I assumed it was related to that package or its installation script. But now I am attempting a different package, and alas, the same issue arises! Note: /etc/passwd and /etc/group are not set immutable via chattr. Here is example output from an attempted installation (of varnish):

        Selecting previously unselected package varnish.
        Unpacking varnish (from .../varnish_3.0.2-1ubuntu0.1_amd64.deb) ...
        Processing triggers for man-db ...
        Setting up libvarnishapi1 (3.0.2-1ubuntu0.1) ...
        Setting up varnish (3.0.2-1ubuntu0.1) ...
        install: invalid user `varnish'
        dpkg: error processing varnish (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         varnish

    Note: running Ubuntu 12.04. For now, I have resolved the issue by manually creating the users and running apt-get install -f, but this does not resolve the actual issue. Any ideas?
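
    To formalize the manual workaround (a sketch; the exact attributes the varnish postinst would use may differ), create the system user the way Debian maintainer scripts typically do, then let dpkg finish configuration:

        # create the expected system user/group, then re-run the failed postinst
        sudo adduser --system --quiet --group --no-create-home varnish
        sudo dpkg --configure -a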

    Read the article

  • Failure Driven Development

    - by DevSolo
    At our shop, we strive to be agile, and I'd say we are making great strides. That said, a few of us have spotted a pattern we have started calling "Failure Driven Development". Failure Driven Development can basically be described as an agile release/iteration cycle where the bugs/features are guided not by tasks and stories with acceptance criteria, but by defects entered in the defect tracking software. Our team has a great project manager who strives to get the acceptance criteria from the customer(s), but it's not always possible. From my development chair, this is due to the customer either not knowing exactly what they want or (and this is the kicker) two different "camps" at the customer's main office conflicting over how a story should be implemented. Camp A will loosely dictate that Feature X works like this; then Camp B will fail it due to it not functioning like that. Hence the term "FDD": the process is driven by failures. This leads to my question: has anyone else encountered this and, if so, any tips/suggestions for dealing with it? We have, of course, tried to get Camps A and B to agree up front, but everyone knows this isn't always the case. Thanks

    Read the article

  • Authentication failure!

    - by veera
    At the time of installation I gave a login password, which became both the login keyring password and the authentication password. Then, in User Accounts under Login Options, I set the password option to "none" and locked the panel. After that, the password I gave at installation time remained as the login keyring password, but when I enter that password for authentication while installing packages, it shows "authentication failure, please try again", so I cannot download any packages or updates. Is there any way to change or reset the authentication password? Please help me.
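
    Since the account password was removed but package installation still demands one, a hedged recovery (the account name below is a placeholder) is to set a fresh password from a root shell:

        # from a recovery-mode root shell (hold Shift at boot, pick "root"),
        # or from a terminal session that still has working sudo:
        passwd myuser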

    Read the article
