Search Results

Search found 7776 results on 312 pages for 'configure in'.


  • Setting up dual monitors, Xorg.conf issues

    - by JTS
    I just got a new computer (a W520 with an NVIDIA GF106 [Quadro 2000] graphics card) and installed Ubuntu on it using Wubi. I have everything working, so I wanted to set it up to use two monitors with an extended screen. I figured I had to edit xorg.conf, but the file didn't exist. So I tried to create it by booting in recovery mode and executing Xorg -configure, but I am getting these errors:

      (EE) Failed to load module "vmwgfx" (module does not exist, 0)
      (EE) vmware: Please ignore the above warnings about not being able to load module/driver vmwgfx
      (++) Using config file: "/root/xorg.conf.new"
      (==) Using system config directory "/usr/share/X11/xorg.conf.d"
      (EE) [drm] No DRICreatedPCIBusID symbol
      Number of created screens does not match number of detected devices. Configuration failed.
      ddxSigGiveUp: Closing log

    Any idea how I can get Xorg -configure to work, so that I can have an xorg.conf file that I can edit to enable TwinView? EDIT: Another way to ask the same question: why can't I boot with an xorg.conf file generated by nvidia-xconfig? Is there something in the generated file that might need editing?
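
    For reference, a minimal TwinView device section (a sketch only: it assumes the proprietary nvidia driver is installed, and the MetaModes resolutions are placeholders for your monitors' actual modes) looks like this:

      Section "Device"
          Identifier  "Quadro2000"
          Driver      "nvidia"
          # TwinView drives both monitors from a single X screen
          Option      "TwinView" "true"
          Option      "TwinViewOrientation" "RightOf"
          # Placeholder modes - replace with your monitors' native resolutions
          Option      "MetaModes" "1920x1080,1920x1080"
      EndSection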

    Read the article

  • dpkg error when using apt-get install

    - by V-T
    I upgraded to Ubuntu 14.04 from 12.04, and every time I use apt-get install for any package it ends with a bunch of errors about processing some of my LaTeX packages. A snippet is included below:

      Sometimes, not accepting conffile updates in /etc/texmf/updmap.d causes updmap-sys to fail.
      Please check for files with extension .dpkg-dist or .ucf-dist in this directory
      dpkg: error processing package tex-common (--configure):
       subprocess installed post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of lmodern:
       lmodern depends on tex-common (>= 3); however:
        Package tex-common is not configured yet.

    Reproduced by using sudo dpkg --configure -a; the full list of packages with this error is: tex-common texlive-publishers tex-gyre texlive-latex-extra-doc texlive-fonts-extra-doc texlive-lang-english texlive-luatex texlive-generic-recommended texlive-pstricks-doc texlive-fonts-recommended latex2html latex-xcolor texlive-pictures texlive-fonts-extra texlive-pictures-doc asymptote texlive-bibtex-extra texlive-latex-recommended-doc texlive-latex-recommended doxygen-latex texlive-pstricks tipa texlive-latex-base texlive-fonts-recommended-doc latex-beamer texlive-font-utils texlive-latex-base-doc texlive-latex-extra texlive-extra-utils texlive texlive-publishers-doc lmodern. Any ideas on how to fix this?
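
    The error text itself points at stale conffile copies; one way to follow that hint (a sketch, assuming any leftover files can simply be reviewed and then merged or removed) is:

      # Look for the leftover conffile copies the error message mentions
      ls /etc/texmf/updmap.d/*.dpkg-dist /etc/texmf/updmap.d/*.ucf-dist 2>/dev/null

      # After dealing with any that turn up, re-run the failed configuration
      sudo dpkg --configure -a
      sudo apt-get install -f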

    Read the article

  • Using Computer Management (MMC) with the Solaris CIFS Service (August 25, 2009)

    - by user12612012
    One of our goals for the Solaris CIFS Service is to provide seamless Windows interoperability: not just to deliver ubiquitous, multi-protocol file sharing, which is obviously a major part of this project, but to support Windows services at a fundamental level. It's an ongoing mission, and our latest update includes support for Windows remote management. Remote management is extremely important to Windows administrators, and one of the mainstay tools is Computer Management. Computer Management is a Windows administration application - actually a collection of Microsoft Management Console (MMC) tools - that can be used to configure, monitor and manage local and remote services and resources. The MMC is an extensible framework of registered components, known as snap-ins, which allows Computer Management to provide comprehensive management features for both the local system and remote systems on the network. Supported Computer Management features include:

    Share Management: Support for share management is relatively complete. You can create, delete, list and configure shares. It's not yet possible to change the user-limit properties (Maximum allowed / Allow this number of users), but other properties, including the Share Permissions, can be managed via the MMC.

    Users, Groups and Connections: You can view local SMB users and groups, monitor user connections and see the list of open files. If necessary, you can also disconnect users and/or close files.

    Services: You can view the SMF services running on an OpenSolaris system. This is a read-only view - we don't support service management (the ability to start or stop SMF services) from Computer Management (yet).

    To ensure that only the appropriate users have access to administrative operations, there are some access restrictions on these remote management features.

    Regular users can: list shares.
    Only members of the Administrators or Power Users groups can: manage shares, list connections.
    Only members of the Administrators group can: list open files and close files, disconnect users, view SMF services, view the EventLog.

    Here's a screenshot from when I was using Computer Management and Server Manager (another Windows remote management application) on Windows XP to view some open files on an OpenSolaris system while preparing a slide presentation on MMC support.

    Read the article

  • Ubuntu 12.04 nomodeset fixes boot problem but causes screen resolution to get stuck

    - by Thunder
    I've been searching the Ask Ubuntu forum for the past 3 days trying to figure out what's going on with my system, and I have tried a lot of things but to no avail. So I will explain my situation, tell you what I have tried, and hope someone can help me :) I have an HP Workstation xw4100 (Pentium(R) 4 3.00 GHz, 1.5 GB RAM, NVIDIA Quadro4 380 XGL graphics card). It came with Windows XP, and I set it up (with Wubi) to dual boot with Ubuntu 12.04. After installation I had the problem so many people had of booting to a black screen (mine was actually booting to the basic terminal shell), which is fixed by adding nomodeset to the GRUB boot options. When I do that, my screen resolution becomes stuck at 1280x768 (as opposed to 1366x768 before adding nomodeset; also, when running XP the best resolution is 1280x720). When I go to "Additional Drivers" it doesn't show any proprietary drivers, so I manually installed them using these commands:

      sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
      sudo apt-get update
      sudo apt-get install nvidia-current

    but after rebooting, that made the graphics even worse (now stuck at 800x600). SO I tried to configure the drivers with sudo nvidia-xconfig, but that simply created an empty xorg.conf file. I found one place where a guy gave information to manually input into the xorg.conf file, but that had no effect at all. Lastly, I tried to install previous versions of the NVIDIA drivers, but they wouldn't even fully install. So now I have just re-installed Ubuntu 12.04, and I either need to find a better solution to the first problem (nomodeset) or get the nouveau driver to correctly configure to work with my NVIDIA graphics. Thanks for your help ahead of time!
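
    When a driver boots with nomodeset and loses the native mode, one stop-gap is to add the missing mode by hand with cvt and xrandr (a sketch only: the modeline below is what cvt typically prints for 1366x768, and VGA-0 is a placeholder for whatever output name xrandr reports):

      # Generate a modeline for the panel's native resolution
      cvt 1366 768 60
      # cvt rounds to 1368x768; feed its output to xrandr
      xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
      xrandr --addmode VGA-0 "1368x768_60.00"
      xrandr --output VGA-0 --mode "1368x768_60.00"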

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration came up with a new way of monitoring, where the user can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to automatically refresh at set intervals. A dashboard shows the following information:

    Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    Messages Sent: The number of messages sent by the endpoint in the specified time period.
    Messages Received: The number of messages received by the endpoint in the specified time period.
    Errors: The number of messages with errors for the endpoint in the given time period.
    Last Sent: The date and time the last message was sent from the endpoint.
    Last Received: The date and time the last message was received from the endpoint.
    Last Error: The date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint: the document type, the number of messages received per second, the total number of messages processed in the specified time period, and the average size of each message.

    Read the article

  • Why can't I install from software center?

    - by user64720
    There was a problem upgrading to Firefox 13. This error kept returning: /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb W: Waited for dpkg --assert-multi-arch but was not there - dpkgGo (10: There are no "child" processes). Now it seems that there is some problem with dpkg, and I can't install anything from software center. I already tried to clean previous packages with sudo rm /var/lib/apt/lists/* -vf and then sudo apt-get update; it didn't work. When running sudo dpkg --configure -a, I get this:

      dpkg: dependency problems prevent configuration of firefox-globalmenu:
       firefox-globalmenu depends on firefox (= 13.0+build1-0ubuntu0.12.04.1); however:
        Package firefox is not installed.
      dpkg: error while processing firefox-globalmenu (--configure):
       dependency problems - leaving unconfigured
      Errors were encountered while processing:
       firefox-globalmenu

    What should I do to fix this?? EDIT: I don't have the necessary expertise to understand why what I did worked and what was causing the conflict, but anyway, since there was a problem with firefox-globalmenu, I went to Synaptic Package Manager, removed this particular package and reinstalled it. After that, I was able to install Firefox from Synaptic and also any other applications from software center. However, there was still a problem: when running sudo apt-get update, the following kept returning: Failed to get gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages Hash Sum mismatch. E: Some index files failed to download. They have been ignored, or old ones used instead. So I typed sudo rm /var/lib/apt/lists/* -vf in the terminal and then again sudo apt-get update, and everything is fine now. I did this before an answer was posted; anyway, I agree the problem was that particular package and its removal. So I'll mark the answer below as accepted.
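
    In shell form, the fix described in the edit above (a sketch of the same steps) amounts to:

      # Remove and reinstall the package that was blocking dpkg
      sudo apt-get remove firefox-globalmenu
      sudo apt-get install firefox firefox-globalmenu

      # Clear the corrupted package lists and rebuild them
      sudo rm -vf /var/lib/apt/lists/*
      sudo apt-get update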

    Read the article

  • Best Architecture for ASP.NET WebForms Application

    - by stack man
    I have written an ASP.NET WebForms portal for a client. The project has kind of evolved rather than being properly planned and structured from the beginning. Consequently, all the code is mashed together within the same project and without any layers. The client is now happy with the functionality, so I would like to refactor the code such that I will be confident about releasing the project. As there seem to be many differing ways to design the architecture, I would like some opinions about the best approach to take.

    FUNCTIONALITY: The portal allows administrators to configure HTML templates. Other associated "partners" will be able to display these templates by adding IFrame code to their site. Within these templates, customers can register and purchase products. An API has been implemented using WCF, allowing external companies to interface with the system as well. An Admin section allows administrators to configure various functionality and view reports for each partner. The system sends out invoices and email notifications to customers.

    CURRENT ARCHITECTURE: It is currently using EF4 to read/write to the database. The EF objects are used directly within the aspx files. This has facilitated rapid development while I have been writing the site, but it is probably unacceptable to keep it like that, as it tightly couples the db with the UI. Specific business logic has been added to partial classes of the EF objects.

    QUESTIONS: The goal of refactoring will be to make the site scalable, easily maintainable and secure. 1) What kind of architecture would be best for this? Please describe what should be in each layer, whether I should use DTOs / POCOs / the Active Record pattern, etc. 2) Is there a robust way to auto-generate DTOs / BOs so that any future enhancements will be simple to implement despite the extra layers? 3) Would it be beneficial to convert the project from WebForms to MVC?

    Read the article

  • How to install Ubuntu 12.04.1 in EFI mode with Encrypted LVM?

    - by g0lem
    I'm trying to properly install Ubuntu 12.04.1 LTS 64-bit PC (AMD64) with the alternate install CD ".iso" on a Lenovo ThinkPad X220. The default hard disk (with a pre-installed version of Windows 7) has been replaced with a brand new SSD. The UEFI BIOS of the ThinkPad X220 is set to "UEFI Boot only" and "USB UEFI BIOS Support" is enabled (I'm using an external USB DVD reader to perform the Ubuntu installation). The BIOS is a Phoenix SecureCore Tiano, BIOS version 8DET56WW (1.26). The attempts below are made with the UEFI BIOS settings described above. Here's what I've tried so far:

    Boot on a live GParted CD: create a GPT partition table; create a FAT32 partition for the UEFI System Partition, set the partition to "EF00" type ("boot" flag); leave the remaining space unformatted.

    Boot on Ubuntu 12.04.1 LTS 64-bit PC (AMD64) with the alternate CD: perform the install with network updates enabled, using manual partitioning. The FAT32 partition created with GParted is used as the "EFI System Partition", and the remaining space is set to be used as a "Physical volume for LVM". Then "Configure encrypted volumes", using the previous "Physical volume for LVM" as the encrypted container, and set up a passphrase. "Configure the Logical Volume Manager", creating a volume group using the encrypted container /dev/mapper/sda2_crypt. Create the logical volumes, choosing the previously created volume group, and assign a mount point and file system to each: LV-root for /, LV-var for /var, LV-usr for /usr, LV-usr-local for /usr/local, LV-swap for swap, LV-home for /home. NOTE: /tmp would be in RAM only, using tmpfs.

    Bootloader step: neither my ESP partition (/dev/sda1), /dev/sda nor the MBR seems to be the right place for GRUB; I get the following message (the X suffix is for demonstration only):

      unable to install grub in /dev/sdaX
      Executing 'grub-install /dev/sdaX' failed
      This is a fatal error.

    Finish installation without the bootloader and reboot: the system doesn't start, and there's no EFI/GRUB menu at startup. What are the steps to perform a clean and working installation of Ubuntu 12.04.1 Precise Pangolin, 64-bit version, in (U)EFI mode using the encrypted LUKS + LVM scheme described above?
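
    One commonly suggested recovery (a sketch only, assuming /dev/sda1 is the ESP; the volume group and logical volume names are placeholders) is to finish the install without a bootloader, boot a live session in UEFI mode, and install the EFI variant of GRUB from a chroot:

      # Unlock the LUKS container and mount the target system
      sudo cryptsetup luksOpen /dev/sda2 sda2_crypt
      sudo mount /dev/mapper/<vg>-<lv_root> /mnt
      sudo mount /dev/sda1 /mnt/boot/efi
      for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done

      # Install grub-efi instead of the BIOS grub-pc inside the chroot
      sudo chroot /mnt
      apt-get install grub-efi-amd64
      grub-install --efi-directory=/boot/efi
      update-grub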

    Read the article

  • Install tmux on Mac OS X

    - by unixben
    This is a short run-down on how to get tmux running on your Mac OS X system. The same methodology applies when compiling this on Solaris.

    What is tmux? According to the developer's page, "tmux is a terminal multiplexer: it enables a number of terminals (or windows), each running a separate program, to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached".

    Why not just use screen? For me, the primary reason I switched to tmux from screen is the much easier configuration syntax that tmux offers. If you've ever struggled with formatting screen's caption or hardstatus line, then you will appreciate the ease with which you can achieve the same results in tmux.

    Preparing your environment: You will need a C compiler installed. I believe that OS X ships by default with GNU make, but if not, then you will need to obtain it or use Xcode.

    Download the sources. While I'm putting all this together, I like to keep everything neatly tucked away in a build directory:

      mkdir ~/build
      cd ~/build
      curl -OL http://downloads.sourceforge.net/tmux/tmux-1.5.tar.gz
      curl -OL http://downloads.sourceforge.net/project/levent/libevent/libevent-2.0/libevent-2.0.16-stable.tar.gz

    Unpack the sources:

      tar xzf tmux-1.5.tar.gz
      tar xzf libevent-2.0.16-stable.tar.gz

    Compiling libevent:

      cd libevent-2.0.16-stable
      ./configure --prefix=/opt
      make
      sudo make install

    Compiling tmux:

      cd ../tmux-1.5
      LDFLAGS="-L/opt/lib" CPPFLAGS="-I/opt/include" LIBS="-lresolv" ./configure --prefix=/opt
      make
      sudo make install

    That's all there is to it!

    Read the article

  • unreachable subnet in one direction

    - by Carl Michaud
    I'm unable to route traffic to a subnet in my home. Here's my topology:

      INET <- Router A <- Router B <- WIFI AP

    where:
    Router A - ARRIS TG862G/CT (from Comcast) - WAN DHCP, LAN 10.0.0.99
    Router B - Linksys WRT400N with DD-WRT - WAN 10.0.0.12, LAN 10.0.1.1

    I was able to use DD-WRT to configure a static route from the 10.0.1.x network back to the 10.0.0.x network, but I'm not sure if I can do the reverse on the ARRIS router, and I was looking for suggestions on how to fix that. I've set things up this way because I am using OpenDNS to manage internet content for my kids, and of course I wasn't able to configure DNS to my liking on Router A either. So at present I'm using Router A on 10.0.0.x to provide a private unfiltered WIFI AP (no SSID broadcast) that I use for Netflix only, and then I use Router B to provide a filtered WIFI AP for the rest of my WIFI devices on 10.0.1.x. However, I would like to be able to connect to my 10.0.1.x devices from the internet via dynamic DNS, but I can't see anything behind Router B this way. Thoughts?
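
    Conceptually, two pieces are missing; here is a sketch in Linux route/iptables notation (the port number and the 10.0.1.50 host are hypothetical), since the ARRIS UI may not expose either directly:

      # The return route Router A would need, if its firmware allowed static routes:
      ip route add 10.0.1.0/24 via 10.0.0.12

      # Workaround when it doesn't: forward the port twice, once per router.
      # On Router A (via its GUI): WAN:8080 -> 10.0.0.12:8080
      # On Router B (DD-WRT can take iptables rules directly):
      iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.0.1.50:8080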

    Read the article

  • Package libxul not found - Kiwix Wikipedia in Ubuntu Precise 12.04

    - by JHOSmAN
    I'm trying to install the Kiwix service, but it needs a library that is not available on Ubuntu 12.04 LTS Precise. I'm leaving the log below; if someone could tell me how to install it, I would appreciate it.

      kiwix-0.9# ls
      aclocal.m4 COMPILE config.sub COPYING install-sh ltmain.sh missing static AUTHORS config.guess configure depcomp kiwix Makefile.am README CHANGELOG config.log configure.ac desktop libxul-dev_1.8.1.16+nobinonly-0ubuntu1_all.deb Makefile.in src

      root@ubuntu-MM061:/home/ubuntu/Escritorio/kiwix-0.9# ./configure
      checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking for g++... g++ checking for cl... no checking for cl... no checking for Xcode... no checking for jar... jar checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking for a sed that does not truncate output... /bin/sed checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for fgrep... /bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for ar... ar checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking whether we are using the GNU C++ compiler... (cached) yes checking whether g++ accepts -g... (cached) yes checking dependency style of g++... (cached) gcc3 checking how to run the C++ preprocessor... g++ -E checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... yes checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking for ld used by g++... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking whether the g++ linker (/usr/bin/ld) supports shared libraries... yes checking for g++ option to produce PIC... -fPIC -DPIC checking if g++ PIC flag -fPIC -DPIC works... yes checking if g++ static flag -static works... yes checking if g++ supports -c -o file.o... yes checking if g++ supports -c -o file.o... (cached) yes checking whether the g++ linker (/usr/bin/ld) supports shared libraries... yes checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking for ranlib... (cached) ranlib checking whether make sets $(MAKE)... (cached) yes checking for pkg-config... pkg-config checking for perl... perl checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking float.h usability... yes checking float.h presence... yes checking for float.h... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdint.h... (cached) yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking for strings.h... (cached) yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking for unistd.h... (cached) yes checking wchar.h usability... yes checking wchar.h presence... yes checking for wchar.h... yes checking for stdbool.h that conforms to C99... yes checking for _Bool... no checking for inline... inline checking for int16_t... yes checking for int32_t... yes checking for int64_t... yes checking for int8_t... yes checking for off_t... yes checking for pid_t... yes checking for size_t... yes checking for uint16_t... yes checking for uint32_t... yes checking for uint64_t... yes checking for uint8_t... yes checking for ptrdiff_t... yes checking vfork.h usability... no checking vfork.h presence... no checking for vfork.h... no checking for fork... yes checking for vfork... yes checking for working fork... yes checking for working vfork... (cached) yes checking for stdlib.h... (cached) yes checking for GNU libc compatible malloc... yes checking for working strtod... yes checking for getcwd... yes checking for gettimeofday... yes checking for memmove... yes checking for memset... yes checking for pow... yes checking for regcomp... yes checking for sqrt... yes checking for strcasecmp... yes checking for strchr... yes checking for strdup... yes checking for strerror... yes checking for strtol... yes

      Package libxul was not found in the pkg-config search path.
      Perhaps you should add the directory containing 'libxul.pc' to the PKG_CONFIG_PATH environment variable
      No package 'libxul' found
      Package libxul was not found in the pkg-config search path.
      Perhaps you should add the directory containing 'libxul.pc' to the PKG_CONFIG_PATH environment variable
      No package 'libxul' found
      checking for /stable... no
      checking for "/nsISupports.idl"... no
      configure: error: unable to find nsISupports.idl

    Running apt-get install libxul then returns (output translated from Spanish):

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      E: Unable to locate package libxul
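
    The configure failure is just pkg-config not finding libxul's metadata. Since the directory listing above already shows a downloaded libxul-dev .deb, a hedged sketch of one way forward (the paths are illustrative) is:

      # Install the locally downloaded development package
      sudo dpkg -i libxul-dev_1.8.1.16+nobinonly-0ubuntu1_all.deb
      sudo apt-get install -f        # pull in any dependencies dpkg reports missing

      # If configure still cannot see it, locate libxul.pc and point pkg-config at it
      find / -name 'libxul*.pc' 2>/dev/null
      export PKG_CONFIG_PATH=/usr/lib/pkgconfig:$PKG_CONFIG_PATH   # adjust to the path found
      ./configure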

    Read the article

  • Accessing Secure Web Services from ADF Mobile

    - by Shay Shmeltzer
    Most of the enterprise Web services you'll access are going to be secured, meaning they'll require you to pass a user/password in order to get to their data. If you've never created a secured Web service, it's simple in JDeveloper! For the video below I just right-clicked on a Java class that I had exposed as a Web service, chose "Web Service Properties", and then checked the "oracle/wss_username_token_service_policy" box in the list of options (that's the option supported by ADF Mobile right now). In the demo below we are going to use a "remote" login server that does the authentication of the user/pass. The easiest way to "create" a remote login server is to create a "regular" web ADF application, secure it, and deploy it on a server. The secured ADF application can just require ADF Authentication with simple HTTP Basic Authentication - basically the next two images in the Application -> Secure -> Configure ADF Security menu wizard. OK, so now you have a secured ADF application: deploy it on a server and get the URL for that application. From this point on you'll see the process in the video, which deals with the configuration of your ADF Mobile app. First you'll need to enable security for your ADF Mobile application, so it will prompt users to provide a user/pass combination. You'll also need to configure security on specific features, and you can have them use remote login pointing to your regular secured ADF application. Next, define your Web service data control. Right-click on the Web service data control to "define Web Service Security". You'll also need to define the adfCredentialStoreKey property for the Web Service data control in the connections.xml file. This should be it. If you haven't already, you can read more about this in the Mobile developer guide, and Andrejus has a sample for you.

    Read the article

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 hands-on lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.

    Summary of Lab Exercises. This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:

    1. Install Hadoop.
    2. Edit the Hadoop configuration files.
    3. Configure the Network Time Protocol.
    4. Create the virtual network interfaces (VNICs).
    5. Create the NameNode and the secondary NameNode zones.
    6. Set up the DataNode zones.
    7. Configure the NameNode.
    8. Set up SSH.
    9. Format HDFS from the NameNode.
    10. Start the Hadoop cluster.
    11. Run a MapReduce job.
    12. Secure data at rest using ZFS encryption.
    13. Use Oracle Solaris DTrace for performance monitoring.

    Read it now
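
    For a flavor of the Solaris side of the lab, the VNIC and zone steps boil down to dladm and zonecfg (a sketch only; the link, zone and path names are placeholders):

      # Create a virtual NIC per cluster node on top of the physical link net0
      dladm create-vnic -l net0 name_node1
      dladm create-vnic -l net0 data_node1

      # Define a zone for the NameNode bound to its VNIC, then install it
      zonecfg -z name-node-zone "create; set zonepath=/zones/name-node-zone; \
        add net; set physical=name_node1; end; commit"
      zoneadm -z name-node-zone install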

    Read the article

  • How to Install WebLogic 12c ZIP on Linux

    - by Bruno.Borges
    I knew that WebLogic had this small ZIP distribution, of only 184M, but what I didn't know was that it is so easy to install on Linux machines, especially for development purposes, that I thought I had to blog about it. You may want to check this blog, where I found the missing part of this how-to, but I'm blogging this again because I wanted to put it in a simpler way, straight to the point. And if you are looking for a how-to for Mac, check Arun Gupta's post. So, here's the step-by-step:

    1 - Download the ZIP distribution (don't worry if your system is x86_64). Don't forget to accept the OTN Free Developer License Agreement!

    2 - Choose where to install your WebLogic server and your domain, and set it as your MW_HOME environment variable. I will use /opt/middleware/weblogic for this how-to:

      export MW_HOME=/opt/middleware/weblogic

    Make sure this path exists in your system; 'mydomain' will be used to keep your WebLogic domain:

      mkdir -p $MW_HOME/mydomain

    3 - If you don't have your JAVA_HOME environment variable configured yet, do it. Point it to where your JDK is installed:

      export JAVA_HOME=/usr/lib/jvm/default-java

    4 - Unzip the downloaded file into MW_HOME:

      unzip wls1211_dev.zip -d $MW_HOME

    5 - Go to that directory and run configure.sh:

      cd $MW_HOME
      ./configure.sh

    6 - Call the setWLSEnv.sh script:

      . $MW_HOME/wlserver/server/bin/setWLSEnv.sh

    7 - Create your development domain. It will ask you for a username and password. I like to use weblogic / welcome1:

      cd $MW_HOME/mydomain
      $JAVA_HOME/bin/java $JAVA_OPTIONS -Xmx1024m \
        -Dweblogic.management.allowPasswordEcho=true weblogic.Server

    8 - Start WebLogic and access its web console:

      (sh startWebLogic.sh &); sleep 10; firefox http://localhost:7001/console

    Usually, it takes only 10 seconds to start a domain, and 5 more to deploy the Administration Console (on my laptop). :-) Enjoy!

    Read the article

  • Time Zone on WebLogic Server

    - by adejuanc
    In order to configure the time zone with WebLogic Server, use the following JVM startup argument:

      -Duser.timezone=<timezone>

    For example, in the Java arguments in the admin console at Environments -> Servers -> Servername -> Server Start tab, configure the startup settings that Node Manager will use to start that particular server, e.g.:

      -Duser.timezone='US/Arizona'

    There are many different time zones, each with its own code. For a complete list, please refer to: http://en.wikipedia.org/wiki/List_of_zoneinfo_time_zones

    For testing, you can run the following code on WLS from a JSP, a servlet, or by deploying the class:

      import java.util.Calendar;
      import java.util.TimeZone;

      public class TestTimeZone {
        public static void main(String[] args) {
          Calendar calendar = Calendar.getInstance();
          TimeZone timeZone = calendar.getTimeZone();
          System.out.println(" your Current TimeZone is : " + timeZone.getDisplayName());
          System.out.println(" Time Zone id : " + timeZone.getID());
        }
      }

    Read the article

  • Install ClamAV from SourceForge

    - by maria
    I'm using Ubuntu 10.04. I'm afraid I've installed a virus on my computer (I clicked a link which was a virus), and I'd prefer to check that everything is fine. I installed ClamAV using Synaptic. The installed version was 0.96.5, while the most recent version is 0.97.6. When I was trying to update the virus database, I got a warning that the version of ClamAV I'm using is outdated. Since I didn't know how to update the software via Synaptic or apt (both show the installed version as the newest), I uninstalled everything and downloaded the recent version from SourceForge. I unpacked the tar.gz archive and entered the folder, but when I type ./configure I get the message:

      configure: error: Please install zlib and zlib-devel packages

    When I type sudo aptitude install zlib zlib-devel, the terminal output says there are no such packages (there is no package zlib-devel, and there are many packages whose names contain zlib). I suppose the link probably contained a Windows virus, but I'd prefer to make sure there is nothing on my computer. I'm also not sure whether the virus, even without harming my system, could send itself to my e-mail contacts.
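
    zlib-devel is the Red Hat-style package name; on Ubuntu the headers live in zlib1g-dev. A sketch of the usual prerequisite install before rebuilding:

      # zlib runtime and development headers, as packaged on Ubuntu
      sudo apt-get install zlib1g zlib1g-dev

      # then re-run the ClamAV build
      ./configure
      make
      sudo make install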

    Read the article

  • Updated Business Activity Monitoring (BAM) Class

    - by Gary Barg
    We have just completed an extensive upgrade to the Business Activity Monitoring course, bringing it up to PS5 level and doing some major rework of content and topic flow. This should be a GREAT course for anyone needing to learn to use BAM effectively to analyze their SOA data.

    Details of the Course: This course explains how to use Oracle BAM to monitor enterprise business activities across an enterprise in real time. You can measure your key performance indicators (KPIs), determine whether you are meeting service-level agreements (SLAs), and take corrective action in real time.

    Learn To:
    - Create dashboards and alerts using a business-friendly, wizard-based design environment
    - Monitor BPM and BPEL processes
    - Configure drilling, driving, and time-based filtering
    - Create alerts
    - Build applications with a dynamic user interface
    - Manage BAM users and roles

    In addition to learning Oracle BAM architecture, you learn how to perform administrative tasks related to Oracle BAM. You create and work with the different types of message sources that send data into Oracle BAM. You build interactive, real-time, actionable dashboards, and you configure alerts on abnormal conditions. You learn how to monitor both BPEL and BPM composite applications with Oracle BAM. Lastly, you create and use Oracle BAM data control to build applications with a dynamic user interface that changes based on real-time business events.

    Registration: The Oracle University course page, with more course details and registration information, is here. The next scheduled class:

    Date: 5-Dec-2012
    Duration: 3 days
    Hours: 9:00 AM - 5:00 PM CT
    Location: Chicago, IL
    Class ID: 3325708

    Read the article

  • SharePoint 2007 Enabling Incoming Email Error

    - by Cherie Riesberg
    Symptom: When configuring incoming e-mail, the e-mails come through just fine if the server name is in the e-mail address ([email protected]), but when you change it to a vanity name ([email protected]) the message is bounced back and you get this error:

      Delivery has failed to these recipients or distribution lists: [email protected]
      Your message wasn't delivered because of security policies. Microsoft Exchange will not try to redeliver this message for you. Please provide the following diagnostic text to your system administrator.
      The following organization rejected your message: servername01.fqdn.com.

    Problem: The SharePoint server relay rejects the message because it doesn't recognize the name. You have set it up in Exchange, but you also need to set up an alias in the SMTP service on the SharePoint server.

    Solution: Configure an alias domain. An alias domain is an alias of the default domain. You can set up alias domains that use the same settings as the default domain. Messages that are received by the SMTP service for an alias domain are placed in the Drop folder that is designated for the default domain. To configure an alias domain, follow these steps:

    1. Start IIS Manager or open the IIS snap-in.
    2. Expand Server_name, where Server_name is the name of the server, and then expand the SMTP virtual server that you want (for example, Default SMTP Virtual Server).
    3. Right-click Domains, point to New, and then click Domain. The New SMTP Domain Wizard starts.
    4. Click Alias, and then click Next.
    5. Type a name for the alias domain in the Name box, and then click Finish.
    6. Quit IIS Manager or close the IIS snap-in.

    Read the article

  • DomU Installation on Ubuntu 11.10

    - by sridutt
    I am trying to add a DomU operating system on Ubuntu 11.10. I have successfully installed Xen, verified with xm info and virsh version, which returns:

      Compiled against library: libvir 0.9.2
      Using library: libvir 0.9.2
      Using API: Xen 3.0.1
      Running hypervisor: Xen 4.1

    Now when I tried to install a DomU it said: unable to connect to 'localhost:8000': , in VMM. So I followed this bug link, and I could then start adding the DomU. When adding the DomU, at the last stage, it gives the following error:

      Unable to complete install: 'POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")'
      Traceback (most recent call last):
        File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
          callback(asyncjob, *args, **kwargs)
        File "/usr/share/virt-manager/virtManager/create.py", line 1899, in do_install
          guest.start_install(False, meter=meter)
        File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1223, in start_install
          noboot)
        File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1291, in _create_guest
          dom = self.conn.createLinux(start_xml or final_xml, 0)
        File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1686, in createLinux
          if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
      libvirtError: POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")

    I tried following this bug link, which said the bug is solved in the package below. When I run ./configure in it, I am getting an error:

      checking for LIBXML... no
      checking libxml2 xml2-config >= 2.6.0 ... configure: error: Could not find libxml2 anywhere (see config.log for details).

    What is the problem?
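
    The last error is just a missing development package; a sketch of the usual fix on Ubuntu (the qemu-dm issue is separate and tracked in the bug linked above):

      # configure needs libxml2's headers, not just the runtime library
      sudo apt-get install libxml2-dev

      # then re-run the build configuration
      ./configure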

    Read the article

  • Grub2 mutual dependency issue

    - by A T
    For various reasons I am installing .deb dependencies for grub2 using dpkg directly (rather than apt-get):

      root@ubuntu:/dl# dpkg -i grub-gfxpayload-lists_0.6_amd64.deb
      Selecting previously unselected package grub-gfxpayload-lists.
      (Reading database ... 249808 files and directories currently installed.)
      Preparing to unpack grub-gfxpayload-lists_0.6_amd64.deb ...
      Unpacking grub-gfxpayload-lists (0.6) ...
      dpkg: dependency problems prevent configuration of grub-gfxpayload-lists:
       grub-gfxpayload-lists depends on grub-pc (>= 1.99~20101210-1ubuntu2); however:
        Package grub-pc is not configured yet.
      dpkg: error processing package grub-gfxpayload-lists (--install):
       dependency problems - leaving unconfigured
      Processing triggers for man-db (2.6.7.1-1) ...
      Errors were encountered while processing:
       grub-gfxpayload-lists

    By "configure" I assume it means install+configure, so I tried:

      root@ubuntu:/dl# dpkg -i grub-pc_2.02~beta2-9_amd64.deb
      (Reading database ... 249818 files and directories currently installed.)
      Preparing to unpack grub-pc_2.02~beta2-9_amd64.deb ...
      Unpacking grub-pc (2.02~beta2-9) over (2.02~beta2-9) ...
      dpkg: dependency problems prevent configuration of grub-pc:
       grub-pc depends on grub2-common (= 2.02~beta2-9); however:
        Package grub2-common is not installed.
       grub-pc depends on grub-pc-bin (= 2.02~beta2-9); however:
        Package grub-pc-bin is not installed.
       grub-pc depends on grub-gfxpayload-lists; however:
        Package grub-gfxpayload-lists is not configured yet.
      dpkg: error processing package grub-pc (--install):
       dependency problems - leaving unconfigured
      Processing triggers for man-db (2.6.7.1-1) ...
      Errors were encountered while processing:
       grub-pc

    How do I solve this problem?
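
    Packages that depend on each other have to be unpacked in the same dpkg invocation so their configuration can be ordered afterwards; a sketch, assuming all the downloaded .deb files sit in the current directory:

      # Unpack every grub package in a single call so dpkg can resolve the cycle
      sudo dpkg -i grub2-common_*.deb grub-pc-bin_*.deb grub-pc_*.deb grub-gfxpayload-lists_*.deb

      # Then finish any pending configuration
      sudo dpkg --configure -a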

    Read the article

  • If an entity is composed, is it still a god object?

    - by Telastyn
    I am working on a system to configure hardware. Unfortunately, there is tons of variety in the hardware, which means there's a wide variety of capabilities and configurations depending on what specific hardware the software connects to. To deal with this, we're using a component-based entity design where the "hardware" class itself is a very thin container for components that are composed at runtime based on what capabilities/configuration are available. This works great, and the design itself has worked well elsewhere (particularly in games). The problem is that all this software does is configure the hardware. As such, almost all of the code is a component of the hardware instance. While the consumer only ever works against the strongly typed interfaces for the components, it could be argued that the class that represents an instance of the hardware is a God Object. If you want to do anything to/with the hardware, you query an interface and work with it. So, even if the components of an object are modular and decoupled well, is their container a God Object, and does it carry the downsides associated with that anti-pattern?

    Read the article

  • Eclipse buildtime and runtime classpaths for easy execution of junit test cases

    - by emeraldjava
    Hey, I have a large Eclipse project with multiple JUnit classes. I'm trying to strike a balance between adding runtime resources to the Eclipse project classpath and the need to configure multiple JUnit launch configurations. I realise the default Eclipse build classpath is inherited by all unit test configurations, but some of my tests require extra runtime resources. I could add these resources to the build classpath, but this does slow my overall project build time (since it has to keep more files in sync). I don't like the idea of including * resources and jars on the runtime classpath. The two options I have are these; the positive and negative cases as I see them are listed:

    1: Add all runtime resources to the Eclipse classpath.
    POS: I can select a unit test and run it without having to configure the test classpath.
    NEG: Extra resources on the build classpath mean Eclipse slows down.
    NEG: More difficult to ensure each test uses the correct resources.

    2: Configure the classpath of each unit test.
    POS: I know exactly what resources are being used by a test.
    POS: A smaller build classpath means quicker build and execution by Eclipse.
    NEG: It's a pain having to set up multiple separate JUnit runtime classpaths.

    Ideally, I'd like to configure one base JUnit runtime configuration which takes the default Eclipse build classpath and adds extra runtime jars and resources. This configuration could then be reused by the specific JUnit test cases. Is anything like this possible? Looking at a specific JUnit launch configuration, which can be exported to a shared project file:

      <?xml version="1.0" encoding="UTF-8" standalone="no"?>
      <launchConfiguration type="org.eclipse.jdt.junit.launchconfig">
        <stringAttribute key="bad_container_name" value="/CR-3089_5_1_branch."/>
        <listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_PATHS">
          <listEntry value="/CR-3089_5_1_branch/src/com/x/y/z/ParserJUnitTest.java"/>
        </listAttribute>
        <listAttribute key="org.eclipse.debug.core.MAPPED_RESOURCE_TYPES">
          <listEntry value="1"/>
        </listAttribute>
        <stringAttribute key="org.eclipse.jdt.junit.CONTAINER" value=""/>
        <booleanAttribute key="org.eclipse.jdt.junit.KEEPRUNNING_ATTR" value="false"/>
        <stringAttribute key="org.eclipse.jdt.junit.TESTNAME" value=""/>
        <stringAttribute key="org.eclipse.jdt.junit.TEST_KIND" value="org.eclipse.jdt.junit.loader.junit4"/>
        <stringAttribute key="org.eclipse.jdt.launching.MAIN_TYPE" value="com.x.y.z.ParserJUnitTest"/>
        <stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="CR-3089_5_1_branch"/>
      </launchConfiguration>

    Is it possible to extend/reuse this configuration and parameterise the 'org.eclipse.jdt.launching.MAIN_TYPE' value? I'm aware of the commons-launch and jar manifest file solutions to configuring the classpath, but they both seem to assume that an Ant build is run before the test can execute. I want to avoid any Eclipse dependency on calling an Ant target when refactoring code and executing tests. Basically: what is the easiest way to separate and maintain the Eclipse buildtime classpath and the JUnit runtime classpaths?

    Read the article

  • Using CMS for App Configuration - Part 1, Deploying Umbraco

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/06/04/using-cms-for-app-configurationndashpart-1-deploy-umbraco.aspx

    Since my last post on using CMS for semi-static API content, How about a new platform for your next API… a CMS?, I've been using the idea for centralized app configuration, and this post is the first in a series that will walk through how to do that, step-by-step. The approach gives you a platform-independent, easily configurable way to specify your application configuration for different environments, with a built-in approval workflow, change auditing and the ability to easily roll back to previous settings. It's like Azure Web and Worker Roles where you can specify settings that change at runtime, but it's not specific to Azure - you can use it for any app that needs changeable config, provided it can access the Internet. The series breaks down into four posts:

    1. Deploying Umbraco - the CMS that will store your configurable settings and the current values;
    2. Publishing your config - create a document type that encapsulates your settings and a template to expose them as JSON;
    3. Consuming your config - in .NET, a simple client that uses dynamic objects to access settings;
    4. Config lifecycle management - how to publish, audit, and rollback settings.

    Let's get started.

    Deploying Umbraco: There's an Umbraco package on Azure Websites, so deploying your own instance is easy, but there are a couple of things to watch out for, so this step-by-step will put you in a good place.

    Create From Gallery: The easiest way to get started is with an Azure subscription; navigate to add a new Website and then Create From Gallery. Under CMS, you'll see an Umbraco package (currently at version 7.1.3).

    Configure Your App: For high availability and scale, you'll want your CMS on separate kit from anything else you have in Azure, so in the configuration of Umbraco I'd create a new SQL Azure database, which Umbraco will use to store all its content. You can use the free 20 MB database option if you don't have demanding NFRs, or if you're just experimenting. You'll need to specify a password for a SQL Server account which the Umbraco service will use, and changing from the default username umbracouser is probably wise.

    Specify Database Settings: You can create a new database on an existing server if you have one, or create new. If you create a new server, *do not* use the same username for the database server login as you used for the Umbraco account. If you do, the deployment will fail later. Think of this as the SQL admin account that you can use for managing the db; the previous account was the service account Umbraco uses to connect.

    Make Tea: If you have a fast kettle. It takes about two minutes for Azure to create and provision the website and the database.

    Install Umbraco: So far we've deployed an empty instance of Umbraco using the Azure package, and now we need to browse to the site and complete installation. My Website was called my-app-config, so to complete installation I browse to http://my-app-config.azurewebsites.net. Enter the credentials you want to use to log in; this account will have full admin rights to the Umbraco instance. Note that between deploying your new Umbraco instance and completing installation in this step, anyone who knows the URL can browse to your website and complete the installation themselves with their own credentials. A remote possibility, but it's there. From this page *do not* click the big green Install button. If you do, Umbraco will configure itself with a local SQL Server CE database (an .sdf file on the Web server) and ignore the SQL Azure database you've carefully provisioned and may be paying for. Instead, click on the Customize link and:

    Configure Your Database: You need to enter your SQL Azure database details here, so you'll have to get the server name from the Azure Management Console. You don't need to explicitly grant access to your Umbraco website for the database, though. Click Continue and you'll be offered a "starter" website to install. If you don't know Umbraco at all (but you are familiar with ASP.NET MVC) then a starter website is worthwhile to see how it all hangs together. But after a while you'll have a bunch of artifacts in your CMS that you don't want, and you'll have to work out which you can safely delete. So I'd click "No thanks, I do not want to install a starter website" and give yourself a clean Umbraco install. When it completes, the installation will log you in to the welcome screen for managing Umbraco, which you can access from http://my-app-config.azurewebsites.net/umbraco.

    That's It: Easy. Umbraco is installed, using a dedicated SQL Azure instance that you can separately scale, sync and back up, and it's ready for your content. In the next post, we'll define what our app config looks like and publish some settings for the dev environment.

    Read the article

  • Look Inside WebLogic Server Embedded LDAP with an LDAP Explorer

    - by james.bayer
    Today a question came up on our internal WebLogic Server mailing lists about an issue deleting a group from WebLogic Server. The group had a special character in its name. The WLS console refused to delete the group, showing a java.net.MalformedURLException and another message saying "Errors must be corrected before proceeding." The group aa:bb is the one with the issue. WebLogic Server includes an embedded LDAP server that can be used for managing users and groups for "reasonably small environments (10,000 or fewer users)". For organizations scaling larger or using more high-end features, I recommend looking at one of Oracle's very popular enterprise directory services products like Oracle Internet Directory or Oracle Directory Server Enterprise Edition. You can configure multiple authenticators in WebLogic Server so that you can use multiple directories at the same time. I am not sure WebLogic Server supports special characters in group names for the embedded LDAP server, but in this case both the console and WLST reported the same issue deleting the group with the special character in the name. Here's the WLST output:

      wls:/hotspot_domain/serverConfig/SecurityConfiguration/hotspot_domain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator> cmo.removeGroup('aa:bb')
      Traceback (innermost last):
        File "<console>", line 1, in ?
      weblogic.security.providers.authentication.LDAPAtnDelegateException: [Security:090296]invalid URL ldap:///ou=people,ou=myrealm,dc=hotspot_domain??sub?(&(objectclass=person)(wlsMemberOf=cn=aa:bb,ou=groups,ou=myrealm,dc=hotspot_domain))
        at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:254)
        at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.<init>(LDAPAtnGroupMembersNameList.java:119)
        at weblogic.security.providers.authentication.LDAPAtnDelegate.listGroupMembers(LDAPAtnDelegate.java:1392)
        at weblogic.security.providers.authentication.LDAPAtnDelegate.removeGroup(LDAPAtnDelegate.java:1989)
        at weblogic.security.providers.authentication.DefaultAuthenticatorImpl.removeGroup(DefaultAuthenticatorImpl.java:242)
        at weblogic.security.providers.authentication.DefaultAuthenticatorMBeanImpl.removeGroup(DefaultAuthenticatorMBeanImpl.java:407)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at weblogic.management.jmx.modelmbean.WLSModelMBean.invoke(WLSModelMBean.java:437)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:447)
        at weblogic.management.mbeanservers.internal.JMXContextInterceptor.invoke(JMXContextInterceptor.java:263)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:447)
        at weblogic.management.mbeanservers.internal.SecurityInterceptor.invoke(SecurityInterceptor.java:444)
        at weblogic.management.jmx.mbeanserver.WLSMBeanServer.invoke(WLSMBeanServer.java:323)
        at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$11$1.run(JMXConnectorSubjectForwarder.java:663)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder$11.run(JMXConnectorSubjectForwarder.java:661)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
        at weblogic.management.mbeanservers.internal.JMXConnectorSubjectForwarder.invoke(JMXConnectorSubjectForwarder.java:654)
        at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
        at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
        at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1367)
        at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
        at javax.management.remote.rmi.RMIConnectionImpl_WLSkel.invoke(Unknown Source)
        at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:667)
        at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:522)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
        at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518)
        at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:176)
      Caused by: java.net.MalformedURLException
        at netscape.ldap.LDAPUrl.readNextConstruct(LDAPUrl.java:651)
        at netscape.ldap.LDAPUrl.parseUrl(LDAPUrl.java:277)
        at netscape.ldap.LDAPUrl.<init>(LDAPUrl.java:114)
        at weblogic.security.providers.authentication.LDAPAtnGroupMembersNameList.advance(LDAPAtnGroupMembersNameList.java:224)
        ... 41 more

    It's fairly clear that for this to work, the : character needs to be URL-encoded to %3A or similar. But all is not lost; there is another way. You can configure an LDAP explorer like JXplorer to connect to the WebLogic Server embedded LDAP and browse/edit the entries. Follow the instructions here, being sure to change the authentication credentials of the embedded LDAP server to some value you know, as by default they are some unknown value. You'll need to reboot the WebLogic Server Admin Server after making this change. Now configure JXplorer to connect as described in the documentation. In this example, my domain name is "hotspot_domain", which listens on the localhost listen address and port 7001. The cn=Admin user name is a constant identifier for the administrator of the embedded LDAP and does not change, but you need to know what it is so you can enter it into the tool you use. Once you connect successfully, you can explore the entries and, in this case, delete the group that is no longer desired.

    Read the article

  • Installing SOA Suite 11.1.1.3

    - by James Taylor
    With the release of Oracle SOA Suite 11.1.1.3 last week (28 April 2010), I thought I would attempt to implement a complete SOA environment with SOA Suite, BPM and OSB on the WLS infrastructure. One major point of difference with 11.1.1.3 is that it is released as a point release, so you must have 11.1.1.2 installed first and then upgrade to 11.1.1.3. This post performs the upgrade on Linux; if upgrading on Windows you will need to substitute the directories and files accordingly. This post assumes that you have SOA Suite 11.1.1.2 installed already.

    1. Download the 11.1.1.3 software (WLS 11.1.1.3, RCU 11.1.1.3, SOA Suite 11.1.1.3, OSB 11.1.1.3) from the following site: http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html. Copy the files to a staging area. For the purpose of this document the staging area is /u01/stage.

    2. Shut down your existing SOA Suite 11.1.1.2 environment.

    3. Execute the WLS 11.1.1.3 install from the stage directory: wls1033_linux32.bin

    4. Choose the existing 11.1.1.2 Middleware Home.

    5. Ignore the security update notification.

    6. Accept the default products to be upgraded.

    7. The upgrade of WebLogic has been completed.

    8. Upgrade the SOA Suite database schemas using the RCU utility. Unzip the RCU utility into the staging area and run the install (/u01/stage/rcuHome/bin/rcu); drop the existing repository and provide connection details.

    9. Install SOA Suite patch set 11.1.1.3. Unzip the SOA Suite patch set and execute the runInstaller with the following command: /u01/stage/Disk1/runInstaller -jreLoc $MW_HOME/jdk160_18/jre

    10. Choose the existing 11.1.1.2 Middleware Home.

    11. Start the install.

    12. Your SOA Suite install should now be completed. Now we need to update the database repository. Log in to SQL*Plus as sysdba and execute the following command:

      SELECT version, status FROM schema_version_registry where owner = 'DEV_SOAINFRA';

    The result should be similar to this:

      VERSION                        STATUS      OWNER
      ------------------------------ ----------- ------------------------------
      11.1.1.2.0                     VALID       DEV_SOAINFRA

    As you can see, the version of these repositories is still 11.1.1.2.

    13. To upgrade these versions you have two options. The first is to install via RCU, but this will remove any existing services. The second option is to use the Patch Set Assistant. From the $MW_HOME directory run the following command:

      ./Oracle_SOA1/bin/psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA

    14. Install OSB. For the OSB install I did not install the IDE or the examples. Unzip the OSB download to the stage area and run the runInstaller from the command line: /u01/stage/osb/Disk1/runInstaller -jreLoc $MW_HOME/jdk160_18/jre

    15. Choose Custom Install so as NOT to install the IDE (Eclipse) or the examples.

    16. Unselect the Examples and IDE checkboxes.

    17. Accept the defaults and start installing.

    18. Once the install has been completed, configure the domain by running the Configuration Wizard: $MW_HOME/oracle_common/common/bin/config.sh. You can create a new domain; in this document I will extend the soa_domain.

    19. Select the following from the checklist. I have selected the BPM Suite; this is unrelated to OSB, but I wanted it for my development purposes. To use this functionality additional licenses are required.

    20. Configure the database connectivity.

    21. Configure the database connectivity for the OSB schema.

    22. Accept the defaults if installing on a standard machine; if you require a cluster or advanced configuration then choose the option for you.

    23. The upgrade is complete and OSB has been installed. Now you can start your environment.

    Read the article
