Search Results

Search found 21740 results on 870 pages for 'andrew service'.


  • How can I get the user object from a service in Symfony2

    - by pogo
    My site is using a third-party service for authentication as well as other bits of functionality, so I have set up a service that does all the API calls for me, and it's from this service that I need to be able to access the user object. I've tried injecting the security.context service into my API service, but I get a ServiceCircularReferenceException because my user authentication provider references the API service (it has to in order to authenticate the user). So I get a chain of:

        security.context -> authentication provider -> user provider -> API service -> security.context

    I'm struggling to think of another way of getting the user object and I can't see any obvious way of splitting up this chain. My configs are all defined in config.yml; here are the relevant bits:

        myapp.database:
            class: Pogo\MyAppBundle\Service\DatabaseService
            arguments:
                siteid: %siteid%
                entityManager: "@doctrine.orm.entity_manager"
        myapp.apiservice:
            class: Pogo\MyAppBundle\Service\TicketingService
            arguments:
                entityManager: "@myapp.database"
        myapp.user_provider:
            class: Pogo\MyAppBundle\Service\APIUserProvider
            arguments:
                entityManager: "@myapp.database"
                ticketingAdapter: "@myapp.apiservice"
                securityContext: "@security.context"
        myapp.authenticationprovider:
            class: Pogo\MyAppBundle\Service\APIAuthenticationProvider
            arguments:
                userChecker: "@security.user_checker"
                encoderFactory: "@security.encoder_factory"
                userProvider: "@myapp.user_provider"

    myapp.user_provider is the service that I've defined as my user provider in security.yml, which I presume is how it's being referenced by security.context.
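    A common way out of this kind of loop (a sketch, not taken from the question) is to stop handing security.context to the API service at construction time and resolve it lazily instead, for example by injecting the service container and looking up the token only when a user is actually needed. The argument names and class body below are assumptions based on the config above, not the asker's code:

        # config.yml (hypothetical adjustment)
        myapp.apiservice:
            class: Pogo\MyAppBundle\Service\TicketingService
            arguments:
                entityManager: "@myapp.database"
                container: "@service_container"

        // TicketingService.php (sketch)
        use Symfony\Component\DependencyInjection\ContainerInterface;

        class TicketingService
        {
            private $entityManager;
            private $container;

            public function __construct($entityManager, ContainerInterface $container)
            {
                $this->entityManager = $entityManager;
                $this->container = $container;
            }

            private function getUser()
            {
                // security.context is only looked up here, after the firewall has
                // finished authenticating, so no constructor-time cycle occurs.
                $token = $this->container->get('security.context')->getToken();

                return $token ? $token->getUser() : null;
            }
        }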

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11, and Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new.

    I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning.

    First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.  What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple:

      # zfs create -o mountpoint=/export/repo rpool/ai/repo
      # zfs create rpool/ai/repo/sol11
      # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt
      # rsync -aP /mnt/repo /export/repo/sol11
      # umount /mnt
      # pkgrepo rebuild -s /export/repo/sol11/repo
      # zfs snapshot rpool/ai/repo/sol11@fcs
      # pkgrepo info -s /export/repo/sol11/repo
      PUBLISHER  PACKAGES  STATUS  UPDATED
      solaris    4292      online  2012-03-12T20:47:15.378639Z

    That's all there is to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher:

      # pkg set-publisher -g file:///export/repo/sol11/repo solaris

    In case I'd want to access this repository through a (virtual) network, I'll now quickly activate the repository service:

      # svccfg -s application/pkg/server \
          setprop pkg/inst_root=/export/repo/sol11/repo
      # svccfg -s application/pkg/server setprop pkg/readonly=true
      # svcadm refresh application/pkg/server
      # svcadm enable application/pkg/server

    That's all you need - now point your browser to http://localhost/ to view your beautiful repository server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image.

    Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: a full repository with all updates and, at the same time, incremental ones up to each of the updates:

      # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt
      # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*'
      # umount /mnt
      # pkgrepo rebuild -s /export/repo/sol11/repo
      # zfs snapshot rpool/ai/repo/sol11@sru4
      # zfs set snapdir=visible rpool/ai/repo/sol11
      # svcadm restart svc:/application/pkg/server:default

    The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port.

    But now let's continue with the AI server.  Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.  
    Since I already have one active (for my SunRay installation), and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first:

      # zfs create -o mountpoint=/export/install rpool/ai/install
      # zfs create -o mountpoint=/zones rpool/zones
      # zonecfg -z ai-server
      zonecfg:ai-server> create
      create: Using system default template 'SYSdefault'
      zonecfg:ai-server> set zonepath=/zones/ai-server
      zonecfg:ai-server> add dataset
      zonecfg:ai-server:dataset> set name=rpool/ai/install
      zonecfg:ai-server:dataset> set alias=install
      zonecfg:ai-server:dataset> end
      zonecfg:ai-server> commit
      zonecfg:ai-server> exit
      # zoneadm -z ai-server install
      # zoneadm -z ai-server boot ; zlogin -C ai-server

    Give it a hostname and IP address at first boot, and there's the Zone.  As a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now:

      # zlogin ai-server
      root@ai-server:~# pkg install install/installadm
      root@ai-server:~# installadm create-service -n x86-fcs -a i386 \
          -s pkg://solaris/install-image/solaris-auto-install@5.11,5.11-0.175.0.0.0.2.1482 \
          -d /export/install/fcs -i 192.168.2.20 -c 3

    With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP server and anything else that might be missing.

    Watch out for that DHCP server software.  In Solaris 11, there are two different versions: the one you might know from Solaris 10 and earlier, and a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default".  The ISC server comes with several SMF services; we at least need "svc:/network/dhcp/server:ipv4".

    The command "installadm create-service" creates the installation service.  It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.)  The boot environment for clients is created in /export/install/fcs and the DHCP server is configured for 3 IP addresses starting at 192.168.2.20.  This configuration is stored in a very human-readable form in /etc/inet/dhcpd4.conf.  An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option.

    Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server.  The true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

    The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout, etc.  
Hence, profiles are very similar to the old sysid.cfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us it's default one.  Then modify that to our liking and give it back to the installserver to use: root@ai-server:~# mkdir -p /export/install/configs/manifests root@ai-server:~# cd /export/install/configs/manifests root@ai-server:~# installadm export -n x86-fcs -m orig_default \ -o orig_default.xml root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml root@ai-server:~# vi s11-fcs.small.local.xml root@ai-server:~# more s11-fcs.small.local.xml <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="S11 Small fcs local"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="IPS"> <destination> <image> <!-- Specify locales to install --> <facet set="false">facet.locale.*</facet> <facet set="true">facet.locale.de</facet> <facet set="true">facet.locale.de_DE</facet> <facet set="true">facet.locale.en</facet> <facet set="true">facet.locale.en_US</facet> </image> </destination> <source> <publisher name="solaris"> <origin name="http://192.168.2.12/"/> </publisher> </source> <!-- By default the latest build available, in the specified IPS repository, is installed. If another build is required, the build number has to be appended to the 'entire' package in the following form: <name>pkg:/[email protected]#</name> --> <software_data action="install"> <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name> <name>pkg:/group/system/solaris-small-server</name> </software_data> </software> </ai_instance> </auto_install> root@ai-server:~# installadm create-manifest -n x86-fcs -d \ -f ./s11-fcs.small.local.xml root@ai-server:~# installadm list -m -n x86-fcs Manifest Status Criteria -------- ------ -------- S11 Small fcs local Default None orig_default Inactive None The major points in this new manifest are: Install "solaris-small-server" Install a few locales less than the default.  I'm not that fluid in French or Japanese... Use my own package service as publisher, running on IP address 192.168.2.12 Install the initial release of Solaris 11:  pkg:/[email protected],5.11-0.175.0.0.0.2.0 Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  
The modular approach makes it easy to configure numerous clients later on: root@ai-server:~# mkdir -p /export/install/configs/profiles root@ai-server:~# cd /export/install/configs/profiles root@ai-server:~# sysconfig create-profile -o default.xml root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml root@ai-server:~# cp default.xml user.xml root@ai-server:~# vi general.xml mars.xml user.xml root@ai-server:~# more general.xml mars.xml user.xml :::::::::::::: general.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="Europe/Berlin"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="vt100"/> </property_group> </instance> </service> <service version="1" type="service" name="network/physical"> <instance enabled="true" name="default"> <property_group type="application" name="netcfg"> <propval type="astring" name="active_ncp" value="DefaultFixed"/> </property_group> </instance> </service> <service version="1" type="service" name="system/name-service/switch"> <property_group type="application" name="config"> <propval type="astring" name="default" value="files"/> <propval type="astring" name="host" value="files dns"/> <propval type="astring" name="printer" value="user files"/> </property_group> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="system/name-service/cache"> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="network/dns/client"> <property_group type="application" name="config"> <property type="net_address" name="nameserver"> <net_address_list> <value_node value="192.168.2.1"/> </net_address_list> </property> </property_group> <instance enabled="true" name="default"/> </service> </service_bundle> :::::::::::::: mars.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="network/install"> <instance enabled="true" name="default"> <property_group type="application" name="install_ipv4_interface"> <propval type="astring" name="address_type" value="static"/> <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/> <propval type="astring" name="name" value="net0/v4"/> <propval type="net_address_v4" name="default_route" value="192.168.2.1"/> </property_group> <property_group type="application" name="install_ipv6_interface"> <propval type="astring" name="stateful" value="yes"/> <propval type="astring" name="stateless" value="yes"/> <propval type="astring" 
name="address_type" value="addrconf"/> <propval type="astring" name="name" value="net0/v6"/> </property_group> </instance> </service> <service version="1" type="service" name="system/identity"> <instance enabled="true" name="node"> <property_group type="application" name="config"> <propval type="astring" name="nodename" value="mars"/> </property_group> </instance> </service> </service_bundle> :::::::::::::: user.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="role"/> </property_group> <property_group type="application" name="user_account"> <propval type="astring" name="login" value="stefan"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="normal"/> <propval type="astring" name="description" value="Stefan Hinker"/> <propval type="count" name="uid" value="12345"/> <propval type="count" name="gid" value="10"/> <propval type="astring" name="shell" value="/usr/bin/bash"/> <propval type="astring" name="roles" value="root"/> <propval type="astring" name="profiles" value="System Administrator"/> <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/> </property_group> </instance> </service> </service_bundle> root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \ -c ipv4=192.168.2.100 root@ai-server:~# installadm list -p Service Name Profile ------------ ------- x86-fcs general.xml mars.xml user.xml root@ai-server:~# installadm list -n x86-fcs -p Profile Criteria ------- -------- general.xml None mars.xml ipv4 = 192.168.2.100 user.xml None Here's the idea behind these files: "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same. "user.xml" only contains user definitions.  That is, a root password and a primary user.Both of these profiles will be valid for all clients (for now). "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step: root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs root@ai-server:~# vi /etc/inet/dhcpd4.conf root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf host 080027AA3DB1 { hardware ethernet 08:00:27:AA:3D:B1; fixed-address 192.168.2.100; filename "01080027AA3DB1"; } This completes the client preparations.  I manually added the IP-Address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP-replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead. root@ai-server:~# cd /etc/netboot root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org root@ai-server:~# vi menu.lst.01080027AA3DB1 root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org 1,2c1,2 < default=1 < timeout=10 --- > default=0 > timeout=30 root@ai-server:~# more menu.lst.01080027AA3DB1 default=1 timeout=10 min_mem64=0 title Oracle Solaris 11 11/11 Text Installer and command line kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=htt p://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_addre ss=$serverIP:5555 module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive title Oracle Solaris 11 11/11 Automated Install kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,inst all_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,inst all_svc_address=$serverIP:5555,livemode=text module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive Now just boot the client off the network using PXE-boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there's to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?
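    As a quick cross-check (not part of the original post), a client installed this way can later be repointed from the Oracle release repository to the same local repository service; 192.168.2.12 is the address used as publisher origin in the manifest above:

      # pkg set-publisher -G '*' -g http://192.168.2.12/ solaris
      # pkg publisher solaris
      # pkg update

    The first command replaces all existing origins of the "solaris" publisher with the local one, and pkg update then pulls everything from the demo repository server instead of the Internet.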

    Read the article

  • How can I prevent my Android app/service from being "killed" from a task manager?

    - by Don
    It is very important that my service stays running until someone with a password stops it from my UI screen. My app runs great, but it is designed to be turned on/off by parents (with a password) on their kids' phones. I have managed to make everything work, but the problem I'm having is that if the kid uses a task manager to kill my service, then my app is useless. I would be grateful to anyone who knows a way to either 1) monitor the service and start it back up automatically if it's "killed", or 2) prevent someone from being able to kill it except from the activity (administration screen) that launched the service. Or both? I'm sorry if I'm not very clear in describing the problem; I'm a beginner. I've made great progress so far, but I am stuck at this last hurdle.
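    There is no supported way to make a service truly unkillable, but for option 1 the usual approach is to return START_STICKY from onStartCommand() so Android recreates the service after it is killed, and to run it as a foreground service (startForeground() with a persistent notification) to raise its priority. A minimal sketch, with a hypothetical class name not taken from the question:

        import android.app.Service;
        import android.content.Intent;
        import android.os.IBinder;

        public class ParentalControlService extends Service {

            @Override
            public int onStartCommand(Intent intent, int flags, int startId) {
                // START_STICKY asks the system to recreate the service (with a
                // null intent) after it has been killed, e.g. by a task manager.
                return START_STICKY;
            }

            @Override
            public IBinder onBind(Intent intent) {
                return null; // started (not bound) service
            }
        }

    Combined with a foreground notification this makes casual killing much harder, but a determined user, or anyone on a rooted device, can still stop it.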

    Read the article

  • Can't connect to STunnel when it's running as a service

    - by John Francis
    I've got STunnel configured to proxy non SSL POP3 requests to GMail on port 111. This is working fine when STunnel is running as a desktop app, but when I run the STunnel service, I can't connect to port 111 on the machine (using Outlook Express for example). The Stunnel log file shows the port binding is succeeding, but it never sees a connection. There's something preventing the connection to that port when STunnel is running as a service? Here's stunnel.conf cert = stunnel.pem ; Some performance tunings socket = l:TCP_NODELAY=1 socket = r:TCP_NODELAY=1 ; Some debugging stuff useful for troubleshooting debug = 7 output = stunnel.log ; Use it for client mode client = yes ; Service-level configuration [gmail] accept = 127.0.0.1:111 connect = pop.gmail.com:995 stunnel.log from service 2010.10.07 12:14:22 LOG5[80444:72984]: Reading configuration from file stunnel.conf 2010.10.07 12:14:22 LOG7[80444:72984]: Snagged 64 random bytes from C:/.rnd 2010.10.07 12:14:23 LOG7[80444:72984]: Wrote 1024 new random bytes to C:/.rnd 2010.10.07 12:14:23 LOG7[80444:72984]: PRNG seeded successfully 2010.10.07 12:14:23 LOG7[80444:72984]: Certificate: stunnel.pem 2010.10.07 12:14:23 LOG7[80444:72984]: Certificate loaded 2010.10.07 12:14:23 LOG7[80444:72984]: Key file: stunnel.pem 2010.10.07 12:14:23 LOG7[80444:72984]: Private key loaded 2010.10.07 12:14:23 LOG7[80444:72984]: SSL context initialized for service gmail 2010.10.07 12:14:23 LOG5[80444:72984]: Configuration successful 2010.10.07 12:14:23 LOG5[80444:72984]: No limit detected for the number of clients 2010.10.07 12:14:23 LOG7[80444:72984]: FD=156 in non-blocking mode 2010.10.07 12:14:23 LOG7[80444:72984]: Option SO_REUSEADDR set on accept socket 2010.10.07 12:14:23 LOG7[80444:72984]: Service gmail bound to 0.0.0.0:111 2010.10.07 12:14:23 LOG7[80444:72984]: Service gmail opened FD=156 2010.10.07 12:14:23 LOG5[80444:72984]: stunnel 4.34 on x86-pc-mingw32-gnu with OpenSSL 1.0.0a 1 Jun 2010 2010.10.07 12:14:23 LOG5[80444:72984]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6 stunnel.log from desktop (working) process 2010.10.07 12:10:31 LOG5[80824:81200]: Reading configuration from file stunnel.conf 2010.10.07 12:10:31 LOG7[80824:81200]: Snagged 64 random bytes from C:/.rnd 2010.10.07 12:10:32 LOG7[80824:81200]: Wrote 1024 new random bytes to C:/.rnd 2010.10.07 12:10:32 LOG7[80824:81200]: PRNG seeded successfully 2010.10.07 12:10:32 LOG7[80824:81200]: Certificate: stunnel.pem 2010.10.07 12:10:32 LOG7[80824:81200]: Certificate loaded 2010.10.07 12:10:32 LOG7[80824:81200]: Key file: stunnel.pem 2010.10.07 12:10:32 LOG7[80824:81200]: Private key loaded 2010.10.07 12:10:32 LOG7[80824:81200]: SSL context initialized for service gmail 2010.10.07 12:10:32 LOG5[80824:81200]: Configuration successful 2010.10.07 12:10:32 LOG5[80824:81200]: No limit detected for the number of clients 2010.10.07 12:10:32 LOG7[80824:81200]: FD=156 in non-blocking mode 2010.10.07 12:10:32 LOG7[80824:81200]: Option SO_REUSEADDR set on accept socket 2010.10.07 12:10:32 LOG7[80824:81200]: Service gmail bound to 0.0.0.0:111 2010.10.07 12:10:32 LOG7[80824:81200]: Service gmail opened FD=156 2010.10.07 12:10:33 LOG5[80824:81200]: stunnel 4.34 on x86-pc-mingw32-gnu with OpenSSL 1.0.0a 1 Jun 2010 2010.10.07 12:10:33 LOG5[80824:81200]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6 2010.10.07 12:10:33 LOG7[80824:81844]: Service gmail accepted FD=188 from 127.0.0.1:24813 2010.10.07 12:10:33 LOG7[80824:81844]: Creating a new thread 2010.10.07 12:10:33 LOG7[80824:81844]: New thread created 
2010.10.07 12:10:33 LOG7[80824:25144]: Service gmail started 2010.10.07 12:10:33 LOG7[80824:25144]: FD=188 in non-blocking mode 2010.10.07 12:10:33 LOG7[80824:25144]: Option TCP_NODELAY set on local socket 2010.10.07 12:10:33 LOG5[80824:25144]: Service gmail accepted connection from 127.0.0.1:24813 2010.10.07 12:10:33 LOG7[80824:25144]: FD=212 in non-blocking mode 2010.10.07 12:10:33 LOG6[80824:25144]: connect_blocking: connecting 209.85.227.109:995 2010.10.07 12:10:33 LOG7[80824:25144]: connect_blocking: s_poll_wait 209.85.227.109:995: waiting 10 seconds 2010.10.07 12:10:33 LOG5[80824:25144]: connect_blocking: connected 209.85.227.109:995 2010.10.07 12:10:33 LOG5[80824:25144]: Service gmail connected remote server from 192.168.1.9:24814 2010.10.07 12:10:33 LOG7[80824:25144]: Remote FD=212 initialized 2010.10.07 12:10:33 LOG7[80824:25144]: Option TCP_NODELAY set on remote socket 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): before/connect initialization 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write client hello A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server hello A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server certificate A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server done A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write client key exchange A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write change cipher spec A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write finished A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 flush data 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read finished A 2010.10.07 12:10:33 LOG7[80824:25144]: 1 items in the session cache 2010.10.07 12:10:33 LOG7[80824:25144]: 1 client connects (SSL_connect()) 2010.10.07 12:10:33 LOG7[80824:25144]: 1 client connects that finished 2010.10.07 12:10:33 LOG7[80824:25144]: 0 client renegotiations requested 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server connects (SSL_accept()) 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server connects that finished 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server renegotiations requested 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache hits 2010.10.07 12:10:33 LOG7[80824:25144]: 0 external session cache hits 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache misses 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache timeouts 2010.10.07 12:10:33 LOG6[80824:25144]: SSL connected: new session negotiated 2010.10.07 12:10:33 LOG6[80824:25144]: Negotiated ciphers: RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5 2010.10.07 12:10:34 LOG7[80824:25144]: SSL socket closed on SSL_read 2010.10.07 12:10:34 LOG7[80824:25144]: Sending socket write shutdown 2010.10.07 12:10:34 LOG5[80824:25144]: Connection closed: 53 bytes sent to SSL, 118 bytes sent to socket 2010.10.07 12:10:34 LOG7[80824:25144]: Service gmail finished (0 left)
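    Not part of the original question, but when a proxy behaves differently as a Windows service than as a desktop app, it can help to confirm on the machine itself which process actually owns port 111 and which account and binary path the service uses. A rough sketch; the service name "stunnel" is an assumption:

      rem Which logon account and binary path does the service use?
      C:\> sc qc stunnel
      rem Is anything listening on 111, and which PID owns it?
      C:\> netstat -ano | findstr ":111"
      C:\> tasklist /fi "PID eq <pid reported by netstat>"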

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.Getting started with Application InitializationAs of IIS 8 Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. Using IIS 8 Application Initialization is an optional install component in Windows or the Windows Server Role Manager: This is an optional component so make sure you explicitly select it.IIS Configuration for Application InitializationInitialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.Start with the Application Pool:Here you need to set both the Start Automatically which is always set, and the StartMode which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself while Always Running flag is required in order to launch the application. Without the latter flag set the site settings have no effect.Now on the Site/Application level you can specify whether the site should pre load: Set the Preload Enabled flag to true.At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the App is starting up. This can be useful if startup is really slow, so rather than displaying blank screen while the user is fiddling their thumbs you can display a static HTML page instead: <system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer>This allows you to specify a page to execute in a dry run. IIS basically fakes request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page in this case ping.ashx which just returns a simple OK string - ie. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back".  I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline. 
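The initializationPage request and the Application_End ping both assume some trivial ASP.NET endpoint exists in the site. The article doesn't show one; a minimal hypothetical handler (names are mine, not from the article) could look like this:

    // Ping.ashx - minimal warm-up endpoint that just runs the ASP.NET pipeline
    using System.Web;

    public class Ping : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Touching the pipeline is the whole point; the body is irrelevant.
            context.Response.ContentType = "text/plain";
            context.Response.Write("OK");
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }
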
Ideally though you want to make sure that an ASP.NET endpoint is hit either with your default page, or by specify the initializationPage to ensure ASP.NET actually gets hit since it's possible for IIS fire unmanaged requests only for static pages (depending how your pipeline is configured).What about AppDomain Restarts?In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:Files are updated in the BIN folderWeb Deploy to your siteweb.config is changedHard application crashThese operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); }which fires any ASP.NET url to the current site at the very end of the pipeline shutdown which in turn ensures that the site immediately starts back up.Manual Configuration in ApplicationHost.configThe above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form for me, the module registration did not occur and I had to manually add it.<globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules>Most likely you won't need ever need to add this, but if things are not working it's worth to check if the module is actually registered.Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config.<system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost>On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.And that's all you should need. You can still set the web.config settings described above as well.ASP.NET as a Service?In the particular application I'm working on currently, we have a queue manager that runs as standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the life time of the application.In this particular app there are two components that run in the background on their own threads: A scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope and the QueueManager. Here's what this looks like in global.asax:public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }}By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart,OnStop,OnResume etc.). Otherwise the behavior and operation is very similar.In this application ASP.NET serves two purposes: It acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.Registering a Background Object with ASP.NETNotice also the use of the HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); }Implementing IRegisterObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to all clean shutdowns when the AppDomain shuts down.Testing it outI'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup to Web start up - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed, to point at the SignalR startup class. Sweet!It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
Startup feels faster because of the preload.Starting and Stopping the 'Service'Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front monitoring app has a play and stop button for toggling the queue.If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line which would be equivalent to stopping and restarting a service.To start and stop from the command line you can use the IIS appCmd tool. To stop:> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"and to start> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"Note that when you explicitly force the AppPool to stop running either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.What's not to like?There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires be offloaded to its own Web server.Easier to work withBut the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes you end up on a background thread for debugging but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code.SummaryUsing ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
If you have lots of service projects it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • JBoss 7.1.1 + Spring 3.1 + JPA 2 (Hibernate 3.6.8) > Cannot parse beans.xml

    - by Mashrur
    Trying to deploy a JSF 2 app with Spring 3.1 + JPA 2 (Hibernte 3.6.8) into JBoss 7.1.1 AS. The app was working fine on tomcat 7. Now, I have already added and changed some configurations. Added jboss-deployment-structure.xml <jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1"> <deployment> <exclusions> <module name="org.apache.log4j" /> <module name="javax.faces.api" slot="main"/> <module name="com.sun.jsf-impl" slot="main"/> <module name="org.hibernate"/> <module name="org.javassist" /> <module name="javaee.api" /> <module name="org.hibernate.validator" /> <module name="org.jboss.as.web" /> <module name="org.jboss.logging" /> <module name="javax.persistence.api" /> <module name="org.jboss.interceptor" /> </exclusions> </deployment> </jboss-deployment-structure> 2. Inside web.xml added these lines: <persistence-unit-ref> <persistence-unit-ref-name>persistence/persistenceUnit</persistence-unit-ref-name> <persistence-unit-name>persistenceUnit</persistence-unit-name> </persistence-unit-ref> 3. Inside applicationContext.xml, changed bean definition for entityManagerFactory by adding this property <property name="persistenceXmlLocation" value="classpath*:META-INF/persistence.xml"/> Now, do I still need to do something more than these which is obvious to you? Because, while I am trying to deploy it from eclipse indigo SR2, getting this 14:10:32,046 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA 14:10:33,054 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA 14:10:33,200 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final "Brontes" starting 14:10:36,375 INFO [org.xnio] XNIO Version 3.0.3.GA 14:10:36,384 INFO [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http) 14:10:36,432 INFO [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA 14:10:36,462 INFO [org.jboss.remoting] JBoss Remoting version 3.2.3.GA 14:10:36,587 INFO [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers 14:10:36,714 INFO [org.jboss.as.security] (ServerService Thread Pool -- 44) JBAS013101: Activating Security Subsystem 14:10:36,735 INFO [org.jboss.as.naming] (MSC service thread 1-2) JBAS011802: Starting Naming Service 14:10:37,043 INFO [org.jboss.as.mail.extension] (MSC service thread 1-6) JBAS015400: Bound mail session [java:jboss/mail/Default] 14:10:37,208 INFO [org.jboss.as.connector] (MSC service thread 1-7) JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final) 14:10:37,295 INFO [org.jboss.as.security] (MSC service thread 1-7) JBAS013100: Current PicketBox version=4.0.7.Final 14:10:37,740 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3) 14:10:38,792 INFO [org.jboss.ws.common.management.AbstractServerConfig] (MSC service thread 1-3) JBoss Web Services - Stack CXF Server 4.0.2.GA 14:10:38,833 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-2) Starting Coyote HTTP/1.1 on http-localhost-127.0.0.1-8080 14:10:39,534 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-1) JBAS015012: Started FileSystemDeploymentService for directory F:\work\softwares\Application Servers\JBoss\jboss-as-7.1.1.Final\jboss-as-7.1.1.Final\standalone\deployments 14:10:39,618 INFO [org.jboss.as.remoting] (MSC service thread 1-3) JBAS017100: Listening on localhost/127.0.0.1:4447 14:10:39,623 INFO [org.jboss.as.remoting] (MSC service thread 1-2) JBAS017100: Listening on /127.0.0.1:9999 14:10:39,698 
INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment treasury.war 14:10:39,706 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found com.misl.treasury.ui.war in deployment directory. To trigger deployment create a file called com.misl.treasury.ui.war.dodeploy 14:10:40,105 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS] 14:10:40,399 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "com.misl.treasury.ui.war" 14:10:40,405 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015876: Starting deployment of "treasury.war" 14:10:55,283 WARN [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Tomcat6InjectionProvider:org.apache.catalina.util.DefaultAnnotationProcessor' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,289 WARN [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Jetty6InjectionProvider:org.mortbay.jetty.plus.annotation.InjectionCollection' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,428 INFO [org.jboss.as.jpa] (MSC service thread 1-7) JBAS011401: Read persistence.xml for persistenceUnit 14:10:55,843 WARN [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Tomcat6InjectionProvider:org.apache.catalina.util.DefaultAnnotationProcessor' for service type 'com.sun.faces.spi.injectionprovider' 14:10:55,849 WARN [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015893: Encountered invalid class name 'com.sun.faces.vendor.Jetty6InjectionProvider:org.mortbay.jetty.plus.annotation.InjectionCollection' for service type 'com.sun.faces.spi.injectionprovider' 14:10:56,010 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-2) MSC00001: Failed to start service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: org.jboss.msc.service.StartException in service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: Failed to process phase POST_MODULE of deployment "com.misl.treasury.ui.war" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_23] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_23] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_23] Caused by: java.lang.NoClassDefFoundError: javax/servlet/ServletOutputStream at java.lang.Class.getDeclaredConstructors0(Native Method) [rt.jar:1.6.0_23] at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389) [rt.jar:1.6.0_23] at java.lang.Class.getConstructor0(Class.java:2699) [rt.jar:1.6.0_23] at java.lang.Class.getConstructor(Class.java:1657) [rt.jar:1.6.0_23] at org.jboss.as.web.deployment.jsf.JsfManagedBeanProcessor.deploy(JsfManagedBeanProcessor.java:108) at 
org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: java.lang.ClassNotFoundException: javax.servlet.ServletOutputStream from [Module "deployment.com.misl.treasury.ui.war:main" from Service Module Loader] at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:190) at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:468) at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:456) at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:423) at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398) at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:120) ... 11 more 14:10:56,628 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC00001: Failed to start service jboss.deployment.unit."treasury.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."treasury.war".PARSE: Failed to process phase PARSE of deployment "treasury.war" at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_23] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_23] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_23] Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: SAXException parsing vfs:/F:/work/softwares/Application Servers/JBoss/jboss-as-7.1.1.Final/jboss-as-7.1.1.Final/bin/content/treasury.war/WEB-INF/beans.xml at org.jboss.as.weld.deployment.BeansXmlParser.parse(BeansXmlParser.java:100) at org.jboss.as.weld.deployment.processors.BeansXmlProcessor.parseBeansXml(BeansXmlProcessor.java:133) at org.jboss.as.weld.deployment.processors.BeansXmlProcessor.deploy(BeansXmlProcessor.java:97) at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final] ... 5 more Caused by: org.xml.sax.SAXParseException: Premature end of file. 
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:196) at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:175) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:394) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:322) at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:281) at org.apache.xerces.impl.XMLScanner.reportFatalError(XMLScanner.java:1459) at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(XMLDocumentScannerImpl.java:903) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324) at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:845) at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:768) at org.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108) at org.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1196) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:555) at org.apache.xerces.jaxp.SAXParserImpl.parse(SAXParserImpl.java:289) at org.jboss.as.weld.deployment.BeansXmlParser.parse(BeansXmlParser.java:94) ... 8 more 14:10:56,670 INFO [org.jboss.as] (MSC service thread 1-1) JBAS015951: Admin console listening on http://127.0.0.1:9990 14:10:56,672 ERROR [org.jboss.as] (MSC service thread 1-1) JBAS015875: JBoss AS 7.1.1.Final "Brontes" started (with errors) in 25527ms - Started 143 of 222 services (2 services failed or missing dependencies, 76 services are passive or on-demand) 14:10:56,671 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015871: Deploy of deployment "treasury.war" was rolled back with no failure message 14:10:56,679 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015870: Deploy of deployment "com.misl.treasury.ui.war" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE: Failed to process phase POST_MODULE of deployment \"com.misl.treasury.ui.war\""}} 14:10:56,851 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) JBAS015877: Stopped deployment com.misl.treasury.ui.war in 172ms 14:10:57,068 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) JBAS015877: Stopped deployment treasury.war in 395ms 14:10:57,070 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report JBAS014777: Services which failed to start: service jboss.deployment.unit."treasury.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."treasury.war".PARSE: Failed to process phase PARSE of deployment "treasury.war" service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: org.jboss.msc.service.StartException in service jboss.deployment.unit."com.misl.treasury.ui.war".POST_MODULE: Failed to process phase POST_MODULE of deployment "com.misl.treasury.ui.war" 14:10:57,079 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) {"JBAS014653: Composite operation failed and was rolled back. 
Steps that failed:" => {"Operation step-2" => {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"com.misl.treasury.ui.war\".POST_MODULE: Failed to process phase POST_MODULE of deployment \"com.misl.treasury.ui.war\""}}}} 14:10:57,087 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back And by the way, JBOSS_HOME\modules\javax\api\main folder only contains a module.xml, no jars. Tried to add jboss-servlet-api_3.0_spec-1.0.0.Final.jar file in that directory and updated module.xml too. But, it shows a very long trail of stacktraces :) I have even removed beans.xml. No change. I have never tried JBoss before. Any help would be highly appreciated.
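A possible direction, based purely on the two stack traces above (treat it as a sketch, not a confirmed fix): the NoClassDefFoundError for javax/servlet/ServletOutputStream is what you would expect once javaee.api and org.jboss.as.web are excluded in jboss-deployment-structure.xml, because that strips the Servlet API from the deployment's class path; and the "Premature end of file" SAXParseException on treasury.war usually means its WEB-INF/beans.xml is zero bytes - either remove it from that WAR or give it a proper <beans/> root element. A trimmed exclusion list might look like this:
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
    <deployment>
        <exclusions>
            <module name="org.apache.log4j" />
            <module name="javax.faces.api" slot="main" />
            <module name="com.sun.jsf-impl" slot="main" />
            <module name="org.hibernate" />
            <module name="org.javassist" />
            <module name="org.hibernate.validator" />
            <!-- javaee.api and org.jboss.as.web removed from the exclusions: they supply
                 javax.servlet.*; whether the persistence/logging/interceptor exclusions
                 are still needed depends on which jars your WAR bundles -->
        </exclusions>
    </deployment>
</jboss-deployment-structure>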

    Read the article

  • Can the Windows Indexing Service restart an app pool if it doesn't index the Web.Config?

    - by JamesEggers
I am having a slight debate with a colleague of mine on this subject. Scenario: A web site has a sub directory that is indexed using the Windows Indexing Service. There is no web.config in this directory and the indexing service is not indexing the parent directory, which is the site root. Something is causing the app pool to restart at supposedly random times. From what I am finding online, the only time the Windows Indexing Service would cause an app pool restart is if it's indexing the web.config (or machine.config). My colleague is saying that the indexing service is causing the restarts just because it's in a child folder of the root. So... does anyone have any information or experience on this? If the indexing service is indexing a child folder of the root but not the root directory itself, can it cause IIS to restart the site's app pool? EDIT: The sub directory in question is NOT the bin directory. I know that would also cause restarts; however, the bin directory is not the target of the indexing service.
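One way to settle the debate empirically (a sketch only; "MyWebApp" is a placeholder event source that must already exist or be creatable by the app pool identity): log in Global.asax why ASP.NET tore the application down. File-change notifications from something touching the content folders show up here as an app domain recycle, which is a different thing from an IIS app pool recycle.
protected void Application_End(object sender, EventArgs e)
{
    // Records why ASP.NET shut the app domain down (ConfigurationChange,
    // BinDirChangeOrDirectoryRename, HostingEnvironment, ...)
    var reason = System.Web.Hosting.HostingEnvironment.ShutdownReason;
    System.Diagnostics.EventLog.WriteEntry(
        "MyWebApp",
        "ASP.NET app domain shut down, reason: " + reason,
        System.Diagnostics.EventLogEntryType.Information);
}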

    Read the article

  • Java Windows Service with Procrun, StopMethod problem

    - by mcdon
    My Java class contains the following methods: public static void main(String[] argv) //start the service public static void stop() //stop the service I'm using Procrun from Apache Commons Daemon to setup the Java class as a Windows Service. I get an error when I try to stop the service "[618 javajni.c] [error] Static method 'void main(String[])' in Class com/caphyon/service/JavaService not found". The problem is I am trying to call the stop method when the service is shutting down, but it appears to override the StopMethod with 'void main(String[])'. Here are the stop and start parameters I am using when setting up the service: prunsrv.exe //US//JavaService --StartClass=com.caphyon.service.JavaService --StartMode=jvm --StartPath="C:\JavaService" prunsrv.exe //US//JavaService --StopClass=com.caphyon.service.JavaService --StopMode=jvm --StopPath="C:\JavaService" --StopMethod="void stop()" How do I set the StopMethod prunsrv will call when shutting down the service?
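For what it's worth, the Commons Daemon documentation says that in jvm mode the start/stop methods must be static void and take a String[] argument, and that --StopMethod expects just the method name rather than a signature. A sketch of the shape Procrun is looking for (class name kept from the question, body obviously simplified):
public class JavaService {
    private static volatile boolean running = true;

    // jvm mode calls static void main(String[]) unless --StartMethod says otherwise
    public static void main(String[] args) throws InterruptedException {
        running = true;
        while (running) {
            Thread.sleep(1000);   // stand-in for the real service work
        }
    }

    // registered with --StopMethod=stop ; note the String[] parameter
    public static void stop(String[] args) {
        running = false;
    }
}
// prunsrv.exe //US//JavaService --StopClass=com.caphyon.service.JavaService
//             --StopMode=jvm --StopPath="C:\JavaService" --StopMethod=stop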

    Read the article

  • How to design a service that can provide an interface as a JAX-WS web service, via JMS, or as a local method call

    - by kevinegham
Using a typical JEE framework, how do I develop and deploy a service that can be called as a web service (with a WSDL interface), be invoked via JMS messages, or called directly from another service in the same container? Here's some more context: Currently I am responsible for a service (let's call it Service X) with the following properties: Interface definition is a human-readable document kept up-to-date manually. Accepts HTTP form-encoded requests to a single URL. Sends plain old XML responses (no schema). Uses Apache to accept requests + a proprietary application server (not servlet or EJB based) containing all logic, which runs in a separate tier. Makes heavy use of a relational database. Called both by internal applications written in a variety of languages and also by a small number of third parties. I want to (or at least, have been told to!): Switch to a well-known (pref. open source) JEE stack such as JBoss, Glassfish, etc. Split Service X into Service A and Service B so that we can take Service B down for maintenance without affecting Service A. Note that Service B will depend on (i.e. need to make requests to) Service A. Make both services easier for third parties to integrate with by providing at least a WS-I style interface (WSDL + SOAP + XML + HTTP) and probably a JMS interface too. In future we might consider a more lightweight API too (REST + JSON? Google Protocol Buffers?) but that's a nice to have. Additional considerations are: On a smaller deployment, Service A and Service B will likely be running on the same machine and it would seem rather silly for them to use HTTP or a message bus to communicate; better if they could run in the same container and make method calls to each other. Backwards compatibility with the existing ad-hoc Service X interface is not required, and we're not planning on re-using too much of the existing code for the new services. I'm happy with either contract-first (WSDL I guess) or (annotated) code-first development. Apologies if my terminology is a bit hazy - I'm pretty experienced with Java and web programming in general, but am finding it quite hard to get up to speed with all this enterprise / SOA stuff - it seems I have a lot to learn! I'm also not very used to using a framework rather than simply writing code that calls some packages to do things. I've got as far as downloading Glassfish, knocking up a simple WSDL file and using wsimport + a little dummy code to turn that into a WAR file which I've deployed.
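One common shape for this, as a sketch only (ServiceA and getAccount are made-up names, container specifics omitted): keep the business logic in a session bean, let JAX-WS and a message-driven bean be thin front ends, and let Service B call the bean directly when it is co-located.
import javax.ejb.Stateless;
import javax.jws.WebService;

public interface ServiceA {
    String getAccount(String id);
}

@Stateless
@WebService(serviceName = "ServiceA")   // the container publishes the WSDL/SOAP endpoint
public class ServiceABean implements ServiceA {
    public String getAccount(String id) {
        return "account-" + id;          // real logic would hit the database here
    }
}

// Service B, deployed in the same container, injects the local interface and makes a
// plain method call - no HTTP or JMS hop:
//     @EJB ServiceA serviceA;
// A JMS binding would be an @MessageDriven bean that unmarshals the message and
// delegates to serviceA.getAccount(...).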

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this. I will pass in the userid and the BPEL process will call the AS-User Web Service we created in Part 1. This is just a basic test and illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product and Oracle JDeveloper (to build the process). First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I name the individual BPEL process simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default. The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (just kidding): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used RetrieveUser as the name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL. Once you specify the URL you can press the Find existing WSDLs button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you intend to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient). Note: The perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that for these demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported, not connected to the BPEL process yet), you must add an Invoke action to your BPEL Process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this click on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service. You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the caller. You now need to add an Assign node to assign the input to the Web Service. To do this select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to the BPEL process to the relevant input variable on the Web Service call. This is simply drag and drop between the two data structures. In the example, I am passing the input to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target). Almost there. You now need to pass the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user, by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform as it is not just a matter of an Assign action. It is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process.
In this case we want to transform the output from the Web Service call so we want it after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node. In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I now attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space. To make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project as my BPEL process is now complete. As you can see the following happens: We accept input from the client (the userid for the call) in the receiveInput step. We assign that value to the input parameters for the Web Service call in the AssignUser step. We invoke the Web Service call to retrieve the data from the product in the InvokeUser step. We take the output from the InvokeUser step and concatenate the names in the TransformName step. We pass back the data in the replyOutput step. At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL process, but it gives you the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principles.
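Behind the Transform step, JDeveloper simply generates an XSLT mapper file. A rough sketch of what TransformName.xsl ends up containing is below; the namespaces and element names are placeholders for whatever the AS-User WSDL and the simpleBPEL client schema actually define.
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:ns1="http://example.com/AS-User"
                xmlns:client="http://xmlns.oracle.com/simpleBPEL">
  <xsl:template match="/">
    <client:processResponse>
      <client:result>
        <!-- concat(firstName, ' ', lastName) is the mapping built in the screen above -->
        <xsl:value-of select="concat(/ns1:ASUser/ns1:firstName, ' ', /ns1:ASUser/ns1:lastName)"/>
      </client:result>
    </client:processResponse>
  </xsl:template>
</xsl:stylesheet>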

    Read the article

  • BackupExec errors when starting services

    - by blade
    Hi, I've installed backupexec 2010 trial on my server, with an appropriately-privileged AD account, but get errors when starting the required services from the login page: Processing services Start services on server: WIN-HQ7JSCRTTSQ Starting Enterprise Vault Admin Service on WIN-HQ7JSCRTTSQ. The service Enterprise Vault Admin Service is already running on WIN-HQ7JSCRTTSQ. Starting Backup Exec Remote Agent for Windows Systems on WIN-HQ7JSCRTTSQ. The service Backup Exec Remote Agent for Windows Systems is already running on WIN-HQ7JSCRTTSQ. Starting Backup Exec Device & Media Service on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec Device & Media Service on WIN-HQ7JSCRTTSQ. Service-specific error code returned: 0x2000e2d3 (536928979) Starting Backup Exec Server on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec Server on WIN-HQ7JSCRTTSQ. The dependency service or group failed to start. Starting Backup Exec Job Engine on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec Job Engine on WIN-HQ7JSCRTTSQ. The dependency service or group failed to start. Starting Backup Exec Agent Browser on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec Agent Browser on WIN-HQ7JSCRTTSQ. The dependency service or group failed to start. Starting Backup Exec DLO Administration Service on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec DLO Administration Service on WIN-HQ7JSCRTTSQ. Error code returned: Starting Backup Exec DLO Maintenance Service on WIN-HQ7JSCRTTSQ. The service Backup Exec DLO Maintenance Service is already running on WIN-HQ7JSCRTTSQ. Starting Backup Exec Web Service on WIN-HQ7JSCRTTSQ. The service Backup Exec Web Service is already running on WIN-HQ7JSCRTTSQ. Start services on server WIN-HQ7JSCRTTSQ completed. Processing services completed! How can I resolve this?

    Read the article

  • How can I tell what user account is being used by a service to access a network share on a Windows 2008 server?

    - by Mike B
I've got a third-party app/service running on a Windows 2003 SP2 server that is trying to fetch something from a network share on a Windows 2008 box. Both boxes are members of an AD domain. For some reason, the app is complaining about having insufficient permissions to read/write to the store. The app itself doesn't have any special options for acting on the authority of another user account. It just asks for a UNC path. The service is running with a "log on as" setting of Local System account. I'd like to confirm what account it's using when trying to communicate with the network share. Conversely, I'd also like more details on if/why it's being rejected by the Windows 2008 network share. Are there server-side logs on 2008 that could tell me exactly why a connection attempt to a share was rejected?
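A sketch of how you could pin both sides down (service and account names below are placeholders): on the 2003 box confirm the logon account, and remember that a LocalSystem service reaches the network as the computer account (DOMAIN\SERVERNAME$); on the 2008 box turn on share/logon auditing and read the Security log.
rem On the Windows 2003 server - which account does the service log on as?
sc qc "ThirdPartyService"
rem (check the SERVICE_START_NAME line; LocalSystem means DOMAIN\SERVER03$ on the wire)

rem On the Windows 2008 server - audit access to the share:
auditpol /set /subcategory:"File Share" /success:enable /failure:enable
auditpol /set /subcategory:"Logon" /failure:enable
rem Then check the Security event log for event 5140 (a network share object was accessed)
rem and 4625 (logon failed); both include the account name and source address.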

    Read the article

  • What is the best free service to host images and mp3 files?

    - by Edward Tanguay
    I am making an educational social software silverlight application. I would like users to be able to point the application to a URL with text, images, and audio files which they have created. Many users will not have their own website to do this, so we are looking for a free service they can use to upload, and manage their own text/image/audio content. What is the best free service for non-technical users to upload and make available text, images and audio? For instance, sites.google.com allows you to upload pictures and access them via http so that would work, but that is more about making a website. For this purpose we just need the ability to upload files, without the website creation tools.

    Read the article

  • File storage service that allows clients to upload large files to my account?

    - by deceze
    Can anyone recommend an online file storage service which fulfills these requirements? I can create an account I can invite clients to upload files into my account clients do not need to register to be able to upload clients must not be able to see anything but their own files or they must not see any files at all, they get only a dropbox only I can access the uploaded files, everything is non-public service is multi-lingual I just need clients to be able to send me potentially large files in a dead simple manner online, that's all. No registration step to go through, no software to download, no synching or sharing. No setting up of individual folders and permissions for each individual client. No copying and pasting of links (a la Mediafire, Rapidshare etc).

    Read the article

  • Have you successfully installed Active Directory Management Gateway Service on 2008?

    - by ssg31415926
    I've got a pair of 2008 DCs onto which I've been trying to install the Active Directory Management Gateway Service. (I only wanted it on one but when that didn't work I tried another.) Both are failing with: "The update does not apply to your system". They're both Windows Server 2008 Domain Controllers, x64 version. Both have .NET 3.5 SP1 installed on them. One has Service Pack 2 installed and the other has the hotfix from 967574 installed. Both servers were rebooted after the installed of each of the required updates. I have attempted to install Windows6.0-KB968934-x64.msu on both and get the error reported above. According to the docs, my machines meet the requirements, so something must be missing from the requirements. Or there's something odd about these DCs. Has anyone succeeded? If so, did you have to install/remove anything (else) to get it installed?

    Read the article

  • Will restarting the W3SVC Web Publishing Service also restart the app pool in IIS?

    - by Mark Rogers
Background: I'm trying to stop and start IIS on a Windows 7 build box, in order to run acceptance tests. But from what I have read, most of the remote web management features have been disabled by Microsoft in a misguided attempt to sell more Windows Server 2008 licenses. Still, for some reason they didn't disable the least user-friendly ways of controlling IIS remotely, but they are all a total pain. What I can do easily is remotely stop and start the WWW service from an MSBuild task. But I want to be sure that this also restarts the app pool, so that when I copy in new web files the web server there will be serving the most recent version of the website. Question: If I restart the W3SVC Windows service, will that also restart the app pool?
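If the goal is simply "serve the freshly copied files", it may be safer not to rely on the W3SVC restart at all and to recycle the pool explicitly. A sketch (the pool name is a placeholder for whatever the site actually uses):
%windir%\system32\inetsrv\appcmd list apppool
%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"DefaultAppPool"

rem Or bounce the whole stack (WAS + W3SVC), which certainly tears the worker processes down:
iisreset /restart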

    Read the article

  • SMF restarting service whenever there's output?

    - by Phillip Oldham
I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file, but therein lies the problem; the service, on start-up, prints some logging messages to stderr. It seems that SMF is seeing those messages and, believing them to be errors, restarts the service, giving up after a number of tries and leaving the service off. Here's part of the log output: [ Mar 30 14:59:54 Enabled. ] [ Mar 30 14:59:54 Executing start method ("java server.CustomServer"). ] Starting server... [ Mar 30 15:00:04 Method or service exit timed out. Killing contract 107. ] Running the server directly on the command line is fine, and as far as I can see there are no errors being encountered during startup, other than the output. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems, and would be problematic to disable. Is it possible to configure this service to only restart if the service exits?
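From the snippet, the giveaway is "Method or service exit timed out" rather than anything about stderr: the start method runs the JVM in the foreground, never returns, and SMF kills the contract once the start method's timeout_seconds elapses (about 10 seconds here, going by the timestamps). Two sketches of a fix, assuming the FMRI is svc:/application/customserver (substitute your own):
# Option 1: tell SMF the start method *is* the daemon (wait model)
svccfg -s svc:/application/customserver addpg startd framework     # only if the pg is missing
svccfg -s svc:/application/customserver setprop startd/duration = astring: child
svcadm refresh svc:/application/customserver:default
svcadm clear svc:/application/customserver:default

# Option 2: keep the contract model, have the start method put the JVM in the background
# and give it a more generous timeout:
# svccfg -s svc:/application/customserver setprop start/timeout_seconds = count: 120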

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SR), here are a few pointers that come from our best practice discussions. Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you’re an EBS customer. Any information you can supply that helps us understand the situation better, helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR! Be right. Make sure you have the correct severity level. Since you select the initial severity level, it’s easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate. Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner. Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner. Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going the right direction, the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes. Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience. Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we’ll be able to do this with your Service Request too.  

    Read the article

  • VirtualCenter Server service terminated with service-specific error 2 (0x2).

    - by LucD
Hello! This is not a critical case but it is really annoying. I'm running vCenter 4 (Build 208156) and when we reboot the server the VirtualCenter Server service can't start and gives the following error: "The VMware VirtualCenter Server service terminated with service-specific error 2 (0x2)." If I try to start the service it fails again with the same error. What I need to do is stop the IIS Admin service, then start the VirtualCenter Server service and then the IIS Admin service, and everything runs fine until the next reboot. Can someone tell me how to fix this annoying error? Regards
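A workaround sketch, if the root cause really is IIS grabbing a port vCenter needs at boot (which is a guess from the symptoms): make the IIS Admin service wait for the VirtualCenter service (service key name vpxd) instead of stopping it by hand every time. Note that "depend=" replaces the whole dependency list, so re-list whatever "sc qc IISADMIN" currently shows.
sc qc IISADMIN
rem example values only - keep the existing dependencies and append vpxd, separated by "/"
sc config IISADMIN depend= RPCSS/SamSS/vpxd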

    Read the article

  • Error in WCF service - Silverlight client communication.

    - by David
I created a WCF service and I planned to consume this in a Silverlight application. So I created the WCF service in the Website host project. The service is a simple WCF service that only returns a number - something like a Hello World WCF-SL. So after adding a service reference in the Silverlight client project to the Service URI, and calling the service method asynchronously (by using the generated proxy), I get the following exception in the callback method: An error occurred while trying to make a request to URI 'http://localhost:4566/SLService.svc'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details. I only created a HelloWorld WCF service with nothing else but a simple method that returns a dumb number, and it's hosted locally on my machine. Must I have clientaccesspolicy.xml or crossdomain.xml? I access my service locally. Every time I create a new simple/dumb WCF-SL solution, I get this error. I use VS2010 and Silverlight 4. I cannot get a simple/dumb WCF-SL solution working locally. Is there something wrong with the configuration? On another machine in the same network, it does work properly, so I assume something is misconfigured. Any thoughts?
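Worth noting: Silverlight treats a different scheme, host or port as cross-domain, so a service on localhost:4566 called from a page served on another port still needs a policy file. A development-only sketch of clientaccesspolicy.xml, served from the root of the site that hosts the service (http://localhost:4566/clientaccesspolicy.xml):
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- wide open for local development; tighten the domain list before shipping -->
      <allow-from http-request-headers="SOAPAction">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>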

    Read the article

  • Client unable to authenticate when connecting to WCF service

    - by davecoulter
I have a WCF service hosted in a Windows service. The application is an intranet app, and I have programmatically set the bindings on both the service and the client as: NetTcpBinding aBinding = new NetTcpBinding(SecurityMode.Transport); aBinding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows; aBinding.Security.Transport.ProtectionLevel = System.Net.Security.ProtectionLevel.EncryptAndSign; Both the service and client have endpoints configured with SPNs: EndpointAddress = new EndpointAddress(uri, EndpointIdentity.CreateSpnIdentity("Service1")); As far as I know, I have set up the bindings correctly -- and I am usually able to connect to the service just fine. I did, however, run into a case where, on a server running Windows Server 2003 R2 x64 SP2, I get the following exception immediately when the client tries to connect: INNEREXCEPTION -- Exception Message: InvalidCredentialException: Either the target name is incorrect or the server has rejected the client credentials. Stack Trace: at System.Net.Security.NegoState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.Security.NegotiateStream.AuthenticateAsClient(NetworkCredential credential, String targetName, ProtectionLevel requiredProtectionLevel, TokenImpersonationLevel allowedImpersonationLevel) at System.ServiceModel.Channels.WindowsStreamSecurityUpgradeProvider.WindowsStreamSecurityUpgradeInitiator.OnInitiateUpgrade(Stream stream, SecurityMessageProperty& remoteSecurity) I get the exception when I try to connect to the service from another machine in the domain, but if I connect to the service on the same machine running the service it works fine. The hosting service itself is running as a domain user account -- but I have tried running the service as Local System and as Network Service to no avail. I have checked the Local Security Policies for the server and didn't see anything amiss (i.e. 'Access this computer from the network' includes 'Everyone'). Anyone have an idea of what could resolve this? I am wondering if I need to do something in Active Directory with respect to the service's SPN? I have read some about using setspn.exe to register or refresh SPNs, but I haven't needed to do this before. Why would this be working with other configurations but not the one above?
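If it does turn out to be Kerberos/SPN related, the usual move is to register the SPN the endpoints declare against whichever account the Windows service runs as (the computer account if it runs as Local System or Network Service). All names below are placeholders; a sketch:
rem service runs as a domain account:
setspn -A Service1/appserver01.contoso.local CONTOSO\svc-wcf
setspn -A Service1/appserver01 CONTOSO\svc-wcf

rem service runs as Local System / Network Service (SPN goes on the computer account):
rem setspn -A Service1/appserver01.contoso.local APPSERVER01$

setspn -L CONTOSO\svc-wcf      rem list what is registered and check for duplicates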

    Read the article

  • Service Accounts LastLogonTimestamp

    - by Ryan Ries
In an Active Directory domain, if I configure a Windows service on a domain member computer to start with an AD user account (aka "ye olde service account"), and then the service stays running but I don't restart the service or reboot the machine for a year... does the LastLogonTimestamp of the service account's user object continue to update? Edit: If you say "it depends on the service," then use MS SQL Server as an example. I set the MSSQL engine to run as contoso\sql-service. Then I leave it alone for a year.
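A quick way to watch the attribute over time (a sketch; assumes PowerShell 2.0 on a domain-joined box, and sql-service is the sample account name). Keep in mind lastLogonTimestamp is only rewritten when the stored value is roughly 9-14 days stale, so it is a coarse indicator rather than a precise last-logon record.
$searcher = [ADSISearcher]'(sAMAccountName=sql-service)'
$null = $searcher.PropertiesToLoad.Add('lastLogonTimestamp')
$result = $searcher.FindOne()
# the attribute is stored as a Windows file-time integer; convert it to a readable date
[DateTime]::FromFileTimeUtc([int64]$result.Properties['lastlogontimestamp'][0])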

    Read the article

  • How to get a service to listen on port 80 on Windows Server 2003

    - by Miky D
    I've coded a custom windows service that listens on TCP port 80 but when I try to install it on a Windows Server 2003 machine it fails to start because some other service is already listening on that port. So far I've disabled the IIS Admin service and the HTTP SSL service but no luck. When I run netstat -a -n -o | findstr 0.0:80 it gives me the process id 4 as the culprit, but when I look at the running processes that process id points to the "System" process. What can I do to get the System process to stop listening on port 80 and get my service to listen instead? P.S. I should point out that the service runs fine if I install it on my Windows XP or Windows 7 development boxes. Also, I should specify that this has nothing to do with it being a service. I've tried starting a regular application that attempts to bind to port 80 on the Windows Server 2003 with the same outcome - it fails because another application is already bound to that port.
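For context, PID 4 ("System") on port 80 on Server 2003 is almost always the kernel-mode HTTP listener, http.sys, kept open by the HTTP driver service on behalf of things like IIS, the HTTP SSL service, WSUS selfupdate or Reporting Services. A sketch of things to try, with the caveat that anything relying on http.sys stops working:
net stop httpfilter                  rem the "HTTP SSL" service
net stop http /y                     rem stops the HTTP.sys driver and everything registered with it
sc config http start= disabled       rem optional: keep it from re-binding port 80 after a reboot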

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >