Search Results

Search found 3500 results on 140 pages for 'factory defaults'.

  • warning on nginx auto start disable

    - by Nanda
    I removed nginx from startup by running:

        $ sudo update-rc.d -f nginx disable

    I get the below output:

        update-rc.d: using dependency based boot sequencing
        insserv: warning: current start runlevel(s) (empty) of script `nginx' overrides LSB defaults (2 3 4 5).
        insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `nginx' overrides LSB defaults (0 1 6).

    Are these harmless warnings or is there something I should do?

  • Ubuntu 12.04 Automounting ntfs partition

    - by kuzyt
    I've looked everywhere to fix this problem but I can't seem to figure out why it's happening. I have the following /etc/fstab entry to mount an NTFS partition using ntfs-3g:

        UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2

    The volume label for this partition is "MEDIA02". I have had no problems with the fstab mounting. The problem, however, is that it automounts again using the MEDIA02 label. I'm not sure automounting is the right term for this, as it's just an empty directory; deleting this directory and rebooting causes it to appear again. So listing /media I see both MEDIA02 and mediahd02.

        htpc@htpc:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sdf1 during installation
        UUID=ec027544-b0e7-4145-99a4-905543a9781a / ext4 errors=remount-ro,noatime,discard 0 1
        # swap was on /dev/sdf5 during installation
        UUID=1794409e-723f-41ac-9f31-ae059f377613 none swap sw 0 0
        # Added all the lines below this
        tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0
        UUID=0F70-3B06 /media/mediahd01 vfat defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2
        UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2

        htpc@htpc:~$ cat /etc/mtab
        /dev/sdc1 / ext4 rw,noatime,errors=remount-ro,discard 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /tmp tmpfs rw,noatime,mode=1777 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sdc1 /media/usbhd-sdc1 ext4 rw,relatime 0 0
        /dev/sdb1 /media/mediahd02 fuseblk rw,noexec,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0
        /dev/sda5 /media/mediahd01 vfat rw,noexec,nosuid,nodev,uid=1000,gid=1000,dmask=007,fmask=117 0 0
        /dev/sdh1 /media/Windows_7 fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0

    Can someone shed some light as to why it's doing this?

  • application custom stock icons not working in ubuntu unity top panel menu (aka appmenu) ("Menus Have Icons" ON)

    - by giuspen
    I recently noticed that in Ubuntu Unity the top menu of my apps does not show the custom icons I added to the GTK stock, only the basic GTK stock icons. This happens only since the top menu is displayed in the Unity top panel (appmenu) rather than in the application window. In place of the correct custom icons I see "gtk-missing-image". On my apps' toolbars and other menus those icons are displayed properly; the problem is only with the top menu. This happens both with PyGTK 2 (e.g. http://www.giuspen.com/cherrytree/) and with GObject introspection (e.g. http://www.giuspen.com/nautilus-pyextensions/). I use the GTK UI manager after integrating the stock icons this way:

        factory = gtk.IconFactory()
        pixbuf = gtk.gdk.pixbuf_new_from_file(filepath)
        iconset = gtk.IconSet(pixbuf)
        factory.add(stock_name, iconset)
        factory.add_default()

    If anybody solved this problem please help. Cheers, Giuseppe.

  • How do I set the default size of /dev/shm?

    - by Richard
    I'd like to change the default size of /dev/shm in Lubuntu 11.10 (also known as /run/shm now, I guess). It doesn't seem to appear in my fstab:

        # / was on /dev/sdb5 during installation
        UUID=66ac63f0-45fa-4766-9d20-7c56bcd0460d / ext3 noatime,errors=remount-ro 0 1
        # /home was on /dev/sdb7 during installation
        UUID=227f1b29-5d04-4c28-9c9c-ea70b1726434 /home ext3 noatime 0 2
        # swap was on /dev/sdb6 during installation
        #UUID=9e13b7cc-1f75-4b4e-9e79-c0f7368de353 none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0
        tmpfs /tmp tmpfs defaults,noexec,nosuid 0 0
        tmpfs /var/tmp tmpfs defaults,noexec,nosuid 0 0
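
    A hedged sketch of the usual approach (not tested on this exact release): add an explicit tmpfs entry for the shared-memory mount to /etc/fstab and cap it with the size= option. The mount point and size below are assumptions; use /run/shm or /dev/shm as appropriate.

        # hypothetical values - adjust mount point and size to taste
        none /run/shm tmpfs defaults,size=512M 0 0

    After editing, something like mount -o remount /run/shm should apply the new size without a reboot.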

  • ASP.NET Charting Control no longer working with .NET 4

    - by Moose Factory
    I've just upgraded to .NET 4 and my ASP.NET Chart control no longer displays. For .NET 3.5, the HTML produced by the control used to look like this:

        <img id="20_Chart" src="/ChartImg.axd?i=chart_5f6a8fd179a246a5a0f4f44fcd7d5e03_0.png&amp;g=16eb7881335e47dcba16fdfd8339ba1a" alt="" style="height:300px;width:300px;border-width:0px;" />

    and now, for .NET 4, it looks like this (note the change in the source path):

        <img id="20_Chart" src="/Statistics/Summary/ChartImg.axd?i=chart_5f6a8fd179a246a5a0f4f44fcd7d5e03_0.png&amp;g=16eb7881335e47dcba16fdfd8339ba1a" alt="" style="height:300px;width:300px;border-width:0px;" />

    The chart is in an MVC partial view that is in an MVC Area folder called "Statistics" and an MVC Views folder called "Summary" (i.e. "/Areas/Statistics/Views/Summary"), so this is obviously where the change of path is coming from. All I've done is to switch the System.Web.DataVisualization assembly from 3.5 to 4.0. Any help greatly appreciated.
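
    One avenue worth checking, offered as a sketch rather than a verified fix: the .NET 4 chart control resolves its image handler from web.config, and an explicit handler registration is often suggested for path problems like this. The storage directory is a placeholder and the type string is the standard .NET 4 registration, so verify both against your setup.

        <appSettings>
            <!-- assumption: file-based image storage; dir is a placeholder -->
            <add key="ChartImageHandler" value="storage=file;timeout=20;dir=c:\TempImageFiles\;" />
        </appSettings>
        <system.web>
            <httpHandlers>
                <add path="ChartImg.axd" verb="GET,HEAD,POST"
                     type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                     validate="false" />
            </httpHandlers>
        </system.web>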

  • Determine the colours used by the ASP.NET Chart control

    - by Moose Factory
    I'd like to find out which colours are used for a particular palette in the ASP.NET Chart control. I already know there is an enum on the Chart class to set the palette, e.g.

        myChart.Palette = ChartColorPalette.Berry;

    But I'd like to know which colours belong to the palette. Before anyone asks - as I know you will - the reason I need to know the colours is because I want to create my own legend outside of the chart image. I also know that I can set my own colours on the DataPoints for the chart, but I'd rather not have to implement my own palette.
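
    A possible route, sketched without verification: Chart.ApplyPaletteColors() materialises the palette onto the data points, after which each DataPoint.Color can be read to build the external legend. The series name below is a placeholder.

        // assumption: the chart's data is already bound
        myChart.Palette = ChartColorPalette.Berry;
        myChart.ApplyPaletteColors();
        foreach (DataPoint point in myChart.Series["MySeries"].Points)
        {
            Color c = point.Color;  // the palette colour assigned to this point
            // use c to draw the custom legend entry
        }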

  • ASP.NET MVC Default URL View

    - by Moose Factory
    I'm trying to set the Default URL of my MVC application to a view within an area of my application. The area is called "Common", the controller "Home" and the view "Index". I've tried setting the defaultUrl in the forms section of web.config to "~/Common/Home/Index" with no success. I've also tried mapping a new route in global.asax, thus:

        routes.MapRoute(
            "Area",
            "{area}/{controller}/{action}/{id}",
            new { area = "Common", controller = "Home", action = "Index", id = "" }
        );

    Again, to no avail.
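
    A hedged guess at what may be missing: MapRoute on its own does not set the route's area data token, so the view engine may never search /Areas/Common/Views. Setting the token (and the controller namespace) explicitly is one thing to try; the namespace below is hypothetical.

        // "MyApp.Areas.Common.Controllers" is a placeholder namespace
        var route = routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" },
            new[] { "MyApp.Areas.Common.Controllers" }
        );
        route.DataTokens["area"] = "Common";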

  • Is there a shorthand way to denullify a string in C#?

    - by Moose Factory
    Is there a shorthand way to denullify a string in C#? It would be the equivalent of (if 'x' is a string):

        string y = x == null ? "" : x;

    I guess I'm hoping there's some operator that would work something like:

        string y = #x;

    Wishful thinking, huh? The closest I've got so far is an extension method on the string class:

        public static string ToNotNull(this string value)
        {
            return value == null ? "" : value;
        }

    which allows me to do:

        string y = x.ToNotNull();

    Any improvements on that, anyone?
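
    For what it's worth, C# has had a built-in equivalent since C# 2.0: the null-coalescing operator ??, which returns the left operand unless it is null.

        string y = x ?? "";   // y is x when x is non-null, otherwise ""

    string.Empty can be substituted for "" as a matter of taste, and the operator chains, as in a ?? b ?? "".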

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy and it asked me to run apt-get -f install to fix them, which gave this:

        (Reading database ... 77846 files and directories currently installed.)
        Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ...
        Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature is
        disabled in the running kernel. Please upgrade your kernel before or while upgrading udev.
        AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES NOT WORK
        WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by creating the
        /etc/udev/kernel-upgrade file. There is always a safer way to upgrade, do not try this unless
        you understand what you are doing!
        dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack):
        subprocess new pre-installation script returned error exit status 1
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        Errors were encountered while processing:
        /var/cache/apt/archives/udev_151-3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I just ran the VBoxLinuxAdditions-amd64.run script, basically), so I don't know how to disable them. Any ideas? Thanks!

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older versions, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. However, in Chrome 21 and up, it downloads the file instead, and displays a warning at the top saying “Extensions, apps, and user scripts can only be added from the Chrome Web Store”. The “Learn More” link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn’t say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting:

        Enterprise Administrators: You can specify URLs that are allowed to install extensions,
        apps, and themes directly through the ExtensionInstallSources policy.

    So, I ran the following commands, then restarted Chrome and Chrome Canary:

        defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*"

    Sadly, these settings only seem to affect extensions, apps, and themes (as it says in the text), not user scripts. (I’ve filed a bug asking to make this setting affect user scripts as well.) Any ideas on how to install a private user script (that I don’t want to add to the Chrome Web Store) in Chrome 21+?

    Update: The problem was that gist.github.com’s raw URLs redirect to a different domain. So, use these commands instead:

        # Allow installing user scripts via GitHub or Userscripts.org
        defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"

    This works!

  • EC2: How dangerous is it to turn off fsck for EBS volumes?

    - by Janine
    I have been tearing my hair out trying to figure out why my EC2 instances (made from my own custom AMIs) were taking many tries to come up properly. They would fail with the following error:

        fsck.ext3: No such file or directory while trying to open /dev/sdf

    for both of the EBS volumes I was attaching during startup. Finally, I figured out the problem. I had put this in /etc/fstab:

        /dev/sdf /export  ext3 defaults 1 2
        /dev/sdi /export2 ext3 defaults 1 2

    The 2 tells the system to fsck the drives on the way up. Changing this to

        /dev/sdf /export  ext3 defaults 1 0
        /dev/sdi /export2 ext3 defaults 1 0

    avoids the problem completely, but now the volumes are never going to be fsck'd. How much does this matter? Once the instance goes into production it's going to be running pretty much 24/7, so not many fscks would be happening anyway, but still... this just feels like a bad idea. I have not been able to find anyone else even reporting this problem (there are people with the same error message, but different causes). It seems unbelievable that I could be the only person to ever make this mistake, but perhaps I'm just talented that way. :) If there is another solution to the problem I would love to hear it; I have not been able to find one.
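
    A possible middle ground, sketched here without having tried it on EC2: leave the pass field at 0 so boot never blocks, and run an occasional manual check in a maintenance window while the volume is unmounted.

        # device names as in the question; run only while the filesystem is unmounted
        umount /export
        fsck.ext3 -f /dev/sdf
        mount /export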

  • CentOS OpenVZ fails to boot after kernel update

    - by SkechBoy
    After upgrading to the latest OpenVZ kernel, my CentOS server won't boot. When I try to boot the latest kernel, the server is stuck at this point (note that images are taken from virtual KVM):

        http://i.stack.imgur.com/4lusz.jpg

    Then I tried to start the server on some old kernels, and I get this error message:

        kernel panic - not syncing - attempted to kill init

    better shown on this image:

        http://i.stack.imgur.com/2SReF.jpg

    Here is some useful information.

        # fdisk -l
        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT.
        Use GNU Parted.

        Disk /dev/sda: 2995.7 GB, 2995739688960 bytes
        255 heads, 63 sectors/track, 364211 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004c4e4

        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            1         523     4199044+  82  Linux swap / Solaris
        /dev/sda2          524         785     2104515   83  Linux
        /dev/sda3          786      261869  2097157230   83  Linux
        /dev/sda4       261870      364211   822062115   83  Linux

        # /etc/fstab
        proc /proc proc defaults 0 0
        none /dev/pts devpts gid=5,mode=620 0 0
        /dev/sda1 none swap sw 0 0
        /dev/sda2 /boot ext3 defaults 0 0
        /dev/sda3 / ext3 defaults 0 0
        /dev/sda4 /home ext3 defaults 0 0

    and grub config file:

        title OpenVZ (2.6.18-274.18.1.el5.028stab098.1)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-274.18.1.el5.028stab098.1 ro root=/dev/sda3 vga=0x317 selinux=0
            initrd /initrd-2.6.18-274.18.1.el5.028stab098.1.img
        title OpenVZ (2.6.18-274.7.1.el5.028stab095.1)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-274.7.1.el5.028stab095.1 ro root=/dev/sda3 vga=0x317 selinux=0
            initrd /initrd-2.6.18-274.7.1.el5.028stab095.1.img
        title OpenVZ (2.6.18-194.8.1.el5.028stab070.4)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-194.8.1.el5.028stab070.4 ro root=/dev/sda3 vga=0x317
            initrd /initrd-2.6.18-194.8.1.el5.028stab070.4.img

    Any help is greatly appreciated. Thanks.

  • Why is my filesystem being mounted read-only in linux?

    - by Tim
    I am trying to set up a small Linux system based on Gentoo on a VirtualBox machine, as a step towards deploying the same system onto a low-spec single board computer. For some reason, my filesystem is being mounted read-only. In my /etc/fstab, I have:

        /dev/sda1 /        ext3  defaults 0 0
        none      /proc    proc  defaults 0 0
        none      /sys     sysfs defaults 0 0
        none      /dev/shm tmpfs defaults 0 0

    However, once booted, /proc/mounts shows:

        rootfs / rootfs rw 0 0
        /dev/root / ext3 ro,relatime,errors=continue,barrier=0,data=writeback 0 0
        proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,nosuid,relatime,size=10240k,mode=755 0 0
        devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
        none /dev/shm tmpfs rw,relatime 0 0
        usbfs /proc/bus/usb usbfs rw,nosuid,noexec,relatime,devgid=85,devmode=664 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

    (the above may contain errors: there's no practical way to copy and paste)

    The partition at /dev/sda1 is clearly being mounted OK, since I can read all the data, but it's not being mounted as described in fstab. How might I go about diagnosing / resolving this?

    Edit: I can remount with

        mount -o remount,rw /

    and it works as expected, except that /proc/mounts reports /dev/root mounted at / rather than /dev/sda1 as I'd expect. If I try to remount with mount -a I get

        mount: none already mounted or /sys busy
        mount: according to mtab, sysfs is already mounted on /sys

    Edit 2: I resolved the problem with mount -a (the same error was occurring during startup, it turned out) by changing the sysfs and proc lines to

        proc  /proc proc  [...]
        sysfs /sys  sysfs [...]

    Now mount -a doesn't complain, but it doesn't result in a read-write root partition. mount -o remount / does cause the root partition to be remounted, however.

  • Reset rc.d so software starts at boot again

    - by natli
    I ran the following two commands on my VPS box and now it boots without starting any software at all. According to rcconf it's still supposed to start my chosen software (ssh etc.) but it doesn't.

        update-rc.d vz defaults
        update-rc.d vzeventd defaults

    I already tried removing them again with

        update-rc.d -f vz remove
        update-rc.d -f vzeventd remove

    but that didn't change anything. /etc/rc.local also still correctly lists some scripts I want to run at start-up, but they don't seem to be called either. I expect the top two commands to be responsible, but here's everything I did:

        mkdir /var/openvz-dl
        cd /var/openvz-dl
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-devel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-core-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-lib-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzquota/3.1/vzquota-3.1-1.x86_64.rpm
        apt-get install fakeroot alien
        fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm
        dpkg -i vz*.deb ploop*.deb --force-overwrite
        update-rc.d vz defaults
        update-rc.d vzeventd defaults
        reboot

    A huge part of that failed because I was running it on an OpenVZ VPS which has a shared kernel that can't be altered, so I also had to fix dpkg like so (it was moaning about wanting to install vzkernel, with a package not being found):

        rm /var/lib/dpkg/info/vzkernel*
        dpkg-reconfigure vzkernel --force
        dpkg --purge --force-all vzkernel

    But that didn't fix the boot issue either. How do I make my software start at boot again?
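
    A hedged suggestion rather than a verified fix: if the init links were clobbered, re-seeding them service by service sometimes restores boot-time startup. The service name below is only an example; repeat for each service rcconf lists.

        update-rc.d -f ssh remove
        update-rc.d ssh defaults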

  • Assembling Software RAID in Live CD for data recovery

    - by Maletor
    I need help recovering some data that's on my RAID, which is on an LVM on my server running Ubuntu. What happened was I deleted the logical volume that controlled my swap space, which was on a partition on drives sda2, sdb2, sdc2, and sdd2 in RAID1. This foobared my whole system for one reason or another. Booting leaves me with grub rescue and an error saying that it is an unknown filesystem. When I boot to a live CD I can see my RAID arrays and I can even start them up. However, it doesn't appear to mount them anywhere, so I can't see the data. I am in the live CD now and I have done sudo apt-get install mdadm lvm2, so it should be mounting them correctly. I just can't see why it wouldn't. Please, any help is appreciated here. Here is some output. By the way, there are 3 RAIDs: 1) /boot 100mb RAID1, 2) swap 10gb RAID1, 3) root 990GB RAID5.

        ubuntu@ubuntu:~$ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        aufs                  124M  101M   18M  86% /
        none                  2.0G  324K  2.0G   1% /dev
        /dev/sde1             2.0G  826M  1.2G  42% /cdrom
        /dev/loop0            667M  667M     0 100% /rofs
        none                  2.0G  164K  2.0G   1% /dev/shm
        tmpfs                 2.0G   28K  2.0G   1% /tmp
        none                  2.0G   92K  2.0G   1% /var/run
        none                  2.0G     0  2.0G   0% /var/lock
        none                  2.0G     0  2.0G   0% /lib/init/rw
        /dev/md1               91M   73M   15M  84% /media/5ac3dbf1-a6c5-409c-96ae-edc6e27992c7

        ubuntu@ubuntu:~$ cat /etc/fstab
        aufs / aufs rw 0 0
        tmpfs /tmp tmpfs nosuid,nodev 0 0
        /dev/sda2 swap swap defaults 0 0
        /dev/sdb2 swap swap defaults 0 0
        /dev/sdc2 swap swap defaults 0 0
        /dev/sdd2 swap swap defaults 0 0
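
    A recovery sketch, with the caveat that the volume group and logical volume names below are placeholders to be confirmed with the scan commands first: installing mdadm and lvm2 only provides the tools; the arrays still need to be assembled, the LVM volumes activated, and the root volume mounted by hand.

        sudo mdadm --assemble --scan               # bring up the md arrays
        sudo vgscan                                # look for volume groups on top of them
        sudo vgchange -ay                          # activate the logical volumes
        sudo lvs                                   # identify the root LV
        sudo mkdir -p /mnt/raid
        sudo mount /dev/mapper/myvg-root /mnt/raid # hypothetical VG/LV name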

  • Trouble installing Matlab

    - by user94143
    So whenever I try to install Matlab, the first image appears, but then goes away and that's the end of it. Here's what I'm doing:

        user@host$> cd ~/mathworks_downloads
        user@host$> unzip matlab_R2012a_student_glnx86_installer.zip
        user@host$> ./install

    The Matlab logo and image pop up here for a few seconds before going away. This is the output I receive:

        Could not find JRE for glnxa64. Trying glnx86.
        Preparing installation files ...
        Installing ...
        Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class java.awt.Component
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Unknown Source)
            at $Proxy11.<clinit>(Unknown Source)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
            at java.lang.reflect.Constructor.newInstance(Unknown Source)
            at java.lang.reflect.Proxy.newProxyInstance(Unknown Source)
            at com.google.inject.internal.ConstructionContext.createProxy(ConstructionContext.java:81)
            at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:70)
            at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111)
            at com.google.inject.FactoryProxy.get(FactoryProxy.java:56)
            at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
            at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811)
            at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
            at com.google.inject.Scopes$1$1.get(Scopes.java:54)
            at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48)
            at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758)
            at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811)
            at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754)
            at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89)
            at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89)
            at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:95)
            at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48)
            at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
            at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811)
            at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
            at com.google.inject.Scopes$1$1.get(Scopes.java:54)
            at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48)
            at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
            at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
            at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84)
            at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111)
            at com.google.inject.FactoryProxy.get(FactoryProxy.java:56)
            at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
            at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
            at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84)
            at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111)
            at com.google.inject.FactoryProxy.get(FactoryProxy.java:56)
            at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
            at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
            at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84)
            at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111)
            at com.google.inject.FactoryProxy.get(FactoryProxy.java:56)
            at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
            at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811)
            at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
            at com.google.inject.Scopes$1$1.get(Scopes.java:54)
            at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48)
            at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758)
            at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811)
            at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754)
            at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89)
            at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89)
            at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:95)
            at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48)
            at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
            at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811)
            at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
            at com.google.inject.Scopes$1$1.get(Scopes.java:54)
            at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48)
            at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
            at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
            at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84)
            at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111)
            at com.google.inject.FactoryProxy.get(FactoryProxy.java:56)
            at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
            at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811)
            at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
            at com.google.inject.Scopes$1$1.get(Scopes.java:54)
            at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48)
            at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758)
            at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:804)
            at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754)
            at com.google.inject.InjectorImpl.getInstance(InjectorImpl.java:793)
            at com.mathworks.wizard.WizardLauncher.startWizard(WizardLauncher.java:160)
            at com.mathworks.wizard.WizardLauncher.start(WizardLauncher.java:75)
            at com.mathworks.wizard.AbstractLauncher.launch(AbstractLauncher.java:27)
            at com.mathworks.wizard.AbstractLauncher.launchStandalone(AbstractLauncher.java:18)
            at com.mathworks.studentinstaller.Launcher.main(Launcher.java:23)
        Finished

    After that, nothing happens, although apparently a Matlab window is supposed to open. Is this a problem caused by the fact that I'm running OpenJDK 7 instead of JRE 7u7?
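
    Possibly relevant, though unverified: MathWorks installers can be sensitive to the JRE they pick up, and the installer accepts a -javadir switch pointing it at an alternative JRE. The path below is hypothetical.

        # hypothetical path - point the installer at a non-OpenJDK JRE
        ./install -javadir /usr/lib/jvm/java-7-oracle/jre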

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and communicates straight with these services via SOAP and returns DataTables. Now I am designing a new modern web application in a modern style: an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated, it's going to take some time to rewrite and replace it.

    Now to the problem: I'm trying to make it easy in the near future to replace the WCF services with some other data storage, e.g. another endpoint, database or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer has straight communication with the data storage (WCF service, database, or whatever) and the service layer then uses the repository (Dependency Injection) to get the data. It doesn't care where it gets the data from. Later on I can be in control and structure the data returned from the data storage (DataTable to POCO) and be able to test the logic in the service layer with some mock repository (using Dependency Injection).

    Below is some code to explain where I'm going with this. But my question is, does this all make sense? Am I making this overly complicated, and could it be simplified in any way? Does this flexibility make it too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on, e.g. an ORM, and be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated.

    Update 20/08/14: I created a repository factory, so services would all share repositories. Now it's easy to mock a repository, add it to the factory and create a provider using that factory. Any advice is much appreciated. I want to know if I'm making things more complicated than they should be. So it looks like this:

    1. Repository factory

        public class RepositoryFactory
        {
            private Dictionary<Type, IServiceRepository> repositories;

            public RepositoryFactory()
            {
                this.repositories = new Dictionary<Type, IServiceRepository>();
            }

            public void AddRepository<T>(IServiceRepository repo) where T : class
            {
                if (this.repositories.ContainsKey(typeof(T)))
                {
                    this.repositories.Remove(typeof(T));
                }
                this.repositories.Add(typeof(T), repo);
            }

            public dynamic GetRepository<T>()
            {
                if (this.repositories.ContainsKey(typeof(T)))
                {
                    return this.repositories[typeof(T)];
                }
                throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
            }
        }

    I'm not very fond of dynamic but I don't know how to retrieve that repository otherwise.

    2. Repository and service

        // Service repository interface
        // All repository interfaces extend this
        public interface IServiceRepository { }

        // Invoice repository interface
        // Makes it easy to mock the repository later on
        public interface IInvoiceServiceRepository : IServiceRepository
        {
            List<Invoice> GetInvoices();
        }

        // Invoice repository
        // Connects to some data storage to retrieve invoices
        public class InvoiceServiceRepository : IInvoiceServiceRepository
        {
            public List<Invoice> GetInvoices()
            {
                // Get the invoices from somewhere
                // This could be a WCF, a database, or whatever
                using (InvoiceServiceClient proxy = new InvoiceServiceClient())
                {
                    return proxy.GetInvoices();
                }
            }
        }

        // Invoice service
        // Service that handles talking to a real or a mock repository
        public class InvoiceService
        {
            // Repository factory
            RepositoryFactory repoFactory;

            // Default constructor
            // Default connects to the real repository
            public InvoiceService(RepositoryFactory repo)
            {
                repoFactory = repo;
            }

            // Service function that gets all invoices from some repository (mock or real)
            public List<Invoice> GetInvoices()
            {
                // Query the repository
                return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices();
            }
        }
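
    For reference, a short usage sketch built from the classes above, showing how the factory swaps the real repository for a test double; MockInvoiceRepository is a hypothetical IInvoiceServiceRepository stub.

        // production wiring
        var factory = new RepositoryFactory();
        factory.AddRepository<IInvoiceServiceRepository>(new InvoiceServiceRepository());
        var service = new InvoiceService(factory);
        List<Invoice> invoices = service.GetInvoices();

        // unit-test wiring with the hypothetical stub
        var testFactory = new RepositoryFactory();
        testFactory.AddRepository<IInvoiceServiceRepository>(new MockInvoiceRepository());
        var serviceUnderTest = new InvoiceService(testFactory);

    Incidentally, constraining the getter (public T GetRepository<T>() where T : class, IServiceRepository, returning a cast) would avoid dynamic entirely.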

  • Create a WCF REST Client Proxy Programmatically (in C#)

    - by Tawani
    I am trying to create a REST client proxy programmatically in C# using the code below, but I keep getting a CommunicationException error. Am I missing something?

        public static class WebProxyFactory
        {
            public static T Create<T>(string url) where T : class
            {
                ServicePointManager.Expect100Continue = false;
                WebHttpBinding binding = new WebHttpBinding();
                binding.MaxReceivedMessageSize = 1000000;
                WebChannelFactory<T> factory = new WebChannelFactory<T>(binding, new Uri(url));
                T proxy = factory.CreateChannel();
                return proxy;
            }

            public static T Create<T>(string url, string userName, string password) where T : class
            {
                ServicePointManager.Expect100Continue = false;
                WebHttpBinding binding = new WebHttpBinding();
                binding.Security.Mode = WebHttpSecurityMode.TransportCredentialOnly;
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
                binding.UseDefaultWebProxy = false;
                binding.MaxReceivedMessageSize = 1000000;
                WebChannelFactory<T> factory = new WebChannelFactory<T>(binding, new Uri(url));
                ClientCredentials credentials = factory.Credentials;
                credentials.UserName.UserName = userName;
                credentials.UserName.Password = password;
                T proxy = factory.CreateChannel();
                return proxy;
            }
        }

    So that I can use it as follows:

        IMyRestService proxy = WebProxyFactory.Create<IMyRestService>(url, usr, pwd);
        var result = proxy.GetSomthing(); // Fails right here

  • Can not call web service with basic authentication using WCF

    - by RexM
    I've been given a web service written in Java that I'm not able to make any changes to. It requires the user to authenticate with basic authentication to access any of the methods. The suggested way to interact with this service in .NET is by using Visual Studio 2005 with WSE 3.0 installed. This is an issue, since the project is already using Visual Studio 2008 (targeting .NET 2.0). I could do it in VS2005, however I do not want to tie the project to VS2005, or do it by creating an assembly in VS2005 and including that in the VS2008 solution (which basically ties the project to 2005 anyway for any future changes to the assembly). I think that either of these options would make things complicated for new developers by forcing them to install WSE 3.0, and would keep the project from being able to use 2008 and features in .NET 3.5 in the future... i.e., I truly believe using WCF is the way to go.

    I've been looking into using WCF for this, however I'm unsure how to get the WCF client to understand that it needs to send the authentication headers along with each request. I'm getting 401 errors when I attempt to do anything with the web service. This is what my code looks like:

        WebHttpBinding webBinding = new WebHttpBinding();
        ChannelFactory<MyService> factory = new ChannelFactory<MyService>(webBinding,
            new EndpointAddress("http://127.0.0.1:80/Service/Service/"));
        factory.Endpoint.Behaviors.Add(new WebHttpBehavior());
        factory.Credentials.UserName.UserName = "username";
        factory.Credentials.UserName.Password = "password";
        MyService proxy = factory.CreateChannel();
        proxy.postSubmission(_postSubmission);

    This will run and throw the following exception:

        The HTTP request is unauthorized with client authentication scheme 'Anonymous'.
        The authentication header received from the server was 'Basic realm=realm'.

    And this has an inner exception of:

        The remote server returned an error: (401) Unauthorized.

    Any thoughts about what might be causing this issue would be greatly appreciated.
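
    A hedged observation rather than a confirmed fix: the binding above is left at its default security mode, so WCF sends no credentials at all, which matches the 'Anonymous' scheme named in the exception. Telling the binding to use basic authentication over plain HTTP may be all that is missing.

        WebHttpBinding webBinding = new WebHttpBinding();
        // send the basic-auth credentials over plain HTTP
        webBinding.Security.Mode = WebHttpSecurityMode.TransportCredentialOnly;
        webBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;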

  • Javassist: how to create proxy of proxy?

    - by Bozho
    I'm creating proxies with the javassist ProxyFactory. When creating a single proxy all works fine. However, when I pass a proxied class to the proxying mechanism, it fails with:

        javassist.bytecode.DuplicateMemberException: duplicate method: setHandler in
        com.mypackage.Bean_$$_javassist_0_$$_javassist_1

    I'm creating the proxies with this:

        public Object createProxiedInstance(Object originalInstance) throws Exception {
            Class<?> originalClass = originalInstance.getClass();
            ProxyFactory factory = new ProxyFactory();
            factory.setSuperclass(originalClass);
            factory.setHandler(new MethodHandler() {..});
            Class<?> proxyClass = factory.createClass();
            return proxyClass.newInstance();
        }

    So, how do I create proxies of proxies?

    Update: The actual problem is that each proxy implements the ProxyObject interface, which defines the setHandler(..) method. So the second proxy is trying to redefine the method, instead of overriding it in the subclass.
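
    Building only on what the update says, one sketch that may sidestep the duplicate member: when the incoming class is already a javassist proxy, proxy its real superclass instead. ProxyFactory.isProxyClass() exists for this kind of check; whether the first proxy's handler behaviour survives is untested here.

        Class<?> originalClass = originalInstance.getClass();
        // if the instance is itself a javassist proxy, unwrap to the real class
        if (ProxyFactory.isProxyClass(originalClass)) {
            originalClass = originalClass.getSuperclass();
        }
        factory.setSuperclass(originalClass);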

  • Should I refactor this code?

    - by user156814
    The code is for a view debate page. It is supposed to determine whether or not to show an add-reply form to the viewing user. If the user is logged in, and the user is not the creator of the debate, then check if the user has already replied to the debate. If the user has not already replied, show the form. Otherwise, check if the user wants to edit their already existing reply by looking in the URL for the reply id. If any of these tests don't pass, I save the reason as an int and pass that to a switch statement in the view. The logic seems easy enough, but my code seems a little sloppy. Here's the code (using Kohana v2.3.4):

        public function view($id = 0)
        {
            $debate = ORM::factory('debate')->with('user')->with('category')->find($id);

            if ($debate->loaded == FALSE)
            {
                url::redirect();
            }

            // series of tests to show an add reply form
            if ($this->logged_in)
            {
                // is the viewer the creator?
                if ($this->user->id != $debate->user->id)
                {
                    // has the user already replied?
                    if (ORM::factory('reply')
                        ->where(array('debate_id' => $id, 'user_id' => $this->user->id))
                        ->count_all() == 0)
                    {
                        $form = $errors = array
                        (
                            'body' => '',
                            'choice_id' => '',
                            'add' => ''
                        );

                        if ($post = $this->input->post())
                        {
                            $reply = ORM::factory('reply');

                            // validate and insert the reply
                            if ($reply->add($post, TRUE))
                            {
                                url::redirect(url::current());
                            }

                            $form = arr::overwrite($form, $post->as_array());
                            $errors = arr::overwrite($errors, $post->errors('reply_errors'));
                        }
                    }
                    // editing a reply?
                    else if (($rid = (int) $this->input->get('edit'))
                        AND ($reply = ORM::factory('reply')
                            ->where(array('debate_id' => $id, 'user_id' => $this->user->id))
                            ->find($rid)))
                    {
                        $form = $errors = array
                        (
                            'body' => '',
                            'choice_id' => '',
                            'add' => ''
                        );

                        // autocomplete the form
                        $form = arr::overwrite($form, $reply->as_array());

                        if ($post = $this->input->post())
                        {
                            // validate and insert the reply
                            if ($reply->edit($post, TRUE))
                            {
                                url::redirect(url::current());
                            }

                            $form = arr::overwrite($form, $post->as_array());
                            $errors = arr::overwrite($errors, $post->errors('reply_errors'));
                        }
                    }
                    else
                    {
                        // user already replied
                        $reason = 3;
                    }
                }
                else
                {
                    // user started the debate
                    $reason = 2;
                }
            }
            else
            {
                // user is not logged in.
                $reason = 1;
            }

            $limits = Kohana::config('app/debate.limits');
            $page = (int) $this->input->get('page', 1);
            $offset = ($page > 0) ? ($page - 1) * $limits['replies'] : 0;
            $replies = ORM::factory('reply')->with('user')->with('choice')->where('replies.debate_id', $id);

            $this->template->title = $debate->topic;
            $this->template->debate = $debate;
            $this->template->body = View::factory('debate/view')
                ->set('debate', $debate)
                ->set('replies', $replies->find_all($limits['replies'], $offset))
                ->set('pagination', Pagination::factory(array
                    (
                        'style' => 'digg',
                        'items_per_page' => $limits['replies'],
                        'query_string' => 'page',
                        'auto_hide' => TRUE,
                        'total_items' => $total = $replies->count_last_query()
                    ))
                )
                ->set('total', $total);

            // are we showing the add reply form?
            if (isset($form, $errors))
            {
                $this->template->body->add_reply_form = View::factory('reply/add_reply_form')
                    ->set('debate', $debate)
                    ->set('form', $form)
                    ->set('errors', $errors);
            }
            else
            {
                $this->template->body->reason = $reason;
            }
        }

    Here's the view; there's some logic in here that determines what message to show the user.

        <!-- Add Reply Form -->
        <?php if (isset($add_reply_form)): ?>
            <?php echo $add_reply_form; ?>
        <?php else: ?>
            <?php
            switch ($reason)
            {
                case 1:
                    // not logged in, show a message
                    $message = 'Add your ' . html::anchor('login?url=' . url::current(TRUE), '<b>vote</b>') . ' to this discussion';
                    break;
                case 2:
                    // started the debate. dont show a message for that.
                    $message = NULL;
                    break;
                case 3:
                    // already replied, show a message
                    $message = 'You have already replied to this debate';
                    break;
                default:
                    // unknown reason. dont show a message
                    $message = NULL;
                    break;
            }
            ?>
            <?php echo app::show_message($message, 'h2'); ?>
        <?php endif; ?>
        <!-- End Add Reply Form -->

    Should I refactor the add-reply logic into another function or something? It all works, it just seems real sloppy. Thanks.

  • Remote JMS connection still using localhost

    - by James
    I have created a JMS connection factory on a remote GlassFish server and want to use that server from a Java client app on my local machine. I have the following configuration to get the context and connection factory:

        Properties env = new Properties();
        env.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
        env.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
        env.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
        env.setProperty("org.omg.CORBA.ORBInitialHost", JMS_SERVER_NAME);
        env.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
        initialContext = new InitialContext(env);

        TopicConnectionFactory topicConnectionFactory =
            (TopicConnectionFactory) initialContext.lookup("jms/MyConnectionFactory");
        topicConnection = topicConnectionFactory.createTopicConnection();
        topicConnection.start();

    This seems to work, and when I delete the ConnectionFactory from the GlassFish server I get an exception indicating that it can't find jms/MyConnectionFactory, as expected. However, when I subsequently use my topicConnection to get a topic, it tries to connect to localhost:7676 (this fails, as I am not running GlassFish locally). If I dynamically create a topic:

        TopicSession pubSession = topicConnection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = pubSession.createTopic(topicName);
        TopicPublisher publisher = pubSession.createPublisher(topic);
        Message mapMessage = pubSession.createTextMessage(message);
        publisher.publish(mapMessage);

    and the GlassFish server is not running locally, I get the same connection refused. However, if I start my local GlassFish server, the topics are created locally and I can see them in the GlassFish admin console. In case you ask: I do not have jms/MyConnectionFactory on my local GlassFish instance; it is only available on the remote server. I can't see what I am doing wrong here and why it is trying to use localhost at all. Any ideas? Cheers, James
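
    A hedged pointer, not a verified diagnosis: GlassFish JMS connection factories carry an AddressList property that tells the MQ client which broker to contact, and it tends to fall back to the local host when left empty. Setting it on jms/MyConnectionFactory (on the remote server) to the broker's real host may stop the localhost:7676 fallback; the host name below is a placeholder.

        # hypothetical value for the connection factory's AddressList property
        AddressList = mq://remote-host-name:7676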

  • Common JNDI resources in Tomcat

    - by Lehane
    Hi, I'm running a couple of servlet applications in Tomcat (5.5). All of the servlets use a common factory resource that is shared out using JNDI. At the moment, I can get everything working by including the factory resource as a GlobalNamingResource in the /conf/server.xml file, and then having each servlet's META-INF/context.xml file include a ResourceLink to the resource. Snippets from the XML files are included below. NOTE: I'm not that familiar with Tomcat, so I'm not saying that this is a good configuration!!!

    However, I now want to be able to install these servlets into multiple Tomcat instances automatically using an RPM. The RPM will firstly copy the WARs to the webapps directory, and the jars for the factory into the common/lib directory (which is fine). But it will also need to make sure that the factory resource is included as a resource for all of the servlets. What is the best way to add the resource globally? I'm not too keen on writing a script that goes into the server.xml file and adds in the resource that way. Is there any way for me to add in multiple server.xml files, so that I can write a new server-app.xml file and it will concatenate my settings to server.xml? Or, better still, to add this JNDI resource to all the servlets without using server.xml at all?

    p.s. Restarting the server will not be an issue, so I don't mind if the changes don't get picked up automatically. Thanks.

    Snippet from server.xml:

        <!-- Global JNDI resources -->
        <GlobalNamingResources>
            <Resource name="bean/MyFactory"
                      auth="Container"
                      type="com.somewhere.Connection"
                      factory="com.somewhere.MyFactory"/>
        </GlobalNamingResources>

    The entire servlet's META-INF/context.xml file:

        <?xml version="1.0" encoding="UTF-8"?>
        <Context>
            <ResourceLink global="bean/MyFactory"
                          name="bean/MyFactory"
                          type="com.somewhere.MyFactory"/>
        </Context>
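
    One possibility, offered as an untested sketch: Tomcat also accepts <Resource> elements directly in a webapp's own META-INF/context.xml, which would let the RPM ship the declaration inside each WAR and bypass server.xml entirely. The trade-off is that each webapp then gets its own instance rather than one shared globally.

        <?xml version="1.0" encoding="UTF-8"?>
        <Context>
            <!-- declared per-webapp instead of linking to a global resource -->
            <Resource name="bean/MyFactory"
                      auth="Container"
                      type="com.somewhere.Connection"
                      factory="com.somewhere.MyFactory"/>
        </Context>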

  • Spring MVC configuration problems

    - by Smek
    I have some problems with configuring Spring MVC. I made a Maven multi-module project with the following modules:

        /api
        /domain
        /repositories
        /webapp

    I'd like to share the domain and the repositories between the api and the webapp (both web projects). First I want to configure the webapp to use the repositories module, so I added the dependencies in the XML file like this:

        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>domain</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>repositories</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>

    And my controller in the webapp module looks like this:

        package com.mywebapp.webapp;

        import com.mywebapp.domain.Person;
        import com.mywebapp.repositories.services.PersonService;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.context.annotation.ComponentScan;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.stereotype.Controller;
        import org.springframework.ui.ModelMap;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;

        @Controller
        @RequestMapping("/")
        @Configuration
        @ComponentScan("com.mywebapp.repositories")
        public class PersonController {

            @Autowired
            PersonService personservice;

            @RequestMapping(method = RequestMethod.GET)
            public String printWelcome(ModelMap model) {
                Person p = new Person();
                p.age = 23;
                p.firstName = "John";
                p.lastName = "Doe";
                personservice.createNewPerson(p);
                model.addAttribute("message", "Hello world!");
                return "index";
            }
        }

    In my webapp module I try to load configuration files in my web.xml like this:

        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:/META-INF/persistence-context.xml, classpath:/META-INF/service-context.xml</param-value>
        </context-param>

    These files cannot be found, so I get the following error:

        org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document
        from class path resource [META-INF/persistence-context.xml]; nested exception is
        java.io.FileNotFoundException: class path resource [META-INF/persistence-context.xml] cannot be
        opened because it does not exist

    These files are in the repositories module, so my first question is: how can I make Spring find these files? I also have trouble autowiring the PersonService to my controller class. Did I forget to configure something in my XML? Here is the error message:

        [INFO] [talledLocalContainer] SEVERE: Exception sending context initialized event to listener
        instance of class org.springframework.web.context.ContextLoaderListener
        [INFO] [talledLocalContainer] org.springframework.beans.factory.BeanCreationException: Error
        creating bean with name 'personServiceImpl': Injection of autowired dependencies failed; nested
        exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field:
        private com.mywebapp.repositories.repository.PersonRepository
        com.mywebapp.repositories.services.PersonServiceImpl.personRepository; nested exception is
        org.springframework.beans.factory.NoSuchBeanDefinitionException: No matching bean of type
        [com.mywebapp.repositories.repository.PersonRepository] found for dependency: expected at least
        1 bean which qualifies as autowire candidate for this dependency. Dependency annotations:
        {@org.springframework.beans.factory.annotation.Autowired(required=true)}

    PersonServiceImpl.java:

        package com.mywebapp.repositories.services;

        import com.mywebapp.domain.Person;
        import com.mywebapp.repositories.repository.PersonRepository;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.data.mongodb.core.MongoTemplate;
        import org.springframework.stereotype.Service;

        @Service
        public class PersonServiceImpl implements PersonService {

            @Autowired
            public PersonRepository personRepository;

            @Autowired
            public MongoTemplate personTemplate;

            @Override
            public Person createNewPerson(Person person) {
                return personRepository.save(person);
            }
        }

    PersonService.java:

        package com.mywebapp.repositories.services;

        import com.mywebapp.domain.Person;

        public interface PersonService {
            Person createNewPerson(Person person);
        }

    PersonRepository.java:

        package com.mywebapp.repositories.repository;

        import com.mywebapp.domain.Person;
        import org.springframework.data.mongodb.repository.MongoRepository;
        import org.springframework.stereotype.Repository;
        import java.math.BigInteger;

        @Repository
        public interface PersonRepository extends MongoRepository<Person, BigInteger> {
        }

    persistence-context.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:mongo="http://www.springframework.org/schema/data/mongo"
               xsi:schemaLocation="http://www.springframework.org/schema/context
                                   http://www.springframework.org/schema/context/spring-context-3.0.xsd
                                   http://www.springframework.org/schema/data/mongo
                                   http://www.springframework.org/schema/data/mongo/spring-mongo-1.0.xsd
                                   http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

            <context:property-placeholder location="classpath:mongo.properties"/>

            <mongo:mongo host="${mongo.host}" port="${mongo.port}" id="mongo">
                <mongo:options connections-per-host="${mongo.connectionsPerHost}"
                               threads-allowed-to-block-for-connection-multiplier="${mongo.threadsAllowedToBlockForConnectionMultiplier}"
                               connect-timeout="${mongo.connectTimeout}"
                               max-wait-time="${mongo.maxWaitTime}"
                               auto-connect-retry="${mongo.autoConnectRetry}"
                               socket-keep-alive="${mongo.socketKeepAlive}"
                               socket-timeout="${mongo.socketTimeout}"
                               slave-ok="${mongo.slaveOk}"
                               write-number="1"
                               write-timeout="0"
                               write-fsync="true"/>
            </mongo:mongo>

            <mongo:db-factory dbname="person" mongo-ref="mongo" id="mongoDbFactory"/>

            <bean id="personTemplate" name="personTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
                <constructor-arg name="mongoDbFactory" ref="mongoDbFactory"/>
            </bean>

            <mongo:repositories base-package="com.mywebapp.repositories.repository" mongo-template-ref="personTemplate">
                <mongo:repository id="personRepository"
                                  repository-impl-postfix="PersonRepository"
                                  mongo-template-ref="personTemplate"
                                  create-query-indexes="true"/>
            </mongo:repositories>
        </beans>

    Thanks
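
    On the first question only, a hedged pointer: Spring's classpath: prefix loads from a single location, while classpath*: searches every jar on the classpath, which is what files living in the repositories jar need. This also assumes the XML files sit under src/main/resources/META-INF in the repositories module so they are packaged into its jar.

        <context-param>
            <param-name>contextConfigLocation</param-name>
            <!-- classpath*: scans all jars on the classpath -->
            <param-value>classpath*:META-INF/persistence-context.xml, classpath*:META-INF/service-context.xml</param-value>
        </context-param>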

  • Configure WebLogic MDB to listen to Foreign AMQ Server

    - by eliel.lobo
    I'm trying to create an MDB (EJB 3.0) on WebLogic 10.3.5 to listen to a queue on an external AMQ server, but after much work and combining tutorials I get the following error when deploying on WebLogic:

        [EJB:015027]The Message-Driven EJB is transactional but JMS connection factory referenced by
        the JNDI name: ActiveMQXAConnectionFactory is not a JMS XA connection factory.

    Here is a brief summary of the work I have done: I added the corresponding libraries to my WLS classpath (following this tutorial: http://amadei.com.br/blog/index.php/connecting-weblogic-and-activemq) and I created the corresponding JMS modules as indicated in the tutorial. As connection factory I used ActiveMQConnectionFactory initially and ActiveMQXAConnectionFactory later. I also ignore the jms. notation and just put plain names, such as testQueue.

    Then I created a simple MDB with the following structure. I explicitly defined the "connectionFactoryJndiName" property because otherwise it assumes a WebLogic connection factory, which is not found and then raises an error.

        @MessageDriven(
            activationConfig = {
                @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
                @ActivationConfigProperty(propertyName = "destination", propertyValue = "testQueue"),
                @ActivationConfigProperty(propertyName = "connectionFactoryJndiName", propertyValue = "ActiveMQXAConnectionFactory")
            },
            mappedName = "testQueue")
        public class ROMELReceiver implements MessageListener {

            /**
             * Default constructor.
             */
            public ROMELReceiver() {
                // TODO Auto-generated constructor stub
            }

            /**
             * @see MessageListener#onMessage(Message)
             */
            public void onMessage(Message message) {
                System.out.println("Message received");
            }
        }

    At this point I'm stuck with the error mentioned above. Even though I use ActiveMQXAConnectionFactory instead of simply ActiveMQConnectionFactory, the JNDI resources tree in the WebLogic server shows org.apache.activemq.ActiveMQConnectionFactory as the class for my configured connection factory. Am I missing something? Or is this just a completely wrong way to connect WebLogic with AMQ? Thanks in advance.
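
    If the factory cannot be made XA on the WebLogic side, one commonly suggested workaround, sketched here and not verified against this setup, is to take the MDB out of container-managed transactions so a plain (non-XA) connection factory suffices. The assumption is that losing XA semantics is acceptable for this listener.

        @TransactionManagement(TransactionManagementType.BEAN)
        @MessageDriven(
            activationConfig = { /* as above */ },
            mappedName = "testQueue")
        public class ROMELReceiver implements MessageListener {
            public void onMessage(Message message) {
                System.out.println("Message received");
            }
        }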
