Search Results

Search found 18132 results on 726 pages for 'connection timeout'.


  • Persistence problem when installing USB Ubuntu variant using Windows

    - by Derek Redfern
    I'm part of a project called One2One2Go; we're developing a Live USB Ubuntu variant for use in schools in Somerville, MA. We have the project files compiled into an ISO, and when it is installed using the native Ubuntu Startup Disk Creator, the USB stick works fine. When it is installed using a tool on a Windows machine (LiLi at linuxliveusb.com or liveusb-creator at fedorahosted.org/liveusb-creator), the USB stick works but does not have persistence. This happens even if the creator is specifically set to allocate an area for persistent files. When comparing files on sticks created in Windows and Ubuntu, the one file that differs is syslinux/syslinux.cfg. The contents of both versions are printed below.

    Installed on Windows:

        DEFAULT nomodset
        LABEL debug
          menu label ^debug
          kernel /casper/vmlinuz
          append boot=casper xforcevesa initrd=/casper/initrd.gz --
        LABEL nomodset
          menu label ^nomodset
          kernel /casper/vmlinuz
          append boot=casper quiet splash nomodset initrd=/casper/initrd.gz --
        LABEL memtest
          menu label ^Memory test
          kernel /install/memtest
          append -
        LABEL hd
          menu label ^Boot from first hard disk
          localboot 0x80
          append -
        PROMPT 0
        TIMEOUT 1

    Installed on Ubuntu:

        DEFAULT nomodset
        LABEL debug
          menu label ^debug
          kernel /casper/vmlinuz
          append noprompt cdrom-detect/try-usb=true persistent boot=casper xforcevesa initrd=/casper/initrd.gz --
        LABEL nomodset
          menu label ^nomodset
          kernel /casper/vmlinuz
          append noprompt cdrom-detect/try-usb=true persistent boot=casper quiet splash nomodset initrd=/casper/initrd.gz --
        LABEL memtest
          menu label ^Memory test
          kernel /install/memtest
          append -
        LABEL hd
          menu label ^Boot from first hard disk
          localboot 0x80
          append -
        PROMPT 0
        TIMEOUT 1

    For troubleshooting, I changed the Windows-created syslinux.cfg to match the one created by Ubuntu, and persistence then worked. I think the problem is that on the stick created by Windows there is no "persistent" flag, but I don't know why. Is this a problem with our disk image or with the creator? How would I go about fixing it? Thanks in advance for your help. Derek Redfern
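    A quick way to confirm that diagnosis is to patch a Windows-created stick in place and retest. This is a minimal sketch, assuming the stick's boot partition is mounted at /media/USBSTICK (a placeholder); it splices in the same options the Ubuntu creator writes:

        # Hedged sketch: add the casper persistence options to both kernel entries.
        # /media/USBSTICK is a placeholder for the stick's actual mount point.
        sudo sed -i 's|append boot=casper|append noprompt cdrom-detect/try-usb=true persistent boot=casper|' \
            /media/USBSTICK/syslinux/syslinux.cfg

    If persistence works after that single change, the ISO is fine and the fault lies in how the Windows creators generate syslinux.cfg.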


  • Resetting Update Manager

    - by Emre
    How can I fix Update Manager in 12.04, which hangs when I try to install any update, even though sudo apt-get upgrade works fine? I suspect it has something to do with my Python installation. This is the error message (the identical traceback is printed twice, followed by a shorter one):

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/defer/__init__.py", line 475, in _inline_callbacks
            result = gen.send(result)
          File "/usr/lib/python2.7/dist-packages/aptdaemon/client.py", line 1622, in _run_transaction_helper
            daemon = get_aptdaemon(self.bus)
          File "/usr/lib/python2.7/dist-packages/aptdaemon/client.py", line 1696, in get_aptdaemon
            False),
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 241, in get_object
            follow_name_owner_changes=follow_name_owner_changes)
          File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 248, in __init__
            self._named_service = conn.activate_name_owner(bus_name)
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 180, in activate_name_owner
            self.start_service_by_name(bus_name)
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 278, in start_service_by_name
            'su', (bus_name, flags)))
          File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 651, in call_blocking
            message, timeout)
        dbus.exceptions.DBusException: org.freedesktop.DBus.Error.Spawn.ChildExited: Launch helper exited with unknown return code 1

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/defer/__init__.py", line 473, in _inline_callbacks
            result = gen.throw(result.type, result.value, result.traceback)
          File "/usr/lib/python2.7/dist-packages/UpdateManager/backend/InstallBackendAptdaemon.py", line 52, in commit
            downgrade, defer=True)
        dbus.exceptions.DBusException: org.freedesktop.DBus.Error.Spawn.ChildExited: Launch helper exited with unknown return code 1
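    The failing step is D-Bus spawning the aptdaemon backend, so Update Manager never gets a bus connection, while plain apt-get bypasses the daemon entirely. A hedged first pass is to reinstall the daemon and run it in the foreground, which usually surfaces the underlying Python error:

        # Hedged sketch: reinstall the aptdaemon backend and its bindings
        sudo apt-get install --reinstall aptdaemon python-aptdaemon
        # Run the daemon in the foreground; whatever makes the D-Bus launch
        # helper exit with code 1 should be printed to this terminal
        sudo /usr/sbin/aptd

    If aptd starts cleanly by hand, the next place to look is the D-Bus service definition that the launch helper uses to start it.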


  • Why does "quickly share --ppa share" abort with a "can't create" error?

    - by desgua
    I cannot figure out what I am doing wrong. The package builds fine with quickly package, and I could submit it, but I cannot update my PPA. Here is what I got:

        desgua@desguai7:~/quickly/sbk$ quickly share --ppa sbk
        Get Launchpad Settings
        Launchpad connection is ok
        ..........An error has occurred when creating debian packaging
        ERROR: can't create or update ubuntu package
        ERROR: share command failed
        Aborting

    Edit: the name of my PPA was wrong, but even using ppa:desgua/sbk it still doesn't work:

        desgua@desguai7:~/quickly/sbk$ quickly share --ppa ppa:desgua/sbk
        Get Launchpad Settings
        Traceback (most recent call last):
          File "/usr/share/quickly/templates/ubuntu-application/share.py", line 101
            launchpad = launchpadaccess.initialize_lpi()
          File "/usr/lib/python2.7/dist-packages/quickly/launchpadaccess.py", line 91, in initialize_lpi
            allow_access_levels=["WRITE_PRIVATE"])
          File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 539, in login_with
            credential_save_failed, version)
          File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 359, in _authorize_token_and_login
            service_root, cache, timeout, proxy_info, version)
          File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 198, in __init__
            credentials, service_root, cache, timeout, proxy_info, version)
          File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/resource.py", line 460, in __init__
            self._wadl = self._browser.get_wadl_application(self._root_uri)
          File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/_browser.py", line 299, in get_wadl_application
            response, content = self._request(url, media_type=wadl_type)
          File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/_browser.py", line 242, in _request
            str(url), method=method, body=data, headers=headers)
          File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/_browser.py", line 211, in _request_and_retry
            url, method=method, body=body, headers=headers)
          File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1414, in request
            (response, new_content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
          File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 126, in _request
            LaunchpadOAuthAwareHttp, self)._request(*args)
          File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/_browser.py", line 130, in _request
            redirections, cachekey)
          File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1196, in _request
            (response, content) = self._conn_request(conn, request_uri, method, body, headers)
          File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1138, in _conn_request
            raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
        httplib2.ServerNotFoundError: Unable to find the server at api.launchpad.net
        ERROR: share command failed
        Aborting

    Any ideas? How could I troubleshoot this error?
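    The final ServerNotFoundError means httplib2 could not resolve or reach api.launchpad.net at all, so this is a name-resolution or proxy problem underneath quickly rather than a packaging one. A hedged set of checks from the same shell:

        # Can the host name be resolved at all?
        nslookup api.launchpad.net
        # Is a proxy configured that launchpadlib/httplib2 might need (or trip over)?
        env | grep -i proxy
        # Reproduce the failing HTTPS request outside quickly
        python -c "import httplib2; print httplib2.Http().request('https://api.launchpad.net/1.0/')[0].status"

    If the last command fails the same way, fix connectivity first; quickly share should then get past the Launchpad login step.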


  • How to find the Fastest DNS servers to host our domain?

    - by Denis Volovik
    The question was born because lately we've seen a pretty odd error message (well, at least for us, for the first time) in Google Webmaster Tools: "DNS lookup timeout". I was pretty sure that with eNom's 5 DNS servers (dns1 through dns5.name-services.com) we were well set. But it appears that from Europe/Hungary, for example, dns1.name-services.com takes 170 ms to respond to a ping, while GoDaddy's ns75.domaincontrol.com takes only 40 ms, and at the same time dns2 through dns5.name-services.com each result in a timeout error on ping. This issue came to our attention right in the final stages of optimizing our web site (almost to death), basically just in time. I would love to move our domains to a fast (fastest?) and reliable DNS server, but how do I find one? Also, I did the ping tests from various geographic locations (we have servers in many countries) and GoDaddy seemed to be faster than eNom in almost every case. I'd be very thankful for any hints on this! Edited: Well... maybe this one does not have an answer, after all...
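    Ping measures ICMP round-trip time, and many DNS hosts rate-limit or deprioritize it, so timing an actual DNS query is a fairer comparison. A hedged sketch using dig (substitute your own domain for example.com and extend the server list as needed):

        for ns in dns1.name-services.com dns2.name-services.com ns75.domaincontrol.com; do
            printf '%s: ' "$ns"
            # +time/+tries keep an unresponsive server from stalling the loop
            dig @"$ns" example.com +time=2 +tries=1 | awk '/Query time/ {print $4 " msec"}'
        done

    Run it from each geographic location you care about; providers with anycast-backed name servers tend to show consistently low numbers everywhere, which matters more than any single reading.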


  • Top 10 solution documents for Weblogic Server J2EE Feb 2014 - May 2014

    - by jhpierce -Oracle
    The following are the top 10 documents linked to SRs as solutions for WebLogic Server J2EE issues, from Feb 2014 through May 2014.

    1163020.1 - How to configure a filtering class loader in weblogic.xml. To make the filtering class loader load a certain package from the application, add a prefer-application-packages descriptor element.

    1276593.1 - WLS: how to suppress servlet/JSP version details in the WebLogic HTTP response header. The string "X-Powered-By: Servlet/2.4 JSP/2.0" shows up in the servlet response header; this note explains how to stop WebLogic from including those details.

    1490080.1 - WebLogic Server 12.1.1.0 in a cluster environment throws NotSerializableException for CDI applications at com.sun.jersey.server.impl.cdi.CDIExtension. When running in a clustered environment, server start-up is not clean when CDI applications are deployed.

    1268138.1 - Sample two-way SSL implementation for a JAX-WS web service. In this sample the recipient checks the initiator's public certificate; note that the client certificate can be used for authentication.

    1584779.1 - Socket leaks when calling a web service over SSL. This is known bug 16810786.

    1598617.1 - Secure web service call throwing CANNOT RESOLVE URL FOR PROTOCOL HTTP/HTTPS through the web server (Apache) plug-in.

    1056121.1 - How to time out a WebLogic web service client, with and without using stubs.

    1568638.1 - Packaging Jersey JAX-RS libraries into a webapp throws NoSuchMethodError() when attempting to include custom Jersey implementation libraries in a web application in an OSB domain.

    1118264.1 - WLS 10.3: intermittent XA error XAResource.XAER_RMERR. In WebLogic 10.3, a CMP EJB sometimes throws this exception.

    1608951.1 - How to get more details about error BEA-101215 "Malformed Request. Request parsing failed, Code: -1", seen when accessing the application via a load balancer.
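    As a concrete illustration of the most-requested note above (the filtering class loader), the weblogic.xml fragment looks roughly like this hedged sketch; the org.apache.log4j.* package name is only an example, not part of the original note:

        # Hedged sketch of WEB-INF/weblogic.xml for a filtering class loader
        cat > WEB-INF/weblogic.xml <<'EOF'
        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
          <container-descriptor>
            <prefer-application-packages>
              <package-name>org.apache.log4j.*</package-name>
            </prefer-application-packages>
          </container-descriptor>
        </weblogic-web-app>
        EOF

    Packages listed there are loaded from the application's own libraries instead of the server's, which is the usual cure for classpath clashes like the Jersey NoSuchMethodError in note 1568638.1.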


  • Kubuntu 12.04 - DNS Issues

    - by AndrewJesaitis
    Starting yesterday (6/11/12), I've been having many network problems. When requesting a page in Chrome, the page hangs on "Sending request" and then eventually times out. I'm within a VPN that has its own DNS server. I've tried to set my DNS manually through the Network-Manager applet and by editing /etc/network/interfaces. Having no luck, I unlinked the resolv.conf file and dumped the contents of my old resolv.conf into it. Again having no luck, I deactivated the dnsmasq server in /etc/NetworkManager/NetworkManager.conf by commenting out dns=dnsmasq:

        $ cat NetworkManager.conf
        [main]
        plugins=ifupdown,keyfile
        #dns=dnsmasq
        no-auto-default=D0:67:E5:EA:B6:6B,

        [ifupdown]
        managed=false

        $ nm-tool
        NetworkManager Tool

        State: connected (global)

        - Device: eth0 [Wired connection 1] -------------------------------------------
          Type:              Wired
          Driver:            tg3
          State:             connected
          Default:           yes
          HW Address:        D0:67:E5:EA:B6:6B

          Capabilities:
            Carrier Detect:  yes
            Speed:           1000 Mb/s

          Wired Properties
            Carrier:         on

          IPv4 Settings:
            Address:         192.168.254.122
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.254.2
            DNS:             192.168.254.1

    What is strange is that the network will work fine for a few minutes, then start to time out, and a few minutes later it will work again. I'm unable to hit internal or external sites when it is timing out. When I dig local sites, I receive no answer; I do receive an answer from google.com. At this point I would usually blame the DNS server, especially since things work when I change to Google's DNS server. But I need to use our internal DNS to reach our internal sites. Nobody else is having issues, and they are all using DHCP; that group includes one user who is on 11.04. At this point I'm at a loss for what to do, so any help would be appreciated.
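    Because the failure comes and goes, logging the internal DNS server's behavior over time can catch a timeout as it happens and separate a flaky server from a flaky local resolver. A hedged sketch (intranet.example.com stands in for one of your internal host names):

        while true; do
            if out=$(dig @192.168.254.1 intranet.example.com +time=2 +tries=1 2>&1); then
                echo "$(date +%T) ok $(echo "$out" | awk '/Query time/ {print $4}') msec"
            else
                echo "$(date +%T) TIMEOUT"
            fi
            sleep 5
        done

    If the timeouts in this log line up with the browser stalls, the VPN's DNS server (or the path to it) is the suspect; if dig keeps answering while Chrome hangs, look again at the local resolver chain you modified.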


  • Does Hauppauge WinTV HVR-900 (r2) [USB ID 2040:6502] work with ubuntu 12.04 LTS?

    - by nightfly
    I have this DVB+analog USB TV tuner, a Hauppauge WinTV HVR-900 (r2) [USB ID 2040:6502]. It used to work under Ubuntu 10.04 LTS, but in 12.04 there seems to be a problem. I have linux-firmware-nonfree and ivtv-utils installed. I am running Ubuntu 12.04.1 LTS 64-bit with all updates installed and the default Unity environment. When I run

        mplayer tv:// -tv driver=v4l2:device=/dev/video1:input=1:norm=PAL

    I get a solid green screen and no picture. Here input 1 is the composite input of the card.

        MPlayer svn r34540 (Ubuntu), built with gcc-4.6 (C) 2000-2012 MPlayer Team
        mplayer: could not connect to socket
        mplayer: No such file or directory
        Failed to open LIRC support. You will not be able to use your remote control.
        Playing tv://.
        TV file format detected.
        Selected driver: v4l2
          name: Video 4 Linux 2 input
          author: Martin Olschewski
          comment: first try, more to come ;-)
        Selected device: Hauppauge WinTV HVR 900 (R2)
          Tuner cap: Tuner rxs:
          Capabilities: video capture VBI capture device tuner audio read/write streaming
          supported norms: 0 = NTSC; 1 = NTSC-M; 2 = NTSC-M-JP; 3 = NTSC-M-KR; 4 = NTSC-443; 5 = PAL; 6 = PAL-BG; 7 = PAL-H; 8 = PAL-I; 9 = PAL-DK; 10 = PAL-M; 11 = PAL-N; 12 = PAL-Nc; 13 = PAL-60; 14 = SECAM; 15 = SECAM-B; 16 = SECAM-G; 17 = SECAM-H; 18 = SECAM-DK; 19 = SECAM-L; 20 = SECAM-Lc;
          inputs: 0 = Television; 1 = Composite1; 2 = S-Video;
          Current input: 1
          Current format: YUYV
        v4l2: current audio mode is : MONO
        v4l2: ioctl set format failed: Invalid argument
        v4l2: ioctl set format failed: Invalid argument
        v4l2: ioctl set format failed: Invalid argument
        v4l2: ioctl query control failed: Invalid argument
        v4l2: ioctl query control failed: Invalid argument
        v4l2: ioctl query control failed: Invalid argument
        v4l2: ioctl query control failed: Invalid argument
        Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
        [vdpau] Error when calling vdp_device_create_x11: 1
        ==========================================================================
        Opening video decoder: [raw] RAW Uncompressed Video
        Movie-Aspect is undefined - no prescaling applied.
        VO: [xv] 640x480 = 640x480 Packed YUY2
        Selected video codec: [rawyuy2] vfm: raw (RAW YUY2)
        ==========================================================================
        Audio: no sound
        Starting playback...
        v4l2: select timeout
        V: 0.0 2/ 2 ??% ??% ??,?% 0 0
        v4l2: select timeout
        V: 0.0 4/ 4 ??% ??% ??,?% 0 0
        v4l2: select timeout
        V: 0.0 6/ 6 ??% ??% ??,?% 0 0
        v4l2: select timeout
        v4l2: 0 frames successfully processed, 1 frames dropped.
        Exiting... (Quit)

    Here is the dmesg output from when the card is plugged in:

        [12742.228097] usb 1-4: new high-speed USB device number 3 using ehci_hcd
        [12742.367289] em28xx: New device WinTV HVR-900 @ 480 Mbps (2040:6502, interface 0, class 0)
        [12742.367296] em28xx: Audio Vendor Class interface 0 found
        [12742.367585] em28xx #0: chip ID is em2882/em2883
        [12742.550086] em28xx #0: i2c eeprom 00: 1a eb 67 95 40 20 02 65 d0 12 5c 03 82 1e 6a 18
        [12742.550104] em28xx #0: i2c eeprom 10: 00 00 24 57 66 07 01 00 00 00 00 00 00 00 00 00
        [12742.550120] em28xx #0: i2c eeprom 20: 46 00 01 00 f0 10 02 00 b8 00 00 00 5b e0 00 00
        [12742.550135] em28xx #0: i2c eeprom 30: 00 00 20 40 20 6e 02 20 10 01 01 01 00 00 00 00
        [12742.550150] em28xx #0: i2c eeprom 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        [12742.550165] em28xx #0: i2c eeprom 50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        [12742.550181] em28xx #0: i2c eeprom 60: 00 00 00 00 00 00 00 00 00 00 18 03 34 00 30 00
        [12742.550196] em28xx #0: i2c eeprom 70: 32 00 37 00 38 00 32 00 33 00 39 00 30 00 31 00
        [12742.550211] em28xx #0: i2c eeprom 80: 00 00 1e 03 57 00 69 00 6e 00 54 00 56 00 20 00
        [12742.550226] em28xx #0: i2c eeprom 90: 48 00 56 00 52 00 2d 00 39 00 30 00 30 00 00 00
        [12742.550241] em28xx #0: i2c eeprom a0: 84 12 00 00 05 50 1a 7f d4 78 23 fa fd d0 28 89
        [12742.550257] em28xx #0: i2c eeprom b0: ff 00 00 00 04 84 0a 00 01 01 20 77 00 40 1d b7
        [12742.550272] em28xx #0: i2c eeprom c0: 13 f0 74 02 01 00 01 79 63 00 00 00 00 00 00 00
        [12742.550287] em28xx #0: i2c eeprom d0: 84 12 00 00 05 50 1a 7f d4 78 23 fa fd d0 28 89
        [12742.550302] em28xx #0: i2c eeprom e0: ff 00 00 00 04 84 0a 00 01 01 20 77 00 40 1d b7
        [12742.550317] em28xx #0: i2c eeprom f0: 13 f0 74 02 01 00 01 79 63 00 00 00 00 00 00 00
        [12742.550334] em28xx #0: EEPROM ID= 0x9567eb1a, EEPROM hash = 0x2bbf3bdd
        [12742.550338] em28xx #0: EEPROM info:
        [12742.550340] em28xx #0: AC97 audio (5 sample rates)
        [12742.550343] em28xx #0: 500mA max power
        [12742.550346] em28xx #0: Table at 0x24, strings=0x1e82, 0x186a, 0x0000
        [12742.552590] em28xx #0: Identified as Hauppauge WinTV HVR 900 (R2) (card=18)
        [12742.555516] tveeprom 15-0050: Hauppauge model 65018, rev B2C0, serial# 1292061
        [12742.555523] tveeprom 15-0050: tuner model is Xceive XC3028 (idx 120, type 71)
        [12742.555529] tveeprom 15-0050: TV standards PAL(B/G) PAL(I) PAL(D/D1/K) ATSC/DVB Digital (eeprom 0xd4)
        [12742.555534] tveeprom 15-0050: audio processor is None (idx 0)
        [12742.555537] tveeprom 15-0050: has radio
        [12742.570297] tuner 15-0061: Tuner -1 found with type(s) Radio TV.
        [12742.570327] xc2028 15-0061: creating new instance
        [12742.570332] xc2028 15-0061: type set to XCeive xc2028/xc3028 tuner
        [12742.573685] xc2028 15-0061: Loading 80 firmware images from xc3028-v27.fw, type: xc2028 firmware, ver 2.7
        [12742.624056] xc2028 15-0061: Loading firmware for type=BASE MTS (5), id 0000000000000000.
        [12744.126591] xc2028 15-0061: Loading firmware for type=MTS (4), id 000000000000b700.
        [12744.153586] xc2028 15-0061: Loading SCODE for type=MTS LCD NOGD MONO IF SCODE HAS_IF_4500 (6002b004), id 000000000000b700.
        [12744.280963] Registered IR keymap rc-hauppauge
        [12744.281151] input: em28xx IR (em28xx #0) as /devices/pci0000:00/0000:00:1a.7/usb1/1-4/rc/rc1/input10
        [12744.281541] rc1: em28xx IR (em28xx #0) as /devices/pci0000:00/0000:00:1a.7/usb1/1-4/rc/rc1
        [12744.282454] em28xx #0: Config register raw data: 0xd0
        [12744.284709] em28xx #0: AC97 vendor ID = 0xffffffff
        [12744.285829] em28xx #0: AC97 features = 0x6a90
        [12744.285832] em28xx #0: Empia 202 AC97 audio processor detected
        [12744.359211] em28xx #0: v4l2 driver version 0.1.3
        [12744.404066] xc2028 15-0061: Loading firmware for type=BASE F8MHZ MTS (7), id 0000000000000000.
        [12745.915089] MTS (4), id 00000000000000ff:
        [12745.915100] xc2028 15-0061: Loading firmware for type=MTS (4), id 0000000100000007.
        [12746.161668] em28xx #0: V4L2 video device registered as video1
        [12746.161673] em28xx #0: V4L2 VBI device registered as vbi0
        [12746.162845] em28xx-audio.c: probing for em28xx Audio Vendor Class
        [12746.162848] em28xx-audio.c: Copyright (C) 2006 Markus Rechberger
        [12746.162851] em28xx-audio.c: Copyright (C) 2007-2011 Mauro Carvalho Chehab
        [12746.221099] xc2028 15-0061: attaching existing instance
        [12746.221105] xc2028 15-0061: type set to XCeive xc2028/xc3028 tuner
        [12746.221109] em28xx #0: em28xx #0/2: xc3028 attached
        [12746.221113] DVB: registering new adapter (em28xx #0)
        [12746.221118] DVB: registering adapter 0 frontend 0 (Micronas DRXD DVB-T)...
        [12746.221869] em28xx #0: Successfully loaded em28xx-dvb
        [13111.196055] xc2028 15-0061: Loading firmware for type=BASE F8MHZ MTS (7), id 0000000000000000.
        [13112.720062] MTS (4), id 00000000000000ff:
        [13112.720072] xc2028 15-0061: Loading firmware for type=MTS (4), id 0000000100000007.
        [13214.956057] xc2028 15-0061: Loading firmware for type=BASE F8MHZ MTS (7), id 0000000000000000.
        [13216.479806] MTS (4), id 00000000000000ff:
        [13216.479816] xc2028 15-0061: Loading firmware for type=MTS (4), id 0000000100000007.
        [13276.408056] xc2028 15-0061: Loading firmware for type=BASE F8MHZ MTS (7), id 0000000000000000.
        [13277.932093] MTS (4), id 00000000000000ff:
        [13277.932104] xc2028 15-0061: Loading firmware for type=MTS (4), id 0000000100000007.
        [13305.032076] xc2028 15-0061: Loading firmware for type=BASE F8MHZ MTS (7), id 0000000000000000.
        [13306.556449] MTS (4), id 00000000000000ff:
        [13306.556460] xc2028 15-0061: Loading firmware for type=MTS (4), id 0000000100000007.
        [13392.236055] xc2028 15-0061: Loading firmware for type=BASE F8MHZ MTS (7), id 0000000000000000.
        [13393.760123] MTS (4), id 00000000000000ff:
        [13393.760133] xc2028 15-0061: Loading firmware for type=MTS (4), id 0000000100000007.
        [13637.534053] usb 1-4: USB disconnect, device number 3
        [13637.534183] em28xx #0: disconnecting em28xx #0 video
        [13637.560214] em28xx #0: V4L2 device vbi0 deregistered
        [13637.560335] em28xx #0: V4L2 device video1 deregistered
        [13637.561237] xc2028 15-0061: destroying instance
        [13639.772120] usb 1-4: new high-speed USB device number 4 using ehci_hcd
        [13639.911351] em28xx: New device WinTV HVR-900 @ 480 Mbps (2040:6502, interface 0, class 0)
        [13639.911357] em28xx: Audio Vendor Class interface 0 found
        [13639.911637] em28xx #0: chip ID is em2882/em2883
        [13640.094262] em28xx #0: i2c eeprom 00: 1a eb 67 95 40 20 02 65 d0 12 5c 03 82 1e 6a 18
        [13640.094280] em28xx #0: i2c eeprom 10: 00 00 24 57 66 07 01 00 00 00 00 00 00 00 00 00
        [13640.094295] em28xx #0: i2c eeprom 20: 46 00 01 00 f0 10 02 00 b8 00 00 00 5b e0 00 00
        [13640.094311] em28xx #0: i2c eeprom 30: 00 00 20 40 20 6e 02 20 10 01 01 01 00 00 00 00
        [13640.094326] em28xx #0: i2c eeprom 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        [13640.094341] em28xx #0: i2c eeprom 50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        [13640.094356] em28xx #0: i2c eeprom 60: 00 00 00 00 00 00 00 00 00 00 18 03 34 00 30 00
        [13640.094371] em28xx #0: i2c eeprom 70: 32 00 37 00 38 00 32 00 33 00 39 00 30 00 31 00
        [13640.094386] em28xx #0: i2c eeprom 80: 00 00 1e 03 57 00 69 00 6e 00 54 00 56 00 20 00
        [13640.094401] em28xx #0: i2c eeprom 90: 48 00 56 00 52 00 2d 00 39 00 30 00 30 00 00 00
        [13640.094416] em28xx #0: i2c eeprom a0: 84 12 00 00 05 50 1a 7f d4 78 23 fa fd d0 28 89
        [13640.094432] em28xx #0: i2c eeprom b0: ff 00 00 00 04 84 0a 00 01 01 20 77 00 40 1d b7
        [13640.094447] em28xx #0: i2c eeprom c0: 13 f0 74 02 01 00 01 79 63 00 00 00 00 00 00 00
        [13640.094462] em28xx #0: i2c eeprom d0: 84 12 00 00 05 50 1a 7f d4 78 23 fa fd d0 28 89
        [13640.094477] em28xx #0: i2c eeprom e0: ff 00 00 00 04 84 0a 00 01 01 20 77 00 40 1d b7
        [13640.094492] em28xx #0: i2c eeprom f0: 13 f0 74 02 01 00 01 79 63 00 00 00 00 00 00 00
        [13640.094509] em28xx #0: EEPROM ID= 0x9567eb1a, EEPROM hash = 0x2bbf3bdd
        [13640.094512] em28xx #0: EEPROM info:
        [13640.094515] em28xx #0: AC97 audio (5 sample rates)
        [13640.094517] em28xx #0: 500mA max power
        [13640.094521] em28xx #0: Table at 0x24, strings=0x1e82, 0x186a, 0x0000
        [13640.097391] em28xx #0: Identified as Hauppauge WinTV HVR 900 (R2) (card=18)
        [13640.099617] tveeprom 15-0050: Hauppauge model 65018, rev B2C0, serial# 1292061
        [13640.099623] tveeprom 15-0050: tuner model is Xceive XC3028 (idx 120, type 71)
        [13640.099629] tveeprom 15-0050: TV standards PAL(B/G) PAL(I) PAL(D/D1/K) ATSC/DVB Digital (eeprom 0xd4)
        [13640.099634] tveeprom 15-0050: audio processor is None (idx 0)
        [13640.099637] tveeprom 15-0050: has radio
        [13640.112849] tuner 15-0061: Tuner -1 found with type(s) Radio TV.
        [13640.112877] xc2028 15-0061: creating new instance
        [13640.112882] xc2028 15-0061: type set to XCeive xc2028/xc3028 tuner
        [13640.115930] xc2028 15-0061: Loading 80 firmware images from xc3028-v27.fw, type: xc2028 firmware, ver 2.7
        [13640.164057] xc2028 15-0061: Loading firmware for type=BASE MTS (5), id 0000000000000000.
        [13641.666643] xc2028 15-0061: Loading firmware for type=MTS (4), id 000000000000b700.
        [13641.693262] xc2028 15-0061: Loading SCODE for type=MTS LCD NOGD MONO IF SCODE HAS_IF_4500 (6002b004), id 000000000000b700.
        [13641.820765] Registered IR keymap rc-hauppauge
        [13641.820958] input: em28xx IR (em28xx #0) as /devices/pci0000:00/0000:00:1a.7/usb1/1-4/rc/rc2/input11
        [13641.821335] rc2: em28xx IR (em28xx #0) as /devices/pci0000:00/0000:00:1a.7/usb1/1-4/rc/rc2
        [13641.822256] em28xx #0: Config register raw data: 0xd0
        [13641.824526] em28xx #0: AC97 vendor ID = 0xffffffff
        [13641.825503] em28xx #0: AC97 features = 0x6a90
        [13641.825507] em28xx #0: Empia 202 AC97 audio processor detected
        [13641.899015] em28xx #0: v4l2 driver version 0.1.3
        [13641.944064] xc2028 15-0061: Loading firmware for type=BASE F8MHZ MTS (7), id 0000000000000000.
        [13643.470765] MTS (4), id 00000000000000ff:
        [13643.470776] xc2028 15-0061: Loading firmware for type=MTS (4), id 0000000100000007.
        [13643.717713] em28xx #0: V4L2 video device registered as video1
        [13643.717718] em28xx #0: V4L2 VBI device registered as vbi0
        [13643.718770] em28xx-audio.c: probing for em28xx Audio Vendor Class
        [13643.718775] em28xx-audio.c: Copyright (C) 2006 Markus Rechberger
        [13643.718778] em28xx-audio.c: Copyright (C) 2007-2011 Mauro Carvalho Chehab
        [13643.777148] xc2028 15-0061: attaching existing instance
        [13643.777154] xc2028 15-0061: type set to XCeive xc2028/xc3028 tuner
        [13643.777158] em28xx #0: em28xx #0/2: xc3028 attached
        [13643.777162] DVB: registering new adapter (em28xx #0)
        [13643.777167] DVB: registering adapter 0 frontend 0 (Micronas DRXD DVB-T)...
        [13643.777876] em28xx #0: Successfully loaded em28xx-dvb

    And here is the lsmod output:

        $ lsmod | grep em28xx
        em28xx_dvb         18579  0
        dvb_core          110619  1 em28xx_dvb
        em28xx_alsa        18305  0
        em28xx            109365  2 em28xx_dvb,em28xx_alsa
        v4l2_common        16454  3 tuner,tvp5150,em28xx
        videobuf_vmalloc   13589  1 em28xx
        videobuf_core      26390  2 em28xx,videobuf_vmalloc
        rc_core            26412  10 rc_hauppauge,ir_lirc_codec,ir_mce_kbd_decoder,ir_sony_decoder,ir_jvc_decoder,ir_rc6_decoder,ir_rc5_decoder,em28xx,ir_nec_decoder
        snd_pcm            97188  3 em28xx_alsa,snd_hda_intel,snd_hda_codec
        tveeprom           21249  1 em28xx
        videodev           98259  5 tuner,tvp5150,em28xx,v4l2_common,uvcvideo
        snd                78855  14 em28xx_alsa,snd_hda_codec_conexant,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device

    Isn't this driver mainline now? Or is this card not supported? Or is the analog functionality broken? I need analog capture working on this card. Please help!
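    A "select timeout" with a green frame means the capture pipeline never delivers a buffer for the chosen input and format, and the repeated "ioctl set format failed" lines point to a format negotiation problem rather than a missing driver. Before blaming em28xx, it is worth confirming what the device accepts with v4l2-ctl (a hedged sketch; option names follow the v4l-utils of that era, so adjust if your version differs):

        sudo apt-get install v4l-utils                 # provides v4l2-ctl if absent
        v4l2-ctl -d /dev/video1 --list-inputs          # confirm Composite1 really is input 1
        v4l2-ctl -d /dev/video1 --set-input=1
        v4l2-ctl -d /dev/video1 --set-fmt-video=width=640,height=480,pixelformat=YUYV
        v4l2-ctl -d /dev/video1 --all                  # dump the state the driver settled on

    If the format ioctls fail here too, the analog path of the em28xx driver in that kernel is the likely regression, and testing a newer kernel (or the linuxtv media_build drivers) would be the next step.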


  • Why doesn't my grub background show?

    - by luri
    I've tried to change the resolution, colors, and background image for my GRUB menu, but I get no background (well, just a black one, no image)... What am I doing wrong? This is my grub.cfg (omitting the OS entries):

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          set have_grubenv=true
          load_env
        fi
        set default="${saved_entry}"
        if [ "${prev_saved_entry}" ]; then
          set saved_entry="${prev_saved_entry}"
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi
        function savedefault {
          if [ -z "${boot_once}" ]; then
            saved_entry="${chosen}"
            save_env saved_entry
          fi
        }
        function recordfail {
          set recordfail=1
          if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
        }
        function load_video {
          insmod vbe
          insmod vga
        }
        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=1280x1024x24
          load_video
          insmod gfxterm
        fi
        terminal_output gfxterm
        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        set locale_dir=($root)/boot/grub/locale
        set lang=es
        insmod gettext
        if [ "${recordfail}" = 1 ]; then
          set timeout=-1
        else
          set timeout=10
        fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        insmod jpeg
        if background_image /boot/grub/Serenity_Enchanted_by_sirpecangum.jpg ; then
          set color_normal=black/white
          set color_highlight=brown/light-gray
        else
          set menu_color_normal=white/black
          set menu_color_highlight=black/light-gray
        fi
        ### END /etc/grub.d/05_debian_theme ###

    The selected image has been copied to /boot/grub/Serenity_Enchanted_by_sirpecangum.jpg with no luck. I'm surely missing something (probably something obvious), but I don't really get it...
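    GRUB's built-in JPEG loader is picky; progressive JPEGs and unusual color depths commonly fail silently, leaving exactly this black background. A hedged thing to try is re-encoding the image as a plain 8-bit PNG at the menu resolution and pointing GRUB at it (background.png is a placeholder name, and GRUB_BACKGROUND is honored by Ubuntu's GRUB 2 scripts on recent releases):

        sudo apt-get install imagemagick
        convert Serenity_Enchanted_by_sirpecangum.jpg -resize 1280x1024! -depth 8 background.png
        sudo cp background.png /boot/grub/
        echo 'GRUB_BACKGROUND=/boot/grub/background.png' | sudo tee -a /etc/default/grub
        sudo update-grub

    Note that your grub.cfg only loads insmod jpeg; if you keep a JPEG, a baseline (non-progressive) encoding is the safest bet, while a PNG needs insmod png, which update-grub should arrange when regenerating the file.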


  • Am I the only one this anal / obsessive about code? [closed]

    - by Chris
    While writing a shared lock class for SQL Server for a web app tonight, I found myself writing in the code style below, as I always do:

        private bool acquired;
        private bool disposed;
        private TimeSpan timeout;
        private string connectionString;
        private Guid instance = Guid.NewGuid();
        private Thread autoRenewThread;

    Basically, whenever I'm declaring a group of variables or writing a SQL statement or any coding activity involving multiple related lines, I always try to arrange them where possible so that they form a bell curve (imagine rotating the text 90deg CCW). As an example of something that peeves the hell out of me, consider the following alternative:

        private bool acquired;
        private bool disposed;
        private string connectionString;
        private Thread autoRenewThread;
        private Guid instance = Guid.NewGuid();
        private TimeSpan timeout;

    In the above example, declarations are grouped (arbitrarily) so that the primitive types appear at the top. When viewing the code in Visual Studio, primitive types are a different color than non-primitives, so the grouping makes sense visually, if for no other reason. But I don't like it because the right margin is less of an aesthetic curve. I've always chalked this up to being OCD or something, but at least in my mind, the code is "prettier". Am I the only one?


  • Can't start system due to mdadm failing

    - by user101212
    I used to have a 5-disk RAID5 partition, all working very well. I then decided to add 3 more disks to it, for a total of 8 equal disks. I opened Webmin and just asked it to add the disks. Then I realized the three new disks had NTFS partitions, which mdadm didn't complain about, so I tried to stop the grow operation to remove the Windows partitions. I tried to remove a disk using the same Webmin, but (as you might guess, and call me a fool...) the system became unstable. On restarting the system, I started receiving these messages:

        udev[126]: timeout: killing '/sbin/mdadm --incremental /dev/sdh1' [311]
        udev[124]: timeout: killing '/sbin/mdadm --detail --export /dev/md0' [316]

    I formatted the system disk, hoping to get a system up and running. I did that with all RAID disks disconnected, so everything was fine. I then reconnected the disks, which was also OK, and finally installed mdadm using apt-get. On reboot, the system found mdadm's intention of growing the array, so the same messages appeared again. In other words: I can't even reach a command prompt to do anything. Any ideas of what to do? I believe I could turn off the system, disconnect the disks, and look for the mdadm.conf file. Would that be a good idea? I'm no Linux expert, so I'm really lost here.
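    One hedged way forward is to boot from a live CD/USB with the array disks attached, keep the array from auto-assembling, and inspect the members before deciding anything destructive; an interrupted reshape can often be restarted if mdadm is given its backup file. A sketch (device names and the backup path are placeholders):

        sudo mdadm --stop /dev/md0              # stop any half-assembled array
        sudo mdadm --examine /dev/sd[b-i]1      # check each member's reshape state
        # If the grow was interrupted, reassembly may need the reshape backup file:
        sudo mdadm --assemble /dev/md0 /dev/sd[b-i]1 --backup-file=/root/grow_md0.bak

    On the installed system, commenting out the ARRAY line in /etc/mdadm/mdadm.conf and running sudo update-initramfs -u should at least stop the boot-time udev timeouts so you can reach a prompt.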


  • How do I add a boot from cd option to yaboot?

    - by Sergiu
    So I'm dual-booting Ubuntu 12.04.1 on my iMac G5 PowerPC alongside Mac OS X, and I want to add a boot-from-CD option to yaboot because I'm trying to boot a scratched Mac OS X installation DVD that takes a while to read, and the first bootstrap moves on too fast. How do I edit the timeout for the first bootstrap, anyway? So, my main question is: how do I add a CD boot option to yaboot, and then how do I boot it? The devalias output from OpenFirmware tells me that I have 2 CD-ROMs installed; one is /ht/pci@3/ata-6/disk@0 and the other ends with a 1 instead of a zero. These are the contents of my yaboot.conf file:

        ## yaboot.conf generated by the Ubuntu installer
        ##
        ## run: "man yaboot.conf" for details. Do not make changes until you have!!
        ## see also: /usr/share/doc/yaboot/examples for example configurations.
        ##
        ## For a dual-boot menu, add one or more of:
        ## bsd=/dev/hdaX, macos=/dev/hdaY, macosx=/dev/hdaZ

        boot="/dev/disk/by-id/scsi-SATA_ST3160023AS_5MT1GCWA-part2"
        device=/ht@0,f2000000/pci@3/k2-sata-root@c/@0/@0
        partition=4
        root="UUID=798a048f-ee48-49e0-bba3-111aed8dee04"
        timeout=12000
        install=/usr/lib/yaboot/yaboot
        magicboot=/usr/lib/yaboot/ofboot
        enablecdboot

        macosx="/dev/disk/by-id/scsi-SATA_ST3160023AS_5MT1GCWA-part3"

        image=/boot/vmlinux
            label=Linux
            read-only
            initrd=/boot/initrd.img
            append="quiet splash"

    What do I add here so that yaboot will boot from my CD, say, 3 minutes after startup? Thanks!
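    With enablecdboot already set, yaboot's first-stage (multi-OS) menu should offer the CD via the 'c' key; what usually needs changing is how long that first menu waits, which is the delay option (in seconds), separate from timeout (in tenths of a second) for the kernel menu. A hedged sketch of the change:

        # In /etc/yaboot.conf (values are examples):
        #   delay=180      first-stage menu waits 180 seconds; press 'c' there for the CD
        #   timeout=12000  second-stage kernel menu timeout, in tenths of a second
        # Then write the updated config to the bootstrap partition:
        sudo ybin -v

    A three-minute delay gives you time to press 'c' and lets the slow, scratched drive settle before the menu falls through to the default entry.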


  • How to Assign a Static IP Address in XP, Vista, or Windows 7

    - by Mysticgeek
    When organizing your home network it's easier to assign each computer its own IP address than to use DHCP. Here we will take a look at doing it in XP, Vista, and Windows 7.

    If you have a home network with several computers and devices, it's a good idea to assign each of them a specific address. If you use DHCP (Dynamic Host Configuration Protocol), each computer will request and be assigned an address every time it's booted up. When you have to do troubleshooting on your network, it's annoying going to each machine to figure out what IP it has. Using static IPs prevents address conflicts between devices and allows you to manage them more easily. Assigning IPs in Windows is essentially the same process; it's just getting to where you need to be that varies between versions.

    Windows 7

    To change the computer's IP address in Windows 7, type network and sharing into the Search box in the Start Menu and select Network and Sharing Center when it comes up. When the Network and Sharing Center opens, click on Change adapter settings. Right-click on your local adapter and select Properties. In the Local Area Connection Properties window highlight Internet Protocol Version 4 (TCP/IPv4), then click the Properties button. Now select the radio button Use the following IP address and enter the correct IP, Subnet mask, and Default gateway that correspond with your network setup. Then enter your Preferred and Alternate DNS server addresses. Here we're on a home network using a simple Class C network configuration and Google DNS. Check Validate settings upon exit so Windows can find any problems with the addresses you entered. When you're finished click OK, then close out of the Local Area Connection Properties window. Windows 7 will run network diagnostics and verify the connection is good. Here we had no problems, but if you did, you could run the network troubleshooting wizard. Now you can open the command prompt and do an ipconfig to see that the network adapter settings have been successfully changed.

    Windows Vista

    Changing your IP from DHCP to a static address in Vista is similar to Windows 7, but getting to the correct location is a bit different. Open the Start Menu, right-click on Network, and select Properties. The Network and Sharing Center opens; click on Manage network connections. Right-click on the network adapter you want to assign an IP address and click Properties. Highlight Internet Protocol Version 4 (TCP/IPv4), then click the Properties button. Now change the IP, Subnet mask, Default gateway, and DNS server addresses. When you're finished click OK. You'll need to close out of Local Area Connection Properties for the settings to go into effect. Open the command prompt and do an ipconfig to verify the changes were successful.

    Windows XP

    In this example we're using XP SP3 Media Center Edition and changing the IP address of the wireless adapter. To set a static IP in XP, right-click on My Network Places and select Properties. Right-click on the adapter you want to set the IP for and select Properties. Highlight Internet Protocol (TCP/IP) and click the Properties button. Now change the IP, Subnet mask, Default gateway, and DNS server addresses. When you're finished click OK. You will need to close out of the Network Connection Properties screen before the changes go into effect. Again you can verify the settings by doing an ipconfig in the command prompt. In case you're not sure how to do this, click on Start, then Run. In the Run box type in cmd and click OK.
    Then at the prompt type in ipconfig and hit Enter. This will show the IP address for the network adapter you changed. If you have a small office or home network, assigning each computer a specific IP address makes it a lot easier to manage and troubleshoot network connection problems.
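    The same change can be scripted instead of clicked through. A hedged sketch using the built-in netsh tool from an elevated Command Prompt; the adapter name and addresses are examples matching the Class C setup above:

        netsh interface ip set address "Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1 1
        netsh interface ip set dns "Local Area Connection" static 8.8.8.8
        netsh interface ip add dns "Local Area Connection" 8.8.4.4 index=2
        ipconfig /all

    This is handy when you have several machines to renumber, since the commands can be kept in a batch file and edited per host.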


  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak's new version (2.1) revealed a few additional interesting features. For those of you who haven't read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance and scalability of SQL Server applications, without making code changes to the applications or to the databases. SafePeak performs database dynamic caching by holding in memory the result sets of queries and stored procedures, while keeping that cache correct and up to date. Cached queries are retrieved from SafePeak's RAM at microsecond speed (100-500 microseconds) and are not sent to the SQL Server, so the application gets much faster results, the load on the SQL Server is reduced (less CPU and IO), and the application or the infrastructure gets better scalability. The SafePeak solution is hosted within your cloud servers, hosted servers, or enterprise servers, as part of the application architecture. The application is connected either by changing connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more about SafePeak's architecture and how it works, I suggest reading the vendor's webpage: SafePeak Architecture.

    More interesting new features in SafePeak 2.1

    In my previous review of the new SafePeak I covered the first 4 things I noticed (check out my article "SQLAuthority News - SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration"):

    1. Cache setup and fine-tuning, a critical part of getting good caching results
    2. Database templates
    3. Choosing which database to cache
    4. Monitoring and analysis options by SafePeak

    Since then I had a chance to play with SafePeak some more, and here is what I found.

    5. Analysis of SQL performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and once the pattern is used again there are statistics for it. An important part of this product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions, and procedures). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic, and a breakdown of behavior. One of the interesting things the behavior column shows is how often the object is actually updated. The breakdown analysis provides this information for queries and procedures, tables, views, databases, and even at the instance level. The data is now shown for all arriving queries: read queries (which can be cached), but also all types of updates like DML, DDL, DCL, and even session-settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.

    6. Logon trigger, for making sure nothing corrupts SafePeak's cache data: If you have an application with many parts, many servers, many possible locations that can update the database, or a SQL Server that is accessible to many DBAs or software engineers, any of them can access some database directly and make changes without going through SafePeak, creating a potential corruption of the data stored in SafePeak's cache. To keep its cache correct, SafePeak needs all updates to arrive through it; if a DBA accesses the database directly and makes changes, SafePeak will simply not know about it and will not clean its cache. In the new version, SafePeak brings a new feature called "Logon Trigger" to solve this challenge. With the click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that is not coming from SafePeak. In the SafePeak dashboard there is an interface that lets you control which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and actually lock the SafePeak cache until the connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. This feature is configured in the SafePeak dashboard under: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

    Other features:

    7. User management: the ability to grant someone permissions to use SafePeak only as a performance analysis tool, without changing its configuration.
    8. Better reports for analysis of performance, using 15-minute-resolution charts.
    9. Caching of client cursors.
    10. Support for IPv6.

    Summary

    SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability, and peak-spike challenges, especially if your apps are packaged or third-party, since no code changes are made. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free, fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
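    Of the two connection methods the review mentions, the hosts-file reroute is the one that needs no application change at all. A hedged example of what that line looks like on each application server (the name and address are placeholders):

        # c:\windows\system32\drivers\etc\hosts on each application server:
        # the name the app already uses now resolves to the SafePeak host
        10.0.0.25    sqlprod01

    The application keeps its existing connection string; only name resolution changes, and SafePeak forwards anything it cannot serve from cache to the real SQL Server.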


  • Install Control Center Agent on Oracle Application Server

    - by qianqian.wu
    Control Center Agent (CCA)

    The Control Center Agent is the OWB component that runs the Template Mappings in the Oracle Containers for J2EE (OC4J) server; it is also referred to as the J2EE Runtime. The Control Center Agent provides a Java-based runtime environment that can be installed on Oracle and non-Oracle database hosts, and it provides the fundamental infrastructure for the heterogeneous, Code Template-based mapping support and web services-related features of OWB in this release.

    In Oracle Warehouse Builder 11gR2 the Control Center Agent by default runs in the built-in OC4J that is bundled in the Oracle home. Besides that, you also have the ability to install the Control Center Agent in an Oracle Application Server installation. In this article, you will find step-by-step instructions for installing the Control Center Agent on an Oracle Application Server instance. The instructions cover the following tasks:

    Task 1: Install and Configure the Application Server
    Task 2: Deploy the Control Center Agent to the Application Server
    Task 3: Optional Configuration Tasks

    Task 1: Install and Configure the Application Server

    Before configuring the Application Server, you need to install it from the Oracle Application Server CD-ROM, or by downloading the installation program from Oracle Technology Network (OTN). Once the installation is completed, you are ready to configure the Application Server. The purpose of the configuration task is to make sure the Control Center Agent ear file can be deployed and run in the Application Server successfully. The essential configuration tasks are outlined below:

    · Modify the OC4J Startup Script
    · Set up Control Center Agent Server-Side Logging
    · Set up the Audit Table Data Source
    · Copy the ct_permissions.properties File
    · Set up Security Roles for Control Center Agent
    · Create JMS Queues
    · Install JDBC Drivers to OC4J

    Modify the OC4J Startup Script

    The OC4J startup script opmn.xml is located in the Application Server configuration directory, $AS_HOME/opmn/conf, where $AS_HOME stands for the root home directory of the application server. Open the file opmn.xml in a text editor and adjust the OC4J start parameters, making sure that:

    · MaxPermSize is set to 128M. This ensures that enough PermGen space is allocated to OC4J to run the Control Center Agent and prevents java.lang.OutOfMemoryError when running the agent.
    · python.path sets the path for the Python library files used by the Control Center Agent: jython_lib.zip and jython_owblib.jar. These two files are in the $OWB_HOME/owb/lib/int directory, where $OWB_HOME is the directory where OWB is installed.
    · km_security_needed determines whether restrictions are applied to the kinds of operating system commands allowed to be executed by the OWB Code Template scripts run by the Control Center Agent. Setting km_security_needed to "true" enforces that restriction, while setting it to "false" removes it.

    Set up Control Center Agent Server-Side Logging

    Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file j2ee-logging.xml in a text editor and add the jrt-internal-log-handler to the log-handlers section; this is the handler used by the Control Center Agent runtime logger to create log files. Then add an entry to the loggers section to create the logger for Control Center Agent runtime auditing.

    Set up the Audit Table Data Source

    To enable Audit Table logging, a managed data source and connection pool need to be set up before Control Center Agent deployment. Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file data-sources.xml in a text editor and define the audit data source shown below:

        <managed-data-source name="AuditDS"
            connection-pool-name="OWBSYS Audit Connection Pool"
            jndi-name="jdbc/AuditDS"/>
        <connection-pool name="OWBSYS Audit Connection Pool">
          <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
              user="owbsys_audit" password="owbsys_audit"
              url="jdbc:oracle:thin:@//localhost:1521/ORCL"/>
        </connection-pool>

    Copy the ct_permissions.properties File

    The ct_permissions.properties file can be obtained from the $OWB_HOME/owb/jrt/config/ directory. You need to copy it to the $AS_HOME/j2ee/home/config directory. This properties file takes effect when the setting km-security is set to true in the Control Center Agent. By default ALLOWED_CMD is commented out in ct_permissions.properties, which prevents all system commands from being invoked from scripts executed in the Control Center Agent (when km-security is set to true). To allow certain system commands to be invoked, uncomment ALLOWED_CMD and add the allowed system commands to it.

    Set up Security Roles for Control Center Agent

    You can set up the Control Center Agent security roles through Oracle Enterprise Manager. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).

    1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Security. Click the task icon next to Security Providers.
    2. On the Security Providers page click the "Instance Level Security" button. On the Instance Level Security page, go to the "Realms" tab. You will see a row for the default realm "jazn.com" in the results table, with a "Roles" column and a "Users" column. Click the number in the "Roles" column. The "Roles" page displays all the roles available for the realm. Click the "Create" button to create a new role, "OWB_J2EE_EXECUTOR".
    3. On the Add Role screen, enter the name OWB_J2EE_EXECUTOR and click OK.
    4. Following the same steps as before, create a new role "OWB_J2EE_OPERATOR".
    5. Assign the roles "oc4j-administrators" and "OWB_J2EE_EXECUTOR" to the role "OWB_J2EE_OPERATOR" by moving them from "Available Roles", and click "OK" to save.
    6. Go back to the Instance Level Security page and create a new role "OWB_J2EE_ADMINISTRATOR".
    7. Assign the roles "OWB_J2EE_OPERATOR" and "OWB_J2EE_EXECUTOR" to the role "OWB_J2EE_ADMINISTRATOR" by moving them from "Available Roles", and click "OK" to save.
    8. Go back to the Instance Level Security page. This time, click the number in the "Users" column for the realm "jazn.com". The "Users" page shows all the users defined for this realm. Locate the user "oc4jadmin" in the results table and click it.
    9. Assign the roles "OWB_J2EE_ADMINISTRATOR" and "oc4j-app-administrators" to this user by moving them from the "Available Roles" selection box to the "Selected Roles" box, and click "Apply" to save.
    10. Go back to the Instance Level Security page and create a new role "OWB_INTERNAL_USERS", assigning no user or role to it. Simply click "OK" to create the role.

    Now you have finished creating the security roles required for the Control Center Agent.

    Create JMS Queues

    You need to create two JMS queues for the Control Center Agent: owbQueue and abort_owbQueue.

    1. Go to the OC4J home page. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Services and then expand Enterprise Messaging Service. Click the task icon next to JMS Destinations.
    2. On the JMS Destinations page, click the "Create New" button to create a new JMS queue. On the Add Destination page, choose "Queue" as the Destination Type. Enter "owbQueue" as the Destination Name, select "In Memory Persistence Only" as the Persistence Type, enter "jms/owbQueue" as the JNDI Location, and click "OK" to finish.
    3. Follow the same instructions as above to create the abort_owbQueue.

    Now you have finished creating the JMS queues required for the Control Center Agent.

    Install JDBC Drivers to OC4J

    In order to execute Code Templates using commercial databases other than Oracle (e.g. DB2, SQL Server, etc.), the corresponding JDBC driver files need to be added to the $AS_HOME/j2ee/home/applib directory.

    1. To install other JDBC drivers to OC4J, first obtain the .jar file containing the JDBC driver. All the external JDBC driver .jar files can be found in the directory $OWB_HOME/owb/lib/ext/. For DB2, the files needed are db2jcc.jar and db2jcc_license_cu.jar. For SQL Server the file is sqljdbc.jar. For the Sunopsis JDBC drivers, the file needed is snpsxmlo.jar.
    2. Copy the required JDBC driver file into the directory $AS_HOME/j2ee/home/applib.

    Now you have finished the Application Server configuration. For the configuration to take effect, you need to restart the Application Server.

    Task 2: Deploy the Control Center Agent to the Application Server

    Now you can deploy the Control Center Agent to the Application Server. In a web browser, navigate to the Enterprise Manager homepage (e.g. http://hostname:8889/em).

    1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Applications tab. Click Deploy to begin deploying the Control Center Agent.
    2. On the Deploy: Select Archive screen, under Archive, select "Archive is present on local host. Upload the archive to the server where Application Server Control is running." Click Browse and locate the jrt.ear file in the $OWB_HOME/owb/jrt/applications directory. Under Deployment Plan, select "Automatically create a new deployment plan." Click Next.
    3. Wait for the ear file to be uploaded to the Application Server. On the Deploy: Application Attributes screen, enter the Application Name jrt and the Context Root jrt. Leave the other attributes at their default values. Click Next.
    4. On the Deploy: Deployment Settings screen, leave all attributes at their default values and click Deploy. This takes a minute or so, and when the application is deployed successfully a confirmation message is displayed. The Control Center Agent is then started automatically. Go back to the OC4J home page and click the Applications tab to make sure the deployed application jrt shows in the applications list.

    Task 3: Optional Configuration Tasks

    The optional configuration tasks are:

    · Secure the Control Center Agent Web Service
    · Set the PATH Environment Variable

    Secure the Control Center Agent Web Service

    If you want to use JRTWebService with a secure web site, take the following steps:

    1. Create a file secure-web-site.xml in the $AS_HOME/j2ee/home/config directory; it can be obtained from the $OWB_HOME/owb/jrt/config directory. Modify "protocol" to "https" and "secure" to "true", choose a port as the secure HTTP port, and add the "ssl-config" entry to the file. Remember to use the absolute path for the key store file.
    2. Modify the file server.xml, located in the $AS_HOME/j2ee/home/config directory, and add the <web-site> element for the secure web site.
    3. Create a key store file serverkeystore.jks in the $AS_HOME/j2ee/home/config directory; it can be obtained from the $OWB_HOME/owb/jrt/config directory.

    After the three files are altered, restart the application server. Now you can access the JRTWebService over SSL at https://hostname:4443/jrt/webservice.

    Setting the PATH Environment Variable

    Sometimes system commands such as the Linux ls, sh, etc. cannot be executed successfully during script execution because they are not found in the PATH. To ensure they work normally, you can set up the PATH environment variable. Navigate to the Enterprise Manager homepage:

    1. Go to the OC4J home screen and click the Administration tab. Expand Administration Tasks, then expand Properties. Click the task icon next to Server Properties.
    2. On the Server Properties screen, scroll down to the Environment Variables section. Under Environment Variables, click Add Another Row. Enter PATH as the Name, and fill Value with the directories that contain the system commands. Click Apply.

    After you work through this article, I believe you will have developed a deeper understanding of the Control Center Agent installation process, and you can apply this knowledge to other installation plans, such as installing the Control Center Agent on a standalone OC4J.
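    For the opmn.xml edit in Task 1, a hedged sketch of the start-parameters entry in the OC4J process definition is shown below; the install paths are placeholders and the property names follow the three requirements listed in that task, so verify them against your own install:

        <!-- Hedged opmn.xml fragment (inside the OC4J process-type, start-parameters category) -->
        <data id="java-options"
              value="-server -XX:MaxPermSize=128M
                     -Dpython.path=/u01/owb/owb/lib/int/jython_lib.zip:/u01/owb/owb/lib/int/jython_owblib.jar
                     -Dkm_security_needed=true"/>

    After editing opmn.xml, a restart of the OPMN-managed OC4J instance is needed for the new options to take effect.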

    Read the article

  • ODI and OBIEE 11g Integration

    - by David Allan
Here we will see some of the connectivity options to OBIEE 11g using the JDBC driver. You’ll see, based upon some connection properties, how the physical or presentation layers can be utilized. In the integrators guide for OBIEE 11g you will find a brief statement indicating that there actually is a JDBC driver for OBIEE. In OBIEE 11g it is now possible to connect directly to the physical layer; Venkat has an informative post here on this topic. In ODI 11g the Oracle BI technology is shipped with the product along with KMs for reverse engineering and for using OBIEE models as a data source. When you install OBIEE 11g a lightweight demonstration application is preinstalled in the server; when you open this in the BI Administration tool we see the regular 3-panel view within the administration tool. To interrogate this system via JDBC (just like ODI does using the KMs) we need a couple of things: the JDBC driver from OBIEE 11g, a Java client program, and the credentials. In my Java client program I want to connect to the OBIEE system; when I connect I can interrogate what the JDBC driver presents for the metadata. The metadata projected via the JDBC connection’s DatabaseMetaData changes depending on whether the property NQ_SESSION.SELECTPHYSICAL is set when the Java client connects. Let’s use the sample app to illustrate. I have a Java client program here that will print out the tables in the DatabaseMetaData; it will also output the catalog and schema. For example, if I execute without any special JDBC properties as follows:

java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass

then I get the following returned, representing the presentation layer (the sample I used is XML, and has no schema):

Catalog              Schema  Table
Sample Sales Lite    null    Base Facts
Sample Sales Lite    null    Calculated Facts
…
Sample Targets Lite  null    Base Facts
…

Now if I execute with the only difference being the JDBC property NQ_SESSION.SELECTPHYSICAL with the value Yes, then I see a different set of values representing the physical layer in OBIEE:

java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass NQ_SESSION.SELECTPHYSICAL=Yes

The following is returned:

Catalog                Schema  Table
Sample App Lite Data   null    D01 Time Day Grain
Sample App Lite Data   null    F10 Revenue Facts (Order grain)
…
System DB (Update me)
…

If this was a database system such as Oracle, the catalog value would be the OBIEE database name and the schema would be the Oracle database schema. Other systems which have a real catalog structure, such as SQL Server, would use their catalog value. It’s this ‘Catalog’ and ‘Schema’ value that is important when integrating OBIEE with ODI. For the demonstration application in OBIEE 11g, the following illustration shows how the information from OBIEE is related via the JDBC driver through to ODI. In the XML example above, within ODI’s physical schema definition on the right, we leave the schema blank since the XML data source has no schema. When I did this at first, I left the default value that ODI places in the Schema field, which was ‘<Undefined>’ (like the image below), but this string is actually used in the RKM, so it ended up not finding any tables in this schema! Entering an empty string resolved this. Below we see a regular Oracle database example that has the database, schema, physical table structure, and how this is defined in ODI.   
Remember back to the physical versus presentation layer usage when we passed the special property; to do this in ODI, the data server has a panel for properties where you can define key/value pairs. So if you want to select physical objects from the OBIEE server, then you must set this property. An additional change in ODI 11g is the OBIEE connection pool support, which has been implemented via a ‘Connection Pool’ flex field on the Oracle BI data server. Here you set the name of the connection pool from the OBIEE system that you specifically want to use; it is used by the Oracle BI to Oracle (DBLINK) LKM, so if you are using that KM you must set this flex field. Hopefully a useful insight into some of the mechanics of how this hangs together.
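For reference, a minimal sketch of the kind of Java client used above could look like this. The class name meta_jdbc matches the invocations shown; the argument parsing and the absence of error handling are my own simplifications:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

// Prints Catalog/Schema/Table from the driver's DatabaseMetaData.
// Usage: java meta_jdbc <driverClass> <url> <user> <password> [name=value ...]
public class meta_jdbc {
    public static void main(String[] args) throws Exception {
        Class.forName(args[0]); // e.g. oracle.bi.jdbc.AnaJdbcDriver
        Properties props = new Properties();
        props.put("user", args[2]);
        props.put("password", args[3]);
        // Optional extra connection properties, e.g. NQ_SESSION.SELECTPHYSICAL=Yes
        for (int i = 4; i < args.length; i++) {
            String[] kv = args[i].split("=", 2);
            props.put(kv[0], kv[1]);
        }
        try (Connection con = DriverManager.getConnection(args[1], props)) {
            DatabaseMetaData md = con.getMetaData();
            System.out.println("Catalog\tSchema\tTable");
            try (ResultSet rs = md.getTables(null, null, "%", null)) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_CAT") + "\t"
                            + rs.getString("TABLE_SCHEM") + "\t"
                            + rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}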

    Read the article

  • Using Oracle Proxy Authentication with JPA (eclipselink-Style)

    - by olaf.heimburger
Security is a very intriguing topic. You will find it everywhere and you need to implement it everywhere. Yes, you need. Unfortunately, one can easily forget it while implementing the last mile.

The Last Mile
In a multi-tier application it is a common practice to use connection pools between the business layer and the database layer. Connection pools are quite useful to speed up database connection creation and to split the load. Another very common practice is to use a specific, often called technical, user to connect to the database. This user has authentication and authorization rules that apply to all application users. Imagine you've put every effort into defining roles for the different types of users that use your application. These roles are necessary to differentiate between normal users, premium users, and administrators (I bet you will find or already have more roles in your application). While these user roles are pretty well used within your application, once the flow of execution enters the database everything is gone: each and every user just has one role and is the same database user.

Issues? What Issues?
As long as things go well, this is not a real issue. However, things do not go well all the time. Once your application becomes famous, performance decreases in certain situations or, more importantly, current and upcoming regulations and laws require that your application must be able to apply different security measures on a per-user-role basis at every stage of your application. If you only have a bunch of users with the same name and role, you are not able to find the application usage profile that causes the performance issue, or which user has accessed data that he/she is not allowed to. Another threat to your role concept is that databases tend to be used by different applications and tools. These tools can be developer tools like SQL*Plus, SQL Developer, etc. or end user applications like BI Publisher, Oracle Forms and so on. These tools have no idea of your application's role concept and access the database the way they think is appropriate. A big oversight for your perfect role model and a big nightmare for your Chief Security Officer. Speaking of the CSO brings up another issue: password management. Once your technical user account is compromised, every user is able to do things that he/she is not expected to do from the design of your application.

Counter Measures
In the Oracle world a common counter measure is to use Virtual Private Database (VPD). This restricts the values a database user can see to the allowed minimum. However, it doesn't help in regard to a connection pool user, because this one is still not the real user.

Oracle Proxy Authentication
Another feature of the Oracle database is Proxy Authentication. First introduced with version 9i, it is a quite useful feature for nearly every situation. The main idea behind Proxy Authentication is to create a crippled database user who has only connect rights. Even if this user is compromised, the risks are well understood and fairly limited. This user can be used in every situation in which you need to connect to the database, no matter which tool or application (see above) you use. The proxy user is perfect for multi-tier connection pools. CREATE USER app_user IDENTIFIED BY abcd1234; GRANT CREATE SESSION TO app_user; But what if you need to access real data? Well, this is the primary use case, isn't it? Now is the time to bring the application's role concept into play. 
You define database roles that hold the grants for your identified user groups. Once you have these groups, you grant access through the proxy user with the application role to the specific user. CREATE ROLE app_role_a; GRANT app_role_a TO hr; ALTER USER hr GRANT CONNECT THROUGH app_user WITH ROLE app_role_a; Now hr has permission to connect to the database through the proxy user. Through the role you can restrict hr's rights to those needed for the application only. If hr connects to the database directly, all assigned roles and permissions apply.

Testing the Setup
To test the setup you can use SQL*Plus and connect to your database: $ sqlplus app_user[hr]/abcd1234

Java Persistence API
The Java Persistence API (JPA) is a fairly easy means to build applications that retrieve data from the database and put it into Java objects. You use plain old Java objects (POJOs) and mix in some Java annotations that define how the attributes of the object are used for storing data from the database into the Java object. Here is a sample for objects from the HR sample schema EMPLOYEES table. When using Java annotations you only specify what cannot be deduced from the code. If your Java class name is Employee but the table name is EMPLOYEES, you need to specify the table name, otherwise it will fail. package demo.proxy.ejb; import java.io.Serializable; import java.sql.Timestamp; import java.util.List; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.ManyToOne; import javax.persistence.NamedQueries; import javax.persistence.NamedQuery; import javax.persistence.OneToMany; import javax.persistence.Table; @Entity @NamedQueries({ @NamedQuery(name = "Employee.findAll", query = "select o from Employee o") }) @Table(name = "EMPLOYEES") public class Employee implements Serializable { @Column(name="COMMISSION_PCT") private Double commissionPct; @Column(name="DEPARTMENT_ID") private Long departmentId; @Column(nullable = false, unique = true, length = 25) private String email; @Id @Column(name="EMPLOYEE_ID", nullable = false) private Long employeeId; @Column(name="FIRST_NAME", length = 20) private String firstName; @Column(name="HIRE_DATE", nullable = false) private Timestamp hireDate; @Column(name="JOB_ID", nullable = false, length = 10) private String jobId; @Column(name="LAST_NAME", nullable = false, length = 25) private String lastName; @Column(name="PHONE_NUMBER", length = 20) private String phoneNumber; private Double salary; @ManyToOne @JoinColumn(name = "MANAGER_ID") private Employee employee; @OneToMany(mappedBy = "employee") private List employeeList; public Employee() { } public Employee(Double commissionPct, Long departmentId, String email, Long employeeId, String firstName, Timestamp hireDate, String jobId, String lastName, Employee employee, String phoneNumber, Double salary) { this.commissionPct = commissionPct; this.departmentId = departmentId; this.email = email; this.employeeId = employeeId; this.firstName = firstName; this.hireDate = hireDate; this.jobId = jobId; this.lastName = lastName; this.employee = employee; this.phoneNumber = phoneNumber; this.salary = salary; } public Double getCommissionPct() { return commissionPct; } public void setCommissionPct(Double commissionPct) { this.commissionPct = commissionPct; } public Long getDepartmentId() { return departmentId; } public void setDepartmentId(Long departmentId) { this.departmentId = departmentId; } public String getEmail() { 
return email; } public void setEmail(String email) { this.email = email; } public Long getEmployeeId() { return employeeId; } public void setEmployeeId(Long employeeId) { this.employeeId = employeeId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public Timestamp getHireDate() { return hireDate; } public void setHireDate(Timestamp hireDate) { this.hireDate = hireDate; } public String getJobId() { return jobId; } public void setJobId(String jobId) { this.jobId = jobId; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getPhoneNumber() { return phoneNumber; } public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; } public Double getSalary() { return salary; } public void setSalary(Double salary) { this.salary = salary; } public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public List getEmployeeList() { return employeeList; } public void setEmployeeList(List employeeList) { this.employeeList = employeeList; } public Employee addEmployee(Employee employee) { getEmployeeList().add(employee); employee.setEmployee(this); return employee; } public Employee removeEmployee(Employee employee) { getEmployeeList().remove(employee); employee.setEmployee(null); return employee; } } JPA could be used in standalone applications and Java EE containers. In both worlds you normally create a Facade to retrieve or store the values of the Entities to or from the database. The Facade does this via an EntityManager which will be injected by the Java EE container. Here is sample Facade Session Bean for a Java EE container. 
package demo.proxy.ejb;

import java.util.HashMap;
import java.util.List;
import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import oracle.jdbc.driver.OracleConnection;
import org.eclipse.persistence.config.EntityManagerProperties;
import org.eclipse.persistence.internal.jpa.EntityManagerImpl;

@Stateless(name = "DataFacade", mappedName = "ProxyUser-TestEJB-DataFacade")
@Remote
@Local
public class DataFacadeBean implements DataFacade, DataFacadeLocal {
    @PersistenceContext(unitName = "TestEJB")
    private EntityManager em;
    private String username;

    public Object queryByRange(String jpqlStmt, int firstResult, int maxResults) {
        // setSessionUser();
        Query query = em.createQuery(jpqlStmt);
        if (firstResult > 0) {
            query = query.setFirstResult(firstResult);
        }
        if (maxResults > 0) {
            query = query.setMaxResults(maxResults);
        }
        return query.getResultList();
    }

    public Employee persistEmployee(Employee employee) {
        // setSessionUser();
        em.persist(employee);
        return employee;
    }

    public Employee mergeEmployee(Employee employee) {
        // setSessionUser();
        return em.merge(employee);
    }

    public void removeEmployee(Employee employee) {
        // setSessionUser();
        employee = em.find(Employee.class, employee.getEmployeeId());
        em.remove(employee);
    }

    /** select o from Employee o */
    public List getEmployeeFindAll() {
        Query q = em.createNamedQuery("Employee.findAll");
        return q.getResultList();
    }

Putting Both Together
To use Proxy Authentication with JPA within a Java EE container, you have to take care of two additional requirements: use an OCI JDBC driver, and provide the user name that connects through the proxy user.

Use an OCI JDBC driver
To use the OCI JDBC driver you need to set up your JDBC data source file to use the correct JDBC URL: hr jdbc:oracle:oci8:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SID=XE))) oracle.jdbc.OracleDriver user app_user 62C32F70E98297522AD97E15439FAC0E SQL SELECT 1 FROM DUAL jdbc/hrDS Application

Additionally you need to make sure that the version of the shared libraries of the OCI driver matches the version of the JDBC driver in your Java EE container or Java application, and that they are within your PATH (on Windows) or LD_LIBRARY_PATH (on most Unix-based systems). Installing the Oracle Database Instant Client software works perfectly.

Provide the user name that connects through the proxy user
This part needs some modification of your application software and session facade.

Session Facade Changes
In the Session Facade we must ensure that every call that goes through the EntityManager is prepared correctly and uniquely assigned to this session. The second point is really important, as the EntityManager works with a connection pool and cannot guarantee that we set the proxy user on the connection that will be used for the database activities. To avoid changing every method call of the Session Facade we provide a method to set the username of the user that connects through the proxy user. This method needs to be called by the Facade client before doing anything else.

public void setUsername(String name) {
    username = name;
}

Next we provide a means to instruct the TopLink EntityManager Delegate to use Oracle Proxy Authentication. (I love small helper methods to hide the nitty-gritty details and avoid repeating myself.) 
private void setSessionUser() {
    setSessionUser(username);
}

private void setSessionUser(String user) {
    if (user != null && !user.isEmpty()) {
        EntityManagerImpl emDelegate = ((EntityManagerImpl)em.getDelegate());
        emDelegate.setProperty(EntityManagerProperties.ORACLE_PROXY_TYPE, OracleConnection.PROXYTYPE_USER_NAME);
        emDelegate.setProperty(OracleConnection.PROXY_USER_NAME, user);
        emDelegate.setProperty(EntityManagerProperties.EXCLUSIVE_CONNECTION_MODE, "Always");
    }
}

The final step is to use the EJB 3.0 AroundInvoke interceptor. This interceptor will be called around every method invocation. We therefore check whether one of the Facade methods is being called. If so, we set the user for proxy authentication and the normal method flow continues.

@AroundInvoke
public Object proxyInterceptor(InvocationContext invocationCtx) throws Exception {
    if (invocationCtx.getTarget() instanceof DataFacadeBean) {
        setSessionUser();
    }
    return invocationCtx.proceed();
}

Benefits
Using Oracle Proxy Authentication has a number of additional benefits apart from implementing the role model of your application: fine-grained access control for temporary users of the account, without compromising the original password; enabling database auditing and logging; and better identification of performance bottlenecks.

References
Effective Oracle Database 10g Security by Design, David Knox
TopLink Developer's Guide, Chapter 98
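For completeness, outside of a container the same proxy handover can be done directly on the Oracle JDBC connection. The following is a minimal standalone sketch following the app_user/hr example above; the connection URL and credentials are placeholders:

import java.sql.DriverManager;
import java.util.Properties;
import oracle.jdbc.OracleConnection;

public class ProxyDemo {
    public static void main(String[] args) throws Exception {
        // Connect as the "crippled" proxy user that only has CREATE SESSION.
        OracleConnection conn = (OracleConnection) DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "app_user", "abcd1234");

        // Switch the session to the real end user (hr) through the proxy.
        Properties props = new Properties();
        props.put(OracleConnection.PROXY_USER_NAME, "hr");
        conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, props);

        // From here on, statements run with hr's identity and roles.

        conn.close(OracleConnection.PROXY_SESSION); // end the proxy session only
        conn.close();
    }
}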

    Read the article

  • Real-world SignalR example, ditching ghetto long polling

    - by Jeff
One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client to server (or cloud, if you will) communication, would be a real supported thing now with the weight of Microsoft behind it. Love the open source flava! If you aren’t familiar with SignalR, watch this BUILD session with PM Damian Edwards and dev David Fowler. Go ahead, I’ll wait. You’ll be in a happy place within the first ten minutes. If you skip to the end, you’ll see that they plan to ship this as a real first version by the end of the year. Insert slow clap here. Writing a few lines of code to move around a box from one browser to the next is a way cool demo, but how about something real-world? When learning new things, I find it difficult to be abstract, and I like real stuff. So I thought about what was in my tool box and decided to port my crappy long-polling “there are new posts” feature of POP Forums to use SignalR. A few versions back, I added a feature where a button would light up while you were pecking out a reply if someone else made a post in the interim. It kind of saves you from that awkward moment where someone else posts some snark before you. While I was proud of the feature, I hated the implementation. When you clicked the reply button, it started polling an MVC URL asking if the last post you had matched the last one on the server, and it did it every second and a half until you either replied or the server told you there was a new post, at which point it would display that button. The code was not glam:

// in the reply setup
PopForums.replyInterval = setInterval("PopForums.pollForNewPosts(" + topicID + ")", 1500);

// called from the reply setup and the handler that fetches more posts
PopForums.pollForNewPosts = function (topicID) {
    $.ajax({
        url: PopForums.areaPath + "/Forum/IsLastPostInTopic/" + topicID,
        type: "GET",
        dataType: "text",
        data: "lastPostID=" + PopForums.currentTopicState.lastVisiblePost,
        success: function (result) {
            var lastPostLoaded = result.toLowerCase() == "true";
            if (lastPostLoaded) {
                $("#MorePostsBeforeReplyButton").css("visibility", "hidden");
            } else {
                $("#MorePostsBeforeReplyButton").css("visibility", "visible");
                clearInterval(PopForums.replyInterval);
            }
        },
        error: function () {
        }
    });
};

What’s going on here is the creation of an interval timer to keep calling the server and bugging it about new posts, and setting the visibility of a button appropriately. It looks like this if you’re monitoring requests in FireBug: Gross. The SignalR approach was to call a message broker when a reply was made, and have that broker call back to the listening clients, via a SignalR hub, to let them know about the new post. It seemed weird at first, but the server-side hub’s only method is to add the caller to a group, so new post notifications only go to callers viewing the topic where a new post was made. Beyond that, it’s important to remember that the hub is also the means of calling methods at the client end. Starting at the server side, here’s the hub:

using Microsoft.AspNet.SignalR.Hubs;

namespace PopForums.Messaging
{
    public class Topics : Hub
    {
        public void ListenTo(int topicID)
        {
            Groups.Add(Context.ConnectionId, topicID.ToString());
        }
    }
}

Have I mentioned how awesomely not complicated this is? The hub acts as the channel between the server and the client, and you’ll see how JavaScript calls the above method in a moment. 
Next, the broker class and its associated interface:

using Microsoft.AspNet.SignalR;
using Topic = PopForums.Models.Topic;

namespace PopForums.Messaging
{
    public interface IBroker
    {
        void NotifyNewPosts(Topic topic, int lastPostID);
    }

    public class Broker : IBroker
    {
        public void NotifyNewPosts(Topic topic, int lastPostID)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<Topics>();
            context.Clients.Group(topic.TopicID.ToString()).notifyNewPosts(lastPostID);
        }
    }
}

The NotifyNewPosts method uses the static GlobalHost.ConnectionManager.GetHubContext<Topics>() method to get a reference to the hub, and then makes a call to clients in the group matched by the topic ID. It’s calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new’d up by dependency injection, so it took literally one line of code in the reply action method to get things moving:

_broker.NotifyNewPosts(topic, post.PostID);

The JavaScript side of things wasn’t much harder. When you click the reply button (or quote button), the reply window opens up and fires up a connection to the hub:

var hub = $.connection.topics;
hub.client.notifyNewPosts = function (lastPostID) {
    PopForums.setReplyMorePosts(lastPostID);
};
$.connection.hub.start().done(function () {
    hub.server.listenTo(topicID);
});

The important part to look at here is the creation of the notifyNewPosts function. That’s the method that is called from the server in the Broker class above. Conversely, once the connection is done, the script calls the listenTo method on the server, letting it know that this particular connection is listening for new posts on this specific topic ID. This whole experiment enables a lot of ideas that would make the forum more Facebook-like, letting you know when stuff is going on around you.

    Read the article

  • WebLogic JDBC Use of Oracle Wallet for SSL

    - by Steve Felts
Introduction
Secure Sockets Layer (SSL) can be used to secure the connection between the middle tier “client”, WebLogic Server (WLS) in this case, and the Oracle database server.  Data between WLS and the database can be encrypted.  The server can be authenticated, so you have proof that the database can be trusted, by validating a certificate from the server.  The client can be authenticated, so that the database only accepts connections from clients that it trusts. Similar to the discussion in an earlier article about using the Oracle wallet for database credentials, the Oracle wallet can also be used with SSL to store the keys and certificates.  By using it correctly, clear text passwords can be eliminated from the JDBC configuration, and client/server configuration can be simplified by sharing the wallet across multiple datasources. There is a very good Oracle Technical White Paper on using SSL with the Oracle thin driver at http://www.oracle.com/technetwork/database/enterprise-edition/wp-oracle-jdbc-thin-ssl-130128.pdf [LINK1].  The link http://www.oracle.com/technetwork/middleware/weblogic/index-087556.html [LINK2] describes how to use WebLogic Server with Oracle JDBC Driver SSL. The information in this article is a guide to the steps that need to be taken for the variety of available options; use the links above for details. SSL from the driver to the database server is basically turned on by specifying a protocol of “tcps” in the URL.  However, there is a fair amount of setup needed.  Also remember that there is a performance overhead.

Creating the wallets
The common use cases are 1. “data encryption and server-only authentication”, requiring just a trust store, or 2. “data encryption and authentication of both tiers” (client and server), requiring a trust store and a key store. It is recommended to use the auto-login wallet type so that clear text passwords are not needed in the datasource configuration to open the wallet.  The store type for an auto-login wallet is “SSO” (Single Sign On), not “JKS” or “PKCS12” as in [LINK2].  The file name is “cwallet.sso”. Wallets are created using the orapki tool.  They need to be created based on the usage (encryption and/or authentication).  This is discussed in detail in [LINK1] in Appendix B or in the Advanced Security Administrator’s Guide of the Database documentation.

Database Server Configuration
It is necessary to update the sqlnet.ora and listener.ora files with the directory location of the wallet using WALLET_LOCATION.  These files also indicate whether or not SSL_CLIENT_AUTHENTICATION is being used (true or false). The Oracle Listener must also be configured to use the TCPS protocol.  The recommended port is 2484.
LISTENER = (ADDRESS_LIST= (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484)))

WebLogic Server Classpath
The WebLogic Server CLASSPATH must include three additional security jar files:
$MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar
$MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar
$MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar
One way to do this is to add them to the PRE_CLASSPATH environment variable for use with the standard WebLogic scripts.

Setting the Oracle Security Provider
It’s necessary to enable the Oracle PKI provider on the client side.  
This can either be done statically, by updating the java.security file under the JRE, or dynamically, by setting it in a WLS startup class using java.security.Security.insertProviderAt(new oracle.security.pki.OraclePKIProvider(), 3); See the full example of the startup class in [LINK2].

Datasource Configuration
When creating a WLS datasource, set the PROTOCOL in the URL to tcps as in the following:
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=myservice)))
For encryption and server authentication, use the datasource connection properties:
- javax.net.ssl.trustStore=location of wallet file on the client
- javax.net.ssl.trustStoreType=”SSO”
For client authentication, use the datasource connection properties:
- javax.net.ssl.keyStore=location of wallet file on the client
- javax.net.ssl.keyStoreType=”SSO”
Note that the driver connection properties for the wallet require a file name, not a directory name.

Active GridLink ONS over SSL
For completeness, there is another SSL usage for WLS datasources.  The communication with the Oracle Notification Service (ONS) for load balancing information and node up/down events can also use SSL. Create an auto-login wallet and use the wallet on both the client and the server.  The following is a sample sequence to create a test wallet for use with ONS:
orapki wallet create -wallet ons -auto_login -pwd ONS_Wallet
orapki wallet add -wallet ons -dn "CN=ons_test,C=US" -keysize 1024 -self_signed -validity 9999 -pwd ONS_Wallet
orapki wallet export -wallet ons -dn "CN=ons_test,C=US" -cert ons/cert.txt -pwd ONS_Wallet
On the database server side, it’s necessary to define the wallet file directory in the file $CRS_HOME/opmn/conf/ons.config and run onsctl stop/start. When configuring an Active GridLink datasource, the connection to the ONS must be defined.  In addition to the host and port, the wallet file directory must be specified.  By not giving a password, an SSO wallet is assumed.

Summary
To use SSL with the Oracle thin driver without any clear text passwords, use an SSO Oracle Wallet.  SSL support in the Oracle thin driver is available starting in 10g Release 2.
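As a minimal sketch, the dynamic variant amounts to a WLS startup class along these lines; it simply wraps the one-liner quoted above ([LINK2] has the complete example):

import java.security.Security;
import oracle.security.pki.OraclePKIProvider;

// Registers the Oracle PKI provider at position 3 so the JVM can read
// SSO wallets. Configure this class as a WebLogic startup class.
public class PKIProviderStartup {
    public static void main(String[] args) {
        Security.insertProviderAt(new OraclePKIProvider(), 3);
    }
}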

    Read the article

  • Airline mess - what a journey

    - by Mike Dietrich
What a day, what a journey ... Flew this noon from Munich to Zuerich to catch my onward flight to San Francisco with Swiss. And the day started very well, as Lufthansa messed up the connecting flight by 42 minutes for a 35 minute flight. And as I was obviously the only passenger connecting to San Francisco, nobody picked me up at the airplane to bring me directly to my connection, as Swiss did for the 8 passengers connecting to Miami. So I missed my flight. What a start - and many thanks to Lufthansa. I was not the only one missing a connection, as Lufthansa/Swiss had canceled the flight before due to "technical problems". In Zuerich, Swiss rebooked me via Frankfurt with Lufthansa to board a United Airlines flight to San Francisco. "Ouch" I thought. I had my share of experience with United already, as they messed up my luggage on the way to San Francisco some years ago and it took them five (!!!) days to fly my bag over and deliver it. But actually it was the only option today. So I said "Yes". A big mistake, as I learned later on. The Frankfurt flight was delayed as well, "due to a late incoming aircraft". But there was plenty of time. And I went to the Swiss counter at the gate and had them check whether my baggage was on that flight to Frankfurt. They said "Yes". Boarding the plane with a delay of 45 minutes (the typical Lufthansa delay these days), I spotted my Rimowa trolley right next to the plane on the airfield. So I was sure that it would be sent to Frankfurt. In Frankfurt I went to the United counter once it opened - I had to go through the passport check they do for US flights as well - and they said "Yes, your luggage is with us". Well ... Arriving in San Francisco with a delay of just a few minutes and a very fast immigration procedure, I saw the first bags with Priority tags getting pushed to the baggage claim - but mine was not there. I waited ... and waited ... and waited. Well, thanks United, you did it again!!! I have flown United Airlines twice in the past years - and in both cases they messed up my luggage on the way to San Francisco. How lovely is that ... Now the real fun started again, as the lady at the "Lost and Found" counter for luggage spotted my luggage in her system in Zuerich - and told me it was supposed to be sent with LH1191 to Frankfurt on Sept 27. But that was yesterday in Europe - it's already Sept 28 - and I saw my luggage in front of the airplane. So I'd suppose it's in Frankfurt already. But what could she do? Nothing but do the awful paperwork. And "No Mr Dietrich, we don't call international numbers". Thank you, United. Next time I'll try to get a contract for a US land line in advance. They can't even tell you which plane will bring your luggage. It may be tomorrow, with a UA flight arriving around 4pm in SFO. I'm looking forward to some hours in the wonderful United Airlines call center waiting line. Last time I spent 60-90 minutes every day until I got my luggage. If it takes that long again, OOW will be over by then. I love airline travel - and especially with United Airlines. And by the way ... they gave us these nice fancy packages during the flight:  That looks good - what's in that box??? Yes, really ... a bag of potato chips. Pure fat - very healthy.  I doubt that I'll ever fly United Airlines again!!!

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles?  A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications. So the first thing to define, of course, is the word “better”, in the context of application development. Looking at a few definitions online, better means “superior quality”. As it relates to this discussion then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal. Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don’t mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing of the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables. Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it’s not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won’t scale even if you place the service in the cloud, because it isn’t using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn’t designed to scale, for the purpose of this discussion]. 
The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn’t tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don’t need to scale, then you don’t need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different than a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… … the cloud forces better design practices. My 2 cents.
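To make the contrast concrete, here is a minimal sketch of the loosely coupled style using the JDK's built-in HTTP server; the endpoint and payload are invented for illustration. Each request carries everything it needs and no session is pinned to the server, so any number of identical instances can sit behind a load balancer:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class StatelessService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // No per-client session state is kept: each request is handled,
        // answered, and its resources are released immediately.
        server.createContext("/quote", exchange -> {
            byte[] body = "{\"price\": 42}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
            // nothing about this client survives the request
        });
        server.setExecutor(null); // use the default executor
        server.start();
    }
}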

    Read the article

  • openvpn WARNING: No server certificate verification method has been enabled

    - by tmedtcom
I tried to install OpenVPN on Debian Squeeze (server) and connect from my Fedora 17 machine (client). Here is my configuration.

Server configuration (cat server.conf):
# TCP server
proto tcp
port 1194
dev tun
# Keys and certificates
ca /etc/openvpn/easy-rsa/keys/ca.crt
cert /etc/openvpn/easy-rsa/keys/server.crt
key /etc/openvpn/easy-rsa/keys/server.key
dh /etc/openvpn/easy-rsa/keys/dh1024.pem
# Network
# Virtual address range of the VPN network
server 192.170.70.0 255.255.255.0
# This line pushes the route to the server's network onto the client
push "route 192.168.1.0 255.255.255.0"
# Create a route from the server to the tun interface.
#route 192.170.70.0 255.255.255.0
# Security
keepalive 10 120
# Data encryption cipher
cipher AES-128-CBC
# Enable compression
comp-lzo
# Maximum number of clients allowed
max-clients 10
# No particular user and group for the VPN
user nobody
group nogroup
# Make the connection persistent
persist-key
persist-tun
# OpenVPN status log
status /var/log/openvpn-status.log
# Logs
log /var/log/openvpn.log
log-append /var/log/openvpn.log
# Verbosity level
verb 5

cat client.conf:
# Client
client
dev tun
proto tcp-client
remote <my server wan IP> 1194
resolv-retry infinite
cipher AES-128-CBC
# Keys
ca ca.crt
cert client.crt
key client.key
# Security
nobind
persist-key
persist-tun
comp-lzo
verb 3

Messages from the client host (Fedora 17) in the log file /var/log/messages:
Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> Starting VPN service 'openvpn'...
Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN service 'openvpn' started (org.freedesktop.NetworkManager.openvpn), PID 7470
Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN service 'openvpn' appeared; activating connections
Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN plugin state changed: starting (3)
Dec 6 21:56:01 GlobalTIC NetworkManager[691]: <info> VPN connection 'Connexion VPN 1' (Connect) reply received.
Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: OpenVPN 2.2.2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] built on Sep 5 2012
Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. 
Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: WARNING: file '/home/login/client/client.key' is group or others accessible
Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: UDPv4 link local: [undef]
Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: UDPv4 link remote: <my server wan IP>:1194
Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Dec 6 21:56:03 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Dec 6 21:56:07 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Dec 6 21:56:15 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Dec 6 21:56:31 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Dec 6 21:56:41 GlobalTIC NetworkManager[691]: <warn> VPN connection 'Connexion VPN 1' (IP Conf

ifconfig on the server host (Debian):
eth0      Link encap:Ethernet  HWaddr 08:00:27:16:21:ac
          inet addr:192.168.1.6  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe16:21ac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9059 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5660 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:919427 (897.8 KiB)  TX bytes:1273891 (1.2 MiB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:192.170.70.1  P-t-P:192.170.70.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ifconfig on the client host (Fedora 17):
as0t0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 5.5.0.1  netmask 255.255.252.0  destination 5.5.0.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 200  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 321 (321.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

as0t1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 5.5.4.1  netmask 255.255.252.0  destination 5.5.4.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 200  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 321 (321.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

as0t2: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 5.5.8.1  netmask 255.255.252.0  destination 5.5.8.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 200  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 321 (321.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

as0t3: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 5.5.12.1  netmask 255.255.252.0  destination 5.5.12.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 200  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 321 (321.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

p255p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.2  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::21d:baff:fe20:b7e6  prefixlen 64  scopeid 0x20<link>
        ether 00:1d:ba:20:b7:e6  txqueuelen 1000  (Ethernet)
        RX packets 4842070  bytes 3579798184 (3.3 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3996158  bytes 2436442882 (2.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 16

p255p1 is the label for the eth0 interface. And on the server:
root@hoteserver:/etc/openvpn# tree .
.
|-- client
|   |-- ca.crt
|   |-- client.conf
|   |-- client.crt
|   |-- client.csr
|   |-- client.key
|   `-- client.ovpn
|-- easy-rsa
|   |-- build-ca
|   |-- build-dh
|   |-- build-inter
|   |-- build-key
|   |-- build-key-pass
|   |-- build-key-pkcs12
|   |-- build-key-server
|   |-- build-req
|   |-- build-req-pass
|   |-- clean-all
|   |-- inherit-inter
|   |-- keys
|   |   |-- 01.pem
|   |   |-- 02.pem
|   |   |-- ca.crt
|   |   |-- ca.key
|   |   |-- client.crt
|   |   |-- client.csr
|   |   |-- client.key
|   |   |-- dh1024.pem
|   |   |-- index.txt
|   |   |-- index.txt.attr
|   |   |-- index.txt.attr.old
|   |   |-- index.txt.old
|   |   |-- serial
|   |   |-- serial.old
|   |   |-- server.crt
|   |   |-- server.csr
|   |   `-- server.key
|   |-- list-crl
|   |-- Makefile
|   |-- openssl-0.9.6.cnf.gz
|   |-- openssl.cnf
|   |-- pkitool
|   |-- README.gz
|   |-- revoke-full
|   |-- sign-req
|   |-- vars
|   `-- whichopensslcnf
|-- openvpn.log
|-- openvpn-status.log
|-- server.conf
`-- update-resolv-conf

On the client:
[login@hoteclient openvpn]$ tree .
.
|-- easy-rsa
|   |-- 1.0
|   |   |-- build-ca
|   |   |-- build-dh
|   |   |-- build-inter
|   |   |-- build-key
|   |   |-- build-key-pass
|   |   |-- build-key-pkcs12
|   |   |-- build-key-server
|   |   |-- build-req
|   |   |-- build-req-pass
|   |   |-- clean-all
|   |   |-- list-crl
|   |   |-- make-crl
|   |   |-- openssl.cnf
|   |   |-- README
|   |   |-- revoke-crt
|   |   |-- revoke-full
|   |   |-- sign-req
|   |   `-- vars
|   `-- 2.0
|       |-- build-ca
|       |-- build-dh
|       |-- build-inter
|       |-- build-key
|       |-- build-key-pass
|       |-- build-key-pkcs12
|       |-- build-key-server
|       |-- build-req
|       |-- build-req-pass
|       |-- clean-all
|       |-- inherit-inter
|       |-- keys [error opening dir]
|       |-- list-crl
|       |-- Makefile
|       |-- openssl-0.9.6.cnf
|       |-- openssl-0.9.8.cnf
|       |-- openssl-1.0.0.cnf
|       |-- pkitool
|       |-- README
|       |-- revoke-full
|       |-- sign-req
|       |-- vars
|       `-- whichopensslcnf
|-- keys -> ./easy-rsa/2.0/keys/
`-- server.conf

Is the source of the problem the cipher AES-128-CBC, the proto (tcp-client vs. UDP), the interface p255p1 on Fedora 17, or that the authentication file ta.key is not found?
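For reference, the two symptoms in the log (the MITM warning, and the "UDPv4 ... Connection refused" lines against a TCP-only server) correspond to client-side directives like the following; this is only a sketch of what the NetworkManager-managed connection would also need, not a verified fix for this setup:

# client side: match the server's protocol and verify its certificate
proto tcp-client
remote <my server wan IP> 1194
# check the server certificate's (extended) key usage; silences the MITM warning
remote-cert-tls server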

    Read the article

  • Openvpn plugin openvpn-auth-ldap does not bind to Active Directory

    - by Selivanov Pavel
I'm trying to configure OpenVPN with the openvpn-auth-ldap plugin to authorize users via Active Directory LDAP. When I use the same server config without the plugin option, and add a client config with a generated client key and cert, the connection is successful, so the problem is in the plugin.

server.conf:
plugin /usr/lib/openvpn/openvpn-auth-ldap.so "/etc/openvpn-test/openvpn-auth-ldap.conf"
port 1194
proto tcp
dev tun
keepalive 10 60
topology subnet
server 10.0.2.0 255.255.255.0
tls-server
ca ca.crt
dh dh1024.pem
cert server.crt
key server.key
#crl-verify crl.pem
persist-key
persist-tun
user nobody
group nogroup
verb 3
mute 20

openvpn-auth-ldap.conf:
<LDAP>
    URL ldap://dc1.domain:389
    TLSEnable no
    BindDN cn=bot_auth,cn=Users,dc=domain
    Password bot_auth
    Timeout 15
    FollowReferrals yes
</LDAP>
<Authorization>
    BaseDN "cn=Users,dc=domain"
    SearchFilter "(sAMAccountName=%u)"
    RequireGroup false
    # <Group>
    # BaseDN "ou=groups,dc=mycompany,dc=local"
    # SearchFilter "(|(cn=developers)(cn=artists))"
    # MemberAttribute uniqueMember
    # </Group>
</Authorization>

The top-level domain in AD is used for historical reasons. An analogous configuration works for Apache 2.2 with mod_authnz_ldap. The user and password are correct.

client.conf:
remote server_name
port 1194
proto tcp
client
pull
remote-cert-tls server
dev tun
resolv-retry infinite
nobind
ca ca.crt
; with keys - works fine
#cert test.crt
#key test.key
; without keys - by password
auth-user-pass
persist-tun
verb 3
mute 20

In the server log there is the string PLUGIN_INIT: POST /usr/lib/openvpn/openvpn-auth-ldap.so '[/usr/lib/openvpn/openvpn-auth-ldap.so] [/etc/openvpn-test/openvpn-auth-ldap.conf]', which indicates that the plugin failed. I can telnet to dc1.domain:389, so this is not a network/firewall problem. Later the server says "TLS Error: TLS object -> incoming plaintext read error / TLS handshake failed" - without the plugin it tries to do the usual key authentication. 
server log:
Tue Nov 22 03:06:20 2011 OpenVPN 2.1.3 i486-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Oct 21 2010
Tue Nov 22 03:06:20 2011 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
Tue Nov 22 03:06:20 2011 PLUGIN_INIT: POST /usr/lib/openvpn/openvpn-auth-ldap.so '[/usr/lib/openvpn/openvpn-auth-ldap.so] [/etc/openvpn-test/openvpn-auth-ldap.conf]' intercepted=PLUGIN_AUTH_USER_PASS_VERIFY|PLUGIN_CLIENT_CONNECT|PLUGIN_CLIENT_DISCONNECT
Tue Nov 22 03:06:20 2011 Diffie-Hellman initialized with 1024 bit key
Tue Nov 22 03:06:20 2011 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
Tue Nov 22 03:06:20 2011 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
Tue Nov 22 03:06:20 2011 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue Nov 22 03:06:20 2011 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue Nov 22 03:06:20 2011 TLS-Auth MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
Tue Nov 22 03:06:20 2011 Socket Buffers: R=[87380->131072] S=[16384->131072]
Tue Nov 22 03:06:20 2011 TUN/TAP device tun1 opened
Tue Nov 22 03:06:20 2011 TUN/TAP TX queue length set to 100
Tue Nov 22 03:06:20 2011 /sbin/ifconfig tun1 10.0.2.1 netmask 255.255.255.0 mtu 1500 broadcast 10.0.2.255
Tue Nov 22 03:06:20 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
Tue Nov 22 03:06:20 2011 GID set to nogroup
Tue Nov 22 03:06:20 2011 UID set to nobody
Tue Nov 22 03:06:20 2011 Listening for incoming TCP connection on [undef]
Tue Nov 22 03:06:20 2011 TCPv4_SERVER link local (bound): [undef]
Tue Nov 22 03:06:20 2011 TCPv4_SERVER link remote: [undef]
Tue Nov 22 03:06:20 2011 MULTI: multi_init called, r=256 v=256
Tue Nov 22 03:06:20 2011 IFCONFIG POOL: base=10.0.2.2 size=252
Tue Nov 22 03:06:20 2011 MULTI: TCP INIT maxclients=1024 maxevents=1028
Tue Nov 22 03:06:20 2011 Initialization Sequence Completed
Tue Nov 22 03:07:10 2011 MULTI: multi_create_instance called
Tue Nov 22 03:07:10 2011 Re-using SSL/TLS context
Tue Nov 22 03:07:10 2011 Control Channel MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
Tue Nov 22 03:07:10 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
Tue Nov 22 03:07:10 2011 Local Options hash (VER=V4): 'c413e92e'
Tue Nov 22 03:07:10 2011 Expected Remote Options hash (VER=V4): 'd8421bb0'
Tue Nov 22 03:07:10 2011 TCP connection established with [AF_INET]10.0.0.9:47808
Tue Nov 22 03:07:10 2011 TCPv4_SERVER link local: [undef]
Tue Nov 22 03:07:10 2011 TCPv4_SERVER link remote: [AF_INET]10.0.0.9:47808
Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS: Initial packet from [AF_INET]10.0.0.9:47808, sid=a2cd4052 84b47108
Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS_ERROR: BIO read tls_read_plaintext error: error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer did not return a certificate
Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS Error: TLS object -> incoming plaintext read error
Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS Error: TLS handshake failed
Tue Nov 22 03:07:11 2011 10.0.0.9:47808 Fatal TLS error (check_tls_errors_co), restarting
Tue Nov 22 03:07:11 2011 10.0.0.9:47808 SIGUSR1[soft,tls-error] received, client-instance restarting
Tue Nov 22 03:07:11 2011 TCP/UDP: Closing socket

client log:
Tue Nov 22 03:06:18 2011 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Oct 22 2010
Enter Auth Username:user
Enter 
Auth Password:
Tue Nov 22 03:06:25 2011 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
Tue Nov 22 03:06:25 2011 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
Tue Nov 22 03:06:25 2011 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue Nov 22 03:06:25 2011 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue Nov 22 03:06:25 2011 Control Channel MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
Tue Nov 22 03:06:25 2011 Socket Buffers: R=[87380->131072] S=[16384->131072]
Tue Nov 22 03:06:25 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
Tue Nov 22 03:06:25 2011 Local Options hash (VER=V4): 'd8421bb0'
Tue Nov 22 03:06:25 2011 Expected Remote Options hash (VER=V4): 'c413e92e'
Tue Nov 22 03:06:25 2011 Attempting to establish TCP connection with [AF_INET]10.0.0.2:1194 [nonblock]
Tue Nov 22 03:06:26 2011 TCP connection established with [AF_INET]10.0.0.2:1194
Tue Nov 22 03:06:26 2011 TCPv4_CLIENT link local: [undef]
Tue Nov 22 03:06:26 2011 TCPv4_CLIENT link remote: [AF_INET]10.0.0.2:1194
Tue Nov 22 03:06:26 2011 TLS: Initial packet from [AF_INET]10.0.0.2:1194, sid=7a3c2a0f bd35bca7
Tue Nov 22 03:06:26 2011 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Tue Nov 22 03:06:26 2011 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funston/CN=Fort-Funston_CA/[email protected]
Tue Nov 22 03:06:26 2011 Validating certificate key usage
Tue Nov 22 03:06:26 2011 ++ Certificate has key usage 00a0, expects 00a0
Tue Nov 22 03:06:26 2011 VERIFY KU OK
Tue Nov 22 03:06:26 2011 Validating certificate extended key usage
Tue Nov 22 03:06:26 2011 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
Tue Nov 22 03:06:26 2011 VERIFY EKU OK
Tue Nov 22 03:06:26 2011 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funston/CN=server/[email protected]
Tue Nov 22 03:06:26 2011 Connection reset, restarting [0]
Tue Nov 22 03:06:26 2011 TCP/UDP: Closing socket
Tue Nov 22 03:06:26 2011 SIGUSR1[soft,connection-reset] received, process restarting
Tue Nov 22 03:06:26 2011 Restart pause, 5 second(s)
^CTue Nov 22 03:06:27 2011 SIGINT[hard,init_instance] received, process exiting

Does anybody know how to get openvpn-auth-ldap working?
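For reference, the server's "peer did not return a certificate" error matches the client connecting without a cert/key pair. When clients are meant to authenticate with username/password only, an OpenVPN 2.1 server config typically also carries directives like the following; this is a sketch, not a verified fix for this setup:

# do not require a client certificate; rely on the auth plugin instead
client-cert-not-required
# use the LDAP-verified username as the client's identity
username-as-common-name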

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images, and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount one as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (the end user’s machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I have done to upload large files in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.

Traditional Upload, Works with Limitations
The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

@using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    <input type="file" name="file" />
    <input type="submit" value="upload" />
}

And then in the backend controller, we retrieve the whole content of this file and upload it into the blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and then commit. The code has been well blogged in the community. 
[HttpPost]
public ActionResult About(HttpPostedFileBase file)
{
    var container = _client.GetContainerReference("test");
    container.CreateIfNotExists();
    var blob = container.GetBlockBlobReference(file.FileName);
    var blockDataList = new Dictionary<string, byte[]>();
    using (var stream = file.InputStream)
    {
        var blockSizeInKB = 1024;
        var offset = 0;
        var index = 0;
        while (offset < stream.Length)
        {
            var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
            var blockData = new byte[readLength];
            offset += stream.Read(blockData, 0, readLength);
            blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

            index++;
        }
    }

    Parallel.ForEach(blockDataList, (bi) =>
    {
        blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
    });
    blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

    return RedirectToAction("About");
}

This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, let's say a 6GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limitation on the request length; the maximum request length is defined in the web.config file, and it must be less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate a connection that exceeds its timeout period; from my tests the timeout looks like 2-3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements.

Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.
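For reference, the two limits just mentioned live in web.config. The snippet below is a hedged sketch with illustrative values, not part of the original post: maxRequestLength is enforced by ASP.NET and measured in kilobytes, while maxAllowedContentLength is enforced by IIS request filtering and measured in bytes (a 32-bit unsigned value, hence the roughly 4GB ceiling mentioned above).

<!-- web.config (excerpt) -- values are illustrative assumptions -->
<system.web>
  <!-- ASP.NET limit, in KB; here roughly 2GB -->
  <httpRuntime maxRequestLength="2097151" executionTimeout="3600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <!-- IIS limit, in bytes; unsigned 32-bit, so just under 4GB -->
      <requestLimits maxAllowedContentLength="4294967295" />
    </requestFiltering>
  </security>
</system.webServer>

Raising these limits alone does not help with the load balancer timeout, which is why the chunked approach below is still necessary.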
Upload in Chunks through HTML5 and JavaScript

In order to break through the limitations mentioned above we will upload the large file in chunks. This gives us some benefits, such as:
- No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
- No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of the file from byte 512 up to (but not including) byte 768, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload.

We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we wanted to upload a 10MB file with 512KB chunks, we could read it as twenty 512KB blobs by using File.slice in a loop.

Assume we have a web page as below. The user can select a file, an input box specifies the block size in KB, and a button starts the upload.

<div>
    <input type="file" id="upload_files" name="files[]" /><br />
    Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
    <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
</div>

Then we can have a JavaScript function that uploads the file in chunks when the user clicks the button.

<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
        });
    });
</script>

Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined" the condition result will be "false", which means your browser doesn't support these features and it's time to get your browser updated. FormData is another new feature we are going to use: it generates a temporary form for us, and we will use it to create a form containing the chunk and its associated metadata when invoking the service through ajax.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    if (window.File && window.Blob && window.FormData) {
        alert("Your browser is awesome, let's rock!");
    }
    else {
        alert("Oh man, please update to a modern browser before trying this cool stuff out.");
        return;
    }
});

Each browser supports these interfaces in its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

var files = $("#upload_files")[0].files;
for (var i = 0; i < files.length; i++) {
    var file = files[i];
    var fileSize = file.size;
    var fileName = file.name;
}

Next, we calculate the start and end byte index for each chunk based on the block size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.
$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...

    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;

        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        var blockSizeInKB = $("#block_size").val();
        var blockSize = blockSizeInKB * 1024;
        var blocks = [];
        var offset = 0;
        var index = 0;
        var list = "";
        while (offset < fileSize) {
            var start = offset;
            var end = Math.min(offset + blockSize, fileSize);

            blocks.push({
                name: fileName,
                index: index,
                start: start,
                end: end
            });
            list += index + ",";

            offset = end;
            index++;
        }
    }
});

Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; on the server side, when a chunk is received it will be uploaded as a block into Blob Storage, and finally all blocks will be committed with the index list through BlockBlob.PutBlockList. But all of these invocations are ajax calls, which are asynchronous, so we need a JavaScript library to help us coordinate the asynchronous operations; it is named "async.js". You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We put all the procedures we want to execute into a function array, and pass it to the proper function defined in async.js to let it control the execution sequence for us, in series or in parallel. Hence we define an array and push the function for each chunk upload into this array.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...

    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...

        // define the function array and push all chunk upload operations into this array
        blocks.forEach(function (block) {
            putBlocks.push(function (callback) {
            });
        });
    }
});

As you can see below, I use the File.slice method to read each chunk based on the start and end byte index we calculated previously, construct a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData, and then post this form to the backend server through jQuery.ajax. This is the key part of our solution.
$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        blocks.forEach(function (block) {
            putBlocks.push(function (callback) {
                // load a blob based on the start and end index of each chunk
                var blob = file.slice(block.start, block.end);
                // put the file name, index and blob into a temporary form
                var fd = new FormData();
                fd.append("name", block.name);
                fd.append("index", block.index);
                fd.append("file", blob);
                // post the form to the backend service (asp.net mvc controller action)
                $.ajax({
                    url: "/Home/UploadInFormData",
                    data: fd,
                    processData: false,
                    contentType: "multipart/form-data",
                    type: "POST",
                    success: function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        callback(null, block.index);
                    }
                });
            });
        });
    }
});

Then we invoke these functions one by one using async.js, and once all of them have executed successfully we invoke another ajax call to the backend service to commit all these chunks (blocks) as one blob in Windows Azure Storage.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions one by one
        // then invoke the commit ajax call to put blocks into a blob in azure storage
        async.series(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

That's all on the client side. The outline of our logic is:
- Calculate the start and end byte index for each chunk based on the block size.
- Define the functions that read each chunk from the file and upload its content to the backend service through ajax.
- Execute the functions defined in the previous step with "async.js".
- Finally, commit the chunks by invoking the backend service, which puts them together as one blob in Windows Azure Storage.
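One detail the ajax call above glosses over is failure handling: if a chunk request fails at the network level, the success callback never fires, the async.js callback is never invoked, and the whole series stalls. As a hedged sketch only — the helper name retryUpload and the retry count are mine, not from the original post — each chunk function could be wrapped with an error handler and a simple retry, which is also the natural hook for the pause-resume feature mentioned earlier:

// hypothetical helper: retries a single chunk upload up to maxRetries times
function retryUpload(block, file, maxRetries, callback) {
    var attempt = function (retriesLeft) {
        // rebuild the form on every attempt
        var fd = new FormData();
        fd.append("name", block.name);
        fd.append("index", block.index);
        fd.append("file", file.slice(block.start, block.end));
        $.ajax({
            url: "/Home/UploadInFormData",     // same endpoint as above
            data: fd,
            processData: false,
            contentType: "multipart/form-data", // as in the original post; many jQuery setups
                                                // need contentType: false here so the multipart
                                                // boundary is set automatically
            type: "POST",
            success: function (result) {
                if (result.success) {
                    callback(null, block.index);    // tell async.js this chunk is done
                } else if (retriesLeft > 0) {
                    attempt(retriesLeft - 1);       // server reported an error: retry
                } else {
                    callback(result.error);         // give up, abort the series
                }
            },
            error: function () {
                if (retriesLeft > 0) {
                    attempt(retriesLeft - 1);       // network error: retry
                } else {
                    callback("chunk " + block.index + " failed after retries");
                }
            }
        });
    };
    attempt(maxRetries);
}

// usage inside the loop shown earlier:
// putBlocks.push(function (callback) { retryUpload(block, file, 3, callback); });

Passing a non-null first argument to the async.js callback aborts the remaining functions, so the commit call will not run against an incomplete block list.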
Save Chunks as Blocks into Blob Storage

Above, we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service: it receives the chunks, uploads them into Windows Azure Blob Storage as blocks, and finally commits them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Home controller which only accepts HTTP POST requests.

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Then I retrieve the file name, index and chunk content from the Request.Form object, which was passed from the client side. After that I use the Windows Azure SDK to create a blob container (in this case we will use the container named "test") and a blob reference with the blob name (the same as the file name), and upload the chunk as a block of this blob with the index. In Blob Storage each block must have an ID associated with it, so that finally we can put all blocks together as one blob by specifying their block ID list. (Note that all block IDs within a blob must be Base64 strings of the same length, which is why the code encodes the integer index into a fixed four-byte value before converting it to Base64.)

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var index = int.Parse(Request.Form["index"]);
        var file = Request.Files[0];
        var id = Convert.ToBase64String(BitConverter.GetBytes(index));

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlock(id, file.InputStream, null);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieve the blob name from the Request.Form, as well as the chunk ID list (which is the block ID list) in a comma-separated string format. I split it into a list and then invoke the BlockBlob.PutBlockList method. After that our blob appears in the container, ready to be downloaded.

[HttpPost]
public JsonResult Commit()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var list = Request.Form["list"];
        var ids = list
            .Split(',')
            .Where(id => !string.IsNullOrWhiteSpace(id))
            .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
            .ToArray();

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlockList(ids);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Now we have finished all the code we need. Below is the full client-side JavaScript code.
<script type="text/javascript" src="~/Scripts/async.js"></script>
<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                return;
            }

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }

                // define the function array and push all chunk upload operations into this array
                var putBlocks = [];
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });

                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });
    });
</script>

And below is the full ASP.NET MVC controller code.
public class HomeController : Controller
{
    private CloudStorageAccount _account;
    private CloudBlobClient _client;

    public HomeController()
        : base()
    {
        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _client = _account.CreateCloudBlobClient();
    }

    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

        return View();
    }

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }
}

If we select a file in the browser, we can see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

Optimized by Parallel Upload

In the previous example we just uploaded our file in chunks. This solved the ASP.NET request size limitation as well as the Windows Azure load balancer timeout, but it might introduce a performance problem, since we upload the chunks in sequence. In order to improve upload performance we can modify our client-side code a bit to invoke the upload operations in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invoked the service to upload chunks used "async.series", which means all functions are executed in sequence. Now we will change this code to "async.parallel", which invokes all functions in parallel.
$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions in parallel
        // then invoke the commit ajax call to put blocks into a blob in azure storage
        async.parallel(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

In this way all chunks are uploaded to the server side at the same time to maximize the bandwidth usage. This works well if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions in parallel, with at most 4 in flight
        // then invoke the commit ajax call to put blocks into a blob in azure storage
        async.parallelLimit(putBlocks, 4, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

Summary

In this post we discussed how to upload files in chunks to a backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
- File.slice: read part of a file by specifying the start and end byte index.
- Blob: a file-like interface which contains part of the file content.
- FormData: a temporary form element through which we can pass a chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, while unlimited parallel upload might overwhelm the browser and server if there are too many chunks in flight. So we finally arrived at the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallelism limit.

Regarding the chunk size and the parallelism limit, there is no single "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side's (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) on one small web role (1 CPU core, 1.75GB memory, 100Mbps bandwidth).
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.

(The test result chart is not reproduced here.) Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- There is no significant performance improvement between parallel with 4 threads and with 8 threads.
- Transferring binary chunks provides better performance than base64 strings.
- In all cases, chunk sizes of 1MB-2MB get the best performance.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Problem using Hibernate-Search

    - by KCore
Hi, I am using Hibernate Search for my application. It was well configured and running perfectly until some time back, when it suddenly stopped working. The reason, as far as I can tell, is the number of my model (bean) classes: I have some 90 classes which I add to my configuration while building my Hibernate Configuration. When I disable Hibernate Search (remove the search annotations and use Configuration instead of AnnotationConfiguration) and start my application, it works fine. But with search enabled, the same app just hangs up. I tried debugging and found the exact place where it hangs: after adding all the classes to my AnnotationConfiguration object, when I call cfg.buildSessionFactory(), it never comes out of that statement (I have waited for hours!!!). Also, when I decrease the number of my model classes (say, to half, i.e. 50), it comes out of that statement and the application works fine. Can someone tell me why this is happening?

My versions of Hibernate are:
hibernate-core-3.3.1.GA.jar
hibernate-annotations-3.4.0.GA.jar
hibernate-commons-annotations-3.1.0.GA.jar
hibernate-search-3.1.0.GA.jar

Also, if I need to avoid using AnnotationConfiguration, I read that I need to configure the search event listeners explicitly. Can anyone list all the necessary listeners and their respective classes? (I tried the standard ones given in Hibernate Search books, but they give me a ClassNotFoundException, and I have all the necessary libs on the classpath.)

Here are the last few lines of the Hibernate trace I managed to pull:

16:09:32,814 INFO AnnotationConfiguration:369 - Hibernate Validator not found: ignoring
16:09:32,892 INFO ConnectionProviderFactory:95 - Initializing connection provider: org.hibernate.connection.C3P0ConnectionProvider
16:09:32,895 INFO C3P0ConnectionProvider:103 - C3P0 using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://localhost:3306/autolinkcrmcom_data
16:09:32,898 INFO C3P0ConnectionProvider:104 - Connection properties: {user=root, password=****}
16:09:32,900 INFO C3P0ConnectionProvider:107 - autocommit mode: false
16:09:33,694 INFO SettingsFactory:116 - RDBMS: MySQL, version: 5.1.37-1ubuntu5.1
16:09:33,696 INFO SettingsFactory:117 - JDBC driver: MySQL-AB JDBC Driver, version: mysql-connector-java-3.1.10 ( $Date: 2005/05/19 15:52:23 $, $Revision: 1.1.2.2 $ )
16:09:33,701 INFO Dialect:175 - Using dialect: org.hibernate.dialect.MySQLDialect
16:09:33,707 INFO TransactionFactoryFactory:59 - Using default transaction strategy (direct JDBC transactions)
16:09:33,709 INFO TransactionManagerLookupFactory:80 - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
16:09:33,711 INFO SettingsFactory:170 - Automatic flush during beforeCompletion(): disabled
16:09:33,714 INFO SettingsFactory:174 - Automatic session close at end of transaction: disabled
16:09:33,716 INFO SettingsFactory:181 - JDBC batch size: 15
16:09:33,719 INFO SettingsFactory:184 - JDBC batch updates for versioned data: disabled
16:09:33,721 INFO SettingsFactory:189 - Scrollable result sets: enabled
16:09:33,723 DEBUG SettingsFactory:193 - Wrap result sets: disabled
16:09:33,725 INFO SettingsFactory:197 - JDBC3 getGeneratedKeys(): enabled
16:09:33,727 INFO SettingsFactory:205 - Connection release mode: auto
16:09:33,730 INFO SettingsFactory:229 - Maximum outer join fetch depth: 2
16:09:33,732 INFO SettingsFactory:232 - Default batch fetch size: 1000
16:09:33,735 INFO SettingsFactory:236 - Generate SQL with comments: disabled
16:09:33,737 INFO SettingsFactory:240 - Order SQL updates by primary key: disabled
16:09:33,740 INFO SettingsFactory:244 - Order SQL inserts for batching: disabled
16:09:33,742 INFO SettingsFactory:420 - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
16:09:33,744 INFO ASTQueryTranslatorFactory:47 - Using ASTQueryTranslatorFactory
16:09:33,747 INFO SettingsFactory:252 - Query language substitutions: {}
16:09:33,750 INFO SettingsFactory:257 - JPA-QL strict compliance: disabled
16:09:33,752 INFO SettingsFactory:262 - Second-level cache: enabled
16:09:33,754 INFO SettingsFactory:266 - Query cache: disabled
16:09:33,757 INFO SettingsFactory:405 - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
16:09:33,759 INFO RegionFactoryCacheProviderBridge:61 - Cache provider: net.sf.ehcache.hibernate.EhCacheProvider
16:09:33,762 INFO SettingsFactory:276 - Optimize cache for minimal puts: disabled
16:09:33,764 INFO SettingsFactory:285 - Structured second-level cache entries: disabled
16:09:33,766 INFO SettingsFactory:314 - Statistics: disabled
16:09:33,769 INFO SettingsFactory:318 - Deleted entity synthetic identifier rollback: disabled
16:09:33,771 INFO SettingsFactory:333 - Default entity-mode: pojo
16:09:33,774 INFO SettingsFactory:337 - Named query checking : enabled
16:09:33,869 INFO Version:20 - Hibernate Search 3.1.0.GA
16:09:35,134 DEBUG DocumentBuilderIndexedEntity:157 - Field selection in projections is set to false for entity **com.xyz.abc**.
recognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernateDocumentBuilderIndexedEntity

Don't know what the last line indicates??? (hibernaterecognized....) After the last line it doesn't do anything (no further trace either) and just hangs....
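On the listener question above: in Hibernate Search 3.1, if memory serves, the single listener class is org.hibernate.search.event.FullTextIndexEventListener, registered for the post-insert, post-update and post-delete events (and, for collection changes, the post-collection-recreate/remove/update events; older 3.0-era books name a separate FullTextIndexCollectionEventListener, which no longer exists in 3.1 and would explain a ClassNotFoundException). Treat the exact class and event names below as assumptions to verify against hibernate-search-3.1.0.GA.jar; a hedged hibernate.cfg.xml sketch:

<!-- hypothetical excerpt for hibernate.cfg.xml; verify names against the Hibernate Search 3.1 docs -->
<event type="post-insert">
    <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
</event>
<event type="post-update">
    <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
</event>
<event type="post-delete">
    <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
</event>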


  • Hibernate unknown entity (not missing @Entity or import javax.persistence.Entity)

    - by david99world
I've got a really simple class...

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "users")
public class User {

    @Column(name = "firstName")
    private String firstName;

    @Column(name = "lastName")
    private String lastName;

    @Column(name = "email")
    private String email;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private long id;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
}

I call it using...

public class Main {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub
        HibernateUtil.buildSessionFactory();
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        session.beginTransaction();
        User u = new User();
        u.setEmail("[email protected]");
        u.setFirstName("David");
        u.setLastName("Gray");
        session.save(u);
        session.getTransaction().commit();
        System.out.println("Record committed");
        session.close();
    }
}

I keep getting...

Exception in thread "main" org.hibernate.MappingException: Unknown entity: org.assessme.com.entity.User
    at org.hibernate.internal.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:1172)
    at org.hibernate.internal.SessionImpl.getEntityPersister(SessionImpl.java:1316)
    at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:117)
    at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:204)
    at org.hibernate.event.internal.DefaultSaveEventListener.saveWithGeneratedOrRequestedId(DefaultSaveEventListener.java:55)
    at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:189)
    at org.hibernate.event.internal.DefaultSaveEventListener.performSaveOrUpdate(DefaultSaveEventListener.java:49)
    at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:90)
    at org.hibernate.internal.SessionImpl.fireSave(SessionImpl.java:670)
    at org.hibernate.internal.SessionImpl.save(SessionImpl.java:662)
    at org.hibernate.internal.SessionImpl.save(SessionImpl.java:658)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.hibernate.context.internal.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:352)
    at $Proxy4.save(Unknown Source)
    at Main.main(Main.java:20)

HibernateUtil is...
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;
import org.hibernate.service.ServiceRegistryBuilder;

public class HibernateUtil {

    private static SessionFactory sessionFactory;
    private static ServiceRegistry serviceRegistry;

    public static SessionFactory buildSessionFactory() {
        try {
            // Create the SessionFactory from hibernate.cfg.xml
            Configuration configuration = new Configuration();
            configuration.configure();
            serviceRegistry = new ServiceRegistryBuilder().applySettings(configuration.getProperties()).buildServiceRegistry();
            return new Configuration().configure().buildSessionFactory(serviceRegistry);
        } catch (Throwable ex) {
            // Make sure you log the exception, as it might be swallowed
            System.err.println("Initial SessionFactory creation failed." + ex);
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        sessionFactory = new Configuration().configure().buildSessionFactory(serviceRegistry);
        return sessionFactory;
    }
}

Does anyone have any ideas? I've looked at so many duplicates, but the resolutions don't appear to work for me. hibernate.cfg.xml is shown below...

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- Database connection settings -->
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="connection.url">jdbc:mysql://localhost/ssme</property>
        <property name="connection.username">root</property>
        <property name="connection.password">mypassword</property>
        <!-- JDBC connection pool (use the built-in) -->
        <property name="connection.pool_size">1</property>
        <!-- SQL dialect -->
        <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
        <!-- Enable Hibernate's automatic session context management -->
        <property name="current_session_context_class">thread</property>
        <!-- Disable the second-level cache -->
        <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
        <!-- Echo all executed SQL to stdout -->
        <property name="show_sql">true</property>
        <!-- Drop and re-create the database schema on startup -->
        <property name="hbm2ddl.auto">update</property>
    </session-factory>
</hibernate-configuration>
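One thing visible in the question itself: the hibernate.cfg.xml above contains no <mapping> element, so the Configuration never learns about the annotated User class; Hibernate does not scan the classpath for @Entity classes on its own. As a hedged sketch (a common cause, not a confirmed fix for this poster's setup), the entity is normally registered inside <session-factory>, after the properties:

<!-- hypothetical addition to the hibernate.cfg.xml above -->
<mapping class="org.assessme.com.entity.User"/>

or equivalently in code, via configuration.addAnnotatedClass(User.class) before building the session factory.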

