Search Results

Search found 8408 results on 337 pages for 'cgi bin'.


  • Slow boot on Ubuntu 12.04

    - by Hailwood
    My Ubuntu is booting really slowly (Windows is booting faster...). I am using a Dell Inspiron 1545 (Pentium(R) Dual-Core CPU T4300 @ 2.10GHz, 4GB RAM, 500GB HDD) running Ubuntu 12.04 with gnome-shell 3.4.1. After running dmesg the culprit seems to be this section, in particular the last three lines:
    [26.557659] ADDRCONF(NETDEV_UP): eth0: link is not ready
    [26.565414] ADDRCONF(NETDEV_UP): eth0: link is not ready
    [27.355355] Console: switching to colour frame buffer device 170x48
    [27.362346] fb0: radeondrmfb frame buffer device
    [27.362347] drm: registered panic notifier
    [27.362357] [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0
    [27.617435] init: udev-fallback-graphics main process (1049) terminated with status 1
    [30.064481] init: plymouth-stop pre-start process (1500) terminated with status 1
    [51.708241] CE: hpet increased min_delta_ns to 20113 nsec
    [59.448029] eth2: no IPv6 routers present
    But I have no idea how to start debugging this.
    $ sudo lshw -C video
    *-display
    description: VGA compatible controller
    product: RV710 [Mobility Radeon HD 4300 Series]
    vendor: Hynix Semiconductor (Hyundai Electronics)
    physical id: 0
    bus info: pci@0000:01:00.0
    version: 00
    width: 32 bits
    clock: 33MHz
    capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
    configuration: driver=fglrx_pci latency=0
    resources: irq:48 memory:e0000000-efffffff ioport:de00(size=256) memory:f6df0000-f6dfffff memory:f6d00000-f6d1ffff
    After loading the proprietary driver my new dmesg log is below (starting from the first major time gap): [2.983741] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [25.094327] ADDRCONF(NETDEV_UP): eth0: link is not ready [25.119737] udevd[520]: starting version 175 [25.167086] lp: driver loaded but no devices found [25.215341] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [25.215345] Disabling lock debugging due to kernel taint [25.231924] wmi: Mapper loaded [25.318414] lib80211: common routines for IEEE802.11 drivers [25.318418] lib80211_crypt: registered algorithm 'NULL' [25.331631] [fglrx] Maximum main memory to use for locked dma buffers: 3789 MBytes. [25.332095] [fglrx] vendor: 1002 device: 9552 count: 1 [25.334206] [fglrx] ioport: bar 1, base 0xde00, size: 0x100 [25.334229] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [25.334235] pci 0000:01:00.0: setting latency timer to 64 [25.337109] [fglrx] Kernel PAT support is enabled [25.337140] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [25.342803] Adding 4189180k swap on /dev/sda7.
Priority:-1 extents:1 across:4189180k [25.364031] type=1400 audit(1338241723.027:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=606 comm="apparmor_parser" [25.364491] type=1400 audit(1338241723.031:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=606 comm="apparmor_parser" [25.364760] type=1400 audit(1338241723.031:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=606 comm="apparmor_parser" [25.394328] wl 0000:0c:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [25.394343] wl 0000:0c:00.0: setting latency timer to 64 [25.415531] acpi device:36: registered as cooling_device2 [25.416688] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A03:00/device:34/LNXVIDEO:00/input/input6 [25.416795] ACPI: Video Device [VID] (multi-head: yes rom: no post: no) [25.416865] [Firmware Bug]: Duplicate ACPI video bus devices for the same VGA controller, please try module parameter "video.allow_duplicates=1"if the current driver doesn't work. [25.425133] lib80211_crypt: registered algorithm 'TKIP' [25.448058] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [25.448321] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [25.448353] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [25.738867] eth1: Broadcom BCM4315 802.11 Hybrid Wireless Controller 5.100.82.38 [25.761213] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [25.761406] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [25.783432] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [25.908318] EXT4-fs (sda6): re-mounted. Opts: errors=remount-ro [25.928155] input: Dell WMI hotkeys as /devices/virtual/input/input9 [25.960561] udevd[543]: renamed network interface eth1 to eth2 [26.285688] init: failsafe main process (835) killed by TERM signal [26.396426] input: PS/2 Mouse as /devices/platform/i8042/serio2/input/input10 [26.423108] input: AlpsPS/2 ALPS GlidePoint as /devices/platform/i8042/serio2/input/input11 [26.511297] Bluetooth: Core ver 2.16 [26.511383] NET: Registered protocol family 31 [26.511385] Bluetooth: HCI device and connection manager initialized [26.511388] Bluetooth: HCI socket layer initialized [26.511391] Bluetooth: L2CAP socket layer initialized [26.512079] Bluetooth: SCO socket layer initialized [26.530164] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [26.530168] Bluetooth: BNEP filters: protocol multicast [26.553893] type=1400 audit(1338241724.219:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=928 comm="apparmor_parser" [26.554860] Bluetooth: RFCOMM TTY layer initialized [26.554866] Bluetooth: RFCOMM socket layer initialized [26.554868] Bluetooth: RFCOMM ver 1.11 [26.557910] type=1400 audit(1338241724.223:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=927 comm="apparmor_parser" [26.559166] type=1400 audit(1338241724.223:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=928 comm="apparmor_parser" [26.559574] type=1400 audit(1338241724.223:8): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=928 comm="apparmor_parser" [26.575519] type=1400 audit(1338241724.239:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=931 comm="apparmor_parser" [26.581100] type=1400 
audit(1338241724.247:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=931 comm="apparmor_parser" [26.582794] type=1400 audit(1338241724.247:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=929 comm="apparmor_parser" [26.605672] ppdev: user-space parallel port driver [27.592475] sky2 0000:09:00.0: eth0: enabling interface [27.604329] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.606962] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.852509] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [27.852513] vesafb: scrolling: redraw [27.852515] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [27.852523] mtrr: type mismatch for e0000000,400000 old: write-back new: write-combining [27.852527] mtrr: type mismatch for e0000000,200000 old: write-back new: write-combining [27.852531] mtrr: type mismatch for e0000000,100000 old: write-back new: write-combining [27.852534] mtrr: type mismatch for e0000000,80000 old: write-back new: write-combining [27.852538] mtrr: type mismatch for e0000000,40000 old: write-back new: write-combining [27.852541] mtrr: type mismatch for e0000000,20000 old: write-back new: write-combining [27.852544] mtrr: type mismatch for e0000000,10000 old: write-back new: write-combining [27.852548] mtrr: type mismatch for e0000000,8000 old: write-back new: write-combining [27.852551] mtrr: type mismatch for e0000000,4000 old: write-back new: write-combining [27.852554] mtrr: type mismatch for e0000000,2000 old: write-back new: write-combining [27.852558] mtrr: type mismatch for e0000000,1000 old: write-back new: write-combining [27.853154] vesafb: framebuffer at 0xe0000000, mapped to 0xffffc90005580000, using 3072k, total 3072k [27.853405] Console: switching to colour frame buffer device 128x48 [27.853426] fb0: VESA VGA frame buffer device [28.539800] fglrx_pci 0000:01:00.0: irq 48 for MSI/MSI-X [28.540552] [fglrx] Firegl kernel thread PID: 1168 [28.540679] [fglrx] Firegl kernel thread PID: 1169 [28.540789] [fglrx] Firegl kernel thread PID: 1170 [28.540932] [fglrx] IRQ 48 Enabled [29.845620] [fglrx] Gart USWC size:1236 M. [29.845624] [fglrx] Gart cacheable size:489 M. [29.845629] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [29.845632] [fglrx] Reserved FB block: Unshared offset:fc21000, size:3df000 [29.845635] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [59.700023] eth2: no IPv6 routers present
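
    One way to start debugging, assuming dmesg output with bracketed timestamps like the excerpt above, is to print only the entries that follow a long silence; the one-second threshold here is arbitrary. A rough sketch:

      # Print dmesg entries preceded by a gap of more than one second.
      # Fields split on '[' and ']', so $2 is the timestamp.
      dmesg | awk -F'[][]' '{ t = $2 + 0; if (t - prev > 1.0) print; prev = t }'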

    Read the article

  • Killing Stuck Child JVM's

    - by ACShorten
    Note: This facility only applies to Oracle Utilities Application Framework products using COBOL. In some situations, the Child JVMs may spin. This causes multiple startup/shutdown Child JVM messages to be displayed and recursive child JVMs to be initiated and shunned. The symptom is a message of the form: "Unable to establish connection on port …. after waiting .. seconds." The issue can be caused intermittently by CPU spins in connection with the creation of new processes, specifically Child JVMs. Recursive (or double) invocation of the System.exit call in the remote JVM may be caused by a Process.destroy call that the parent JVM always issues when shunning a JVM. The issue may happen when the thread in the parent JVM that is responsible for the recycling gets stuck, and it affects all child JVMs. If this issue occurs at your site then there are a number of options to address it: configure an operating-system-level kill command to force the Child JVM to be shunned when it becomes stuck; configure a Process.destroy command to be used if the kill command is not configured or desired; and specify a time tolerance to detect stuck threads before issuing the Process.destroy or kill commands. Note: This facility is also used when the Parent JVM is shut down, to ensure no zombie Child JVMs exist. The following additional settings must be added to the spl.properties for the Business Application Server to use this facility:
    spl.runtime.cobol.remote.kill.command – Specifies the command to kill the Child JVM process. This can be a command, or a script to execute that provides additional information. The kill.command property can accept two arguments, {pid} and {jvmNumber}, in the specified string. The arguments must be enclosed in curly braces as shown here. Note: The PID will be appended to the killcmd string unless the {pid} and {jvmNumber} arguments are specified. The jvmNumber can be useful if passed to a script for logging purposes. Note: If a script is used it must be in the path and be executable by the OS user running the system.
    spl.runtime.cobol.remote.destroy.enabled – Specifies whether to use the Process.destroy command instead of the kill command. Specify true or false; the default value is false. Note: Unless otherwise required, it is recommended to use the kill command option if shunning JVMs is an issue; this value can then remain at its default, false.
    spl.runtime.cobol.remote.kill.delaysecs – Specifies the number of seconds to wait for the Child JVM to terminate naturally before issuing the Process.destroy or kill commands. The default is 10 seconds.
    For example:
    spl.runtime.cobol.remote.kill.command=kill -9 {pid} {jvmNumber}
    spl.runtime.cobol.remote.destroy.enabled=false
    spl.runtime.cobol.remote.kill.delaysecs=10
    When a Child JVM is to be recycled, these properties are inspected and the spl.runtime.cobol.remote.kill.command is executed if provided. This is done after waiting spl.runtime.cobol.remote.kill.delaysecs seconds to give the JVM time to shut itself down. The spl.runtime.cobol.remote.destroy.enabled property must be set to true AND the spl.runtime.cobol.remote.kill.command omitted for the original Process.destroy command to be used on the process. Note: By default spl.runtime.cobol.remote.destroy.enabled is set to false and this path is therefore disabled. If neither spl.runtime.cobol.remote.kill.command nor spl.runtime.cobol.remote.destroy.enabled is specified, child JVMs will not be forcibly killed.
    They will be left to shut themselves down (which may lead to orphan JVMs). If both are specified, the spl.runtime.cobol.remote.kill.command is preferred and spl.runtime.cobol.remote.destroy.enabled is defaulted to false. It is recommended to invoke a script that issues the kill command rather than using kill -9 directly. For example, the following sample script ensures that the process id is an active cobjrun process before issuing the kill command:
    forcequit.sh:
    #!/bin/sh
    THETIME=`date +"%Y-%m-%d %H:%M:%S"`
    if [ "$1" = "" ]
    then
      echo "$THETIME: Process Id is required" >>$SPLSYSTEMLOGS/forcequit.log
      exit 1
    fi
    javaexec=cobjrun
    ps e $1 | grep -c $javaexec
    if [ $? = 0 ]
    then
      echo "$THETIME: Process $1 is an active $javaexec process -- issuing kill -9 $1" >>$SPLSYSTEMLOGS/forcequit.log
      kill -9 $1
      exit 0
    else
      echo "$THETIME: Process id $1 is not a $javaexec process or not active -- kill will not be issued" >>$SPLSYSTEMLOGS/forcequit.log
      exit 1
    fi
    This script's name would then be specified as the value for the spl.runtime.cobol.remote.kill.command property, for example:
    spl.runtime.cobol.remote.kill.command=forcequit.sh
    The forcequit script does not have any explicit parameters, but the pid is passed automatically. To use the jvmNumber parameter it must be explicitly specified in the command. For example, to call forcequit.sh and pass it the pid and the child JVM number, specify it as follows:
    spl.runtime.cobol.remote.kill.command=forcequit.sh {pid} {jvmNumber}
    The script can then use the JVM number for logging purposes or to further ensure that the correct pid is being killed. If the arguments are omitted, the pid is automatically appended to the spl.runtime.cobol.remote.kill.command string. To use this facility the following patches must be installed:
    Patch 13719584 for Oracle Utilities Application Framework V2.1
    Patches 13684595 and 13634933 for Oracle Utilities Application Framework V2.2
    Group Fix 4 (as Patch 13640668) for Oracle Utilities Application Framework V4.1
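
    Before wiring a script like forcequit.sh into spl.properties, a quick manual smoke test is worthwhile. A minimal sketch, assuming the script is executable, on the path, and that SPLSYSTEMLOGS is set in the environment:

      sleep 600 &               # a throwaway process that is clearly not cobjrun
      forcequit.sh $!           # expect the "not a cobjrun process" branch
      echo "exit status: $?"    # should be 1 for the non-cobjrun process
      kill %1                   # clean up the sleep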

    Read the article

  • MySQL server stopped working after upgrade

    - by umpirsky
    I upgraded to 12.04 and my MySQL server just stopped working. It throws: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) I tried to reinstall it from software center, but it fails with: Package operation failed The installation or removal of a software package failed. installArchives() failed: Selecting previously unselected package mysql-server. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 243412 files and directories currently installed.) Unpacking mysql-server (from .../mysql-server_5.5.22-0ubuntu1_all.deb) ... Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: mysql-server-5.5 mysql-server Error in function: Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured I also tried: $ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: mysql-server-5.5 mysql-server-core-5.5 Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: mysql-server-5.5 E: Sub-process /usr/bin/dpkg returned an error code (1) Any idea? EDIT: Crash report is being auto generated. EDIT: After trying and trying I got suggestion to do: #apt-get --purge remove mysql-server-5.1 mysql-server-5.5 Reading package lists... Done Building dependency tree Reading state information... 
Done Virtual packages like 'mysql-server-5.1' can't be removed The following packages will be REMOVED: mysql-server-5.5* 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 31.3 MB disk space will be freed. Do you want to continue [Y/n]? Y (Reading database ... 243407 files and directories currently installed.) Removing mysql-server-5.5 ... Purging configuration files for mysql-server-5.5 ... Processing triggers for ureadahead ... Processing triggers for man-db ... The most important part is: Virtual packages like 'mysql-server-5.1' can't be removed Any idea?
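
    A few checks that often narrow this kind of failure down before reinstalling; the paths below assume the stock Ubuntu 12.04 MySQL layout:

      sudo tail -n 50 /var/log/mysql/error.log   # why mysqld refused to start
      ls -ld /var/run/mysqld /var/lib/mysql      # socket directory and datadir still present?
      sudo mysqld --verbose --help > /dev/null   # a broken my.cnf option is reported here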

    Read the article

  • Resetting Your Oracle User Password with SQL Developer

    - by thatjeffsmith
    There’s nothing more annoying than having to email, call, or log a support ticket to have one of your accounts reset. This is no less annoying in the Oracle database. Those pesky security folks have determined that your password should only be valid for X days, and your time is up. Time to reset the password! Except…you can’t log into the database to reset your password. What now? Wait a second, look at this nifty thing I see in SQL Developer: right-click on my connection, and Reset Password is not available! Why not?
    The JDBC Driver Doesn’t Support This Operation
    We can’t make this call over the Oracle JDBC layer, because it hasn’t been implemented. However our primary interface, OCI, does indeed support this. In order to use the Oracle Call Interface (OCI), you need to have an Oracle Client on your machine. The good news is that this is fairly easy to get going: the Instant Client will do. You have two options, the full or ‘Lite’ Instant Client. If you want SQL*Plus and the other client tools, go for the full; if you just want the basic drivers, go for the Lite. Either of these is fine, but mind the bit level and version of Oracle! Make sure you get a 32-bit Instant Client if you run 32-bit SQL Developer, or a 64-bit one if you run 64-bit. Here’s the download link. What, you didn’t believe me? Mind the version of Oracle too! You want to be at the same level as, or higher than, the database you’re working with. You can use an 11.2.0.3 client with an 11.2.0.1 database, but not a 10gR2 client with an 11gR2 database. Clear as mud?
    Download and Extract
    Put it where you want – Program Files is as good a place as any if you have the rights. When you’re done, copy the directory path you extracted the archive to, because we’re going to add it to your Windows PATH environment variable. The easiest way to find this in Windows 7 is to open the Start dialog and type ‘path’. In Windows 8 you’ll cast your spell and wave at your screen until something happens. I recommend you put it up front so we find our DLLs first. Now with that set, let’s start up SQL Developer and check the connection context menu again. Bingo! What happened there? SQL Developer looks to see if it can find the OCI resources. Guess where it looks? That’s right, the PATH. If it finds what it’s looking for, and confirms the bit level is right, it will activate the Reset Password option. We have a Preference to ‘force’ an OCI/THICK connection that gives you a few other edge-case features, but you do not need to enable this to activate Reset Password. Not necessary, but it won’t hurt anything either. There are a few actual benefits to using OCI-powered connections, but that’s beyond the scope of today’s blog post…to be continued. Ok, so we’re ready to go. Now, where was I again? Oh yeah, my password has expired… Right-click on your connection and now choose ‘Reset Password’. You’ll need to know your existing password and select a new one that meets your database’s security standards.
    I Need Another Option, This Ain’t Working!
    If you have another account in the database, you can use the DBA Panel to reset a user’s password, or of course you can spark up a SQL*Plus session and issue the ALTER USER JEFF IDENTIFIED BY _________; command – but you knew this already, yes?
    I need more help ‘installing’ the Instant Client, help!
    There are lots and lots of resources out there on this subject.
    But I also know from personal experience that many of you have problems getting this to ‘work.’ The key things to remember are to download the right bit level AND to make sure the client install directory is in your path. I know many folks will just ‘install’ the Instant Client directly into one of their ‘bin’-type directories. You can do that if you want, but I prefer the cleaner method. Of course, if you lack admin privileges to change the PATH variable, that might be your only option. Or you could do what the original ORA- message indicated and ‘contact your DBA.’
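
    For the record, the same recipe on Linux is just a couple of environment variables; the install directory here is only an example:

      IC=/opt/oracle/instantclient_11_2            # wherever the Instant Client was unzipped
      export PATH=$IC:$PATH                        # put it up front, as on Windows
      export LD_LIBRARY_PATH=$IC:$LD_LIBRARY_PATH
      # ...then launch SQL Developer from this same shell.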

    Read the article

  • Use WLST to Delete All JMS Messages From a Destination

    - by james.bayer
    I got a question today about whether WebLogic Server has any tools to delete all messages from a JMS Queue.  It just so happens that the WLS Console has this capability already.  It’s available on the screen after the “Show Messages” button is clicked on a destination’s Monitoring tab, as seen in the screen shot below. The console is great for something ad hoc, but what if I want to automate this?  Well, it just so happens that the console is just a WebLogic application layered on top of the JMX management interface.  If you look at the MBean Reference, you’ll find a JMSDestinationRuntimeMBean that includes the operation deleteMessages, which takes a JMS message selector as an argument.  If you pass an empty string, that is essentially a wild card that matches all messages. Coding a stand-alone JMX client for this is kind of lame, so let’s do something more suitable to scripting.  In addition to the console, the WebLogic Scripting Tool (WLST), based on Jython, is another way to browse and invoke MBeans, so an equivalent interactive shell session to delete messages from a destination would look like this:
    D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain\bin>setDomainEnv.cmd
    D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain>java weblogic.WLST
    Initializing WebLogic Scripting Tool (WLST) ...
    Welcome to WebLogic Server Administration Scripting Shell
    Type help() for help on available commands
    wls:/offline> connect('weblogic','welcome1','t3://localhost:7001')
    Connecting to t3://localhost:7001 with userid weblogic ...
    Successfully connected to Admin Server 'AdminServer' that belongs to domain 'hotspot_domain'.
    Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
    wls:/hotspot_domain/serverConfig> serverRuntime()
    Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root.
For more help, use help(serverRuntime)   wls:/hotspot_domain/serverRuntime> cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0') wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> ls() dr-- DurableSubscribers   -r-- BytesCurrentCount 0 -r-- BytesHighCount 174620 -r-- BytesPendingCount 0 -r-- BytesReceivedCount 253548 -r-- BytesThresholdTime 0 -r-- ConsumersCurrentCount 0 -r-- ConsumersHighCount 0 -r-- ConsumersTotalCount 0 -r-- ConsumptionPaused false -r-- ConsumptionPausedState Consumption-Enabled -r-- DestinationInfo javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=DestinationInfo,items=((itemName=ApplicationName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ModuleName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName openmbean.SimpleType(name=java.lang.Boolean)),(itemName=SerializedDestination,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ServerName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=Topic,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=VersionNumber,itemType=javax.management.op ule-0!Queue-0, Queue=true, SerializedDestination=rO0ABXNyACN3ZWJsb2dpYy5qbXMuY29tbW9uLkRlc3RpbmF0aW9uSW1wbFSmyJ1qZfv8DAAAeHB3kLZBABZTeXN0ZW1Nb2R1bGUtMCFRdWV1ZS0wAAtKTVNTZXJ2ZXItMAAOU3lzdGVtTW9kdWxlLTABAANBbGwCAlb6IS6T5qL/AAAACgEAC0FkbWluU2VydmVyAC2EGgJW+iEuk+ai/wAAAAsBAAtBZG1pblNlcnZlcgAthBoAAQAQX1dMU19BZG1pblNlcnZlcng=, ServerName=JMSServer-0, Topic=false, VersionNumber=1}) -r-- DestinationType Queue -r-- DurableSubscribers null -r-- InsertionPaused false -r-- InsertionPausedState Insertion-Enabled -r-- MessagesCurrentCount 0 -r-- MessagesDeletedCurrentCount 3 -r-- MessagesHighCount 2 -r-- MessagesMovedCurrentCount 0 -r-- MessagesPendingCount 0 -r-- MessagesReceivedCount 3 -r-- MessagesThresholdTime 0 -r-- Name SystemModule-0!Queue-0 -r-- Paused false -r-- ProductionPaused false -r-- ProductionPausedState Production-Enabled -r-- State advertised_in_cluster_jndi -r-- Type JMSDestinationRuntime   -r-x closeCursor Void : String(cursorHandle) -r-x deleteMessages Integer : String(selector) -r-x getCursorEndPosition Long : String(cursorHandle) -r-x getCursorSize Long : String(cursorHandle) -r-x getCursorStartPosition Long : String(cursorHandle) -r-x getItems javax.management.openmbean.CompositeData[] : String(cursorHandle),Long(start),Integer(count) -r-x getMessage javax.management.openmbean.CompositeData : String(cursorHandle),Long(messageHandle) -r-x getMessage javax.management.openmbean.CompositeData : String(cursorHandle),String(messageID) -r-x getMessage javax.management.openmbean.CompositeData : String(messageID) -r-x getMessages String : String(selector),Integer(timeout) -r-x getMessages String : String(selector),Integer(timeout),Integer(state) -r-x getNext javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count) -r-x getPrevious javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count) -r-x importMessages Void : javax.management.openmbean.CompositeData[],Boolean(replaceOnly) -r-x moveMessages Integer : String(java.lang.String),javax.management.openmbean.CompositeData,Integer(java.lang.Integer) -r-x moveMessages Integer : String(selector),javax.management.openmbean.CompositeData -r-x pause Void : -r-x pauseConsumption 
Void : -r-x pauseInsertion Void : -r-x pauseProduction Void : -r-x preDeregister Void : -r-x resume Void : -r-x resumeConsumption Void : -r-x resumeInsertion Void : -r-x resumeProduction Void : -r-x sort Long : String(cursorHandle),Long(start),String[](fields),Boolean[](ascending)   wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> cmo.deleteMessages('') 2 where the domain name is “hotspot_domain”, the JMS Server name is “JMSServer-0”, the Queue name is “Queue-0” and the System Module is named “SystemModule-0”.  To invoke the operation, I use the “cmo” object, which is the “Current Management Object” that represents the currently navigated to MBean.  The 2 indicates that two messages were deleted.  Combining this WLST code with a recent post by my colleague Steve that shows you how to use an encrypted file to store the authentication credentials, you could easily turn this into a secure automated script.  If you need help with that step, a long while back I blogged about some WLST basics.  Happy scripting.
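
    To turn the interactive session above into something you can schedule, the same calls can be fed to WLST as a script. A sketch reusing the names from the example (domain hotspot_domain, JMSServer-0, SystemModule-0!Queue-0):

      # Note: the lines between the EOF markers must stay flush-left in the
      # real file, since Jython rejects indented top-level statements.
      cat > purge_queue.py <<'EOF'
      connect('weblogic','welcome1','t3://localhost:7001')
      serverRuntime()
      cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0')
      print 'Deleted messages:', cmo.deleteMessages('')
      disconnect()
      EOF
      java weblogic.WLST purge_queue.py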

    Read the article

  • Screen (command-line program) bug?

    - by VioletCrime
    fired up my Minecraft server again after about a year off. My server used to run 11.04, which has since been upgraded to 12.04; I lost my management scripts in the upgrade (thought I'd backed up the user's home directory), but whatever, I enjoy developing stuff like that anyways. However, this time around, I'm running into issues. I start the Minecraft server using a detached screen, however the script is unable to 'stuff' commands into the screen instance until I attach to the screen then detach again? Once I do that, I can stuff anything I want into the server's terminal using the -X option, until I stop/start the server again, then I have to reattach and detach in order to restore functionality? Here's the manager script: #!/bin/bash #Name of the screen housing the server (set with the -S flag at startup) SCREENNAME=minecraft1 #Name of the folder housing the Minecraft world FOLDERNAME=world1 startServer (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Cannot start Minecraft server; it is already running!" else screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar sleep 2 if screen -list | grep "$SCREENNAME" > /dev/null then echo "Minecraft server started; happy mining!" else echo "ERROR: Minecraft server failed to start!" fi fi; } stopServer (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Server is running. Giving a 1-minute warning." screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in one minute." screen -S $SCREENNAME -X stuff $'\015' sleep 45 screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in 15 seconds." screen -S $SCREENNAME -X stuff $'\015' sleep 15 screen -S $SCREENNAME -X stuff "/stop" screen -S $SCREENNAME -X stuff $'\015' else echo "Server is not running; nothing to stop." fi; } stopServerNow (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Server is running. Giving a 5-second warning." screen -S $SCREENNAME -X stuff "/say EMERGENCY SHUTDOWN! 5 seconds to halt." screen -S $SCREENNAME -X stuff $'\015' sleep 5 screen -S $SCREENNAME -X stuff "/stop" screen -S $SCREENNAME -X stuff $'\015' else echo "Server is not running; nothing to stop." fi; } restartServer (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Server is running. Giving a 1-minute warning." screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in one minute." screen -S $SCREENNAME -X stuff $'\015' sleep 45 screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in 15 seconds." screen -S $SCREENNAME -X stuff $'\015' sleep 15 screen -S $SCREENNAME -X stuff "/stop" screen -S $SCREENNAME -X stuff $'\015' sleep 2 startServer else echo "Cannot restart server: it isn't running." fi; } #In order for this function to work, a directory 'backup/$FOLDERNAME' must exist in the same #directory that '$FOLDERNAME' resides backupWorld (){ if screen -list | grep "$SCREENNAME" > /dev/null then echo "Server is running. Giving a 1-minute warning." screen -S $SCREENNAME -X stuff "/say Shutting down (backup) in one minute." screen -S $SCREENNAME -X stuff $'\015' screen -S $SCREENNAME -X stuff "/say Server should be down for no more than a few seconds." screen -S $SCREENNAME -X stuff $'\015' sleep 45 screen -S $SCREENNAME -X stuff "/say Shutting down for backup in 15 seconds." screen -S $SCREENNAME -X stuff $'\015' sleep 15 screen -S $SCREENNAME -X stuff "/stop" screen -S $SCREENNAME -X stuff $'\015' fi sleep 2 if screen -list | grep $SCREENNAME > /dev/null then echo "Server is still running? Error." else cd .. 
tar -czvf backup0.tar.gz $FOLDERNAME mv backup0.tar.gz backup/$FOLDERNAME cd backup/$FOLDERNAME rm backup10.tar.gz mv backup9.tar.gz backup10.tar.gz mv backup8.tar.gz backup9.tar.gz mv backup7.tar.gz backup8.tar.gz mv backup6.tar.gz backup7.tar.gz mv backup5.tar.gz backup6.tar.gz mv backup4.tar.gz backup5.tar.gz mv backup3.tar.gz backup4.tar.gz mv backup2.tar.gz backup3.tar.gz mv backup1.tar.gz backup2.tar.gz mv backup0.tar.gz backup1.tar.gz cd ../../$FOLDERNAME screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar; sleep 2 if screen -list | grep "$SCREENNAME" > /dev/null then echo "Minecraft server restarted; happy mining!" else echo "ERROR: Minecraft server failed to start!" fi fi; } printCommands (){ echo echo "$0 usage:" echo echo "Start : Starts the server on a detached screen." echo "Stop : Stop the server; includes a 1-minute warning." echo "StopNOW : Stops the server with only a 5-second warning." echo "Restart : Stops the server and starts the server again." echo "Backup : Stops the server (1 min), backs up the world, and restarts." echo "Help : Display this message." } #Forces case-insensitive string comparisons shopt -s nocasematch #Primary 'Switch' if [[ $1 = "start" ]] then startServer elif [[ $1 = "stop" ]] then stopServer elif [[ $1 = "stopnow" ]] then stopServerNow elif [[ $1 = "backup" ]] then backupWorld elif [[ $1 = "restart" ]] then restartServer else printCommands fi
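
    One thing worth trying, offered as an untested sketch: some screen builds ignore -X stuff when no window is preselected, which would explain why an attach/detach cycle restores it. Passing -p 0 selects the server's window explicitly:

      # Preselect window 0 before stuffing; printf '\r' supplies the Enter key.
      screen -S minecraft1 -p 0 -X stuff "/say stuff test$(printf '\r')"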

    Read the article

  • Web Site Performance and Assembly Versioning – Part 2 Versioning Combined Files Using Subversion

    - by capgpilk
    Ok so it took a while to post this second part. Many apologies; we had a big roll-out of a new platform at work and many things had to get sidelined. So this is the second part in a short series on website performance and using versioning to help improve it.
    Minification and Concatenation of JavaScript and CSS Files
    Versioning Combined Files Using Subversion – this post
    Versioning Combined Files Using Mercurial – published shortly
    In the previous post we used AjaxMin to shrink js and css files, then concatenated them into one file each, named site-script.combined.min.js and site-style.combined.min.css. These file names are fine, but you can configure IIS 7 to cache these static files and so lower the amount of data transferred between server and client. This is done by editing the response headers in IIS.
    1. In IIS7 Manager, choose the directory where these files are located and select HTTP Response Headers.
    2. Check Expire Web Content and set a time period well into the future.
    3. When refreshing the web page, the server will respond with HTTP 304, forcing the browser to retrieve the file from its cache.
    4. As can be seen in FireBug, the Cache-Control header has a max-age of 31536000 seconds, which equates to 365 days.
    The server will always send this HTTP 304 message unless the file changes, forcing it to send new content. To help force this we can change the file name based on the latest build, using the SVN revision number in the filename. So we have lowered data transfer on content that hasn’t changed, but forced it to be sent when you have made a change to the css or js files. Now to get the SVN revision number into the file name.
    1. Import the MSBuildCommunityTasks targets, which can be downloaded from here.
    1: <Import Project="$(MSBuildExtensionsPath)
    2: \MSBuildCommunityTasks
    3: \MSBuild.Community.Tasks.Targets" />
    2. Edit the BeforeBuild target to call out to svn and get the latest revision.
    1: <SvnVersion LocalPath="$(MSBuildProjectDirectory)"
    2: ToolPath="$(ProgramFiles)\VisualSVN Server\bin">
    3: <Output TaskParameter="Revision" PropertyName="Revision" />
    4: </SvnVersion>
    3. Set it to update the project AssemblyInfo.cs file with the svn revision.
    1: <FileUpdate Files="Properties\AssemblyInfo.cs"
    2: Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)"
    3: ReplacementText="$1.$2.$3.$(Revision)" />
    4. Now edit the AfterBuild target to get the full dll version. You could combine these two steps and just get the version from svn; I am working on one project that updates the AssemblyInfo file and another project that allows manual editing of the file but needs that version within the file name, so I just combined the two for this post.
    1: <MSBuild.ExtensionPack.Framework.Assembly
    2: TaskAction="GetInfo"
    3: NetAssembly="$(OutputPath)\mydll.dll">
    4: <Output TaskParameter="OutputItems" ItemName="Info" />
    5: </MSBuild.ExtensionPack.Framework.Assembly>
    6: <Message Text="Version: %(Info.AssemblyVersion)"
    7: Importance="High" />
    5. Use this Info.AssemblyVersion to write out the combined css and js files as described in the last post.
    1: <WriteLinestoFile File="Scripts\site-%(Info.AssemblyVersion).combined.min.js"
    2: Lines="@(JSLinesSite)" Overwrite="true" />
    In the next post I will cover doing the same, but for a Mercurial repository.
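
    If a build step outside MSBuild ever needs the same stamp, the revision is equally easy to grab from a shell; a sketch using svnversion with the post's file-naming convention:

      REV=$(svnversion -n .)   # working-copy revision, e.g. 1234 (or 1234M if locally modified)
      cp Scripts/site-script.combined.min.js "Scripts/site-script.${REV}.combined.min.js"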

    Read the article

  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines.
    Why use DCOM?
    In GlassFish 3.1, SSH is used as the standard way to run commands on remote nodes for clustering.  It is very difficult for users to get SSH configured properly on Windows.  SSH does not come with Windows, so we have to depend on third-party tools, and the user is forced to install and configure these tools -- which can be tricky. DCOM is available on all supported platforms.  It is built in to Windows. The idea is to use DCOM to communicate with remote Windows nodes.  This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes.
    Implementation Highlights
    Two open-source libraries have been added to GlassFish: Jcifs – a SAMBA implementation in Java, and J-Interop – a Java implementation for making DCOM calls to remote Windows computers.  Note that any supported platform can use DCOM to work with Windows nodes -- not just Windows. E.g. you can have a Linux DAS work with Windows remote instances. All existing SSH commands now have a corresponding DCOM command – except for setup-ssh, which isn’t needed for DCOM.  validate-dcom is an all-new command.
    New DCOM Commands
    create-node-dcom, delete-node-dcom, install-node-dcom, list-nodes-dcom, ping-node-dcom, uninstall-node-dcom, update-node-dcom, validate-dcom, and setup-local-dcom (the last is only available via Update Center for GlassFish 3.1.2). These commands are in place in the trunk (4.0) and in the branch (3.1.2).
    Windows Configuration Challenges
    There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration etc.  Later versions of Windows err on the side of tightening security by default.  This means that the Windows host may need to have configuration changes made. These configuration changes mostly need to be made by the user.  setup-local-dcom will assist you in making the required changes to the Windows Registry.  See the reference blogs for details.
    The validate-dcom Command
    validate-dcom is a crucial command.  It should be run before any other commands.  If it does not run successfully then there is no point in running other commands. The validate-dcom command must be used from a DAS machine to test a different Windows machine.  If validate-dcom runs successfully you can be confident that all the DCOM commands will work.  Conversely, the opposite is also true: if validate-dcom fails, then no DCOM commands will work.
    What validate-dcom does
    It verifies that the remote host is not the local machine; resolves the remote host name; checks that the remote DCOM port is being listened on (135, 139); checks that the remote host’s File Sharing is enabled (port 445); copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct; and runs the script that it copied on-the-fly to the remote host.
    Tips and Tricks
    The bread-and-butter commands that use DCOM are existing commands like create-instance, start-instance etc.  All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command.  This means that you can track these commands easily on the remote machine with the usual tools.  E.g. using AS_LOGFILE, looking at log files, etc.  It’s easy to attach a debugger to the remote asadmin process, “just in time”, if necessary.
    How to debug the remote commands: edit the asadmin.bat file that is in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+) and add these options to the java call:
    -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234
    Now if you run, say, start-instance on the DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234.  It will be running start-local-instance and patiently waiting for you to attach.
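
    As a first sanity pass from the DAS, the flow looks roughly like this; the host name is a placeholder and the exact option names should be checked against asadmin --help for your build:

      asadmin validate-dcom winhost.example.com    # must succeed before anything else
      asadmin create-node-dcom --nodehost winhost.example.com winnode1
      asadmin ping-node-dcom winnode1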

    Read the article

  • Memory Efficient Windows SOA Server

    - by Antony Reynolds
    Installing a Memory Efficient SOA Suite 11.1.1.6 on Windows Server Well 11.1.1.6 is now available for download so I thought I would build a Windows Server environment to run it.  I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain. Required Software 64-bit JDK SOA Suite If you want 64-bit then choose “Generic” rather than “Microsoft Windows 32bit JVM” or “Linux 32bit JVM” This has links to all the required software. If you choose “Generic” then the Repository Creation Utility link does not show, you still need this so change the platform to “Microsoft Windows 32bit JVM” or “Linux 32bit JVM” to get the software. Similarly if you need a database then you need to change the platform to get the link to XE for Windows or Linux. If possible I recommend installing a 64-bit JDK as this allows you to assign more memory to individual JVMs. Windows XE will work, but it is better if you can use a full Oracle database because of the limitations on XE that sometimes cause it to run out of space with large or multiple SOA deployments. Installation Steps The following flow chart outlines the steps required in installing and configuring SOA Suite. The steps in the diagram are explained below. 64-bit? Is a 64-bit installation required?  The Windows & Linux installers will install 32-bit versions of the Sun JDK and JRockit.  A separate JDK must be installed for 64-bit. Install 64-bit JDK The 64-bit JDK can be either Hotspot or JRockit.  You can choose either JDK 1.7 or 1.6. Install WebLogic If you are using 64-bit then install WebLogic using “java –jar wls1036_generic.jar”.  Make sure you include Coherence in the installation, the easiest way to do this is to accept the “Typical” installation. SOA Suite Required? If you are not installing SOA Suite then you can jump straight ahead and create a WebLogic domain. Install SOA Suite Run the SOA Suite installer and point it at the existing Middleware Home created for WebLogic.  Note to run the SOA installer on Windows the user must have admin privileges.  I also found that on Windows Server 2008R2 I had to start the installer from a command prompt with administrative privileges, granting it privileges when it ran caused it to ignore the jreLoc parameter. Database Available? Do you have access to a database into which you can install the SOA schema.  SOA Suite requires access to an Oracle database (it is supported on other databases but I would always use an oracle database). Install Database I use an 11gR2 Oracle database to avoid XE limitations.  Make sure that you set the database character set to be unicode (AL32UTF8).  I also disabled the new security settings because they get in the way for a developer database.  Don’t forget to check that number of processes is at least 150 and number of sessions is not set, or is set to at least 200 (in the DB init parameters). Run RCU The SOA Suite database schemas are created by running the Repository Creation Utility.  Install the “SOA and BPM Infrastructure” component to support SOA Suite.  If you keep the schema prefix as “DEV” then the config wizard is easier to complete. Run Config Wizard The Config wizard creates the domain which hosts the WebLogic server instances.  To get a minimum footprint SOA installation choose the “Oracle Enterprise Manager” and “Oracle SOA Suite for developers” products.  All other required products will be automatically selected. 
The “for developers” installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them.  This reduces the number of JVMs required to run the system and hence the amount of memory required.  This is not suitable for anything other than a developer environment as it mixes the admin and runtime functions together in a single server.  It also takes a long time to load all the required modules, making start up a slow process. If it exists I would recommend running the config wizard found in the “oracle_common/common/bin” directory under the middleware home.  This should have access to all the templates, including SOA. If you also want to run BAM in the same JVM as everything else then you need to “Select Optional Configuration” for “Managed Servers, Clusters and Machines”. To target BAM at the AdminServer delete the “bam_server1” managed server that is created by default.  This will result in BAM being targeted at the AdminServer. Installation Issues I had a few problems when I came to test everything in my mega-JVM. Following applications were not targeted and so I needed to target them at the AdminServer: b2bui composer Healthcare UI FMW Welcome Page Application (11.1.0.0.0) How Memory Efficient is It? On a Windows 2008R2 Server running under VirtualBox I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3G memory.  I allocated a minimum 512M to the PermGen and a minimum of 1.5G for the heap.  The setting from setSOADomainEnv are shown below: set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m set PORT_MEM_ARGS=-Xms1536m -Xmx2048m set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m I arrived at these numbers by monitoring JVM memory usage in JConsole. Task Manager showed total system memory usage at 2.9G – just below the 3G I allocated to the VM. Performance is not stellar but it runs and I could run JDeveloper alongside it on my 8G laptop, so in that sense it was a result!
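
    For reference, the equivalent of those memory settings in the Linux setSOADomainEnv.sh would look like this (same numbers as above; tune them against your own JConsole observations):

      DEFAULT_MEM_ARGS="-Xms1536m -Xmx2048m"
      PORT_MEM_ARGS="-Xms1536m -Xmx2048m"
      DEFAULT_MEM_ARGS="${DEFAULT_MEM_ARGS} -XX:PermSize=512m -XX:MaxPermSize=768m"
      PORT_MEM_ARGS="${PORT_MEM_ARGS} -XX:PermSize=512m -XX:MaxPermSize=768m"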

    Read the article

  • Why does a bash-zenity script has that title on Unity Panel and that icon on Unity Launcher?

    - by Sadi
    I have this small bash script which helps use Infinality font rendering options via a more user-friendly Zenity window. But whenever I launch it I have this "Color Picker" title on Unity Panel together with the icon assigned for "Color Picker" utility. I wonder why and how this is happening and how I can change it? #!/bin/bash # A simple script to provide a basic, zenity-based GUI to change Infinality Style. # v.1.2 # infinality_current=`cat /etc/profile.d/infinality-settings.sh | grep "USE_STYLE=" | awk -F'"' '{print $2}'` sudo_password="$( gksudo --print-pass --message 'Provide permission to make system changes: Enter your password to start or press Cancel to quit.' -- : 2>/dev/null )" # Check for null entry or cancellation. if [[ ${?} != 0 || -z ${sudo_password} ]] then # Add a zenity message here if you want. exit 4 fi # Check that the password is valid. if ! sudo -kSp '' [ 1 ] <<<"${sudo_password}" 2>/dev/null then # Add a zenity message here if you want. exit 4 fi # menu(){ im="zenity --width=500 --height=490 --list --radiolist --title=\"Change Infinality Style\" --text=\"Current <i>Infinality Style</i> is\: <b>$infinality_current</b>\n? To <i>change</i> it, select any other option below and press <b>OK</b>\n? To <i>quit without changing</i>, press <b>Cancel</b>\" " im=$im" --column=\" \" --column \"Options\" --column \"Description\" " im=$im"FALSE \"DEFAULT\" \"Use default settings - a compromise that should please most people\" " im=$im"FALSE \"OSX\" \"Simulate OSX rendering\" " im=$im"FALSE \"IPAD\" \"Simulate iPad rendering\" " im=$im"FALSE \"UBUNTU\" \"Simulate Ubuntu rendering\" " im=$im"FALSE \"LINUX\" \"Generic Linux style - no snapping or certain other tweaks\" " im=$im"FALSE \"WINDOWS\" \"Simulate Windows rendering\" " im=$im"FALSE \"WIN7\" \"Simulate Windows 7 rendering with normal glyphs\" " im=$im"FALSE \"WINLIGHT\" \"Simulate Windows 7 rendering with lighter glyphs\" " im=$im"FALSE \"VANILLA\" \"Just subpixel hinting\" " im=$im"FALSE \"CLASSIC\" \"Infinality rendering circa 2010 - No snapping.\" " im=$im"FALSE \"NUDGE\" \"Infinality - Classic with lightly stem snapping and tweaks\" " im=$im"FALSE \"PUSH\" \"Infinality - Classic with medium stem snapping and tweaks\" " im=$im"FALSE \"SHOVE\" \"Infinality - Full stem snapping and tweaks without sharpening\" " im=$im"FALSE \"SHARPENED\" \"Infinality - Full stem snapping, tweaks, and Windows-style sharpening\" " im=$im"FALSE \"INFINALITY\" \"Infinality - Standard\" " im=$im"FALSE \"DISABLED\" \"Act without extra infinality enhancements - just subpixel hinting\" " } # option(){ choice=`echo $im | sh -` # if echo $choice | grep "DEFAULT" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"DEFAULT\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "OSX" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"OSX\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "IPAD" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"IPAD\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "UBUNTU" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"UBUNTU\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "LINUX" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"LINUX\"/g" 
'/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WINDOWS" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WIN7" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS7\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WINLIGHT" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS7LIGHT\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "VANILLA" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"VANILLA\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "CLASSIC" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"CLASSIC\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "NUDGE" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"NUDGE\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "PUSH" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"PUSH\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "SHOVE" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"SHOVE\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "SHARPENED" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"SHARPENED\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "INFINALITY" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"INFINALITY\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "DISABLED" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"DISABLED\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # } # menu option # if test ${#choice} -gt 0; then echo "Operation completed" fi # exit 0
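
    Unity picks the panel title and launcher icon by matching the window's WM_CLASS (which for any zenity dialog is simply Zenity) against installed .desktop entries, so it can latch onto an unrelated application such as a color picker. A hedged workaround is to install a launcher of your own that claims that class; every name and path below is illustrative:

      # Note: keep the lines between the EOF markers flush-left in the real file.
      cat > ~/.local/share/applications/infinality-style.desktop <<'EOF'
      [Desktop Entry]
      Type=Application
      Name=Infinality Style
      Exec=/home/me/bin/infinality-style.sh
      Icon=preferences-desktop-font
      StartupWMClass=Zenity
      EOF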

    Read the article

  • Part 1 - 12c Database and WLS - Overview

    - by Steve Felts
    The download of the Oracle 12c database became available on June 25, 2013.  There are some big new features in the 12c database, and WebLogic Server will take advantage of them. Immediately, we will support using the 12c database and drivers with WLS 10.3.6 and 12.1.1.  When the next version of WLS ships, additional functionality will be supported (those rows in the table below with all "No" values will get a "Yes").  The following table maps the Oracle 12c database features supported with various combinations of currently available WLS releases, 11g and 12c drivers, and 11g and 12c databases. All four configurations are WebLogic Server 10.3.6/12.1.1; they differ in driver and database version:
    (1) 11g drivers and 11gR2 DB
    (2) 11g drivers and 12c DB
    (3) 12c drivers and 11gR2 DB
    (4) 12c drivers and 12c DB
    JDBC replay: (1) No, (2) No, (3) No, (4) Yes (Active GridLink only in 10.3.6, add generic in 12.1.1)
    Multi Tenant Database: (1) No, (2) Yes (except set container), (3) No, (4) Yes (except set container)
    Dynamic switching between Tenants: No in all four configurations
    Database Resident Connection Pooling (DRCP): No in all four configurations
    Oracle Notification Service (ONS) auto configuration: No in all four configurations
    Global Database Services (GDS): (1) No, (2) Yes (Active GridLink only), (3) No, (4) Yes (Active GridLink only)
    JDBC 4.1 (using ojdbc7.jar files & JDK 7): (1) No, (2) No, (3) Yes, (4) Yes
    The My Oracle Support (MOS) document covering this is "WebLogic Server 12.1.1 and 10.3.6 Support for Oracle 12c Database [ID 1564509.1]" at the link https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1. The following documents are also key references:
    12c Oracle Database Developer Guide http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/toc.htm
    12c Oracle Database Administrator's Guide http://docs.oracle.com/cd/E16655_01/server.121/e17636/toc.htm
    I plan to write some related blog articles, not to duplicate existing product documentation, but to introduce the features, provide some examples, and tie together some information to make it easier to understand. How do you get started with 12c?  The easiest way is to point your data source at a 12c database.  The only change on the WLS side is to update the URL in your data source (assuming that you are not just upgrading your database).  You can continue to use the 11.2.0.3 driver jar files that shipped with WLS 10.3.6 or 12.1.1.  You shouldn't see any changes in your application.  You can take advantage of enhancements on the database side that don't affect the mid-tier.  On the WLS side, you can take advantage of using Global Data Services or connecting to a tenant in a multi-tenant database transparently. If you want to use the 12c client jar files, it's a bit of work because they aren't shipped with WLS and you can't just drop in ojdbc6.jar as in the old days.  You need to use a matched set of jar files, and they need to come before existing jar files in the CLASSPATH.  The MOS article is written from the standpoint that you need to get the jar files directly - download almost 1G and install over a 600M footprint to get 15 jar files.  Assuming that you have the database installed and you can get access to the installation (or ask the DBA), you need to copy the 15 jar files to each machine with a WLS installation and get them into your CLASSPATH.  You can play with setting the PRE_CLASSPATH, but the more practical approach may be to just update WL_HOME/common/bin/commEnv.sh directly.
    There's a change in the transaction completion behavior (read the MOS article), so if you think you might run into that, you will want to set -Doracle.jdbc.autoCommitSpecCompliant=false. Also, if you are running with Active GridLink, you must set -Doracle.ucp.PreWLS1212Compatible=true (how's that for telling you that this is fixed in WLS 12.1.2). Once you get the configuration out of the way, you can start using the new ojdbc7.jar in place of ojdbc6.jar to get the new JDBC 4.1 APIs. You can also start using Application Continuity. This feature is also known as JDBC Replay because, when a connection fails, you get a new one with all JDBC operations up to the failure point automatically replayed. As you might expect, there are some limitations, but it's an interesting feature. Obviously I'm going to focus on the 12c database features that we can leverage in WLS data sources. You will need to read other sources or the product documentation to get all of the new features.
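    To make the classpath change concrete, below is a minimal sketch of the kind of addition one might make to WL_HOME/common/bin/commEnv.sh (or source before starting the server). The /u01/jdbc12c directory is a hypothetical location for the 15 copied 12c client jar files; adjust it to wherever you placed them, and only set the JVM flags if the issues described above apply to you.

        #!/bin/bash
        # Hypothetical directory holding the matched set of 15 12c client jars
        JAR12C=/u01/jdbc12c

        # Put the 12c jars ahead of the WLS-bundled 11g drivers on the classpath
        PRE_CLASSPATH=$(printf '%s:' "$JAR12C"/*.jar)
        export PRE_CLASSPATH

        # Optional flags discussed above; set only if the described issues apply
        JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.jdbc.autoCommitSpecCompliant=false"
        JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.ucp.PreWLS1212Compatible=true"
        export JAVA_OPTIONS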

    Read the article

  • Downloading stuff from Oracle: an example

    - by user12587121
    Introduction

    Oracle has a lot of software on offer. Components of the stack can evolve at different rates, and different versions of the components may be in use at any given time. All this means that even the process of downloading the bits you need can be somewhat daunting. Here, by way of example, and hopefully to convince you that there is method in the downloading madness, we describe how to go about downloading the bits for Oracle Identity Manager (OIM) 11.1.1.5.

    Firstly, a couple of preliminary points:
    - Folks with Oracle products already installed who are looking for bug fixes, patch bundles or patch sets should go directly to the Oracle support website.
    - This Oracle document is a comprehensive description of the Oracle FMW download process and the licensing that applies to downloaded software.

    Downloading Oracle Identity Manager 11.1.1.5

    To be sure we download the right versions, first locate the Certification Matrix for OIM 11.1.1.5: go to the Fusion Certification Page, then follow the "System Requirements and Supported Platforms for Oracle Identity and Access Management 11gR1" link. Let's assume you have a 64-bit Linux machine and an Oracle database already. Then our goal is to end up with a list of files like the following:

        jdk-6u29-linux-x64.bin                    (Java JDK)
        V26017-01.zip                             (the Repository Creation Utility to create the DB schemas)
        wls1035_generic.jar                       (the WebLogic Application Server)
        ofm_iam_generic_11.1.1.5.0_disk1_1of1.zip (the Identity Management bits)
        ofm_soa_generic_11.1.1.5.0_disk1_1of2.zip (the SOA bits)
        ofm_soa_generic_11.1.1.5.0_disk1_2of2.zip
        jdevstudio11115install.exe                (optional: JDeveloper IDE)
        soa-jdev-extension.zip                    (optional: SOA extensions for JDeveloper)

    Downloading the bits

    1. Download the Java JDK, 64-bit version 1.6.0_24+.
    2. Download the RCU: here you will see that the RCU is mentioned on the Identity Management home page, but no link is provided. Do not panic. Due to the amount and turnover of software available, only the latest versions are available for download from the main Oracle site. Over time, software gets moved on to the Oracle edelivery site, and it is here that we find the RCU version we require:
       a. Go to edelivery: https://edelivery.oracle.com
       b. Choose Pack 'Oracle Fusion Middleware' and 'Linux x86-64'
       c. Click on 'Oracle Fusion Middleware 11g Media Pack for Linux x86-64'
       d. Download 'Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.5.0) for Linux x86' (V26017.zip)
    3. Download the WebLogic Application Server: WLS 10.3.5
    4. Download the Oracle Identity Manager bits: one point to clarify here is that currently the Identity Management bits come in two trains, essentially one for the Directory Services piece and the other for the Access Management and Identity Management parts. We need to be careful not to confuse the two, and in particular to be clear which of the trains is being referred to by the documentation:
       a. So, with this in mind, go to 'Oracle Identity and Access Management (11.1.1.5.0)' and download Disk1.
    5. Download the SOA bits:
       a. Go to the edelivery area as for the RCU and download:
          i.  Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 1 of 2)
          ii. Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 2 of 2)
    6. You will want to download some development tooling (for plugins or BPEL workflow development):
       a. Download JDeveloper 11.1.1.5 (11.1.1.6 may work, but it is best to stick to the versions that correspond to the WLS version we are using)
       b. Go to the site for SOA tools and download the SOA Composite Editor 11.1.1.5

    That's it; you may proceed to the installation.
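    Once the downloads finish, a quick sanity check of the file set is worthwhile before installing. A minimal sketch (the directory is a hypothetical download location; compare the digests with the checksums published on the download pages):

        # List the bits and compute digests to compare against the published checksums
        cd ~/Downloads/oim11115
        ls -lh jdk-6u29-linux-x64.bin V26017-01.zip wls1035_generic.jar \
               ofm_iam_generic_11.1.1.5.0_disk1_1of1.zip \
               ofm_soa_generic_11.1.1.5.0_disk1_1of2.zip \
               ofm_soa_generic_11.1.1.5.0_disk1_2of2.zip
        sha1sum *.zip *.jar *.bin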

    Read the article

  • Why is my Ubuntu system not using the correct kernel?

    - by Brooks Moses
    We're having a bit of confusion on an Ubuntu remote system -- /boot/grub/menu.lst suggests the system should boot into kernel 2.6.35-30-generic, but it is actually running kernel 2.6.32-27-generic. Where should I look to start figuring out why this is happening and how to fix it? Specifically, /boot/grub/menu.lst has default 0 and the first entry is

        title Ubuntu 10.10, kernel 2.6.35-30-generic
        uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/vmlinuz-2.6.35-30-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro quiet splash
        initrd /boot/initrd.img-2.6.35-30-generic

    Further, I've confirmed that /boot/vmlinuz-2.6.35-30-generic and /boot/initrd.img-2.6.35-30-generic exist and have appropriate permissions. Meanwhile, uname -a returns:

        $ uname -a
        Linux cuda2 2.6.32-27-generic #49-Ubuntu SMP Thu Dec 2 00:51:09 UTC 2010 x86_64 GNU/Linux

    Edit: I've also tried re-running update-grub, and rebooting; no luck. Here's the full menu.lst, as requested by a commenter:

        # menu.lst - See: grub(8), info grub, update-grub(8)
        #            grub-install(8), grub-floppy(8),
        #            grub-md5-crypt, /usr/share/doc/grub
        #            and /usr/share/doc/grub-legacy-doc/.

        ## default num
        # Set the default entry to the entry number NUM. Numbering starts from 0, and
        # the entry number 0 is the default if the command is not used.
        #
        # You can specify 'saved' instead of a number. In this case, the default entry
        # is the entry saved with the command 'savedefault'.
        # WARNING: If you are using dmraid do not use 'savedefault' or your
        # array will desync and will not let you boot your system.
        default 0

        ## timeout sec
        # Set a timeout, in SEC seconds, before automatically booting the default entry
        # (normally the first entry defined).
        timeout 3

        ## hiddenmenu
        # Hides the menu by default (press ESC to see the menu)
        hiddenmenu

        # Pretty colours
        #color cyan/blue white/blue

        ## password ['--md5'] passwd
        # If used in the first section of a menu file, disable all interactive editing
        # control (menu entry editor and command-line) and entries protected by the
        # command 'lock'
        # e.g. password topsecret
        #      password --md5 $1$gLhU0/$aW78kHK1QfV3P2b2znUoe/
        # password topsecret
        #
        # examples
        #
        # title Windows 95/98/NT/2000
        # root (hd0,0)
        # makeactive
        # chainloader +1
        #
        # title Linux
        # root (hd0,1)
        # kernel /vmlinuz root=/dev/hda2 ro
        #
        #
        # Put static boot stanzas before and/or after AUTOMAGIC KERNEL LIST

        ### BEGIN AUTOMAGIC KERNELS LIST
        ## lines between the AUTOMAGIC KERNELS LIST markers will be modified
        ## by the debian update-grub script except for the default options below

        ## DO NOT UNCOMMENT THEM, Just edit them to your needs

        ## ## Start Default Options ##
        ## default kernel options
        ## default kernel options for automagic boot options
        ## If you want special options for specific kernels use kopt_x_y_z
        ## where x.y.z is kernel version. Minor versions can be omitted.
        ## e.g. kopt=root=/dev/hda1 ro
        ##      kopt_2_6_8=root=/dev/hdc1 ro
        ##      kopt_2_6_8_2_686=root=/dev/hdc2 ro
        # kopt=root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro

        ## default grub root device
        ## e.g. groot=(hd0,0)
        # groot=67717ee3-cbf9-45d2-ae97-820256f4c4fd

        ## should update-grub create alternative automagic boot options
        ## e.g. alternative=true
        ##      alternative=false
        # alternative=true

        ## should update-grub lock alternative automagic boot options
        ## e.g. lockalternative=true
        ##      lockalternative=false
        # lockalternative=false

        ## additional options to use with the default boot option, but not with the
        ## alternatives
        ## e.g.
        ##      defoptions=vga=791 resume=/dev/hda5
        # defoptions=quiet splash

        ## should update-grub lock old automagic boot options
        ## e.g. lockold=false
        ##      lockold=true
        # lockold=false

        ## Xen hypervisor options to use with the default Xen boot option
        # xenhopt=

        ## Xen Linux kernel options to use with the default Xen boot option
        # xenkopt=console=tty0

        ## altoption boot targets option
        ## multiple altoptions lines are allowed
        ## e.g. altoptions=(extra menu suffix) extra boot options
        ##      altoptions=(recovery) single
        # altoptions=(recovery mode) single

        ## controls how many kernels should be put into the menu.lst
        ## only counts the first occurence of a kernel, not the
        ## alternative kernel options
        ## e.g. howmany=all
        ##      howmany=7
        # howmany=all

        ## specify if running in Xen domU or have grub detect automatically
        ## update-grub will ignore non-xen kernels when running in domU and vice versa
        ## e.g. indomU=detect
        ##      indomU=true
        ##      indomU=false
        # indomU=detect

        ## should update-grub create memtest86 boot option
        ## e.g. memtest86=true
        ##      memtest86=false
        # memtest86=true

        ## should update-grub adjust the value of the default booted system
        ## can be true or false
        # updatedefaultentry=false

        ## should update-grub add savedefault to the default options
        ## can be true or false
        # savedefault=false

        ## ## End Default Options ##

        title Ubuntu 10.10, kernel 2.6.35-30-generic
        uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/vmlinuz-2.6.35-30-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro quiet splash
        initrd /boot/initrd.img-2.6.35-30-generic

        title Ubuntu 10.10, kernel 2.6.35-30-generic (recovery mode)
        uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/vmlinuz-2.6.35-30-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro single
        initrd /boot/initrd.img-2.6.35-30-generic

        title Ubuntu 10.10, kernel 2.6.32-32-server
        uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/vmlinuz-2.6.32-32-server root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro quiet splash
        initrd /boot/initrd.img-2.6.32-32-server

        title Ubuntu 10.10, kernel 2.6.32-32-server (recovery mode)
        uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/vmlinuz-2.6.32-32-server root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro single
        initrd /boot/initrd.img-2.6.32-32-server

        title Ubuntu 10.10, kernel 2.6.32-27-generic
        uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/vmlinuz-2.6.32-27-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro quiet splash
        initrd /boot/initrd.img-2.6.32-27-generic

        title Ubuntu 10.10, kernel 2.6.32-27-generic (recovery mode)
        uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/vmlinuz-2.6.32-27-generic root=UUID=67717ee3-cbf9-45d2-ae97-820256f4c4fd ro single
        initrd /boot/initrd.img-2.6.32-27-generic

        title Chainload into GRUB 2
        root 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/grub/core.img

        title Ubuntu 10.10, memtest86+
        uuid 67717ee3-cbf9-45d2-ae97-820256f4c4fd
        kernel /boot/memtest86+.bin

        ### END DEBIAN AUTOMAGIC KERNELS LIST

    To add complication and joy to my life, this is a desktop machine in a remote datacenter; we don't have either local access or serial-console access. Suggestions?
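    Since the on-disk menu.lst looks correct, one diagnostic worth running is to check which boot loader actually owns the boot process; note the "Chainload into GRUB 2" stanza above, which means GRUB 2 may be installed and reading its own configuration rather than menu.lst. A minimal sketch (the commands are standard; interpret the output against your own disk layout):

        # Look for a boot loader signature in the MBR of each disk
        for d in /dev/sda /dev/sdb; do
            echo "== $d =="
            sudo dd if=$d bs=512 count=1 2>/dev/null | strings | head
        done

        # If GRUB 2 is present, its grub.cfg is what matters, not menu.lst
        ls -l /boot/grub/core.img /boot/grub/grub.cfg 2>/dev/null
        grep -m 3 'menuentry\|^title' /boot/grub/grub.cfg 2>/dev/null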

    Read the article

  • Upgrading SSIS Custom Components for SQL Server 2012

    Having finally got around to upgrading my custom components to SQL Server 2012, I thought I'd share some notes on the process. One of the goals was minimal duplication, so the same code files are used to build the 2008 and 2012 components; I just have a separate project file. The high level steps are listed below, followed by some more details.

    1. Create a 2012 copy of the project file
    2. Upgrade the project (just open the new project file in VS2010)
    3. Change the target framework to .NET 4.0
    4. Set a conditional compilation symbol for DENALI
    5. Change any conditional code, including the assembly version and UI type name
    6. Edit the project file to change the referenced assemblies for 2012

    Change target framework to .NET 4.0

    Open the project properties. On the Application page, change the Target framework to .NET Framework 4.

    Set conditional compilation symbol for DENALI

    Re-open the project properties. On the Build tab, first change the Configuration to All Configurations, then set a Conditional compilation symbol of DENALI.

    Change any conditional code, including assembly version and UI type name

    The value doesn't have to be DENALI, it can actually be anything you like; that is just what I use. It is how I control sections of code that vary between versions. There were several API changes between 2005 and 2008, as well as interface name changes. Whilst we don't have the same issues between 2008 and 2012, I still have some sections of code that do change, such as the assembly attributes.

        #if DENALI
        [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2012")]
        [assembly: AssemblyCopyright("Copyright © 2012 Konesans Ltd")]
        [assembly: AssemblyVersion("3.0.0.0")]
        #else
        [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2008")]
        [assembly: AssemblyCopyright("Copyright © 2008 Konesans Ltd")]
        [assembly: AssemblyVersion("2.0.0.0")]
        #endif

    The Visual Studio editor automatically formats the code based on the current compilation symbols, hence in this case the 2008 code is grey to indicate it is disabled. As you can see in the previous example, I have distinct assembly version attributes, ensuring I can run both the 2008 and 2012 versions of my component side by side. For custom components with a user interface, be sure to update the UITypeName property of the DtsTask or DtsPipelineComponent attributes. As above, I use the conditional compilation symbol to control the code.

        #if DENALI
        [DtsTask
        (
            DisplayName = "File Watcher Task",
            Description = "File Watcher Task",
            IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
            UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=3.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
            TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2012 Konesans Ltd; http://www.konesans.com"
        )]
        #else
        [DtsTask
        (
            DisplayName = "File Watcher Task",
            Description = "File Watcher Task",
            IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
            UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=2.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
            TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2004-2008 Konesans Ltd; http://www.konesans.com"
        )]
        #endif
        public sealed class FileWatcherTask : Task, IDTSComponentPersist, IDTSBreakpointSite, IDTSSuspend
        {
            // .. code goes on...
        }

    Shown below is another example I found that needed changing.
    I borrow one of the MS editors and use it against a custom property, but I need to ensure I reference the correct version of the MS controls assembly. This section of code is actually shared between the 2005, 2008 and 2012 versions of my component, hence it tests for both the DENALI and KATMAI symbols.

        #if DENALI
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=11.0.00.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #elif KATMAI
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #else
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #endif

        // Create Match Expression parameter
        IDTSCustomPropertyCollection100 propertyCollection = outputColumn.CustomPropertyCollection;
        IDTSCustomProperty100 property = propertyCollection.New();
        property.Name = MatchParams.Name;
        property.Description = MatchParams.Description;
        property.TypeConverter = typeof(MultilineStringConverter).AssemblyQualifiedName;
        property.UITypeEditor = multiLineUI;
        property.Value = MatchParams.DefaultValue;

    Edit project file to change referenced assemblies for 2012

    We now need to edit the project file itself. Open MyComponente2012.csproj in your favourite text editor, and then perform a couple of find-and-replaces as listed below:

    - Find: Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
      Replace: Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
      (Changes the assembly reference versions from SQL Server 2008 to SQL Server 2012.)

    - Find: Microsoft SQL Server\100\
      Replace: Microsoft SQL Server\110\
      (Changes any assembly reference hint path locations from SQL Server 2008 to SQL Server 2012.)

    If you use any Build Events during development, such as copying the component assembly to the DTS folder, or calling GACUTIL to install it into the GAC, you can also change these now. An example of my new post-build event for a pipeline component is shown below, which uses the .NET 4.0 path for GACUTIL. It also uses the 110 folder location instead of 100 for SQL Server 2008, but that was covered by the previous find and replace.

        "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe" /if "$(TargetPath)"
        copy "$(TargetPath)" "%ProgramFiles%\Microsoft SQL Server\110\DTS\PipelineComponents" /Y
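    For what it's worth, the two project-file edits lend themselves to scripting; a minimal sketch using sed (run from the project directory, e.g. in a Git Bash or Cygwin shell; the file name is the example used above):

        # Bump the referenced assembly versions from 2008 (10.0.0.0) to 2012 (11.0.0.0)
        # and the hint paths from the 100 folder to the 110 folder.
        sed -i -e 's/Version=10\.0\.0\.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91/Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91/g' \
               -e 's|Microsoft SQL Server\\100\\|Microsoft SQL Server\\110\\|g' \
               MyComponente2012.csproj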

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system, (product, category, subcategory), to a fundamentally two-level scheme in our customized JIRA instance, (component, subcomponent). In the JDK JIRA system, there is technically a third project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained.

    Below is a listing of the component / subcomponent classification of the JDK JIRA project, along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme.

    The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect.

    client-libs (Predominantly discussed on 2d-dev, awt-dev and swing-dev.)
        2d
        demo
        java.awt
        java.awt:i18n
        java.beans (See beans-dev.)
        javax.accessibility
        javax.imageio
        javax.sound (See sound-dev.)
        javax.swing

    core-libs (See core-libs-dev.)
        java.io
        java.io:serialization
        java.lang
        java.lang.invoke
        java.lang:class_loading
        java.lang:reflect
        java.math
        java.net
        java.nio (Discussed on nio-dev.)
        java.nio.charsets
        java.rmi
        java.sql
        java.sql:bridge
        java.text
        java.util
        java.util.concurrent
        java.util.jar
        java.util.logging
        java.util.regex
        java.util:collections
        java.util:i18n
        javax.annotation.processing
        javax.lang.model
        javax.naming (JNDI)
        javax.script
        javax.script:javascript
        javax.sql
        org.openjdk.jigsaw (See jigsaw-dev.)

    security-libs (See security-dev.)
        java.security
        javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC)
        javax.crypto:pkcs11 (JCE: PKCS11 only)
        javax.net.ssl (JSSE, includes javax.security.cert)
        javax.security
        javax.smartcardio
        javax.xml.crypto
        org.ietf.jgss
        org.ietf.jgss:krb5

    other-libs
        corba
        corba:idl
        corba:orb
        corba:rmi-iiop
        javadb
        other (When no other subcomponent is more appropriate; use judiciously.)

    Most of the subcomponents in the xml component are related to jaxp.

    xml
        jax-ws
        jaxb
        javax.xml.parsers (JAXP)
        javax.xml.stream (JAXP)
        javax.xml.transform (JAXP)
        javax.xml.validation (JAXP)
        javax.xml.xpath (JAXP)
        jaxp (JAXP)
        org.w3c.dom (JAXP)
        org.xml.sax (JAXP)

    For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine.

    hotspot (See hotspot-dev.)
        build
        compiler (See hotspot-compiler-dev.)
        gc (garbage collection, see hotspot-gc-dev.)
        jfr (Java Flight Recorder)
        jni (Java Native Interface)
        jvmti (JVM Tool Interface)
        mvm (Multi-Tasking Virtual Machine)
        runtime (See hotspot-runtime-dev.)
        svc (Serviceability)
        test

    core-svc (See serviceability-dev.)
        debugger
        java.lang.instrument
        java.lang.management
        javax.management
        tools

    The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot, as well as retired APIs.

    vm-legacy
        jit (Sun Exact VM)
        jit_symantec (Symantec VM, before Exact VM)
        jvmdi (JVM Debug Interface)
        jvmpi (JVM Profiler Interface)
        runtime (Exact VM Runtime)

    Notable command line tools in the $JDK/bin directory have corresponding subcomponents.

    tools
        appletviewer
        apt (See compiler-dev.)
        hprof
        jar
        javac (See compiler-dev.)
        javadoc(tool) (See compiler-dev.)
        javah (See compiler-dev.)
        javap (See compiler-dev.)
        jconsole
        launcher
        updaters (Timezone updaters, etc.)
        visualvm

    Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not.

    infrastructure
        build (See build-dev and build-infra-dev.)
        licensing (Covers updates to the third party readme, licenses, and similar files.)
        release_eng (Release engineering)
        staging (Staging of web pages related to JDK releases.)

    The specification component encompasses the formal language and virtual machine specifications.

    specification
        language (The Java Language Specification)
        vm (The Java Virtual Machine Specification)

    The code for the deploy and install areas is not currently included in OpenJDK.

    deploy
        deployment_toolkit
        plugin
        webstart

    install
        auto_update
        install
        servicetags

    In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology.

    docs
        doclet
        guides
        hotspot
        release_notes
        tools
        tutorial

    embedded
        build
        hotspot
        libraries

    globalization
        locale-data
        translation

    performance
        hotspot
        libraries

    The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth, since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • Unmet Dependencies with kdelibs5-data

    - by Jitesh
    I was trying to install Amarok 1.4 on Ubuntu 12.04 (Gnome Classic) by following these instructions. The problem started after giving these two commands:

        dpkg -i kdelibs5-data_4.6.2-0ubuntu4_all.deb
        dpkg -i kdelibs-data_3.5.10.dfsg.1-5ubuntu2_all.deb

    Immediately after these commands, the Ubuntu updater popped up and gave me an error that the package catalog is broken and needs to be repaired; nothing can be installed or removed till then. It also offered a suggestion to run apt-get install -f. I tried that, but again got the same error. I also tried apt-get clean followed by apt-get install -f, and again got the following output:

        jitesh@jitesh-desktop:~$ sudo apt-get clean
        jitesh@jitesh-desktop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          kdelibs5-data
        The following packages will be upgraded:
          kdelibs5-data
        1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
        2 not fully installed or removed.
        Need to get 2,832 kB of archives.
        After this operation, 2,998 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        Get:1 http://in.archive.ubuntu.com/ubuntu/ precise-updates/main kdelibs5-data all 4:4.8.4a-0ubuntu0.2 [2,832 kB]
        Fetched 2,832 kB in 32s (86.6 kB/s)
        dpkg: dependency problems prevent configuration of kdelibs5-data:
         libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
         kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
         katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
        dpkg: error processing kdelibs5-data (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of kdelibs-data:
         kdelibs-data depends on kdelibs5-data; however:
          Package kdelibs5-data is not configured yet.
        No apport report written because MaxReports is reached already
        dpkg: error processing kdelibs-data (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         kdelibs5-data
         kdelibs-data
        W: Duplicate sources.list entry http://archive.canonical.com/ubuntu/ precise/partner i386 Packages (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_precise_partner_binary-i386_Packages)
        W: You may want to run apt-get update to correct these problems
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    As I thought the error was related to configuring kdelibs, I tried to configure it using dpkg, but got the following errors:

        jitesh@jitesh-desktop:~$ sudo dpkg --configure -a
        dpkg: dependency problems prevent configuration of kdelibs5-data:
         libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
         kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
         katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
        dpkg: error processing kdelibs5-data (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of kdelibs-data:
         kdelibs-data depends on kdelibs5-data; however:
          Package kdelibs5-data is not configured yet.
        dpkg: error processing kdelibs-data (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         kdelibs5-data
         kdelibs-data
        jitesh@jitesh-desktop:~$

    Now I don't have any idea how to proceed. I am unable to install anything from the Software Centre or using the terminal now. Some basic info: Core2Duo, dual booting Ubuntu 12.04 with Win7; this is a fresh install of Ubuntu 12.04 (not an upgrade). Incidentally, I had first upgraded from 10.04 and had successfully installed Amarok 1.4 following this same method, but due to other issues I had to format and do a clean install of 12.04. Now when I try to install Amarok 1.4, I'm getting these errors. I also have digiKam and k3b installed, if that can be of any help. I use digiKam a lot, so removing KDE is not feasible for me. Any help on this issue will be highly appreciated. Thanks in advance.
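    For anyone hitting the same wall, one possible recovery path (an assumption, not a tested fix: it removes the downgraded packages so apt can restore a consistent state, which may also remove packages that strictly depend on them):

        sudo dpkg --remove --force-depends kdelibs5-data kdelibs-data
        sudo apt-get update
        sudo apt-get install -f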

    Read the article

  • Extending Blend for Visual Studio 2013

    - by Chris Skardon
    Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/11/01/extending-blend-for-visual-studio-2013.aspx

    So, I got a comment yesterday on my post about Extending Blend 4 and Blend for Visual Studio 2012, asking if I knew how to get it working for Blend for Visual Studio 2013. My initial thought was: just change the location to get the Blend dlls from Visual Studio 11.0 to 12.0 and you're all set. So I went to do that, only to discover that the dlls I normally reference - well, they don't exist. So... I made a presumption that the actual process of using MEF etc. is still the same. I was wrong. The route to discovery required DotPeek and opening a few of Blend's dlls. Browsing through the Blend install directory (./Microsoft Visual Studio 12.0/Blend/), I noticed the .addin files. So I decided to peek into the SketchFlow dll, then promptly remembered SketchFlow is quite a big thing and hunting through there is not ideal. Luckily there is another dll using an .addin file, 'Microsoft.Expression.Importers.Host', so we'll go for that instead. We can see it's still using the 'IPackage' formula, but where is that sucker? Well, we just press F12 on the 'IPackage' bit and DotPeek takes us there, with a very handy comment at the top:

        // Type: Microsoft.Expression.Framework.IPackage
        // Assembly: Microsoft.Expression.Framework, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
        // MVID: E092EA54-4941-463C-BD74-283FD36478E2
        // Assembly location: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Microsoft.Expression.Framework.dll

    Now we know where the IPackage interface is defined, so let's just try writing a control. Last time I did a separate dll for the control; this time I'm not, but it still works if you want to do it that way. Let's build a control!

    STEP 1
    Create a new WPF application. Naming doesn't matter any more! I have gone with 'Hello2013' (see what I did there?)

    STEP 2
    Delete App.Config, App.xaml and MainWindow.xaml. We won't be needing them.

    STEP 3
    Change your application to be a Class Library instead. (You might also want to delete the 'vshost' stuff in your output directory now, as it only exists for hosting the WPF app and just causes clutter.)

    STEP 4
    Add a reference to 'Microsoft.Expression.Framework.dll' (which you can find in 'C:\Program Files\Microsoft Visual Studio 12.0\Blend' - that's Program Files (x86) if you're on an x64 machine!).
    STEP 5
    Add a User Control. I'm going with 'Hello2013Control', and following from last time, it's just a TextBlock in a Grid:

        <UserControl x:Class="Hello2013.Hello2013Control"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     mc:Ignorable="d"
                     d:DesignHeight="300" d:DesignWidth="300">
            <Grid>
                <TextBlock>Hello Blend for VS 2013</TextBlock>
            </Grid>
        </UserControl>

    STEP 6
    Add a class to load the package - I've called it - yes, you guessed - Hello2013Package, which will look like this:

        namespace Hello2013
        {
            using Microsoft.Expression.Framework;
            using Microsoft.Expression.Framework.UserInterface;

            public class Hello2013Package : IPackage
            {
                private Hello2013Control _hello2013Control;
                private IWindowService _windowService;

                public void Load(IServices services)
                {
                    _windowService = services.GetService<IWindowService>();
                    Initialize();
                }

                private void Initialize()
                {
                    _hello2013Control = new Hello2013Control();
                    if (_windowService.PaletteRegistry["HelloPanel"] == null)
                        _windowService.RegisterPalette("HelloPanel", _hello2013Control, "Hello Window");
                }

                public void Unload() { }
            }
        }

    You might note that, compared to the 2012 version, we're no longer [Exporting(typeof(IPackage))]. The file you create in STEP 7 covers this for us.

    STEP 7
    Add a new file called '<PROJECT_OUTPUT_NAME>.addin' - in reality you can call it anything and it'll still be read in just fine; it's just nicer if it all matches up, so I have 'Hello2013.addin'. Content-wise, we need to have:

        <?xml version="1.0" encoding="utf-8"?>
        <AddIn AssemblyFile="Hello2013.dll" />

    obviously replacing 'Hello2013.dll' with whatever your dll is called.

    STEP 8
    Set the 'addin' file to be copied to the output directory.

    STEP 9
    Build!

    STEP 10
    Go to your output directory (./bin/debug), copy the 3 files (Hello2013.dll, Hello2013.pdb, Hello2013.addin) and paste them into the 'Addins' folder in your Blend directory (C:\Program Files\Microsoft Visual Studio 12.0\Blend\Addins).

    STEP 11
    Start Blend for Visual Studio 2013.

    STEP 12
    Go to the 'Window' menu and select 'Hello Window'.

    STEP 13
    Marvel at your new control! Feel free to email me / comment with any problems!

    Read the article

  • Installing Ubuntu 12.04.1 x64 with Fake RAID 1 [SOLVED]

    - by Arkadius
    I had:

    Software:
    - Dual boot with Windows XP and Ubuntu 10.04 LTS x32

    Hardware:
    - Fake RAID 1 (mirroring) with 2x1 TB:
      - Partition 1 - Windows
      - Partition 2 - SWAP
      - Partition 3 - / (root)
      - Partition 4 - Extended
      - Partition 5 - /home
      - Partition 6 - /data

        arek@domek:/var/log/installer$ sudo fdisk -l

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63   524297339   262148638+   7  HPFS/NTFS/exFAT
        /dev/sda2       524297340   528506369     2104515   82  Linux swap / Solaris
        /dev/sda3       528506370   570468149    20980890   83  Linux
        /dev/sda4       570468150  1953118439   691325145    5  Extended
        /dev/sda5       570468213   675340469    52436128+  83  Linux
        /dev/sda6       675340533  1953118439   638888953+  83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de1b9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   524297339   262148638+   7  HPFS/NTFS/exFAT
        /dev/sdb2       524297340   528506369     2104515   82  Linux swap / Solaris
        /dev/sdb3       528506370   570468149    20980890   83  Linux
        /dev/sdb4       570468150  1953118439   691325145    5  Extended
        /dev/sdb5       570468213   675340469    52436128+  83  Linux
        /dev/sdb6       675340533  1953118439   638888953+  83  Linux

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xed6d054b

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1              63  1953520064   976760001    7  HPFS/NTFS/exFAT

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        total 0
        crw------- 1 root root 10, 236 Oct  7 20:17 control
        lrwxrwxrwx 1 root root       7 Oct  7 20:17 pdc_jhjbcaha -> ../dm-0
        lrwxrwxrwx 1 root root       7 Oct  7 20:17 pdc_jhjbcaha1 -> ../dm-1
        lrwxrwxrwx 1 root root       7 Oct  7 20:17 pdc_jhjbcaha2 -> ../dm-2
        lrwxrwxrwx 1 root root       7 Oct  7 20:17 pdc_jhjbcaha3 -> ../dm-3
        lrwxrwxrwx 1 root root       7 Oct  7 20:17 pdc_jhjbcaha4 -> ../dm-4
        lrwxrwxrwx 1 root root       7 Oct  7 20:17 pdc_jhjbcaha5 -> ../dm-5
        lrwxrwxrwx 1 root root       7 Oct  7 20:17 pdc_jhjbcaha6 -> ../dm-6

    I wanted to upgrade from 10.04 x32 to 12.04 x64 using a FRESH installation, so I ran the installation of Ubuntu 12.04.1 x64 LTS using the alternate CD. During the installation I selected manual partitioning and chose to:
    - Use and Format / (root)
    - Use and Format SWAP
    - Use and Keep data on /home
    - Use and Keep data on /data

    After I clicked "Continue" I got an error creating and formatting the SWAP partition. I went to a terminal with Alt+F2 (?) and hit Enter, and discovered that the RAID was visible as only a disk with NO partitions, something like this:

        arek@domek:/var/log/installer$ ls -l /dev/mapper/
        lrwxrwxrwx 1 root root 7 Oct 7 20:17 /dev/mapper/pdc_jhjbcaha -> ../dm-0
        arek@domek:/var/log/installer$ ls -l /dev/dm*
        brw-rw---- 1 root disk 252, 0 Oct 7 20:17 /dev/dm-0

    So I switched to the log console with Alt+F3 (?)
    and saw errors like below:

        Oct 7 14:02:45 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:45 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:45 anna-install: Installing dmraid-udeb
        Oct 7 14:02:45 anna[12599]: DEBUG: retrieving dmraid-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving libdmraid1.0.0.rc16-udeb 1.0.0.rc16-4.1ubuntu8
        Oct 7 14:02:49 anna[12599]: DEBUG: retrieving kpartx-udeb 0.4.9-3ubuntu5
        Oct 7 14:02:49 disk-detect: Serial ATA RAID disk(s) detected.
        Oct 7 14:02:55 disk-detect: Enabling dmraid support.
        Oct 7 14:02:55 disk-detect: RAID set "pdc_jhjbcaha" was activated
        Oct 7 14:02:55 HERE --> dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha
        Oct 7 14:02:56 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping
        Oct 7 14:02:56 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (libnewt0.52): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (ext2-modules): package doesn't exist (ignored)
        Oct 7 14:02:57 main-menu[428]: INFO: Menu item 'partman-base' selected
        Oct 7 14:02:57 kernel: [ 316.512999] NTFS driver 2.1.30 [Flags: R/O MODULE].
        Oct 7 14:02:57 kernel: [ 316.523221] Btrfs loaded
        Oct 7 14:02:57 kernel: [ 316.534781] JFS: nTxBlock = 8192, nTxLock = 65536
        Oct 7 14:02:57 kernel: [ 316.554749] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        Oct 7 14:02:57 kernel: [ 316.555336] SGI XFS Quota Management subsystem
        Oct 7 14:02:58 md-devices: mdadm: No arrays found in config file or automatically
        Oct 7 14:02:58 partman: No matching physical volumes found
        Oct 7 14:02:58 partman: No volume groups found
        Oct 7 14:02:58 partman: Reading all physical volumes. This may take a while...
        Oct 7 14:02:58 partman-lvm: No volume groups found
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha'
        Oct 7 14:06:11 HERE --> partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory
        Oct 7 14:07:28 init: starting pid 401, tty '/dev/tty2': '-/bin/sh'
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
        Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface lo

    As you can see, there are two errors:

        Oct 7 14:02:55 dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha

    and

        Oct 7 14:06:11 partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory

    I looked on the internet and tried to run the command "dmraid -ay", and got something like this:

        dmraid -ay
        /dev/mapper/pdc_jhjbcaha -> Already activated
        /dev/mapper/pdc_jhjbcaha1 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha2 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha3 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha4 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha5 -> Successfully activated
        /dev/mapper/pdc_jhjbcaha6 -> Successfully activated

    Then I returned to the installer with Alt+F1 (?) and clicked "Return" to go back to the partitioning menu.
    I did NOT change anything, just selected "Continue" again, and everything went smoothly. I hope this will help someone. arkadius
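    For reference, a condensed sketch of the workaround (assuming the alternate installer's consoles are laid out as described above):

        # From the installer, switch to a shell (Alt+F2, press Enter), then:
        dmraid -ay          # activate the fakeRAID partition mappings
        ls -l /dev/mapper/  # verify pdc_jhjbcaha1..6 now exist
        # switch back to the installer (Alt+F1) and re-run the partitioning step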

    Read the article

  • Stepping outside Visual Studio IDE [Part 1 of 2] with Eclipse

    - by mbcrump
    "If you're walking down the right path and you're willing to keep walking, eventually you'll make progress." - Barack Obama

    In my quest to become a better programmer, I've decided to start the process of learning Java. I will be primarily using the Eclipse IDE. I will not bore you with the history, just what is needed for a .NET developer to get up and running. I will provide links, screenshots and a few brief code tutorials.

    Links to documentation:
    - The Official Eclipse FAQs

    Links to binaries:
    - Eclipse IDE for Java EE Developers, the Galileo Package (based on Eclipse 3.5 SR2)
    - Sun Developer Network - Java

    Eclipse officially recommends Java version 5 (also known as 1.5), although many Eclipse users use the newer version 6 (1.6). That's it; nothing more is required except to compile and run Java.

    Installation

    Unzip the Eclipse IDE for Java EE Developers and double-click the file named Eclipse.exe. You will probably want to create a link for it on your desktop. Once it's installed and launched, you will have to select a workspace; just accept the defaults.

    Let's go ahead and write a simple program. To write a "Hello World" program, follow these steps:

    1. Start Eclipse.
    2. Create a new Java Project: File->New->Project.
       - Select "Java" in the category list.
       - Select "Java Project" in the project list.
       - Click "Next".
       - Enter a project name into the Project name field, for example, "HW Project".
       - Click "Finish".
       - Allow it to open the Java perspective.
    3. Create a new Java class:
       - Click the "Create a Java Class" button in the toolbar. (This is the icon below "Run" and "Window" with a tooltip that says "New Java Class.")
       - Enter "HW" into the Name field.
       - Click the checkbox indicating that you would like Eclipse to create a "public static void main(String[] args)" method.
       - Click "Finish".
       - A Java editor for HW.java will open.
    4. In the main method, enter the following line:
           System.out.println("This is my first java program and btw Hello World");
    5. Save using Ctrl-S. This automatically compiles HW.java.
    6. Click the "Run" button in the toolbar (looks like a VCR play button).
    7. You will be prompted to create a Launch configuration. Select "Java Application" and click "New".
    8. Click "Run" to run the Hello World program. The console will open and display "This is my first java program and btw Hello World".

    You now have your first Java program; let's go ahead and make an applet. Since you already have HW.java open, click inside the window and remove all the code, then copy/paste the following code snippet.

    Java code snippet for an applet:

        import java.applet.Applet;
        import java.awt.Graphics;
        import java.awt.Color;

        @SuppressWarnings("serial")
        public class HW extends Applet {

            String text = "I'm a simple applet";

            public void init() {
                text = "I'm a simple applet";
                setBackground(Color.GREEN);
            }

            public void start() {
                System.out.println("starting...");
            }

            public void stop() {
                System.out.println("stopping...");
            }

            public void destroy() {
                System.out.println("preparing to unload...");
            }

            public void paint(Graphics g) {
                System.out.println("Paint");
                g.setColor(Color.blue);
                g.drawRect(0, 0, getSize().width - 1, getSize().height - 1);
                g.setColor(Color.black);
                g.drawString(text, 15, 25);
            }
        }

    Click "Run" to run the Hello World applet. Now, let's test our new Java applet.
    So, navigate over to your workspace, for example "C:\Users\mbcrump\workspace\HW Project\bin", and you should see 2 files: HW.class and java.policy.applet. Create an HTML page with the following code:

        <HTML>
          <BODY>
            <APPLET CODE=HW.class WIDTH=200 HEIGHT=100>
            </APPLET>
          </BODY>
        </HTML>

    Open the HTML page in Firefox or IE and you will see your applet running. I hope this brief look at the Eclipse IDE helps someone get acquainted with Java development. Even if your full-time gig is with .NET, it will not hurt to have another language in your tool belt. As always, I welcome any suggestions or comments.
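    If the browser route gives you trouble, the JDK's appletviewer tool offers a quick command-line check; a minimal sketch, assuming the JDK bin directory is on your PATH and the HTML above was saved as hello.html next to HW.class:

        cd "/c/Users/mbcrump/workspace/HW Project/bin"  # substitute your own workspace path
        appletviewer hello.html                         # opens a window hosting the applet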

    Read the article

  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification recently reached Early Draft 3. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week was another milestone, when the first Java EE 7 specification implementation was added to GlassFish 4 builds. Jakub blogged about the Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality.

    Create a Java EE 6-style Maven project:

        mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes \
            -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example \
            -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

    Note, this is still a Java EE 6 archetype, at least for now. Open the project in the NetBeans IDE, as it makes it much easier to edit/add the files. Add the following <repositories>:

        <repositories>
          <repository>
            <id>snapshot-repository.java.net</id>
            <name>Java.net Snapshot Repository for Maven</name>
            <url>https://maven.java.net/content/repositories/snapshots/</url>
            <layout>default</layout>
          </repository>
        </repositories>

    Add the following <dependency>s:

        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.10</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>javax.ws.rs</groupId>
          <artifactId>javax.ws.rs-api</artifactId>
          <version>2.0-m09</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>org.glassfish.jersey.core</groupId>
          <artifactId>jersey-client</artifactId>
          <version>2.0-m05</version>
          <scope>test</scope>
        </dependency>

    The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class:

        @Path("movies")
        public class MoviesResource {

            @GET
            @Path("list")
            public List<Movie> getMovies() {
                List<Movie> movies = new ArrayList<Movie>();
                movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
                movies.add(new Movie("Toy Story", "Buzz Light Year"));
                movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
                return movies;
            }
        }

    This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project is using the standard JAX-RS APIs. Of course, you need the trivial "Movie" and "Application" classes as well; they are available in the downloadable project anyway.

    Build the project:

        mvn package

    And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start with "bin/asadmin start-domain"):

        asadmin deploy --force=true target/jersey2-helloworld.war

    Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the function "testGetMovies" with:

        @Test
        public void testGetMovies() {
            System.out.println("getMovies");
            Client client = ClientFactory.newClient();
            List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
                    .request()
                    .get(new GenericType<List<Movie>>() {});
            assertEquals(3, movieList.size());
        }

    This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource.
    Run the test by giving the command "mvn test" and see output like this:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running example.MoviesResourceTest
        getMovies
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

        Results :

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using standard APIs anyway; this workaround is only required if Jersey 1.x functionality needs to be accessed. The complete source code explained in this project can be downloaded from here. Here are some pointers to follow:

    - JAX-RS 2 Specification Early Draft 3
    - Latest status on the specification (jax-rs-spec.java.net)
    - Latest JAX-RS 2.0 Javadocs
    - Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
    - Latest Jersey API Javadocs
    - Latest GlassFish 4.0 Promoted Build
    - Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].
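    For a quick smoke test outside JUnit, you can also hit the deployed resource directly; a minimal sketch using curl against the same URL the test uses (the exact representation returned depends on the entity providers configured for the Movie class):

        curl -v http://localhost:8080/jersey2-helloworld/webresources/movies/list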

    Read the article

  • Migrating Virtual Iron guest to Oracle VM 3.x

    - by scoter
    As stated on the official site, Oracle in 2009 acquired a provider of server virtualization management software named Virtual Iron; you can find all the acquisition details at this link. In the FAQ on the official site you can also see that, for the future, Oracle plans to fully integrate Virtual Iron technology into Oracle VM products, and any enhancements will be delivered as part of the combined solution; this is what is going on with Oracle VM 3.x. So, customers started asking us to migrate Virtual Iron guests to Oracle VM.

    IMPORTANT: This procedure needs a dedicated OVM-Server with no guests running on top; be careful while executing this procedure on production environments.

    In these little steps you will find how to migrate, as fast as possible, your guests between VI (Virtual Iron) and Oracle VM; keep in mind that Oracle VM has a built-in P2V utility (Official Documentation) that you can use to migrate guests between VI and Oracle VM.

    Concepts: VI repositories. On VI we have the same "repository" concept as in Oracle VM; the difference between these two products is that VI uses a raw lun as the repository (instead of using ocfs2 and its capabilities, like ref-links). From a pure operating-system perspective, the VI "raw-lun" repository is an LVM2 volume-group created on that raw lun; from the hypervisor perspective, each guest virtual-disk is a logical volume inside it. So, the relationships are:

        LVM2 Volume-Group   <->  VI Repository
        LVM2 Logical-Volume <->  VI guest virtual-disk

    The first step is to present the VI repository (raw lun) to your dedicated OVM-Server.

    Prepare dedicated OVM-Server

    On the OVM-Server (OVS) you need to discover the new lun and, after that, discover the volume-group and logical-volumes contained in the VI repository; due to the default OVS configuration, you need to edit the lvm2 configuration file /etc/lvm/lvm.conf:

        # By default for OVS we restrict every block device:
        # filter = [ "r/.*/" ]

    and comment out the line starting with "filter" as above.
    Now you have to discover the presented raw lun and, next, activate the volume-group and logical-volumes:

        #!/bin/bash
        for HOST in `ls /sys/class/scsi_host`; do
            echo '- - -' > /sys/class/scsi_host/$HOST/scan
        done
        CPATH=`pwd`
        cd /dev
        for DEVICE in `ls sd[a-z] sd?[a-z]`; do
            echo '1' > /sys/block/$DEVICE/device/rescan
        done
        cd $CPATH
        cd /dev/mapper
        for PARTITION in `ls *[a-z] *?[a-z]`; do
            partprobe /dev/mapper/$PARTITION
        done
        cd $CPATH
        vgchange -a y

    After that you will see a new device:

        [root@ovs01 ~]# cd /dev/6000F4B00000000000210135bef64994
        [root@ovs01 6000F4B00000000000210135bef64994]# ls -l 6000F4B0000000000061013*
        lrwxrwxrwx 1 root root 77 Oct 29 10:50 6000F4B00000000000610135c3a0b8cb -> /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb

    From your OVM-Manager, create a guest server with the same definition as on VI:
    - same core number as the VI source guest
    - same memory as the VI source guest
    - same number of disks as the VI source guest (you can create the OVS virtual disks with a small size of 1GB, because the "clone" will eventually extend the size of your new virtual disks)

    Summarizing:
    - source virtual-disk path (VI): /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
    - dest virtual-disk path (OVS): /OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img **

    ** = to identify your virtual disk, verify its name in the "vm.cfg" file of your new guest.

    Clone the VI virtual-disk to the OVS virtual-disk:

        dd if=/dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb of=/OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img

    Clean unsupported parameters and changes on OVS:

    1. Restore the original /etc/lvm/lvm.conf:

        # By default for OVS we restrict every block device:
        filter = [ "r/.*/" ]

       that is, uncomment the line starting with "filter" as above.
    2. Force-stop the lvm2-monitor service: service lvm2-monitor force-stop
    3. Restore the original /etc/lvm directories (archive, backup and cache): cd /etc/lvm; rm -fr archive backup cache; mkdir archive backup cache
    4. Reboot the OVS.

    Refresh the OVS repository and start your guest: from OracleVM Manager, refresh your repository, then start your "migrated" guest.

    Comments and corrections are welcome. Simon COTER
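    Before booting the migrated guest, it can be worth sanity-checking the clone; a minimal sketch using the same source and destination paths as above (checksumming reads the entire disk, so expect it to take a while):

        SRC=/dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
        DST=/OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img
        blockdev --getsize64 $SRC   # should match the image size below
        ls -l $DST
        md5sum $SRC $DST            # identical digests mean the dd completed cleanly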

    Read the article

  • A very useful custom component

    - by Kevin Smith
    Whenever I am debugging a problem in WebCenter Content (WCC), I often find it useful to see the contents of the internal data binder used by WCC when executing a service. I want to know the value of all parameters passed in by the caller, either a user in the web GUI or an application calling the service via RIDC or web services. I also want to know the value of binder variables calculated by WCC as it processes a service. What defaults has it applied based on configuration settings or profile rules? What values has it derived based on the user input?

    To help with this I created a component that uses a Java filter to dump out the contents of the internal data binder to the WCC trace file. It dumps the binder contents using the toString() method. You can register this filter code using many different filter hooks to see how the binder is updated as WCC processes the service. By default, it uses the validateStandard filter hook, which is useful during a CHECKIN service. It uses the system trace section, so make sure that trace section is enabled before looking for the output from this component. Here is some sample output:

        >system/6    10.09 09:57:40.648    IdcServer-1    filter: postParseDataForServiceRequest, binder start --
        system/6    10.09 09:57:40.698    IdcServer-1    *** LocalData ***
        system/6    10.09 09:57:40.698    IdcServer-1    (10 keys + 0 defaults)
        system/6    10.09 09:57:40.698    IdcServer-1    ClientEncoding=UTF-8
        system/6    10.09 09:57:40.698    IdcServer-1    IdcService=CHECKIN_UNIVERSAL
        system/6    10.09 09:57:40.698    IdcServer-1    NoHttpHeaders=0
        system/6    10.09 09:57:40.698    IdcServer-1    UserDateFormat=iso8601
        system/6    10.09 09:57:40.698    IdcServer-1    UserTimeZone=UTC
        system/6    10.09 09:57:40.698    IdcServer-1    dDocTitle=Check in from RIDC using Framework Folder
        system/6    10.09 09:57:40.698    IdcServer-1    dDocType=Document
        system/6    10.09 09:57:40.698    IdcServer-1    dSecurityGroup=Public
        system/6    10.09 09:57:40.698    IdcServer-1    parentFolderPath=/folder1/folder2
        system/6    10.09 09:57:40.698    IdcServer-1    primaryFile=testfile5.bin
        system/6    10.09 09:57:40.698    IdcServer-1    ***  RESULT SETS  ***
        >system/6    10.09 09:57:40.698    IdcServer-1    binder end --------------------------------------------

    See the readme included in the component for more details. You can download the component from here.

    Read the article

  • Mounting ddrescue image after recovery (in over my head)

    - by BorgDomination
    I'm having problems mounting the recovery image. I've tried to mount the image multiple ways:

        quark@DS9 ~ $ sudo mount -t ext4 /media/jump1/1recover/sdb1.img /mnt
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        quark@DS9 ~ $ sudo mount -r -o loop /media/jump1/1recover/sdb1.img recover
        mount: you must specify the filesystem type

        quark@DS9 ~ $ sudo mount /media/jump1/1recover/sdb1.img mnt
        mount: you must specify the filesystem type

    It doesn't even give me detailed information on the file I just made; Nautilus says it's 160 GB.

        quark@DS9 ~ $ file /media/jump1/1recover/sdb1.img
        /media/jump1/1recover/sdb1.img: data

        quark@DS9 ~ $ mmls /media/jump1/1recover/sdb1.img
        Cannot determine partition type

    I'm not sure what I'm doing wrong, or if I started this process incorrectly from the beginning. I've outlined what I've done so far below. I'm clueless; I'd appreciate it if someone had some input for me.

    What I have done from the beginning

    My laptop has two hard drives. One has the dual-boot Windows 7 / Linux Mint system files; the secondary one contained my /home folder. The laptop was jarred and the /home disk was broken. I tried a LiveCD recovery; it failed. It wouldn't even load a Live session with the disk installed. So I turned to ddrescue.

        quark@DS9 ~ $ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009fc18

           Device Boot      Start        End     Blocks   Id  System
        /dev/sda1   *        2048  112642047   56320000    7  HPFS/NTFS/exFAT
        /dev/sda2       138033152  312580095   87273472   83  Linux
        /dev/sda3       112644094  138033151   12694529    5  Extended
        /dev/sda5       112644096  132173823    9764864   83  Linux
        /dev/sda6       132175872  138033151    2928640   82  Linux swap / Solaris

        Partition table entries are not in disk order

        Disk /dev/sdb: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002a8ea

           Device Boot      Start        End     Blocks   Id  System
        /dev/sdb1   *          63  312576704  156288321   83  Linux

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xed6d054b

           Device Boot      Start         End     Blocks   Id  System
        /dev/sdc1              63  1953520064  976760001    7  HPFS/NTFS/exFAT

    sda - 160g internal, holds all system files and all computer functions.
    sdb - 160g internal, BROKEN, contains about 140g of data I'd like to recover.
    sdc - 1T external, contains the recovery image. The only place that has space to do all this.

    From this site, https://apps.education.ucsb.edu/wiki/Ddrescue, I used this script to create an image of the broken hard drive. I changed the destination to the external USB drive.
        #!/bin/sh
        prt=sdb1
        src=/dev/$prt
        dst=/media/jump1/1recover/$prt.img
        log=$dst.log
        sudo time ddrescue --no-split $src $dst $log
        sudo time ddrescue --direct --max-retries=3 $src $dst $log
        sudo time ddrescue --direct --retrim --max-retries=3 $src $dst $log

    Everything looked like it came off without a hitch:

        quark@DS9 ~ $ sudo bash recover1
        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued:         0 B,  errsize:       0 B,  errors:       0
        Current status
        rescued:    160039 MB,  errsize:    4096 B,  current rate:    35588 B/s
           ipos:      3584 B,   errors:       1,    average rate:  22859 kB/s
           opos:      3584 B,   time from last successful read:       0 s
        Finished
        12.78user 1060.42system 1:56:41elapsed 15%CPU (0avgtext+0avgdata 4944maxresident)k
        312580958inputs+0outputs (1major+601minor)pagefaults 0swaps

        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued:    160039 MB,  errsize:    4096 B,  errors:       1
        Current status
        rescued:    160039 MB,  errsize:    1024 B,  current rate:        0 B/s
           ipos:      1536 B,   errors:       1,    average rate:       13 B/s
           opos:      1536 B,   time from last successful read:     1.3 m
        Finished
        0.00user 0.00system 3:43.95elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k
        238inputs+0outputs (3major+374minor)pagefaults 0swaps

        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued:    160039 MB,  errsize:    1024 B,  errors:       1
        Current status
        rescued:    160039 MB,  errsize:    1024 B,  current rate:        0 B/s
           ipos:      1536 B,   errors:       1,    average rate:        0 B/s
           opos:      1536 B,   time from last successful read:     3.7 m
        Finished
        0.00user 0.00system 3:43.56elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k
        8inputs+0outputs (0major+376minor)pagefaults 0swaps

    From where I'm standing, it looks like it worked perfectly. Here's the log:

        # Rescue Logfile. Created by GNU ddrescue version 1.14
        # Command line: ddrescue --direct --retrim --max-retries=3 /dev/sdb1 /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img.log
        # current_pos  current_status
        0x00000600     +
        #      pos        size  status
        0x00000000  0x00000400  +
        0x00000400  0x00000400  -
        0x00000800  0x254314FC00  +

    I'm not sure how to proceed. Does this mean all of my data is lost? I'd appreciate ANY input!
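
    One detail in the ddrescue log points at a likely cause rather than total loss: the only unreadable region is 0x400 bytes starting at offset 0x400, which is exactly the location and size of the primary ext2/3/4 superblock. That would explain why file sees only "data" and every mount attempt complains about the superblock, even though the other ~160 GB rescued cleanly. ext filesystems keep backup superblocks, so a sketch along these lines may recover the image; the block size and superblock location below are common defaults, not values verified against this filesystem:

        #!/bin/bash
        IMG=/media/jump1/1recover/sdb1.img

        # Dry run (-n): print where mke2fs WOULD place superblocks on a
        # filesystem of this size, without writing anything.
        sudo mke2fs -n "$IMG"

        # Ask e2fsck to use a backup superblock; 32768 with a 4 KiB block
        # size is the usual first backup -- substitute a value reported by
        # the -n dry run. This writes repairs to the image copy, never to
        # the failing disk.
        sudo e2fsck -b 32768 -B 4096 "$IMG"

        # If e2fsck restores the primary superblock, the loop mount should work:
        sudo mount -o loop,ro "$IMG" /mnt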

    Read the article

  • How to run MySQL drop and create synonym in a shell script

    - by bgrif
    I have added this command to a script I am writing and I am running into an issue with it not logging into MySQL and running the commands. How can I fix this and make it run?

        #! /bin/bash
        Subject: Please stage the following TFL09143 Locator Bulletin to all TF90 staging environments:

        # This next section is to go to mysql server and make changes. you can drop and create synonyms truncate a table and insert into a different one. you will be able to verify the counts to the different locations #

        $ mysql --host=app03-bsi --u "" --p "" "TF90BPS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90BP.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90BP.TFBPS3.BTXSUPB && TRUNCATE TABLE TF90BP.TFBPS3.BTXSUPB SELECT * FROM TF90BP.TFBPS2.BTXSUPB; select count () from TF90BP.TF90.BTXADDR select count() from TF90BPS.TF90.BTXADDR; select count() from TF90BP.TF90.BTXSUPB; select count() from TF90BPS.TF90.BTXSUPB;"

        $ mysql --host=app03-bsi --u "" --p "" "TF90LMS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90LM.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90LM.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90LM.TFLMS2.BTXADDR; TRUNCATE TABLE TF90LM.TFLMS3.BTXSUPB; INSERT INTO TF90LM.TFLMS3.BTXSUPB SELECT * FROM TF90LM.TFLMS2.BTXSUPB; Verify select count() from TF90LM.TF90.BTXADDR; select count() from TF90LMS.TF90.BTXADDR; select count() from TF90LM.TF90.BTXSUPB; select count() from TF90LMS.TF90.BTXSUPB"

        $ mysql --host=app03-bsi --u "" --p "" "TF90NCS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90NC.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90NC.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90NC.TFNCS2.BTXADDR; TRUNCATE TABLE TF90NC.TFNCS3.BTXSUPB; INSERT INTO TF90NC.TFNCS3.BTXSUPB SELECT * FROM TF90NC.TFNCS2.BTXSUPB; Verify select count() from TF90NC.TF90.BTXADDR; select count() from TF90NCS.TF90.BTXADDR; select count() from TF90NC.TF90.BTXSUPB; select count() from TF90NCS.TF90.BTXSUPB"

        $ mysql --host=app03-bsi --u "" --p "" "TF90PVS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90PV.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90PV.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90PV.TFPVS2.BTXADDR; TRUNCATE TABLE TF90PV.TFPVS3.BTXSUPB; INSERT INTO TF90PV.TFPVS3.BTXSUPB SELECT * FROM TF90PV.TFPVS2.BTXSUPB; Verify select count() from TF90PV.TF90.BTXADDR; select count() from TF90PVS.TF90.BTXADDR; select count() from TF90PV.TF90.BTXSUPB; select count() from TF90PVS.TF90.BTXSUPB"

        TFL09143 Staging

        cd \ntsrv\common\To\IT-CERT-TEST\TFL09143   # change to mapped network drive
        cp -p TFL09143.pkg /d:/tf90/code_stg && /tf90bp/code_stg && /tf90lm/code_stg && /tf90pv/code_stg
        # Copies the package from the networked folder and then copies to the location(s) needed. #

        InvalidInput="true"
        if [ $# -eq 0 ] ; then
            echo "This script sets up TF90 Staging"
            echo -n "Which production do you want to run? (RB/TaxLocator/Cyclic)"
            read ProductionDistro
        else
            ProductionDistro="$1"
        fi
        while [ "$InvalidInput" = "true" ]
        do
            if [ "$ProductionDistro" = "RB" -o "$ProductionDistro" = "TaxLocator" -o "$ProductionDistro" = "Cyclic" ] ; then
                InvalidInput="false"
                break
            else
                echo "You have entered an error"
                echo "You must type RB or TaxLocator or Cyclic"
                echo "you typed $ProductionDistro"
                echo "This script sets up TF90 Staging"
                read ProductionDistro
            fi
        done

        InvalidInput="true"
        if [ $# -eq 0 ] ; then
            echo "This script sets up RB TF90 Staging"
            echo -n "Which Element do you want to run? (TF90/TF90BP/TF90LM/TF90PV/ALL)"
            read ElementDistro
        else
            ElementDistro="$1"
        fi
        while [ "$InvalidInput" = "true" ]
        do
            if [ "$ElementDistro" = "TF90" -o "$ElementDistro" = "TF90BP" -o "$ElementDistro" = "TF90LM" -o "$ElementDistro" = "TF90PV" -o "$ElementDistro" = "ALL" ] ; then
                InvalidInput="false"
                break
            else
                echo "You have entered an error"
                echo "You must type TF90 or TF90BP or TF90LM or TF90PV"
                echo "you typed $ElementDistro"
                echo "This script sets up TF90 Staging"
                read ElementDistro
            fi
        done

        if [ "$ElementDistro" = "TF90" ] ; then
            cd /d/tf90/code_stg
            vim TFL09143.pkg
            export var=TF90_CONNECT_STRING=DSN=TF90NCS;export Description=TF90NCS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90NCS;
            export DATASET=DEFAULT
            pkgintall -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "$TF90BP" ] ; then
            cd /d/tf90bp/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90BPS;export Description=TF90BPS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90BPS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "$TF90LM" ] ; then
            cd /d/tf90lm/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90LMS;export Description=TF90LMS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90LMS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "TF90PV" ] ; then
            cd /d/tf90pv/code_stg
            vim TFL09143.pkg
            export TF90_CONNECT_STRING=DSN=TF90PVS;Description=TF90PVS;Trusted_Connection=Yes;WSID=APP03-BSI;DATABASE=TF90PVS;
            start tfloader -l -v ../TFL09143.pkg
        fi
        exit 0
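
    Two classes of problems jump out before any staging logic runs. First, the mysql lines can never execute: each one starts with a literal "$" prompt character, and --u "" / --p "" are not valid mysql client options (the real spellings are -u/--user= and -p/--password=). Second, the SQL itself is not MySQL: MySQL has no CREATE SYNONYM statement (that is Oracle / SQL Server syntax), it does not support three-part db.schema.table names, and && is not a statement separator inside a SQL string. Below is a minimal sketch of what one corrected block might look like; the credentials are hypothetical placeholders, the schema names are taken from the post rather than verified, and a view stands in for each synonym since that is the closest MySQL equivalent:

        #!/bin/bash
        # Sketch only: user and password are hypothetical placeholders.
        HOST=app03-bsi
        USER=stageuser
        PASS=stagepass

        # Options go before the statement; the database (TF90BPS) is the
        # trailing positional argument. A view emulates the intended synonym.
        mysql --host="$HOST" --user="$USER" --password="$PASS" \
              --batch --silent \
              --execute="DROP VIEW IF EXISTS TF90.BTXADDR;
                         CREATE VIEW TF90.BTXADDR AS SELECT * FROM TFBPS2.BTXADDR;
                         SELECT COUNT(*) FROM TF90.BTXADDR;" \
              TF90BPS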

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
    I don't have much formal training in programming, and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up being both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them.

    Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources).

    I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as:

    - What parts of my application and what sorts of things aren't necessarily worth testing?
    - How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope?
    - What is a good naming convention for test methods? (Since apparently the name of the method is the only way I will be able to tell, from a glance at the test results table, what works in my program and what doesn't.)
    - Is it bad to have many asserts in one test method? Apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing.
    - If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now?
    - If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate?
    - Closely related to the previous point: if a class is such that its main operation happens in a state the program arrives at after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing that class and all the other code that brings it to that state along with it?
    - In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data.)
    - How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas?
    - Is there anything like design patterns concerning testing specifically?
    I guess the main themes in what I'd like to learn more about are: (1) what are the overarching principles that should be followed (or at least considered) in every testing effort, and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things to be quite helpful in keeping my codebases sane.

    I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, tens of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors.

    Summary: So my question is, what are some resources to learn about unit testing for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and amending discrepancies if it doesn't.

    Read the article
