Search Results

Search found 12017 results on 481 pages for 'root'.


  • Software Center Freezing on Xubuntu 12.10

    - by AC3
    Whenever I open Software Center I get this error:

        2012-12-12 16:19:29,196 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/dbus/proxies.py', 410, '_introspect_error_handler')'
        2012-12-12 16:19:29,196 - dbus.proxies - ERROR - Introspect error on :1.74:/com/ubuntu/Softwarecenter: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        2012-12-12 16:19:54,713 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
        2012-12-12 16:19:54,816 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
        2012-12-12 16:19:55,705 - softwarecenter.region - WARNING - failed to use geoclue: 'org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Geoclue.Master was not provided by any .service files'
        2012-12-12 16:19:56,575 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
        2012-12-12 16:19:56,592 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')'
        2012-12-12 16:19:56,592 - root - ERROR - Could not find any typelib for LaunchpadIntegration
        2012-12-12 16:19:56,910 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
        2012-12-12 16:19:56,935 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()

    I'm not sure whether this is a bug or not; I have already uninstalled and reinstalled the program with Synaptic. I have very little experience with Linux, so any help will be appreciated.

    Read the article

  • Wireless drivers

    - by Kencer
    The results for my laptop are below. How do I install the wireless and graphics drivers?

        $ lspci
        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
        00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        02:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)

        $ sudo lshw -c network
        *-network UNCLAIMED
             description: Network controller
             product: Broadcom Corporation
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:f0500000-f0507fff
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 07
             serial: 3c:97:0e:85:c0:0d
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=172.16.96.36 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
             resources: irq:43 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff

        $ rfkill list all
        0: tpacpi_bluetooth_sw: Bluetooth
                Soft blocked: no
                Hard blocked: no

    Read the article

  • I can't run NetBeans even though it installed successfully

    - by David
    I'm new to Ubuntu as well as NetBeans. I installed NetBeans and made sure to install all the JDKs and JREs I could find. It installed without errors. I also saw this question and made sure I followed all the instructions there as well. I never got any error messages of any kind, so as far as I know it installed okay. However, when I try to run NetBeans, I get this message at the bottom of the NetBeans IDE:

        ant -f /root/NetBeansProjects/samp1 -Djsp.includes=/root/NetBeansProjects/samp1/build/web/one.jsp -DforceRedeploy=false -Dclient.urlPart=/one.jsp -Ddirectory.deployment.supported=true -Djavac.jsp.includes=org/apache/jsp/one_jsp.java -Dnb.wait.for.caches=true run
        /root/NetBeansProjects/samp1/nbproject/build-impl.xml:774: The libs.CopyLibs.classpath property is not set up. This property must point to org-netbeans-modules-java-j2seproject-copylibstask.jar file which is part of NetBeans IDE installation and is usually located at <netbeans_installation>/java<version>/ant/extra folder. Either open the project in the IDE and make sure CopyLibs library exists or setup the property manually. For example like this: ant -Dlibs.CopyLibs.classpath=a/path/to/org-netbeans-modules-java-j2seproject-copylibstask.jar
        BUILD FAILED (total time: 0 seconds)

    Read the article

  • Grub2 attempting to boot hd1 when it should boot hd0

    - by JoBu1324
    I'm attempting to perform a "normal" install on a USB3 SSD (I don't know if it is noteworthy, but I don't have a swap partition). The installation proceeds normally; I'm installing from a USB2 device I created using LiLi Boot, with a copy of Ubuntu 12.10 64-bit that I downloaded directly from the source. The system I'm running Ubuntu on has had a more traditional installation of Ubuntu (also 12.10) running on it without issue, so I know that everything works fine when booting from a 7200 RPM internal disk. There are a number of oddities that I've noticed so far, including graphics corruption, but the first and most pressing issue is that GRUB 2 refuses to recognize the correct hd. From /boot/grub/grub.cfg:

        if [ x$feature_default_font_path = xy ] ; then
           font=unicode
        else
          insmod part_msdos
          insmod ext2
          set root='hd1,msdos1'
          if [ x$feature_platform_search_hint = xy ]; then
            search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3
          else
            search --no-floppy --fs-uuid --set=root b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3
          fi
          font="/usr/share/grub/unicode.pf2"
        fi

    This is from a 100% fresh install of Linux (first boot), which was installed while no hard drives were connected to the system other than the USB2 LiLi drive. The system refuses to boot unless I change hd1,msdos1 to hd0,msdos1 in the GRUB menu at boot, even though it is the only disk device connected to the PC. What options are left for me to troubleshoot this issue? I've been racking my brains and taxing the internet trying to dig up something on this problem, but now I'd like to see if the Ubuntu community can rise to the challenge and help me fix this boot problem. This is the second time I've attempted this particular setup. The first time, after days of wasted time, I managed to get it to boot every other boot - i.e. every even boot it would boot into Ubuntu like it was happy; every odd boot it would drop into the BusyBox or GRUB prompt. At one point it complained that it couldn't find /dev/disk/by-uuid/[the disk], which I found most perplexing, since the disk was there and it booted before and after the occurrence (with intervention).

    Read the article

  • Cron prepending filename to script output

    - by Caitifty
    I'm having an issue with unwanted lines being added to files output by a cron job. I have a script in /etc/cron.hourly which selects some data from a MySQL database and saves it in a text file in /var/www. When I run the script as root, it does exactly what I expect it to do. When the script is executed by cron, it creates the same file but prepends the following three lines at the top of the output file:

        ::::::::::::::
        /var/www/outputfilename
        ::::::::::::::

    I can't for the life of me work out how to stop this unwanted behavior. The line in /etc/crontab for cron.hourly is the default "44 * * * * root cd / && run-parts --report /etc/cron.hourly". If I use su to change to root and run "cd / && run-parts --report /etc/cron.hourly", the script runs as expected and the output doesn't have the mysterious additional three lines. I've also tried removing the --report flag from the run-parts command in case that was somehow connected, but no joy. Finally, perusing the cron log output in /var/log/syslog just says cron.hourly ran, without giving any additional information. Any suggestions on solving this weird problem are most welcome.

    Read the article

  • How can I fix my xvinfo?

    - by YumYumYum
    How can I fix my X server/driver?

        $ xvinfo
        X-Video Extension version 2.2
        screen #0
         no adaptors present

    Additional info:

        $ uname -a
        Linux desktop 2.6.32-33-generic #70-Ubuntu SMP Thu Jul 7 21:13:52 UTC 2011 x86_64 GNU/Linux

        $ lspci
        00:00.0 Host bridge: Intel Corporation Device 0100 (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation Sandy Bridge Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04)
        00:19.0 Ethernet controller: Intel Corporation Device 1503 (rev 05)
        00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5)
        00:1c.1 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 2 (rev b5)
        00:1c.3 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 4 (rev b5)
        00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation Device 1c4a (rev 05)
        00:1f.2 SATA controller: Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05)
        01:00.0 PCI bridge: Integrated Technology Express, Inc. Device 8892 (rev 10)
        04:00.0 USB Controller: NEC Corporation Device 0194 (rev 04)

    Follow-up: the existing approach seems to be a mess on 64-bit. After upgrading to 12.04 64-bit, this problem is resolved on the same hardware (of course, I now have other driver problems).

    Read the article

  • Ubuntu boots in read-only filesystem after upgrade!

    - by akatzbreaker
    I have a serious problem here: I recently upgraded to the latest version of Ubuntu. Now, when I boot into my Ubuntu partition, I get a low-graphics error. I booted into recovery mode to see what the problem is, and tried to fix any damaged packages and to run fsck, but nothing solved the problem. Then, from the recovery menu, I opened a root shell. I tried to create a file and realized that the filesystem is read-only. Then I ran:

        mount -o remount,rw /

    and it worked for that terminal session! But when I go back to recovery and select "resume normal boot", I get the same error. I also tried booting to the root shell again, remounting as read-write and starting GNOME from there. It worked (but the user is root, which is quite dangerous). However, I can't go through this whole process at every boot. Any solution? (Note that when I try to create a new file in my Ubuntu partition from another OS, I don't get any errors.)

    Read the article

  • Password not working for sudo ("Authentication failure")

    - by Souta
    Before I mention anything further: DO NOT give me a response saying that the terminal won't show password input. I'm AWARE of that. I'm typing my user password in (not a caps-lock issue), and for some reason it still says 'Authentication failure'. Is there some other password (one I'm not aware of) I'm supposed to be using other than my user password? I've had this Ubuntu before, on another hard drive, and I didn't have this problem (and it was the same Ubuntu, 12.04 LTS).

        ai@AiNekoYokai:~$ groups
        ai adm cdrom sudo dip plugdev lpadmin sambashare

        ai@AiNekoYokai:~$ lsb_release -rd
        Description:    Ubuntu 12.04 LTS
        Release:        12.04

        ai@AiNekoYokai:~$ pkexec cat /etc/sudoers
        #
        # This file MUST be edited with the 'visudo' command as root.
        #
        # Please consider adding local content in /etc/sudoers.d/ instead of
        # directly modifying this file.
        #
        # See the man page for details on how to write a sudoers file.
        #
        Defaults        env_reset
        Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

        # Host alias specification

        # User alias specification

        # Cmnd alias specification

        # User privilege specification
        root    ALL=(ALL:ALL) ALL

        # Members of the admin group may gain root privileges
        %admin  ALL=(ALL) ALL

        # Allow members of group sudo to execute any command
        %sudo   ALL=(ALL:ALL) ALL

        # See sudoers(5) for more information on "#include" directives:
        #includedir /etc/sudoers.d

    I can log in with my password, but it's not accepted as valid for authentication -- that is pretty much my issue. (Although I haven't gone into recovery mode.) I've run:

        ai@AiNekoYokai:~$ ls /etc/sudoers.d
        README

    And I also reinstalled sudo with:

        pkexec apt-get update
        pkexec apt-get --purge --reinstall install sudo
        pkexec usermod -a -G admin $USER    <- says admin does not exist

    su $USER worked for me; however, my password still does not do much (in the sense that it doesn't work for other things). I changed my password with pkexec passwd $USER and was able to change it with no problem. gksudo xclock was also something I could get into with no problem (the clock showed).

    Read the article

  • Printer Canon MP540 finally added successfully, but doesn't print (debug log attached)

    - by NES
    I tried to set up my printer on Ubuntu 10.10. I had to use a special guide to install it, because Canon used libcupsys2 in its packages and Ubuntu expects libcupsys, so I followed this guide in reference to this advice. The first problem was that the Ubuntu "add printer" dialog asked for root credentials; the workaround suggested on Launchpad, creating a root password, worked, and I added the printer. It's available for printing. Then I got a "cups insecure filter" error message which prevented me from printing. That could be solved by setting the needed root rights on the /usr/lib/cups/filter/ directory, and the error message disappeared after restarting the cups service. Now it should work, but it doesn't. The main problem now is that the printer seems to be properly set up, but when I try to print a document the printer icon appears only briefly in the GNOME panel. There's a print job in the queue which gets marked as completed, but the printer doesn't print. I attached the debug log provided by the printer error control; I had to upload it to another site, since it was too big for the question body here. Perhaps someone can identify the problem from it? Note: I know that it once worked fine with an older release of Ubuntu, but I'm not sure which version that was.

    Read the article

  • Single complex or multiple simple autoload functions [on hold]

    - by Tyson of the Northwest
    Using spl_autoload_register(), should I use a single autoload function that contains all the logic to determine where the include files are, or should I break each include grouping into its own function with its own logic to include the files for the called function? As the places where include files may reside expand, so too will the logic of a single function. If I break it into multiple functions I can add functions as new groupings are added, but the functions will be copies of each other with minor alterations. Currently I have a tool with a single registered autoload function that picks apart the class name, tries to predict where it is, and then includes it. Due to naming conventions for the project this has been pretty simple:

        if has namespace
            if in template namespace
                look in Root\Templates
            else
                look in Root\Modules\Namespace
        else
            look in Root\System
        if file exists
            include

    But we are starting to include interfaces and traits in our codebase, and it hurts me to include the type of a thing in its name. So instead of a single autoload function that digs through the class name, looks for the file, and has increasingly complex logic, we are looking at having multiple autoload functions registered. But each one follows the same pattern, and any time I see that I get paranoid about code copying:

        function systemAutoloadFunc
            logic to create probable filename
            if filename exists in system
                include it and return true
            else
                return false

        function moduleAutoloadFunc
            logic to create probable filename
            if filename exists in modules
                include it and return true
            else
                return false

    Every autoload function will follow that pattern, and the last part of each function -- if filename exists, include and return true, else return false -- is going to be identical code. This makes me paranoid about having to update it across the board later if the file_exists/include pattern we are using ever changes. Or is it just that, paranoia, and the multiple functions with some identical code are the best option?

    Read the article

  • How to input data into user-defined variables in a MySQL query

    - by user292791
    A simple shell script:

        echo "Enter 1 for month of March"
        echo "Enter 2 for month of April"
        echo "Enter 3 for month of May"
        read Month
        case "$Month" in
            1) echo "enter establishment name"
               read a; mysql -u root -p $a < "March.sql";;
            2) echo "enter establishment name"
               read b; mysql -u root -p $b < "April.sql";;
            3) echo "enter establishment name"
               read c; mysql -u root -p $c < "May.sql";;
        esac
        done

    In this I have three other query files: March.sql, April.sql and May.sql, which I link from the shell script. Example of a .sql file:

        SELECT DISTINCT substr( a.case_no, 3, 2 ), b.case_type, b.type_name, a.case_no
        into outfile '/tmp/April.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n'
        FROM Civil_t AS a, Case_type_t AS b, disposal_proc AS c
        WHERE substr( a.case_no, 3, 2 ) = b.case_type
        AND a.date_of_decision BETWEEN '2014-04-01' AND '2014-04-30'
        AND a.case_no = c.case_no
        AND a.court_no =1;

    I have to alter the .sql script every time. Is there any method to read variables in the shell script and use them in MySQL? For example:

        echo "enter date"
        read a   # input date

    Now I have read a date and I want to use it in the March.sql query in the WHERE clause. Is there any method of using this variable in the .sql query?
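    One general pattern worth knowing in this area (shown only as an illustration, not as a fix to the shell-plus-.sql setup above) is to pass runtime values into SQL as query parameters from a small script. The sketch below is a hypothetical example in Python 3 using the mysql-connector-python package; the credentials and database name are placeholder assumptions, while the table and column names are taken from the April.sql example:

        import mysql.connector  # assumes the mysql-connector-python package is installed

        start_date = input("enter start date (YYYY-MM-DD): ")
        end_date = input("enter end date (YYYY-MM-DD): ")

        # Placeholder credentials and database name -- adjust for the real environment.
        conn = mysql.connector.connect(user="root", password="secret", database="courts")
        cur = conn.cursor()

        # The %s placeholders are filled in by the driver, so the dates read at
        # runtime never have to be spliced into the SQL text by hand.
        cur.execute(
            "SELECT case_no FROM Civil_t WHERE date_of_decision BETWEEN %s AND %s",
            (start_date, end_date),
        )
        for (case_no,) in cur:
            print(case_no)

        conn.close()

    The same idea (placeholders plus a parameter list) avoids editing the query file for each new date range.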

    Read the article

  • After upgrade from 10.04 to 12.04, cannot boot with Linux 3.2.0-24-generic-pae

    - by Ricardo
    After the upgrade from 10.04 to 12.04, I cannot boot with Linux 3.2.0-24-generic-pae: the process freezes on a Xubuntu initialization screen (I had Qimo installed). If I try recovery mode (with the same kernel version), booting freezes after this message:

        Begin: Mounting root file system.

    If in the GRUB menu I choose "Previous Linux versions", I can boot using Linux 2.6.32-41-generic-pae. But once logged in, some things don't seem to work (apt-get update fails, Update Manager fails, the HUD menu does not provide suggestions...); to be honest, I have no idea whether this is part of the bigger issue. Reading through apparently similar problems on Ask Ubuntu, I decided to follow some advice: I got Boot-Repair and ran it. The problem remains, and I got this report. I also ran the following in a terminal as root:

        $ sudo update-initramfs -u

    and this is what I got:

        update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic-pae
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab
        /tmp/mkinitramfs_EIDlHy/scripts/classmate-bottom/45xconfig: 9: .: Can't open /scripts/casper-functions

    What else? My PC is an Intel® Core™ i7 CPU 870 @ 2.93GHz × 8, the graphics card is a GeForce 8400 GS/PCIe/SSE2, and memory is 7.8 GiB. I have two questions: Is this a bug in the newest kernel that I should report? Is there anything I can do apart from a fresh install?

    Read the article

  • JavaFX: Use a Screen with your Scene!

    - by user12610255
    Here's a handy tip for sizing your application. You can use the javafx.stage.Screen class to obtain the width and height of the user's screen, and then use those same dimensions when sizing your scene. The following code modifies the default "Hello World" application that appears when you create a new JavaFX project in NetBeans.

        package screendemo;

        import javafx.application.Application;
        import javafx.event.ActionEvent;
        import javafx.event.EventHandler;
        import javafx.scene.Group;
        import javafx.scene.Scene;
        import javafx.scene.control.Button;
        import javafx.stage.Stage;
        import javafx.stage.Screen;
        import javafx.geometry.Rectangle2D;

        public class ScreenDemo extends Application {

            public static void main(String[] args) {
                Application.launch(args);
            }

            @Override
            public void start(Stage primaryStage) {
                primaryStage.setTitle("Hello World");
                Group root = new Group();
                Rectangle2D screenBounds = Screen.getPrimary().getVisualBounds();
                Scene scene = new Scene(root, screenBounds.getWidth(), screenBounds.getHeight());
                Button btn = new Button();
                btn.setLayoutX(100);
                btn.setLayoutY(80);
                btn.setText("Hello World");
                btn.setOnAction(new EventHandler() {
                    public void handle(ActionEvent event) {
                        System.out.println("Hello World");
                    }
                });
                root.getChildren().add(btn);
                primaryStage.setScene(scene);
                primaryStage.show();
            }
        }

    Running this program will set the Stage boundaries to the visible bounds of the main screen. -- Scott Hommel

    Read the article

  • RSA decrypting data in C# (.NET 3.5) which was encrypted with openssl in php 5.3.2

    - by panny
    Maybe someone can clear this up for me; I have been surfing on this for a while now.

    Step #1: Create a root certificate. Key generation on Unix:

        1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem
        2) openssl rsa -in privatekey.pem -pubout -out publickey.pem
        3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate"

    Step #2: Does the root certificate work in PHP? YES. On the PHP side I read the publickey.pem into PHP:

        $publicKey = "file://C:/publickey.pem";
        $privateKey = "file://C:/privatekey.pem";
        $plaintext = "123";
        openssl_public_encrypt($plaintext, $encrypted, $publicKey);
        $transfer = base64_encode($encrypted);
        openssl_private_decrypt($encrypted, $decrypted, $privateKey);
        echo $decrypted; // "123"

    OR

        $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem"));
        // rsa encrypt
        openssl_public_encrypt("123", $encrypted, $server_public_key);

    and the privatekey.pem to check if it works:

        openssl_private_decrypt($encrypted, $decrypted, openssl_get_privatekey(file_get_contents("C:\privatekey.pem")));
        echo $decrypted; // "123"

    Conclusion: encryption/decryption works fine on the PHP side with these openssl root certificate files.

    Step #3: Does the root certificate work in .NET? YES. On the C# side, in the same manner, I read the keys into a .NET C# console program:

        X509Certificate2 myCert2 = new X509Certificate2();
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        try {
            myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx");
            rsa = (RSACryptoServiceProvider)myCert2.PrivateKey;
        } catch (Exception e) { }
        byte[] test = {Convert.ToByte("123")};
        string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false));

    Conclusion: encryption/decryption works fine on the C# side with these openssl root certificate files.

    Step #4: Encrypt in PHP and decrypt in .NET? NO!! On the PHP side:

        $onett = "123" ....
        openssl_public_encrypt($onett, $encrypted, $server_public_key);
        $onettbase64 = base64_encode($encrypted);

    Copy-paste $onettbase64 ("LkU2GOCy4lqwY4vtPI1JcsxgDgS2t05E6kYghuXjrQe7hSsYXETGdlhzEBlp+qhxzTXV3pw+AS5bEg9CPxqHus8fXHOnXYqsd2HL20QSaz+FjZee6Kvva0cGhWkFdWL+ANDSOWRWo/OMhm7JVqU3P/44c3dLA1eu2UsoDI26OMw=") into the C# program:

        byte[] transfered_onettbase64 = Convert.FromBase64String("LkU2GOCy4lqwY4vtPI1JcsxgDgS2t05E6kYghuXjrQe7hSsYXETGdlhzEBlp+qhxzTXV3pw+AS5bEg9CPxqHus8fXHOnXYqsd2HL20QSaz+FjZee6Kvva0cGhWkFdWL+ANDSOWRWo/OMhm7JVqU3P/44c3dLA1eu2UsoDI26OMw=");
        string k = Convert.ToString(rsa.Decrypt(transfered_onettbase64, false)); // Bad Data exception == exception while decrypting!!!

    Any ideas?

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

    For the Curious

    From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

    A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

    Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed applications written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).

    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

    - Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
    - Support for 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
    - Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
    - The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.

    The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which is deployed as root.

    Differences between .NET Accelerator and WSRP Producer

    Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the Consumer to the .NET Accelerator; the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer, by contrast, is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application. In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

    Highly Recommended: Enabling Logging

    Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?

    Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding:

        log4net.Config.XmlConfigurator.Configure();

    IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

    Things You Must Do

    Merge Web.Config

    Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention.

    Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge the required settings from the existing Web.Config into the one in the WSRP Producer.

    Use the WSRP Producer Master Page

    The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the @ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
        <HTML>
        <HEAD>

    with

        <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

    and

        </HEAD>
        <body>
        <form id="theForm" method="post" runat="server">

    with

        </asp:Content>
        <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

    and finally

        </form>
        </body>
        </HTML>

    with

        </asp:Content>

    In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

    It Happened to Me, It Might Happen to You... Or Not

    Watch for Use of Session or Request in OnInit

    If the .NET application being modified has pages developed on the assumption that the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are any, the simplest solution is to create a new method and then call that method once the necessary objects are fully available. I find doing this at the start of the Page_Load method to be the simplest solution.

    Case Sensitivity

    .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src=, or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

        <RewriterRule>
            <LookFor>(href=\"([^\"]+)</LookFor>
            <ChangeToAbsolute>true</ChangeToAbsolute>
            <ApplyTo>.axd,.css</ApplyTo>
            <MakeResource>true</MakeResource>
        </RewriterRule>

    may need to be duplicated as:

        <RewriterRule>
            <LookFor>(HREF=\"([^\"]+)</LookFor>
            <ChangeToAbsolute>true</ChangeToAbsolute>
            <ApplyTo>.axd,.css</ApplyTo>
            <MakeResource>true</MakeResource>
        </RewriterRule>

    While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

    Is it Still Relative?

    Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

    I Can See You But I Can't Find You

    This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

        Error 155  The name 'form1' does not exist in the current context
        C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs  51  52  legacywebsite.module

    Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, because the controls become embedded in other controls and require a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

        ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

    becomes this:

        ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

    Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. Reaching the control once a Master Page has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

        public static Control FindControlRecursive(Control Root, string Id)
        {
            if (Root.ID == Id)
                return Root;
            foreach (Control Ctl in Root.Controls)
            {
                Control FoundCtl = FindControlRecursive(Ctl, Id);
                if (FoundCtl != null)
                    return FoundCtl;
            }
            return null;
        }

    Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary. Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

        Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg"

    to this:

        Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg"

    The Master That Won't Step Aside

    In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient approach of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below:

        ...
        <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
        </head>
        <body leftMargin="0" topMargin="0">
        <form id="TheForm" runat="server">
        <input type="hidden" name="key" id="key" value="" />
        <input type="hidden" name="formactionurl" id="formactionurl" value="" />
        <input type="hidden" name="handle" id="handle" value="" />
        <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
        </asp:ScriptManager>

    This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-masters to the WSRP Master Page.

    Moving On

    In enterprise portals, even after you get everything working, the work is not finished. Next you need to get it to where everyone will work with it.

    Migration Planning

    Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should include a new producer and updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP endpoint configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL endpoint, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

    Location, Location, Location

    Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards creating a second site, will work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

    Manually Creating a WSRP Producer Site

    (Note: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance.)

    1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
    2. Bring up a command window as an administrator.
    3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
    4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
    5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
    6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
    7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

    Tests:

    1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
    2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
    3. Register the producer in WLP and test the portlet.

    Changing the Port Used by WSRP Producer

    The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

    Do You Need to Migrate?

    Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.

    Read the article

  • Can I query DOM Document with xpath expression from multiple threads safely?

    - by Dan
    I plan to use a dom4j DOM Document as a static cache in an application where multiple threads can query the document. Taking into account that the document itself will never change, is it safe to query it from multiple threads? I wrote the following code to test it, but I am not sure that it actually proves that the operation is safe.

        package test.concurrent_dom;

        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.DocumentHelper;
        import org.dom4j.Element;
        import org.dom4j.Node;

        /**
         * Hello world!
         */
        public class App extends Thread {
            private static final String xml =
                "<Session>" +
                "<child1 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" +
                "ChildText1</child1>" +
                "<child2 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" +
                "ChildText2</child2>" +
                "<child3 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" +
                "ChildText3</child3>" +
                "</Session>";
            private static Document document;
            private static Element root;

            public static void main( String[] args ) throws DocumentException {
                document = DocumentHelper.parseText(xml);
                root = document.getRootElement();
                Thread t1 = new Thread(){
                    public void run(){
                        while(true){
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child1");
                            if(!n1.getText().equals("ChildText1")){
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                Thread t2 = new Thread(){
                    public void run(){
                        while(true){
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child2");
                            if(!n1.getText().equals("ChildText2")){
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                Thread t3 = new Thread(){
                    public void run(){
                        while(true){
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child3");
                            if(!n1.getText().equals("ChildText3")){
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                t1.start();
                t2.start();
                t3.start();
                System.out.println( "Hello World!" );
            }
        }

    Read the article

  • Generate Ant build file

    - by inakiabt
    I have the following project structure:

        root/
          comp/
            env/
              version/
                build.xml
              build.xml
            build.xml

    Where root/comp/env/version/build.xml is:

        <project name="comp-env-version" basedir=".">
            <import file="../build.xml" optional="true" />
            <echo>Comp Env Version tasks</echo>
            <target name="run">
                <echo>Comp Env Version run task</echo>
            </target>
        </project>

    root/comp/env/build.xml is:

        <project name="comp-env" basedir=".">
            <import file="../build.xml" optional="true" />
            <echo>Comp Env tasks</echo>
            <target name="run">
                <echo>Comp Env run task</echo>
            </target>
        </project>

    root/comp/build.xml is:

        <project name="comp" basedir=".">
            <echo>Comp tasks</echo>
        </project>

    Each build file imports the parent build file, and each child inherits and overrides parent tasks/properties. What I need is to get the generated build XML without running anything. For example, if I run "ant" (or something like that) on root/comp/env/version/, I would like to get the following output:

        <project name="comp-env-version" basedir=".">
            <echo>Comp tasks</echo>
            <echo>Comp Env tasks</echo>
            <echo>Comp Env Version tasks</echo>
            <target name="run">
                <echo>Comp Env Version run task</echo>
            </target>
        </project>

    Is there an Ant plugin to do this? With Maven? What are my options if not? EDIT: I need something like "mvn help:effective-pom" for Ant.

    Read the article

  • RVM can't switch to Ruby 1.9.1

    - by Marco
    I installed Ruby 1.8.7 through apt-get. I then installed 1.9.1 through RVM. The RVM 1.9.1 installation was successful:

        root: rvm install 1.9.1
        Installing Ruby from source to: /usr/local/rvm/rubies/ruby-1.9.1-p378
        /usr/local/rvm/src/ruby-1.9.1-p378 has already been extracted.
        Configuring ruby-1.9.1-p378, this may take a while depending on your cpu(s)...
        Compiling ruby-1.9.1-p378, this may take a while, depending on your cpu(s)...
        Installing ruby-1.9.1-p378
        Installation of ruby-1.9.1-p378 is complete.
        Updating rubygems for /usr/local/rvm/gems/ruby-1.9.1-p378@global
        Updating rubygems for /usr/local/rvm/gems/ruby-1.9.1-p378
        adjusting shebangs for ruby-1.9.1-p378 (gem irb erb ri rdoc testrb rake).
        Installing gems for ruby-1.9.1-p378 (rdoc rake).
        Installing rdoc to /usr/local/rvm/gems/ruby-1.9.1-p378@global
        Installing rdoc to /usr/local/rvm/gems/ruby-1.9.1-p378
        Installing rake to /usr/local/rvm/gems/ruby-1.9.1-p378@global
        Installing rake to /usr/local/rvm/gems/ruby-1.9.1-p378
        Installation of gems for ruby-1.9.1-p378 is complete.

    However, I cannot get RVM to switch to the new version:

        root: ruby -v
        ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]
        root: rvm 1.9.1
        root: ruby -v
        ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]

    Despite that, it seems to have installed fine:

        root: /usr/local/rvm/bin/ruby-1.9.1-p378 -v
        ruby 1.9.1p378 (2010-01-10 revision 26273) [x86_64-linux]

    I also tried setting the rvm --default to 1.9.1, but that did not help. Why can't RVM switch to the new version? Should I just set an alias for ruby=1.9.1? (Running Debian.)

    Read the article

  • Regular expression works normally, but fails when placed in an XML schema

    - by Eli Courtwright
    I have a simple doc.xml file which contains a single root element with a Timestamp attribute:

        <?xml version="1.0" encoding="utf-8"?>
        <root Timestamp="04-21-2010 16:00:19.000" />

    I'd like to validate this document against my simple schema.xsd to make sure that the Timestamp is in the correct format:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="root">
            <xs:complexType>
              <xs:attribute name="Timestamp" use="required" type="timeStampType"/>
            </xs:complexType>
          </xs:element>
          <xs:simpleType name="timeStampType">
            <xs:restriction base="xs:string">
              <xs:pattern value="(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}" />
            </xs:restriction>
          </xs:simpleType>
        </xs:schema>

    So I use the lxml Python module and try to perform a simple schema validation and report any errors:

        from lxml import etree

        schema = etree.XMLSchema( etree.parse("schema.xsd") )
        doc = etree.parse("doc.xml")
        if not schema.validate(doc):
            for e in schema.error_log:
                print e.message

    My XML document fails validation with the following error messages:

        Element 'root', attribute 'Timestamp': [facet 'pattern'] The value '04-21-2010 16:00:19.000' is not accepted by the pattern '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'.
        Element 'root', attribute 'Timestamp': '04-21-2010 16:00:19.000' is not a valid value of the atomic type 'timeStampType'.

    So it looks like my regular expression must be faulty. But when I try to validate the regular expression at the command line, it passes:

        >>> import re
        >>> pat = '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'
        >>> assert re.match(pat, '04-21-2010 16:00:19.000')
        >>>

    I'm aware that XSD regular expressions don't have every feature, but the documentation I've found indicates that every feature I'm using should work. So what am I misunderstanding, and why does my document fail?
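    One detail worth keeping in mind when comparing these two checks: XSD pattern facets are implicitly anchored to the entire attribute value, while Python's re.match only anchors at the start of the string, so the interactive test above is not equivalent to what the schema validator does. The sketch below illustrates only that difference (it assumes Python 3, where re.fullmatch is available), reusing the pattern and value from the question:

        import re

        pat = ('(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} '
               '([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}')
        value = '04-21-2010 16:00:19.000'

        # re.match succeeds because the first alternative '(0[0-9]{1})' already
        # matches the leading '04'; the rest of the string is never examined.
        print(re.match(pat, value).group(0))   # prints: 04

        # Requiring the whole value to match, as an XSD pattern facet does,
        # gives a different answer with the same expression.
        print(re.fullmatch(pat, value))        # prints: None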

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions: node 0a is the cluster node that has crashed and can no longer boot. Node 0b is the node still in the cluster and in production with services active. Both nodes have their boot disks mirrored via SDS/SVM. There are several options for cloning the boot disk from node 0b:
    - copy over the network with ufsdump piped to ufsrestore
    - insert the disk locally on node 0b and create a third mirror with SDS
    - insert the disk locally on node 0b and copy it with dd
    In this procedure we use dd (in my experience this is the best option). Bear in mind that the examples come from Sun Fire V240 systems, which have SCSI internal disks. With Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in that case see infodoc #40133 to recreate the device tree correctly).

    Procedure: on node 0b the boot disk is c1t0d0 (mirrored to c1t1d0), with this VTOC:

        *                          First     Sector    Last
        * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
              0      2    00          0   2106432   2106431
              1      3    01    2106432  74630784  76737215
              2      5    00          0 143349312 143349311
              4      7    00   76737216  50340672 127077887
              5      4    00  127077888  14683968 141761855
              6      0    00  141761856   1058304 142820159
              7      0    00  142820160    529152 143349311

    We insert the new disk on node 0b, where it is seen as c1t2d0.

    1) On node 0b, copy disk c1t0d0s2 to disk c1t2d0s2 with dd:
        # dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192k
       Copying a 72GB disk takes roughly 45 minutes.
       Note: as an alternative, to make an identical copy of root over the network, follow Document ID 47498, "Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager".

    2) Run fsck on the c1t2d0 data slices:
        # fsck -o f /dev/rdsk/c1t2d0s0   (root)
        # fsck -o f /dev/rdsk/c1t2d0s4   (/var)
        # fsck -o f /dev/rdsk/c1t2d0s5   (/usr)
        # fsck -o f /dev/rdsk/c1t2d0s6   (/globaldevices)

    3) Mount the root file system and edit the following files to change the node name:
        # mount /dev/dsk/c1t2d0s0 /mnt
       Change the hostname from 0b to 0a:
        # cd /mnt/etc
        # vi hosts
        # vi hostname.bge0
        # vi hostname.bge2
        # vi nodename

    4) Change /mnt/etc/vfstab from the current contents:
        /dev/md/dsk/d201   -                   swap   -    no   -
        /dev/md/dsk/d200   /dev/md/rdsk/d200   /      ufs  1    no   -
        /dev/md/dsk/d205   /dev/md/rdsk/d205   /usr   ufs  1    no   logging
        /dev/md/dsk/d204   /dev/md/rdsk/d204   /var   ufs  1    no   logging
        #/dev/md/dsk/d206  /dev/md/rdsk/d206   /globaldevices  ufs  2  yes  logging
        swap               -                   /tmp   tmpfs -   yes  -
        /dev/md/dsk/d206   /dev/md/rdsk/d206   /global/.devices/node@2  ufs  2  no  global
       to this (unencapsulating the disk from SDS/SVM):
        /dev/dsk/c1t0d0s1  -                   swap   -    no   -
        /dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /      ufs  1    no   -
        /dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5  /usr   ufs  1    no   logging
        /dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4  /var   ufs  1    no   logging
        #/dev/md/dsk/d206  /dev/md/rdsk/d206   /globaldevices  ufs  2  yes  logging
        swap               -                   /tmp   tmpfs -   yes  -
        /dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6  /global/.devices/node@1  ufs  2  no  global
       It is important that the global-devices entry in the new vfstab points to the physical partition of the disk (slice 6 in our case). Be careful with the name used for the new disk: here we call it c1t0d0 because it will be inserted as target 0 in node 0a, but this can differ depending on the configuration you are working on.

    5) Remove the following entry from /mnt/etc/system (part of the unencapsulation procedure):
        rootdev:/pseudo/md@0:0,200,blk

    6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared so that it points to the nodeid of node 0a (nodeid 1 in our case):
        # cd /mnt/dev/md
       As it is now (node 0b has nodeid 2):
        lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@2/dev/md/shared
        # rm shared
        # ln -s ../../global/.devices/node@1/dev/md/shared shared
       As it will be, with nodeid 1 for node 0a:
        lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@1/dev/md/shared

    7) Change the nodeid (in our case from 2 to 1):
        # cd /mnt/etc/cluster
        # vi nodeid

    8) Change /mnt/etc/path_to_inst to reflect the correct nodeid for node 0a:
        # cd /mnt/etc
        # vi path_to_inst
       Change entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"

    9) Write the bootblock to the disk, just in case:
        # /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0
       The disk is now ready to be inserted in node 0a to boot the node.

    10) Boot node 0a with "boot -sx"; this is because we need to make some changes in the ccr files in order to recreate the did environment.

    11) Modify the cluster ccr:
        # cd /etc/cluster/ccr
        # rm did_instances
        # rm did_instances.bak
        # vi directory              (remove the did_instances line)
        # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory
        # grep ccr_gennum /etc/cluster/ccr/directory
        ccr_gennum -1
        # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure
        # grep ccr_gennum /etc/cluster/ccr/infrastructure
        ccr_gennum -1

    12) Bring node 0a down again to the ok prompt and issue "boot -r". The node will now join the cluster, and you can verify functionality with the scstat and metaset commands. The next step is to encapsulate the boot disk in SDS/SVM and create the mirrors. In our case node 0b has metadevice names starting from d200, so on node 0a we create metadevices starting from d100. This is just an example and you can use different names; the important thing to remember is that the boot-disk metadevices have different names on each node.

    13) Remove the metadevices pointing to the boot and mirror disks (inherited from node 0b):
        # metaclear -r -f d200
        # metaclear -r -f d201
        # metaclear -r -f d204
        # metaclear -r -f d205
        # metaclear -r -f d206
       Verify with metastat that no metadevices remain for the boot and mirror disks.

    14) Encapsulate the boot disk:
        # metainit -f d110 1 1 c1t0d0s0
        # metainit d100 -m d110
        # metaroot d100

    15) Reboot node 0a.

    16) Create the metadevices for the remaining slices on the boot disk:
        # metainit -f d111 1 1 c1t0d0s1
        # metainit d101 -m d111
        # metainit -f d114 1 1 c1t0d0s4
        # metainit d104 -m d114
        # metainit -f d115 1 1 c1t0d0s5
        # metainit d105 -m d115
        # metainit -f d116 1 1 c1t0d0s6
        # metainit d106 -m d116

    17) Edit the vfstab to specify the metadevices just created.
       Old:
        /dev/dsk/c1t0d0s1  -                   swap   -    no   -
        /dev/md/dsk/d100   /dev/md/rdsk/d100   /      ufs  1    no   -
        /dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5  /usr   ufs  1    no   logging
        /dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4  /var   ufs  1    no   logging
        #/dev/md/dsk/d206  /dev/md/rdsk/d206   /globaldevices  ufs  2  yes  logging
        swap               -                   /tmp   tmpfs -   yes  -
        /dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6  /global/.devices/node@1  ufs  2  no  global
       New:
        /dev/md/dsk/d101   -                   swap   -    no   -
        /dev/md/dsk/d100   /dev/md/rdsk/d100   /      ufs  1    no   -
        /dev/md/dsk/d105   /dev/md/rdsk/d105   /usr   ufs  1    no   logging
        /dev/md/dsk/d104   /dev/md/rdsk/d104   /var   ufs  1    no   logging
        #/dev/md/dsk/d106  /dev/md/rdsk/d106   /globaldevices  ufs  2  yes  logging
        swap               -                   /tmp   tmpfs -   yes  -
        /dev/md/dsk/d106   /dev/md/rdsk/d106   /global/.devices/node@1  ufs  2  no  global

    18) Reboot node 0a to check the new SDS/SVM boot configuration.

    19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:
        # prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0
        # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s2

    20) Put state database replicas on slice 7 of disk c1t1d0:
        # metadb -a -c 3 /dev/dsk/c1t1d0s7

    21) Create the metadevices for mirror disk c1t1d0 and attach the new mirror sides:
        # metainit d120 1 1 c1t1d0s0
        # metattach d100 d120
        # metainit d121 1 1 c1t1d0s1
        # metattach d101 d121
        # metainit d124 1 1 c1t1d0s4
        # metattach d104 d124
        # metainit d125 1 1 c1t1d0s5
        # metattach d105 d125
        # metainit d126 1 1 c1t1d0s6
        # metattach d106 d126
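
    As a closing check (not part of the original procedure; device and metadevice names follow the examples above and may differ on your systems), a few commands are commonly used to confirm the rebuilt node:

        # scstat -n                   (node 0a should be reported as Online)
        # metadb -i                   (state database replicas on c1t0d0s7 and c1t1d0s7)
        # metastat d100               (root mirror; submirrors resyncing or Okay)
        # metastat | grep -i resync   (overall resync progress of the attached mirrors)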

    Read the article

  • Restructure XML nodes using XSLT

    - by Brian
    Looking to use XSLT to transform my XML. The sample XML is as follows:

        <root>
          <info>
            <firstname>Bob</firstname>
            <lastname>Joe</lastname>
          </info>
          <notes>
            <note>text1</note>
            <note>text2</note>
          </notes>
          <othernotes>
            <note>text3</note>
            <note>text4</note>
          </othernotes>
        </root>

    I'm looking to extract all "note" elements and have them under a single parent node "notes". The result I'm looking for is:

        <root>
          <info>
            <firstname>Bob</firstname>
            <lastname>Joe</lastname>
          </info>
          <notes>
            <note>text1</note>
            <note>text2</note>
            <note>text3</note>
            <note>text4</note>
          </notes>
        </root>

    The XSLT I attempted lets me extract all the "note" elements, but I can't figure out how to wrap them back inside a "notes" node. Here's the XSLT I'm using:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes" indent="yes"/>
          <xsl:template match="notes|othernotes">
            <xsl:apply-templates select="note"/>
          </xsl:template>
          <xsl:template match="*">
            <xsl:copy><xsl:apply-templates/></xsl:copy>
          </xsl:template>
        </xsl:stylesheet>

    The result I'm getting with the above XSLT is:

        <root>
          <info>
            <firstname>Bob</firstname>
            <lastname>Joe</lastname>
          </info>
          <note>text1</note>
          <note>text2</note>
          <note>text3</note>
          <note>text4</note>
        </root>

    Thanks
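
    One way to get the wrapped output (a minimal sketch, not necessarily the only approach) is to handle the document element explicitly: copy its other children as before, then emit a single literal notes wrapper that pulls in the note elements from both notes and othernotes. The template below keeps the asker's identity-style copy for everything else.

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes" indent="yes"/>
          <!-- Copy root, keep every child except the two note containers,
               then emit one merged <notes> wrapper. -->
          <xsl:template match="root">
            <xsl:copy>
              <xsl:apply-templates select="*[not(self::notes or self::othernotes)]"/>
              <notes>
                <xsl:apply-templates select="notes/note | othernotes/note"/>
              </notes>
            </xsl:copy>
          </xsl:template>
          <!-- Identity-style copy for all other elements. -->
          <xsl:template match="*">
            <xsl:copy><xsl:apply-templates/></xsl:copy>
          </xsl:template>
        </xsl:stylesheet>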

    Read the article

  • how much time does grid.py take to run ?

    - by trinity
    Hello all, I am using libsvm for binary classification. I wanted to try grid.py, as it is said to improve results. I ran this script for five files in separate terminals, and it has been running for more than 12 hours. This is the state of my 5 terminals now:

        [root@localhost tools]# python grid.py sarts_nonarts_feat.txt>grid_arts.txt
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sgames_nongames_feat.txt>grid_games.txt
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sref_nonref_feat.txt>grid_ref.txt
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt>grid_biz.txt
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
        Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
        line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".

        [root@localhost tools]# python grid.py snews_nonnews_feat.txt>grid_news.txt
        Wrong input format at line 494
        Traceback (most recent call last):
          File "grid.py", line 223, in run
            if rate is None: raise "get no rate"
        TypeError: exceptions must be classes or instances, not str

    I had redirected the outputs to files, but those files contain nothing so far. The following files were created:

        sarts_nonarts_feat.txt.out     sarts_nonarts_feat.txt.png
        sgames_nongames_feat.txt.out   sgames_nongames_feat.txt.png
        sref_nonref_feat.txt.out       sref_nonref_feat.txt.png
        sbiz_nonbiz_feat.txt.out       sbiz_nonbiz_feat.txt.png
        snews_nonnews_feat.txt.out     (empty)

    There is just one line of information in the .out files, and the .png files are gnuplot plots, but I don't understand what the plots and warnings convey. Should I re-run them? Can anyone please tell me how much time this script might take if each input file contains about 144000 lines? Thanks and regards
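
    For context (not part of the original question): grid.py's runtime is dominated by the size of the (C, gamma) grid times the cross-validation folds, so a coarser grid is the usual way to get a first answer faster. A minimal sketch, assuming the standard libsvm-tools options (run grid.py with no arguments to confirm the exact flags in your copy); the file name is the asker's own dataset:

        # Coarser log2(C) and log2(gamma) steps and 3-fold CV instead of the defaults,
        # which cuts the number of svm-train runs considerably.
        python grid.py -log2c -5,15,4 -log2g -15,3,4 -v 3 sarts_nonarts_feat.txt > grid_arts.txt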

    Read the article

  • How can I bind a instance of a custom class to a WPF TreeListView without bugs?

    - by user327104
    First, from http://blogs.msdn.com/atc_avalon_team/archive/2006/03/01/541206.aspx I get a nice TreeListView. I left the original classes (TreeListView, TreeListViewItem, and LevelToIndentConverter) intact; the only code I have added is in the XAML, here it is:

        <Style TargetType="{x:Type l:TreeListViewItem}">
          ....
          </ControlTemplate.Triggers>
          <ControlTemplate.Resources>
            <HierarchicalDataTemplate DataType="{x:Type local:FileInfo}" ItemsSource="{Binding Childs}">
            </HierarchicalDataTemplate>
          </ControlTemplate.Resources>
        </ControlTemplate>
        ...

    Here is my custom class:

        public class FileInfo
        {
            private List<FileInfo> childs;
            public FileInfo()
            {
                childs = new List<FileInfo>();
            }
            public bool IsExpanded { get; set; }
            public bool IsSelected { get; set; }
            public List<FileInfo> Childs
            {
                get { return childs; }
                set { }
            }
            public string FileName { get; set; }
            public string Size { get; set; }
        }

    Before I use my custom class I remove all the items inside treeListView1:

        <l:TreeListView>
          <l:TreeListView.Columns>
            <GridViewColumn Header="Name" CellTemplate="{StaticResource CellTemplate_Name}" />
            <GridViewColumn Header="IsAbstract" DisplayMemberBinding="{Binding IsAbstract}" Width="60" />
            <GridViewColumn Header="Namespace" DisplayMemberBinding="{Binding Namespace}" />
          </l:TreeListView.Columns>
        </l:TreeListView>

    Finally, I add this code to bind an instance of my class to the TreeListView:

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            FileInfo root = new FileInfo() { FileName = "mis hojos", Size = "asdf" };
            root.Childs.Add(new FileInfo() { FileName = "sub", Size = "123456" });
            treeListView1.Items.Add(root);
            root.Childs[0].Childs.Add(new FileInfo() { FileName = "asdf" });
            root.Childs[0].IsExpanded = true;
        }

    The bug is that the expand buttons don't appear, and when a node is expanded by double-click the child nodes don't look like child nodes. Please help!

    Read the article

  • How to create nested directories in PhoneGap

    - by Stallman
    I have tried on this, but it didn't satisfy my request at all, so I wrote a new one:

        var file_system;
        var fs_root;
        window.requestFileSystem(LocalFileSystem.PERSISTENT, 1024*1024, onInitFs, request_FS_fail);

        function onInitFs(fs) {
            file_system = fs;
            fs_root = file_system.root;
            alert("ini fs");
            create_Directory();
            alert("ini fs done.");
        }

        var string_array;
        var main_dir = "story_repository/" + User_Editime;
        string_array = new Array("story_repository/", main_dir, main_dir + "/rec",
                                 main_dir + "/img", "story_repository/" + User_Name);

        function create_Directory() {
            var start = 0;
            var path = "";
            while (start < string_array.length) {
                path = string_array[start];
                alert(start + " th created directory " + " is " + path);
                fs_root.getDirectory(
                    path,
                    {create: true, exclusive: false},
                    function(entry) { alert(path + "is created."); },
                    create_dir_err()
                );
                start++;
            } // while loop
        } // create_Directory

        function create_dir_err() {
            alert("Recursively create directories error.");
        }

        function request_FS_fail() {
            alert("Failed to request File System ");
        }

    Although the directories are created, it sends me the ErrorCallback: alert("Recursively create directories error.");

    Firstly, I don't think this code will work, since I have already tried the following and it failed:

        window.requestFileSystem(
            LocalFileSystem.PERSISTENT, 0,
            //request file system success callback.
            function(fileSys) {
                fileSys.root.getDirectory(
                    "story_repository/" + dir_name,
                    {create: true, exclusive: false},
                    //Create directory story_repository/Stallman_time.
                    function(directory) {
                        alert("Create directory: " + "story_repository/" + dir_name);
                        //create dir_name/img/
                        fileSys.root.getDirectory {
                            "story_repository/" + dir_name + "/img/",
                            {create: true, exclusive: false},
                            function(directory) {
                                alert("Create a directory: " + "story_repository/" + dir_name + "/img/"); //check.
                                //create dir_name/rec/
                                fileSys.root.getDirectory {
                                    "story_repository/" + dir_name + "/rec/",
                                    {create: true, exclusive: false},
                                    function(directory) {
                                        alert("Create a directory: " + "story_repository/" + dir_name + "/rec/"); //check.
                                        //Go ahead.
                                    },
                                    createError
                                } //create dir_name/rec/
                            },
                            createError
                        } //create dir_name/img
                    },
                    createError);
            }, //Create directory story_repository/Stallman_time.
            createError());

    I just call fs.root.getDirectory repeatedly, but it failed. The first one is almost the same...
    1. What is the problem at all? Why does the first one always give me the ErrorCallback?
    2. Why can't the second one work?
    3. Does anyone have a better solution (no ErrorCallback message)?
    ps: I work on Android and PhoneGap 1.7.0.
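
    For context (an illustrative sketch, not an official PhoneGap recipe, and the names below are hypothetical rather than from the post): a common pattern with the File API is to create one path segment at a time, descending only after the parent directory exists, and to pass the error handler as a function reference instead of calling it.

        // Create nested directories such as "story_repository/x/img" by walking
        // the path segments one at a time from the filesystem root.
        function createPath(rootDirEntry, segments, onDone, onError) {
            if (segments.length === 0) {
                onDone(rootDirEntry);
                return;
            }
            rootDirEntry.getDirectory(
                segments[0],
                { create: true, exclusive: false },
                function (dirEntry) {
                    // Parent now exists; recurse into it for the next segment.
                    createPath(dirEntry, segments.slice(1), onDone, onError);
                },
                onError  // pass the function itself, do not invoke it here
            );
        }

        // Usage, e.g. inside onInitFs(fs):
        // createPath(fs.root, ["story_repository", "some_dir", "img"],
        //            function () { alert("directories ready"); },
        //            function (err) { alert("create failed: " + err.code); });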

    Read the article

  • Ignore order of elements using xs:extension

    - by Peter Lang
    How can I design my xsd to ignore the sequence of elements?

        <root>
          <a/>
          <b/>
        </root>

        <root>
          <b/>
          <a/>
        </root>

    I need to use extension for code generation reasons, so I tried the following using all:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema targetNamespace="http://www.example.com/test"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:t="http://www.example.com/test">
          <xs:complexType name="BaseType">
            <xs:all>
              <xs:element name="a" type="xs:string" />
            </xs:all>
          </xs:complexType>
          <xs:complexType name="ExtendedType">
            <xs:complexContent>
              <xs:extension base="t:BaseType">
                <xs:all> <!-- ERROR -->
                  <xs:element name="b" type="xs:string" />
                </xs:all>
              </xs:extension>
            </xs:complexContent>
          </xs:complexType>
          <xs:element name="root" type="t:ExtendedType"></xs:element>
        </xs:schema>

    This xsd is not valid though; the following error is reported at <!-- ERROR -->:

        cos-all-limited.1.2: An all model group must appear in a particle with
        {min occurs} = {max occurs} = 1, and that particle must be part of a pair
        which constitutes the {content type} of a complex type definition.

    Documentation of cos-all-limited.1.2 says:

        1.2 the {term} property of a particle with {max occurs}=1 which is part of
        a pair which constitutes the {content type} of a complex type definition.

    I don't really understand this (neither xsd nor English native speaker :) ). Am I doing the wrong thing, am I doing the right thing wrong, or is there no way to achieve this? Thanks!
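
    For context (a sketch under stated assumptions, not a verified fix for the poster's toolchain): the cos-all-limited error reflects an XSD 1.0 rule that an xs:all group may not appear inside an extension, so with extension the usual compromise is to accept a fixed element order via xs:sequence; relaxing this (extending all-groups) is, to my understanding, only possible in XSD 1.1 validators. A hypothetical reshaping of the two types:

        <!-- Both types use xs:sequence, so <root><a/><b/></root> validates,
             but the a/b order is now fixed rather than order-independent. -->
        <xs:complexType name="BaseType">
          <xs:sequence>
            <xs:element name="a" type="xs:string"/>
          </xs:sequence>
        </xs:complexType>
        <xs:complexType name="ExtendedType">
          <xs:complexContent>
            <xs:extension base="t:BaseType">
              <xs:sequence>
                <xs:element name="b" type="xs:string"/>
              </xs:sequence>
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>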

    Read the article

< Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >