Search Results

Search found 3603 results on 145 pages for 'kscope 2011'.


  • Why can't I install Microsoft Office 2007 in Ubuntu 11.04?

    - by DK new
    I am very new to Ubuntu and only just getting the hang of it, so my questions might sound stupid, especially because I am a learner in terms of techie things as well. Because of the nature of my work, where everyone uses stupid Windows and Microsoft, I need to have access to MS Office 2007/2010, as documents with many tables or images open all haywire in LibreOffice (which has otherwise been great!). I have been reading up about installing MS Office through WINE/PlayOnLinux, but have been unsuccessful so far. I downloaded an MS Office 2007 package from Pirate Bay, which I extracted into a folder. I tried numerous different ways to install it through WINE and PlayOnLinux, but will discuss the one which seems to be getting me somewhere: http://www.webupd8.org/2011/01/how-to-install-microsoft-office-2007-in.html Initially, when I clicked on the install button of MS Office, I got a message saying "The install location you selected does not have 1558MB free space. Free up space from the selected install location or choose a different install location". The install location in this case said "C:\Program Files\Microsoft Office", which confused me, as I don't have drives named C, Z, etc. I went to configure WINE and, under the drives tab, created a drive named A with the path location /media/cd025f16-433b-4a90-abb6-bb7a025d0450/. The space thing is also confusing, as I have at least 450GB of unused space on my computer. Anyway, when I select the A drive for installation, the installation starts, but soon I get the following error message: "Office cannot find Office.en-us\OfficeLR.Cab. Browse to a valid installation source". The part after "Office" (here "OfficeLR.Cab") has said something different every time I have made an attempt. When I select the Office.en-us sub-folder, or any other folder within the folder where MS Office 2007 is saved, it says "invalid source"! I have been trying to get this sorted for 15 hours now (addictive!) and have learnt loads of things in the process, but have not managed to crack it. It might be something stupidly simple that I am not aware of that is stopping it. I would really appreciate some help! Thanks a lot. Also, I am still getting used to the language, so I might have many questions. I am using Ubuntu 11.04 (tag 11.04). Also, I think I don't have Windows -- when my friend installed Ubuntu on my new laptop, which had Windows 7, he was trying to keep Windows in a separate partition, but something happened and Windows was not there! Looking forward to some support! Again, thanks a lot.
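
    A minimal sketch of the Wine-prefix route commonly suggested for Office 2007, assuming the download is (or can be repacked as) an ISO image; the file name office2007.iso, the prefix path and the mount point are placeholders, not anything from the question. Installing from a loop-mounted image rather than a loosely extracted folder is what usually avoids the "cannot find ...Cab" prompts:

        # Create a fresh 32-bit Wine prefix dedicated to Office (placeholder path)
        export WINEARCH=win32
        export WINEPREFIX="$HOME/.wine-office2007"
        winecfg                                   # creates the prefix; just close the dialog

        # Mount the Office 2007 ISO instead of installing from an extracted folder
        sudo mkdir -p /mnt/office2007
        sudo mount -o loop ~/Downloads/office2007.iso /mnt/office2007

        # Run the installer inside that prefix
        wine /mnt/office2007/setup.exe

    If all you have is an extracted folder, genisoimage can repack it into an ISO first (genisoimage -o office2007.iso -J -R <folder>) so the installer sees a single, intact installation source.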

    Read the article

  • WDS's MDT DeploymentShare and REMINST replicated with DFS-R does not read WIM from local WDS

    - by mbrownnyc
    I've read several guides on using DFS-R with WDS and MDT to replicate REMINST and the DeploymentShare, and I have a particularly strange problem. On the receiving server, after configuring WDS and mounting the DeploymentShare into MDT's DeploymentWorkbench, I also performed the following:
    1) In .\Control\Bootstrap.ini, changed DeployRoot to \\%wdsserver%\DeploymentShare$
    2) Changed the UNC path at the root of the MDT Deployment Share in the DeploymentWorkbench to match that of the current server.
    3) In the Unattend.xml files located under .\Control\*\*, modified the following value to match the current server: <cpi:offlineImage catelog://HOST/
    I am able to boot and grab the LiteTouch PE image off the local WDS TFTP server, but the WIM files, the scripts, and everything else are being pulled from the WDS server at the remote site (the original WDS server that was the source of the files within the DFS-R replicated folder). What do I do to solve this problem? I've grepped all the files below the DeploymentShare looking for instances of the hostname of the WDS server at the remote site (the source of the files), but found none. Here are the guides I referred to: http://technet.microsoft.com/en-us/library/cc771324%28WS.10%29.aspx http://blogs.technet.com/b/askds/archive/2009/12/16/wds-and-dfsr-love-at-first-sync.aspx http://oasysadmin.com/2011/11/03/copying-moving-and-replicating-the-mdt-2010-deployment-share/

    Read the article

  • Trying to run chrooted suPHP with UserDir, getting 500 server error

    - by Greg Antowski
    I've managed to get suPHP working fine with UserDir (i.e. PHP files run from the /home/$username/public_html directory), but I can't get it to work when I chroot it to the user's home directory. I've been following this guide, adapting it to my needs: http://compilefailure.blogspot.co.nz/2011/09/suphp-chroot-gotchas.html I'm not creating vhosts; I just want PHP scripts to be jailed to the user's home directory. I've gotten to the part where you use makejail and set up a symlink, but even with the symlink set up correctly, PHP scripts won't run. This is what's shown in the Apache error log:
    SoftException in Application.cpp:537: Could not execute script "/home/jimmy/public_html/test.php" [error] [client 127.0.0.1] Caused by SystemException in API_Linux.cpp:444: execve() for program "/usr/bin/php-cgi" failed: No such file or directory
    The thing is, if I try running either of the following commands in the terminal, it works without any issues:
    /home/jimmy/usr/bin/php-cgi /home/jimmy/public_html/test.php
    /usr/bin/php-cgi /home/jimmy/public_html/test.php
    I've been trying for hours to get this going, and documentation for this kind of stuff is almost non-existent. If anyone could help me out with this, I'd be extremely grateful.
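
    A hedged diagnostic sketch for this kind of failure, assuming the jail really is /home/jimmy as in the guide: once suPHP chroots, the execve() of /usr/bin/php-cgi is resolved inside the jail, so the binary and every shared library it links against (including the dynamic loader) must exist under the jail at the same relative paths. The copy loop below is illustrative; the exact library list comes from ldd on your own system:

        # Does the interpreter run when resolved from inside the jail?
        sudo chroot /home/jimmy /usr/bin/php-cgi -v

        # If that also fails with "No such file or directory", copy php-cgi and its
        # shared libraries into the jail, preserving their paths
        sudo mkdir -p /home/jimmy/usr/bin
        sudo cp /usr/bin/php-cgi /home/jimmy/usr/bin/
        for lib in $(ldd /usr/bin/php-cgi | grep -o '/[^ ]*'); do
            sudo cp -v --parents "$lib" /home/jimmy/
        done

    The two working commands from a normal shell don't prove the chroot case, because outside the jail the binary still finds the system's libraries.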

    Read the article

  • Unable to connect to Postgres on Vagrant Box - Connection refused

    - by Ben Miller
    First off, I'm new to Vagrant and Postgres. I created my Vagrant instance using http://files.vagrantup.com/lucid32.box without any trouble, and I am able to run vagrant up and vagrant ssh without issue. I followed the instructions at http://blog.crowdint.com/2011/08/11/postgresql-in-vagrant.html with one minor alteration: I installed the "postgresql-8.4-postgis" package instead of "postgresql postgresql-contrib". I started the server using:
    postgres@lucid32:/home/vagrant$ /etc/init.d/postgresql-8.4 start
    While connected to the Vagrant instance I can use psql to connect to the database without issue. In my Vagrantfile I had already added:
    config.vm.forward_port 5432, 5432
    but when I try to run psql from localhost I get:
    psql: could not connect to server: Connection refused Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
    I'm sure I am missing something simple. Any ideas? Update: I found a reference to an issue like this, and the article suggested using:
    psql -U postgres -h localhost
    With that I get:
    psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
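
    A minimal sketch of the usual fix, run inside the guest (vagrant ssh): the stock Debian/Ubuntu packages only listen on the guest's own localhost and only trust local connections, so the forwarded port goes nowhere until postgresql.conf and pg_hba.conf are opened up. Paths assume the 8.4 packaging used in the question, and the 0.0.0.0/0 rule is deliberately broad for a throwaway dev box:

        # Listen on all interfaces instead of just the guest's localhost
        sudo sed -i "s/^#\{0,1\}listen_addresses.*/listen_addresses = '*'/" \
            /etc/postgresql/8.4/main/postgresql.conf

        # Accept password-authenticated TCP connections from anywhere (dev only)
        echo "host    all    all    0.0.0.0/0    md5" | \
            sudo tee -a /etc/postgresql/8.4/main/pg_hba.conf

        sudo /etc/init.d/postgresql-8.4 restart

        # From the host side, go through the forwarded port explicitly
        psql -U postgres -h localhost -p 5432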

    Read the article

  • Getting Perl DBD::mysql working on OS X 10.7?

    - by Bart B
    I can't seem to get Perl & MySQL to talk to each other on OS X 10.7 Lion. I did all the installs by the book: I used Oracle's PKG installer for the latest MySQL Community Server, and I installed DBI and DBD::mysql via CPAN. There were no problems at all during the install, but when I try to use DBD::mysql to connect to my local DB server I get the following error:
    install_driver(mysql) failed: Can't load '/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle' for module DBD::mysql: dlopen(/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle, 1): Library not loaded: /usr/local/mysql/lib/libmysqlclient.16.dylib Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle Reason: image not found at /System/Library/Perl/5.12/darwin-thread-multi-2level/DynaLoader.pm line 204. at (eval 3) line 3 Compilation failed in require at (eval 3) line 3. Perhaps a required shared library or dll isn't installed where expected
    After a lot of googling, all I could find were suggested hacks, so I gave this one a go: http://arkoftech.wordpress.com/2011/02/10/fixing-dbdmysql-for-mysql-5-5-89-under-macos-10-6-x/ I had to update some of the paths in the instructions, since on Lion it's Perl 5.12, not 5.10. After doing that I got a new error:
    dyld: lazy symbol binding failed: Symbol not found: _mysql_init Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle Expected in: flat namespace dyld: Symbol not found: _mysql_init Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle Expected in: flat namespace Trace/BPT trap: 5
    There must be a simple way to get MySQL & Perl working on OS X -- HELP!
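
    A hedged sketch of how this usually gets diagnosed: the dlopen error means the compiled mysql.bundle references a libmysqlclient dylib that isn't where the loader looks, and the later "_mysql_init" error means whatever library it ends up loading does not actually provide the MySQL client symbols. Checking with otool and rebuilding DBD::mysql against the installed client is the cleaner route (the library version under /usr/local/mysql/lib and the unpacked DBD-mysql directory are whatever exists on your machine, not known from the question):

        # What does the XS bundle expect, and which client libraries are installed?
        otool -L /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
        ls /usr/local/mysql/lib/libmysqlclient*.dylib

        # Rebuild DBD::mysql against the installed client (run in the unpacked
        # DBD-mysql source from CPAN)
        export DYLD_LIBRARY_PATH=/usr/local/mysql/lib:$DYLD_LIBRARY_PATH
        perl Makefile.PL --libs="-L/usr/local/mysql/lib -lmysqlclient" \
            --cflags="-I/usr/local/mysql/include"
        make && make test && sudo make install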

    Read the article

  • How to diagnose frequent segfaults

    - by Andreas Gohr
    My server is logging frequent segmentation faults to /var/log/kern.log in different tools. So far I've seen them in Perl, PHP and rsync. All installed software is up-to-date Debian packages. Here's an exerpt from the log file: Mar 2 01:07:54 gaz kernel: [ 5316.246303] imapsync[4533]: segfault at 8b ip 00007fb448c98fe6 sp 00007ffff571dd68 error 4 in libperl.so.5.10.1[7fb448bd7000+164000] Mar 2 01:17:42 gaz kernel: [ 5904.354307] php5-cgi[4441]: segfault at 2bb3dc8 ip 0000000002bb3dc8 sp 00007fffbeeaae48 error 15 Mar 2 02:54:05 gaz kernel: [11687.922316] php5-cgi[4495]: segfault at 2d7acf9 ip 0000000002d7acf9 sp 00007fff60c6eb18 error 15 Mar 2 10:50:08 gaz kernel: [40250.390322] BUG: unable to handle kernel paging request at 00000000024b03f0 Mar 2 10:50:08 gaz kernel: [40250.390341] IP: [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390353] PGD 208c71067 PUD 21c811067 PMD 209329067 PTE 8000000211c88067 Mar 2 10:50:08 gaz kernel: [40250.390365] Oops: 0011 [#1] SMP Mar 2 10:50:08 gaz kernel: [40250.390373] last sysfs file: /sys/devices/pci0000:00/0000:00:12.0/host4/target4:0:0/4:0:0:0/block/sdb/stat Mar 2 10:50:08 gaz kernel: [40250.390386] CPU 1 Mar 2 10:50:08 gaz kernel: [40250.390392] Modules linked in: cpufreq_userspace cpufreq_stats cpufreq_powersave cpufreq_conservative xt_recent xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ ipv4 ip6table_filter ip6_tables xt_DSCP xt_TCPMSS ipt_LOG ipt_REJECT iptable_mangle iptable_filter xt_multiport xt_state xt_limit xt_conntrack nf_conntrack_ftp nf_conntrack ip_tables x_tables loop snd _hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm snd drm_kms_helper soundcore drm snd_page_alloc i2c_algo_bit shpchp i2c_piix4 edac_core pcspkr k8temp evdev edac_m ce_amd pci_hotplug i2c_core button ext3 jbd mbcache dm_mod powernow_k8 aacraid 3w_9xxx 3w_xxxx raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 md_mod sata_nv sata_sil sata_via sd_mod crc_t10dif ata_generic ahci pata_atiixp ohci_hcd libata r8169 mii thermal ehci_hcd processor thermal_sys scsi_mod usbcore nls_base [last unloaded: scsi_wait_scan] Mar 2 10:50:08 gaz kernel: [40250.390566] Pid: 11482, comm: munin-limits Not tainted 2.6.32-5-amd64 #1 MS-7368 Mar 2 10:50:08 gaz kernel: [40250.390576] RIP: 0010:[<00000000024b03f0>] [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390586] RSP: 0018:ffff88021cc8dec0 EFLAGS: 00010286 Mar 2 10:50:08 gaz kernel: [40250.390593] RAX: 000000001ddc1000 RBX: 0000000000000010 RCX: ffffffff810f9904 Mar 2 10:50:08 gaz kernel: [40250.390600] RDX: 0000000000000000 RSI: ffffea0007688200 RDI: 0000000000000286 Mar 2 10:50:08 gaz kernel: [40250.390608] RBP: 00000000ffffffea R08: 0000000000000025 R09: 7865542f30312e35 Mar 2 10:50:08 gaz kernel: [40250.390615] R10: 000000d01cc8ddf8 R11: 0000000000000246 R12: ffff88021cc8def8 Mar 2 10:50:08 gaz kernel: [40250.390622] R13: 0000000002295010 R14: 00000000022c9db0 R15: 0000000002488d78 Mar 2 10:50:08 gaz kernel: [40250.390630] FS: 00007f3b3c8b2700(0000) GS:ffff880008d00000(0000) knlGS:0000000000000000 Mar 2 10:50:08 gaz kernel: [40250.390641] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 Mar 2 10:50:08 gaz kernel: [40250.390648] CR2: 00000000024b03f0 CR3: 000000021c5d1000 CR4: 00000000000006e0 Mar 2 10:50:08 gaz kernel: [40250.390656] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Mar 2 10:50:08 gaz kernel: [40250.390663] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 
0000000000000400 Mar 2 10:50:08 gaz kernel: [40250.390671] Process munin-limits (pid: 11482, threadinfo ffff88021cc8c000, task ffff88021bf59530) Mar 2 10:50:08 gaz kernel: [40250.390681] Stack: Mar 2 10:50:08 gaz kernel: [40250.390687] ffffffff810f1d4a ffff880208c63228 0000000000000000 00007fffc2dcecc0 Mar 2 10:50:08 gaz kernel: [40250.390697] <0> 00000000024ba2b0 0000000002295010 ffffffff810f1e3d 0000000000000004 Mar 2 10:50:08 gaz kernel: [40250.390712] <0> ffff88021bf59530 ffff88021c4edc00 ffffffff812fe0b6 ffff88021c4edc60 Mar 2 10:50:08 gaz kernel: [40250.390732] Call Trace: Mar 2 10:50:08 gaz kernel: [40250.390742] [<ffffffff810f1d4a>] ? vfs_fstatat+0x2c/0x57 Mar 2 10:50:08 gaz kernel: [40250.390750] [<ffffffff810f1e3d>] ? sys_newstat+0x11/0x30 Mar 2 10:50:08 gaz kernel: [40250.390760] [<ffffffff812fe0b6>] ? do_page_fault+0x2e0/0x2fc Mar 2 10:50:08 gaz kernel: [40250.390768] [<ffffffff812fbf55>] ? page_fault+0x25/0x30 Mar 2 10:50:08 gaz kernel: [40250.390777] [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b Mar 2 10:50:08 gaz kernel: [40250.390783] Code: Bad RIP value. Mar 2 10:50:08 gaz kernel: [40250.390791] RIP [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390799] RSP <ffff88021cc8dec0> Mar 2 10:50:08 gaz kernel: [40250.390805] CR2: 00000000024b03f0 Mar 2 10:50:08 gaz kernel: [40250.391051] ---[ end trace 1cc1473b539c7f6e ]--- Mar 2 11:42:20 gaz kernel: [43382.242301] php5-cgi[10963]: segfault at d81160 ip 0000000000d81160 sp 00007fff3adcb058 error 15 Mar 2 21:51:14 gaz kernel: [79916.418302] php5-cgi[20089]: segfault at 1c59dc8 ip 0000000001c59dc8 sp 00007fff9b877fb8 error 15 Mar 3 03:45:01 gaz kernel: [101143.334305] munin-update[22519] general protection ip:7f516dce204c sp:7fff6049a978 error:0 in libperl.so.5.10.1[7f516dc7d000+164000] Mar 3 11:22:37 gaz kernel: [128599.570307] php5-cgi[22888]: segfault at 36485a8 ip 00000000036485a8 sp 00007fff2d56e1c8 error 15 Mar 4 08:32:17 gaz kernel: [204779.842304] php5-cgi[22090]: segfault at 18 ip 0000000000689e5e sp 00007fff677a6a48 error 6 in php5-cgi[400000+6f9000] Mar 4 10:01:02 gaz kernel: [210104.434706] rsync[22236] general protection ip:7f14a07137f9 sp:7fff88f940b8 error:0 in libc-2.11.2.so[7f14a069d000+158000] Mar 4 11:32:22 gaz kernel: [215584.262316] BUG: unable to handle kernel paging request at 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.262331] IP: [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262343] PGD 0 Mar 4 11:32:22 gaz kernel: [215584.262350] Oops: 0010 [#2] SMP Mar 4 11:32:22 gaz kernel: [215584.262359] last sysfs file: /sys/devices/pci0000:00/0000:00:12.0/host4/target4:0:0/4:0:0:0/block/sdb/stat Mar 4 11:32:22 gaz kernel: [215584.262371] CPU 1 Mar 4 11:32:22 gaz kernel: [215584.262378] Modules linked in: cpufreq_userspace cpufreq_stats cpufreq_powersave cpufreq_conservative xt_recent xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 ip6table_filter ip6_tables xt_DSCP xt_TCPMSS ipt_LOG ipt_REJECT iptable_mangle iptable_filter xt_multiport xt_state xt_limit xt_conntrack nf_conntrack_ftp nf_conntrack ip_tables x_tables loop snd_hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm snd drm_kms_helper soundcore drm snd_page_alloc i2c_algo_bit shpchp i2c_piix4 edac_core pcspkr k8temp evdev edac_mce_amd pci_hotplug i2c_core button ext3 jbd mbcache dm_mod powernow_k8 aacraid 3w_9xxx 3w_xxxx raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 md_mod sata_nv sata_sil sata_via sd_mod 
crc_t10dif ata_generic ahci pata_atiixp ohci_hcd libata r8169 mii thermal ehci_hcd processor thermal_sys scsi_mod usbcore nls_base [last unloaded: scsi_wait_scan] Mar 4 11:32:22 gaz kernel: [215584.262552] Pid: 1960, comm: proxymap Tainted: G D 2.6.32-5-amd64 #1 MS-7368 Mar 4 11:32:22 gaz kernel: [215584.262563] RIP: 0010:[<00000000ffffff9c>] [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262573] RSP: 0018:ffff880209257e00 EFLAGS: 00010212 Mar 4 11:32:22 gaz kernel: [215584.262580] RAX: ffff8801514eb780 RBX: ffffffff810efb2d RCX: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262590] RDX: 0000000000000020 RSI: 0000000000000001 RDI: ffff8801514eb780 Mar 4 11:32:22 gaz kernel: [215584.262600] RBP: 00000000ffffffe9 R08: 0000000000000000 R09: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262611] R10: ffff880209257e78 R11: ffffffff81152c7c R12: 0000000000000001 Mar 4 11:32:22 gaz kernel: [215584.262622] R13: 0000000000008001 R14: 0000000000000024 R15: 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.262633] FS: 00007fca4de35700(0000) GS:ffff880008d00000(0000) knlGS:0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262644] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 Mar 4 11:32:22 gaz kernel: [215584.262650] CR2: 00000000ffffff9c CR3: 00000001c9cbb000 CR4: 00000000000006e0 Mar 4 11:32:22 gaz kernel: [215584.262661] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262671] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Mar 4 11:32:22 gaz kernel: [215584.262682] Process proxymap (pid: 1960, threadinfo ffff880209256000, task ffff88021c4b1c40) Mar 4 11:32:22 gaz kernel: [215584.262693] Stack: Mar 4 11:32:22 gaz kernel: [215584.262698] ffffffff810f8566 ffff880209257e78 ffff88021c7bf000 ffff88021c7bf0c8 Mar 4 11:32:22 gaz kernel: [215584.262709] <0> 0000800000000000 ffff88021fc0f000 ffff880209257e78 00000000fffffffe Mar 4 11:32:22 gaz kernel: [215584.262724] <0> ffffffff810e5881 ffff880209257f48 0000000000000286 ffff88021fc0f000 Mar 4 11:32:22 gaz kernel: [215584.262743] Call Trace: Mar 4 11:32:22 gaz kernel: [215584.262753] [<ffffffff810f8566>] ? do_filp_open+0xa7/0x94b Mar 4 11:32:22 gaz kernel: [215584.262763] [<ffffffff810e5881>] ? virt_to_head_page+0x9/0x2a Mar 4 11:32:22 gaz kernel: [215584.262771] [<ffffffff810f9904>] ? user_path_at+0x52/0x79 Mar 4 11:32:22 gaz kernel: [215584.262779] [<ffffffff810cfec1>] ? get_unmapped_area+0xd7/0x139 Mar 4 11:32:22 gaz kernel: [215584.262787] [<ffffffff811019d5>] ? alloc_fd+0x67/0x10c Mar 4 11:32:22 gaz kernel: [215584.262795] [<ffffffff810eceaf>] ? do_sys_open+0x55/0xfc Mar 4 11:32:22 gaz kernel: [215584.262804] [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b Mar 4 11:32:22 gaz kernel: [215584.262811] Code: Bad RIP value. Mar 4 11:32:22 gaz kernel: [215584.262819] RIP [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262828] RSP <ffff880209257e00> Mar 4 11:32:22 gaz kernel: [215584.262833] CR2: 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.263077] ---[ end trace 1cc1473b539c7f6f ]--- As you can see there are segfaults, a general protection fault and a Kernel Oops. My first guess was that there's a Hardware problem of some sort and I asked my Hoster (it's a rented root server) to do a full hardwarecheck - they did, but couldn't find any problem. I don't know what and how they checked but their support team is usually quite good. I ran memtester and cpuburn myself and couldn't find any error either. 
Unfortunately I have no reliable way to reproduce these segfaults, they seem to be more or less random. On a hunch I disabled the firewall of the system and ran one of the programs that segfaulted regularily (imapsync) and it seemed to take longer to segfault than before, so the problem might be related to the network stack. Or could just be a random thing. Here are the kernel specs: # uname -a Linux gaz 2.6.32-5-amd64 #1 SMP Wed Jan 12 03:40:32 UTC 2011 x86_64 GNU/Linux # cat /etc/debian_version 6.0 # lsmod Module Size Used by cpufreq_userspace 1992 0 cpufreq_stats 2659 0 cpufreq_powersave 902 0 cpufreq_conservative 5162 0 xt_recent 5977 0 xt_tcpudp 2319 0 iptable_nat 4299 0 nf_nat 13388 1 iptable_nat nf_conntrack_ipv4 9833 3 iptable_nat,nf_nat nf_defrag_ipv4 1139 1 nf_conntrack_ipv4 ip6table_filter 2384 0 ip6_tables 15075 1 ip6table_filter xt_DSCP 1995 0 xt_TCPMSS 2919 0 ipt_LOG 4518 0 ipt_REJECT 1953 0 iptable_mangle 2817 0 iptable_filter 2258 0 xt_multiport 2267 0 xt_state 1303 0 xt_limit 1782 0 xt_conntrack 2407 0 nf_conntrack_ftp 5537 0 nf_conntrack 46535 6 iptable_nat,nf_nat,nf_conntrack_ipv4,xt_state,xt_conntrack,nf_conntrack_ftp ip_tables 13899 3 iptable_nat,iptable_mangle,iptable_filter x_tables 12845 13 xt_recent,xt_tcpudp,iptable_nat,ip6_tables,xt_DSCP,xt_TCPMSS,ipt_LOG,ipt_REJECT,xt_multiport,xt_state,xt_limit,xt_conntrack,ip_tables loop 11799 0 radeon 573996 0 ttm 39986 1 radeon drm_kms_helper 20065 1 radeon snd_hda_codec_atihdmi 2251 1 drm 142359 3 radeon,ttm,drm_kms_helper snd_hda_intel 20019 0 i2c_algo_bit 4225 1 radeon pcspkr 1699 0 i2c_piix4 8328 0 snd_hda_codec 54244 2 snd_hda_codec_atihdmi,snd_hda_intel i2c_core 15712 5 radeon,drm_kms_helper,drm,i2c_algo_bit,i2c_piix4 snd_hwdep 5380 1 snd_hda_codec snd_pcm 60503 2 snd_hda_intel,snd_hda_codec snd_timer 15582 1 snd_pcm snd 46446 5 snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_timer soundcore 4598 1 snd evdev 7352 3 snd_page_alloc 6249 2 snd_hda_intel,snd_pcm k8temp 3283 0 edac_core 29261 0 edac_mce_amd 6433 0 shpchp 26264 0 pci_hotplug 21203 1 shpchp button 4650 0 ext3 106518 2 jbd 37085 1 ext3 mbcache 5050 1 ext3 dm_mod 53754 0 powernow_k8 10978 1 aacraid 59779 0 3w_9xxx 28684 0 3w_xxxx 20569 0 raid10 17809 0 raid456 44500 0 async_raid6_recov 5170 1 raid456 async_pq 3479 2 raid456,async_raid6_recov raid6_pq 77179 2 async_raid6_recov,async_pq async_xor 2478 3 raid456,async_raid6_recov,async_pq xor 4380 1 async_xor async_memcpy 1198 2 raid456,async_raid6_recov async_tx 1734 5 raid456,async_raid6_recov,async_pq,async_xor,async_memcpy raid1 18431 3 raid0 5517 0 md_mod 73824 7 raid10,raid456,raid1,raid0 sata_nv 19166 0 sata_sil 7412 0 sata_via 7928 0 sd_mod 29889 8 crc_t10dif 1276 1 sd_mod ata_generic 3047 0 ahci 32374 6 r8169 29229 0 mii 3210 1 r8169 thermal 11674 0 pata_atiixp 3489 0 libata 133632 6 sata_nv,sata_sil,sata_via,ata_generic,ahci,pata_atiixp ohci_hcd 19212 0 ehci_hcd 31151 0 processor 29935 1 powernow_k8 thermal_sys 11942 2 thermal,processor scsi_mod 122149 5 aacraid,3w_9xxx,3w_xxxx,sd_mod,libata usbcore 122034 3 ohci_hcd,ehci_hcd nls_base 6377 1 usbcore # free total used free shared buffers cached Mem: 8166128 1228036 6938092 0 140412 782060 -/+ buffers/cache: 305564 7860564 Swap: 2102456 0 2102456 So, basically my questions are: How can I diagnose this further? Is there any data in the log above that could help me to isolate the troublemaker? Are there any known problems with the above hardware/software I overlooked when googling for it? 
Is there a way to prevent the kernel from autoloading modules (I probably don't need all these modules and one of them might be the culprit)?
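
    A hedged sketch of the next diagnostic steps usually suggested when segfaults hit unrelated programs: check the installed binaries against Debian's package checksums, capture a core dump from one of the crashers, and re-test the hardware offline. The tool names are standard Debian packages; the imapsync/core-file names just mirror the log above:

        # Do the on-disk binaries and libraries still match what Debian shipped?
        sudo apt-get install debsums
        sudo debsums -s libc6 perl-base php5-cgi rsync

        # Capture a core dump from one of the crashing programs (imapsync is a Perl
        # script, so the backtrace is taken against the perl interpreter)
        ulimit -c unlimited
        echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
        # ...run imapsync with its usual arguments until it segfaults...
        gdb /usr/bin/perl /tmp/core.imapsync.*        # then type: bt

        # Hardware side: memtest86+ from the boot menu is more thorough than a
        # userspace memtester run; smartctl sanity-checks the disks
        sudo smartctl -a /dev/sda

    Individual modules can be kept from autoloading with blacklist lines under /etc/modprobe.d/, but faults spread across unrelated userspace processes more often point at bad RAM, a failing disk or controller, or tampered binaries than at a single driver.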

    Read the article

  • How to remove the hint in the terminal?

    - by jiangchengwu
    As a normal user , when I run some command like ps\netstat, the terminal hint me: (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) I know could redirect STDERR to /dev/null can remove this hint. But I want to know is there any way to remove it , such as edit some configuration files ? [deploy@storage2 ~]$ ps -V (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) procps version 3.2.7 [deploy@storage2 ~]$ ps -V 2>/dev/null procps version 3.2.7 My OS info: [deploy@storage2 ~]$ uname -a Linux storage2 2.6.18-243.el5 #1 SMP Mon Feb 7 18:47:27 EST 2011 x86_64 x86_64 x86_64 GNU/Linux [deploy@storage2 ~]$ lsb_release LSB Version: :core-3.1-amd64:core-3.1-ia32:core-3.1-noarch:graphics-3.1-amd64:graphics-3.1-ia32:graphics-3.1-noarch [deploy@storage2 ~]$ netstat -V (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) net-tools 1.60 netstat 1.42 (2001-04-15) Fred Baumgarten, Alan Cox, Bernd Eckenfels, Phil Blundell, Tuan Hoang and others +NEW_ADDRT +RTF_IRTT +RTF_REJECT +FW_MASQUERADE +I18N AF: (inet) +UNIX +INET +INET6 +IPX +AX25 +NETROM +X25 +ATALK +ECONET +ROSE HW: +ETHER +ARC +SLIP +PPP +TUNNEL +TR +AX25 +NETROM +X25 +FR +ROSE +ASH +SIT +FDDI +HIPPI +HDLC/LAPB There are more info from strace: [deploy@storage2 ~]$ strace ps -V execve("/bin/ps", ["ps", "-V"], [/* 27 vars */]) = 0 brk(0) = 0x929a000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=99752, ...}) = 0 mmap2(NULL, 99752, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7fde000 close(3) = 0 open("/lib/libnsl.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0 \241\210\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=101404, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fdd000 mmap2(0x887000, 92104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x887000 mmap2(0x89a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x12) = 0x89a000 mmap2(0x89c000, 6088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x89c000 close(3) = 0 open("/lib/libdl.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0Pzt\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=16428, ...}) = 0 mmap2(0x747000, 12408, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x747000 mmap2(0x749000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0x749000 close(3) = 0 open("/lib/libm.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\20\204p\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=208352, ...}) = 0 mmap2(0x705000, 155760, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x705000 mmap2(0x72a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x24) = 0x72a000 close(3) = 0 open("/lib/libcrypt.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340\246q\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=45288, ...}) = 0 mmap2(0x71a000, 201020, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7fab000 mmap2(0xf7fb4000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8) = 
0xfffffffff7fb4000 mmap2(0xf7fb6000, 155964, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fb6000 close(3) = 0 open("/lib/libutil.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0 \n\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=13420, ...}) = 0 mmap2(NULL, 12428, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7fa7000 mmap2(0xf7fa9000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0xfffffffff7fa9000 close(3) = 0 open("/lib/libpthread.so.0", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0@(s\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=129716, ...}) = 0 mmap2(0x72e000, 90596, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x72e000 mmap2(0x741000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13) = 0x741000 mmap2(0x743000, 4580, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x743000 close(3) = 0 open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340?]\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=1611564, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fa6000 mmap2(0x5be000, 1328580, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x5be000 mmap2(0x6fd000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13f) = 0x6fd000 mmap2(0x700000, 9668, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x700000 close(3) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fa5000 set_thread_area(0xffd61bb4) = 0 mprotect(0x6fd000, 8192, PROT_READ) = 0 mprotect(0x741000, 4096, PROT_READ) = 0 mprotect(0xf7fa9000, 4096, PROT_READ) = 0 mprotect(0xf7fb4000, 4096, PROT_READ) = 0 mprotect(0x72a000, 4096, PROT_READ) = 0 mprotect(0x749000, 4096, PROT_READ) = 0 mprotect(0x89a000, 4096, PROT_READ) = 0 mprotect(0x5ba000, 4096, PROT_READ) = 0 munmap(0xf7fde000, 99752) = 0 set_tid_address(0xf7fa5708) = 20214 set_robust_list(0xf7fa5710, 0xc) = 0 futex(0xffd61f74, FUTEX_WAKE_PRIVATE, 1) = 0 rt_sigaction(SIGRTMIN, {0x4007323d0, [], 0}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x10000004007322e0, [], 0}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=-4284481536, rlim_max=67108864*1024}) = 0 uname({sys="Linux", node="storage2", ...}) = 0 readlink("/proc/self/exe", "/bin/ps"..., 260) = 7 brk(0) = 0x929a000 brk(0x92bb000) = 0x92bb000 open("/bin/ps", O_RDONLY|O_LARGEFILE) = 3 _llseek(3, -12, [711660], SEEK_END) = 0 read(3, "\274U!\253\2\0\0\0\224\237\t\0", 12) = 12 mmap2(NULL, 634880, PROT_READ, MAP_SHARED, 3, 0x13) = 0xfffffffff7f0a000 mmap2(NULL, 630784, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7e70000 close(3) = 0 futex(0x74a06c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 geteuid32() = 501 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/nsswitch.conf", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=1696, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7ff6000 read(3, "#\n# /etc/nsswitch.conf\n#\n# An ex"..., 4096) = 1696 read(3, "", 4096) = 0 close(3) = 0 munmap(0xf7ff6000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=99752, ...}) = 0 mmap2(NULL, 99752, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7fde000 close(3) = 0 open("/lib/libnss_files.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300\30\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=46680, ...}) = 0 mmap2(NULL, 41616, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7e65000 mmap2(0xf7e6e000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8) = 0xfffffffff7e6e000 close(3) = 0 mprotect(0xf7e6e000, 4096, PROT_READ) = 0 munmap(0xf7fde000, 99752) = 0 open("/etc/passwd", O_RDONLY) = 3 fcntl64(3, F_GETFD) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=2166, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7ff6000 read(3, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 2166 close(3) = 0 munmap(0xf7ff6000, 4096) = 0 mkdir("/tmp/pdk-deploy/", 0755) = -1 EEXIST (File exists) mkdir("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a", 0755) = -1 EEXIST (File exists) open("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a/libperl.so", O_RDONLY|O_LARGEFILE) = 3 close(3) = 0 open("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a/libperl.so", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300!\2\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0664, st_size=1264090, ...}) = 0 mmap2(NULL, 1140104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7d4e000 mmap2(0xf7e5a000, 45056, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x10b) = 0xfffffffff7e5a000 close(3) = 0 rt_sigaction(SIGFPE, {0x1000000000000001, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|SA_SIGINFO|0x3d61cb8, (nil)}, {SIG_DFL, ~[HUP INT ILL ABRT BUS SEGV USR2 PIPE ALRM TERM STOP TSTP TTIN TTOU XCPU WINCH IO PWR SYS RTMIN RT_1 RT_2 RT_4 RT_5 RT_8 RT_9 RT_11 RT_12 RT_13 RT_16 RT_17 RT_18 RT_22 RT_24 RT_25 RT_26 RT_27 RT_28 RT_29 RT_30 RT_31], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 getuid32() = 501 geteuid32() = 501 getgid32() = 502 getegid32() = 502 open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=56454896, ...}) = 0 mmap2(NULL, 2097152, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7b4e000 mmap2(NULL, 241664, PROT_READ, MAP_PRIVATE, 3, 0x13ec) = 0xfffffffff7b13000 mmap2(NULL, 4096, PROT_READ, MAP_PRIVATE, 3, 0x1466) = 0xfffffffff7b12000 close(3) = 0 mmap2(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7af1000 time(NULL) = 1348210009 readlink("/proc/self/exe", "/bin/ps"..., 4095) = 7 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 _llseek(0, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd618a8) = -1 EINVAL (Invalid argument) _llseek(1, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) ioctl(2, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd618a8) = -1 EINVAL (Invalid argument) _llseek(2, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) open("/dev/null", O_RDONLY|O_LARGEFILE) = 3 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61978) = -1 ENOTTY (Inappropriate ioctl for device) _llseek(3, 0, [0], SEEK_CUR) = 0 fcntl64(3, 
F_SETFD, FD_CLOEXEC) = 0 rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 brk(0x92dc000) = 0x92dc000 getppid() = 20212 stat64("/opt/ActivePerl-5.8/site/lib/sitecustomize.pl", 0xffd61560) = -1 ENOENT (No such file or directory) close(3) = 0 open("/usr/lib/.khostd/.hostconf", O_RDONLY|O_LARGEFILE) = 3 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61828) = -1 ENOTTY (Inappropriate ioctl for device) _llseek(3, 0, [0], SEEK_CUR) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=334, ...}) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 read(3, "bindport=9001\ntrustip=221.122.57"..., 4096) = 334 read(3, "", 4096) = 0 close(3) = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20215 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) fstat64(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0 read(3, (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) "tcp 0 0 0.0.0.0:9001"..., 4096) = 109 read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- fstat64(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0 close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [TRAP BUS FPE USR1 CHLD CONT TTOU VTALRM IO RTMIN], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 waitpid(20215, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20215 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [TRAP BUS FPE USR1 CHLD CONT TTOU VTALRM IO RTMIN], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 chdir("/usr/lib/.khostd") = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20218 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|0x3d61850, (nil)}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 
RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 waitpid(20218, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20218 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 chdir("/home/deploy") = 0 stat64("/etc/cron.hourly/hichina", {st_mode=S_IFREG|0755, st_size=711660, ...}) = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20230 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) read(3, "procps version 3.2.7\n", 4096) = 21 read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|0x3d61850, (nil)}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 waitpid(20230, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20230 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 write(1, "procps version 3.2.7\n", 21procps version 3.2.7 ) = 21 munmap(0xf7af1000, 135168) = 0 munmap(0xf7e70000, 630784) = 0 munmap(0xf7f0a000, 634880) = 0 munmap(0xf7d4e000, 1140104) = 0 exit_group(0) = ? [ Process PID=20214 runs in 32 bit mode. ] Thank you very much.
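
    A hedged sketch of the usual workaround, assuming (as the strace suggests) that the message goes to stderr before the real output: I'm not aware of a procps or net-tools configuration file that disables it for non-root users, so the practical route is a per-user shell wrapper that applies the 2>/dev/null redirect automatically:

        # ~/.bashrc for the deploy user: shadow ps/netstat with stderr-silencing wrappers
        ps()      { command ps "$@"      2>/dev/null; }
        netstat() { command netstat "$@" 2>/dev/null; }

    The trade-off is that any genuine error these commands print is silenced as well; dropping the wrappers (unset -f ps netstat) restores the normal behaviour.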

    Read the article

  • SSH login very slow on OS X Leopard

    - by acjohnson55
    My SSH sessions take a very long time to initiate. This applies for logins with and without passwords, interactive and non-interactive. I have tried setting 'GSSAPIAuthentication no' and 'IPQoS 0x00' on the client side, and 'UseDNS no' on the server side, but no dice. I'm really stumped and frustrated. The worst part is that it SFTP takes forever to establish connections too, making file transfer much longer than it would be otherwise. I thought the problem might be something with PAM, because of where the hang is in the sshd log below, so I tried commenting out each line one-by-one in the /etc/pam.d/sshd file. Some caused login to be impossible, some had no apparent effect. I can't really tell if PAM is stalling for other services, but I can say that su'ing into my account from another account with 'su -l' has no apparent delay. I tried creating a new user account, just to see if there was something wrong with my existing account, and the same problem persisted. Any ideas of what's going on? On the client side, the most verbose mode outputs (redacted where reasonable): OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data ... debug1: ... line 1: Applying options for ... debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 53: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ... [x.x.x.x] port 22. debug1: Connection established. debug1: identity file /.../.ssh/id_rsa type -1 debug1: identity file /.../.ssh/id_rsa-cert type -1 debug3: Incorrect RSA1 identifier debug3: Could not load "/.../.ssh/id_dsa" as a RSA1 public key debug1: identity file /.../.ssh/id_dsa type 2 debug1: identity file /.../.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2 debug1: match: OpenSSH_5.2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "..." 
from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 136/256 debug2: bits set: 523/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA ... debug3: load_hostkeys: loading entries for host "..." from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "x.x.x.x" from file "/.../.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /.../.ssh/known_hosts:9 debug3: load_hostkeys: loaded 1 keys debug1: Host '...' is known and matches the RSA host key. 
debug1: Found key in /.../.ssh/known_hosts:9 debug2: bits set: 492/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /.../.ssh/id_dsa (0x7f8b7b41d6c0) debug2: key: /.../.ssh/id_rsa (0x0) debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: start over, passed a different list publickey,password,keyboard-interactive debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering DSA public key: /.../.ssh/id_dsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-dss blen 434 debug2: input_userauth_pk_ok: fp ... debug3: sign_and_send_pubkey: DSA ... debug1: Authentication succeeded (publickey). Authenticated to ... ([x.x.x.x]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. ****** Hangs here ****** debug2: callback start debug2: client_session2_setup: id 0 debug2: fd 3 setting TCP_NODELAY debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env TERM_PROGRAM debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env TMPDIR debug3: Ignored env Apple_PubSub_Socket_Render debug3: Ignored env TERM_PROGRAM_VERSION debug3: Ignored env TERM_SESSION_ID debug3: Ignored env USER debug3: Ignored env COMMAND_MODE debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env Apple_Ubiquity_Message debug3: Ignored env __CF_USER_TEXT_ENCODING debug3: Ignored env PATH debug3: Ignored env MKL_NUM_THREADS debug3: Ignored env PWD debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env DYLD_LIBRARY_PATH debug3: Ignored env PYTHONPATH debug3: Ignored env LOGNAME debug3: Ignored env DISPLAY debug3: Ignored env SECURITYSESSIONID debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 On the server side, the debug output looks like: Sep 16 18:46:40 ... sshd[31435]: debug1: inetd sockets after dupping: 3, 4 Sep 16 18:46:40 ... sshd[31435]: Connection from x.x.x.x port 52758 Sep 16 18:46:40 ... sshd[31435]: debug1: Current Session ID is 56AC0FB0 / Session Attributes are 00008000 Sep 16 18:46:40 ... sshd[31435]: debug1: Running in inetd mode in a non-root session... assuming inetd created the session for us. Sep 16 18:46:40 ... sshd[31435]: debug1: Client protocol version 2.0; client software version OpenSSH_5.9 Sep 16 18:46:40 ... sshd[31435]: debug1: match: OpenSSH_5.9 pat OpenSSH* Sep 16 18:46:40 ... sshd[31435]: debug1: Enabling compatibility mode for protocol 2.0 Sep 16 18:46:40 ... 
sshd[31435]: debug1: Local version string SSH-2.0-OpenSSH_5.2 Sep 16 18:46:40 ... sshd[31435]: debug1: Checking with Service ACLs for ssh login restrictions Sep 16 18:46:40 ... sshd[31435]: debug1: call to mbr_user_name_to_uuid with <...> suceeded to retrieve user_uuid Sep 16 18:46:40 ... sshd[31435]: debug1: Call to mbr_check_service_membership failed with status <0> Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: initializing for "..." Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: setting PAM_RHOST to "x.x.x.x" Sep 16 18:46:40 ... sshd[31435]: Failed none for ... from x.x.x.x port 52758 ssh2 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2 Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1 Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ... Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: temporarily_use_uid: 509/20 (e=0/0) Sep 16 18:46:40 ... sshd[31435]: debug1: trying public key file /.../.ssh/authorized_keys2 Sep 16 18:46:40 ... sshd[31435]: debug1: fd 5 clearing O_NONBLOCK Sep 16 18:46:40 ... sshd[31435]: debug1: matching key found: file /.../.ssh/authorized_keys2, line 1 Sep 16 18:46:40 ... sshd[31435]: Found matching DSA key: ... Sep 16 18:46:40 ... sshd[31435]: debug1: restore_uid: 0/0 Sep 16 18:46:40 ... sshd[31435]: debug1: ssh_dss_verify: signature correct Sep 16 18:46:40 ... sshd[31435]: debug1: do_pam_account: called Sep 16 18:46:40 ... sshd[31435]: Accepted publickey for ... from x.x.x.x port 52758 ssh2 Sep 16 18:46:40 ... sshd[31435]: debug1: monitor_child_preauth: ... has been authenticated by privileged process Sep 16 18:46:40 ... sshd[31435]: debug1: PAM: establishing credentials ***** Hangs here ***** Sep 16 18:46:54 ... sshd[31435]: User child is on pid 31654 Sep 16 18:46:54 ... sshd[31654]: debug1: PAM: establishing credentials Sep 16 18:46:54 ... sshd[31654]: debug1: permanently_set_uid: 509/20 Sep 16 18:46:54 ... sshd[31654]: debug1: Entering interactive session for SSH2. Sep 16 18:46:54 ... sshd[31654]: debug1: server_init_dispatch_20 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 Sep 16 18:46:54 ... sshd[31654]: debug1: input_session_request Sep 16 18:46:54 ... sshd[31654]: debug1: channel 0: new [server-session] Sep 16 18:46:54 ... sshd[31654]: debug1: session_new: session 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_open: session 0: link with channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_open: confirm session Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_global_request: rtype [email protected] want_reply 0 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request pty-req reply 1 Sep 16 18:46:54 ... 
sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req pty-req Sep 16 18:46:54 ... sshd[31654]: debug1: Allocating pty. Sep 16 18:46:54 ... sshd[31435]: debug1: session_new: session 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_pty_req: session 0 alloc /dev/ttys008 Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request env reply 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req env Sep 16 18:46:54 ... sshd[31654]: debug1: server_input_channel_req: channel 0 request shell reply 1 Sep 16 18:46:54 ... sshd[31654]: debug1: session_by_channel: session 0 channel 0 Sep 16 18:46:54 ... sshd[31654]: debug1: session_input_channel_req: session 0 req shell Sep 16 18:46:54 ... sshd[31655]: debug1: Setting controlling tty using TIOCSCTTY.
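
    A hedged sketch for narrowing down a stall at "PAM: establishing credentials": run a throw-away sshd in debug mode on a spare port and time the login while changing one suspect at a time. Port 2222 and the user/host names are placeholders, and the directory-services/DNS angle is a guess consistent with where the log hangs, not something it proves:

        # Server side: a one-shot foreground sshd with debug output on a spare port
        sudo /usr/sbin/sshd -d -p 2222

        # Client side: time the full handshake with maximum verbosity
        time ssh -vvv -p 2222 -o GSSAPIAuthentication=no user@server exit

        # On the server, see whether slow account or DNS lookups line up with the stall
        time dscacheutil -q user -a name someuser
        time host 192.0.2.10        # replace with the connecting client's IP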

    Read the article

  • Amazon VPC NAT not working

    - by rpkelly
    I'm trying to create a NAT instance for my VPC to allow instances on private subnets to connect to the internet (most importantly, S3). I tried following the instructions here: http://docs.amazonwebservices.com/AmazonVPC/2011-07-15/UserGuide/index.html?VPC_NAT_Instance.html Unfortunately, the instances in the private subnet (call it 10.10.2.0/24) cannot reach the internet. I have done the following:
    Created a NAT instance (Amazon's ami-vpc-nat-1.0.0-beta.i386-ebs (ami-d8699bb1)) in the public subnet (call it 10.10.1.0/24).
    Changed "Source / Dest Check" to disabled.
    Created a new entry in the default routing table (which is used by 10.10.2.0/24) and had it point to the ID of the newly created instance.
    Associated an Elastic IP address with the NAT instance.
    Allowed all outbound traffic on the security group of the NAT instance.
    Ensured that all traffic can pass between the two subnets.
    I've also tried doing this with an existing instance using iptables, but had no luck. And I have verified that net.ipv4.ip_forward is 1, just in case anyone was wondering. I still have no internet connectivity from the instances on 10.10.2.0/24. Does anyone have any suggestions?
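
    A hedged checklist to run on the NAT instance itself; Amazon's NAT AMI should already ship configured like this, so this confirms rather than replaces the documented setup, and eth0 plus the 10.10.2.0/24 range simply follow the question:

        # IP forwarding must be on (1)
        sysctl net.ipv4.ip_forward

        # There must be a MASQUERADE/SNAT rule covering the private subnet
        sudo iptables -t nat -L POSTROUTING -n -v

        # If it's missing, this is the classic form of the rule
        sudo iptables -t nat -A POSTROUTING -o eth0 -s 10.10.2.0/24 -j MASQUERADE

        # From an instance in 10.10.2.0/24, test the path out through the NAT box
        curl -sS -o /dev/null -w '%{http_code}\n' https://s3.amazonaws.com/

    If both checks pass, the usual remaining suspects are the private subnet's route table association (the 0.0.0.0/0 route must point at the NAT instance, and the subnet must actually use that table) and the NAT instance's security group, which must allow inbound traffic from 10.10.2.0/24.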

    Read the article

  • UAC being turned off once a day on Windows 7

    - by Mehper C. Palavuzlar
    I have strange problem on my HP laptop. This began to happen recently. Whenever I start my machine, Windows 7 Action Center displays the following warning: You need to restart your computer for UAC to be turned off. Actually, this does not happen if it happened once on a specific day. For example, when I start the machine in the morning, it shows up; but it never shows up in the subsequent restarts within that day. On the next day, the same thing happens again. I never disable UAC, but obviously some rootkit or virus causes this. As soon as I get this warning, I head for the UAC settings, and re-enable UAC to dismiss this warning. This is a bothersome situation as I can't fix it. First, I have run a full scan on the computer for any probable virus and malware/rootkit activity, but TrendMicro OfficeScan said that no viruses have been found. I went to an old Restore Point using Windows System Restore, but the problem was not solved. What I have tried so far (which couldn't find the rootkit): TrendMicro OfficeScan Antivirus AVAST Malwarebytes' Anti-malware Ad-Aware Vipre Antivirus GMER TDSSKiller (Kaspersky Labs) HiJackThis RegRuns UnHackMe SuperAntiSpyware Portable Tizer Rootkit Razor (*) Sophos Anti-Rootkit SpyHunter 4 There are no other strange activities on the machine. Everything works fine except this bizarre incident. What could be the name of this annoying rootkit? How can I detect and remove it? EDIT: Below is the log file generated by HijackThis: Logfile of Trend Micro HijackThis v2.0.4 Scan saved at 13:07:04, on 17.01.2011 Platform: Windows 7 (WinNT 6.00.3504) MSIE: Internet Explorer v8.00 (8.00.7600.16700) Boot mode: Normal Running processes: C:\Windows\system32\taskhost.exe C:\Windows\system32\Dwm.exe C:\Windows\Explorer.EXE C:\Program Files\CheckPoint\SecuRemote\bin\SR_GUI.Exe C:\Windows\System32\igfxtray.exe C:\Windows\System32\hkcmd.exe C:\Windows\system32\igfxsrvc.exe C:\Windows\System32\igfxpers.exe C:\Program Files\Hewlett-Packard\HP Wireless Assistant\HPWAMain.exe C:\Program Files\Synaptics\SynTP\SynTPEnh.exe C:\Program Files\Hewlett-Packard\HP Quick Launch Buttons\QLBCTRL.exe C:\Program Files\Analog Devices\Core\smax4pnp.exe C:\Program Files\Hewlett-Packard\HP Quick Launch Buttons\VolCtrl.exe C:\Program Files\LightningFAX\LFclient\lfsndmng.exe C:\Program Files\Common Files\Java\Java Update\jusched.exe C:\Program Files\Microsoft Office Communicator\communicator.exe C:\Program Files\Iron Mountain\Connected BackupPC\Agent.exe C:\Program Files\Trend Micro\OfficeScan Client\PccNTMon.exe C:\Program Files\Microsoft LifeCam\LifeExp.exe C:\Program Files\Hewlett-Packard\Shared\HpqToaster.exe C:\Program Files\Windows Sidebar\sidebar.exe C:\Program Files\mimio\mimio Studio\system\aps_tablet\atwtusb.exe C:\Program Files\Microsoft Office\Office12\OUTLOOK.EXE C:\Program Files\Babylon\Babylon-Pro\Babylon.exe C:\Program Files\Mozilla Firefox\firefox.exe C:\Users\userx\Desktop\HijackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = about:blank R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = 
http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,AutoConfigURL = http://www.yaysat.com.tr/proxy/proxy.pac R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll O2 - BHO: Babylon IE plugin - {9CFACCB6-2F3F-4177-94EA-0D2B72D384C1} - C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files\Java\jre6\bin\jp2ssv.dll O4 - HKLM\..\Run: [IgfxTray] C:\Windows\system32\igfxtray.exe O4 - HKLM\..\Run: [HotKeysCmds] C:\Windows\system32\hkcmd.exe O4 - HKLM\..\Run: [Persistence] C:\Windows\system32\igfxpers.exe O4 - HKLM\..\Run: [hpWirelessAssistant] C:\Program Files\Hewlett-Packard\HP Wireless Assistant\HPWAMain.exe O4 - HKLM\..\Run: [SynTPEnh] C:\Program Files\Synaptics\SynTP\SynTPEnh.exe O4 - HKLM\..\Run: [QlbCtrl.exe] C:\Program Files\Hewlett-Packard\HP Quick Launch Buttons\QlbCtrl.exe /Start O4 - HKLM\..\Run: [SoundMAXPnP] C:\Program Files\Analog Devices\Core\smax4pnp.exe O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files\Adobe\Reader 9.0\Reader\Reader_sl.exe" O4 - HKLM\..\Run: [Adobe ARM] "C:\Program Files\Common Files\Adobe\ARM\1.0\AdobeARM.exe" O4 - HKLM\..\Run: [lfsndmng] C:\Program Files\LightningFAX\LFclient\LFSNDMNG.EXE O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files\Common Files\Java\Java Update\jusched.exe" O4 - HKLM\..\Run: [Communicator] "C:\Program Files\Microsoft Office Communicator\communicator.exe" /fromrunkey O4 - HKLM\..\Run: [AgentUiRunKey] "C:\Program Files\Iron Mountain\Connected BackupPC\Agent.exe" -ni -sss -e http://localhost:16386/ O4 - HKLM\..\Run: [OfficeScanNT Monitor] "C:\Program Files\Trend Micro\OfficeScan Client\pccntmon.exe" -HideWindow O4 - HKLM\..\Run: [Babylon Client] C:\Program Files\Babylon\Babylon-Pro\Babylon.exe -AutoStart O4 - HKLM\..\Run: [LifeCam] "C:\Program Files\Microsoft LifeCam\LifeExp.exe" O4 - HKCU\..\Run: [Sidebar] C:\Program Files\Windows Sidebar\sidebar.exe /autoRun O4 - Global Startup: mimio Studio.lnk = C:\Program Files\mimio\mimio Studio\mimiosys.exe O8 - Extra context menu item: Microsoft Excel'e &Ver - res://C:\PROGRA~1\MICROS~1\Office12\EXCEL.EXE/3000 O8 - Extra context menu item: Translate this web page with Babylon - res://C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll/ActionTU.htm O8 - Extra context menu item: Translate with Babylon - res://C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll/Action.htm O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~1\Office12\REFIEBAR.DLL O9 - Extra button: Translate this web page with Babylon - {F72841F0-4EF1-4df5-BCE5-B3AC8ACF5478} - C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll O9 - Extra 'Tools' menuitem: Translate this web page with Babylon - {F72841F0-4EF1-4df5-BCE5-B3AC8ACF5478} - C:\Program Files\Babylon\Babylon-Pro\Utils\BabylonIEPI.dll O16 - DPF: {00134F72-5284-44F7-95A8-52A619F70751} (ObjWinNTCheck Class) - https://172.20.12.103:4343/officescan/console/html/ClientInstall/WinNTChk.cab O16 - DPF: {08D75BC1-D2B5-11D1-88FC-0080C859833B} (OfficeScan Corp Edition Web-Deployment SetupCtrl Class) - 
https://172.20.12.103:4343/officescan/console/html/ClientInstall/setup.cab O17 - HKLM\System\CCS\Services\Tcpip\Parameters: Domain = yaysat.com O17 - HKLM\Software\..\Telephony: DomainName = yaysat.com O17 - HKLM\System\CS1\Services\Tcpip\Parameters: Domain = yaysat.com O17 - HKLM\System\CS2\Services\Tcpip\Parameters: Domain = yaysat.com O18 - Protocol: qcom - {B8DBD265-42C3-43E6-B439-E968C71984C6} - C:\Program Files\Common Files\Quest Shared\CodeXpert\qcom.dll O22 - SharedTaskScheduler: FencesShellExt - {1984DD45-52CF-49cd-AB77-18F378FEA264} - C:\Program Files\Stardock\Fences\FencesMenu.dll O23 - Service: Andrea ADI Filters Service (AEADIFilters) - Andrea Electronics Corporation - C:\Windows\system32\AEADISRV.EXE O23 - Service: AgentService - Iron Mountain Incorporated - C:\Program Files\Iron Mountain\Connected BackupPC\AgentService.exe O23 - Service: Agere Modem Call Progress Audio (AgereModemAudio) - LSI Corporation - C:\Program Files\LSI SoftModem\agrsmsvc.exe O23 - Service: BMFMySQL - Unknown owner - C:\Program Files\Quest Software\Benchmark Factory for Databases\Repository\MySQL\bin\mysqld-max-nt.exe O23 - Service: Com4QLBEx - Hewlett-Packard Development Company, L.P. - C:\Program Files\Hewlett-Packard\HP Quick Launch Buttons\Com4QLBEx.exe O23 - Service: hpqwmiex - Hewlett-Packard Development Company, L.P. - C:\Program Files\Hewlett-Packard\Shared\hpqwmiex.exe O23 - Service: OfficeScanNT RealTime Scan (ntrtscan) - Trend Micro Inc. - C:\Program Files\Trend Micro\OfficeScan Client\ntrtscan.exe O23 - Service: SMS Task Sequence Agent (smstsmgr) - Unknown owner - C:\Windows\system32\CCM\TSManager.exe O23 - Service: Check Point VPN-1 Securemote service (SR_Service) - Check Point Software Technologies - C:\Program Files\CheckPoint\SecuRemote\bin\SR_Service.exe O23 - Service: Check Point VPN-1 Securemote watchdog (SR_Watchdog) - Check Point Software Technologies - C:\Program Files\CheckPoint\SecuRemote\bin\SR_Watchdog.exe O23 - Service: Trend Micro Unauthorized Change Prevention Service (TMBMServer) - Trend Micro Inc. - C:\Program Files\Trend Micro\OfficeScan Client\..\BM\TMBMSRV.exe O23 - Service: OfficeScan NT Listener (tmlisten) - Trend Micro Inc. - C:\Program Files\Trend Micro\OfficeScan Client\tmlisten.exe O23 - Service: OfficeScan NT Proxy Service (TmProxy) - Trend Micro Inc. - C:\Program Files\Trend Micro\OfficeScan Client\TmProxy.exe O23 - Service: VNC Server Version 4 (WinVNC4) - RealVNC Ltd. - C:\Program Files\RealVNC\VNC4\WinVNC4.exe -- End of file - 8204 bytes As suggested in this very similar question, I have run full scans (+boot time scans) with RegRun and UnHackMe, but they also did not find anything. I have carefully examined all entries in the Event Viewer, but there's nothing wrong. Now I know that there is a hidden trojan (rootkit) on my machine which seems to disguise itself quite successfully. Note that I don't have the chance to remove the HDD, or reinstall the OS as this is a work machine subjected to certain IT policies on a company domain. Despite all my attempts, the problem still remains. I strictly need a to-the-point method or a pukka rootkit remover to remove whatever it is. I don't want to monkey with the system settings, i.e. disabling auto runs one by one, messing the registry, etc. EDIT 2: I have found an article which is closely related to my trouble: Malware can turn off UAC in Windows 7; “By design” says Microsoft. Special thanks(!) to Microsoft. 
In the article, VBScript code is given to disable UAC automatically:

        '// 1337H4x Written by _____________
        '// (12 year old)
        Set WshShell = WScript.CreateObject("WScript.Shell")
        '// Toggle Start menu
        WshShell.SendKeys("^{ESC}")
        WScript.Sleep(500)
        '// Search for UAC applet
        WshShell.SendKeys("change uac")
        WScript.Sleep(2000)
        '// Open the applet (assuming second result)
        WshShell.SendKeys("{DOWN}")
        WshShell.SendKeys("{DOWN}")
        WshShell.SendKeys("{ENTER}")
        WScript.Sleep(2000)
        '// Set UAC level to lowest (assuming out-of-box Default setting)
        WshShell.SendKeys("{TAB}")
        WshShell.SendKeys("{DOWN}")
        WshShell.SendKeys("{DOWN}")
        WshShell.SendKeys("{DOWN}")
        '// Save our changes
        WshShell.SendKeys("{TAB}")
        WshShell.SendKeys("{ENTER}")
        '// TODO: Add code to handle installation of rebound
        '// process to continue exploitation, i.e. place something
        '// evil in Startup folder
        '// Reboot the system
        '// WshShell.Run "shutdown /r /f"

    Unfortunately, that doesn't tell me how I can get rid of the malicious code running on my system. EDIT 3: Last night I left the laptop on because of a running SQL task. When I came in this morning, I saw that UAC had been turned off. So I suspect the problem is not tied to startup; it happens once a day regardless of whether the machine is rebooted.
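    One way to narrow this down, sketched below on the assumption that Python 3 is installed on the machine, is to watch the registry value that backs the Action Center warning (EnableLUA under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System) and log the exact time it flips, so the change can be matched against scheduled tasks or process start times. The script name, log file name and one-minute polling interval are arbitrary choices, not anything from the original question; reading the value normally does not require administrator rights.

        # uac_watch.py - poll the UAC registry switch and log the exact time it flips,
        # which can help correlate the change with a scheduled task or process.
        import time
        import winreg
        from datetime import datetime

        KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

        def read_enable_lua():
            # EnableLUA = 1 means UAC is on; 0 means it has been switched off.
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
                value, _value_type = winreg.QueryValueEx(key, "EnableLUA")
                return value

        def main():
            last = read_enable_lua()
            print(f"{datetime.now()} EnableLUA={last}")
            while True:
                time.sleep(60)  # poll once a minute; adjust as needed
                current = read_enable_lua()
                if current != last:
                    # Append to a log file so the timestamp survives a reboot.
                    with open("uac_watch.log", "a") as log:
                        log.write(f"{datetime.now()} EnableLUA changed {last} -> {current}\n")
                    last = current

        if __name__ == "__main__":
            main()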

    Read the article

  • DNS server and fallback outside home

    - by Jens
    I have my own DNS server at home to resolve local names, and that works fine. My laptop, however, leaves the house now and then and connects to other networks where my DNS server is not reachable... So I figured I would just add Google as the secondary DNS... But when I do that, I suddenly can't access my local stuff any more: at home the local names won't resolve, as if the laptop gets a quicker answer from Google's DNS, which of course knows nothing about the addresses I use locally. If I remove the secondary DNS and keep only my own, everything works fine again... So do I somehow need to separate which DNS servers to use on which networks? I already use separate DNS settings when I connect through my 3G modem, but hotspots (at least on the train) seem to use the same settings regardless, and can the settings differ for wired connections too?... Is there another solution? OS: Windows 7 Ultimate, x64. EDIT: Currently trying this "hack/fix" out for the time being: http://blog.johnruiz.com/2011/12/windows-does-not-always-honor-dns-order.html
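    A possible workaround, shown as a rough Python sketch rather than a definitive fix: keep only the home DNS server configured while the home network is reachable, and fall back to the DHCP-provided servers everywhere else. The adapter name "Wireless Network Connection" and the address 192.168.1.10 are placeholders for this particular setup, the script simply shells out to netsh, and it has to run from an elevated context (for example via Task Scheduler on a network-change event).

        # dns_switch.py - use the home DNS server when it answers, otherwise fall back
        # to whatever DNS servers DHCP hands out on the current network.
        import subprocess

        ADAPTER = "Wireless Network Connection"   # placeholder: name shown by ipconfig
        HOME_DNS = "192.168.1.10"                 # placeholder: home DNS server address

        def home_dns_reachable():
            # A single ping with a 1 second timeout is enough to detect the home LAN.
            result = subprocess.run(["ping", "-n", "1", "-w", "1000", HOME_DNS],
                                    capture_output=True)
            return result.returncode == 0

        def run_netsh(arguments):
            # Passing one command string keeps netsh's own name="..." quoting intact.
            subprocess.run(f'netsh interface ip set dns name="{ADAPTER}" {arguments}',
                           check=True)

        if __name__ == "__main__":
            if home_dns_reachable():
                run_netsh(f"static {HOME_DNS}")   # at home: use only the local DNS server
            else:
                run_netsh("dhcp")                 # elsewhere: take DNS from DHCP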

    Read the article

  • How to schedule automatic (daily) snapshots of AWS EC2 Windows Instance?

    - by Stanley
    I have some Windows servers hosted on Amazon EC2. Some run Windows Server 2003 and others run Windows Server 2008. These are EBS-backed instances. Most of the instances also have some additional EBS volumes attached. We want to schedule a daily snapshot of the Windows machines (and also the attached EBS volumes) to S3 so that we have daily backups available. One would think that this is a very common requirement and would be made available via the AWS Management Console, but alas, it is not. What approaches are available? How do I schedule daily snapshots on our Windows servers? There are several scripting examples available online for Linux, but not so much for Windows. I have had a look at http://sehmer.blogspot.com/2011/04/amazon-ec2-daily-snapshot-script-for.html as well as https://github.com/ronmichael/aws-snapshot-scheduler. Has anyone used one of these approaches, and does it work? I have also considered a service like Skeddly, which seems inexpensive at first glance, but when you look at using it for several servers the price soon escalates to the point where building your own solution seems the better option, since you can then apply it to new servers in the future; with Skeddly we'd pay for each server. How do we schedule daily snapshots of our Windows instances?
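    For anyone scripting this themselves today, here is a minimal sketch using the boto3 Python SDK (which post-dates the question; at the time the EC2 API tools or the .NET SDK would have filled this role). It snapshots every EBS volume attached to one instance and could be scheduled daily with Windows Task Scheduler or cron; the instance ID is a placeholder and credentials are assumed to come from an IAM role or the usual environment variables.

        # daily_snapshots.py - create a snapshot of every EBS volume attached to one instance.
        import datetime
        import boto3

        INSTANCE_ID = "i-0123456789abcdef0"   # placeholder: the instance to back up

        ec2 = boto3.client("ec2")

        def snapshot_attached_volumes(instance_id):
            # Find all volumes currently attached to the instance.
            volumes = ec2.describe_volumes(
                Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
            )["Volumes"]

            stamp = datetime.date.today().isoformat()
            for volume in volumes:
                description = f"daily-{instance_id}-{volume['VolumeId']}-{stamp}"
                snapshot = ec2.create_snapshot(VolumeId=volume["VolumeId"],
                                               Description=description)
                print("Started", snapshot["SnapshotId"], "for", volume["VolumeId"])

        if __name__ == "__main__":
            snapshot_attached_volumes(INSTANCE_ID)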

    Read the article

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here: http://article.gmane.org/gmane.comp.search.xapian.general/8855 and http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/ I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings are building and passing their versions of smoketest correctly. However, the PHP part of the build is failing with this error:
        /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory
        FAIL: smoketest.php
    There's an xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/ but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with:
        dpkg-source: error: aborting due to unexpected upstream changes
    Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas? Edit (after James' answer): It builds fine if I follow the instructions exactly. I built it on a test VM initially, but that didn't build the PHP package as PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error:
        Setting up php5-xapian (1.2.8-1) ...
        Processing triggers for libapache2-mod-php5 ...
        dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied
        dpkg: error processing libapache2-mod-php5 (--install): subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing: libapache2-mod-php5
    It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.

    Read the article

  • When connecting to PPTP Centos via Windows 7 VPN, I get error 2147943625

    - by Charlie Dyason
    The "The remote computer refused the network connection." phrase has been my arch enemy for the past week now. I recently "bought" a VPS server and gave up trying to configure it with OpenVPN, as all the issues were making me lose my mind, so I tried the easier way with PPTP, but I figure both are leading to a dead end... I followed this post (many others too, but this is the unlucky one): http://blog.secaserver.com/2011/10/install-vpn-pptp-server-centos-6/ and it all goes well with the setup; however, I run into this error when connecting to the VPN in Windows 7 (here is a pic of the error: Image). So I do not know what I have done wrong... When connecting, I checked with netstat -apn | grep -w 1723. Before connecting:
        netstat -apn |grep -w 1723
        tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd
    After the error came I tried again:
        netstat -apn |grep -w 1723
        tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd
        tcp 0 0 41.185.26.238:1723 41.13.212.47:49607 TIME_WAIT -
    iptables:
        # Generated by iptables-save v1.4.7 on Fri Nov 1 18:14:53 2013
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [63:8868]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i eth0 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A INPUT -i eth0 -p gre -j ACCEPT
        -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        -A FORWARD -i eth0 -o ppp+ -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Fri Nov 1 18:14:53 2013
        # Generated by iptables-save v1.4.7 on Fri Nov 1 18:14:53 2013
        *nat
        :PREROUTING ACCEPT [96:12732]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [31:2179]
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Fri Nov 1 18:14:53 2013
    options.pptpd (the only change was require-mppe):
        # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
        # {{{
        refuse-pap
        refuse-chap
        refuse-mschap
        # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
        # Challenge Handshake Authentication Protocol, Version 2] authentication.
        require-mschap-v2
        require-mppe
        # Require MPPE 128-bit encryption
        # (note that MPPE requires the use of MSCHAP-V2 during authentication)
        require-mppe-128
        # }}}
    I checked iptables and everything looks normal, all the INPUT rules come before the rejects, and I also checked the username and password in the chap-secrets file. I am really puzzled...
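    Before digging further into GRE or MPPE settings, it may be worth confirming from the Windows client that the PPTP control port is reachable at all, since this particular error often just means nothing answered on TCP 1723. A small Python sketch for that check follows; the server name is a placeholder.

        # pptp_port_check.py - verify that the PPTP control channel (TCP 1723) is reachable
        # from the client before digging into GRE or MPPE problems.
        import socket

        SERVER = "vpn.example.com"   # placeholder: the VPS address
        PORT = 1723

        try:
            with socket.create_connection((SERVER, PORT), timeout=5):
                print(f"TCP {PORT} on {SERVER} accepted the connection; the control port is open.")
        except OSError as exc:
            # A refusal or timeout here points at iptables, the hosting provider's firewall,
            # or pptpd not listening on the public interface.
            print(f"Could not reach TCP {PORT} on {SERVER}: {exc}")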

    Read the article

  • Triple monitor setting in Linux with USB-HDMI adapter

    - by Oscar Carballal
    I'm trying to set up a triple monitor desktop at my office using Fedora 17, but it seems impossible, let me explain the setting: Laptop ASUS K53SD with 2 graphic cards, Intel and nVidia (Screen controled by Intel card) 24" Full HD monitor connected to the HDMI output (controlled by Intel card) 23" Full HD monitor connected to an USB-HDMI adapter (via framebuffer in /dev/fb2, apparently) VGA output (not used) controlled by nVidia card First of all, the USB-HDMI adapter works perfectly, it gives me a green screen (which means the communication is OK) and I can make it work if I set up a single monitor setting via framebuffer in Xorg. Here I leave the page where I got the instructions: http://plugable.com/2011/12/23/usb-graphics-and-linux Now I'm trying to set up the the two main monitors (laptop and 24") with the intel driver and the 23" with the framebuffer, but the most succesful configuration I get is the two main monitors working and the third disconnected. Do you have any idea what can I do to make this work? Here I leave my xRandr output and my Xorg conf: -> xrandr Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192 LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm 1366x768 60.0*+ 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA2 disconnected (normal left inverted right x axis y axis) HDMI1 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 531mm x 299mm 1920x1080 60.0*+ 50.0 25.0 30.0 1680x1050 59.9 1680x945 60.0 1400x1050 74.9 59.9 1600x900 60.0 1280x1024 75.0 60.0 1440x900 75.0 59.9 1280x960 60.0 1366x768 60.0 1360x768 60.0 1280x800 74.9 59.9 1152x864 75.0 1280x768 74.9 60.0 1280x720 50.0 60.0 1440x576 25.0 1024x768 75.1 70.1 60.0 1440x480 30.0 1024x576 60.0 832x624 74.6 800x600 72.2 75.0 60.3 56.2 720x576 50.0 848x480 60.0 720x480 59.9 640x480 72.8 75.0 66.7 60.0 59.9 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis) 1920x1080_60.00 60.0 The Xorg file: # Xorg configuration file for using a tri-head display Section "ServerLayout" Identifier "Layout0" Screen 0 "HDMI" 0 0 Screen 1 "USB" RightOf "HDMI" Option "Xinerama" "on" EndSection ########### MONITORS ################ Section "Monitor" Identifier "USB1" VendorName "Unknown" ModelName "Acer 24as" Option "DPMS" EndSection Section "Monitor" Identifier "HDMI1" VendorName "Unknown" ModelName "Acer 23SH" Option "DPMS" EndSection ########### DEVICES ################## Section "Device" Identifier "Device 0" Driver "intel" BoardName "GeForce" BusID "PCI:0:02:0" Screen 0 EndSection Section "Device" Identifier "USB Device 0" driver "fbdev" Option "fbdev" "/dev/fb2" Option "ShadowFB" "off" EndSection ############## SCREENS ###################### Section "Screen" Identifier "HDMI" Device "Device 0" Monitor "HDMI1" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "USB" Device "USB Device 0" Monitor "USB1" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection

    Read the article

  • BIND returns serverfail when querying for its authoriative domain

    - by estol
    Hi there Serverfault folks! First of all: sorry about the title, I had some problem coming up with the proper title. I have a little home server set up, for internet sharing, samba, basic http, dlna mediaserver and what not, and I happend to have a domain at hand, so I thought why not direct it to this computer? I have a BIND 9.8.0 installed, and - afaik - configured it properly. For a few days, the public view did not worked, and I really did not cared, since the local view worked. But now suddenly, even the local view fails. If I try to query the nameserver for anything in my domain, it returns the following error: $ nslookup andromeda.dafaces.com ;; Got SERVFAIL reply from ::1, trying next server ;; Got SERVFAIL reply from ::1, trying next server Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find andromeda.dafaces.com.dafaces.com: SERVFAIL Also, the public view points to the old ip address of the domain, probably because of the same error. Some information about the system: $ uname -a Linux tressis 2.6.37-ARCH #1 SMP PREEMPT Tue Mar 15 09:21:17 CET 2011 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 5000+ AuthenticAMD GNU/Linux $ named -v BIND 9.8.0 And the named.conf file: # cat /etc/named.conf // // /etc/named.conf // include "/etc/rndc.key"; #controls { # inet 127.0.0.1 allow {localhost; } keys { "dnskulcs"; }; #}; options { directory "/var/named"; pid-file "/var/run/named/named.pid"; auth-nxdomain yes; datasize default; // Uncomment these to enable IPv6 connections support // IPv4 will still work: listen-on-v6 { any; }; listen-on { any; }; // Add this for no IPv4: // listen-on { none; }; // Default security settings. // allow-recursion { 127.0.0.1; ::1; 192.168.1.0/24; }; // allow-recursion { any; }; allow-query { any; }; allow-transfer { 127.0.0.1; ::1; 92.243.14.172; 87.98.164.164; 88.191.64.64; }; allow-update { key "dnskulcs"; }; version none; hostname none; server-id none; zone-statistics yes; forwarders { 213.46.246.53; 213.26.246.54; 8.8.8.8; 8.8.4.4; 192.188.242.65; 193.227.196.3; 2001:470:20::2; }; }; view "local" { match-clients { 192.168.1.0/24; 127.0.0.1; ::1; fec0:0:0:ffff::/64; }; recursion yes; zone "localhost" IN { type master; file "localhost.zone"; allow-transfer { any; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "127.0.0.zone"; allow-transfer { any; }; }; zone "." IN { type hint; file "root.hint"; }; zone "dafaces.com" IN { type master; file "internal/dafaces.com.fw"; allow-update { key "dnskulcs"; }; }; zone "1.168.192.in-addr.arpa" IN { type master; file "internal/dafaces.com.rev"; allow-update { key "dnskulcs"; }; }; }; view "public" { match-clients { any;}; recursion no; zone "dafaces.com" IN { type master; file "external/dafaces.com.fw"; allow-transfer { 87.98.164.164; 195.234.42.1; 88.191.64.64; }; }; }; //zone "example.org" IN { // type slave; // file "example.zone"; // masters { // 192.168.1.100; // }; // allow-query { any; }; // allow-transfer { any; }; //}; logging { channel xfer-log { file "/var/log/named.log"; print-category yes; print-severity yes; print-time yes; severity info; }; category xfer-in { xfer-log; }; category xfer-out { xfer-log; }; category notify { xfer-log; }; }; All help would be highly appreciated! EDIT: Zone files: # cat /var/named/internal/dafaces.com.fw $ORIGIN . $TTL 3600 ; 1 hour dafaces.com IN SOA tressis.dafaces.com. postmaster.dafaces.com. ( 2011032201 ; serial 28800 ; refresh (8 hours) 7200 ; retry (2 hours) 2419200 ; expire (4 weeks) 3600 ; minimum (1 hour) ) NS tressis.dafaces.com. 
A 192.168.1.1 MX 10 mail.dafaces.com. $ORIGIN _tcp.dafaces.com. _http SRV 0 5 80 www.dafaces.com. _ssh SRV 0 5 22 tressis.dafaces.com. $ORIGIN dafaces.com. acrisius A 192.168.1.230 andromeda A 192.168.1.7 andromeda-win7 CNAME andromeda aspasia A 192.168.1.233 athena A 192.168.1.232 callisto A 192.168.1.102 db A 192.168.1.1 management A 192.168.1.1 ; web management for the router functions haley A 192.168.1.5 hoth A 192.168.1.101 mail A 192.168.1.1 satelite A 192.168.1.20 sony-player A 192.168.1.103 TXT "310f16de2d2712dfc4ae6e5c54f60f828e" torrent A 192.168.1.1 tracker A 192.168.1.1 tressis A 192.168.1.1 www A 192.168.1.1 zeus A 192.168.1.231 and # cat /var/named/external/dafaces.com.fw $ORIGIN . $TTL 3600 dafaces.com IN SOA ns.dafaces.com. postmaster.dafaces.com. ( 2011032405; serial 28800; refresh 7200; retry 2419200; expire 3600; minimum ) NS ns.dafaces.com. NS ns0.xname.org. NS ns1.xname.org. NS ns2.xname.org. A 89.135.129.37 MX 10 mail.dafaces.com. $ORIGIN dafaces.com. ;Szolgaltatasok _ssh._tcp SRV 0 5 22 tressis _http._tcp SRV 0 5 80 www ns A 89.135.129.37 hoth A 89.135.129.37 www A 89.135.129.37 mail A 89.135.129.37 db A 89.135.129.37 torrent A 89.135.129.37 tracker A 89.135.129.37 Edit: Ohh, hell I almost forgot. Since the node is connected to the internet via a residential connection, there is a possibility, that the public ipv4 address will change(but thank god, it is a very rare case), so I daily update the external IP address in the zone file with a shellscript: # cat /etc/cron.daily/dnsupdate #!/bin/sh FILE="/var/named/external/dafaces.com.fw" SERIAL=$(date +%Y%m%d05) PUBLIC_IP=$(ifconfig internet |sed -n "/inet addr:.*255.255.255.255/{s/.*inet addr://; s/ .*//; p}") cat $FILE | sed --posix 's/^.* serial$/\t\t\t\t\t'$SERIAL'; serial/' | sed --posix 's/[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/'$PUBLIC_IP'/' > /tmp/ujzona mv /tmp/ujzona $FILE /etc/rc.d/named reload
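    As a side note, a short script can make the failure easier to reproduce and log than repeated nslookup runs. The sketch below assumes the dnspython package (version 2.x for the resolve() call) and queries the local BIND instance directly for the zone's SOA record; the zone name is taken from the question and 127.0.0.1 is assumed to be the server under test.

        # soa_check.py - ask a specific BIND instance for the zone's SOA record and show
        # whether it answers or fails (requires the dnspython package, version 2.x).
        import dns.resolver

        ZONE = "dafaces.com"
        NAMESERVER = "127.0.0.1"   # assumption: query the local BIND instance directly

        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [NAMESERVER]

        try:
            answer = resolver.resolve(ZONE, "SOA")
            for record in answer:
                print("SOA:", record)
        except dns.resolver.NoNameservers as exc:
            # dnspython raises NoNameservers when every queried server fails, e.g. SERVFAIL.
            print("All queried servers failed (likely SERVFAIL):", exc)
        except Exception as exc:
            print("Query failed:", exc)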

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / WordPress and Amazon S3. Now, I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:
        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }
    I do not understand why this is done. Most of the examples also run NginX as a webserver. Now the question is, why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files so that php-fpm / mysql don't get hit that much. Am I correct or am I missing something here? UPDATE I want to add some info to the question based on the answer given. If you have a dynamic website, where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, this can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks on different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ NginX is actually faster at serving your static content, so it makes more sense to just let it pass. NginX works great with static files. -- Apart from that, most of the time static content is not even on the webserver itself; most of the time this content is stored on a CDN somewhere, maybe AWS S3, something like that. I think the Varnish cache is the last place where you want to have your static content stored.
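    A quick way to see what Varnish is actually doing with a given static asset, whichever side of this argument you land on, is to request it twice and look at the caching headers. The sketch below assumes the Python requests library and a placeholder URL; an Age header above zero and two transaction IDs in X-Varnish generally point to a cache hit.

        # cache_probe.py - fetch a static asset twice and inspect the caching headers that
        # Varnish sets, to see whether the second request was answered from cache.
        import requests

        URL = "http://example.com/static/logo.png"   # placeholder: an asset behind Varnish

        for attempt in (1, 2):
            response = requests.get(URL)
            # Age > 0 and two IDs in X-Varnish usually indicate a cache hit.
            print(f"request {attempt}:",
                  "Age =", response.headers.get("Age"),
                  "| X-Varnish =", response.headers.get("X-Varnish"))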

    Read the article

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:
    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis, e.g. the developers can access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users, each with distinct credentials and rights.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g. Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.
    So far, we've reviewed the following, albeit not completely thoroughly:
    - Dropbox: has all except 1) Access control, which is provided via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user, and 2) Security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except 1) Clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except 1) Transparent file system access, which is only available for 1 user.
    - Amazon Cloud: doesn't offer 1) Transparent file system access.
    - Rackspace Cloud Drive: has all except 1) Access control and 2) Versioning.
    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article

  • SQL - an error occurred during the pre-login handshake

    - by Rivka
    Until yesterday evening, I was able to connect to my server from my local machine. Now I get the following error: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.) (.Net SqlClient Data Provider) Note, I can log on to the actual server with no problem. Yesterday, I installed IIS on my machine and set up a site using my IP address - I don't know if this has anything to do with it. I did come across this article and followed the steps, but it didn't seem to help: http://www.escapekeys.com/blog/index.cfm/2011/1/26/Microsoft-SQL-Server-Error-64-A-connection-was-successfully-established-with-the-server I also went through the following article, changed TCP/IP settings, restarted, but nothing: http://blog.sqlauthority.com/2009/05/21/sql-server-fix-error-provider-named-pipes-provider-error-40-could-not-open-a-connection-to-sql-server-microsoft-sql-server-error/ I started trying suggestions from the comments too, but stopped when I realized I might be messing things up more. So, why is this happening, and how can I fix it?
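    One way to tell a plain connectivity problem apart from a genuine handshake problem is to test the raw TCP port first and then attempt a short-timeout login, as in the rough Python sketch below. It assumes pyodbc and the "ODBC Driver 17 for SQL Server" driver are installed, and the server name, port and credentials are placeholders.

        # sql_conn_check.py - distinguish "port unreachable" from "pre-login handshake" issues.
        import socket
        import pyodbc

        SERVER = "myserver.example.com"   # placeholder: the SQL Server host
        PORT = 1433                       # default SQL Server TCP port

        # Step 1: raw TCP reachability.
        try:
            with socket.create_connection((SERVER, PORT), timeout=5):
                print(f"TCP {PORT} is reachable.")
        except OSError as exc:
            print(f"TCP {PORT} is NOT reachable: {exc}")
            raise SystemExit(1)

        # Step 2: an actual login attempt with a short timeout.
        conn_str = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={SERVER},{PORT};DATABASE=master;"
            "UID=testuser;PWD=testpassword;"      # placeholder credentials
        )
        try:
            conn = pyodbc.connect(conn_str, timeout=5)
            cursor = conn.cursor()
            cursor.execute("SELECT @@VERSION")
            print("Login succeeded:", cursor.fetchone()[0])
            conn.close()
        except pyodbc.Error as exc:
            print("Login failed during or after the handshake:", exc)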

    Read the article

  • Is Ubuntu a bad distro for a standalone mysql database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/ which says: "On the other end there are Debian and Ubuntu. Both use tool called dpkg for package management. There isn’t a month that I log in to a system based on either distribution where there are no issues with packages consistency. Unfinished installations, unresolved conflicts are so common that it’s just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left system in such shape, you prepared for downtime, stopped MySQL and… error – text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when downtime clock ticks – annoying at best." We prefer Ubuntu Server because of familiarity and because Ubuntu is also our development environment. Questions: Is Ubuntu commonly used in production for a MySQL database server? Is it ever worth the trouble to have one distro (e.g. Ubuntu) on the web server and another (say, Red Hat) on the database server? Or is a homogeneous server pool a better choice?

    Read the article

  • Server on blacklist

    - by Cudos
    I have a Debian Wheezy server hosting several websites on separate domains. Some of these websites use WordPress, which in turn uses PHP's mail function to send mail. I installed "sendmail" so that the server can send mail from PHP. We use Google Apps for our customers, so there is no need to set up a regular mail server. Now the server is blacklisted at www.spamhaus.org and gets this message: This IP address is HELO'ing as "localhost.localdomain" which violates the relevant standards (specifically: RFC5321). I have tried to follow the instructions on these websites with no luck: http://www.cardiothink.com/downloads/README.spamhaus-and-blocked-email.html http://centosbeginer.wordpress.com/2011/07/12/how-to-remove-ip-in-cbl-spamhaus/ Can you please help me figure out how to configure the server? File: /etc/hosts
        # nameserver config
        # IPv4
        127.0.0.1 somedomain.dk
        xxx.xxx.xxx.xxx server.somedomain.dk bigby
        #
        # IPv6
        ::1 ip6-localhost ip6-loopback
        xxxx::0 ip6-localnet
        xxxx::0 ip6-mcastprefix
        xxxx::1 ip6-allnodes
        xxxx::2 ip6-allrouters
        xxxx::3 ip6-allhosts
        xxxx:xxx:xxx:xxxx::2 Debian-76-wheezy-64-minimal
    File: /etc/hostname
        bigby
    somedomain.dk is a made-up domain; in reality it is another domain I have on this server along with other domains. bigby is also a made-up name; it is something else in reality.
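    Since sendmail typically takes its HELO name from the system's fully qualified hostname, a quick check of what the operating system itself reports can confirm whether /etc/hosts and /etc/hostname are the culprit. The tiny Python sketch below just prints those values; if the FQDN comes back as localhost.localdomain, the hosts file ordering is the place to look.

        # helo_check.py - show the hostname values the OS reports; if the FQDN comes back as
        # "localhost.localdomain", sendmail will most likely HELO with that name too.
        import socket

        print("hostname :", socket.gethostname())
        print("FQDN     :", socket.getfqdn())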

    Read the article

  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups on a snapshot/incremental basis, capturing only changed data. I was surprised to see a regular period of time where data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval; it is approximately every 26-32hrs. The disks were performing read operations of ~5MB/s and write operations of ~4.5MB/s for a period of 3-4hrs, and the total written data was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running and found the following:
    - the process is store.exe
    - it is working on the mailbox database files
    - while running, it is generating .log files (in the mailbox database folder) consistent with database changes
    - the mailbox database is ~60GB in size, which fits with the total data changes on each iteration
    I have currently switched to a fixed maintenance window as a test. It's not clear whether this is the cause; the symptoms fit but are not conclusive. Does anyone have any suggestions for additional troubleshooting?

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
    In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released a couple days ago from Red Hat. This article claims to help you in the evaluation of key points that you should consider when choosing for an Java EE application server. In the following sections, I will present to you some important aspects that most customers ask us when they are seriously evaluating for an middleware infrastructure, specially if you are considering JBoss for some reason. I would suggest that you keep the following question in mind while you are reading the points: "Why should I choose JBoss instead of WebLogic?" 1) Multi Datacenter Deployment and Clustering - D/R ("Disaster & Recovery") architecture support is embedded on the WebLogic Server 12c product. JBoss EAP 6 on the other hand has no direct D/R support included, Red Hat relies on third-part tools with higher prices. When you consider a middleware solution to host your business critical application, you should worry with every architectural aspect that are related with the solution. Fail-over support is one little aspect of a truly reliable solution. If you do not worry about D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6, you have this extra cost that will increase considerably the total cost of ownership of the solution. As we commonly hear from analysts, open-source are not so cheaper when you start seeing the big picture. - WebLogic Server 12c supports advanced LAN clustering, detection of death servers and have a common alert framework. JBoss EAP 6 on the other hand has limited LAN clustering support with no server death detection. They do not generate any alerts when servers goes down (only if you buy JBoss ON which is a separated technology, but until now does not support JBoss EAP 6) and manual intervention are required when servers goes down. In most cases, admin people must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies. - WebLogic Server 12c supports the concept of Node Manager, which is a separated process that runs on the physical | virtual servers that allows extend the administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6 on the other hand has no equivalent technology. Whole server instances must be managed individually. - WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 on the other hand has no similar technology. There is no way to coordinate JBoss and infiniband instances provided by JBoss using high throughput and low latency protocols like InfiniBand. The Node Manager feature also allows another very important feature that JBoss EAP lacks: secure the administration. When using WebLogic Node Manager, all the administration tasks are sent to the managed servers in a secure tunel protected by a certificate, which means that the transport layer that separates the WebLogic administration console from the managed servers are secured by SSL. - WebLogic Server 12c are now integrated with OTD ("Oracle Traffic Director") which is a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server"). 
Using OTD, WebLogic instances are load-balanced by a high powerful software that knows how to handle SDP ("Socket Direct Protocol") over InfiniBand, which boost performance when used with engineered systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support to Apache Web Server with custom modules created to deal with JBoss clusters, but only across standard TCP/IP networks.  2) Application and Runtime Diagnostics - WebLogic Server 12c have diagnostics capabilities embedded on the server called WLDF ("WebLogic Diagnostic Framework") so there is no need to rely on third-part tools. JBoss EAP 6 on the other hand has no diagnostics capabilities. Their only diagnostics tool is the log generated by the application server. Admin people are encouraged to analyse thousands of log lines to find out what is going on. - WebLogic Server 12c complement WLDF with JRockit MC ("Mission Control"), which provides to administrators and developers a complete insight about the JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also have an classloader analysis tool embedded, and even a log analyzer tool that enables administrators and developers to view logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-part tools to do something similar. Again, only log searching are offered to find out whats going on. - WebLogic Server 12c offers end-to-end traceability and monitoring available through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flows through web servers, ESBs, application servers and database servers, all of this with high deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even using JBoss ON ("Operations Network"), which is a separated technology, does not support those features. Red Hat relies on third-part tools to provide direct Oracle database traceability across JVMs. One of those tools are Oracle EM for non-Oracle middleware that manage JBoss, Tomcat, Websphere and IIS transparently. - WebLogic Server 12c with their JRockit support offers a tool called JRockit Flight Recorder, which can give developers a complete visibility of a certain period of application production monitoring with zero extra overhead. This automatic recording allows you to deep analyse threads latency, memory leaks, thread contention, resource utilization, stack overflow damages and GC ("Garbage Collection") cycles, to observe in real time stop-the-world phenomenons, generational, reference count and parallel collects and mutator threads analysis. JBoss EAP 6 don't even dream to support something similar, even because they don't have their own JVM. 3) Application Server Administration - WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can managed up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides a XML centric administration. JBoss, after ten years, started the development of a rudimentary centralized administration that still leave a lot of administration tasks aside, so admin people and developers must touch scripts and XML configuration files for most advanced and even simple administration tasks. This lead applications to error prone and risky deployments. 
Even using JBoss ON, JBoss EAP are not able to offer decent administration features for admin people which must be high skilled in JBoss internal architecture and its managing capabilities. - Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with a complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done aside JBoss ON which does no integrate well with others softwares than JBoss. Until now, JBoss ON does not supports JBoss EAP 6, so even their minimal support for JBoss are not available for JBoss EAP 6 leaving customers uncovered and subject to high skilled JBoss admin people. - WebLogic Server 12c has the same administration model whatever is the topology selected by the customer. JBoss EAP 6 on the other hand differentiates between two operational models: standalone-mode and domain-mode, that are not consistent with each other. Depending on the mode used, the administration skill is different. - WebLogic Server 12c has no point-of-failures processes, and it does not need to define any specialized server. Domain model in WebLogic is available for years (at least ten years or more) and is production proven. JBoss EAP 6 on the other hand needs special processes to garantee JBoss integrity, the PC ("Process-Controller") and the HC ("Host-Controller"). Different from WebLogic, the domain model in JBoss is quite new (one year at tops) of maturity, and need to mature considerably until start doing things like WebLogic domain model does. - WebLogic Server 12c supports parallel deployment model which enables some artifacts being deployed at the same time. JBoss EAP 6 on the other hand does not have any similar feature. Every deployment are done atomically in the containers. This means that if you have a huge EAR (an EAR of 120 MB of size for instance) and deploy onto JBoss EAP 6, this EAR will take some minutes in order to starting accept thread requests. The same EAR deployed onto WebLogic Server 12c will reduce the deployment time at least in 2X compared to JBoss. 4) Support and Upgrades - WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management available, each JBoss EAP instance should be patched manually. To achieve such feature, you need to buy a separated technology called JBoss ON ("Operations Network") that manage this type of stuff. But until now, JBoss ON does not support JBoss EAP 6 so, in practice, JBoss EAP 6 does not have this feature. - WebLogic Server 12c supports previuous WebLogic domains without any reconfiguration since its kernel is robust and mature since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines were implemented in JBoss stack in the last five years reveling their incapacity to create a well architected and proven middleware technology. - WebLogic Server 12c has patch prescription based on customer configuration. JBoss EAP 6 on the other hand has no such capability. People need to create ticket supports and have their installations revised by Red Hat support guys to gain some patch prescription from them. - Oracle WebLogic Server independent of the version has 8 years of support of new patches and has lifetime release of existing patches beyond that. 
JBoss EAP 6 on the other hand provides patches for a specific application server version up to 5 years after the release date. JBoss EAP 4 and previous versions had only 4 years. A good question that Red Hat will argue to answer is: "what happens when you find issues after year 5"?  5) RAC ("Real Application Clusters") Support - WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast-Application-Notification, Transaction Affinity, Fast-Connection-Failover, etc). Oracle JDBC thin driver are also available. JBoss EAP 6 on the other hand ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC are not supported. Manual intervention in case of planned or unplanned RAC downtime are necessary. In JBoss EAP 6, situation does not reestablish automatically after downtime. - WebLogic Server 12c has a feature called Active GridLink for Oracle RAC which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and Oracle database enable more value added to critical business applications leveraging their investments in Oracle database technology and Oracle middleware. JBoss EAP 6 on the other hand has no performance gains at all, even when admin people implement some kind of connection-pooling tuning. - WebLogic Server 12c also supports transaction and web session affinity to the Oracle RAC, which provides aditional gains of performance. This is particularly interesting if you are creating a reliable solution that are distributed not only in an LAN cluster, but into a different data center. JBoss EAP 6 on the other hand has no such support. 6) Standards and Technology Support - WebLogic Server 12c is fully Java EE 6 compatible and production ready since december of 2011. JBoss EAP 6 on the other hand became fully compatible with Java EE 6 only in the community version after three months, and production ready only in a few days considering that this article was written in June of 2012. Red Hat says that they are the masters of innovation and technology proliferation, but compared with Oracle and even other proprietary vendors like IBM, they historically speaking are lazy to deliver the most newest technologies and standards adherence. - Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat on the other hand does not have its own JVM and relies on third-part JVMs to complete their application server offer. 95% of Red Hat customers are using Oracle HotSpot as JVM, which means that without Oracle involvement, their support are limited exclusively to the application server layer and we all know that most problems are happens in the JVM layer. - WebLogic Server 12c supports natively JDK 7, which empower developers to explore the maximum of the Java platform productivity when writing code. This feature differentiate WebLogic from others application servers (except GlassFish that are also managed by Oracle) because the usage of JDK 7 introduce such remarkable productivity features like the "try-with-resources" enhancement, catching multiple exceptions with one try block, Strings in the switch statements, JVM improvements in terms of JDBC, I/O, networking, security, concurrency and of course, the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here. 
JBoss EAP 6 on the other hand does not support JDK 7 officially, they comment in their community version that "Java SE 7 can be used with JBoss 7" which does not gives you any guarantees of enterprise support for JDK 7. - Oracle WebLogic Server 12c supports integration with Spring framework allowing Spring applications to use WebLogic special transaction manager, exposing bean interfaces to WebLogic MBeans to take advantage of all WebLogic monitoring and administration advantages. JBoss EAP 6 on the other hand has no special integration with Spring. In fact, Red Hat offers a suspicious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package does not offers any special integration. It is just a facility for Red Hat customers to have support from both JBoss and Spring technology using the same customer support. 7) Lightweight Development - Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic now understands natively GlassFish deployment descriptors and specific configurations in order to offer you a truly and reliable migration path from a community Java EE application server to a enterprise middleware product like WebLogic. JBoss EAP 6 on the other hand has no support to natively reuse an existing (or still in development) application from JBoss AS community server. Users of JBoss suffer of critical issues during deployment time that includes: changing the libraries and dependencies of the application, patching the DTD or XSD deployment descriptors, refactoring of the application layers due classloading issues and anomalies, rebuilding of persistence, business and web layers due issues with "usage of the certified version of an certain dependency" or "frameworks that Red Hat potentially does not recommend" etc. If you have the culture or enterprise IT directive of developing Java EE applications using community middleware to in a certain future, transition to enterprise (supported by a vendor) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution. - WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). JBoss EAP 6 ZIP size is around 130 MB, together with JBoss ON you have more 100 MB resulting in a higher download footprint. This is particularly interesting if you plan to use automated setup of application server instances (for example, to rapidly setup a development or staging environment) using Maven or Hudson. - WebLogic Server 12c has a complete integration with Maven allowing developers to setup WebLogic domains with few commands. Tasks like downloading WebLogic, installation, domain creation, data sources deployment are completely integrated. JBoss EAP 6 on the other hand has a limited offer integration with those tools.  - WebLogic Server 12c has a startup mode called WLX that turns-off EJB, JMS and JCA containers leaving enabled only the web container with Java EE 6 web profile. JBoss EAP 6 on the other hand has no such feature, you need to disable manually the containers that you do not want to use. - WebLogic Server 12c supports fastswap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for the application that is already deployed and you do not want to redeploy the entire application. 
This is the same behavior that most application servers offers to JSP pages, but with WebLogic Server 12c, you have the same feature for Java classes in general. JBoss EAP 6 on the other hand has no such support. Even JBoss EAP 5 does not support this until now. 8) JMS and Messaging - WebLogic Server 12c has a proven and high scalable JMS implementation since its initial release in 1995. JBoss EAP 6 on the other hand has a still immature technology called HornetQ, which was introduced in JBoss EAP 5 replacing everything that was implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments. And when they are asked about why they have changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy so we are creating a new a improved one". This Red Hat practice leads to uncomfortable investments that in a near future (sometimes less than a year) will be affected in someway. - WebLogic Server 12c has troubleshooting and monitoring features included on the WebLogic console and WLDF. JBoss EAP 6 on the other hand has no direct monitoring on the console, activity is reflected only on the logs, no debug logs available in case of JMS issues. - WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6 on the other hand has a JMS storage mechanism relying on Oracle database or MySQL. This means that if an issue in production happens and Red Hat affirms that an performance issue is happening due to database problems, they will not support you on the performance issue. They will orient you to call Oracle instead. - WebLogic Server 12c supports messaging enterprise features like SAF ("Store and Forward"), Distributed Queues/Topics and Foreign JMS providers support that leverage JMS implementations without compromise developer code making things completely transparent. JBoss EAP 6 on the other hand do not even dream to support such features. 9) Caching and Grid - Coherence, which is the leading and most mature data grid technology from Oracle, is available since early 2000 and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can be both managed from WebLogic administrative console. Even Node Manager supports Coherence. JBoss on the other hand discontinued JBoss Cache, which was their caching implementation just like they did with the messaging implementation (JBossMQ) which was a issue for long term customers. JBoss EAP 6 ships InfiniSpan version 1.0 which is immature and lack a proven record of successful cases and reliability. - WebLogic Server 12c has a feature called ActiveCache which uses Coherence to, without any code changes, replicate HTTP sessions from both WebLogic and other application servers like JBoss, Tomcat, Websphere, GlassFish and even Microsoft IIS. JBoss EAP 6 on the other hand does have such support and even when they do in the future, they probably will support only their own application server. - Coherence can be used to manage both L1 and L2 cache levels, providing support to Oracle TopLink and others JPA compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan on the other hand supports only Hibernate. And most important of all: Infinispan does not have any successful case of L1 or L2 caching level support using Hibernate, which lead us to reflect about its viability. 
10) Performance - WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications at this engineered system. This approach can benefit customers from Exalogic optimization's of both kernel and JVM layers to boost performance in terms of 10X for web, OLTP, JMS and grid applications. JBoss EAP 6 on the other hand has no investment on engineered systems: customers do not have the choice to deploy on a Java ultra fast system if their project becomes relevant and performance issues are detected. - WebLogic Server 12c maintains a performance gain across each new release: starting on WebLogic 5.1, the overall performance gain has been close to 4X, which close to a 20% gain release by release. JBoss on the other hand does not provide SPECJAppServer or SPECJEnterprise performance benchmarks. Their so called "performance gains" remains hidden in their customer environments, which lead us to think if it is true or not since we will never get access to those environments. - WebLogic Server 12c has industry performance benchmarks with submissions across platforms and configurations leading SPECJ. Oracle WebLogic leads SPECJAppServer performance in multiple categories, fitting all customer topologies like: dual-node, single-node, multi-node and multi-node with RAC. JBoss... again, does not provide any SPECJAppServer performance benchmarks. - WebLogic Server 12c has a feature called work manager which allows your application to embrace new performance levels based on critical resource utilization of the CPUs usage. Work managers prioritizes work and allocates threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6 on the other hand has no compared feature and probably they never will. Not supporting such feature like work managers, JBoss EAP 6 forces admin people and specially developers to uncover performance gains in a intrusive way, rewriting the code and doing performance refactorings. 11) Professional Services Support - WebLogic Server 12c and any other technology sold by Oracle give customers the possibility of hire OCS ("Oracle Consulting Services") to manage critical scenarios, deployment assistance of new applications, high skilled consultancy of architecture, best practices and people allocation together with customer teams. All OCS services are available without any restrictions, having the customer bought software from Oracle or just starting their implementation before any acquisition. JBoss EAP 6 or Red Hat to be more specifically, only offers professional services if you buy subscriptions from them. If you are developing a new critical application for your business and need the help of Red Hat for a serious issue or architecture decision, they will probably say: "OK... I can help you but after you buy subscriptions from me". Red Hat also does not allows their professional services consultants to manage environments that uses community based software. They will probably force you to first buy a subscription, download their "enterprise" version and them, optionally hire their consultants. - Oracle provides you our university to educate your team into our technologies, including of course specialized trainings of WebLogic application server. At any time and location, you can hire Oracle to train your team so you get trustful knowledge according to your specific needs. 
Certifications for the products are also available if your technical people wish to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team in its technologies. Basically, they sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you ask for more specialized training in JBoss middleware, they will probably point you to some "certified" partner for localized training, since they are apparently discontinuing their education center, at least here in Brazil. They were not able to reproduce their success with RHEL education in their middleware division, since they first need to sell the subscriptions before they give you specialized training. And again, they only offer specialized training based on their enterprise version (EAP in the case of JBoss), which means that the courses will be quite outdated. There are reports of developers who took official training from Red Hat this year (2012) and, in a certain advanced JBoss course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ, since the training was created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: it is the official binaries that will run forever in your data center. Oracle does not encourage the usage of "specific versions" of our software. The binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn or just start creating your application using Red Hat's middleware software, you have to download it from the community website. You are not allowed to download the enterprise version that, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on insecure, unreliable and non-scalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions from them to get access to the "cool" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here and get access to our price list. In the case of WebLogic, check out the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that would make the investment in our technology possible. But you are not required to do this, only if you are interested in buying our technology or perhaps want to discuss some discount scenarios. JBoss EAP 6, on the other hand, does not have its cost publicly available on Red Hat's website or in any other media; at least, it is not easy to get such information. The only link you will probably find on their website is a "Contact a Sales Representative" link. This is not a very good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
In these situations, customers expect to see software prices publicly available, so they have the chance to decide, based on the existing features of the software, whether the cost is fair or not.

Conclusion

Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, and it has a proven record of success around the globe to prove its maturity. Don't lose the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is completely based on the Cloud.

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
Introduction

How to use the Oracle Solaris 11 Automated Install server to automate the installation of Solaris 11 Zones. In this document I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system.

Architecture layout: Figure 1. Architecture layout

Prerequisite: Set up the Automated Install (AI) server using the instructions in "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup is creating two Solaris 11 Zones configuration files.

Step 1: Create the Solaris 11 Zones configuration files

The Solaris Zones configuration files should be in the format produced by the zonecfg export command.

# zonecfg -z zone1 export > /var/tmp/zone1
# cat /var/tmp/zone1
create -b
set brand=solaris
set zonepath=/rpool/zones/zone1
set autoboot=true
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end

Create a backup copy of this file under a different name, for example, zone2:

# cp /var/tmp/zone1 /var/tmp/zone2

Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example:

set zonepath=/rpool/zones/zone2

Step 2: Copy and share the Zones configuration files

Create the NFS directory for the Zones configuration files:

# mkdir /export/zone_config

Share the directory for the Zones configuration files:

# share -o ro /export/zone_config

Copy the Zones configuration files into the NFS shared directory:

# cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

Verify that the NFS share has been created using the following command:

# share
export_zone_config      /export/zone_config     nfs     sec=sys,ro

Step 3: Add the Global Zone as a client of the install service

Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service:

# installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

You can verify the client creation using the following command:

# installadm list -c
Service Name  Client Address     Arch   Image Path
------------  --------------     ----   ----------
s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386).

Step 4: Global Zone manifest setup

First, get a list of the installation services and the manifests associated with them:

# installadm list -m
Service Name   Manifest        Status
------------   --------        ------
default-i386   orig_default   Default
s11x86service  orig_default   Default

Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:

# installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy. 
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2" Note: Do not use the following elements or attributes in a non-global zone AI manifest:     The auto_reboot attribute of the ai_instance element     The http_proxy attribute of the ai_instance element     The disk child element of the target element     The noswap attribute of the logical element     The nodump attribute of the logical element     The configuration element Step 6: Global Zone profile setup We are going to create a global zone configuration profile which includes the host information for example: host name, ip address name services etc… # sysconfig create-profile –o /var/tmp/gz_profile.xml You need to provide the host information for example:     Default router     Root password     DNS information The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu You can validate the profile using the following command # installadm validate -n s11x86service –P /var/tmp/gz_profile.xml Validating static profile gz_profile.xml...  Passed Next, instantiate a profile with the install service. In our case, use the following syntax for doing this # installadm create-profile -n s11x86service  -f /var/tmp/gz_profile.xml -p  gz_profile You can verify profile creation using the following command # installadm list –n s11x86service  -p Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         None We can see that the gz_profie has been created and associated with the s11x86service Install service. Step 7: Setup the Solaris Zones configuration profiles The step should be similar to the Global zone profile creation on step 6 # sysconfig create-profile –o /var/tmp/zone1_profile.xml # sysconfig create-profile –o /var/tmp/zone2_profile.xml You can validate the profiles using the following command # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml Validating static profile zone1_profile.xml...  Passed # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml Validating static profile zone2_profile.xml...  Passed Next, associate the profiles with the install service The following example adds the zone1_profile.xml configuration profile to the s11x86service  install service and specifies that zone1 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1 The following example adds the zone2_profile.xml configuration profile to the s11x86service  install service and specifies that zone2 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2 You can verify the profiles creation using the following command # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    zone1_profile      zonename = zone1    zone2_profile      zonename = zone2    gz_profile         None We can see that we have three profiles in the s11x86service  install service     Global Zone  gz_profile     zone1            zone1_profile     zone2            zone2_profile. 
Step 8: Global Zone setup

Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where:

gzmanifest is the name of the manifest.
gz_profile is the name of the configuration profile.
mac="0:14:4f:2:a:19" is the client (global zone) MAC address.
s11x86service is the install service name.

# installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

You can verify the manifest and profile association using the following command:

# installadm list -n s11x86service -p -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
s11x86service
   gzmanifest                   mac  = 00:14:4F:02:0A:19
   orig_default        Default  None

Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         mac      = 00:14:4F:02:0A:19
   zone2_profile      zonename = zone2
   zone1_profile      zonename = zone1

Step 9: Provision the host with the Non-Global Zones

The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot

Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu

What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.

You can list the status of all the Solaris Zones using the following command:

# zoneadm list -civ

Once the Zones are in the running state, you can log in to a Zone using the following command:

# zlogin zone1

Troubleshooting Automated Installations

If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

NOTE: Zones are not installed if any of the following errors occurs:
    A zone config file is not syntactically correct.
    A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    Required datasets are not configured in the global zone.

For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

Conclusion

This paper demonstrated the benefits of using the Automated Install server to simplify the setup of Non-Global Zones, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • Could not resolve <fx:Script> to a component implementation.

    - by seref
    Hi, i created project with flexmojos maven archtype..i used flexmojos:flexbuilder and compile/run with FlashBuilder 4 everything is okay but when i try to compile project with flexmojos i got following error: [ERROR] Z:....\src\main\flex\Main.mxml:[6,-1] Could not resolve < fx:Script to a component implementation. [INFO] BUILD FAILURE my mxml: <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="100%" height="100%" creationComplete="application1_creationCompleteHandler(event)"> <fx:Script> <![CDATA[ import mx.controls.Alert; import mx.events.FlexEvent; protected function application1_creationCompleteHandler(event:FlexEvent):void { Alert.show("success!!!!") } ]]></fx:Script> </s:Application> pom.xml like: ...... <packaging>swf</packaging> ...... <properties> <flex-sdk.version>4.1.0.16076</flex-sdk.version> <flexmojos.version>3.8</flexmojos.version> </properties> ...... <build> <sourceDirectory>src/main/flex</sourceDirectory> <testSourceDirectory>src/test/flex</testSourceDirectory> <plugins> <plugin> <groupId>org.sonatype.flexmojos</groupId> <artifactId>flexmojos-maven-plugin</artifactId> <version>${flexmojos.version}</version> <extensions>true</extensions> <dependencies> <dependency> <groupId>com.adobe.flex</groupId> <artifactId>compiler</artifactId> <version>${flex-sdk.version}</version> <type>pom</type> </dependency> </dependencies> <configuration> <compiledLocales> <locale>en_US</locale> </compiledLocales> <mergeResourceBundle>true</mergeResourceBundle> <accessible>true</accessible> <optimize>true</optimize> <targetPlayer>10.0.0</targetPlayer> <showWarnings>true</showWarnings> <linkReport>true</linkReport> </configuration> </plugin> </plugins> </build> <dependencies> <!-- Flex framework resource bundles --> <dependency> <groupId>com.adobe.flex.framework</groupId> <artifactId>flex-framework</artifactId> <version>${flex-sdk.version}</version> <type>pom</type> </dependency> <!-- Include unit test dependencies. --> <dependency> <groupId>com.adobe.flexunit</groupId> <artifactId>flexunit</artifactId> <version>4.0-rc-1</version> <type>swc</type> <scope>test</scope> </dependency> </dependencies> ....... maven output compiler config : INFO] Flex compiler configurations: -compiler.external-library-path C:\...\.m2\repository\com\adobe\flex \framework\playerglobal\4.1.0.16076\10.0\playerglobal.swc -compiler.include-libraries= -compiler.library-path C:\...\.m2\repository\com\adobe\flex\framework \datavisualization\4.1.0.16076\datavisualization-4.1.0.16076.swc C:\... \.m2\repository\com\adobe\flex\framework\flash-integration \4.1.0.16076\flash-integration-4.1.0.16076.swc C:\...\.m2\repository \com\adobe\flex\framework\flex\4.1.0.16076\flex-4.1.0.16076.swc C:\... \.m2\repository\com\adobe\flex\framework\framework \4.1.0.16076\framework-4.1.0.16076.swc C:\...\.m2\repository\com\adobe \flex\framework\osmf\4.1.0.16076\osmf-4.1.0.16076.swc C:\... \.m2\repository\com\adobe\flex\framework\rpc \4.1.0.16076\rpc-4.1.0.16076.swc C:\...\.m2\repository\com\adobe\flex \framework\spark\4.1.0.16076\spark-4.1.0.16076.swc C:\... 
\.m2\repository\com\adobe\flex\framework\sparkskins \4.1.0.16076\sparkskins-4.1.0.16076.swc C:\...\.m2\repository\com\adobe \flex\framework\textLayout\4.1.0.16076\textLayout-4.1.0.16076.swc C: \...\.m2\repository\com\adobe\flex\framework\utilities \4.1.0.16076\utilities-4.1.0.16076.swc C:\...\.m2\repository\com\adobe \flex\framework\datavisualization \4.1.0.16076\datavisualization-4.1.0.16076-en_US.rb.swc C:\... \.m2\repository\com\adobe\flex\framework\framework \4.1.0.16076\framework-4.1.0.16076-en_US.rb.swc C:\...\.m2\repository \com\adobe\flex\framework\osmf\4.1.0.16076\osmf-4.1.0.16076- en_US.rb.swc C:\...\.m2\repository\com\adobe\flex\framework\rpc \4.1.0.16076\rpc-4.1.0.16076-en_US.rb.swc C:\...\.m2\repository\com \adobe\flex\framework\spark\4.1.0.16076\spark-4.1.0.16076-en_US.rb.swc C:\...\.m2\repository\com\adobe\flex\framework\textLayout \4.1.0.16076\textLayout-4.1.0.16076-en_US.rb.swc C:\...\.m2\repository \com\adobe\flex\framework\flash-integration\4.1.0.16076\flash- integration-4.1.0.16076-en_US.rb.swc C:\...\.m2\repository\com\adobe \flex\framework\playerglobal\4.1.0.16076\playerglobal-4.1.0.16076- en_US.rb.swc -compiler.theme Z:\.....\target\classes\configs\themes\Spark \spark.css -compiler.accessible=true -compiler.allow-source-path-overlap=false -compiler.as3=true -compiler.debug=false -compiler.es=false -compiler.fonts.managers flash.fonts.JREFontManager flash.fonts.BatikFontManager flash.fonts.AFEFontManager flash.fonts.CFFFontManager -compiler.fonts.local-fonts-snapshot Z:\.....\target\classes \fonts.ser -compiler.keep-generated-actionscript=false -licenses.license flashbuilder4 952309948800588759250406 -licenses.license flexbuilder4.displayedStartPageAtLeastOneTime true -compiler.locale en_US -compiler.optimize=true -compiler.source-path Z:\.....\src\main\flex -compiler.strict=true -use-network=true -compiler.verbose-stacktraces=false -compiler.actionscript-file-encoding UTF-8 -target-player 10.0.0 -default-background-color 8821927 -default-frame-rate 24 -default-script-limits 1000 60 -default-size 500 375 -compiler.headless-server=false -compiler.keep-all-type-selectors=false -compiler.use-resource-bundle-metadata=true -metadata.date Fri Mar 04 14:04:37 EET 2011 -metadata.localized-title Main x-default -verify-digests=true -compiler.namespaces.namespace+=http://ns.adobe.com/mxml/2009,Z:\..... \target\classes\config-4.1.0.16076\mxml-2009-manifest.xml -compiler.namespaces.namespace+=library://ns.adobe.com/flex/spark,Z: \.....\target\classes\config-4.1.0.16076\spark-manifest.xml -compiler.namespaces.namespace+=library://ns.adobe.com/flex/mx,Z:\..... \target\classes\config-4.1.0.16076\mx-manifest.xml -compiler.namespaces.namespace+=http://www.adobe.com/2006/mxml,Z:\..... \PozitronUI\target\classes\config-4.1.0.16076\mxml-manifest.xml - static-link-runtime-shared-libraries=false -load-config= -metadata.language+=en_US any help... regards,

    Read the article

< Previous Page | 137 138 139 140 141 142 143 144 145  | Next Page >