Search Results

Search found 17749 results on 710 pages for 'connection pool'.


  • Gartner PCC: A Shovel & Some Ah-Ha's

    - by kellsey.ruppel
    When Gartner Vice President and leading analyst Whit Andrews kicked off the Gartner Portals, Content & Collaboration Summit on Monday, March 12 at the Gaylord Palms in Orlando, FL by bringing a shovel to the stage, eyebrows raised and a few thoughts went through my head. Either this guy plans to go help the construction workers outside build that new pool at the Gaylord, or he took a wrong turn and is at the wrong conference. And how did he get that shovel through airport security? As Whit explained more, his objective became clear: take everything anyone has ever told you about portals and throw it out the window, because portals have evolved and the times they are most certainly changing.

    The future Web is here, available not only on browsers but also via a broad spectrum of access points, including automobiles, consumer electronics and more and more mobile devices. Not merely prevalent, the future Web is also multimedia-driven and operates in real time, driven by mobility, social media, streaming video and other dynamic services. Applications and user experiences are in the midst of an evolution, from the early, simple mobile Web models to today's Web 2.0 mobile apps and, ultimately, to a world of predominantly Web apps. Additionally, cloud services will forever change how portals and user experiences are designed, built, delivered, sourced and managed.

    So what does this mean for you? Today's organizations need software that will enable them not just to do their jobs, but to do them in a way that is familiar and easy. What does this mean for IT? Use software and technology as an enabler, not as a roadblock.

    Overall, we had a great week in Orlando learning how to improve the user experience, manage content explosion, launch social initiatives, transition to mobile environments and understand cloud and SaaS options. We had some great conversations throughout the conference and at the Oracle booth, and gave lots of demonstrations of Oracle WebCenter Sites and Oracle Social Network. As Christie mentioned earlier this week, our Vice President of Product Management and Strategy for WebCenter, Loren Weinberg, presented on the topic of customer engagement and talked about how organizations' relationships with their customers have fundamentally changed today and the resulting impact that has on their priorities. Loren also talked about the importance of customer engagement, why that matters now more than ever, and what you can do to help your company or organization succeed in this new world.

    The question asked in every keynote and session was a simple one: what is your "ah-ha" moment? I personally had quite a few, some of which I've captured below.

    - 70% of internal social initiatives eventually fail.
    - By 2014, refusing to communicate with consumers via social media will be as harmful as ignoring emails/phone calls is today.
    - Customer engagement = multi-channel + social & interactive + personal & relevant + optimized.
    - If people choose to talk about your product/company/service, it's because it's remarkable. (From Seth Godin's keynote, one of the highlights of the conference!)
    - The Web will become the primary method used for delivering content and applications to mobile devices.
    - By 2015, 20% of smart phone users worldwide will conduct commerce using context-enriched services on a weekly basis.
    - 86% of customers will pay more for a better customer experience.
    - The 6 P's of a quality user experience: Product, enabled by People, Patterns, Process, Profit and Priorities.
Did you attend the Gartner Summit? What were your ah-ha moments?

    Read the article

  • Dell 3721 Wifi problem Ubuntu 13.04

    - by Sebastian
    I have a Dell 3721 which came originally with Windows 8. I managed to install Ubuntu 12.10 on this laptop, even though there were no out-of-the-box drivers. With 12.10 I was forced to install this .deb package a couple of times. After installing it, it was OK for some weeks before I had to install it again. Maybe some security updates destroyed something which made it necessary to install it again... http://jas.gemnetworks.com/debian/pool/main/w/wireless-bcm43142/wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb My problem is that this package isn't working with Ubuntu 13.04 anymore, so I can't use wifi now. Also my display is almost dark; I can't change the brightness level with Ubuntu 13.04. Hope you can give me some advice.

        Dell:~$ sudo lspci -nn
        00:00.0 Host bridge [0600]: Intel Corporation 3rd Gen Core processor DRAM Controller [8086:0154] (rev 09)
        00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
        00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)
        00:16.0 Communication controller [0780]: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 [8086:1e3a] (rev 04)
        00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
        00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)
        00:1c.0 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 [8086:1e10] (rev c4)
        00:1c.1 PCI bridge [0604]: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 2 [8086:1e12] (rev c4)
        00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
        00:1f.0 ISA bridge [0601]: Intel Corporation HM76 Express Chipset LPC Controller [8086:1e59] (rev 04)
        00:1f.2 SATA controller [0106]: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] [8086:1e03] (rev 04)
        00:1f.3 SMBus [0c05]: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller [8086:1e22] (rev 04)
        01:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)
        02:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)

        @Dell:~$ lsusb
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 0a5c:21d7 Broadcom Corp.
        Bus 001 Device 004: ID 0bda:0129 Realtek Semiconductor Corp.
        Bus 001 Device 005: ID 0c45:64ad Microdia

        @Dell:~$ lspci -nn | grep VGA
        00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)

        @Dell:~$ sudo lshw -class network
          *-network
               description: Ethernet interface
               product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:01:00.0
               logical name: eth0
               version: 05
               serial: xx:xx:xx:xx:xx:xx
               size: 100Mbit/s
               capacity: 100Mbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.103 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
               resources: irq:42 ioport:2000(size=256) memory:c0404000-c0404fff memory:c0400000-c0403fff
          *-network UNCLAIMED
               description: Network controller
               product: BCM43142 802.11b/g/n
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:02:00.0
               version: 01
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list
               configuration: latency=0
               resources: memory:c0500000-c0507fff
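    Since the question describes a DKMS driver package that keeps breaking after updates, one thing worth checking is whether the module simply was not rebuilt for the new kernel. This is only a hedged sketch: the module name and version below are guessed from the .deb filename above (wireless-bcm43142 / 6.20.55.19) and may differ on the actual system.

        # show which kernels the DKMS module is built for, and which kernel is running
        dkms status
        uname -r
        # if the running kernel is missing from the list, rebuild/install the module for it
        sudo dkms install wireless-bcm43142/6.20.55.19 -k "$(uname -r)"
        # the Broadcom hybrid driver loads as the "wl" module
        sudo modprobe wl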

    Read the article

  • Friday Tips #3

    - by Chris Kawalek
    Even though yesterday was Thanksgiving here in the US, we still have a Friday tip for those of you around your computers today. In fact, we have two! The first one came in last week via our #AskOracleVirtualization Twitter hashtag. The tweet has disappeared into the ether now, but we remember the gist, so here it is:

    Question: Will there be an Oracle Virtual Desktop Client for Android?
    Answer by our desktop virtualization product development team: We are looking at Android as a supported platform for future releases.

    Question: How can I make a Sun Ray Client automatically connect to a virtual machine?
    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Someone recently asked how they can assign VMs to specific Sun Ray Desktop Units ("DTUs") without any user interaction being required, without the "Desktop Selector" being displayed, or any User Directory. That is, they wanted each Sun Ray to power on and immediately connect to a pre-assigned Solaris VM. This can be achieved by using "tokens" for user assignment; that is, the tokens found on Smart Cards, DTUs, or OVDC clients can be used in place of user credentials. Note, however, that mixing "token-only" assignments and "User Directories" in the same VDI Center won't work. Much of this procedure is covered in the documentation, particularly here. But it can be useful to have everything in one place, "cookbook-style":

    1. Create the "token-only" directory type: from the VDI administration interface, select "Settings", "Company", "New", select the "None" radio button, and click "Next." Enter a name for the new "Company", click "Next", then "Finish."
    2. Create Desktop Providers, Pools, and VMs as appropriate.
    3. Access the Sun Ray administration interface at http://servername:1660, log in using "root" credentials, and access the token IDs you wish to use for assignment. If you're using DTU tokens rather than Smart Card tokens, these can be found under the "Tokens" tab, by "Search-ing" using the "Currently Used Tokens" tab. DTUs can be identified by the prefix "pseudo." For example:
    4. Copy/paste this token into the VDI administrative interface by selecting "Users", "New", pasting in the token ID, and clicking "OK" - for example:
    5. Assign the token (DTU) to a desktop; that is, in the VDI Admin GUI, select "Pool", "Desktop", select the VM, click "Assign" and select the token you want, for example:

    In addition to assigning tokens to desktops, you'll need to bypass the login screen. To do this, you need to do the following:
    1. Disable VDI client authentication with: /opt/SUNWvda/sbin/vda settings-setprops -p clientauthentication=Disabled
    2. Disable the VDI login screen: add a kiosk argument of "-n" to the Sun Ray kiosk arguments screen. You set this on the Sun Ray administration page - "Advanced", "Kiosk Mode", "Edit", and add the "-n" option to the arguments screen, for example:
    3. Restart both the Sun Ray and VDI services:
       # /opt/SUNWut/sbin/utstart -c
       # /opt/SUNWvda/sbin/vda-service restart

    Remember, if you have a question for us, please post on Twitter with our hashtag (again, it's #AskOracleVirtualization), and we'll try to answer it if we can. See you next time!
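    For convenience, here is the shell-only portion of the recipe above in one place (the commands are exactly the ones quoted in the answer; the kiosk "-n" argument still has to be set through the Sun Ray admin GUI):

        # disable VDI client authentication
        /opt/SUNWvda/sbin/vda settings-setprops -p clientauthentication=Disabled
        # after adding "-n" to the kiosk arguments in the Sun Ray admin page, restart both services
        /opt/SUNWut/sbin/utstart -c
        /opt/SUNWvda/sbin/vda-service restart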

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (setting: old default -> new default, with notes):

        back_log: 50 -> 50 + (max_connections / 5), capped at 900
        binlog_checksum: off -> CRC32 (new variable in 5.6)
        binlog_row_event_max_size: 1k -> 8k
        flush_time: 1800 -> 0 on Windows (was already 0 on other platforms)
        host_cache_size: 128 -> 128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
        innodb_autoextend_increment: 8 -> 64 (now affects *.ibd files; 64 is 64 megabytes)
        innodb_buffer_pool_instances: 0 -> 8 (on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M)
        innodb_concurrency_tickets: 500 -> 5000
        innodb_file_per_table: off -> on
        innodb_log_file_size: 5M -> 48M (InnoDB will always change the size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
        innodb_old_blocks_time: 0 -> 1000 (1 second)
        innodb_open_files: 300 -> 300; if innodb_file_per_table is ON, the higher of table_open_cache or 300
        innodb_purge_batch_size: 20 -> 300
        innodb_purge_threads: 0 -> 1
        innodb_stats_on_metadata: on -> off
        join_buffer_size: 128k -> 256k
        max_allowed_packet: 1M -> 4M
        max_connect_errors: 10 -> 100
        open_files_limit: 0 -> 5000 (see note 1)
        query_cache_size: 0 -> 1M
        query_cache_type: on/1 -> off/0
        sort_buffer_size: 2M -> 256k
        sql_mode: none -> NO_ENGINE_SUBSTITUTION (see later post about default my.cnf for STRICT_TRANS_TABLES)
        sync_master_info: 0 -> 10000 (recommend: master_info_repository=table)
        sync_relay_log: 0 -> 10000
        sync_relay_log_info: 0 -> 10000 (recommend: relay_log_info_repository=table; also see Replication Relay and Status Logs)
        table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
        table_open_cache: 400 -> 2000 (also see table_open_cache_instances)
        thread_cache_size: 0 -> 8 + max_connections/100, capped at 100

    Note 1: In 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. Now it uses the higher of that and (5000 or what you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change.

    As part of this work I reviewed every old server setting plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team and the many excellent blog entries and comments from others over the years, including from many MySQL Gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan.

    With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The Gurus don't need this, but for many newcomers the defaults will be very useful.

    Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows.
    This is because we've found that DLLs with specified load addresses often fragment the limited four-gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed. If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message:

        [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers through shared hosting or end-user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems, and these changes shift them towards larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
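    If you want to see how far an existing server is from these new defaults before upgrading, a quick comparison with the standard mysql client looks like this (a sketch; add or remove variable names as needed, and note that some of them, such as binlog_checksum, only exist from 5.6):

        mysql -NB -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
          ('innodb_file_per_table','innodb_log_file_size','innodb_buffer_pool_instances',
           'query_cache_size','query_cache_type','sql_mode','max_allowed_packet','table_open_cache')"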

    Read the article

  • ORA-4031 Troubleshooting Tool ???

    - by Takeyoshi Sasaki
    ORA-4031 ???????????????????? SGA ????????????(??????)??????????????????????????????????????????????????? ORA-4031 ??????????????????? ORA-4031 Troubleshooting Tool ??????????? ORA-4031 Troubleshooting Tool ?? ORA-4031 Troubleshooting Tool ? ORA-4031 ????????? ORA-4031 ???????????????????????????????????WEB????????????????????????????????My Oracle Support ??????????????????????? ORA-4031 ??????????????????????????????ORA-4031 ?????????????????????????????? ORA-4031 Troubleshooting Tool ???????? My Oracle Support ?????? Diagnostic Tools Catalog ??  ORA-4031 Troubleshooting Tool ???????????????? ORA-4031 Troubleshooting Tool ??????????????? ORA-4031 Troubleshooting Tool ????? ???2??????????????? ORA-4031 ?????????????????????????????ORA-4031 Troubleshooting Tool ????????????????????????????????????????ORA-4031 ???????????????????? ??????????????? ORA-4031???????????? ????????????? ORA-4031 ?????? AWR???????????????????????????????????????????????????????????? ????·???????·???????????? ORA-4031 ?????????? ????????? SR ?????????????????????????? [ADR] ????·??????·?????????? [10g ???] AWR?????(STATSPACK????)???? ?????????? ORA-4031 ????????????????????????????????????????????? ORA-4031 ?????????????? ORA-4031 ??????1?1??????????? ORA-4031 Troubleshooting Tool ???????????(??????????????)???????? ORA-4031 ????????? ???????????????????????????????????????????????? ??????????????????????????????????????1)High Session_Cached_Cursor Setting Causing Excessive Consumption of Shared Pool???SESSION_CACHED_CURSOR ??????????????????????????????????????????????2)Insufficient SGA Free Memory at StartupThis issue could occur if in the init.ora parameters of your Alert log, (shared_pool_size + large_pool_size + java_pool_size + db_keep_cache_size + streams_pool_size + db_cache_size) / sga_target is greater than 90%.????????????????????????? ?????????????? shared_pool_size, large_pool_size, java_pool_size, db_keep_cache_size, streams_pool_size, db_cache_size ????? /sga_target ??? 90% ??????????????????sga_target ??? memory_target ??????????????????????????????????????????????????????????????????? shared_pool_size ???????????????????????????????????????????????????????????????????????(?????????)? sga_target ????????????????????????????????????????????????????????????? ?????????????????????????????????????????????1)In your Alert log,* Look for parameters under "System parameters with non-default values:". If session_cached_cursor * 2000 / shared_pool_size is greater than 10%, then session_cached_cursors are consuming significant shared_pool_size.??? ????????????? "System parameters with non-default values:" ????  session_cached_cursor * 2000 ??? shared_pool_size ? 10% ???????????????????????? ???2) In your Alert log, SGA Utilization (Sum of shared_pool_size, large_pool_size, java_pool_size, db_keep_cache_size, streams_pool_size and db_cache_size over sga_target) is 99%, which might be too high. ??? shared_pool_size, large_pool_size, java_pool_size, db_keep_cache_size, streams_pool_size and db_cache_size ???? sga_target ? 99% ???????????????????? ?????????????????????????????? My Oracle Support ??????????????????????????1)Decrease the parameter SESSION_CACHED_CURSORSSESSION_CACHED_CURSORS ???????????????????????2)Reduce the minimum values for the dynamic SGA components to allow memory manager to make changes as neededSGA ?????????????????(???)?????????????????? 2????????????????????? ORA-4031 Trobuleshooting Tool ????????????????????????????????????????? ORA-4031 Troubleshooting Tool ?????? 
ORA-4031 ??????????????????????ORA-4031 Troubleshooting Tool ???????????????????????????????????????????????????????ORA-4031 ????????????????????????????ORA-4031 ??????????????? ORA-4031 Troubleshooting Tool ?????????
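    The two checks quoted above can also be run by hand against the instance. A hedged sketch (a standard v$parameter query; it assumes sga_target is actually in use and that you have SYSDBA access):

        # compare the sum of the fixed pools against sga_target (the tool flags > 90%),
        # and session_cached_cursors * 2000 against shared_pool_size (flagged at > 10%)
        echo "SELECT name, value FROM v\$parameter WHERE name IN ('sga_target','shared_pool_size','large_pool_size','java_pool_size','db_keep_cache_size','streams_pool_size','db_cache_size','session_cached_cursors');" | sqlplus -s / as sysdba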

    Read the article

  • Postfix - am I sending spam?

    - by olrehm
    Today I received like 30 messages within 5 minutes telling me that some mail I sent could not be delivered, mostly to *.ru email addresses which I did not send any mail to. I have my own webserver (postfix/dovecot) set up using this guide (http://workaround.org/ispmail/lenny) but adjusted a little bit for Ubuntu. I tested whether I am an Open Relay, which I am apparently not. Now there are two possible reasons for the above mentioned emails: either I am sending out spam, or somebody wants me to think that, correct? How can I check this? I selected one particular address that I supposedly sent spam to. Then I searched my mail.log for this entry. I found two blocks that record that somebody from that server connected to my server and delivered some message to two different users. I cannot find an entry reporting that anyone from my server sent an email to that server. Does this mean it's just some mail to scare me, or could it still have been sent by me in the first place? Here is one such block from the log (I replaced some confidential stuff):

        Jun 26 23:23:28 mycustomernumber postfix/smtpd[29970]: connect from mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: 044991528995: client=mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber postfix/cleanup[29974]: 044991528995: message-id=<[email protected]>
        Jun 26 23:23:29 mycustomernumber postfix/qmgr[3369]: 044991528995: from=<>, size=2198, nrcpt=1 (queue active)
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) ESMTP::10024 /var/lib/amavis/tmp/amavis-20110626T223137-28598: <> -> <[email protected]> SIZE=2198 Received: from mycustomernumber.stratoserver.net ([127.0.0.1]) by localhost (rehmsen.de [127.0.0.1]) (amavisd-new, port 10024) with ESMTP for <[email protected]>; Sun, 26 Jun 2011 23:23:29 +0200 (CEST)
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) Checking: YakjkrdFq6A8 [195.144.251.97] <> -> <[email protected]>
        Jun 26 23:23:29 mycustomernumber postfix/smtpd[29970]: disconnect from mx.webstyle.ru[195.144.251.97]
        Jun 26 23:23:29 mycustomernumber amavis[28598]: (28598-11) lookup_sql_field(id) (WARN: no such field in the SQL table), "[email protected]" result=undef
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: connect from localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: 0A1FA1528A21: client=localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber postfix/cleanup[29974]: 0A1FA1528A21: message-id=<[email protected]>
        Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: from=<>, size=2841, nrcpt=1 (queue active)
        Jun 26 23:23:32 mycustomernumber postfix/smtpd[29979]: disconnect from localhost.localdomain[127.0.0.1]
        Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) FWD via SMTP: <> -> <[email protected]>,BODY=7BIT 250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21
        Jun 26 23:23:32 mycustomernumber amavis[28598]: (28598-11) Passed CLEAN, [195.144.251.97] [195.144.251.97] <> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: YakjkrdFq6A8, Hits: 2.249, size: 2197, queued_as: 0A1FA1528A21, 2882 ms
        Jun 26 23:23:32 mycustomernumber postfix/smtp[29975]: 044991528995: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=3.3, delays=0.39/0.01/0.01/2.9, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=28598-11, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 0A1FA1528A21)
        Jun 26 23:23:32 mycustomernumber postfix/qmgr[3369]: 044991528995: removed
        Jun 26 23:23:33 mycustomernumber postfix/smtp[29980]: 0A1FA1528A21: to=<[email protected]>, orig_to=<[email protected]>, relay=mx3.hotmail.com[65.54.188.110]:25, delay=1.2, delays=0.15/0.02/0.51/0.55, dsn=2.0.0, status=sent (250 <[email protected]> Queued mail for delivery)
        Jun 26 23:23:33 mycustomernumber postfix/qmgr[3369]: 0A1FA1528A21: removed
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection rate 1/60s for (smtp:195.144.251.97) at Jun 26 23:23:28
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max connection count 1 for (smtp:195.144.251.97) at Jun 26 23:23:28
        Jun 26 23:26:49 mycustomernumber postfix/anvil[29972]: statistics: max cache size 1 at Jun 26 23:23:28

    I can provide more info if you tell me what you need to know. Thank you for your help!
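    A few standard checks can help distinguish the two cases described above, i.e. whether this host actually handed mail to an outside relay or only received bounce/backscatter messages (a sketch; it assumes the default Debian log location /var/log/mail.log):

        # anything still sitting in the outbound queue?
        mailq
        # outbound deliveries made by this host show up as postfix/smtp lines with a relay= field;
        # exclude the local amavis hop on 127.0.0.1
        grep 'postfix/smtp\[' /var/log/mail.log | grep 'relay=' | grep -v '127.0.0.1'
        # mail injected locally (e.g. by a compromised web script) arrives via the pickup service
        grep 'postfix/pickup' /var/log/mail.log | tail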

    Read the article

  • Slow login to load-balanced Terminal Server 2008 behind Gateway Server

    - by Frans
    I have a small load-balanced (using Session Broker) Terminal Server 2008 farm behind a Gateway Server which is accessed from the Internet. The problem I have is that there is a delay of 20-30 seconds if the session broker switches the user to another server during login. I think this is related to the fact that I am forcing the security layer to be RDP rather than SSL.

    The background

    The Gateway server has a public routable IP address and DNS name so it can be accessed from the Internet, and all users come in via this route (the system is used to provide access to hosted applications to external customers). The actual terminal servers only have internal IP addresses. This works really well, except that with a Vista or Windows 7 client, the Remote Desktop client will negotiate with the server to use SSL for the security layer. This then exposes the auto-generated certificate that TS1 or TS2 has - but since they are internal, auto-generated certificates, the client will get a stern warning that the certificate is not valid. I can't give the servers a properly authorised certificate as the servers do not have a public routable IP address or DNS name. Instead, I am using Group Policy to force the connections to be over RDP instead of SSL:

        \Computer Configuration\Policies\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Security\Require use of specific security layer for remote (RDP) connections

    The Windows 7 user now gets a much less stern warning that "the server's identity cannot be confirmed", which I can live with. I don't have enough control over the end-users' machines to ask them to install a new root certificate either. TS1 and TS2 are also load-balanced using the Session Broker, which is installed on the Gateway Server. I am using round-robin DNS, so the user's initial connection will go via Gateway1 to either TS1 or TS2. TS1/TS2 will then talk to the session broker and may pass the user to the other server. I.e. the user may get connected to TS2, but after talking to the session broker the user may be passed to TS1, which is where they will run their session. When this switching of servers happens, in my setup, the screen sits with the word "Welcome" for 20-30 seconds, after which it flickers, "Welcome" is shown again, and then it flashes through the normal login screens (i.e. "wait for user profile manager" etc). Having done some research, I think what is happening is that the user is being fully logged on to TS2 (while "Welcome" is shown) before being passed to TS1, where they are then logged in again. It is interesting that normally when you see the "Welcome" word, the little circle to the left rotates. However, it does not rotate during this delay - the screen just looks frozen. This blog post leads me to think that this is because CredSSP is not being used, probably because I am disallowing SSL and forcing RDP.

    What I have tried

    I enabled SSL again, which removes the "Welcome" delay. However, it seems to introduce a new delay much earlier in the process. Specifically, when the RDP client is saying "initialising connection" - this is now much slower. Quite apart from that, my certificate problem precludes me from using that solution without considerable difficulty. I tried disabling the load balancing (just removing the servers from the session broker farm) and the connections do not have any delay. The problem is also intermittent in the sense that it only happens when the user gets bumped from one server to another.
I tested this by trying to connect directly to TS1 (via the Gateway, of course) and then checking which server I actually got connected to. Just to be sure, I also by-passed the round-robin DNS to see if it had any impact and it doesn't. The setup is essentially in line with MS recommendations here: TS Session Broker Load Balancing Step-by-Step Guide I tried changing to using a dedicated redirector. Basically, rather than using a round-robin DNS, I pointed my DNS to the Gateway server and configured it to be a dedicated redirector (disallow logons, add it to the farm). Same problem, alas. Any ideas or suggestions gratefully received.

    Read the article

  • How to bridge Debian guest VM to VPN via Cisco AnyConnect Client running on Windows Vista host

    - by bgoodr
    I am running Cisco Anyconnect VPN Client version 2.5.3054 on a laptop running Windows Vista Home Premium (version 6.0.6002) Service Pack 2. I am running VMware Player version 4.0.2 build-591240. The guest operating system running under VMware Player is Debian 6.0.2.1 i386. The laptop is connected to a wireless connection, and I can browse the web from Windows Vista using Firefox just fine. I am able to boot into the Debian VM, open up a browser and access websites on the WAN from within the VM just fine. I can ping real Linux hosts on my LAN via:

        ping <lan_system>.local

    where <lan_system> is the hostname returned from uname -a on that system on my LAN. From a DOS CMD shell, I am able to ping hosts that exist on the remote network served by the Cisco AnyConnect Client's VPN network (and without the .local suffix applied as above):

        ping <remote_system>

    However, from within the Debian VM, I expect to be able to also ping those same remote hosts (<remote_system>) that are tunnelled over the VPN set up by the Cisco AnyConnect Client. Let's say that I can ping a <remote_system> called flubber from a DOS CMD prompt just fine. When I execute the Linux ping command from inside the Debian VM via:

        ping flubber

    it returns immediately with this output:

        ping: unknown host flubber

    For reference, since I suspect it will be useful, here is the output of the route print command from the DOS CMD prompt:

        ===========================================================================
        Interface List
        30 ...00 05 9a 3c 7a 00 ...... Cisco AnyConnect VPN Virtual Miniport Adapter for Windows
        11 ...00 1b 9e c4 de e5 ...... Atheros AR5007EG Wireless Network Adapter
        26 ...00 50 56 c0 00 01 ...... VMware Virtual Ethernet Adapter for VMnet1
        28 ...00 50 56 c0 00 08 ...... VMware Virtual Ethernet Adapter for VMnet8
         1 ........................... Software Loopback Interface 1
        12 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
        13 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3
        32 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #4
        27 ...00 00 00 00 00 00 00 e0 isatap.{E5292CF6-4FBB-4320-806D-A6B366769255}
        17 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2
        20 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #8
        22 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #10
        24 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #11
        25 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #12
        29 ...00 00 00 00 00 00 00 e0 isatap.{C3852986-5053-4E2E-BE60-52EA2FCF5899}
        41 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #14
        ===========================================================================

    At the top window border of the VM, clicking on Virtual Machine, then Virtual Machine Settings, then Network Adapter, I have these two options checked:

        [X] Bridged: Connected directly to the physical Network
        [X] Replicate physical network connection state
        [ ] NAT: used to share the host's IP address
        [ ] Host-only: A private network shared with the host
        [ ] LAN segment: [ ] <LAN Segments...> <Advanced>

    I've toyed with the other options such as NAT and Host-only but that had no effect. Is there some way to allow the VM to access those <remote_system>s?
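    Given that ping fails with "unknown host" rather than timing out, it may help to separate name resolution from reachability inside the guest. A hedged sketch (the IP address and fully qualified name are placeholders, not values from the question):

        cat /etc/resolv.conf            # which DNS servers does the Debian guest actually use?
        ping -c 3 10.0.0.25             # hypothetical remote IP: does raw connectivity over the VPN work?
        nslookup flubber                # short name, as used from the Windows side
        nslookup flubber.corp.example   # hypothetical fully qualified name, if the VPN's DNS domain is known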

    Read the article

  • Red Hat Yum not working out of the box?

    - by Tucker
    I have a server runnning Red Hat Enterprise Linux v5.6 in the cloud. My project constraints do not allow me to use another OS. When I created the cloud server, I was able to SSH into it and access the shell. I next ran the command: sudo yum update But the command failed. About a month ago I created another server with the same machine image and didn't have that error. Why is it failing now? The following is the terminal output sudo yum update Loaded plugins: security Repository rhel-server is listed more than once in the configuration Traceback (most recent call last): File "/usr/bin/yum", line 29, in ? yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 309, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 178, in main result, resultmsgs = base.doCommands() File "/usr/share/yum-cli/cli.py", line 345, in doCommands self._getTs(needTsRemove) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs self._getTsInfo(remove_only) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo pkgSack = self.pkgSack File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 662, in <lambda> pkgSack = property(fget=lambda self: self._getSacks(), File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 502, in _getSacks self.repos.populateSack(which=repos) File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack sack.populate(repo, mdtype, callback, cacheonly) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 168, in populate if self._check_db_version(repo, mydbtype): File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 226, in _check_db_version return repo._check_db_version(mdtype) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1233, in _check_db_version repoXML = self.repoXML File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1406, in <lambda> repoXML = property(fget=lambda self: self._getRepoXML(), File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1398, in _getRepoXML self._loadRepoXML(text=self) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1388, in _loadRepoXML return self._groupLoadRepoXML(text, ["primary"]) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1372, in _groupLoadRepoXML if self._commonLoadRepoXML(text): File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1208, in _commonLoadRepoXML result = self._getFileRepoXML(local, text) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 989, in _getFileRepoXML cache=self.http_caching == 'all') File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 826, in _getFile http_headers=headers, File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 412, in urlgrab return self._mirror_try(func, url, kw) File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 398, in _mirror_try return func_ref( *(fullurl,), **kwargs ) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 936, in urlgrab return self._retry(opts, retryfunc, url, filename) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 854, in _retry r = apply(func, (opts,) + args, {}) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 922, in retryfunc fo = URLGrabberFileObject(url, filename, opts) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1010, in __init__ self._do_open() File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1093, in _do_open fo, 
hdr = self._make_request(req, opener) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1202, in _make_request fo = opener.open(req) File "/usr/lib64/python2.4/urllib2.py", line 358, in open response = self._open(req, data) File "/usr/lib64/python2.4/urllib2.py", line 376, in _open '_open', req) File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain result = func(*args) File "/usr/lib64/python2.4/site-packages/M2Crypto/m2urllib2.py", line 82, in https_open h.request(req.get_method(), req.get_selector(), req.data, headers) File "/usr/lib64/python2.4/httplib.py", line 810, in request self._send_request(method, url, body, headers) File "/usr/lib64/python2.4/httplib.py", line 833, in _send_request self.endheaders() File "/usr/lib64/python2.4/httplib.py", line 804, in endheaders self._send_output() File "/usr/lib64/python2.4/httplib.py", line 685, in _send_output self.send(msg) File "/usr/lib64/python2.4/httplib.py", line 652, in send self.connect() File "/usr/lib64/python2.4/site-packages/M2Crypto/httpslib.py", line 47, in connect self.sock.connect((self.host, self.port)) File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 174, in connect ret = self.connect_ssl() File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 167, in connect_ssl return m2.ssl_connect(self.ssl, self._timeout) M2Crypto.SSL.SSLError: certificate verify failed
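    The traceback ends in "certificate verify failed" from M2Crypto, so before digging into yum itself it may be worth checking the TLS path to the repository. A hedged sketch (the host name is a placeholder; use the baseurl from the .repo files):

        date                                  # a badly skewed clock makes certificate validation fail
        grep -r '^baseurl' /etc/yum.repos.d/  # find the https URL yum is actually contacting
        openssl s_client -connect repo.example.com:443 -showcerts < /dev/null
        curl -vI https://repo.example.com/    # does the system CA bundle accept the server's chain?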

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using:

    - Zend Framework 1.11.5 (complete MVC)
    - PHP 5.3.6
    - Apache 2.2.19
    - CentOS 5.6 i686 virtuozzo on vps
    - cPanel WHM 11.30.1 (build 4)
    - Mysql 5.1.56-log
    - Mysqli API 5.1.56

    The issue started here http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, PHP is giving me random segmentation faults.

        [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11
        [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php

    About extensions: when I compile PHP with the "--enable-debug" flag, I have to disable this line:

        zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so"

    Otherwise, the server doesn't accept requests and I get a "The connection with the server was reset". It is possible that I have to disable eaccelerator too for the same reason. I still don't get why Apache sometimes runs with it and sometimes does not:

        extension="eaccelerator.so"

    Anyway, after I get httpd running, seg-faults can occur randomly. If I don't compile PHP with the "--enable-debug" flag, I can get a PHP crash DETERMINISTICALLY:

        <?php
        class Admin_DbController extends Controller_BaseController
        {
            public function updateSqlDefinitionsAction()
            {
                $db = Zend_Registry::get('db');
                $row = $db->fetchRow("SHOW CREATE TABLE 222AFI");
            }
        }
        ?>

    BUT if I compile PHP with the "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash. I have to be doing many parallel requests for a few seconds to get a crash:

        <?php
        class Admin_DbController extends Controller_BaseController
        {
            public function updateSqlDefinitionsAction()
            {
                $db = Zend_Registry::get('db');
                $tableList = $db->listTables();
                foreach ($tableList as $tableName) {
                    $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName));
                    file_put_contents(
                        DB_DEFINITIONS_PATH . '/' . $tableName . '.sql',
                        $row['Create Table'] . ';'
                    );
                }
            }
        }
        ?>

    Please notice this is the same script, but creating DDL for all tables in the database rather than for one. It seems that when PHP is heavily loaded (with extensions and me doing many parallel requests) is when I get PHP to crash.

    About starting httpd with "-X": I've tried. The thing is, it is already hard to make PHP crash with --enable-debug, and with the "-X" option (which only enables one child process) I can't do parallel requests. So I haven't been able to create a proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php

    My concrete question is: what do I do to get a coredump?

        root@GWT4 [~]# httpd -V
        Server version: Apache/2.2.19 (Unix)
        Server built: Jul 20 2011 19:18:58
        Cpanel::Easy::Apache v3.4.2 rev9999
        Server's Module Magic Number: 20051115:28
        Server loaded: APR 1.4.5, APR-Util 1.3.12
        Compiled using: APR 1.4.5, APR-Util 1.3.12
        Architecture: 32-bit
        Server MPM: Prefork
          threaded: no
          forked: yes (variable process count)
        Server compiled with....
        -D APACHE_MPM_DIR="server/mpm/prefork"
        -D APR_HAS_SENDFILE
        -D APR_HAS_MMAP
        -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
        -D APR_USE_SYSVSEM_SERIALIZE
        -D APR_USE_PTHREAD_SERIALIZE
        -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
        -D APR_HAS_OTHER_CHILD
        -D AP_HAVE_RELIABLE_PIPED_LOGS
        -D DYNAMIC_MODULE_LIMIT=128
        -D HTTPD_ROOT="/usr/local/apache"
        -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
        -D DEFAULT_PIDLOG="logs/httpd.pid"
        -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
        -D DEFAULT_LOCKFILE="logs/accept.lock"
        -D DEFAULT_ERRORLOG="logs/error_log"
        -D AP_TYPES_CONFIG_FILE="conf/mime.types"
        -D SERVER_CONFIG_FILE="conf/httpd.conf"
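    To answer the concrete question (how to get a coredump), here is a minimal sketch using only standard Linux/Apache facilities; the binary path is the one from the error log above, while the core location and directive placement are assumptions to adapt:

        # as root, before (re)starting httpd:
        ulimit -c unlimited                                      # allow core files for child processes
        echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern   # write cores to a known, writable place
        # in httpd.conf (standard Apache directive), then restart Apache:
        #   CoreDumpDirectory /tmp
        # after the next signal 11, load the core into gdb and take a backtrace ("bt full"):
        gdb /usr/local/cpanel/cgi-sys/php5 /tmp/core.php5.<pid>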

    Read the article

  • Delaying NIS & NFS startup till after network interface is fully ready on Fedora 17

    - by obmarg
    I've recently set up a fedora 17 server for our network, and I've been having trouble getting the NIS service to work on startup. Here's some logs from the system: Aug 21 12:57:12 cairnwell ypbind-pre-setdomain[718]: Setting NIS domain: 'indigo-nis' (environment variable) Aug 21 12:57:13 cairnwell ypbind: Binding NIS service Aug 21 12:57:13 cairnwell rpc.statd[730]: Unable to prune capability 0 from bounding set: Operation not permitted Aug 21 12:57:13 cairnwell systemd[1]: nfs-lock.service: control process exited, code=exited status=1 Aug 21 12:57:13 cairnwell systemd[1]: Unit nfs-lock.service entered failed state. Aug 21 12:57:14 cairnwell setroubleshoot: SELinux is preventing /usr/sbin/rpc.statd from using the setpcap capability. For complete SELinux messages. run sealert -l 024bba8a-b0ef-43dc-b195-5c9a2d4c4d41 Aug 21 12:57:15 cairnwell kernel: [ 18.822282] bnx2 0000:02:00.0: em1: NIC Copper Link is Up, 1000 Mbps full duplex Aug 21 12:57:15 cairnwell kernel: [ 18.822925] ADDRCONF(NETDEV_CHANGE): em1: link becomes ready Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): carrier now ON (device state 20) Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40] Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> Auto-activating connection 'System em1'. Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> Activation (em1) starting connection 'System em1' Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): device state change: disconnected -> prepare (reason 'none') [30 40 0] ....... Aug 21 12:57:19 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound Aug 21 12:57:26 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound Aug 21 12:57:31 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound Aug 21 12:57:35 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound Aug 21 12:58:00 cairnwell ypbind: Binding took 47 seconds Aug 21 12:58:00 cairnwell ypbind: NIS server for domain indigo-nis is not responding. Aug 21 12:58:01 cairnwell ypbind: Killing ypbind with PID 733. Aug 21 12:58:01 cairnwell ypbind-post-waitbind[734]: /usr/lib/ypbind/ypbind-post-waitbind: line 51: kill: SIGTERM: invalid signal specification Aug 21 12:58:01 cairnwell systemd[1]: ypbind.service: control process exited, code=exited status=1 Aug 21 12:58:01 cairnwell systemd[1]: Unit ypbind.service entered failed state. By the looks of these logs the ypbind service is starting up at 12:57:12 but the network interface isn't coming up till 12:57:15. My guess is that this is causing ypbind to time out when trying to connect. As a knock-on effect the NIS failure is causing problems with NFS which is no longer able to map UIDs properly. This problem doesn't seem to be fixed by actually starting ypbind etc. so I've had to set all my NFS shares to noauto. I have tried manually adding NETWORKDELAY and NETWORKWAIT in /etc/sysconfig/network and also running systemctl enable NetworkManager-wait-online.service as I've seen suggested in some places, but neither of these have had any effect. It is relatively easy to fix manually by restarting ypbind & mounting NFS shares after the network has started up, but it's less than ideal to have to do this every time the server has been rebooted. Does anyone know of an easy (and preferably hack-free) way of delaying the ypbind startup till after the network interface is fully ready?
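    One way to express "wait for the network" that avoids the /etc/sysconfig hacks is to make the unit ordering explicit. A minimal sketch, assuming the stock unit file lives in /usr/lib/systemd/system and that NetworkManager-wait-online.service (already mentioned above) is available:

        # let a local copy of the unit override the packaged one
        cp /usr/lib/systemd/system/ypbind.service /etc/systemd/system/ypbind.service
        # then add these two lines to the [Unit] section of the copy:
        #   Wants=NetworkManager-wait-online.service
        #   After=NetworkManager-wait-online.service
        systemctl enable NetworkManager-wait-online.service
        systemctl daemon-reload
        systemctl restart ypbind.service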

    Read the article

  • ufw portforwarding to virtualbox guest

    - by user85116
    My goal is to be able to connect using remote desktop on my desktop machine, to windows xp running in virtualbox on my linux server. My setup: server = debian squeeze, 64 bit, with a public IP address (host) virtualbox-ose 3.2.10 (from debian repo) windows xp running inside VBox as a guest; bridged networking mode in VBox, ip = 192.168.1.100 ufw as the firewall on debian, 3 ports are opened: 22 / ssh, 80 / apache, and 3389 for remote desktop My problem: If I try to use remote desktop on my home computer, I am unable to connect to the windows guest. If I first "ssh -X -C" into the debian server, then run "rdesktop 192.168.1.100", I am able to connect without issue. The windows firewall was configured to allow remote desktop connections, and I've even turned it off (as it is redundant here) to see if that was the problem but it made no difference. Since I am able to connect from inside the local subnet, I suspect that I have not setup my debian firewall correctly to handle connections from outside the LAN. Here is what I've done... First my ufw status: ufw status Status: active To Action From -- ------ ---- 22 ALLOW Anywhere 80 ALLOW Anywhere 3389 ALLOW Anywhere I edited /etc/ufw/sysctl.conf and added: net/ipv4/ip_forward=1 Edited /etc/default/ufw and added: DEFAULT_FORWARD_POLICY="ACCEPT" Edited /etc/ufw/before.rules and added: # setup port forwarding to forward rdp to windows VM *nat :PREROUTING - [0:0] -A PREROUTING -i eth0 -p tcp --dport 3389 -j DNAT --to-destination 192.168.1.100 -A PREROUTING -i eth0 -p udp --dport 3389 -j DNAT --to-destination 192.168.1.100 COMMIT # Don't delete these required lines, otherwise there will be errors *filter <snip> Restarted the firewall etc., but no connection. My log files on the debian host show this (my public ip address was removed for this posting but it is correct in the actual log): Feb 6 11:11:21 localhost kernel: [171991.856941] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27518 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:21 localhost kernel: [171991.856963] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27518 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:24 localhost kernel: [171994.856701] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27519 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:24 localhost kernel: [171994.856723] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27519 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:30 localhost kernel: [172000.856656] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27520 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:30 localhost kernel: [172000.856678] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27520 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Although this is the current setup / configuration, I've also tried several variations of this; I thought maybe the ISP would be blocking 3389 for some reason and tried using different ports, but again there was no connection. Any ideas...? Did I forget to modify some file somewhere?
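    To narrow down where the forwarded packets stop, a few standard diagnostics can be run on the Debian host while a connection attempt is made from outside (nothing here changes the configuration):

        sudo sysctl net.ipv4.ip_forward            # must report 1 for DNAT'd packets to be forwarded
        sudo iptables -t nat -L PREROUTING -n -v   # are the DNAT rules loaded, and is their packet counter increasing?
        sudo iptables -L FORWARD -n -v             # is forwarded traffic towards 192.168.1.100 being accepted?
        sudo tcpdump -ni eth0 tcp port 3389        # do rewritten packets go out towards the VM, and do replies come back?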

    Read the article

  • OpenSwan IPSec phase #2 complications

    - by XXL
    Phase #1 (IKE) succeeds without any problems (verified at the target host). Phase #2 (IPSec), however, is erroneous at some point (apparently due to misconfiguration on localhost). This should be an IPSec-only connection. I am using OpenSwan on Debian. The error log reads the following (the actual IP-addr. of the remote endpoint has been modified): pluto[30868]: "x" #2: initiating Quick Mode PSK+ENCRYPT+PFS+UP+IKEv2ALLOW+SAREFTRACK {using isakmp#1 msgid:5ece82ee proposal=AES(12)_256-SHA1(2)_160 pfsgroup=OAKLEY_GROUP_DH22} pluto[30868]: "x" #1: ignoring informational payload, type NO_PROPOSAL_CHOSEN msgid=00000000 pluto[30868]: "x" #1: received and ignored informational message pluto[30868]: "x" #1: the peer proposed: 0.0.0.0/0:0/0 - 0.0.0.0/0:0/0 pluto[30868]: "x" #3: responding to Quick Mode proposal {msgid:a4f5a81c} pluto[30868]: "x" #3: us: 192.168.1.76<192.168.1.76[+S=C] pluto[30868]: "x" #3: them: 222.222.222.222<222.222.222.222[+S=C]===10.196.0.0/17 pluto[30868]: "x" #3: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 pluto[30868]: "x" #3: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 pluto[30868]: "x" #1: ignoring informational payload, type NO_PROPOSAL_CHOSEN msgid=00000000 pluto[30868]: "x" #1: received and ignored informational message pluto[30868]: "x" #3: next payload type of ISAKMP Hash Payload has an unknown value: 97 X pluto[30868]: "x" #3: malformed payload in packet pluto[30868]: | payload malformed after IV I am behind NAT and this is all coming from wlan2. Here are the details: default via 192.168.1.254 dev wlan2 proto static 169.254.0.0/16 dev wlan2 scope link metric 1000 192.168.1.0/24 dev wlan2 proto kernel scope link src 192.168.1.76 metric 2 Output of ipsec verify: Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.37/K3.2.0-24-generic (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Two or more interfaces found, checking IP forwarding [OK] Checking NAT and MASQUERADEing [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] This is what happens when I run ipsec auto --up x: 104 "x" #1: STATE_MAIN_I1: initiate 003 "x" #1: received Vendor ID payload [RFC 3947] method set to=109 106 "x" #1: STATE_MAIN_I2: sent MI2, expecting MR2 003 "x" #1: received Vendor ID payload [Cisco-Unity] 003 "x" #1: received Vendor ID payload [Dead Peer Detection] 003 "x" #1: ignoring unknown Vendor ID payload [502099ff84bd4373039074cf56649aad] 003 "x" #1: received Vendor ID payload [XAUTH] 003 "x" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed 108 "x" #1: STATE_MAIN_I3: sent MI3, expecting MR3 004 "x" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_128 prf=oakley_sha group=modp1024} 117 "x" #2: STATE_QUICK_I1: initiate 010 "x" #2: STATE_QUICK_I1: retransmission; will wait 20s for response 010 "x" #2: STATE_QUICK_I1: retransmission; will wait 40s for response 031 "x" #2: max number of retransmissions (2) reached STATE_QUICK_I1. 
    No acceptable response to our first Quick Mode message: perhaps peer likes no proposal
    000 "x" #2: starting keying attempt 2 of at most 3, but releasing whack

    I have enabled NAT traversal in ipsec.conf accordingly. Here are the settings relative to the connection in question:

        version 2.0

        config setup
            plutoopts="--perpeerlog"
            plutoopts="--interface=wlan2"
            dumpdir=/var/run/pluto/
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey

        conn x
            authby=secret
            pfs=yes
            auto=add
            phase2alg=aes256-sha1;dh22
            keyingtries=3
            ikelifetime=8h
            type=transport
            left=192.168.1.76
            leftsubnet=192.168.1.0/24
            leftprotoport=0/0
            right=222.222.222.222
            rightsubnet=10.196.0.0/17
            rightprotoport=0/0

    Here are the specs provided by the other end that must be met for Phase #2:

        encryption algorithm: AES (128 or 256 bit)
        hash algorithm: SHA
        local ident1 (addr/mask/prot/port): (10.196.0.0/255.255.128.0/0/0)
        local ident2 (addr/mask/prot/port): (10.241.0.0/255.255.0.0/0/0)
        remote ident (addr/mask/prot/port): (x.x.x.x/x.x.x.x/0/0) (internal network or localhost)
        Security association lifetime: 4608000 kilobytes/3600 seconds
        PFS: DH group2

    So, finally, what might be the cause of the issue that I am experiencing? Thank you.
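    One detail worth noting against the peer's requirements: the conn proposes dh22 for PFS while the other end specifies DH group2 (modp1024). If that mismatch turns out to be the problem, the phase 2 proposal in ipsec.conf would look something like the sketch below (an assumption to verify against the peer, not a confirmed fix):

        conn x
            pfs=yes
            phase2=esp
            phase2alg=aes256-sha1;modp1024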

    Read the article

  • WebDav issue with Mac OS X 10.5.3 onwards

    - by svnr
    Hi, We upgraded to Mac OS X 10.5.3 and getting problem when uploading files (PUT) to a webdav server (the server is Apache running on a Windows environment). When we drag and drop on to a webdav folder using Finder we get a -36 error. When looking at the stack trace of the web server the problem is due to INVALID CRLF or some times getting the following error. Both the stack point to error when copying the stream. When googled found that it is because the Mac changed to Transfer-Encoding to 'Chunked' ClientAbortException: java.net.SocketException: Software caused connection abort: socket write error at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:366) at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433) at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348) at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:392) at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:381) at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:88) at org.apache.commons.io.CopyUtils.copy(CopyUtils.java:200) at com.artesia.webdav.action.helper.ResponseWriterHelper.writeFileContentResponse(ResponseWriterHelper.java:206) at com.artesia.webdav.action.GetMethodAction.executeWebDavMethod(GetMethodAction.java:147) at com.artesia.webdav.action.BaseWebDavMethodAction.execute(BaseWebDavMethodAction.java:257) at com.artesia.webdav.action.BaseWebDavAction.execute(BaseWebDavAction.java:92) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507) at javax.servlet.http.HttpServlet.service(HttpServlet.java:697) at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301) at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1069) at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:455) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:279) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507) at javax.servlet.http.HttpServlet.service(HttpServlet.java:697) at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672) at 
org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301) at com.artesia.webdav.web.BaseWebDavServlet.forward(BaseWebDavServlet.java:91) at com.artesia.webdav.web.BaseWebDavServlet.service(BaseWebDavServlet.java:83) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.action.RequestFilter.doFilter(RequestFilter.java:46) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.web.WebDavAuthenticationFilter.doFilter(WebDavAuthenticationFilter.java:463) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.web.MacSessionHackFilter.doFilter(MacSessionHackFilter.java:111) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:175) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:74) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527) at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112) at java.lang.Thread.run(Thread.java:595) Caused by: java.net.SocketException: Software caused connection abort: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92) at java.net.SocketOutputStream.write(SocketOutputStream.java:136) at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:746) at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433) at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348) at org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:769) at 
org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:117) at org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:579) at org.apache.coyote.Response.doWrite(Response.java:559) at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:361)

    Read the article

  • Postfix - Gmail - Mountain Lion // can't send mail

    - by miako
    I have read most of the tutorials found on google but still can't make it work. I run the command : date | mail -s "Test" [email protected] . The log is this : Oct 22 11:38:00 XXX.local postfix/master[288]: daemon started -- version 2.9.2, configuration /etc/postfix Oct 22 11:38:00 XXX.local postfix/pickup[289]: 9D85418A031: uid=501 from=<me> Oct 22 11:38:00 XXX.local postfix/cleanup[291]: 9D85418A031: message-id=<[email protected]> Oct 22 11:38:00 XXX.local postfix/qmgr[290]: 9D85418A031: from=<[email protected]>, size=327, nrcpt=1 (queue active) Oct 22 11:38:00 XXX.local postfix/smtp[293]: initializing the client-side TLS engine Oct 22 11:38:02 XXX.local postfix/smtp[293]: setting up TLS connection to smtp.gmail.com[173.194.70.109]:587 Oct 22 11:38:02 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: TLS cipher list "ALL:!EXPORT:!LOW:+RC4:@STRENGTH:!eNULL" Oct 22 11:38:02 XXX.local postfix/smtp[293]: SSL_connect:before/connect initialization Oct 22 11:38:02 XXX.local postfix/smtp[293]: SSL_connect:SSLv2/v3 write client hello A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server hello A Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=2 verify=0 subject=/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA Oct 22 11:38:03 --- last message repeated 1 time --- Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=1 verify=1 subject=/C=US/O=Google Inc/CN=Google Internet Authority G2 Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: certificate verification depth=0 verify=1 subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server certificate A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server done A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write client key exchange A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write change cipher spec A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 write finished A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 flush data Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read server session ticket A Oct 22 11:38:03 XXX.local postfix/smtp[293]: SSL_connect:SSLv3 read finished A Oct 22 11:38:03 XXX.local postfix/smtp[293]: smtp.gmail.com[173.194.70.109]:587: subject_CN=smtp.gmail.com, issuer_CN=Google Internet Authority G2, fingerprint E4:CA:10:85:C3:53:00:E6:A1:D2:AC:C4:35:E4:A2:10, pkey_fingerprint=D6:06:2E:15:AF:DF:E9:50:A5:B4:E2:E4:C5:2E:F9:BA Oct 22 11:38:03 XXX.local postfix/smtp[293]: Untrusted TLS connection established to smtp.gmail.com[173.194.70.109]:587: TLSv1 with cipher RC4-SHA (128/128 bits) Oct 22 11:38:03 XXX.local postfix/smtp[293]: 9D85418A031: to=<[email protected]>, relay=smtp.gmail.com[173.194.70.109]:587, delay=3.4, delays=0.26/0.13/2.8/0.26, dsn=5.5.1, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530-5.5.1 Authentication Required. 
Learn more at 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 s3sm54097220eeo.3 - gsmtp (in reply to MAIL FROM command)) Oct 22 11:38:04 XXX.local postfix/cleanup[291]: D4D2F18A03C: message-id=<[email protected]> Oct 22 11:38:04 XXX.local postfix/qmgr[290]: D4D2F18A03C: from=<>, size=2382, nrcpt=1 (queue active) Oct 22 11:38:04 XXX.local postfix/bounce[297]: 9D85418A031: sender non-delivery notification: D4D2F18A03C Oct 22 11:38:04 XXX.local postfix/qmgr[290]: 9D85418A031: removed Oct 22 11:38:04 XXX.local postfix/local[298]: D4D2F18A03C: to=<[email protected]>, relay=local, delay=0.11, delays=0/0.08/0/0.02, dsn=2.0.0, status=sent (delivered to mailbox) Oct 22 11:38:04 XXX.local postfix/qmgr[290]: D4D2F18A03C: removed Oct 22 11:39:00 XXX.local postfix/master[288]: master exit time has arrived I am really confused, as I have never set up an MTA and I need it for local web development. I don't use XAMPP. I use the built-in servers. Can anyone guide me?
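    The 530 5.5.1 bounce means Gmail is rejecting the relay because Postfix never authenticates. A minimal relay sketch for main.cf, assuming Postfix was built with Cyrus SASL support and that an app-specific password is used (the paths and the address below are placeholders, not taken from this setup):

        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes

        # /etc/postfix/sasl_passwd (chmod 600), then: postmap /etc/postfix/sasl_passwd && postfix reload
        [smtp.gmail.com]:587 [email protected]:app-specific-password

    This is only a sketch of the usual Gmail relay configuration, not a verified fix for this particular machine.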

    Read the article

  • Windows Server 2003 RDP not listening on IPv6

    - by Ian Boyd
    I have a Windows Server 2003 machine with IPv6 enabled: Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : newland.local IP Address. . . . . . . . . . . . : 192.168.1.244 Subnet Mask . . . . . . . . . . . : 255.255.0.0 IP Address. . . . . . . . . . . . : 2001:470:¦¦¦¦:¦¦¦¦:¦¦¦:¦¦¦¦:¦¦¦¦:¦¦¦¦ IP Address. . . . . . . . . . . . : fe80::224:1dff:fe86:fdf2%4 Default Gateway . . . . . . . . . : 192.168.1.1 fe80::250:bfff:fe91:955f%4 I can connect to the IIS server, remotely and locally, using IPv4 and IPv6: > telnet 127.0.0.1 80 (connects) > telnet 192.168.1.244 80 (connects) > telnet ::1 80 (connects) > telnet fe80::224:1dff:fe86:fdf2 80 (connects) And TCPView shows that the server is listening on port 80: Note: This is useful to establish that Windows Server 2003 does have support for IPv6 services. And I can connect to terminal services, locally and remotely, using IPv4: > telnet 127.0.0.1 3389 (connects) > telnet 192.168.1.244 3389 (connects) But I cannot connect to RDP, locally or remotely, over IPv6: > telnet ::1 3389 (fails to connect) > telnet fe80::224:1dff:fe86:fdf2 3389 (fails to connect) We can see that the system is listening on 3389: so why won't it listen on port 3389 over IPv6? Unfortunately it's not the firewall. Aside from the fact that I'm connecting locally (in which case the firewall doesn't apply), the firewall isn't the problem here either:
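    A quick way to confirm what the Terminal Services listener is actually bound to is to run the following locally on the server (the filter string is just an example) and look for a [::]:3389 entry alongside 0.0.0.0:3389:

        netstat -ano | findstr ":3389"

    If no IPv6 socket shows up, the service simply is not listening on IPv6. As far as I know the Server 2003 RDP listener is IPv4-only and IPv6 listening only arrived in later Windows versions, so this may not be fixable through configuration at all.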

    Read the article

  • Extreme headache from ASSP Extreme Ban

    - by Chase Florell
    I've got a local user on my server that as of today cannot send email from any of their devices. Only Webmail (which doesn't touch any of their devices) works. Here are the various email failures I'm receiving in the logs. Dec-04-12 19:52:47 75966-05166 [SpoofedSender] 111.111.111.111 <[email protected]> to: [email protected] [scoring:20] -- No Spoofing Allowed -- [Test]; Dec-04-12 19:52:47 75966-05166 [Extreme] 111.111.111.111 <[email protected]> to: [email protected] [spam found] -- score for 111.111.111.111 is 1980, surpassing extreme level of 500 -- [Test] -> spam/Test__1.eml; Dec-04-12 19:52:48 75968-05169 111.111.111.111 <[email protected]> to: [email protected] [scoring:10] -- IP in HELO does not match connection: '[192.168.0.10]' -- [Re Demo Feedbacks for End of November Sales]; Dec-04-12 19:52:48 75968-05169 [SpoofedSender] 111.111.111.111 <[email protected]> to: [email protected] [scoring:20] -- No Spoofing Allowed -- [Re Demo Feedbacks for End of November Sales]; Dec-04-12 19:52:48 75968-05169 [Extreme] 111.111.111.111 <[email protected]> to: [email protected] [spam found] -- score for 111.111.111.111 is 2020, surpassing extreme level of 500 -- [Re Demo Feedbacks for End of November Sales] ->spam/Re_Demo_Feedbacks_for_End_of_N__2.eml; Dec-04-12 19:52:57 75977-05179 [SpoofedSender] 111.111.111.111 <[email protected]> to: [email protected] [scoring:20] -- No Spoofing Allowed -- [test]; Dec-04-12 19:52:57 75977-05179 [Extreme] 111.111.111.111 <[email protected]> to: [email protected] [spam found] -- score for 111.111.111.111 is 2040, surpassing extreme level of 500 -- [test] -> spam/test__3.eml; ……………. Dec-04-12 19:55:35 76135-05338 [SpoofedSender] 111.111.111.111 <[email protected]> to: [email protected] [scoring:20] -- No Spoofing Allowed -- [test]; Dec-04-12 19:55:35 76135-05338 [MsgID] 111.111.111.111 <[email protected]> to: [email protected] [scoring] (Message-ID not valid: 'E8472A91545B44FBAE413F6D8760C7C3@bts'); Dec-04-12 19:55:35 76135-05338 [InvalidHELO] 111.111.111.111 <[email protected]> to: [email protected] [spam found] -- Invalid HELO: 'bts' -- [test] -> discarded/test__4.eml; note: 111.111.111.111 is a replacement for the users home IP address Here is the headers of one of the messages X-Assp-Score: 10 (HELO contains IP: '[192.168.0.10]') X-Assp-Score: 10 (IP in HELO does not match connection: '[192.168.0.10]') X-Assp-Score: 20 (No Spoofing Allowed) X-Assp-Score: 10 (bombSubjectRe: 'sale') X-Assp-Score: 20 (blacklisted HELO '[192.168.0.10]') X-Assp-Score: 45 (DNSBLcache: failed, 111.111.111.111 listed in safe.dnsbl.sorbs.net) X-Assp-DNSBLcache: failed, 174.0.35.31 listed in safe.dnsbl.sorbs.net X-Assp-Received-SPF: fail (cache) ip=174.0.35.31 [email protected] helo=[192.168.0.10] X-Assp-Score: 10 (SPF fail) X-Assp-Envelope-From: [email protected] X-Assp-Intended-For: [email protected] X-Assp-Version: 1.7.5.7(1.0.07) on ASSP.nospam X-Assp-ID: ASSP.nospam (77953-07232) X-Assp-Spam: YES X-Assp-Original-Subject: Re: Demo Feedbacks for End of November Sales X-Spam-Status:yes X-Assp-Spam-Reason: MessageScore (125) over limit (50) X-Assp-Message-Totalscore: 125 Received: from [192.168.0.10] ([111.111.111.111] helo=[192.168.0.10]) with IPv4:25 by ASSP.nospam; 4 Dec 2012 20:25:52 -0700 Content-Type: multipart/alternative; boundary=Apple-Mail-40FE7453-4BE7-4AD6-B297-FB81DAA554EC Content-Transfer-Encoding: 7bit Subject: Re: Demo Feedbacks for End of November Sales References: <003c01cdd22e$eafbc6f0$c0f354d0$@com> From: Some User <[email protected]> In-Reply-To: 
<003c01cdd22e$eafbc6f0$c0f354d0$@com> Message-Id: <[email protected]> Date: Tue, 4 Dec 2012 19:32:28 -0700 To: External User <[email protected]> Mime-Version: 1.0 (1.0) X-Mailer: iPhone Mail (10A523) Why is it that a local sender has been banned on our local server, and how can I fix this?
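    The headers show where most of the points come from: the bracketed-IP HELO, the SORBS listing for the home IP, and the SPF failure. A quick way to see what the domain actually publishes for SPF (domain.com stands in for the real domain here):

        dig +short TXT domain.com

    If the user's devices submit mail straight from a residential IP instead of relaying through the server, the SPF and DNSBL penalties will keep recurring no matter what is whitelisted, so it is worth confirming which path their mail clients really use. This is a diagnostic note only, not an ASSP configuration recommendation.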

    Read the article

  • WebDav issue with Mac OS X 10.5.3 onwards

    - by svnr
    We upgraded to Mac OS X 10.5.3 and are now getting a problem when uploading files (PUT) to a WebDAV server (the server is Apache running in a Windows environment). When we drag and drop onto a WebDAV folder using Finder we get a -36 error. Looking at the stack trace on the web server, the problem is an INVALID CRLF error, or sometimes the following exception; both stacks point to an error while copying the stream. Googling suggests this is because the Mac client changed the Transfer-Encoding to 'chunked'. ClientAbortException: java.net.SocketException: Software caused connection abort: socket write error at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:366) at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433) at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348) at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:392) at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:381) at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:88) at org.apache.commons.io.CopyUtils.copy(CopyUtils.java:200) at com.artesia.webdav.action.helper.ResponseWriterHelper.writeFileContentResponse(ResponseWriterHelper.java:206) at com.artesia.webdav.action.GetMethodAction.executeWebDavMethod(GetMethodAction.java:147) at com.artesia.webdav.action.BaseWebDavMethodAction.execute(BaseWebDavMethodAction.java:257) at com.artesia.webdav.action.BaseWebDavAction.execute(BaseWebDavAction.java:92) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507) at javax.servlet.http.HttpServlet.service(HttpServlet.java:697) at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301) at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1069) at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:455) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:279) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507) at javax.servlet.http.HttpServlet.service(HttpServlet.java:697) at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672) at 
org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301) at com.artesia.webdav.web.BaseWebDavServlet.forward(BaseWebDavServlet.java:91) at com.artesia.webdav.web.BaseWebDavServlet.service(BaseWebDavServlet.java:83) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.action.RequestFilter.doFilter(RequestFilter.java:46) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.web.WebDavAuthenticationFilter.doFilter(WebDavAuthenticationFilter.java:463) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.web.MacSessionHackFilter.doFilter(MacSessionHackFilter.java:111) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:175) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:74) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527) at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112) at java.lang.Thread.run(Thread.java:595) Caused by: java.net.SocketException: Software caused connection abort: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92) at java.net.SocketOutputStream.write(SocketOutputStream.java:136) at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:746) at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433) at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348) at org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:769) at 
org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:117) at org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:579) at org.apache.coyote.Response.doWrite(Response.java:559) at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:361)
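    If an Apache httpd front end is proxying these PUTs through to the JBoss/Tomcat instance (an assumption about this setup, not something stated above), mod_proxy_http can be told to spool a chunked request body and forward it with a Content-Length instead, which works around clients that switched to chunked transfer encoding:

        # inside the <Location> or <Directory> block that fronts the WebDAV app
        SetEnv proxy-sendcl 1

    If the requests reach Tomcat directly, the change would have to live in the servlet stack instead; this snippet is only the proxy-side workaround.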

    Read the article

  • Users suddenly missing write permissions to the root drive c within an active directory domain

    - by Kevin
    I'm managing an Active Directory single-domain environment on some Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 machines. For a few weeks I have had a strange issue. Some users (not all!) report that they can no longer save, copy or write files to the root drive C:, neither on their clients (Vista, Win 7) nor via a remote desktop connection to a Windows Server 2008 machine. Since then, even programs that require direct write permissions to the root drive fail when run without administrator permissions. The affected users have local administrator permissions. The question I'm facing now is: what caused this change in system behavior? Why did this happen? I haven't found out yet. What was the last thing I did before it happened? The last action before it happened was the rollout of a GPO containing network drive mappings for the users depending on their security group membership. All network drives are located on a Linux server with Samba enabled. We did not change any UAC settings, and they have always been activated. However, I can't imagine that rolling out this GPO caused the problem. Has anybody faced an issue like that? Just in case: I know there is a specific reason why a user without administrative privileges is prevented from writing to the root drive since Windows Vista and the introduction of UAC. I don't think those users should be able to write to drive C:, but I am trying to figure out why this is happening, because a few weeks ago it was still working. I also know that a user who is a member of the local Administrators group does not execute anything with administrator permissions by default unless he or she runs a program with those permissions. What have I done so far? I checked the permissions of the affected programs and of the affected clients/servers. I didn't find anything special. I checked ALL of our GPOs for any restrictions that could prevent the affected users from writing to the root drive. I did not find any such settings. I checked the UAC settings of the affected users and compared them to other users who can still write to the root drive. Everything is similar. I googled through the internet and tried to find someone who had a similar problem. I did not find one. Does anybody have an idea? Thank you very much. Edit: The GPO that was rolled out does the following (please excuse it if the settings are not named exactly like that, I translated them into English): **Windows Settings -- Network Drive Mappings -- Drive N: -- General:** Action: Replace **Properties:** Letter: N Location: \\path-to-drive\drivename Re-Establish connection: deactivated Label as: Name_of_the_Share Use first available Option: deactivated **Windows Settings -- Network Drive Mappings -- Drive N: -- Public: Options:** On error don't process any further elements for this extension: no Run as the logged in user: no remove element if it is not applied anymore: no Only apply once: no **Securitygroup:** Attribute -- Value bool -- AND not -- 0 name -- domain\groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 **Securitygroup:** Attribute -- Value bool -- OR not -- 0 name -- domain\another-groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 Edit: The error message an affected user gets is the following: Due to an unexpected error you can't copy the file. Error code 0x80070522: The client is missing a required permission. 
The command icacls C: shows the following: NT-AUTORITY\SYSTEM:(OI)(CI)(F) PRE-DEFINED\Administrators:(OI)(CI)(F) computername\username:(OI)(CI)(F) A colleague just told me that the primary domain controller (PDC) was also changed from Windows Server 2008 to Windows Server 2012. That may also be a factor. Any suggestions?
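    For comparison, a default Vista/7 installation carries two additional entries for NT AUTHORITY\Authenticated Users on the root of C: (an (AD) entry plus an (OI)(CI)(IO)(M) entry), and their absence in the output above is consistent with 0x80070522 on non-elevated writes. If they really were stripped, they can be re-added with icacls (run elevated; *S-1-5-11 is the well-known SID for Authenticated Users). This is a sketch to compare against a known-good machine first, not a blind fix:

        icacls C:\ /grant *S-1-5-11:(AD)
        icacls C:\ /grant *S-1-5-11:(OI)(CI)(IO)(M)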

    Read the article

  • django, mod_wsgi, MySQL High CPU - Problems

    - by Red Rover
    I am having a problem with an OSQA site. It is Django/Apache/mod_wsgi configured site. Every hour, the CPU spikes to 164% (Average) for task HTTPD. After 10 minutes, it frees back up. I have reviewed the logs, cron tables, made many config changes, but cannot track this problem down. Can someone please look at the information below and let me know if it is a configuration problem, or if anyone else has experienced this issue. Running TOP shows HTTPD using 165% of CPU VMware performance monitor also displays spikes. This happens every hour for 10 minutes. I have the following information from server status Server Version: Apache/2.2.15 (Unix) DAV/2 mod_wsgi/3.2 Python/2.6.6 Server Built: Feb 7 2012 09:50:15 Current Time: Sunday, 10-Jun-2012 21:44:29 EDT Restart Time: Sunday, 10-Jun-2012 19:44:51 EDT Parent Server Generation: 0 Server uptime: 1 hour 59 minutes 37 seconds Total accesses: 1088 - Total Traffic: 11.5 MB CPU Usage: u80.26 s243.8 cu0 cs0 - 4.52% CPU load .152 requests/sec - 1682 B/second - 10.8 kB/request 4 requests currently being processed, 11 idle workers ....._..........__......W....................................... ...................................C._..._....._L__._L_._....... ...................... Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request 0-0 - 0/0/34 . 0.42 327 17 0.0 0.00 0.67 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 1-0 - 0/0/22 . 0.31 339 32 0.0 0.00 0.26 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 2-0 - 0/0/22 . 0.65 358 10 0.0 0.00 0.31 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 3-0 - 0/0/31 . 1.03 378 31 0.0 0.00 0.60 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 4-0 - 0/0/20 . 0.45 356 9 0.0 0.00 0.31 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 5-0 18852 0/16/34 _ 0.98 27 18120 0.0 0.37 0.62 69.180.250.36 osqa.informs.org GET /questions/289/what-is-the-difference-between-operations-re 6-0 - 0/0/32 . 0.94 309 29 0.0 0.00 0.64 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 7-0 - 0/0/31 . 1.15 382 32 0.0 0.00 0.75 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 8-0 - 0/0/21 . 0.28 403 19 0.0 0.00 0.20 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 9-0 - 0/0/32 . 1.37 288 16 0.0 0.00 0.60 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 10-0 - 0/0/33 . 
1.72 383 16 0.0 0.00 0.40 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 I am running Django 1.3 This is a mod_wsgi configuration and copied is the wsgi.conf file: <IfModule !python_module> <IfModule !wsgi_module> LoadModule wsgi_module modules/mod_wsgi.so <IfModule wsgi_module> <Directory /var/www/osqa> Order allow,deny Allow from all #Deny from all </Directory> WSGISocketPrefix /var/run/wsgi WSGIPythonEggs /var/tmp WSGIDaemonProcess OSQA maximum-requests=10000 WSGIProcessGroup OSQA Alias /admin_media/ /usr/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/contrib/admin/media/ Alias /m/ /var/www/osqa/forum/skins/ Alias /upfiles/ /var/www/osqa/forum/upfiles/ <Directory /var/www/osqa/forum/skins> Order allow,deny Allow from all </Directory> WSGIScriptAlias / /var/www/osqa/osqa.wsgi </IfModule> </IfModule> </IfModule> This is the httpd.conf file Timeout 120 KeepAlive Off MaxKeepAliveRequests 100 MaxKeepAliveRequests 400 KeepAliveTimeout 3 <IfModule prefork.c> Startservers 15 MinSpareServers 10 MaxSpareServers 20 ServerLimit 50 MaxClients 50 MaxRequestsPerChild 0 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> We are using MySQL The server is an ESX4i, configured for the VM to use 4 CPUs and 8 GB Ram. Hyper threading is enabled, 2 physical CPU's, with 4 Logical. the CPU are Intel Xeon 2.8 GHz. Total memory is 12GB
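    Since the spike recurs on the hour, it is worth ruling out scheduled jobs on the box before digging deeper into Apache; a quick check along these lines (the log path assumes a RHEL/CentOS-style layout):

        ls -l /etc/cron.hourly/
        sudo crontab -l; sudo crontab -l -u apache
        sudo grep CRON /var/log/cron | tail -50

    Another common source of periodic CPU churn with mod_wsgi is daemon-process recycling: the maximum-requests=10000 setting restarts the daemon after that many requests, although that tracks request volume rather than the clock, so it would only line up with the hour by coincidence.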

    Read the article

  • Users logging in to 3Com switches authenticated by RADIUS not getting admin privileges and no access available

    - by 3D1L
    Hi, following the setup that I have for my Cisco devices, I got a basic level of functionality authenticating users that log in to 3Com switches against a RADIUS server. The problem is that I cannot get the users to obtain admin privileges. I'm using Microsoft's IAS service. According to the 3Com documentation, when configuring the access policy on IAS the value 010600000003 has to be used to specify the admin access level. That value has to be entered in the Dial-in profile section: 010600000003 - indicates admin privileges 010600000002 - manager 010600000001 - monitor 010600000000 - visitor Here is the configuration on the switch: radius scheme system server-type standard primary authentication XXX.XXX.XXX.XXX accounting optional key authentication XXXXXX key accounting XXXXXX domain system scheme radius-scheme system local-user admin service-type ssh telnet terminal level 3 local-user manager service-type ssh telnet terminal level 2 local-user monitor service-type ssh telnet terminal level 1 The configuration is working with the IAS server because I can see the user login events with the Event Viewer tool. Here is the output of the DISPLAY RADIUS command at the switch: [4500]disp radius SchemeName =system Index=0 Type=standard Primary Auth IP =XXX.XXX.XXX.XXX Port=1645 State=active Primary Acct IP =127.0.0.1 Port=1646 State=active Second Auth IP =0.0.0.0 Port=1812 State=block Second Acct IP =0.0.0.0 Port=1813 State=block Auth Server Encryption Key= XXXXXX Acct Server Encryption Key= XXXXXX Accounting method = optional TimeOutValue(in second)=3 RetryTimes=3 RealtimeACCT(in minute)=12 Permitted send realtime PKT failed counts =5 Retry sending times of noresponse acct-stop-PKT =500 Quiet-interval(min) =5 Username format =without-domain Data flow unit =Byte Packet unit =1 Total 1 RADIUS scheme(s). 1 listed Here is the output of the DISPLAY DOMAIN and DISPLAY CONNECTION commands after users log into the switch: [4500]display domain 0 Domain = system State = Active RADIUS Scheme = system Access-limit = Disable Domain User Template: Idle-cut = Disable Self-service = Disable Messenger Time = Disable Default Domain Name: system Total 1 domain(s).1 listed. [4500]display connection Index=0 ,Username=admin@system IP=0.0.0.0 Index=2 ,Username=user@system IP=xxx.xxx.xxx.xxx On Unit 1:Total 2 connections matched, 2 listed. Total 2 connections matched, 2 listed. [4500] Here is the DISP RADIUS STATISTICS: [4500] %Apr 2 00:23:39:957 2000 4500 SHELL/5/LOGIN:- 1 - ecajigas(xxx.xxx.xxx.xxx) in unit1 login disp radius stat state statistic(total=1048): DEAD=1046 AuthProc=0 AuthSucc=0 AcctStart=0 RLTSend=0 RLTWait=2 AcctStop=0 OnLine=2 Stop=0 StateErr=0 Received and Sent packets statistic: Unit 1........................................ 
Sent PKT total :4 Received PKT total:1 Resend Times Resend total 1 1 2 1 Total 2 RADIUS received packets statistic: Code= 2,Num=1 ,Err=0 Code= 3,Num=0 ,Err=0 Code= 5,Num=0 ,Err=0 Code=11,Num=0 ,Err=0 Running statistic: RADIUS received messages statistic: Normal auth request , Num=1 , Err=0 , Succ=1 EAP auth request , Num=0 , Err=0 , Succ=0 Account request , Num=1 , Err=0 , Succ=1 Account off request , Num=0 , Err=0 , Succ=0 PKT auth timeout , Num=0 , Err=0 , Succ=0 PKT acct_timeout , Num=3 , Err=1 , Succ=2 Realtime Account timer , Num=0 , Err=0 , Succ=0 PKT response , Num=1 , Err=0 , Succ=1 EAP reauth_request , Num=0 , Err=0 , Succ=0 PORTAL access , Num=0 , Err=0 , Succ=0 Update ack , Num=0 , Err=0 , Succ=0 PORTAL access ack , Num=0 , Err=0 , Succ=0 Session ctrl pkt , Num=0 , Err=0 , Succ=0 RADIUS sent messages statistic: Auth accept , Num=0 Auth reject , Num=0 EAP auth replying , Num=0 Account success , Num=0 Account failure , Num=0 Cut req , Num=0 RecError_MSG_sum:0 SndMSG_Fail_sum :0 Timer_Err :0 Alloc_Mem_Err :0 State Mismatch :0 Other_Error :0 No-response-acct-stop packet =0 Discarded No-response-acct-stop packet for buffer overflow =0 The other problem is that when the RADIUS server is not available I can not log in to the switch. The switch have 3 local accounts but none of them works. How can I specify the switch to use the local accounts in case that the RADIUS service is not available?
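    For the second problem (no access when RADIUS is down), Comware-based 3Com switches can usually fall back to the local user database when the RADIUS scheme gets no response. The exact syntax depends on the firmware, so treat this as a pattern to verify against the 4500's command reference rather than a known-good config:

        system-view
        domain system
        scheme radius-scheme system local

    With local fallback in place, the level-3 local admin account should still work while the IAS server is unreachable.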

    Read the article

  • Network Restructure Method for Double-NAT network

    - by Adrian
    Due to a series of poor network design decisions (mostly) made many years ago in order to save a few bucks here and there, I have a network that is decidedly sub-optimally architected. I'm looking for suggestions to improve this less-than-pleasant situation. We're a non-profit with a Linux-based IT department and a limited budget. (Note: none of the Windows equipment we have does anything that talks to the Internet, nor do we have any Windows admins on staff.) Key points: We have a main office and about 12 remote sites that essentially double NAT their subnets with physically-segregated switches. (No VLANing, and limited ability to do so with the current switches.) These locations have a "DMZ" subnet that is NAT'd onto an identically assigned 10.0.0/24 subnet at each site. These subnets cannot talk to the DMZs at any other location because we don't route them anywhere except between the server and the adjacent "firewall". Some of these locations have multiple ISP connections (T1, Cable, and/or DSLs) that we manually route using IP Tools in Linux. These firewalls all run on the (10.0.0/24) network and are mostly "pro-sumer" grade firewalls (Linksys, Netgear, etc.) or ISP-provided DSL modems. Connecting these firewalls (via simple unmanaged switches) are one or more servers that must be publicly accessible. Connected to the main office's 10.0.0/24 subnet are servers for email, tele-commuter VPN, the remote office VPN server, and the primary router to the internal 192.168/24 subnets. These have to be accessed from specific ISP connections based on traffic type and connection source. All our routing is done manually or with OpenVPN route statements. Inter-office traffic goes through the OpenVPN service in the main 'Router' server, which has its own NAT'ing involved. Remote sites only have one server installed at each site and cannot afford multiple servers due to budget constraints. These servers are all LTSP servers serving 5-20 terminals. The 192.168.2/24 and 192.168.3/24 subnets are mostly but NOT entirely on Cisco 2960 switches that can do VLANs. The remainder are DLink DGS-1248 switches that I am not sure I trust well enough to use with VLANs. There is also some remaining internal concern about VLANs since only the senior networking staff person understands how they work. All regular internet traffic goes through the CentOS 5 router server, which in turn NATs the 192.168/24 subnets to the 10.0.0.0/24 subnets according to the manually-configured routing rules that we use to point outbound traffic to the proper internet connection based on '-host' routing statements. I want to simplify this and ready All Of The Things for ESXi virtualization, including these public-facing services. Is there a no- or low-cost solution that would get rid of the double NAT and restore a little sanity to this mess so that my future replacement doesn't hunt me down? Basic diagram for the main office: These are my goals: Public-facing servers with interfaces on that middle 10.0.0/24 network to be moved into the 192.168.2/24 subnet on ESXi servers. Get rid of the double NAT and get our entire network onto one single subnet. My understanding is that this is something we'll need to do under IPv6 anyway, but I think this mess is standing in the way.
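    One way to unwind the per-site double NAT without buying anything is to give every location a unique subnet and route between sites over the existing OpenVPN links instead of NATing. A rough sketch of the addressing and the server-side directives (every number here is a made-up placeholder, not the actual plan):

        # main office LAN 10.1.0.0/24, site 1 10.2.0.0/24, site 2 10.3.0.0/24, ...
        # on the main-office OpenVPN server:
        server 10.255.0.0 255.255.255.0
        client-config-dir /etc/openvpn/ccd
        route 10.2.0.0 255.255.255.0
        route 10.3.0.0 255.255.255.0
        # /etc/openvpn/ccd/site1
        iroute 10.2.0.0 255.255.255.0

    The stated goal is a single flat subnet, so this is only an intermediate step that removes the NAT layers while keeping the sites routed rather than bridged.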

    Read the article

  • Pfsense 2.1 OpenVPN can't reach servers on the LAN

    - by Lucas Kauffman
    I have a small network set up like this: I have a pfSense box for connecting my servers to the WAN; they are using NAT from the LAN to the WAN. I have an OpenVPN server using TAP to allow remote workers to be put on the same LAN network as the servers. They connect through the WAN IP to the OVPN interface. The LAN interface also serves as the gateway for the servers to get an internet connection and has an IP of 10.25.255.254. The OVPN interface and the LAN interface are bridged in BR0. Server A has an IP of 10.25.255.1 and is able to connect to the internet. Client A connects through the VPN and is assigned an IP address on its TAP interface of 10.25.24.1 (I reserved a /24 within the 10.25.0.0/16 for VPN clients). The firewall currently allows any-any connections from OVPN towards LAN and vice versa. Currently when I connect, all routes seem fine on the client side: Destination Gateway Genmask Flags Metric Ref Use Iface 300.300.300.300 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.25.0.0 10.25.255.254 255.255.0.0 UG 0 0 0 tap0 10.25.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tap0 0.0.0.0 300.300.300.300 0.0.0.0 UG 0 0 0 eth0 I can ping the LAN interface: root@server:# ping 10.25.255.254 PING 10.25.255.254 (10.25.255.254) 56(84) bytes of data. 64 bytes from 10.25.255.254: icmp_req=1 ttl=64 time=7.65 ms 64 bytes from 10.25.255.254: icmp_req=2 ttl=64 time=7.49 ms 64 bytes from 10.25.255.254: icmp_req=3 ttl=64 time=7.69 ms 64 bytes from 10.25.255.254: icmp_req=4 ttl=64 time=7.31 ms 64 bytes from 10.25.255.254: icmp_req=5 ttl=64 time=7.52 ms 64 bytes from 10.25.255.254: icmp_req=6 ttl=64 time=7.42 ms But I can't ping past the LAN interface: root@server:# ping 10.25.255.1 PING 10.25.255.1 (10.25.255.1) 56(84) bytes of data. From 10.25.255.254: icmp_seq=1 Redirect Host(New nexthop: 10.25.255.1) From 10.25.255.254: icmp_seq=2 Redirect Host(New nexthop: 10.25.255.1) I ran a tcpdump on my em1 interface (LAN interface which has the IP of 10.25.255.254) tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on em1, link-type EN10MB (Ethernet), capture size 96 bytes 08:21:13.449222 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 10, length 64 08:21:13.458211 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:14.450541 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 11, length 64 08:21:14.458431 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:15.451794 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 12, length 64 08:21:15.458530 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:16.453203 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 13, length 64 So traffic is reaching the LAN interface, but it's not getting past it. There is no answer from the 10.25.255.1 host. I'm not sure what I'm missing.
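    Since the bridge forwards the ARP who-has for 10.25.255.1 but nothing ever comes back, the next thing to check is whether those ARP requests actually reach Server A and whether it answers. Something along these lines on the server itself (the interface name is a guess):

        tcpdump -n -i eth0 'arp or icmp'

    If the requests never show up there, the problem is on the pfSense/bridge side; if they arrive but the server stays silent, look at the server's own firewall or interface configuration.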

    Read the article

  • Can't send commands via SSH to Juniper firewalls

    - by Massimo
    I have some Juniper SSG firewalls which I need to manage, and I'd like to be able to send commands to them from some monitoring scripts. I configured SSH access using public keys, and I'm able to automatically login to the firewalls. When I run SSH interactively, everything works fine: $ssh <firewall IP> FIREWALL-> <command> <command output> FIREWALL-> exit Connection to <firewall IP> closed. $ But when I try to run the command from the command line, it doesn't work: $ssh <firewall IP> <command> $ This, of course, works fine when sending a command to a remote Linux box: $ssh <linux box IP> <command> <command output> $ Why is this happening? What is the difference between running SSH interactively and specifying the command to run on the SSH command line? Update: It also works fine with a Cisco router. Only these Juniper firewalls seem to behave this way. From the debug output from SSH, it looks like the connection gets established correctly, but the Juniper box replies with an EOF when sending the command, while instead the Linux box replies with the actual command output: Linux: debug1: Authentication succeeded (publickey). debug1: channel 0: new [client-session] debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending command: uptime debug2: channel 0: request exec confirm 0 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel 0: rcvd adjust 131072 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 16:44:44 up 25 days, 1:06, 3 users, load average: 0.08, 0.02, 0.01 debug2: channel 0: rcvd eof debug2: channel 0: output open -> drain debug2: channel 0: obuf empty debug2: channel 0: close_write debug2: channel 0: output drain -> closed debug2: channel 0: rcvd close debug2: channel 0: close_read debug2: channel 0: input open -> closed debug2: channel 0: almost dead debug2: channel 0: gc: notify user debug2: channel 0: gc: user detached debug2: channel 0: send close debug2: channel 0: is dead debug2: channel 0: garbage collecting debug1: channel 0: free: client-session, nchannels 1 debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 0.1 seconds debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0 debug1: Exit status 0 Juniper: debug1: Authentication succeeded (publickey). debug1: channel 0: new [client-session] debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending environment. debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug1: Sending command: get system debug2: channel 0: request exec confirm 0 debug2: callback done debug2: channel 0: open confirm rwindow 2048 rmax 1024 debug2: channel 0: rcvd eof debug2: channel 0: output open -> drain debug2: channel 0: obuf empty debug2: channel 0: close_write debug2: channel 0: output drain -> closed debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug2: channel 0: rcvd close debug2: channel 0: close_read debug2: channel 0: input open -> closed debug2: channel 0: almost dead debug2: channel 0: gc: notify user debug2: channel 0: gc: user detached debug2: channel 0: send close debug2: channel 0: is dead debug2: channel 0: garbage collecting debug1: channel 0: free: client-session, nchannels 1 debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 0.2 seconds debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0 debug1: Exit status 1
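    A pattern that sometimes helps with ScreenOS boxes that immediately close the exec channel is to feed the commands over stdin, or to force a pseudo-tty, instead of using the remote-command form. Both are plain OpenSSH usage, but whether the firewall accepts them depends on the ScreenOS version, so this is a workaround sketch rather than an explanation of the EOF:

        printf 'get system\nexit\n' | ssh -T <firewall IP>
        ssh -t <firewall IP> "get system"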

    Read the article

  • Deploying play! 2.0 application on an apache server with a reverse proxy

    - by locrizak
    I'm trying to deploy my Play! 2.0 application on an Ubuntu 11.10 server and I have been running into error after error; I hope someone can help me here. I am trying to deploy my Play! application using a reverse proxy on Apache 2. I have enabled the Apache proxy modules and configured the proxy.conf file in mods-enabled. The vhost for my domain looks like this: <Directory /var/www/stage.domain.com> AllowOverride None Order Deny,Allow Deny from all </Directory> <VirtualHost *:80> DocumentRoot /var/www/stage.domain.com/web ServerName stage.domain.com ServerAdmin [email protected] # ProxyRequests Off # ProxyPreserveHost On <Proxy *> Order allow,deny Allow from all </Proxy> # ProxyVia On # ProxyPass /play/ http://localhost:9000/ # ProxyPassReverse /play/ http://localhost:9000/ ErrorLog /var/log/ispconfig/httpd/stage.domain.com/error.log ErrorDocument 400 /error/400.html ErrorDocument 401 /error/401.html ErrorDocument 403 /error/403.html ErrorDocument 404 /error/404.html ErrorDocument 405 /error/405.html ErrorDocument 500 /error/500.html ErrorDocument 502 /error/502.html ErrorDocument 503 /error/503.html <IfModule mod_ssl.c> </IfModule> <Directory /var/www/stage.domain.com/web> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> <Directory /var/www/clients/client2/web7/web> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> # Clear PHP settings of this website <FilesMatch "\.ph(p3?|tml)$"> SetHandler None </FilesMatch> # mod_php enabled AddType application/x-httpd-php .php .php3 .php4 .php5 php_admin_value sendmail_path "/usr/sbin/sendmail -t -i [email protected]" php_admin_value upload_tmp_dir /var/www/clients/client2/web7/tmp php_admin_value session.save_path /var/www/clients/client2/web7/tmp # PHPIniDir /var/www/conf/web7 php_admin_value open_basedir /var/www/clients/client2/web7/:/var/www/clients/client2/web7/web:/va$ # add support for apache mpm_itk <IfModule mpm_itk_module> AssignUserId web7 client2 </IfModule> <IfModule mod_dav_fs.c> # Do not execute PHP files in webdav directory <Directory /var/www/clients/client2/web7/webdav> <FilesMatch "\.ph(p3?|tml)$"> SetHandler None </FilesMatch> </Directory> # DO NOT REMOVE THE COMMENTS! # IF YOU REMOVE THEM, WEBDAV WILL NOT WORK ANYMORE! # WEBDAV BEGIN # WEBDAV END </IfModule> # <Location /play/> # ProxyPass http://localhost:9000/ # SetEnv force-proxy-request-1.0 1 # SetEnv proxy-nokeepalive 1 # </Location> ProxyRequests Off ProxyPass /play/ http://localhost:9000/ ProxyPassReverse /play/ localhost:9000/ ProxyPass /play http://localhost:9000/ ProxyPassReverse /play http://localhost:9000/ # SetEnv force-proxy-request-1.0 1 # SetEnv proxy-nokeepalive 1 </VirtualHost> This vhost file was generated by ISPConfig and I have not touched anything that was there before, just added onto it. As you can see by the commented-out parts, I have tried a lot of different things based on random tutorials I have found, but all of them have ended up in an Internal Server Error, a 503, or most often a '502 Bad Gateway'. I can start Play and it connects successfully to my database. I can get a page to show up when there is an error (the Play! stack-trace error page comes up), but when everything is fine I get one of the errors above. My application.conf file looks like this: db info ....... 
application.mode=PROD logger.root=ERROR # Logger used by the framework: logger.play=INFO # Logger provided to your application: logger.application=DEBUG http.path="/play/" XForwardedSupport="127.0.0.1" And my hosts file looks like this (I have never changed or added anything to the hosts file): 127.0.0.1 localhost 127.0.1.1 matrix # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters Any insights into what I might be doing wrong, or anything I can try, please let me know! Thanks!! Edit: Again, the reverse proxy itself works (I checked by sending it to google.com). It fails when there is a successful connection to Netty; it's like Netty refuses the connection to the page. Edit 2: output from apachectl -S _default_:8081 127.0.0.1 (/etc/apache2/sites-enabled/000-apps.vhost:10) *:8090 is a NameVirtualHost default server 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10) port 8090 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10) *:80 is a NameVirtualHost default server 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost stage.domain.com (/etc/apache2/sites-enabled/100-stage.domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
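    For reference, the reverse-proxy stanza usually recommended for Play boils down to just these lines, assuming the application really is listening on 127.0.0.1:9000 and mod_proxy plus mod_proxy_http are loaded. One detail worth double-checking in the vhost above is that every ProxyPassReverse URL carries the http:// scheme:

        ProxyPreserveHost On
        ProxyRequests Off
        ProxyPass /play/ http://127.0.0.1:9000/
        ProxyPassReverse /play/ http://127.0.0.1:9000/

    This is a sketch of the common setup, not a verified fix for this particular ISPConfig-generated vhost.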

    Read the article

< Previous Page | 434 435 436 437 438 439 440 441 442 443 444 445  | Next Page >