Search Results

Search found 762 results on 31 pages for 'peer allan'.


  • ntpdate works, but ntpd can't synchronize

    - by dafydd
    This is in RHEL 5.5. First, ntpdate to the remote host works:

        $ ntpdate XXX.YYY.4.21
        24 Oct 16:01:17 ntpdate[5276]: adjust time server XXX.YYY.4.21 offset 0.027291 sec

    Second, here are the server lines in my /etc/ntp.conf. All restrict lines have been commented out for troubleshooting:

        server 127.127.1.0
        server XXX.YYY.4.21

    I execute service ntpd start and check with ntpq:

        $ ntpq
        ntpq> peer
             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
        *LOCAL(0)        .LOCL.           5 l   36   64  377    0.000    0.000   0.001
         timeserver.doma .LOCL.           1 u   39  128  377    0.489   51.261  58.975
        ntpq> opeer
             remote           local      st t when poll reach   delay   offset    disp
        ==============================================================================
        *LOCAL(0)        127.0.0.1        5 l   40   64  377    0.000    0.000   0.001
         timeserver.doma XXX.YYY.22.169   1 u   43  128  377    0.489   51.261  58.975

    XXX.YYY.22.169 is the address of the host I'm working on. A reverse lookup on the IP address in my ntp.conf file validates that the ntpq output is correctly naming the remote server. However, as you can see, it appears to just roll over to my .LOCL. time server. Also, ntptrace just returns the local time server, and ntptrace XXX.YYY.4.21 times out:

        $ ntptrace
        localhost.localdomain: stratum 6, offset 0.000000, synch distance 0.948181
        $ ntptrace XXX.YYY.4.21
        XXX.YYY.4.21: timed out, nothing received
        ***Request timed out

    This looks like my ntp daemon is just querying itself. I am thinking about the possibility that the router-I-don't-control between my test network timeserver and the corporate network timeserver is blocking on source port. (I think ntpdate sends on port 123, which gets it around that filter and is why I can't use it while ntpd is running.) I have email in to the network folks to check that. Finally, telnet XXX.YYY.4.21 123 never times out or completes a connection. The questions:

    What am I missing here? What else can I check to try to figure out where this connection is failing?
    Would strace ntptrace XXX.YYY.4.21 show me the source port ntptrace is sending from? I can deconstruct most strace calls, but I can't figure out the location of that datum.
    If I can't directly examine the gateway router between my test network and the timeserver, how might I build evidence that it's responsible for these disconnections? Alternately, how might I rule it out?
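    One way to build evidence about source-port filtering without access to the router is to send the same query from an ephemeral port and from port 123 and compare. Below is a minimal SNTP sketch in Python (my own illustration, not part of the original exchange; the packet layout follows the standard SNTP client request, and XXX.YYY.4.21 is the placeholder server from the question). If the ephemeral-port query gets an answer while the port-123 query times out, the filter theory gains weight. Binding to port 123 requires root and a stopped ntpd.

        import socket
        import struct
        import time

        def ntp_query(server, source_port=None, timeout=5):
            # Send one SNTP client request and return the server's transmit time.
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                if source_port is not None:
                    # Mimic ntpd's fixed source port (needs root, ntpd stopped).
                    sock.bind(("", source_port))
                sock.settimeout(timeout)
                # LI=0, VN=3, Mode=3 (client) packs into a first byte of 0x1b.
                packet = b"\x1b" + 47 * b"\x00"
                sock.sendto(packet, (server, 123))
                data, _ = sock.recvfrom(512)
                # Transmit timestamp seconds sit at offset 40; NTP epoch is 1900.
                secs = struct.unpack("!I", data[40:44])[0] - 2208988800
                return time.ctime(secs)
            finally:
                sock.close()

        print("ephemeral source port:", ntp_query("XXX.YYY.4.21"))
        print("source port 123:      ", ntp_query("XXX.YYY.4.21", source_port=123))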


  • L2TP connection fails!

    - by a.toraby
    I've installed l2tp-ipsec-vpn, but when I try to connect to the VPN server I get error 500. Here are the logs:

        Jun 17 12:54:37.449 ipsec_setup: Stopping Openswan IPsec...
        Jun 17 12:54:38.858 Stopping xl2tpd: xl2tpd.
        Jun 17 12:54:38.859 xl2tpd[1511]: death_handler: Fatal signal 15 received
        Jun 17 12:54:38.872 ipsec_setup: Starting Openswan IPsec U2.6.37/K3.2.0-23-generic...
        Jun 17 12:54:39.027 ipsec__plutorun: Starting Pluto subsystem...
        Jun 17 12:54:39.033 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d
        Jun 17 12:54:39.037 recvref[30]: Protocol not available
        Jun 17 12:54:39.038 xl2tpd[2442]: This binary does not support kernel L2TP.
        Jun 17 12:54:39.038 xl2tpd[2444]: xl2tpd version xl2tpd-1.3.1 started on atp-ThinkPad-SL410 PID:2444
        Jun 17 12:54:39.038 xl2tpd[2444]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
        Jun 17 12:54:39.038 xl2tpd[2444]: Forked by Scott Balmos and David Stipp, (C) 2001
        Jun 17 12:54:39.038 xl2tpd[2444]: Inherited by Jeff McAdams, (C) 2002
        Jun 17 12:54:39.039 xl2tpd[2444]: Forked again by Xelerance (www.xelerance.com) (C) 2006
        Jun 17 12:54:39.039 xl2tpd[2444]: Listening on IP address 0.0.0.0, port 1701
        Jun 17 12:54:39.040 Starting xl2tpd: xl2tpd.
        Jun 17 12:54:39.062 ipsec__plutorun: 002 added connection description "L2TP"
        Jun 17 12:55:30.753 104 "L2TP" #1: STATE_MAIN_I1: initiate
        Jun 17 12:55:30.754 010 "L2TP" #1: STATE_MAIN_I1: retransmission; will wait 20s for response
        Jun 17 12:55:30.754 010 "L2TP" #1: STATE_MAIN_I1: retransmission; will wait 40s for response
        Jun 17 12:55:30.754 003 "L2TP" #1: ignoring Vendor ID payload [MS NT5 ISAKMPOAKLEY 00000008]
        Jun 17 12:55:30.754 003 "L2TP" #1: received Vendor ID payload [RFC 3947] method set to=109
        Jun 17 12:55:30.754 003 "L2TP" #1: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 109
        Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [FRAGMENTATION]
        Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [MS-Negotiation Discovery Capable]
        Jun 17 12:55:30.755 003 "L2TP" #1: ignoring Vendor ID payload [IKE CGA version 1]
        Jun 17 12:55:30.755 106 "L2TP" #1: STATE_MAIN_I2: sent MI2, expecting MR2
        Jun 17 12:55:30.755 010 "L2TP" #1: STATE_MAIN_I2: retransmission; will wait 20s for response
        Jun 17 12:55:30.755 003 "L2TP" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed
        Jun 17 12:55:30.755 108 "L2TP" #1: STATE_MAIN_I3: sent MI3, expecting MR3
        Jun 17 12:55:30.756 004 "L2TP" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024}
        Jun 17 12:55:30.756 117 "L2TP" #2: STATE_QUICK_I1: initiate
        Jun 17 12:55:30.756 010 "L2TP" #2: STATE_QUICK_I1: retransmission; will wait 20s for response
        Jun 17 12:55:30.756 003 "L2TP" #2: ignoring informational payload, type IPSEC_RESPONDER_LIFETIME msgid=6b03ff69
        Jun 17 12:55:30.756 003 "L2TP" #2: NAT-Traversal: received 2 NAT-OA. ignored because peer is not NATed
        Jun 17 12:55:30.756 003 "L2TP" #2: our client subnet returned doesn't match my proposal - us:192.168.1.3/32 vs them:109.162.174.235/32
        Jun 17 12:55:30.757 003 "L2TP" #2: Allowing questionable proposal anyway [ALLOW_MICROSOFT_BAD_PROPOSAL]
        Jun 17 12:55:30.757 004 "L2TP" #2: STATE_QUICK_I2: sent QI2, IPsec SA established transport mode {ESP=>0x23af21f8 <0xdb4a87b6 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}
        Jun 17 12:55:31.759 xl2tpd[2444]: Connecting to host x.x.x.x, port 1701
        Jun 17 12:55:32.021 xl2tpd[2444]: Connection established to x.x.x.x, 1701. Local: 4720, Remote: 200 (ref=0/0).
        Jun 17 12:55:32.023 xl2tpd[2444]: Calling on tunnel 4720
        Jun 17 12:55:32.454 xl2tpd[2444]: Call established with x.x.x.x, Local: 9667, Remote: 3, Serial: 1 (ref=0/0)
        Jun 17 12:55:32.456 xl2tpd[2444]: start_pppd: I'm running:
        Jun 17 12:55:32.456 xl2tpd[2444]: "/usr/sbin/pppd"
        Jun 17 12:55:32.457 xl2tpd[2444]: "passive"
        Jun 17 12:55:32.458 xl2tpd[2444]: "nodetach"
        Jun 17 12:55:32.458 xl2tpd[2444]: ":"
        Jun 17 12:55:32.459 xl2tpd[2444]: "file"
        Jun 17 12:55:32.459 xl2tpd[2444]: "/etc/ppp/L2TP.options.xl2tpd"
        Jun 17 12:55:32.460 xl2tpd[2444]: "ipparam"
        Jun 17 12:55:32.461 xl2tpd[2444]: "x.x.x.x"
        Jun 17 12:55:32.462 xl2tpd[2444]: "/dev/pts/1"
        Jun 17 12:55:32.583 pppd[2711]: Plugin passprompt.so loaded.
        Jun 17 12:55:32.583 pppd[2711]: pppd 2.4.5 started by root, uid 0
        Jun 17 12:55:32.619 pppd[2711]: Using interface ppp0
        Jun 17 12:55:32.620 pppd[2711]: Connect: ppp0 <--> /dev/pts/1
        Jun 17 12:55:33.693 pppd[2711]: /usr/bin/L2tpIPsecVpn exited with code 0
        Jun 17 12:55:34.454 [ERROR 404] Authentication failed: closing connection to 'L2TP'
        Jun 17 12:55:34.456 pppd[2711]: MS-CHAP authentication failed: E=691 Authentication failure
        Jun 17 12:55:34.457 pppd[2711]: CHAP authentication failed
        Jun 17 12:55:34.461 Stopping xl2tpd: xl2tpd.
        Jun 17 12:55:34.462 xl2tpd[2444]: death_handler: Fatal signal 15 received
        Jun 17 12:55:34.463 pppd[2711]: Modem hangup
        Jun 17 12:55:34.463 pppd[2711]: Connection terminated.
        Jun 17 12:55:34.474 ipsec_setup: Stopping Openswan IPsec...
        Jun 17 12:55:34.482 pppd[2711]: Exit.
        Jun 17 12:55:35.587 ipsec_setup: ERROR: Module xfrm4_mode_transport is in use
        Jun 17 12:55:35.665 ipsec_setup: ERROR: Module esp4 is in use

    I had this problem with Ubuntu 11.10 too, though I can easily connect to the server from Windows. I use Ubuntu 12.04, 64-bit.


  • XNA Notes 005

    - by George Clingerman
    Another week and another crazy amount of activity going on in the XNA community. I'm fairly certain I missed over half of it. In fact, if I am missing things, make sure to email me and I'll try to make sure I catch it next week! ([email protected]). Also, if you've got any advice, or things you like/don't like about the way these XNA Notes are going, let me know. I always appreciate feedback (currently spammers are leaving me the nicest comments, so you guys have work to do!). Without further ado, here's this week's notes!

    Time Critical XNA News

    The XNA Team Blog reminds us that February 7th is the last day to submit XNA 3.1 games to peer review!
    http://blogs.msdn.com/b/xna/archive/2011/01/31/7-days-left-to-submit-xna-gs-3-1-games-on-app-hub.aspx

    XNA MVPs

    Chris Williams kicks off the marketing campaign for our book
    http://geekswithblogs.net/cwilliams/archive/2011/01/28/143680.aspx
    Catalin Zima posts the comparison cheat sheet for why Angry Birds is different than Chickens Can't Fly
    http://www.amusedsloth.com/2011/02/comparison-cheat-sheet-for-chickens-cant-fly-and-angry-birds/
    Jim Perry congratulates the developers selected by Game Developer Magazine for Best Xbox LIVE Indie Games of 2010
    http://machxgames.com/blog/?p=24
    @NemoKrad posts his XNAKUUG talks for all to enjoy
    http://twitter.com/NemoKrad/statuses/33142362502864896
    http://xna-uk.net/blogs/randomchaos/archive/2011/02/03/xblig-uk-2011-january-amp-february-talk.aspx
    George (that's me!) preps for his XNA talk coming up on the 8th
    http://twitter.com/clingermangw/statuses/32669550554124288
    http://www.portlandsilverlight.net/Meetings/Details/15

    XNA Developers

    FireFly posts the last tutorial in his XNA Tower Defense tutorial series
    http://forums.create.msdn.com/forums/p/26442/451460.aspx#451460
    http://xnatd.blogspot.com/2011/01/tutorial-14-polishing-game.html
    @fredericmy posts the main differences when porting a game from Windows Phone 7 to Xbox 360
    http://fairyengine.blogspot.com/2011/01/main-differences-when-porting-game-from.html
    @ElementCy creates a pretty rad video of a MineCraft-type terrain created using XNA
    http://www.youtube.com/watch?v=Waw1f7wnl9I
    Andrew Russell gets the first XNA badge on gamedev.stackexchange
    http://twitter.com/_AndrewRussell/statuses/32322877004972032
    http://gamedev.stackexchange.com/badges?tab=tags
    And his funding for ExEn has passed $7000 - only $3000 to go
    http://twitter.com/_AndrewRussell/statuses/33042412804771840
    Subodh Pushpak blogs about his Windows Phone 7 XNA talk
    http://geekswithblogs.net/subodhnpushpak/archive/2011/02/01/windows-phone-7-silverlight--xna-development-talk.aspx
    Slyprid releases the latest version of Transmute and needs more people to test
    http://twitter.com/slyprid/statuses/32452488418299904
    http://forgottenstarstudios.com/
    SpynDoctorGames celebrates the 1 year anniversary of Your Doodles Are Bugged! Congrats!
    http://twitter.com/SpynDoctorGames/statuses/32511689068908544
    Noogy (creator of Dust the Elysian Tail) prepares his conversion to XNA 4.0
    http://twitter.com/NoogyTweet/statuses/32522008449253376
    @philippedasilva posts about the Indiefreaks Game Framework v0.2.0.0 Input management system
    http://twitter.com/philippedasilva/statuses/32763393957957632
    http://indiefreaks.com/2011/02/02/behind-smart-input-system-feature/
    Mommy's Best Games debates what to do about High Scores with their new update
    http://mommysbest.blogspot.com/2011/02/high-score-shake-up.html
    @BinaryTweedDeej wants to know if there's anything the community needs to make XNA games for the PC. Give him some feedback!
    http://twitter.com/BinaryTweedDeej/status/32895453863354368
    @mikebmcl promises to write us all a book (I can't wait to read it!)
    http://twitter.com/mikebmcl/statuses/33206499102687233
    @werezompire is going to live, LIVE, thanks to all the generosity and support from the community!
    http://twitter.com/werezompire/statuses/32840147644977153

    Xbox LIVE Indie Games (XBLIG)

    Making money in Xbox 360 indie game development. Is it possible?
    http://www.bitmob.com/articles/making-money-in-xbox-360-indie-game-development-is-it-possible
    @AlejandroDaJ posts some thoughts about the bitmob article
    http://twitter.com/AlejandroDaJ/statuses/31068552165330944
    http://www.apathyworks.com/blog/view.php?id=00215
    Kobun gets my respect as an XBLIG champion. I'm not sure who Kobun is, but if you've ever read through the comment sections any time Kotaku writes about XBLIGs, you'll see a lot of confusion and disinformation in there. Kobun has been waging a secret war battling that lack of knowledge, and he does it well. Also, he's running a pretty kick-ass site for Xbox LIVE Indie Game reviews
    http://xboxindies.teamkobun.com/
    @radiangames releases his last Xbox LIVE Indie Game...for now
    http://bit.ly/gMK6lE
    Playing Avaglide with the Kinect controller
    http://www.youtube.com/watch?v=UqAYbHww53o
    http://www.joystiq.com/2011/01/30/kinect-hacks-take-to-the-skies-with-avaglide/
    Luke Schneider of Radiangames interviewed in Edge magazine
    http://www.next-gen.biz/features/radiangames-venture
    Digital Quarters posts thoughts on why XBLIG's online requirement kills certain genres
    http://digitalquarters.blogspot.com/2011/02/thoughts-why-xbligs-online-requirement.html
    Mommy's Best Games shares the news that several XBLIGs were featured in the March 2011 issue of Famitsu 360
    http://forums.create.msdn.com/forums/p/33455/451487.aspx#451487
    NaviFairy continues with his Indie-Game-A-Day
    http://gaygamer.net/2011/02/indie_game_a_day_epic_dungeon.html
    http://gaygamer.net/2011/02/indie_game_a_day_break_limit_r.html
    and more every day...that's kind of the point! Keep your eye on this series!
    VVGTV continues with its awesome reviews/promotions for XBLIGs
    http://vvgtv.com/
    http://vvgtv.com/2011/02/03/iredia-atrams-secret-xblig-review-2/
    http://vvgtv.com/2011/02/02/poopocalypse-coming-soon-to-xblig/
    ...and even more, you get the point.
    Magicka is an Indie Game doing really well on Steam AND it's made using XNA
    http://www.magickagame.com/
    http://twitter.com/Magickagame/statuses/32712762580799488
    GameMarx reviews Antipole
    http://www.gamemarx.com/reviews/73/antipole-is-vvvvvvery-good.aspx
    Armless Octopus reviews Alpha Squad
    http://www.armlessoctopus.com/2011/01/28/xbox-indie-review-alpha-squad/
    An interesting article about Kodu that Jim Perry found
    http://twitter.com/MachXGames/statuses/32848044105924608
    http://www.develop-online.net/news/36915/10-year-old-Jordan-makes-games-The-UK-needs-more-like-her

    XNA Game Development

    Sgt. Conker posts about the Natur beta, a new book, and whether you can make money with XBLIG
    http://www.sgtconker.com/
    http://www.sgtconker.com/2011/01/a-new-book-on-the-block-and-a-new-natur-beta/
    http://www.sgtconker.com/2011/01/making-money-in-xbox-360-indie-game-development-is-it-possible/
    Tips for setting up SVN
    http://bit.ly/fKxgFh
    @bsimser found tons of royalty-free music and sound effects for your XNA games
    http://twitter.com/bsimser/statuses/31426632933711872
    Post on the new features in the next Sunburn Editor
    http://www.synapsegaming.com/blogs/fivesidedbarrel/archive/2011/01/28/new-editor-features-prefabs-components-and-more.aspx
    @jasons_novaleaf posts source code for light pre-pass optimizations for #xna
    http://twitter.com/jasons_novaleaf/statuses/33348855403642880
    http://jcoluna.wordpress.com/2011/02/01/xna-4-0-light-pre-pass-optimization-round-one/
    I've been learning about writing an A.I. for turn-based games, and this article was a great resource.
    http://www.gamasutra.com/view/feature/1535/designing_ai_algorithms_for_.php?print=1


  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview

    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

    - Different record types in one list that use different parsing rules
    - Hierarchical lists, for example customers with nested orders
    - Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
    - Complex headers, such as multiple header lines or parseable information in the header
    - Skipping of lines
    - Conditional or choice fields

    Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting table.

    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, and the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD File

    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                    elementFormDefault="qualified"
                    xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    attributeFormDefault="unqualified"
                    nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
          <xsd:element name="Root">
            <xsd:complexType><xsd:sequence>
              <xsd:element name="Header">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
              <xsd:element name="Customer" maxOccurs="unbounded">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
            </xsd:sequence></xsd:complexType>
          </xsd:element>
        </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and more. nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard, a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches. The nXSD schema in this example describes a file with a header row containing data and three string fields per row delimited by commas, for example:

        Redwood City Downtown Branch, 06/01/2011
        Ebeneezer Scrooge, Sandy Lane, Atherton
        Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

    Table ROOT (1 row):
        ROOTPK         Primary key for the root element
        SNPSFILENAME   Name of the file
        SNPSFILEPATH   Path of the file
        SNPSLOADDATE   Date of load

    Table HEADER (1 row):
        ROOTFK         Foreign key to the ROOT record
        ROWORDER       Order of the row in the native document
        BRANCH         Data
        BRANCHORDER    Order of Branch within the row
        LISTDATE       Data
        LISTDATEORDER  Order of ListDate within the row

    Table ADDRESS (2 rows):
        ROOTFK         Foreign key to the ROOT record
        ROWORDER       Order of the row in the native document
        NAME           Data
        NAMEORDER      Order of Name within the row
        STREET         Data
        STREETORDER    Order of Street within the row
        CITY           Data
        CITYORDER      Order of City within the row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper-nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

    How to Create a Complex File Data Server in ODI

    After creating the nXSD file and a test data file, and storing them on a local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, as well as the external database, can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse-engineering the model will fail. All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, in Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema

    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets, you must make sure that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.

    Debugging and Error Handling

    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the Data Server.
    If either the nXSD has an error or the data does not comply with the schema, an error will be displayed. Sample error message:

        Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

    Complex File FAQ

    Is the size of the native file limited by available memory?
    No; since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Should I always use the complex file driver instead of the file driver in ODI now?
    No, use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that just have one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver.

    Are we generating XML out of flat files before we write it into a database?
    We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed in Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Is the nXSD file interchangeable with SOA Suite?
    Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Can I start the Native Format Builder from ODI Studio?
    No, the Native Format Builder has to be started from a JDeveloper instance with BPEL. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors.

    When is the database data written back to the native file?
    Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading, so that no unnecessary write-backs occur.

    Is the nXSD metadata part of the ODI Master or Work Repository?
    No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system.

    Where can I find sample nXSD files?
    The Application Server Adapter Users Guide contains nXSD samples for various different use cases.
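    To make the relational mapping concrete, here is a small sketch (my own illustration in Python, not part of the article; the real parsing happens inside the ODI Complex File JDBC driver, driven by the nXSD) that flattens the sample file above into rows shaped like the ROOT, HEADER and ADDRESS tables:

        # Flatten the sample file into ROOT/HEADER/ADDRESS-shaped rows.
        # Column names follow the example tables; the file name is hypothetical.
        lines = [
            "Redwood City Downtown Branch, 06/01/2011",
            "Ebeneezer Scrooge, Sandy Lane, Atherton",
            "Tiny Tim, Winton Terrace, Menlo Park",
        ]
        root = {"ROOTPK": 1, "SNPSFILENAME": "customers.dat"}
        branch, listdate = [f.strip() for f in lines[0].split(",")]
        header = {"ROOTFK": root["ROOTPK"], "ROWORDER": 1,
                  "BRANCH": branch, "LISTDATE": listdate}
        address = []
        for roworder, line in enumerate(lines[1:], start=2):
            name, street, city = [f.strip() for f in line.split(",")]
            address.append({"ROOTFK": root["ROOTPK"], "ROWORDER": roworder,
                            "NAME": name, "STREET": street, "CITY": city})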


  • career advice for PhD scientist seeking to program?

    - by C SD
    I'm largely a self-taught programmer. In fact, I first started programming about halfway through biophysics grad school, and even though I think I've done some pretty nice work, I've never worked as part of a 'serious' development team that had more than one or two other developers (and I wouldn't hesitate to call them equally inexperienced in software development as a profession). After finishing my PhD I applied to Google, on a lark, since I had some confidence in my abilities, if not necessarily my experience, and I was hoping to maybe slip in and absorb all the experience and talent I'd be surrounded with and become productive enough, quickly enough, that they wouldn't immediately regret their decision. I was excited to actually get invited to interview up at Mountain View (this was ~mid-2008). Overall, my memory of the interview was very positive, but after close to a three-month wait (is that normal?) they ended up turning me down. I wasn't too surprised or disappointed (aside from the uncomfortably long wait) given my unusual background and admitted lack of experience. I decided to continue as a postdoc, but focus on improving my skills rather than doing research. I've done about three years of that, and my honest assessment is that I've learned a ton more, but I really need more of a peer group to maintain or accelerate my growth. Google invited me to interview again about eight months ago, and the interview process went even better than the first time around (I thought), though they again declined to give me an offer. I have to admit this second rejection was much more discouraging. They had insisted I interview even after I mentioned to them that a move on my part was unlikely, given that I had bought a house, gotten married, etc. since the first interview. I guess I was hoping they'd at least give me an offer that I could parlay into a more conventional, but still interesting, programming position close to home. So here I am, going on my third year out of grad school, a glorified postdoc, and I'm starting to get pretty discouraged. Even though I could technically get 'back on track' for a career in science, I have been focusing the vast majority of this time on gaining programming experience rather than on research and publications. The problem is, whenever I look, most job listings have requirements that seem impossibly grandiose and I hesitate to apply. That, or the job/project seems incredibly dull. Ironically, applying to Google struck me as less intimidating. I suspect that either most people are just a lot less realistic than I am when it comes to assessing how long it will take them to get up to speed, or they don't care; my fear is that I'm just woefully unqualified for any interesting, well-paying work. I.e.: I'm confident I could switch fully back into C++ mode with a couple weeks' work (I mostly use C, Python, and C# daily), but I don't list myself as 'proficient' in C++ on my CV, or apply for jobs that 'require' such knowledge. The few applications for which I did feel I was a legitimately good match have not elicited a response. I suspect the following things are potential problems with my application/CV, and I would like feedback on them:

    I don't have a CS degree. My BS was in biochemistry and molecular biology, my PhD in biophysics. I took an undergrad and a grad-level CS course at UCSD and completely killed them, but I don't know how to translate that to my CV effectively.

    I have a PhD, but it's not in CS... I've been debating whether I should remove it from my CV, and whether it would then be misleading to list at least some of those years as some kind of 'programming' job (in many respects it was).

    I think there are sometimes strong stigmas associated with 'self-taught' programmers. I am certainly one of those. I even recognize that some of those stigmas hold a hint of truth, but I really do want to be an asset to a team. How do I communicate that even though I have been largely self-directing for ~8 years, I can still take marching orders when needed? Do I just say so outright?

    Should I just become a lot less scrupulous about the whole process? Anecdote: I have a friend who applied for positions where he completely fudged his qualifications to get past the first culling. He was much more honest and forthcoming about his actual qualifications when contacted, and he still managed to get invited to a couple of interviews and even got some offers. His balls are larger than mine, though.


  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what's new with XBRL and how it can actually benefit your organization, versus adding extra time and costs to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts' Improved Corporate Reporting Committee, and Baruch College's Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.

    Some of the key points made in the sessions included:

    - The focus of XBRL tools is moving from production to consumption.
    - As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
    - XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
    - The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
    - XBRL is helping standards-setters like the FASB speed their analysis of the impacts of proposed accounting rule changes.
    - Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.

    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists, and one winner of the challenge. All of the solutions are open-sourced tools, and most of them focus on consuming XBRL-based data. The 5 finalists included:

    - Advanced XBRL Processing from Oxide Solutions – XBRL viewer for taxonomies, filings and company data, with peer-comparison capabilities.
    - Arrelle – API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
    - Calcbench – XBRL data-analysis tool that can be embedded in other web applications. This tool can combine XBRL filings with real-time market data.
    - XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons. Users start on the web and populate Excel with XBRL data.
    - XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface.

    The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge.

    XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier. Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, who will populate a database with Sustainability Reporting data and make it available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org.

    So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff as well as individual investors. This applies to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting, which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies become able to leverage one set of XBRL-based financial data for multiple regulatory filings.

    For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links:
    http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html
    http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html
    Feel free to contact me if you have any questions or need more information: [email protected]


  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white, yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the Definition of Done as a technique to continuously improve their agile maturity over time. We're probably all familiar with the concept of "Done Done" that represents what it *actually* means for a feature to be done. Not just when a developer says he's done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of "Done Done", they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts, such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoDs for each of these concepts (Stories/Sprints/Releases). We'll usually start by just brainstorming a bunch of activities that could end up in these various DoDs. Here are some examples:

    - Code Reviews
    - StyleCop
    - FxCop
    - User Manuals Updated
    - Architecture Docs Updated
    - Tested by QA
    - Tested by UAT
    - Installers Created
    - Support Knowledge Base Updated
    - Deployment Instructions (for Ops) Written
    - Automated Unit Tests Run
    - Automated Integration Tests Run

    Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don't have to move them all down immediately, but as a team we figure out what we think we're capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoDs. Over time the team makes an effort to continue moving activities down from Release -> Sprint -> Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, doing UAT testing (typical release-cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. Sure, there are other aspects to maturity outside of this, but it's a great technique, and easy to visualize, to drive agility into the team. Just keep moving those activities (aka "gates") down the board from Release -> Sprint -> Story.

    I'll try to give an example of what a recent client of mine had for their DoDs (this is from memory, so probably not 100% accurate):

    Release
    - Create/update deployment instructions for Ops
    - Instructional videos updated
    - Run manual regression test suite
    - UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn't break anything outside our system)

    Sprint
    - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
    - User guides updated
    - Sprint features video created (in this case we decided to create a video each sprint showing off the progress, a video version of the Sprint Demo)

    User Story
    - Manual test scripts developed and run
    - Tested by BA
    - Deployed in shared QA environment using automated deployment process
    - Peer code review

    Code Check-In
    - Compiled (warning-free)
    - Passes StyleCop
    - Passes FxCop
    - Create installer packages
    - Run automated tests
    - Run automated integration tests

    PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end date rolls around (time-boxed), isn't a DoD on a Sprint meaningless, since it's done on the end date regardless of whether those other activities are complete or not? My answer is that while that statement is true (the Sprint is done regardless when the end date rolls around), if the DoD activities haven't been completed I would consider the Sprint a failure (similar to not completing what was committed/planned; failure may be too strong a word, but you get the idea). In the Retrospective, that becomes an agenda item, to discuss and understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.
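    As a toy illustration (mine, not the author's; the activity names are made up), the roadmap can even be tracked as data: keep each DoD as a list per cycle, and record an activity "moving down" as the team matures.

        # Toy model of DoD lists per cycle.
        dod = {
            "release": ["UAT testing", "user manuals updated", "installers created"],
            "sprint": ["deploy to UAT environment"],
            "story": ["peer code review", "tested by QA"],
        }

        def move_down(activity, src, dst):
            # Record that an activity now happens at a shorter cycle.
            dod[src].remove(activity)
            dod[dst].append(activity)

        # As the team matures, installers get built for every story:
        move_down("installers created", "release", "story")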


  • GI Startup Sequence

    - by Allen Gao
    This post briefly describes the 11gR2 GI (Grid Infrastructure) startup sequence, as background for diagnosing GI startup problems. The startup breaks into three phases: the ohasd phase, the cluster daemon phase, and the crsd phase.

    First, the ohasd phase.

    1. /etc/inittab contains the entry

        h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null

    so a process like the following should be running:

        root 4865 1 0 Dec02 ? 00:01:01 /bin/sh /etc/init.d/init.ohasd run

    If it is not, check:
    + that the init.ohasd script is present
    + the OS run level
    + the rc S* startup script for ohasd (e.g. S96ohasd)
    + that GI autostart is enabled (crsctl enable crs)

    Next, ohasd.bin starts; it needs the OLR to do so. If ohasd.bin fails to start, check that the OLR is present and accessible. The OLR is stored at $GRID_HOME/cdata/${HOSTNAME}.olr.

    2. ohasd.bin then spawns four agents: orarootagent, oraagent, cssdagent and cssdmonitor. If an agent fails to start, verify that the executables under $GRID_HOME/bin have the correct permissions and ownership and are not corrupted.

    Second, the cluster daemon phase.

    1. mdnsd uses multicast for node and resource discovery within the cluster.
    2. gpnpd completes its bootstrap and maintains the gpnp profile ($GRID_HOME/gpnp/profiles/peer/profile.xml); it uses mdnsd to propagate profile changes between nodes, so every node keeps a consistent copy of the profile.
    3. gipcd manages communication over the cluster interconnect; it obtains the interconnect information from gpnpd, so gpnpd must be working correctly for gipcd to start.
    4. ocssd.bin reads the gpnp profile to locate the voting disks and then joins the cluster. If ocssd.bin fails to start, check:
    + that the gpnp profile is present and readable
    + that gpnpd is running normally
    + that the ASM disks holding the voting files are accessible
    + that the private interconnect network is working
    5. After that, the resources ora.ctssd, ora.asm, ora.cluster_interconnect.haip, ora.crf and ora.crsd are started.

    Note: ocssd.bin, gpnpd.bin and gipcd.bin depend on one another; if gpnpd.bin fails to start, ocssd.bin and gipcd.bin will keep waiting for the information from the gpnp profile and hang at startup.

    Finally, the crsd phase, in which crsd starts the cluster resources.

    1. crsd needs to read the OCR. If the OCR is stored in ASM, the ASM instance and the disk group containing the OCR must be available; if the OCR cannot be read, crsd cannot start.
    2. crsd spawns its agents (orarootagent, oraagent_<rdbms_owner>, oraagent_<gi_owner>). If an agent fails to start, check the executables under $GRID_HOME/bin for permission, ownership or corruption problems.
    3. crsd then starts the cluster resources, including:

    ora.net1.network: the network resource. The scanvip, vip and listener resources depend on it; if it goes offline, the vip, scanvip and listener resources are taken offline as well.
    ora.<scan_name>.vip: the SCAN VIP resources, normally three of them.
    ora.<node_name>.vip: the VIP resource of each node.
    ora.<listener_name>.lsnr: the listener resources. From 11gR2 on, listener.ora is generated and maintained automatically and does not normally need manual editing.
    ora.LISTENER_SCAN<n>.lsnr: the SCAN listener resources.
    ora.<diskgroup_name>.dg: the ASM disk group resources; starting the resource mounts the disk group, stopping it dismounts it.
    ora.<database_name>.db: the database resource. In 11gR2 a single resource represents the whole RAC database, instead of one resource per instance; instance attributes are held in properties such as "USR_ORA_INST_NAME@SERVERNAME(<node name>)". The resource also carries a dependency on the ASM disk groups that store the database, which can be adjusted (crsctl modify res ...) if the storage layout changes.
    ora.<service_name>.svc: the service resources. From 11gR2 a single resource represents a service, unlike 10gR2 where a service was made up of srv and cs resources.
    ora.cvu: introduced in 11.2.0.2; runs cluvfy checks against the cluster periodically.
    ora.ons: the ONS (Oracle Notification Service) resource.

    The main GI log locations:

        $GRID_HOME/log/<node_name>/ocssd        <== ocssd.bin logs
        $GRID_HOME/log/<node_name>/gpnpd        <== gpnpd.bin logs
        $GRID_HOME/log/<node_name>/gipcd        <== gipcd.bin logs
        $GRID_HOME/log/<node_name>/agent/crsd   <== crsd agent logs
        $GRID_HOME/log/<node_name>/agent/ohasd  <== ohasd agent logs
        $GRID_HOME/log/<node_name>/mdnsd        <== mdnsd.bin logs
        $GRID_HOME/log/<node_name>/client       <== logs of GI client tools (ocrdump, crsctl, ocrcheck, gpnptool, etc.)
        $GRID_HOME/log/<node_name>/ctssd        <== ctssd.bin logs
        $GRID_HOME/log/<node_name>/crsd         <== crsd.bin logs
        $GRID_HOME/log/<node_name>/cvu          <== cluvfy logs
        $GRID_HOME/bin/diagcollection.sh        <== script that collects all of the logs above

    One final note: the hidden directories /var/tmp/.oracle and /tmp/.oracle hold the socket files that the GI daemons use for interprocess communication. They should not be deleted or modified while GI is running; if they are damaged, GI typically has to be restarted on that node so that the socket files are recreated.


  • undefined reference to `main' collect2: ld returned 1 exit status

    - by sobingt
    I am working on a QT project and I am writing test cases for it. Here is a small test case:

        #include <QApplication>
        #include <QPalette>
        #include <QPixmap>
        #include <QSplashScreen>
        #include <qthread.h>
        #define BOOST_TEST_MAIN
        #include <boost/test/unit_test.hpp>
        #include <boost/make_shared.hpp>
        #include <boost/thread.hpp>
        #include "MainWindow.h"

        namespace {
            const std::string dbname = "Project.db";

            struct SongFixture {
                SongFixture(const std::string &fixturePath) {
                    // Create the Master file
                    Master::creator();
                    // Create/open file
                    std::pair<int, SQLiteDbPtr> result = open(
                        dbname, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0);
                    if (result.first != SQLITE_OK) {
                        throw SQLiteError(result.first,
                                          sqlite3_errmsg(result.second.get()));
                    }
                    SQLiteDbPtr &spDb = result.second;
                    // Execute all the SQL from the fixture file
                    execSQLFromFile(spDb, fixturePath);
                }
            };

            std::auto_ptr<SongFixture> pf;
        }

        class I : public QThread {
        public:
            static void sleep(unsigned long secs) {
                QThread::sleep(secs);
            }
        };

        void free_test_function() {
            BOOST_CHECK(true);
        }

        test_suite* init_unit_test_suite(int argc, char *argv[]) {
            // Create a fixture for the peer:
            // Manage fixture creation manually instead of using
            // BOOST_FIXTURE_TEST_CASE because the fixture depends on runtime args.
            std::ostringstream fixturePathSS;
            fixturePathSS << PROJECT_DIR << "/test/songs_fixture.sql";
            std::string fixturePath = fixturePathSS.str();
            pf.reset(new SongFixture(fixturePath));

            QApplication app(argc, argv);
            MainWindow window("artists");
            window.show();

            framework::master_test_suite().add(BOOST_TEST_CASE(&free_test_function));

            return app.exec();
        }

    Well, I am getting this error:

        /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/crt1.o: In function `_start':
        (.text+0x20): undefined reference to `main'
        collect2: ld returned 1 exit status

    Please help if you have a lead. Thanks! When I tried adding #define BOOST_TEST_MAIN, I got:

        ../test/UI/main.cpp: In function ‘boost::unit_test::test_suite* init_unit_test_suite(int, char**)’:
        ../test/UI/main.cpp:75:31: error: redefinition of ‘boost::unit_test::test_suite* init_unit_test_suite(int, char**)’
        /usr/local/include/boost/test/unit_test_suite.hpp:223:1: error: ‘boost::unit_test::test_suite* init_unit_test_suite(int, char**)’ previously defined here

    The program works on Windows, but on Linux the above-mentioned problem occurs.


  • Objective-c - How to serialize audio file into small packets that can be played?

    - by vfn
    Hi there,
    So, I would like to get a sound file, convert it into packets, and send it to another computer. I would like the other computer to be able to play the packets as they arrive. I am using AVAudioPlayer to try to play these packets, but I couldn't find a proper way to serialize the data on peer1 such that peer2 can play it. The scenario is: peer1 has an audio file, splits the audio file into many small packets, puts them in an NSData, and sends them to peer2. Peer2 receives the packets and plays them one by one, as they arrive. Does anyone know how to do this, or even if it is possible?

    EDIT: Here is some code to illustrate what I would like to achieve.

        // This code is part of peer1, the one who sends the data
        - (void)sendData
        {
            int packetId = 0;
            NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"];
            NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath];
            NSMutableArray *arraySoundData = [[NSMutableArray alloc] init];
            // Splitting the audio into 2 pieces
            // This is only an illustration
            // The idea is to split the data into multiple pieces
            // depending on the size of the file to be sent
            NSRange soundRange;
            soundRange.length = [soundData length]/2;
            soundRange.location = 0;
            [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
            soundRange.length = [soundData length]/2;
            soundRange.location = [soundData length]/2;
            [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
            for (int i=0; i

        // This is the code on peer2 that would receive and play the piece of audio in each packet
        - (void)receiveData:(NSData *)data
        {
            NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
            if ([unarchiver containsValueForKey:PACKET_ID])
                NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]);
            if ([unarchiver containsValueForKey:PACKET_SOUND_DATA]) {
                NSLog(@"DECODED sound");
                NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA];
                if (sound == nil) {
                    NSLog(@"sound is nil!");
                } else {
                    NSLog(@"sound is not nil!");
                    AVAudioPlayer *audioPlayer = [AVAudioPlayer alloc];
                    if ([audioPlayer initWithData:sound error:nil]) {
                        [audioPlayer prepareToPlay];
                        [audioPlayer play];
                    } else {
                        [audioPlayer release];
                        NSLog(@"Player couldn't load data");
                    }
                }
            }
            [unarchiver release];
        }

    So, here is what I am trying to achieve... what I really need to know is how to create the packets so that peer2 can play the audio. It would be a kind of streaming. Yes, for now I am not worried about the order in which the packets are received or played... I only need to get the sound sliced, and to be able to play each piece, each slice, without needing to wait for the whole file to be received by peer2. Thanks!
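    As an aside on the serialization itself (my illustration, not from the question): splitting the raw bytes of a .wav file leaves a RIFF/WAVE header only on the first slice; later slices are headerless sample data, which header-expecting, file-oriented players like AVAudioPlayer will generally refuse. A language-neutral sketch of the chunk-plus-id packet idea, in Python:

        # Split a file into numbered packets; the chunk size is arbitrary here.
        # Note: for WAV data, chunks after the first carry no header, so a
        # file-oriented player cannot play them individually as-is.
        def make_packets(path, chunk_size=4096):
            packets = []
            with open(path, "rb") as f:
                packet_id = 0
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    packets.append({"id": packet_id, "data": chunk})
                    packet_id += 1
            return packets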


  • Is One Tool or a Suite of Tools Better for Scrum?

    - by Rob Wells
    G'day,

    Edit: We've been using Scrum very successfully for several years on several projects of varying sizes. In fact, our team developed the successful iPlayer project for the BBC using a classical Scrum approach.

    After using various combinations of tools, some high-tech, some low-tech, across these projects, we now wish to try adopting a suitable tool suite. Our manager is, to some extent, attempting to force the adoption of a single suite of tools for Scrum. I've looked at the SO question "Best Scrum tools" and most people seem to recommend either:

    - a suite of low-tech solutions, e.g. whiteboards, post-its, index cards, etc., or
    - a monolithic tool that tries to satisfy as much as possible of the process, e.g. Agilo, Mingle, ScrumWorks, Target Process, etc.

    Our team is currently evaluating several different Scrum tools. However, we are looking at selecting a single, monolithic tool, e.g. Agilo. All of the "one-stop" solutions have their strengths and weaknesses, with the serious enterprise-type solutions being the best sort of fit, but all have some shortcomings. After reading the paper "Peer Code Review: An Agile Process" over at SmartBear, I started wondering if we were trying to force adoption of a tool on a "best fit" basis. I think you can take a couple of reference artefacts of the Scrum development process, say user stories, epics and themes, and the code base, which must use a well-known SCM, e.g. SVN, Hg, etc. If we take those as the common reference points for the tools employed, then we would be able to use a group of tools to handle the different aspects of the Scrum process, rather than trying to force the fit of a single tool, which is a bit like forcing a square peg into a round hole. In this way, provided you've agreed on your common reference points, you can use several tools, each performing its role better than could be done by a single component in a monolithic tool suite. Is this a more sensible approach? Are the two reference points I mentioned above suitable, or is there a better choice of points where the tools would meet?

    cheers,


  • Armchair Linguists: 'code' vs. 'codes'--or why I write 'code' and my manager asks for 'codes'

    - by Ukko
    I wanted to tap into the collective wisdom here to see if I can get some insight into one of my pet peeves: people who treat "code" as a countable noun. Let me also preface this by saying that I am not talking about anyone who speaks English as a second language; this is a native phenomenon. For those of us who slept through grammar class, there are two classes of nouns, which basically refer to things that are countable and non-countable (sometimes referred to as count and noncount). For instance, 'sand' is a non-count noun and 'apple' is count. You can talk about "two apples", but "two sands" does not parse. The bright students would then point out a word like "beer", where it looks like this is violated. Beer as a substance is certainly a non-count noun, but I can ask for "two beers" without offending the grammar police. The reason is that there are actually two words tied up in that one utterance: definition #1 is a yummy golden substance, and definition #2 is a colloquial term for a container of said substance. #1 is non-count and #2 is countable. This gets to my problem with "codes" as a countable noun. In my mind, the code that we programmers write is non-count: "I wrote some code today." When used in the plural, like "Have you got the codes?", I can only assume that you are asking if I have the cryptographically significant numbers for launching a missile or the like. Every time my peer in marketing asks about when we will have the new codes ready, I have a vision of rooms of code breakers going over the latest Enigma-coded message. I corrected the usage in all the documents I was asked to review, but then I noticed that our customer was also using the word "codes" when they meant "code". At this point I have realized that there is a significant sub-population that uses "codes", and they seem to be impervious to what I see as the dominant "correct" usage. This is the part I want some help on: has anyone else noticed this phenomenon? Do you know what group it is associated with? Old Fortran programmers, perhaps? Is it a regionalism? I have become quick to change my terms when I notice a customer's usage, but it would be nice to know, if I am sending a proposal somewhere, what style they expect. I would hate to get canned with a review of "Ha, these guy's must be morons, they don't even know 'code' is plural!"

    Read the article

  • Google App Engine with local Django 1.1 gets Intermittent Failures

    - by Jon Watte
    I'm using the Windows Launcher development environment for Google App Engine. I have downloaded the Django 1.1.2 source and un-tarred the "django" subdirectory to live within my application directory (a peer of app.yaml). At the top of each .py source file, I do this:
    import settings
    import os
    os.environ["DJANGO_SETTINGS_MODULE"] = 'settings'
    In my file settings.py (which lives at the root of the app directory as well), I do this:
    DEBUG = True
    TEMPLATE_DIRS = ('html')
    INSTALLED_APPS = ('filters')
    import os
    os.environ["DJANGO_SETTINGS_MODULE"] = 'settings'
    from google.appengine.dist import use_library
    use_library('django', '1.1')
    from django.template import loader
    Yes, this looks a bit like overkill, doesn't it? I only use django.template; I don't explicitly use any other part of Django. However, intermittently I get one of two errors: 1) Django complains that DJANGO_SETTINGS_MODULE is not defined. 2) Django complains that common.html (a template I'm extending in other templates) doesn't exist. 95% of the time these errors are not encountered, and they randomly just start happening. Once in that state, the local server seems "wedged" and re-booting it generally fixes it. What's causing this to happen, and what can I do about it? How can I even debug it? Here is the traceback from the error:
    Traceback (most recent call last):
      File "C:\code\kwbudget\edit_budget.py", line 34, in get
        self.response.out.write(t.render(template.Context(values)))
      File "C:\code\kwbudget\django\template\__init__.py", line 165, in render
        return self.nodelist.render(context)
      File "C:\code\kwbudget\django\template\__init__.py", line 784, in render
        bits.append(self.render_node(node, context))
      File "C:\code\kwbudget\django\template\__init__.py", line 797, in render_node
        return node.render(context)
      File "C:\code\kwbudget\django\template\loader_tags.py", line 71, in render
        compiled_parent = self.get_parent(context)
      File "C:\code\kwbudget\django\template\loader_tags.py", line 66, in get_parent
        raise TemplateSyntaxError, "Template %r cannot be extended, because it doesn't exist" % parent
    TemplateSyntaxError: Template u'common.html' cannot be extended, because it doesn't exist
    And edit_budget.py starts with exactly the lines that I included up top. All templates live in a directory named "html" in my root directory, and "html/common.html" exists. I know the template engine finds them, because I start out with "html/edit_budget.html", which extends common.html. It looks as if the settings module somehow isn't applied (because that's what adds html to the search path for templates).
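    One thing I am now suspicious of (an assumption on my part, not something the traceback proves): TEMPLATE_DIRS = ('html') is both a relative path, so lookups depend on whatever the dev server's working directory happens to be, and, without a trailing comma, actually the bare string 'html' rather than a one-element tuple. A minimal sketch of the settings.py change I am considering:
    import os

    # Anchor the template directory to this file rather than the process's
    # current working directory, and note the trailing comma: ('html') is
    # just the string 'html', not a tuple.
    TEMPLATE_DIRS = (
        os.path.join(os.path.dirname(os.path.abspath(__file__)), 'html'),
    )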

    Read the article

  • Paypal IPN: how get the POSTs from this class?

    - by sineverba
    I'm using this class:
    <?php
    class paypalIPN
    {
        //sandbox:
        private $paypal_url = 'https://www.sandbox.paypal.com/cgi-bin/webscr';
        //live site:
        //private $paypal_url = 'https://www.paypal.com/cgi-bin/webscr';
        private $data = null;

        public function __construct()
        {
            $this->data = new stdClass;
        }

        public function isa_dispute()
        {
            //is it some sort of dispute.
            return $this->data->txn_type == "new_case";
        }

        public function validate()
        {
            // parse the paypal URL
            $response = "";
            $url_parsed = parse_url($this->paypal_url);
            // generate the post string from the _POST vars as well as load the
            // _POST vars into an array so we can play with them from the calling
            // script.
            $post_string = '';
            foreach ($_POST as $field => $value) {
                $this->data->$field = $value;
                $post_string .= $field . '=' . urlencode(stripslashes($value)) . '&';
            }
            $post_string .= "cmd=_notify-validate"; // append ipn command
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $this->paypal_url);
            //curl_setopt($ch, CURLOPT_VERBOSE, 1);
            //keep the peer and server verification on, recommended
            //(can switch off if getting errors, turn to false)
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
            curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_POST, 1);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $post_string);
            $response = curl_exec($ch);
            if (curl_errno($ch)) {
                die("Curl Error: " . curl_errno($ch) . ": " . curl_error($ch));
            }
            curl_close($ch);
            return $response;
            if (preg_match("/VERIFIED/", $response)) {
                // Valid IPN transaction.
                return $this->data;
            } else {
                return false;
            }
        }
    }
    And I call it in this mode:
    public function get_ipn()
    {
        $ipn = new paypalIPN();
        $result = $ipn->validate();
        $logger = new Log('/error.log');
        $logger->write(print_r($result));
    }
    But I obtain only "VERIFIED" or "1" (with or without the print_r function). I also tried to return the raw curl response directly with return $response; or return $this->response; or return $this->parse_string; but every time I receive only "1" or "VERIFIED". Thank you very much
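    One detail worth flagging in the class as posted (just my reading of the control flow, not a verified fix): validate() hits the unconditional return $response; before the preg_match("/VERIFIED/", ...) branch, so everything after it is dead code and the caller can only ever see the raw cURL body. A sketch of the tail of validate() with the check performed before returning, keeping the names from the class:
    curl_close($ch);
    // decide first, then return: "VERIFIED" means the IPN is genuine
    if (preg_match("/VERIFIED/", $response)) {
        return $this->data;   // hand back the POSTed IPN fields
    }
    return false;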

    Read the article

  • Binding on a port with netpipes/netcat

    - by mindas
    I am trying to write a simple bash script that listens on a port and responds with a trivial HTTP response. My specific issue is that I am not sure whether the port is available, and in case of bind failure I fall back to the next port until the bind succeeds. So far the easiest way for me to achieve this was something like:
    for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ ))
    do
        if [ $DEBUG -eq 1 ] ; then
            echo trying to bind on $i
        fi
        /usr/bin/faucet $i --out --daemon echo test 2>/dev/null
        if [ $? -eq 0 ] ; then #success?
            port=$i
            if [ $DEBUG -eq 1 ] ; then
                echo "bound on port $port"
            fi
            break
        fi
    done
    Here I am using faucet from the netpipes Ubuntu package. The problem with this is that if I simply print "test" to the output, curl complains about a non-standard HTTP response (error code 18). That's fair enough, as I don't print an HTTP-compatible response. If I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains:
    user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest"
    ...
    user@client:$ curl ip.of.the.server:10020
    curl: (56) Failure when receiving data from the peer
    I think the problem lies in how faucet is printing the response and handling the connection. For example, if I do the server side in netcat, curl works fine:
    user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020
    ...
    user@client:$ curl ip.of.the.server:10020
    test
    user@client:$
    I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn an independent server process to be able to run the client from the same base shell. faucet has a very handy --daemon parameter, as it forks to the background and I can use $? (the exit status code) to check whether the bind succeeded. If I were to use netcat for a similar purpose, I would have to fork it using & and $? would not work. Does anybody know why faucet isn't responding correctly in this particular case and/or can suggest a solution to this problem? I am married neither to faucet nor netcat, but would like the solution to be implemented using bash or its utilities (as opposed to writing something in yet another scripting language, such as Perl or Python).
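    That said, here is a rough sketch of the netcat route that can still detect a failed bind: fork nc into the background, give it a moment, and test whether the child is still alive with kill -0. This assumes a traditional netcat that accepts -l -p, and it is an illustration rather than a drop-in replacement (a client that connects and disconnects within the sleep window would look like a failed bind):
    #!/bin/bash
    PORT_BASE=10020
    PORT_RANGE=10
    port=""
    for (( i=$PORT_BASE; i < $((PORT_BASE + PORT_RANGE)); i++ ))
    do
        # serve one canned HTTP response in the background
        printf 'HTTP/1.0 200 OK\r\n\r\ntest\r\n' | nc -l -p "$i" &
        pid=$!
        sleep 1                       # give nc a moment to bind or die
        if kill -0 "$pid" 2>/dev/null; then
            port=$i                   # child survived, so the bind succeeded
            break
        fi
    done
    echo "bound on port $port (pid $pid)"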

    Read the article

  • Which MS technologies would be suited for a data intensive application?

    - by steve.tse
    I'm a junior VB.net developer with little application design knowledge. I've been reading a lot of material online regarding different design patterns, frameworks, and methodologies. It's become a bit confusing for me. Right now I'm trying to decide on what language would be best suited to convert an existing VB6 application (with a SQL Server backend). I need to update the UI and add more user functionality and reporting capabilities. Initially I was thinking of using WPF and attempting the MVVM model for this big project. Reports would be generated from SSRS. A peer suggested using ASP.net and I don't have enough experience to determine what would be better. The senior programmers here are stuck on using VB6 and don't have any input on what to use. They are encouraging me to use the latest technologies. This application would be for ~20 users in a central location. Ideally I would stick to a Microsoft .net language. The current interface is similar to a datagrid table where the user would click in to see the detail of each record. They would need to have multiple records open at any given time. I look forward to all the advice I can get.
    EDIT 2010/04/22 2:47 PM EST
    What is your audience? Internal clients within an intranet.
    How complex are the interactions you expect to implement? Not very: displaying data from SQL Server in the UI and allowing user updates to said data. Typically just one user modifying a record.
    Do you require near real-time data updates? No.
    How often do you expect to update the application after the first release? Twice a year.
    Do you expect a well-defined set of client platforms? Yes, a Windows XP environment, potentially upgrading to Win7. Currently on IE6, moving to IE7 or 8 within a couple of months.
    Do users need access from anywhere? No, just from their PCs.

    Read the article

  • Unity JS - simple if statements not behaving as expected?

    - by IHazABone
    I have a simple script (please no remarks on the fact that I'm not using a switch statement or better code; this is the earliest version, written this way by a peer, and I am improving it) that takes an object and moves it back and forth. For some reason, the variable time gets stuck at 249. It is probably an obvious bug with this inefficient logic, but I cannot seem to find it.
    var speed = 1;
    private var time = 0;

    function Start() {
    }

    function Update() {
        if (condition == true) moveStuff();
    }

    function moveStuff() {
        var timeSwitch = false;
        if (time == 0) timeSwitch = false;
        if (time == timeSet) timeSwitch = true;
        if (direction == 1) {
            if (timeSwitch == false) {
                transform.Translate(Vector3.up * (Time.deltaTime * speed));
                time += 1;
                Debug.Log(time);
            } else if (timeSwitch == true) {
                transform.Translate(Vector3.up * ((Time.deltaTime * speed) * -1));
                time -= 1;
                Debug.Log(time);
            }
        } else if (direction == 2) {
            if (timeSwitch == false) {
                transform.Translate(Vector3.down * (Time.deltaTime * speed));
                time += 1;
                Debug.Log("Moved down. ");
            } else if (timeSwitch == true) {
                transform.Translate(Vector3.down * ((Time.deltaTime * speed) * -1));
                time -= 1;
            }
        } else if (direction == 3) {
            if (timeSwitch == false) {
                transform.Translate(Vector3.forward * (Time.deltaTime * speed));
                time += 1;
                Debug.Log("Moved forward. ");
            } else if (timeSwitch == true) {
                transform.Translate(Vector3.forward * ((Time.deltaTime * speed) * -1));
                time -= 1;
            }
        } else if (direction == 4) {
            if (timeSwitch == false) {
                transform.Translate(Vector3.back * (Time.deltaTime * speed));
                time += 1;
                Debug.Log("Moved back. ");
            } else if (timeSwitch == true) {
                transform.Translate(Vector3.back * ((Time.deltaTime * speed) * -1));
                time -= 1;
            }
        } else if (direction == 5) {
            if (timeSwitch == false) {
                transform.Translate(Vector3.right * (Time.deltaTime * speed));
                time += 1;
                Debug.Log("Moved right. ");
            } else if (timeSwitch == true) {
                transform.Translate(Vector3.right * ((Time.deltaTime * speed) * -1));
                time -= 1;
            }
        } else if (direction == 6) {
            if (timeSwitch == false) {
                transform.Translate(Vector3.left * (Time.deltaTime * speed));
                time += 1;
                Debug.Log("Moved left. ");
            } else if (timeSwitch == true) {
                transform.Translate(Vector3.left * ((Time.deltaTime * speed) * -1));
                time -= 1;
            }
        }
    }
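    For what it's worth, my current suspicion (untested, just from reading the logic): timeSwitch is a local variable, so it is recreated as false on every call; the moment time reaches timeSet the function decrements once, but the next frame resets timeSwitch and increments again, so time oscillates just below timeSet. With timeSet at 250 that would look exactly like being stuck at 249. The sketch below keeps the original names and just promotes the flag to a field:
    var speed = 1;
    private var time = 0;
    private var timeSwitch = false;   // persists between frames now

    function moveStuff() {
        if (time == 0) timeSwitch = false;        // back at the start: move out
        if (time == timeSet) timeSwitch = true;   // reached the end: move back
        // ...the per-direction Translate blocks stay exactly as they are...
    }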

    Read the article

  • Cisco Prime NCS not starting

    - by Kwazii
    I have received the Cisco Prime OVA file, which we placed onto an Oracle virtual environment. We turn the VM on and the CLI boots. When we try to start the NCS service, we get errors.
    HOSTNAME/USER# ncs start
    Starting Network Control System...
    Exception in thread "main" java.lang.NullPointerException
        at com.cisco.wnbu.udi.impl.UDIManager.isPhysicalAppliance(UDIManager.java:184)
        at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:335)
        at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
        at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
    Logs:
    HOSTNAME/USER# show logging
    07/18/13 10:25:38.878 INFO [system] [main] Setting management interface address to 192.168.0.10
    07/18/13 10:25:38.884 INFO [system] [main] Setting peer server interface address to 192.168.0.10
    07/18/13 10:25:38.884 INFO [system] [main] Setting client interface address to 192.168.0.10
    07/18/13 10:25:38.884 INFO [system] [main] Setting local host name to HOSTNAME
    07/18/13 10:25:40.341 ERROR [system] [main] THROW java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:419)
        at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:536)
        at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:228)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521)
        at java.sql.DriverManager.getConnection(Unknown Source)
        at java.sql.DriverManager.getConnection(Unknown Source)
        at com.cisco.server.persistence.util.OracleSchemaUtil.openConnection(OracleSchemaUtil.java:277)
        at com.cisco.server.persistence.util.OracleSchemaUtil.dbServerUp(OracleSchemaUtil.java:836)
        at com.cisco.packaging.DBAdmin.dbServerUp(DBAdmin.java:1429)
        at com.cisco.packaging.WCSAdmin.status(WCSAdmin.java:833)
        at com.cisco.packaging.WCSAdmin.status(WCSAdmin.java:757)
        at com.cisco.packaging.WCSAdmin.wcsServerUp(WCSAdmin.java:637)
        at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:294)
        at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
        at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
    Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
        at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:375)
        at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:422)
        at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:678)
        at oracle.net.ns.NSProtocol.connect(NSProtocol.java:238)
        at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1054)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:308)
        ... 15 more
    Caused by: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(Unknown Source)
        at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
        at java.net.PlainSocketImpl.connect(Unknown Source)
        at java.net.SocksSocketImpl.connect(Unknown Source)
        at java.net.Socket.connect(Unknown Source)
        at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:209)
        at oracle.net.nt.ConnOption.connect(ConnOption.java:123)
        at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:353)
        ... 20 more
    07/18/13 10:25:40.347 INFO [admin] [main]
    07/18/13 10:25:40.347 INFO [admin] [main] Starting Network Control System...
    07/18/13 10:25:40.347 INFO [admin] [main]
    07/18/13 10:25:40.394 ERROR [admin] [main] Problem using CARS API: com.cisco.cars.fnd.CARSException: CARS_FAILURE : -999 : Failed to get UDI configuration. : Failure occurred during request
        at com.cisco.cars.fnd.CARSException.analyzeReturnCode(CARSException.java:118)
        at com.cisco.cars.serviceEngine.impl.EngineAdminServiceImpl.getUDI(EngineAdminServiceImpl.java:66)
        at com.cisco.wnbu.udi.impl.UDIManager.generateUDI(UDIManager.java:69)
        at com.cisco.wnbu.udi.impl.UDIManager.setPersistenceDirectory(UDIManager.java:139)
        at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:332)
        at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
        at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
    07/18/13 10:25:40.396 ERROR [admin] [main] Problem using CARS API: com.cisco.cars.fnd.CARSException: CARS_FAILURE : -999 : Failed to get UDI configuration. : Failure occurred during request
        at com.cisco.cars.fnd.CARSException.analyzeReturnCode(CARSException.java:118)
        at com.cisco.cars.serviceEngine.impl.EngineAdminServiceImpl.getUDI(EngineAdminServiceImpl.java:66)
        at com.cisco.wnbu.udi.impl.UDIManager.generateUDI(UDIManager.java:69)
        at com.cisco.wnbu.udi.impl.UDIManager.setVirtualPID(UDIManager.java:169)
        at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:333)
        at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
        at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
    07/18/13 10:25:40.397 ERROR [admin] [main] Problem using CARS API: com.cisco.cars.fnd.CARSException: CARS_FAILURE : -999 : Failed to get UDI configuration. : Failure occurred during request
        at com.cisco.cars.fnd.CARSException.analyzeReturnCode(CARSException.java:118)
        at com.cisco.cars.serviceEngine.impl.EngineAdminServiceImpl.getUDI(EngineAdminServiceImpl.java:66)
        at com.cisco.wnbu.udi.impl.UDIManager.generateUDI(UDIManager.java:69)
        at com.cisco.wnbu.udi.impl.UDIManager.setPhysicalPID(UDIManager.java:154)
        at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:334)
        at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
        at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
    07/18/13 10:25:40.397 ERROR [admin] [main] Problem using CARS API: com.cisco.cars.fnd.CARSException: CARS_FAILURE : -999 : Failed to get UDI configuration. : Failure occurred during request
        at com.cisco.cars.fnd.CARSException.analyzeReturnCode(CARSException.java:118)
        at com.cisco.cars.serviceEngine.impl.EngineAdminServiceImpl.getUDI(EngineAdminServiceImpl.java:66)
        at com.cisco.wnbu.udi.impl.UDIManager.generateUDI(UDIManager.java:69)
        at com.cisco.wnbu.udi.impl.UDIManager.getUDI(UDIManager.java:112)
        at com.cisco.wnbu.udi.impl.UDIManager.isPhysicalAppliance(UDIManager.java:184)
        at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:335)
        at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
        at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
    Any help is appreciated. Thanks

    Read the article

  • Cisco VPN Client dropping connection

    - by IT Team
    Using Windows XP and Cisco VPN client version 5.0.4.xxx to connect to a remote customer site. We are able to establish the connection and start an RDP session, but within 1-2 minutes the connection drops and the VPN connection disconnects. The PC making the connection is on a DMZ which is NATed to a public IP address. If we move the PC directly onto the internet, without being on the DMZ, the connection works and we don't encounter any disconnects. We use a PIX 515E running 7.2.4 and don't have any problems with similar setups connecting to other customer sites from the DMZ. The VPN setup on the client side is pretty basic, using IPSec over TCP port 10000. Not sure what device they are using on the peer, but my guess would be an ASA. Any idea as to what the problem would be? Below are the logs from the VPN client when the problem occurs. The real IP address has been changed to: RemotePeerIP.
    4 14:39:30.593 09/23/09 Sev=Info/4 CM/0x63100024 Attempt connection with server "RemotePeerIP"
    5 14:39:30.593 09/23/09 Sev=Info/6 CM/0x6310002F Allocated local TCP port 1942 for TCP connection.
    6 14:39:30.796 09/23/09 Sev=Info/4 IPSEC/0x63700008 IPSec driver successfully started
    7 14:39:30.796 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys
    8 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x6370002C Sent 256 packets, 0 were fragmented.
    9 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x63700020 TCP SYN sent to RemotePeerIP, src port 1942, dst port 10000
    10 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x6370001C TCP SYN-ACK received from RemotePeerIP, src port 10000, dst port 1942
    11 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x63700021 TCP ACK sent to RemotePeerIP, src port 1942, dst port 10000
    12 14:39:30.796 09/23/09 Sev=Warning/3 IPSEC/0xA370001C Bad cTCP trailer, Rsvd 26984, Magic# 63697672h, trailer len 101, MajorVer 13, MinorVer 10
    13 14:39:30.796 09/23/09 Sev=Info/4 CM/0x63100029 TCP connection established on port 10000 with server "RemotePeerIP"
    14 14:39:31.296 09/23/09 Sev=Info/4 CM/0x63100024 Attempt connection with server "RemotePeerIP"
    15 14:39:31.296 09/23/09 Sev=Info/6 IKE/0x6300003B Attempting to establish a connection with RemotePeerIP.
    16 14:39:31.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING >>> ISAKMP OAK AG (SA, KE, NON, ID, VID(Xauth), VID(dpd), VID(Frag), VID(Unity)) to RemotePeerIP
    17 14:39:36.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet!
    18 14:39:36.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING >>> ISAKMP OAK AG (Retransmission) to RemotePeerIP
    19 14:39:41.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet!
    20 14:39:41.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING >>> ISAKMP OAK AG (Retransmission) to RemotePeerIP
    21 14:39:46.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet!
    22 14:39:46.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING >>> ISAKMP OAK AG (Retransmission) to RemotePeerIP
    23 14:39:51.328 09/23/09 Sev=Info/4 IKE/0x63000017 Marking IKE SA for deletion (I_Cookie=AEFC3FFF0405BBD6 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING
    24 14:39:51.828 09/23/09 Sev=Info/4 IKE/0x6300004B Discarding IKE SA negotiation (I_Cookie=AEFC3FFF0405BBD6 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING
    25 14:39:51.828 09/23/09 Sev=Info/4 CM/0x63100014 Unable to establish Phase 1 SA with server "RemotePeerIP" because of "DEL_REASON_PEER_NOT_RESPONDING"
    26 14:39:51.828 09/23/09 Sev=Info/5 CM/0x63100025 Initializing CVPNDrv
    27 14:39:51.828 09/23/09 Sev=Info/4 CM/0x6310002D Resetting TCP connection on port 10000
    28 14:39:51.828 09/23/09 Sev=Info/6 CM/0x63100030 Removed local TCP port 1942 for TCP connection.
    29 14:39:51.828 09/23/09 Sev=Info/6 CM/0x63100046 Set tunnel established flag in registry to 0.
    30 14:39:51.828 09/23/09 Sev=Info/4 IKE/0x63000001 IKE received signal to terminate VPN connection
    31 14:39:52.328 09/23/09 Sev=Info/6 IPSEC/0x63700023 TCP RST sent to RemotePeerIP, src port 1942, dst port 10000
    32 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys
    33 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys
    34 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys
    35 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x6370000A IPSec driver successfully stopped
    Thank you for any help you can provide.

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using:
    Zend Framework 1.11.5 (complete MVC)
    PHP 5.3.6
    Apache 2.2.19
    CentOS 5.6 i686 virtuozzo on vps
    cPanel WHM 11.30.1 (build 4)
    Mysql 5.1.56-log
    Mysqli API 5.1.56
    The issue started here: http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, PHP is giving me random segmentation faults.
    [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11
    [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
    [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php
    About extensions: when I compile PHP with the "--enable-debug" flag, I have to disable this line:
    zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so"
    Otherwise, the server doesn't accept requests and I get a "The connection with the server was reset". It is possible that I have to disable eaccelerator too, for the same reason. I still don't get why Apache sometimes starts with it and sometimes doesn't:
    extension="eaccelerator.so"
    Anyway, after I get httpd running, seg-faults can occur randomly. If I don't compile PHP with the "--enable-debug" flag, I can get a PHP crash DETERMINISTICALLY:
    <?php
    class Admin_DbController extends Controller_BaseController
    {
        public function updateSqlDefinitionsAction()
        {
            $db = Zend_Registry::get('db');
            $row = $db->fetchRow("SHOW CREATE TABLE 222AFI");
        }
    }
    ?>
    BUT if I compile PHP with the "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash. I have to be making many parallel requests for a few seconds to get a crash:
    <?php
    class Admin_DbController extends Controller_BaseController
    {
        public function updateSqlDefinitionsAction()
        {
            $db = Zend_Registry::get('db');
            $tableList = $db->listTables();
            foreach ($tableList as $tableName) {
                $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName));
                file_put_contents(
                    DB_DEFINITIONS_PATH . '/' . $tableName . '.sql',
                    $row['Create Table'] . ';'
                );
            }
        }
    }
    ?>
    Please notice this is the same script, but creating DDL for all tables in the database rather than for one. It seems that when PHP is heavily loaded (with extensions and me doing many parallel requests), that's when I get PHP to crash.
    About starting httpd with "-X": I've tried. The thing is, it is already hard to make PHP crash with --enable-debug, and with the "-X" option (which only enables one child process) I can't do parallel requests. So I haven't been able to create a proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php
    My concrete question is: what do I do to get a core dump?
    root@GWT4 [~]# httpd -V
    Server version: Apache/2.2.19 (Unix)
    Server built: Jul 20 2011 19:18:58
    Cpanel::Easy::Apache v3.4.2 rev9999
    Server's Module Magic Number: 20051115:28
    Server loaded: APR 1.4.5, APR-Util 1.3.12
    Compiled using: APR 1.4.5, APR-Util 1.3.12
    Architecture: 32-bit
    Server MPM: Prefork
      threaded: no
      forked: yes (variable process count)
    Server compiled with....
    -D APACHE_MPM_DIR="server/mpm/prefork"
    -D APR_HAS_SENDFILE
    -D APR_HAS_MMAP
    -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
    -D APR_USE_SYSVSEM_SERIALIZE
    -D APR_USE_PTHREAD_SERIALIZE
    -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
    -D APR_HAS_OTHER_CHILD
    -D AP_HAVE_RELIABLE_PIPED_LOGS
    -D DYNAMIC_MODULE_LIMIT=128
    -D HTTPD_ROOT="/usr/local/apache"
    -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
    -D DEFAULT_PIDLOG="logs/httpd.pid"
    -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
    -D DEFAULT_LOCKFILE="logs/accept.lock"
    -D DEFAULT_ERRORLOG="logs/error_log"
    -D AP_TYPES_CONFIG_FILE="conf/mime.types"
    -D SERVER_CONFIG_FILE="conf/httpd.conf"
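    In case it helps anyone else landing here, this is the recipe I am trying for coaxing a core out of the mod_fcgid children. The php5 path is taken from my own error_log above, and the core file name at the end is only a placeholder, so treat it as a sketch:
    # allow cores before starting Apache (root shell)
    ulimit -c unlimited
    mkdir -p /tmp/cores && chmod 1777 /tmp/cores
    echo '/tmp/cores/core.%e.%p' > /proc/sys/kernel/core_pattern

    # in httpd.conf, point Apache at the same directory:
    #   CoreDumpDirectory /tmp/cores

    service httpd restart

    # after the next signal 11, open the core against the php binary
    gdb /usr/local/cpanel/cgi-sys/php5 /tmp/cores/core.php5.12345
    (gdb) bt full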

    Read the article

  • 502 Bad Gateway - nginx

    - by ADH2
    I am randomly receiving 502 Bad Gateway error pages. I can reproduce this issue by modifying hosting plans in Plesk 11 while at the same time refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page refreshes properly. I am using CentOS 6. This is from today's log (/var/log/nginx/error.log):
    2012/12/04 10:52:07 [error] 21272#0: *545 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 82.77.68.111, server: likeit-craiova.ro, request: "GET / HTTP/1.1", upstream: "http://195.254.135.113:7080/", host: "likeit-craiova.ro"
    This is the nginx config (/etc/nginx/nginx.conf):
    #user nginx;
    worker_processes 1;
    #error_log /var/log/nginx/error.log;
    #error_log /var/log/nginx/error.log notice;
    #error_log /var/log/nginx/error.log info;
    #pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        #                '$status $body_bytes_sent "$http_referer" '
        #                '"$http_user_agent" "$http_x_forwarded_for"';
        #access_log /var/log/nginx/access.log main;
        sendfile on;
        #tcp_nopush on;
        #keepalive_timeout 0;
        keepalive_timeout 65;
        #tcp_nodelay on;
        #gzip on;
        #gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        server_tokens off;
        include /etc/nginx/conf.d/*.conf;
    }
    fastcgi config file (/etc/nginx/fastcgi.conf):
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param DOCUMENT_URI $document_uri;
    fastcgi_param DOCUMENT_ROOT $document_root;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param HTTPS $https if_not_empty;
    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    # PHP only, required if PHP was built with --enable-force-cgi-redirect
    fastcgi_param REDIRECT_STATUS 200;
    fastcgi parameters config (/etc/nginx/fastcgi_params):
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param DOCUMENT_URI $document_uri;
    fastcgi_param DOCUMENT_ROOT $document_root;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param HTTPS $https if_not_empty;
    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    # PHP only, required if PHP was built with --enable-force-cgi-redirect
    fastcgi_param REDIRECT_STATUS 200;
    Also, I'm getting this on a shared hosting server, on one of the domains:
    Unable to generate the web server configuration file on the host because of the following errors:
    nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/nginx.conf:45
    nginx: [emerg] open() "/var/www/vhosts/partydayandnight.ro/statistics/logs/proxy_access_log" failed (24: Too many open files)
    nginx: configuration file /etc/nginx/nginx.conf test failed
    Please resolve the errors in web server configuration templates and generate the file again.
    Why is this appearing and what trouble may it cause? What can I do to get these errors fixed? Thank you!
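    My working hypothesis for the second failure (an assumption, not yet verified) is the per-process open-file limit: with one vhost per domain, the nginx config test opens every log file at once and hits the default limit of 1024. These are the knobs I plan to raise, with illustrative values only:
    # /etc/nginx/nginx.conf -- top level, next to worker_processes
    worker_rlimit_nofile 65536;

    events {
        worker_connections 10240;   # keep below the rlimit above
    }

    # and for the user nginx runs as, in /etc/security/limits.conf:
    #   nginx soft nofile 65536
    #   nginx hard nofile 65536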

    Read the article

  • Tunnel is up but cannot ping directly connected network

    - by drmanalo
    We configured a site-to-site VPN and here is the topology. I control the network on the left but not the one on the right. All devices in our network have public IPs.
    Server---ASA5505---Cisco887======Internet=====ASA5510---devices
    I can see the tunnel is up and can do an extended ping using a loopback interface. From the 10.175 and 10.165 networks, they can also ping my loopback address. I can also dial in using a Cisco VPN client, and can connect to the devices on the right.
    #show crypto session
    Crypto session current status
    Interface: Vlan3
    Profile: xxx-profile
    Session status: UP-ACTIVE
    Peer: 213.121.x.x port 500
    IKEv1 SA: local 77.245.x.x/500 remote 213.121.x.x/500 Active
    IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.175.0.0/255.255.128.0
    Active SAs: 0, origin: crypto map
    IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.165.0.0/255.255.192.0
    Active SAs: 2, origin: crypto map
    #ping 10.165.29.39 source loopback 2
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 10.165.29.39, timeout is 2 seconds:
    Packet sent with a source address of 10.0.20.1
    !!!!!
    Success rate is 100 percent (5/5), round-trip min/avg/max = 16/17/20 ms
    My problem is that the devices on the right cannot reach my server. They can only ping the loopback address and nothing else. I'm pasting some diagnostics related to routing, thinking perhaps routing is my issue. I can paste all the running config on my side of the network if needed.
    #show ip int brief
    Interface            IP-Address    OK? Method Status                Protocol
    ATM0                 unassigned    YES NVRAM  administratively down down
    Ethernet0            unassigned    YES NVRAM  administratively down down
    FastEthernet0        unassigned    YES unset  up                    up      connected to ASA
    FastEthernet1        unassigned    YES unset  administratively down down
    FastEthernet2        unassigned    YES unset  administratively down down
    FastEthernet3        unassigned    YES unset  up                    up
    Loopback1            10.0.20.65    YES NVRAM  up                    up
    Loopback2            10.0.20.1     YES NVRAM  up                    up
    Virtual-Template1    77.245.x.x    YES unset  up                    down
    Virtual-Template2    77.245.x.x    YES unset  up                    down
    Vlan1                unassigned    YES unset  down                  down
    Vlan3                77.245.x.x    YES NVRAM  up                    up      connected to the Internet
    #show run | section ip route
    ip route 0.0.0.0 0.0.0.0 77.245.x.x
    ip route 213.121.240.36 255.255.255.255 Vlan3
    #show access-list
    Extended IP access list 102
        10 permit ip 10.0.20.0 0.0.0.15 10.175.0.0 0.0.127.255 (3332 matches)
        20 permit ip 10.0.20.0 0.0.0.15 10.165.0.0 0.0.63.255 (3498 matches)
    #show vlan-switch
    VLAN Name                             Status    Ports
    ---- -------------------------------- --------- -------------------------------
    1    default                          active
    3    VLAN0003                         active    Fa0, Fa1, Fa2, Fa3
    1002 fddi-default                     act/unsup
    1003 token-ring-default               act/unsup
    1004 fddinet-default                  act/unsup
    1005 trnet-default                    act/unsup
    #show ip route
    Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
           D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
           N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
           E1 - OSPF external type 1, E2 - OSPF external type 2
           i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
           ia - IS-IS inter area, * - candidate default, U - per-user static route
           o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
           + - replicated route, % - next hop override
    Gateway of last resort is 77.245.x.x to network 0.0.0.0
    S* 0.0.0.0/0 [1/0] via 77.245.x.x
       10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks
    C  10.0.20.0/28 is directly connected, Loopback2
    L  10.0.20.1/32 is directly connected, Loopback2
    C  10.0.20.64/28 is directly connected, Loopback1
    L  10.0.20.65/32 is directly connected, Loopback1
    S  10.165.0.0/18 [1/0] via 213.121.x.x
       77.0.0.0/8 is variably subnetted, 3 subnets, 3 masks
    S  77.0.0.0/8 [1/0] via 77.245.x.x
    C  77.245.x.x/29 is directly connected, Vlan3
    L  77.245.x.x/32 is directly connected, Vlan3
       213.121.x.0/32 is subnetted, 1 subnets
    S  213.121.x.x is directly connected, Vlan3
    I read some of the posts here which point to a NAT issue, but I'm not sure of my next step. Should I translate my public address to private and route it to the loopback address? (only guessing)
    CISCO VPN site to site
    Site-to-Site VPN between two ASA 5505s only working in one direction
    Hope someone could help. Thanks in advance!
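    In case it clarifies what I am contemplating next, this is the kind of change I believe is needed (my own sketch, not verified: since the crypto ACL 102 only matches the 10.0.20.0/28 loopback range, the server's subnet, call it 10.1.0.0/24 as a stand-in because the real range is not shown above, never gets classified as VPN traffic, and the remote ASA 5510 would need the mirror image in its crypto ACL and NAT exemption):
    ! on the Cisco 887: mark server-to-remote traffic as interesting
    access-list 102 permit ip 10.1.0.0 0.0.0.255 10.175.0.0 0.0.127.255
    access-list 102 permit ip 10.1.0.0 0.0.0.255 10.165.0.0 0.0.63.255
    ! and make sure the 887 can route the server's subnet inward
    ip route 10.1.0.0 255.255.255.0 <inside address of the ASA5505 on FastEthernet0>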

    Read the article

  • Cisco ASA 5505 site to site IPSEC VPN won't route from multiple LANs

    - by franklundy
    Hi, I've set up a standard site-to-site VPN between 2 ASA 5505s (using the wizard in ASDM) and have the VPN working fine for traffic between Site A and Site B on the directly connected LANs. But this VPN is actually to be used for data originating on LAN subnets that are one hop away from the directly connected LANs. So actually there is another router connected to each ASA (LAN side) that then routes to two completely different LAN ranges, where the clients and servers reside. At the moment, any traffic that gets to the ASA that has not originated from the directly connected LAN gets sent straight to the default gateway, and not through the VPN. I've tried adding the additional subnets to the "Protected Networks" on the VPN, but that has no effect. I have also tried adding a static route to each ASA trying to point the traffic to the other side, but again this hasn't worked. Here is the config for one of the sites. This works for traffic to/from the 192.168.144.x subnets perfectly. What I need is to be able to route traffic from 10.1.0.0/24 to 10.2.0.0/24, for example.
    ASA Version 8.0(3)
    !
    hostname Site1
    enable password ** encrypted
    names
    name 192.168.144.4 Site2
    !
    interface Vlan1
     nameif inside
     security-level 100
     ip address 192.168.144.2 255.255.255.252
    !
    interface Vlan2
     nameif outside
     security-level 0
     ip address 10.78.254.70 255.255.255.252 (this is a private WAN circuit)
    !
    interface Ethernet0/0
     switchport access vlan 2
    !
    interface Ethernet0/1
    !
    interface Ethernet0/2
    !
    interface Ethernet0/3
    !
    interface Ethernet0/4
    !
    interface Ethernet0/5
    !
    interface Ethernet0/6
    !
    interface Ethernet0/7
    !
    passwd ** encrypted
    ftp mode passive
    access-list inside_access_in extended permit ip any any
    access-list outside_access_in extended permit icmp any any echo-reply
    access-list outside_1_cryptomap extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252
    access-list inside_nat0_outbound extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252
    pager lines 24
    logging enable
    logging asdm informational
    mtu inside 1500
    mtu outside 1500
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-603.bin
    no asdm history enable
    arp timeout 14400
    global (outside) 1 interface
    nat (inside) 0 access-list inside_nat0_outbound
    nat (inside) 1 0.0.0.0 0.0.0.0
    access-group inside_access_in in interface inside
    access-group outside_access_in in interface outside
    route outside 0.0.0.0 0.0.0.0 10.78.254.69 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout uauth 0:05:00 absolute
    dynamic-access-policy-record DfltAccessPolicy
    aaa authentication ssh console LOCAL
    http server enable
    http 0.0.0.0 0.0.0.0 outside
    http 192.168.1.0 255.255.255.0 inside
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
    crypto map outside_map 1 match address outside_1_cryptomap
    crypto map outside_map 1 set pfs
    crypto map outside_map 1 set peer 10.78.254.66
    crypto map outside_map 1 set transform-set ESP-3DES-SHA
    crypto map outside_map interface outside
    crypto isakmp enable outside
    crypto isakmp policy 10
     authentication pre-share
     encryption 3des
     hash sha
     group 2
     lifetime 86400
    no crypto isakmp nat-traversal
    telnet timeout 5
    ssh 0.0.0.0 0.0.0.0 outside
    ssh timeout 5
    console timeout 0
    management-access inside
    threat-detection basic-threat
    threat-detection statistics port
    threat-detection statistics protocol
    threat-detection statistics access-list
    group-policy DfltGrpPolicy attributes
     vpn-idle-timeout none
    username enadmin password * encrypted privilege 15
    tunnel-group 10.78.254.66 type ipsec-l2l
    tunnel-group 10.78.254.66 ipsec-attributes
     pre-shared-key *
    !
    !
    prompt hostname context
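    To make concrete what I have been attempting, this is roughly the change I believe both sides need (a sketch only, using the 10.1.0.0/24 and 10.2.0.0/24 example ranges from above for the LANs behind the internal routers; Site B needs the mirror image, and 192.168.144.1 is my assumption for the internal router's address on the /30 inside link):
    access-list outside_1_cryptomap extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0
    access-list inside_nat0_outbound extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0
    ! point the ASA at the internal router for its own distant LAN
    route inside 10.1.0.0 255.255.255.0 192.168.144.1 1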

    Read the article

  • Diagnosing packet loss / high latency in Ubuntu

    - by Sam Gammon
    We have a Linux box (Ubuntu 12.04) running Nginx (1.5.2), which acts as a reverse proxy/load balancer to some Tornado and Apache hosts. The upstream servers are physically and logically close (same DC, sometimes same-rack) and show sub-millisecond latency between them:
    PING appserver (10.xx.xx.112) 56(84) bytes of data.
    64 bytes from appserver (10.xx.xx.112): icmp_req=1 ttl=64 time=0.180 ms
    64 bytes from appserver (10.xx.xx.112): icmp_req=2 ttl=64 time=0.165 ms
    64 bytes from appserver (10.xx.xx.112): icmp_req=3 ttl=64 time=0.153 ms
    We receive a sustained load of about 500 requests per second, and are currently seeing regular packet loss / latency spikes from the Internet, even from basic pings:
    sam@AM-KEEN ~> ping -c 1000 loadbalancer
    PING 50.xx.xx.16 (50.xx.xx.16): 56 data bytes
    64 bytes from loadbalancer: icmp_seq=0 ttl=56 time=11.624 ms
    64 bytes from loadbalancer: icmp_seq=1 ttl=56 time=10.494 ms
    ... many packets later ...
    Request timeout for icmp_seq 2
    64 bytes from loadbalancer: icmp_seq=2 ttl=56 time=1536.516 ms
    64 bytes from loadbalancer: icmp_seq=3 ttl=56 time=536.907 ms
    64 bytes from loadbalancer: icmp_seq=4 ttl=56 time=9.389 ms
    ... many packets later ...
    Request timeout for icmp_seq 919
    64 bytes from loadbalancer: icmp_seq=918 ttl=56 time=2932.571 ms
    64 bytes from loadbalancer: icmp_seq=919 ttl=56 time=1932.174 ms
    64 bytes from loadbalancer: icmp_seq=920 ttl=56 time=932.018 ms
    64 bytes from loadbalancer: icmp_seq=921 ttl=56 time=6.157 ms
    --- 50.xx.xx.16 ping statistics ---
    1000 packets transmitted, 997 packets received, 0.3% packet loss
    round-trip min/avg/max/stddev = 5.119/52.712/2932.571/224.629 ms
    The pattern is always the same: things operate fine for a while (<20ms), then a ping drops completely, then three or four high-latency pings (>1000ms), then it settles down again. Traffic comes in through a bonded public interface (we will call it bond0) configured as such:
    bond0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:5d
          inet addr:50.xx.xx.16 Bcast:50.xx.xx.31 Mask:255.255.255.224
          inet6 addr: <ipv6 address> Scope:Global
          inet6 addr: <ipv6 address> Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
          RX packets:527181270 errors:1 dropped:4 overruns:0 frame:1
          TX packets:413335045 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:240016223540 (240.0 GB) TX bytes:104301759647 (104.3 GB)
    Requests are then submitted via HTTP to upstream servers on the private network (we can call it bond1), which is configured like so:
    bond1 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:5c
          inet addr:10.xx.xx.70 Bcast:10.xx.xx.127 Mask:255.255.255.192
          inet6 addr: <ipv6 address> Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
          RX packets:430293342 errors:1 dropped:2 overruns:0 frame:1
          TX packets:466983986 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:77714410892 (77.7 GB) TX bytes:227349392334 (227.3 GB)
    Output of uname -a:
    Linux <hostname> 3.5.0-42-generic #65~precise1-Ubuntu SMP Wed Oct 2 20:57:18 UTC 2013 x86_64 GNU/Linux
    We have customized sysctl.conf in an attempt to fix the problem, with no success. Output of /etc/sysctl.conf (with irrelevant configs omitted):
    # net: core
    net.core.netdev_max_backlog = 10000
    # net: ipv4 stack
    net.ipv4.tcp_ecn = 2
    net.ipv4.tcp_sack = 1
    net.ipv4.tcp_fack = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_tw_recycle = 0
    net.ipv4.tcp_timestamps = 1
    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_no_metrics_save = 1
    net.ipv4.tcp_max_syn_backlog = 10000
    net.ipv4.tcp_congestion_control = cubic
    net.ipv4.ip_local_port_range = 8000 65535
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_synack_retries = 2
    net.ipv4.tcp_thin_dupack = 1
    net.ipv4.tcp_thin_linear_timeouts = 1
    net.netfilter.nf_conntrack_max = 99999999
    net.netfilter.nf_conntrack_tcp_timeout_established = 300
    Output of dmesg -d, with non-ICMP UFW messages suppressed:
    [508315.349295 < 19.852453>] [UFW BLOCK] IN=bond1 OUT= MAC=<mac addresses> SRC=118.xx.xx.143 DST=50.xx.xx.16 LEN=68 TOS=0x00 PREC=0x00 TTL=51 ID=43221 PROTO=ICMP TYPE=3 CODE=1 [SRC=50.xx.xx.16 DST=118.xx.xx.143 LEN=40 TOS=0x00 PREC=0x00 TTL=249 ID=10220 DF PROTO=TCP SPT=80 DPT=53817 WINDOW=8190 RES=0x00 ACK FIN URGP=0 ]
    [517787.732242 < 0.443127>] Peer 190.xx.xx.131:59705/80 unexpectedly shrunk window 1155488866:1155489425 (repaired)
    How can I go about diagnosing the cause of this problem, on a Debian-family Linux box?
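    For reference, this is the checklist of standard tools we are starting from (not a diagnosis, just the commands we plan to run; the interface names match the configs above):
    # per-hop loss/latency over time, from an outside host toward bond0
    mtr --report --report-cycles 300 50.xx.xx.16

    # NIC/bond error counters: watch for drops, overruns, slave flapping
    cat /proc/net/bonding/bond0
    ethtool -S eth0 | egrep -i 'err|drop|fifo|miss'

    # capture a stall in progress: ICMP on the public bond during a spike
    tcpdump -ni bond0 icmp -w /tmp/icmp.pcap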

    Read the article

  • Mpd as pppoe server with authorisation by freeradius2

    - by Korjavin Ivan
    I installed freeradius2 and added this to raddb/users:
    test    Cleartext-Password := "test1"
            Service-Type = Framed-User,
            Framed-Protocol = PPP,
            Framed-IP-Address = 10.36.0.2,
            Framed-IP-Netmask = 255.255.255.0
    I start radiusd and check auth:
    radtest test test1 127.0.0.1 1002 testing123
    Sending Access-Request of id 199 to 127.0.0.1 port 1812
        User-Name = "test"
        User-Password = "test1"
        NAS-IP-Address = 127.0.0.1
        NAS-Port = 1002
        Message-Authenticator = 0x00000000000000000000000000000000
    rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=199, length=44
        Service-Type = Framed-User
        Framed-Protocol = PPP
        Framed-IP-Address = 10.36.0.2
        Framed-IP-Netmask = 255.255.255.0
    Works fine. Next step. Add to mpd.conf:
    radius:
        set auth disable internal
        set auth max-logins 1 CI
        set auth enable radius-auth
        set radius timeout 90
        set radius retries 2
        set radius server 127.0.0.1 testing123 1812 1813
        set radius me 127.0.0.1
        create link template L pppoe
        set link action bundle B
        set link max-children 1000
        set link no multilink
        set link no shortseq
        set link no pap chap-md5 chap-msv1 chap-msv2
        set link enable chap
        set pppoe acname Internet
        load radius
        create link template em1 L
        set pppoe iface em1
        set link enable incoming
    When I try to connect, auth fails; here is the mpd log:
    mpd: [em1-2] LCP: auth: peer wants nothing, I want CHAP
    mpd: [em1-2] CHAP: sending CHALLENGE #1 len: 21
    mpd: [em1-2] LCP: LayerUp
    mpd: [em1-2] CHAP: rec'd RESPONSE #1 len: 58
    mpd: [em1-2] Name: "test"
    mpd: [em1-2] AUTH: Trying RADIUS
    mpd: [em1-2] RADIUS: Authenticating user 'test'
    mpd: [em1-2] RADIUS: Rec'd RAD_ACCESS_REJECT for user 'test'
    mpd: [em1-2] AUTH: RADIUS returned: failed
    mpd: [em1-2] AUTH: ran out of backends
    mpd: [em1-2] CHAP: Auth return status: failed
    mpd: [em1-2] CHAP: Reply message: ^AE=691 R=1
    mpd: [em1-2] CHAP: sending FAILURE #1 len: 14
    mpd: [em1-2] LCP: authorization failed
    Then I start freeradius as radiusd -fX and get this log:
    rad_recv: Access-Request packet from host 127.0.0.1 port 46400, id=223, length=282
        NAS-Identifier = "rubin.svyaz-nt.ru"
        NAS-IP-Address = 127.0.0.1
        Message-Authenticator = 0x14d36639bed8074ec2988118125367ea
        Acct-Session-Id = "815965-em1-2"
        NAS-Port = 2
        NAS-Port-Type = Ethernet
        Service-Type = Framed-User
        Framed-Protocol = PPP
        Calling-Station-Id = "00e05290b3e3 / 00:e0:52:90:b3:e3 / em1"
        NAS-Port-Id = "em1"
        Vendor-12341-Attr-12 = 0x656d312d32
        Tunnel-Medium-Type:0 = IEEE-802
        Tunnel-Client-Endpoint:0 = "00:e0:52:90:b3:e3"
        User-Name = "test"
        MS-CHAP-Challenge = 0xbb1e68d5bbc30f228725a133877de83e
        MS-CHAP2-Response = 0x010088746ae65b68e435e9d045ad6f9569b60000000000000000b56991b4f20704cb6c68e5982eec5e98a7f4b470c109c1b9
    # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
    +- entering group authorize {...}
    ++[preprocess] returns ok
    ++[chap] returns noop
    [mschap] Found MS-CHAP attributes. Setting 'Auth-Type = mschap'
    ++[mschap] returns ok
    [eap] No EAP-Message, not doing EAP
    ++[eap] returns noop
    [files] users: Matched entry DEFAULT at line 172
    ++[files] returns ok
    Found Auth-Type = MSCHAP
    # Executing group from file /usr/local/etc/raddb/sites-enabled/default
    +- entering group MS-CHAP {...}
    [mschap] No Cleartext-Password configured. Cannot create LM-Password.
    [mschap] No Cleartext-Password configured. Cannot create NT-Password.
    [mschap] Creating challenge hash with username: test
    [mschap] Client is using MS-CHAPv2 for test, we need NT-Password
    [mschap] FAILED: No NT/LM-Password. Cannot perform authentication.
    [mschap] FAILED: MS-CHAP2-Response is incorrect
    ++[mschap] returns reject
    Failed to authenticate the user.
    Login incorrect: [test] (from client localhost port 2 cli 00e05290b3e3 / 00:e0:52:90:b3:e3 / em1)
    Using Post-Auth-Type REJECT
    # Executing group from file /usr/local/etc/raddb/sites-enabled/default
    +- entering group REJECT {...}
    [attr_filter.access_reject] expand: %{User-Name} -> test
    attr_filter: Matched entry DEFAULT at line 11
    ++[attr_filter.access_reject] returns updated
    Delaying reject of request 2 for 1 seconds
    Going to the next request
    Waking up in 0.9 seconds.
    Sending delayed reject for request 2
    Sending Access-Reject of id 223 to 127.0.0.1 port 46400
        MS-CHAP-Error = "\001E=691 R=1"
    Why do I get the error "[mschap] No Cleartext-Password configured. Cannot create LM-Password."? I defined Cleartext-Password in users. I checked raddb/sites-enabled/default:
    authorize {
        chap
        mschap
        eap {
            ok = return
        }
        files
    }
    It looks OK to me. What's wrong with mpd/chap/radius?
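    My current working theory, for anyone with the same symptom (an assumption drawn only from the debug output above): the files module matched the DEFAULT entry at line 172 and not my test entry, so control:Cleartext-Password is never set for this request, and without a cleartext or NT-hash password the mschap module cannot verify an MS-CHAPv2 response. The check I plan next is to re-run the daemon in debug mode and confirm the user entry actually matches, making sure it sits above any DEFAULT lines and that radiusd was restarted after the file was edited:
    # run in the foreground with full debug and watch the files module;
    # on a good match you should see the specific entry, not only DEFAULT:
    #   [files] users: Matched entry test at line 1
    radiusd -X

    # raddb/users -- the specific entry must precede any DEFAULT entries
    test    Cleartext-Password := "test1"
            Service-Type = Framed-User,
            Framed-Protocol = PPP,
            Framed-IP-Address = 10.36.0.2,
            Framed-IP-Netmask = 255.255.255.0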

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31  | Next Page >