Search Results

Search found 12291 results on 492 pages for 'session scope'.


  • Why does Perl's Net::Msmgr hang when I try to authenticate?

    - by codeholic
    There's Net::Msmgr module on CPAN. It's written clean and the code looks trustworthy at the first glance. However this module seems to be beta and there is little documentation and no tests :-/ Has anyone used this module in production? I haven't managed to make it run by now, because it requires all event loop processing to be done in the application and as I've already said there is little documentation and no working examples to study. That's where I've gone so far: #!/usr/bin/perl use strict; use warnings; use Event; use Net::Msmgr::Object; use Net::Msmgr::Session; use Net::Msmgr::User; use constant DEBUG => 511; use constant EVENT_TIMEOUT => 5; # seconds my ($username, $password) = qw/[email protected] my.password/; my $buddy = '[email protected]'; my $user = Net::Msmgr::User->new(user => $username, password => $password); my $session = Net::Msmgr::Session->new; $session->debug(DEBUG); $session->login_handler(\&login_handler); $session->user($user); my $conv; sub login_handler { my $self = shift; print "LOGIN\n"; $self->ui_state_nln; $conv = $session->ui_new_conversation; $conv->invite($buddy); } our %watcher; sub ConnectHandler { my ($connection) = @_; warn "CONNECT\n"; my $socket = $connection->socket; $watcher{$connection} = Event->io(fd => $socket, cb => [ $connection, '_recv_message' ], poll => 're', desc => 'recv_watcher', repeat => 1); } sub DisconnectHandler { my $connection = shift; print "DISCONNECT\n"; $watcher{$connection}->cancel; } $session->connect_handler(\&ConnectHandler); $session->disconnect_handler(\&DisconnectHandler); $session->Login; Event::loop(); That's what it outputs: Dispatch Server connecting to: messenger.hotmail.com:1863 Dispatch Server connected CONNECT Dispatch Server >>>VER 1 MSNP2 CVR0 --> VER 1 MSNP2 CVR0 Dispatch Server >>>USR 2 MD5 I [email protected] --> USR 2 MD5 I [email protected] Dispatch Server <<<VER 1 CVR0 <-- VER 1 CVR0 And that's all, here it hangs. The handler on login is not being triggered. What am I doing wrong?
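
    Not an answer to the protocol question itself, but a hedged diagnostic that may help pin down where it stalls: a watchdog timer added alongside the other watchers, before Event::loop() is entered, so the script reports the stall and stops instead of blocking forever. This is a minimal sketch assuming only Event.pm's documented timer/unloop calls; nothing beyond what the script above already sets up is reused.

        my $watchdog = Event->timer(
            after => 60,    # give the login a minute to complete
            cb    => sub {
                warn "Timed out waiting for the login handler to fire\n";
                Event::unloop_all(0);    # leave Event::loop() instead of hanging
            },
        );
        # Cancel it once login actually succeeds, e.g. first thing inside login_handler():
        # $watchdog->cancel;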

    Read the article

  • Problem with Authlogic and Unit/Functional Tests in Rails

    - by mmacaulay
    I'm learning how unit testing is done in Rails, and I've run into a problem involving Authlogic. According to the Documentation there are a few things required to use Authlogic stuff in your tests: test_helper.rb: require "authlogic/test_case" class ActiveSupport::TestCase setup :activate_authlogic end Then in my functional tests I can login users: UserSession.create(users(:tester)) The problem seems to stem from the setup :activate_authlogic line in test_helper.rb, whenever that is included, I get the following errors when running functional tests: NoMethodError: undefined method `request=' for nil:NilClass authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `send' authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `method_missing' If I remove setup :activate_authlogic and add instead Authlogic::Session::Base.controller = Authlogic::ControllerAdapters::RailsAdapter.new(self) to test_helper.rb, my functional tests seem to work but now my unit tests fail: NoMethodError: undefined method `params' for ActiveSupport::TestCase:Class authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:30:in `params' authlogic (2.1.3) lib/authlogic/session/params.rb:96:in `params_credentials' authlogic (2.1.3) lib/authlogic/session/params.rb:72:in `params_enabled?' authlogic (2.1.3) lib/authlogic/session/params.rb:66:in `persist_by_params' authlogic (2.1.3) lib/authlogic/session/callbacks.rb:79:in `persist' authlogic (2.1.3) lib/authlogic/session/persistence.rb:55:in `persisting?' authlogic (2.1.3) lib/authlogic/session/persistence.rb:39:in `find' authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:96:in `get_session_information' authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `each' authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `get_session_information' /test/unit/user_test.rb:23:in `test_should_save_user_with_email_password_and_confirmation' What am I doing wrong?
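
    A hedged sketch of one common way around this: keep activate_authlogic out of the shared ActiveSupport::TestCase (model tests have no controller, so Authlogic has nothing to attach a request to) and activate it only for controller tests. The class names below are standard Rails 2.x test-case classes; whether this matches the app's setup is an assumption.

        # test/test_helper.rb -- sketch, assuming Rails 2.x-style test classes
        require "authlogic/test_case"

        # Unit tests (models): no controller is available, so do NOT activate Authlogic here.
        class ActiveSupport::TestCase
          # fixtures :all, etc.
        end

        # Functional tests (controllers): a mock controller exists, so activation is safe.
        class ActionController::TestCase
          setup :activate_authlogic
        end

    With that split, UserSession.create(users(:tester)) keeps working in functional tests, while model tests no longer hit the undefined method `request=' for nil error.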

    Read the article

  • Seems doctrine listener is not fired

    - by Roel Veldhuizen
    Got a service which should be executed the moment an object is persisted. Though, I think the code looks like it should work, it doesn't. I configured the service like the following yml. services: bla_orm.listener: class: Bla\OrmBundle\EventListener\UserManager arguments: [@security.encoder_factory] tags: - { name: doctrine.event_listener, event: prePersist } The class: namespace Bla\OrmBundle\EventListener; use Doctrine\ORM\Event\LifecycleEventArgs; use Bla\OrmBundle\Entity\User; class UserManager { protected $encoderFactory; public function __construct(\Symfony\Component\Security\Core\Encoder\EncoderFactoryInterface $encoderFactory) { $this->encoderFactory = $encoderFactory; } public function prePersist(LifecycleEventArgs $args) { $entity = $args->getEntity(); if ($entity instanceof User) { $encoder = $this->encoderFactory ->getEncoder($entity); $entity->setSalt(rand(10000, 99999)); $password = $encoder->encodePassword($entity->getPassword(), $entity->getSalt()); $entity->setPassword($password); } } } Symfony version: Symfony version 2.3.3 - app/dev/debug Output of container:debug [container] Public services Service Id Scope Class Name annotation_reader container Doctrine\Common\Annotations\FileCacheReader assetic.asset_manager container Assetic\Factory\LazyAssetManager assetic.controller prototype Symfony\Bundle\AsseticBundle\Controller\AsseticController assetic.filter.cssrewrite container Assetic\Filter\CssRewriteFilter assetic.filter_manager container Symfony\Bundle\AsseticBundle\FilterManager assetic.request_listener container Symfony\Bundle\AsseticBundle\EventListener\RequestListener cache_clearer container Symfony\Component\HttpKernel\CacheClearer\ChainCacheClearer cache_warmer container Symfony\Component\HttpKernel\CacheWarmer\CacheWarmerAggregate data_collector.request container Symfony\Component\HttpKernel\DataCollector\RequestDataCollector data_collector.router container Symfony\Bundle\FrameworkBundle\DataCollector\RouterDataCollector database_connection n/a alias for doctrine.dbal.default_connection debug.controller_resolver container Symfony\Component\HttpKernel\Controller\TraceableControllerResolver debug.deprecation_logger_listener container Symfony\Component\HttpKernel\EventListener\ErrorsLoggerListener debug.emergency_logger_listener container Symfony\Component\HttpKernel\EventListener\ErrorsLoggerListener debug.event_dispatcher container Symfony\Component\HttpKernel\Debug\TraceableEventDispatcher debug.stopwatch container Symfony\Component\Stopwatch\Stopwatch debug.templating.engine.php container Symfony\Bundle\FrameworkBundle\Templating\TimedPhpEngine debug.templating.engine.twig n/a alias for templating doctrine container Doctrine\Bundle\DoctrineBundle\Registry doctrine.dbal.connection_factory container Doctrine\Bundle\DoctrineBundle\ConnectionFactory doctrine.dbal.default_connection container stdClass doctrine.orm.default_entity_manager container Doctrine\ORM\EntityManager doctrine.orm.default_manager_configurator container Doctrine\Bundle\DoctrineBundle\ManagerConfigurator doctrine.orm.entity_manager n/a alias for doctrine.orm.default_entity_manager doctrine.orm.validator.unique container Symfony\Bridge\Doctrine\Validator\Constraints\UniqueEntityValidator doctrine.orm.validator_initializer container Symfony\Bridge\Doctrine\Validator\DoctrineInitializer event_dispatcher container Symfony\Component\EventDispatcher\ContainerAwareEventDispatcher file_locator container Symfony\Component\HttpKernel\Config\FileLocator filesystem container 
Symfony\Component\Filesystem\Filesystem form.csrf_provider container Symfony\Component\Form\Extension\Csrf\CsrfProvider\SessionCsrfProvider form.factory container Symfony\Component\Form\FormFactory form.registry container Symfony\Component\Form\FormRegistry form.resolved_type_factory container Symfony\Component\Form\ResolvedFormTypeFactory form.type.birthday container Symfony\Component\Form\Extension\Core\Type\BirthdayType form.type.button container Symfony\Component\Form\Extension\Core\Type\ButtonType form.type.checkbox container Symfony\Component\Form\Extension\Core\Type\CheckboxType form.type.choice container Symfony\Component\Form\Extension\Core\Type\ChoiceType form.type.collection container Symfony\Component\Form\Extension\Core\Type\CollectionType form.type.country container Symfony\Component\Form\Extension\Core\Type\CountryType form.type.currency container Symfony\Component\Form\Extension\Core\Type\CurrencyType form.type.date container Symfony\Component\Form\Extension\Core\Type\DateType form.type.datetime container Symfony\Component\Form\Extension\Core\Type\DateTimeType form.type.email container Symfony\Component\Form\Extension\Core\Type\EmailType form.type.entity container Symfony\Bridge\Doctrine\Form\Type\EntityType form.type.file container Symfony\Component\Form\Extension\Core\Type\FileType form.type.form container Symfony\Component\Form\Extension\Core\Type\FormType form.type.hidden container Symfony\Component\Form\Extension\Core\Type\HiddenType form.type.integer container Symfony\Component\Form\Extension\Core\Type\IntegerType form.type.language container Symfony\Component\Form\Extension\Core\Type\LanguageType form.type.locale container Symfony\Component\Form\Extension\Core\Type\LocaleType form.type.money container Symfony\Component\Form\Extension\Core\Type\MoneyType form.type.number container Symfony\Component\Form\Extension\Core\Type\NumberType form.type.password container Symfony\Component\Form\Extension\Core\Type\PasswordType form.type.percent container Symfony\Component\Form\Extension\Core\Type\PercentType form.type.radio container Symfony\Component\Form\Extension\Core\Type\RadioType form.type.repeated container Symfony\Component\Form\Extension\Core\Type\RepeatedType form.type.reset container Symfony\Component\Form\Extension\Core\Type\ResetType form.type.search container Symfony\Component\Form\Extension\Core\Type\SearchType form.type.submit container Symfony\Component\Form\Extension\Core\Type\SubmitType form.type.text container Symfony\Component\Form\Extension\Core\Type\TextType form.type.textarea container Symfony\Component\Form\Extension\Core\Type\TextareaType form.type.time container Symfony\Component\Form\Extension\Core\Type\TimeType form.type.timezone container Symfony\Component\Form\Extension\Core\Type\TimezoneType form.type.url container Symfony\Component\Form\Extension\Core\Type\UrlType form.type_extension.csrf container Symfony\Component\Form\Extension\Csrf\Type\FormTypeCsrfExtension form.type_extension.form.http_foundation container Symfony\Component\Form\Extension\HttpFoundation\Type\FormTypeHttpFoundationExtension form.type_extension.form.validator container Symfony\Component\Form\Extension\Validator\Type\FormTypeValidatorExtension form.type_extension.repeated.validator container Symfony\Component\Form\Extension\Validator\Type\RepeatedTypeValidatorExtension form.type_extension.submit.validator container Symfony\Component\Form\Extension\Validator\Type\SubmitTypeValidatorExtension form.type_guesser.doctrine container 
Symfony\Bridge\Doctrine\Form\DoctrineOrmTypeGuesser form.type_guesser.validator container Symfony\Component\Form\Extension\Validator\ValidatorTypeGuesser fragment.handler container Symfony\Component\HttpKernel\Fragment\FragmentHandler fragment.listener container Symfony\Component\HttpKernel\EventListener\FragmentListener fragment.renderer.hinclude container Symfony\Bundle\FrameworkBundle\Fragment\ContainerAwareHIncludeFragmentRenderer fragment.renderer.inline container Symfony\Component\HttpKernel\Fragment\InlineFragmentRenderer http_kernel container Symfony\Component\HttpKernel\DependencyInjection\ContainerAwareHttpKernel kernel container locale_listener container Symfony\Component\HttpKernel\EventListener\LocaleListener logger container Symfony\Bridge\Monolog\Logger mailer n/a alias for swiftmailer.mailer.default monolog.handler.chromephp container Symfony\Bridge\Monolog\Handler\ChromePhpHandler monolog.handler.debug container Symfony\Bridge\Monolog\Handler\DebugHandler monolog.handler.firephp container Symfony\Bridge\Monolog\Handler\FirePHPHandler monolog.handler.main container Monolog\Handler\StreamHandler monolog.logger.deprecation container Symfony\Bridge\Monolog\Logger monolog.logger.doctrine container Symfony\Bridge\Monolog\Logger monolog.logger.emergency container Symfony\Bridge\Monolog\Logger monolog.logger.event container Symfony\Bridge\Monolog\Logger monolog.logger.profiler container Symfony\Bridge\Monolog\Logger monolog.logger.request container Symfony\Bridge\Monolog\Logger monolog.logger.router container Symfony\Bridge\Monolog\Logger monolog.logger.security container Symfony\Bridge\Monolog\Logger monolog.logger.templating container Symfony\Bridge\Monolog\Logger profiler container Symfony\Component\HttpKernel\Profiler\Profiler profiler_listener container Symfony\Component\HttpKernel\EventListener\ProfilerListener property_accessor container Symfony\Component\PropertyAccess\PropertyAccessor request request response_listener container Symfony\Component\HttpKernel\EventListener\ResponseListener router container Symfony\Bundle\FrameworkBundle\Routing\Router router_listener container Symfony\Component\HttpKernel\EventListener\RouterListener routing.loader container Symfony\Bundle\FrameworkBundle\Routing\DelegatingLoader security.context container Symfony\Component\Security\Core\SecurityContext security.encoder_factory container Symfony\Component\Security\Core\Encoder\EncoderFactory security.firewall container Symfony\Component\Security\Http\Firewall security.firewall.map.context.dev container Symfony\Bundle\SecurityBundle\Security\FirewallContext security.firewall.map.context.login container Symfony\Bundle\SecurityBundle\Security\FirewallContext security.firewall.map.context.rest container Symfony\Bundle\SecurityBundle\Security\FirewallContext security.firewall.map.context.secured_area container Symfony\Bundle\SecurityBundle\Security\FirewallContext security.rememberme.response_listener container Symfony\Component\Security\Http\RememberMe\ResponseListener security.secure_random container Symfony\Component\Security\Core\Util\SecureRandom security.validator.user_password container Symfony\Component\Security\Core\Validator\Constraints\UserPasswordValidator sensio.distribution.webconfigurator n/a alias for sensio_distribution.webconfigurator sensio_distribution.webconfigurator container Sensio\Bundle\DistributionBundle\Configurator\Configurator sensio_framework_extra.cache.listener container Sensio\Bundle\FrameworkExtraBundle\EventListener\CacheListener 
sensio_framework_extra.controller.listener container Sensio\Bundle\FrameworkExtraBundle\EventListener\ControllerListener sensio_framework_extra.converter.datetime container Sensio\Bundle\FrameworkExtraBundle\Request\ParamConverter\DateTimeParamConverter sensio_framework_extra.converter.doctrine.orm container Sensio\Bundle\FrameworkExtraBundle\Request\ParamConverter\DoctrineParamConverter sensio_framework_extra.converter.listener container Sensio\Bundle\FrameworkExtraBundle\EventListener\ParamConverterListener sensio_framework_extra.converter.manager container Sensio\Bundle\FrameworkExtraBundle\Request\ParamConverter\ParamConverterManager sensio_framework_extra.view.guesser container Sensio\Bundle\FrameworkExtraBundle\Templating\TemplateGuesser sensio_framework_extra.view.listener container Sensio\Bundle\FrameworkExtraBundle\EventListener\TemplateListener service_container container session container Symfony\Component\HttpFoundation\Session\Session session.handler container Symfony\Component\HttpFoundation\Session\Storage\Handler\NativeFileSessionHandler session.storage n/a alias for session.storage.native session.storage.filesystem container Symfony\Component\HttpFoundation\Session\Storage\MockFileSessionStorage session.storage.native container Symfony\Component\HttpFoundation\Session\Storage\NativeSessionStorage session.storage.php_bridge container Symfony\Component\HttpFoundation\Session\Storage\PhpBridgeSessionStorage session_listener container Symfony\Bundle\FrameworkBundle\EventListener\SessionListener streamed_response_listener container Symfony\Component\HttpKernel\EventListener\StreamedResponseListener swiftmailer.email_sender.listener container Symfony\Bundle\SwiftmailerBundle\EventListener\EmailSenderListener swiftmailer.mailer n/a alias for swiftmailer.mailer.default swiftmailer.mailer.default container Swift_Mailer swiftmailer.mailer.default.plugin.messagelogger container Swift_Plugins_MessageLogger swiftmailer.mailer.default.spool container Swift_FileSpool swiftmailer.mailer.default.transport container Swift_Transport_SpoolTransport swiftmailer.mailer.default.transport.real container Swift_Transport_EsmtpTransport swiftmailer.plugin.messagelogger n/a alias for swiftmailer.mailer.default.plugin.messagelogger swiftmailer.spool n/a alias for swiftmailer.mailer.default.spool swiftmailer.transport n/a alias for swiftmailer.mailer.default.transport swiftmailer.transport.real n/a alias for swiftmailer.mailer.default.transport.real templating container Symfony\Bundle\TwigBundle\Debug\TimedTwigEngine templating.asset.package_factory container Symfony\Bundle\FrameworkBundle\Templating\Asset\PackageFactory templating.filename_parser container Symfony\Bundle\FrameworkBundle\Templating\TemplateFilenameParser templating.globals container Symfony\Bundle\FrameworkBundle\Templating\GlobalVariables templating.helper.actions container Symfony\Bundle\FrameworkBundle\Templating\Helper\ActionsHelper templating.helper.assets request Symfony\Component\Templating\Helper\CoreAssetsHelper templating.helper.code container Symfony\Bundle\FrameworkBundle\Templating\Helper\CodeHelper templating.helper.form container Symfony\Bundle\FrameworkBundle\Templating\Helper\FormHelper templating.helper.logout_url container Symfony\Bundle\SecurityBundle\Templating\Helper\LogoutUrlHelper templating.helper.request container Symfony\Bundle\FrameworkBundle\Templating\Helper\RequestHelper templating.helper.router container Symfony\Bundle\FrameworkBundle\Templating\Helper\RouterHelper templating.helper.security container 
Symfony\Bundle\SecurityBundle\Templating\Helper\SecurityHelper templating.helper.session container Symfony\Bundle\FrameworkBundle\Templating\Helper\SessionHelper templating.helper.slots container Symfony\Component\Templating\Helper\SlotsHelper templating.helper.translator container Symfony\Bundle\FrameworkBundle\Templating\Helper\TranslatorHelper templating.loader container Symfony\Bundle\FrameworkBundle\Templating\Loader\FilesystemLoader templating.name_parser container Symfony\Bundle\FrameworkBundle\Templating\TemplateNameParser translation.dumper.csv container Symfony\Component\Translation\Dumper\CsvFileDumper translation.dumper.ini container Symfony\Component\Translation\Dumper\IniFileDumper translation.dumper.mo container Symfony\Component\Translation\Dumper\MoFileDumper translation.dumper.php container Symfony\Component\Translation\Dumper\PhpFileDumper translation.dumper.po container Symfony\Component\Translation\Dumper\PoFileDumper translation.dumper.qt container Symfony\Component\Translation\Dumper\QtFileDumper translation.dumper.res container Symfony\Component\Translation\Dumper\IcuResFileDumper translation.dumper.xliff container Symfony\Component\Translation\Dumper\XliffFileDumper translation.dumper.yml container Symfony\Component\Translation\Dumper\YamlFileDumper translation.extractor container Symfony\Component\Translation\Extractor\ChainExtractor translation.extractor.php container Symfony\Bundle\FrameworkBundle\Translation\PhpExtractor translation.loader container Symfony\Bundle\FrameworkBundle\Translation\TranslationLoader translation.loader.csv container Symfony\Component\Translation\Loader\CsvFileLoader translation.loader.dat container Symfony\Component\Translation\Loader\IcuResFileLoader translation.loader.ini container Symfony\Component\Translation\Loader\IniFileLoader translation.loader.mo container Symfony\Component\Translation\Loader\MoFileLoader translation.loader.php container Symfony\Component\Translation\Loader\PhpFileLoader translation.loader.po container Symfony\Component\Translation\Loader\PoFileLoader translation.loader.qt container Symfony\Component\Translation\Loader\QtFileLoader translation.loader.res container Symfony\Component\Translation\Loader\IcuResFileLoader translation.loader.xliff container Symfony\Component\Translation\Loader\XliffFileLoader translation.loader.yml container Symfony\Component\Translation\Loader\YamlFileLoader translation.writer container Symfony\Component\Translation\Writer\TranslationWriter translator n/a alias for translator.default translator.default container Symfony\Bundle\FrameworkBundle\Translation\Translator twig container Twig_Environment twig.controller.exception container Symfony\Bundle\TwigBundle\Controller\ExceptionController twig.exception_listener container Symfony\Component\HttpKernel\EventListener\ExceptionListener twig.loader container Symfony\Bundle\TwigBundle\Loader\FilesystemLoader twig.translation.extractor container Symfony\Bridge\Twig\Translation\TwigExtractor uri_signer container Symfony\Component\HttpKernel\UriSigner bla_orm.listener container Bla\OrmBundle\EventListener\UserManager validator container Symfony\Component\Validator\Validator web_profiler.controller.exception container Symfony\Bundle\WebProfilerBundle\Controller\ExceptionController web_profiler.controller.profiler container Symfony\Bundle\WebProfilerBundle\Controller\ProfilerController web_profiler.controller.router container Symfony\Bundle\WebProfilerBundle\Controller\RouterController web_profiler.debug_toolbar container 
Symfony\Bundle\WebProfilerBundle\EventListener\WebDebugToolbarListener Update: it seems the listener is not invoked when an updateAction generated by generate:doctrine:crud takes place, although it is invoked in another part of the code. Both controllers use $em->persist($something); $em->flush(); to save the changes, so I would expect the listener to be invoked in both cases.
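
    Given that the tag only names the prePersist event, one thing worth checking (a guess, not a confirmed diagnosis) is that Doctrine fires prePersist for new entities only; an updateAction that modifies an already-managed entity and flushes goes through preUpdate instead. A sketch of handling both events, using Doctrine's PreUpdateEventArgs API; field handling is illustrative:

        # services.yml -- sketch: register the same listener for preUpdate as well
        services:
            bla_orm.listener:
                class: Bla\OrmBundle\EventListener\UserManager
                arguments: [@security.encoder_factory]
                tags:
                    - { name: doctrine.event_listener, event: prePersist }
                    - { name: doctrine.event_listener, event: preUpdate }

        // UserManager.php -- sketch of the matching method
        use Doctrine\ORM\Event\PreUpdateEventArgs;

        public function preUpdate(PreUpdateEventArgs $args)
        {
            $entity = $args->getEntity();
            if ($entity instanceof User && $args->hasChangedField('password')) {
                $encoder = $this->encoderFactory->getEncoder($entity);
                // The changeset is already computed at preUpdate time, so the new value
                // must be written back through the event args. The salt is left as-is here;
                // changing other fields inside preUpdate needs extra recomputation work.
                $args->setNewValue(
                    'password',
                    $encoder->encodePassword($args->getNewValue('password'), $entity->getSalt())
                );
            }
        }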

    Read the article

  • MINA: Performing synchronous write requests / read responses

    - by Matt Huggins
    I'm attempting to perform a synchronous write/read in a demux-based client application with MINA 2.0 RC1, but it seems to get stuck. Here is my code: public boolean login(final String username, final String password) { // block inbound messages session.getConfig().setUseReadOperation(true); // send the login request final LoginRequest loginRequest = new LoginRequest(username, password); final WriteFuture writeFuture = session.write(loginRequest); writeFuture.awaitUninterruptibly(); if (writeFuture.getException() != null) { session.getConfig().setUseReadOperation(true); return false; } // retrieve the login response final ReadFuture readFuture = session.read(); readFuture.awaitUninterruptibly(); if (readFuture.getException() != null) { session.getConfig().setUseReadOperation(true); return false; } // stop blocking inbound messages session.getConfig().setUseReadOperation(false); // determine if the login info provided was valid final LoginResponse loginResponse = (LoginResponse)readFuture.getMessage(); return loginResponse.getSuccess(); } I can see on the server side that the LoginRequest object is retrieved, and a LoginResponse message is sent. On the client side, the DemuxingProtocolCodecFactory receives the response, but after throwing in some logging, I can see that the client gets stuck on the call to readFuture.awaitUninterruptibly(). I can't for the life of me figure out why it is stuck here based upon my own code. I properly set the read operation to true on the session config, meaning that messages should be blocked. However, it seems as if the message no longer exists by time I try to read response messages synchronously. Any clues as to why this won't work for me?
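
    An alternative worth trying if the read-future route keeps stalling (a sketch under assumptions, not a verified fix): let the normal IoHandler receive the LoginResponse and hand it to the waiting caller through a blocking queue, instead of toggling setUseReadOperation. IoHandlerAdapter and IoSession are standard MINA 2.0 types; the queue field and its wiring into the demuxing handler are hypothetical.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        import org.apache.mina.core.service.IoHandlerAdapter;
        import org.apache.mina.core.session.IoSession;

        public class ClientHandler extends IoHandlerAdapter {
            // Hypothetical hand-off point between the I/O thread and the caller of login().
            private final BlockingQueue<Object> responses = new LinkedBlockingQueue<Object>();

            @Override
            public void messageReceived(IoSession session, Object message) {
                responses.offer(message); // the demuxed LoginResponse ends up here; never blocks the I/O thread
            }

            /** Called by login() after session.write(loginRequest) has completed. */
            public Object awaitResponse(long timeout, TimeUnit unit) throws InterruptedException {
                return responses.poll(timeout, unit); // null on timeout
            }
        }

    login() would then write the request, call awaitResponse(...), and cast the result to LoginResponse, with no need for setUseReadOperation at all.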

    Read the article

  • How can I inject multiple repositories in a NServicebus message handler?

    - by Paco
    I use the following: public interface IRepository<T> { void Add(T entity); } public class Repository<T> { private readonly ISession session; public Repository(ISession session) { this.session = session; } public void Add(T entity) { session.Save(entity); } } public class SomeHandler : IHandleMessages<SomeMessage> { private readonly IRepository<EntityA> aRepository; private readonly IRepository<EntityB> bRepository; public SomeHandler(IRepository<EntityA> aRepository, IRepository<EntityB> bRepository) { this.aRepository = aRepository; this.bRepository = bRepository; } public void Handle(SomeMessage message) { aRepository.Add(new A(message.Property); bRepository.Add(new B(message.Property); } } public class MessageEndPoint : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization { public void Init() { ObjectFactory.Configure(config => { config.For<ISession>() .CacheBy(InstanceScope.ThreadLocal) .TheDefault.Is.ConstructedBy(ctx => ctx.GetInstance<ISessionFactory>().OpenSession()); config.ForRequestedType(typeof(IRepository<>)) .TheDefaultIsConcreteType(typeof(Repository<>)); } } My problem with the threadlocal storage is, is that the same session is used during the whole application thread. I discovered this when I saw the first level cache wasn't cleared. What I want is using a new session instance, before each call to IHandleMessages<.Handle. How can I do this with structuremap? Do I have to create a message module?
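
    One hedged direction, assuming NServiceBus 2.x where the IMessageModule hook (HandleBeginMessage/HandleEndMessage/HandleError) exists: yes, a message module can dispose the session after each message, so the next resolve opens a fresh one. The sketch below moves the per-thread caching into a small holder the module controls; the holder type and its wiring are assumptions, not part of either library.

        // Sketch: replace StructureMap's ThreadLocal caching with a holder the module resets.
        public static class CurrentSessionHolder
        {
            [ThreadStatic] private static ISession current;

            public static ISession GetOrOpen(ISessionFactory factory)
            {
                return current ?? (current = factory.OpenSession());
            }

            public static void Close()
            {
                if (current != null) { current.Dispose(); current = null; }
            }
        }

        // Assumption: NServiceBus 2.x IMessageModule, picked up by the bus at startup.
        public class SessionMessageModule : IMessageModule
        {
            public void HandleBeginMessage() { }
            public void HandleEndMessage()   { CurrentSessionHolder.Close(); }
            public void HandleError()        { CurrentSessionHolder.Close(); }
        }

        // In MessageEndPoint.Init(), mirror the existing registration but drop CacheBy(ThreadLocal)
        // and route construction through the holder, e.g.:
        // config.For<ISession>()
        //       .TheDefault.Is.ConstructedBy(ctx => CurrentSessionHolder.GetOrOpen(ctx.GetInstance<ISessionFactory>()));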

    Read the article

  • Jsch how to list files along with directories

    - by Rajeev
In the following code, when the command ls -a is executed, why is it that I see only the list of directories and not any of the files that are on the remote Linux server? JSch jsch = new JSch(); Session session = jsch.getSession(username1, ip1, 22); java.util.Properties config = new java.util.Properties(); config.put("StrictHostKeyChecking", "no"); session.setConfig(config); //session.setPassword(password.getText().toString()); session.setPassword(password1); session.connect(); final ListView mainlist = (ListView)findViewById(R.id.list); final RelativeLayout rl = (RelativeLayout)findViewById(R.id.rl); mainlist.setVisibility(View.VISIBLE); rl.setVisibility(View.INVISIBLE); String command = "ls -a"; Channel channel = session.openChannel("exec"); ((ChannelExec) channel).setCommand(command); channel.setInputStream(null); ((ChannelExec) channel).setErrStream(System.err); InputStream in = channel.getInputStream();
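
    The snippet stops right after getInputStream(), so nothing ever connects the channel or drains its output; it may simply be that part of the listing never arrives rather than the server omitting files. A hedged completion of the read loop, using only standard JSch calls (connect, isClosed, getExitStatus, disconnect):

        channel.connect();

        byte[] buf = new byte[1024];
        StringBuilder output = new StringBuilder();
        while (true) {
            while (in.available() > 0) {
                int len = in.read(buf, 0, buf.length);
                if (len < 0) break;
                output.append(new String(buf, 0, len));
            }
            if (channel.isClosed()) {
                System.out.println("exit-status: " + channel.getExitStatus());
                break;
            }
            try { Thread.sleep(200); } catch (InterruptedException e) { break; }
        }
        channel.disconnect();
        // output now holds everything "ls -a" printed -- files as well as directories.

    If a structured listing is the real goal, JSch's ChannelSftp and its ls() method may be a better fit than running exec with ls.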

    Read the article

  • Learning Hibernate: too many connections

    - by stivlo
    I'm trying to learn Hibernate and I wrote the simplest Person Entity and I was trying to insert 2000 of them. I know I'm using deprecated methods, I will try to figure out what are the new ones later. First, here is the class Person: @Entity public class Person { private int id; private String name; @Id @GeneratedValue(strategy = GenerationType.TABLE, generator = "person") @TableGenerator(name = "person", table = "sequences", allocationSize = 1) public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } } Then I wrote a small App class that insert 2000 entities with a loop: public class App { private static AnnotationConfiguration config; public static void insertPerson() { SessionFactory factory = config.buildSessionFactory(); Session session = factory.getCurrentSession(); session.beginTransaction(); Person aPerson = new Person(); aPerson.setName("John"); session.save(aPerson); session.getTransaction().commit(); } public static void main(String[] args) { config = new AnnotationConfiguration(); config.addAnnotatedClass(Person.class); config.configure("hibernate.cfg.xml"); //is the default already new SchemaExport(config).create(true, true); //print and execute for (int i = 0; i < 2000; i++) { insertPerson(); } } } What I get after a while is: Exception in thread "main" org.hibernate.exception.JDBCConnectionException: Cannot open connection Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections Now I know that probably if I put the transaction outside the loop it would work, but mine was a test to see what happens when executing multiple transactions. And since there is only one open at each time, it should work. I tried to add session.close() after the commit, but I got Exception in thread "main" org.hibernate.SessionException: Session was already closed So how to solve the problem?
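
    The loop builds a brand-new SessionFactory (and with it a fresh connection pool) on every iteration, which is the most likely source of the connection exhaustion; a SessionFactory is intended to be created once and reused. A hedged restructuring of the same test:

        public class App {
            public static void insertPerson(SessionFactory factory) {
                Session session = factory.getCurrentSession();
                session.beginTransaction();
                Person aPerson = new Person();
                aPerson.setName("John");
                session.save(aPerson);
                // With the common "thread" session context, committing also closes this session,
                // which would explain the later "Session was already closed" error when close()
                // was called explicitly after commit.
                session.getTransaction().commit();
            }

            public static void main(String[] args) {
                AnnotationConfiguration config = new AnnotationConfiguration();
                config.addAnnotatedClass(Person.class);
                config.configure("hibernate.cfg.xml");
                new SchemaExport(config).create(true, true);

                SessionFactory factory = config.buildSessionFactory(); // build once, reuse
                for (int i = 0; i < 2000; i++) {
                    insertPerson(factory);
                }
                factory.close();
            }
        }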

    Read the article

  • Saving a record in Authlogic table

    - by denniss
    I am using authlogic to do my authentication. The current model that serves as the authentication model is the user model. I want to add a "belongs to" relationship to user which means that I need a foreign key in the user table. Say the foreign key is called car_id in the user's model. However, for some reason, when I do u = User.find(1) u.car_id = 1 u.save! I get ActiveRecord::RecordInvalid: Validation failed: Password can't be blank My guess is that this has something to do with authlogic. I do not have validation on password on the user's model. This is the migration for the user's table. def self.up create_table :users do |t| t.string :email t.string :first_name t.string :last_name t.string :crypted_password t.string :password_salt t.string :persistence_token t.string :single_access_token t.string :perishable_token t.integer :login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns t.integer :failed_login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns t.datetime :last_request_at # optional, see Authlogic::Session::MagicColumns t.datetime :current_login_at # optional, see Authlogic::Session::MagicColumns t.datetime :last_login_at # optional, see Authlogic::Session::MagicColumns t.string :current_login_ip # optional, see Authlogic::Session::MagicColumns t.string :last_login_ip # optional, see Authlogic::Session::MagicColumns t.timestamps end end And later I added the car_id column to it. def self.up add_column :users, :user_id, :integer end Is there anyway for me to turn off this validation?
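
    Two hedged options, assuming Rails 2.x and Authlogic 2.x as elsewhere on this page: skip validations for this one update, or tell acts_as_authentic not to validate the password field at all (the exact config name should be double-checked against the Authlogic version in use).

        # Option 1: skip validations for this single update (Rails 2.x API)
        u = User.find(1)
        u.car_id = 1
        u.save(false)   # bypasses the "Password can't be blank" validation

        # Option 2 (sketch): disable Authlogic's built-in password validation entirely
        class User < ActiveRecord::Base
          acts_as_authentic do |c|
            c.validate_password_field = false   # assumption: Authlogic 2.x config option
          end
        end

    Separately, note that the migration shown adds a user_id column rather than car_id, which may be a typo worth ruling out first.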

    Read the article

  • Problems when data save on database by Hibernate

    - by alex
Hi, I'm trying to save data to a database with the help of Hibernate in Java, but when I run the code I get a lot of problems. Can anyone help me with that? Thanks... My code: package org.ultimania.model; import org.hibernate.Session; import org.hibernate.SessionFactory; import org.hibernate.Transaction; import org.hibernate.cfg.Configuration; public class Test { public static void main(String[] args) { Session session = null; SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory(); session = sessionFactory.openSession(); Transaction transaction = session.getTransaction(); BusinessCard card = new BusinessCard(); card.setId(1); card.setName("Özgür"); card.setDescription("Aciklama"); try{ transaction.begin(); session.save(card); transaction.commit(); } catch(Exception e){ e.printStackTrace(); } finally{ session.close(); } } } Problems: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder at org.slf4j.LoggerFactory.getSingleton(LoggerFactory.java:223) at org.slf4j.LoggerFactory.bind(LoggerFactory.java:120) at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:111) at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:269) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:255) at org.hibernate.cfg.Configuration.(Configuration.java:152) at org.ultimania.model.Test.main(Test.java:14) Caused by: java.lang.ClassNotFoundException: org.slf4j.impl.StaticLoggerBinder at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) ... 8 more
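
    The stack trace is SLF4J's standard complaint that only slf4j-api is on the classpath with no binding behind it; adding exactly one binding jar, with a version matching the slf4j-api that Hibernate pulled in, normally clears it. A hedged Maven example, where the version number is an assumption to be aligned with the slf4j-api already present:

        <!-- sketch: add exactly one SLF4J binding; slf4j-simple is the lightest choice -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.5.8</version> <!-- assumption: match the slf4j-api version on the classpath -->
        </dependency>

    Without Maven, dropping the equivalent slf4j-simple (or slf4j-log4j12 plus log4j) jar next to slf4j-api on the classpath achieves the same thing.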

    Read the article

  • Dealing with expired authentication for a partially filled form?

    - by aaronls
I have a large webform, and would like to prompt the user to log in if their session expires, or have them log in when they submit the form. It seems that having them log in when they submit the form creates a lot of challenges, because they get redirected to the login page and the postback data for the original form submission is lost. So I'm thinking about how to prompt them to log in asynchronously when the session expires: they stay on the original form page, a panel appears telling them the session has expired and they need to log in, the login is submitted asynchronously, the login panel disappears, and the user is still on the original partially filled form and can submit it. Is this easily doable using the existing ASP.NET Membership controls? When they submit the form, will I need to worry about the session key? I am wondering if the session key the form submits will be the original one from before the session expired, which won't match the new one generated after logging in again asynchronously (I still do not understand the details of how ASP.NET tracks authentication/session IDs). Edit: Yes, I am actually concerned about authentication expiration. The user must be authenticated for the submitted data to be considered valid.
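
    On the last point: forms-authentication expiry and session-state expiry are tracked separately in ASP.NET (an authentication-ticket cookie versus a session-ID cookie), so which data survives depends on which of the two actually timed out. A small, hedged web.config sketch showing the two timeouts side by side; values and the login URL are illustrative only:

        <system.web>
          <!-- forms-authentication ticket: governs when the user must log in again -->
          <authentication mode="Forms">
            <forms loginUrl="~/Login.aspx" timeout="30" slidingExpiration="true" />
          </authentication>

          <!-- session state: governs Session[...] data, independent of the auth ticket -->
          <sessionState timeout="20" />
        </system.web>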

    Read the article

  • NHibernate: What are child sessions and why and when should I use them?

    - by stefando
In the comments on Ayende's blog post about auditing in NHibernate there is a mention of the need to use a child session: session.GetSession(EntityMode.Poco). As far as I understand it, it has something to do with the order of the SQL operations that session.Flush will emit. (For example: if I wanted to perform some delete operation in the pre-insert event but the session was already done with its deleting operations, I would need some way to inject them in.) However, I did not find documentation about this feature and behavior. Questions: Is my understanding of child sessions correct? How and in which scenarios should I use them? Are they documented somewhere? Could they be used for session "scoping"? (For example: I open the master session, which will hold some data, and then I create 2 child sessions from the master one. I'd expect that the two child scopes will be separated but that they will share objects from the master session cache. Is this the case?) Are they first-class citizens in NHibernate or are they just a hack to support some edge-case scenarios? Thanks in advance for any info.
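
    For the first question, a small hedged sketch of how the call from those comments is typically used inside a listener: the child session shares the parent's connection and transaction, so extra statements issued through it ride along with the parent's flush instead of fighting its ordering. The audit entity and surrounding method are placeholders; session.GetSession(EntityMode.Poco) is the call mentioned above.

        // Sketch: writing an audit row from a pre-insert listener via a child session.
        public void OnAudit(ISession session, object entity)
        {
            using (ISession childSession = session.GetSession(EntityMode.Poco))
            {
                childSession.Save(new AuditEntry   // AuditEntry is a hypothetical mapped class
                {
                    EntityName = entity.GetType().Name,
                    At = DateTime.UtcNow
                });
                childSession.Flush();   // flushed on the parent's connection/transaction
            }
        }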

    Read the article

  • Magento: How do I get observers to work in an external script?

    - by Laizer
    As far as I can tell, when a script is run outside of Magento, observers are not invoked when an event is fired. Why? How do I fix it? Below is the original issue that lead me to this question. The issue is that the observer that would apply the catalog rule is never called. The event fires, but the observer doesn't pick it up. I'm running an external script that loads up a Magento session. Within that script, I'm loading products and grabbing a bunch of properties. The one issue is that getFinalPrice() does not apply the catalog rules that apply to the product. I'm doing everything I know to set the session, even a bunch of stuff that I think is superfluous. Nothing seems to get these rules applied. Here's a test script: require_once "app/Mage.php"; umask(0); $app = Mage::app("default"); $app->getTranslator()->init('frontend'); //Probably not needed Mage::getSingleton('core/session', array('name'=>'frontend')); $session = Mage::getSingleton("customer/session"); $session->start(); //Probably not needed $session->loginById(122); $product = Mage::getModel('catalog/product')->load(1429); echo $product->getFinalPrice(); Any insight is appreciated.
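
    A hedged explanation that fits the symptom: Mage::app("default") bootstraps the application but does not load the frontend event area, so observers declared under frontend/events in config.xml are never registered in an external script, even though the events themselves still fire. The commonly cited workaround (worth verifying against the Magento version in use) is to load that area's events explicitly:

        require_once "app/Mage.php";
        umask(0);
        $app = Mage::app("default");

        // Load the frontend area's event observers so catalog price rules and similar
        // observers are registered in this external script.
        Mage::app()->loadAreaPart(
            Mage_Core_Model_App_Area::AREA_FRONTEND,
            Mage_Core_Model_App_Area::PART_EVENTS
        );

        $product = Mage::getModel('catalog/product')->load(1429);
        echo $product->getFinalPrice();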

    Read the article

  • Listening to PHP function calls to intercept the returned value

    - by Lansen Q
    I am working on making use of a Web Services API offered by the hosts of our internal system. I am accessing it via PHP with the built-in SOAP offering. The API session is initiated by a remote call to a function that returns some session tokens; every call to any function thereafter will return a new session token, which must accompany the next request. I have an API Client class that is doing the bulk of the work; what I would like to do is to set something up whereby any SOAP call that is made will make sure to update the API Client class' $session variable with the new session details, and then pass the data along. So far the only way I can think of doing this is creating a new class extending the SoapClient class, with a __call function wrapper to execute the function, update the new session token, and return the results nonetheless. I'm not sure that this will a) work b) be the best way to go about this. The wrapper class would be identical to making a SOAP call, and it would return an identical result, just it would update the session token before you get your result back. Thanks! Hope I explained myself properly.
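
    The subclass-with-a-wrapper idea should work, since PHP's SoapClient routes method-style calls through __call/__soapCall. A hedged sketch follows; the name of the session field in the response (SessionToken), the header namespace, and the way the token is echoed back on the next request are assumptions about this particular API, not known facts:

        class SessionTrackingSoapClient extends SoapClient
        {
            private $sessionToken; // assumption: the API hands back a single token value per call

            public function __call($name, $arguments)
            {
                $result = parent::__soapCall($name, $arguments, null, $this->buildSessionHeader());

                // Assumption: every response carries the next token in a SessionToken field.
                if (is_object($result) && isset($result->SessionToken)) {
                    $this->sessionToken = $result->SessionToken;
                }
                return $result; // caller sees exactly what a plain SoapClient would return
            }

            private function buildSessionHeader()
            {
                if ($this->sessionToken === null) {
                    return null;
                }
                // Namespace and header name are placeholders for whatever the WSDL defines.
                return new SoapHeader('urn:example-api', 'Session', array('token' => $this->sessionToken));
            }
        }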

    Read the article

  • Linux - Only first virtual interface can ping external gateway

    - by husvar
    I created 3 virtual interfaces with different mac addresses all linked to the same physical interface. I see that they successfully arp for the gw and they can ping (the request is coming in the packet capture in wireshark). However the ping utility does not count the responses. Does anyone knows the issue? I am running Ubuntu 14.04 in a VmWare. root@ubuntu:~# ip link sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 00:0c:29:bc:fc:8b brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip addr sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:0c:29:bc:fc:8b brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:febc:fc8b/64 scope link valid_lft forever preferred_lft forever root@ubuntu:~# ip route sh root@ubuntu:~# ip link add link eth0 eth0.1 addr 00:00:00:00:00:11 type macvlan root@ubuntu:~# ip link add link eth0 eth0.2 addr 00:00:00:00:00:22 type macvlan root@ubuntu:~# ip link add link eth0 eth0.3 addr 00:00:00:00:00:33 type macvlan root@ubuntu:~# ip -4 link sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 00:0c:29:bc:fc:8b brd ff:ff:ff:ff:ff:ff 18: eth0.1@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether 00:00:00:00:00:11 brd ff:ff:ff:ff:ff:ff 19: eth0.2@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether 00:00:00:00:00:22 brd ff:ff:ff:ff:ff:ff 20: eth0.3@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether 00:00:00:00:00:33 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip -4 addr sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever root@ubuntu:~# ip -4 route sh root@ubuntu:~# dhclient -v eth0.1 Internet Systems Consortium DHCP Client 4.2.4 Copyright 2004-2012 Internet Systems Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Listening on LPF/eth0.1/00:00:00:00:00:11 Sending on LPF/eth0.1/00:00:00:00:00:11 Sending on Socket/fallback DHCPDISCOVER on eth0.1 to 255.255.255.255 port 67 interval 3 (xid=0x568eac05) DHCPREQUEST of 192.168.1.145 on eth0.1 to 255.255.255.255 port 67 (xid=0x568eac05) DHCPOFFER of 192.168.1.145 from 192.168.1.254 DHCPACK of 192.168.1.145 from 192.168.1.254 bound to 192.168.1.145 -- renewal in 1473 seconds. root@ubuntu:~# dhclient -v eth0.2 Internet Systems Consortium DHCP Client 4.2.4 Copyright 2004-2012 Internet Systems Consortium. All rights reserved. 
For info, please visit https://www.isc.org/software/dhcp/ Listening on LPF/eth0.2/00:00:00:00:00:22 Sending on LPF/eth0.2/00:00:00:00:00:22 Sending on Socket/fallback DHCPDISCOVER on eth0.2 to 255.255.255.255 port 67 interval 3 (xid=0x21e3114e) DHCPREQUEST of 192.168.1.146 on eth0.2 to 255.255.255.255 port 67 (xid=0x21e3114e) DHCPOFFER of 192.168.1.146 from 192.168.1.254 DHCPACK of 192.168.1.146 from 192.168.1.254 bound to 192.168.1.146 -- renewal in 1366 seconds. root@ubuntu:~# dhclient -v eth0.3 Internet Systems Consortium DHCP Client 4.2.4 Copyright 2004-2012 Internet Systems Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Listening on LPF/eth0.3/00:00:00:00:00:33 Sending on LPF/eth0.3/00:00:00:00:00:33 Sending on Socket/fallback DHCPDISCOVER on eth0.3 to 255.255.255.255 port 67 interval 3 (xid=0x11dc5f03) DHCPREQUEST of 192.168.1.147 on eth0.3 to 255.255.255.255 port 67 (xid=0x11dc5f03) DHCPOFFER of 192.168.1.147 from 192.168.1.254 DHCPACK of 192.168.1.147 from 192.168.1.254 bound to 192.168.1.147 -- renewal in 1657 seconds. root@ubuntu:~# ip -4 link sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 00:0c:29:bc:fc:8b brd ff:ff:ff:ff:ff:ff 18: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default link/ether 00:00:00:00:00:11 brd ff:ff:ff:ff:ff:ff 19: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default link/ether 00:00:00:00:00:22 brd ff:ff:ff:ff:ff:ff 20: eth0.3@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default link/ether 00:00:00:00:00:33 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip -4 addr sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 18: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default inet 192.168.1.145/24 brd 192.168.1.255 scope global eth0.1 valid_lft forever preferred_lft forever 19: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default inet 192.168.1.146/24 brd 192.168.1.255 scope global eth0.2 valid_lft forever preferred_lft forever 20: eth0.3@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default inet 192.168.1.147/24 brd 192.168.1.255 scope global eth0.3 valid_lft forever preferred_lft forever root@ubuntu:~# ip -4 route sh default via 192.168.1.254 dev eth0.1 192.168.1.0/24 dev eth0.1 proto kernel scope link src 192.168.1.145 192.168.1.0/24 dev eth0.2 proto kernel scope link src 192.168.1.146 192.168.1.0/24 dev eth0.3 proto kernel scope link src 192.168.1.147 root@ubuntu:~# arping -c 5 -I eth0.1 192.168.1.254 ARPING 192.168.1.254 from 192.168.1.145 eth0.1 Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 6.936ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.986ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 0.654ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 5.137ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.426ms Sent 5 probes (1 broadcast(s)) Received 5 response(s) root@ubuntu:~# arping -c 5 -I eth0.2 192.168.1.254 ARPING 192.168.1.254 from 192.168.1.146 eth0.2 Unicast 
reply from 192.168.1.254 [58:98:35:57:a0:70] 5.665ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 3.753ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 16.500ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 3.287ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 32.438ms Sent 5 probes (1 broadcast(s)) Received 5 response(s) root@ubuntu:~# arping -c 5 -I eth0.3 192.168.1.254 ARPING 192.168.1.254 from 192.168.1.147 eth0.3 Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 4.422ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.429ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.321ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 40.423ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.268ms Sent 5 probes (1 broadcast(s)) Received 5 response(s) root@ubuntu:~# tcpdump -n -i eth0.1 -v & [1] 5317 root@ubuntu:~# ping -c5 -q -I eth0.1 192.168.1.254 PING 192.168.1.254 (192.168.1.254) from 192.168.1.145 eth0.1: 56(84) bytes of data. tcpdump: listening on eth0.1, link-type EN10MB (Ethernet), capture size 65535 bytes 13:18:37.612558 IP (tos 0x0, ttl 64, id 2595, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.145 > 192.168.1.254: ICMP echo request, id 5318, seq 2, length 64 13:18:37.618864 IP (tos 0x68, ttl 64, id 14493, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.145: ICMP echo reply, id 5318, seq 2, length 64 13:18:37.743650 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 13:18:38.134997 IP (tos 0x0, ttl 128, id 23547, offset 0, flags [none], proto UDP (17), length 229) 192.168.1.86.138 > 192.168.1.255.138: NBT UDP PACKET(138) 13:18:38.614580 IP (tos 0x0, ttl 64, id 2596, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.145 > 192.168.1.254: ICMP echo request, id 5318, seq 3, length 64 13:18:38.793479 IP (tos 0x68, ttl 64, id 14495, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.145: ICMP echo reply, id 5318, seq 3, length 64 13:18:39.151282 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:39.615612 IP (tos 0x0, ttl 64, id 2597, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.145 > 192.168.1.254: ICMP echo request, id 5318, seq 4, length 64 13:18:39.746981 IP (tos 0x68, ttl 64, id 14496, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.145: ICMP echo reply, id 5318, seq 4, length 64 --- 192.168.1.254 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4008ms rtt min/avg/max/mdev = 2.793/67.810/178.934/73.108 ms root@ubuntu:~# killall tcpdump >> /dev/null 2>&1 9 packets captured 12 packets received by filter 0 packets dropped by kernel [1]+ Done tcpdump -n -i eth0.1 -v root@ubuntu:~# tcpdump -n -i eth0.2 -v & [1] 5320 root@ubuntu:~# ping -c5 -q -I eth0.2 192.168.1.254 PING 192.168.1.254 (192.168.1.254) from 192.168.1.146 eth0.2: 56(84) bytes of data. 
tcpdump: listening on eth0.2, link-type EN10MB (Ethernet), capture size 65535 bytes 13:18:41.536874 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.254 is-at 58:98:35:57:a0:70, length 46 13:18:41.536933 IP (tos 0x0, ttl 64, id 2599, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 1, length 64 13:18:41.539255 IP (tos 0x68, ttl 64, id 14507, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 1, length 64 13:18:42.127715 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 13:18:42.511725 IP (tos 0x0, ttl 64, id 2600, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 2, length 64 13:18:42.514385 IP (tos 0x68, ttl 64, id 14527, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 2, length 64 13:18:42.743856 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 13:18:43.511727 IP (tos 0x0, ttl 64, id 2601, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 3, length 64 13:18:43.513768 IP (tos 0x68, ttl 64, id 14528, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 3, length 64 13:18:43.637598 IP (tos 0x0, ttl 128, id 23551, offset 0, flags [none], proto UDP (17), length 225) 192.168.1.86.17500 > 255.255.255.255.17500: UDP, length 197 13:18:43.641185 IP (tos 0x0, ttl 128, id 23552, offset 0, flags [none], proto UDP (17), length 225) 192.168.1.86.17500 > 192.168.1.255.17500: UDP, length 197 13:18:43.641201 IP (tos 0x0, ttl 128, id 23553, offset 0, flags [none], proto UDP (17), length 225) 192.168.1.86.17500 > 255.255.255.255.17500: UDP, length 197 13:18:43.743890 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 13:18:44.510758 IP (tos 0x0, ttl 64, id 2602, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 4, length 64 13:18:44.512892 IP (tos 0x68, ttl 64, id 14538, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 4, length 64 13:18:45.510794 IP (tos 0x0, ttl 64, id 2603, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 5, length 64 13:18:45.519701 IP (tos 0x68, ttl 64, id 14539, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 5, length 64 13:18:49.287554 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:50.013463 IP (tos 0x0, ttl 255, id 50737, offset 0, flags [DF], proto UDP (17), length 73) 192.168.1.146.5353 > 224.0.0.251.5353: 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local. 
(45) 13:18:50.218874 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:51.129961 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:52.197074 IP6 (hlim 255, next-header UDP (17) payload length: 53) 2001:818:d812:da00:200:ff:fe00:22.5353 > ff02::fb.5353: [udp sum ok] 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local. (45) 13:18:54.128240 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 --- 192.168.1.254 ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 4000ms root@ubuntu:~# killall tcpdump >> /dev/null 2>&1 13:18:54.657731 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:54.743174 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 25 packets captured 26 packets received by filter 0 packets dropped by kernel [1]+ Done tcpdump -n -i eth0.2 -v root@ubuntu:~# tcpdump -n -i eth0.3 icmp & [1] 5324 root@ubuntu:~# ping -c5 -q -I eth0.3 192.168.1.254 PING 192.168.1.254 (192.168.1.254) from 192.168.1.147 eth0.3: 56(84) bytes of data. tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0.3, link-type EN10MB (Ethernet), capture size 65535 bytes 13:18:56.373434 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 1, length 64 13:18:57.372116 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 2, length 64 13:18:57.381263 IP 192.168.1.254 > 192.168.1.147: ICMP echo reply, id 5325, seq 2, length 64 13:18:58.371141 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 3, length 64 13:18:58.373275 IP 192.168.1.254 > 192.168.1.147: ICMP echo reply, id 5325, seq 3, length 64 13:18:59.371165 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 4, length 64 13:18:59.373259 IP 192.168.1.254 > 192.168.1.147: ICMP echo reply, id 5325, seq 4, length 64 13:19:00.371211 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 5, length 64 13:19:00.373278 IP 192.168.1.254 > 192.168.1.147: ICMP echo reply, id 5325, seq 5, length 64 --- 192.168.1.254 ping statistics --- 5 packets transmitted, 1 received, 80% packet loss, time 4001ms rtt min/avg/max/mdev = 13.666/13.666/13.666/0.000 ms root@ubuntu:~# killall tcpdump >> /dev/null 2>&1 9 packets captured 10 packets received by filter 0 packets dropped by kernel [1]+ Done tcpdump -n -i eth0.3 icmp root@ubuntu:~# arp -n Address HWtype HWaddress Flags Mask Iface 192.168.1.254 ether 58:98:35:57:a0:70 C eth0.1 192.168.1.254 ether 58:98:35:57:a0:70 C eth0.2 192.168.1.254 ether 58:98:35:57:a0:70 C eth0.3
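
    One hedged lead, given that replies show up in the tcpdump captures on the second and third interfaces while ping does not count them: with three addresses on one subnet, the kernel's reverse-path filter and ARP flux behaviour often interfere, since the routing table shown above sends everything out via eth0.1. The sysctls below are real knobs to experiment with, not a confirmed fix for this capture:

        # Loosen reverse-path filtering (2 = "loose" mode) so replies arriving
        # "against" the single default route via eth0.1 are not discarded.
        sysctl -w net.ipv4.conf.all.rp_filter=2
        sysctl -w net.ipv4.conf.default.rp_filter=2

        # Make each interface answer/announce ARP only for its own address, which
        # reduces ARP flux when several interfaces sit on the same subnet.
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2

        # Per-interface keys spell the dot in the name as a slash,
        # e.g. net.ipv4.conf.eth0/2.rp_filter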

    Read the article

  • Building a SOA/BPM/BAM Cluster Part I &ndash; Preparing the Environment

    - by antony.reynolds
    An increasing number of customers are using SOA Suite in a cluster configuration; I might hazard to say that the majority of production deployments are now using SOA clusters. So I thought it may be useful to detail the steps in building an 11g cluster and explain a little about why things are done the way they are. In this series of posts I will explain how to build a SOA/BPM cluster using the Enterprise Deployment Guide. This post will explain the settings required to prepare the cluster for installation and configuration.
    Software Required
    The following software is required for an 11.1.1.3 SOA/BPM install (Software | Version | Notes):
    Oracle Database | Certified databases are listed here | SOA & BPM Suites require a working database installation.
    Repository Creation Utility (RCU) | 11.1.1.3 | If upgrading an 11.1.1.2 repository then a separate script is available.
    Web Tier Utilities | 11.1.1.2 | Provides the Web Server; 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
    Web Tier Utilities | 11.1.1.3 | Web Server 11.1.1.3 patch. You can use the 11.1.1.2 version without problems.
    Oracle WebLogic Server 11gR1 | 10.3.3 | This is the host platform for the 11.1.1.3 SOA/BPM Suites.
    SOA Suite | 11.1.1.2 | SOA Suite 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
    SOA Suite | 11.1.1.3 | SOA Suite 11.1.1.3 patch; requires 11.1.1.2 to have been installed.
    My installation was performed on Oracle Enterprise Linux 5.4 64-bit.
    Database
    I will not cover setting up the database in this series other than to identify the database requirements. If setting up a SOA cluster then ideally we would also be using a RAC database. I assume that this is running on separate machines to the SOA cluster. Section 2.1, “Database”, of the EDG covers the database configuration in detail.
    Settings
    The database should have processes set to at least 400 if running SOA/BPM and BAM.
        alter system set processes=400 scope=spfile
    Run RCU
    The Repository Creation Utility creates the necessary database tables for the SOA Suite. The RCU can be run from any machine that can access the target database. In 11g the RCU creates a number of pre-defined users and schemas with a user-defined prefix. This allows you to have multiple 11g installations in the same database.
    After running the RCU you need to grant some additional privileges to the soainfra user. The soainfra user should have privileges on the transaction tables.
        grant select on sys.dba_pending_transactions to prefix_soainfra
        grant force any transaction to prefix_soainfra
    Machines
    The cluster will be built on the following machines. EDG Name is the name used for this machine in the EDG; Notes are a description of the purpose of the machine (EDG Name | Notes):
    LB | External load balancer to distribute load across, and fail over between, web servers.
    WEBHOST1 | Hosts a web server.
    WEBHOST2 | Hosts a web server.
    SOAHOST1 | Hosts SOA components.
    SOAHOST2 | Hosts SOA components.
    BAMHOST1 | Hosts BAM components.
    BAMHOST2 | Hosts BAM components.
    Note that it is possible to collapse the BAM servers so that they run on the same machines as the SOA servers. In this case BAMHOST1 and SOAHOST1 would be the same, as would BAMHOST2 and SOAHOST2. The cluster may include more than 2 servers, in which case we add SOAHOST3, SOAHOST4 etc. as needed. My cluster has WEBHOST1, SOAHOST1 and BAMHOST1 all running on a single machine.
    Software Components
    The cluster will use the following software components. EDG Name is the name used for this component in the EDG.
    Type is the type of component, generally a WebLogic component; Notes are a description of the purpose of the component (EDG Name | Type | Notes):
    AdminServer | Admin Server | Domain Admin Server
    WLS_WSM1 | Managed Server | Web Services Manager Policy Manager Server
    WLS_WSM2 | Managed Server | Web Services Manager Policy Manager Server
    WLS_SOA1 | Managed Server | SOA/BPM Managed Server
    WLS_SOA2 | Managed Server | SOA/BPM Managed Server
    WLS_BAM1 | Managed Server | BAM Managed Server running Active Data Cache
    WLS_BAM2 | Managed Server | BAM Managed Server without Active Data Cache
    (n/a) | Node Manager | Will run on all hosts with WLS servers
    OHS1 | Web Server | Oracle HTTP Server
    OHS2 | Web Server | Oracle HTTP Server
    LB | Load Balancer | Load Balancer, not part of SOA Suite
    The above assumes a 2 node cluster.
    Network Configuration
    The SOA cluster requires an extensive amount of network configuration. I would recommend assigning a private sub-net (internal IP addresses such as 10.x.x.x, 192.168.x.x or 172.168.x.x) to the cluster for use by addresses that only need to be accessible to the Load Balancer or other cluster members. Section 2.2, "Network", of the EDG covers the network configuration in detail.
    EDG Name is the hostname used in the EDG. IP Name is the IP address name used in the EDG. Type is the type of IP address: Fixed is fixed to a single machine; Floating is assigned to one of several machines to allow for server migration; Virtual is assigned to a load balancer and used to distribute load across several machines. Host is the host where this IP address is active (for floating IP addresses a range of hosts is given). Bound By identifies which software component will use this IP address. Scope shows where this IP address needs to be resolved: Cluster scope addresses only have to be resolvable by machines in the cluster, i.e. the machines listed in the previous section, and are only used for inter-cluster communication or for access by the load balancer; Internal and Public scope addresses must be resolvable more widely, as described in the notes on DNS below. Notes are comments on why that type of IP is used.
    (EDG Name | IP Name | Type | Host | Bound By | Scope | Notes)
    ADMINVHN | VIP1 | Floating | SOAHOST1-SOAHOSTn | AdminServer | Cluster | Admin server, must be able to migrate between SOA server machines.
    SOAHOST1 | IP1 | Fixed | SOAHOST1 | NodeManager, WLS_WSM1 | Cluster | WSM Server 1 does not require server migration.
    SOAHOST2 | IP2 | Fixed | SOAHOST1 | NodeManager, WLS_WSM2 | Cluster | WSM Server 2 does not require server migration.
    SOAHOST1VHN | VIP2 | Floating | SOAHOST1-SOAHOSTn | WLS_SOA1 | Cluster | SOA server 1, must be able to migrate between SOA server machines.
    SOAHOST2VHN | VIP3 | Floating | SOAHOST1-SOAHOSTn | WLS_SOA2 | Cluster | SOA server 2, must be able to migrate between SOA server machines.
    BAMHOST1 | IP4 | Fixed | BAMHOST1 | NodeManager | Cluster |
    BAMHOST1VHN | VIP4 | Floating | BAMHOST1-BAMHOSTn | WLS_BAM1 | Cluster | BAM server 1, must be able to migrate between BAM server machines.
    BAMHOST2 | IP3 | Fixed | BAMHOST2 | NodeManager, WLS_BAM2 | Cluster | BAM server 2 does not require server migration.
    WEBHOST1 | IP5 | Fixed | WEBHOST1 | OHS1 | Cluster |
    WEBHOST2 | IP6 | Fixed | WEBHOST2 | OHS2 | Cluster |
    soa.mycompany.com | VIP5 | Virtual | LB | LB | Public | External access point to SOA cluster.
    admin.mycompany.com | VIP6 | Virtual | LB | LB | Internal | Internal access to WLS console and EM.
    soainternal.mycompany.com | VIP7 | Virtual | LB | LB | Internal | Internal access point to SOA cluster.
    Floating IP addresses are IP addresses that may be re-assigned between machines in the cluster. For example, in the event of failure of SOAHOST1, WLS_SOA1 will need to be migrated to another server.
    In this case VIP2 (SOAHOST1VHN) will need to be activated on the new target machine. Once set up, the node manager will manage registration and removal of the floating IP addresses, with the exception of the AdminServer floating IP address.
    Note that if the BAMHOSTs and SOAHOSTs are the same machine then you can obviously share the hostname and fixed IP addresses, but you still need separate floating IP addresses for the different managed servers. The hostnames don’t have to be the ones given in the EDG, but they must be distinct in the same way as the EDG names are distinct. If the type is a fixed IP and the addresses are the same, you can use the same hostname; for example, if you collapse soahost1, bamhost1 and webhost1 onto a single machine then you could refer to them all as HOST1 and give them the same IP address. However, SOAHOST1VHN can never be the same as BAMHOST1VHN because these are floating IP addresses.
    Notes on DNS
    IP addresses that are of scope “Cluster” just need to be in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows) of all the machines in the cluster and the load balancer. IP addresses that are of scope “Internal” need to be available on the internal DNS servers, whilst IP addresses of scope “Public” need to be available on external and internal DNS servers.
    Shared File System
    At a minimum the cluster needs shared storage for the domain configuration, XA transaction logs and JMS file stores. It is also possible to place the software itself on a shared server. I strongly recommend that all machines have the same file structure for their SOA installation, otherwise you will experience pain! Section 2.3, "Shared Storage and Recommended Directory Structure", of the EDG covers the shared storage recommendations in detail.
    The following shorthand is used for locations:
    ORACLE_BASE is the root of the file system used for software and configuration files.
    MW_HOME is the location used by the installed SOA/BPM Suite installation. This is also used by the web server installation. In my installation it is set to <ORACLE_BASE>/SOA11gPS2.
    ORACLE_HOME is the location of the Oracle SOA components or the Oracle Web components. This directory is installed under the MW_HOME but the name is decided by the user at installation; default values are Oracle_SOA1 and Oracle_Web1. In my installation they are set to <MW_HOME>/Oracle_SOA and <MW_HOME>/Oracle_WEB.
    ORACLE_COMMON_HOME is the location of the common components and is located under the MW_HOME directory. This is always <MW_HOME>/oracle_common.
    ORACLE_INSTANCE is used by the Oracle HTTP Server and/or Oracle Web Cache. It is recommended to create it under <ORACLE_BASE>/admin. In my installation they are set to <ORACLE_BASE>/admin/Web1, <ORACLE_BASE>/admin/Web2 and <ORACLE_BASE>/admin/WC1.
    WL_HOME is the WebLogic server home and is always found at <MW_HOME>/wlserver_10.3.
    Key file locations are shown below (Directory | Notes):
    <ORACLE_BASE>/admin/domain_name/aserver/domain_name | Shared location for the domain. Used to allow the admin server to manually fail over between machines. When creating domain_name, provide the aserver directory as the location for the domain. In my install this is <ORACLE_BASE>/admin/aserver/soa_domain as I only have one domain on the box.
    <ORACLE_BASE>/admin/domain_name/aserver/applications | Shared location for deployed applications. Needs to be provided when creating the domain. In my install this is <ORACLE_BASE>/admin/aserver/applications as I only have one domain on the box.
    <ORACLE_BASE>/admin/domain_name/mserver/domain_name | Either a unique location for each machine, or shared between machines to simplify the task of packing and unpacking the domain. This acts as the managed server configuration location. Keeping it separate from the Admin Server helps to avoid problems with the managed servers messing up the Admin Server. In my install this is <ORACLE_BASE>/admin/mserver/soa_domain as I only have one domain on the box.
    <ORACLE_BASE>/admin/domain_name/mserver/applications | Either a unique location for each machine, or shared between machines. Holds deployed applications. In my install this is <ORACLE_BASE>/admin/mserver/applications as I only have one domain on the box.
    <ORACLE_BASE>/admin/domain_name/soa_cluster_name | Shared directory to hold the following: dd – deployment descriptors; jms – shared JMS file stores; fadapter – shared file adapter co-ordination files; tlogs – shared transaction log files. In my install this is <ORACLE_BASE>/admin/soa_cluster.
    <ORACLE_BASE>/admin/instance_name | Local folder for a web server (OHS) instance. In my install this is <ORACLE_BASE>/admin/web1 and <ORACLE_BASE>/admin/web2. I also have <ORACLE_BASE>/admin/wc1 for the Web Cache I use as a load balancer.
    <ORACLE_BASE>/product/fmw | This can be a shared or local folder for the SOA/BPM Suite software. I used a shared location so I only ran the installer once. In my install this is <ORACLE_BASE>/SOA11gPS2.
    All the shared files need to be put onto shared storage media. I am using NFS, but the recommendation for production would be a SAN, with mirrored disks for resilience.
    Collapsing Environments
    To reduce the hardware requirements it is possible to collapse the BAMHOST, SOAHOST and WEBHOST machines onto a single physical machine. This will require more memory, but memory is a lot cheaper than additional machines. For environments that require higher security, stay with a separate WEBHOST tier as per the EDG. Similarly, for high volume environments keep a separate set of machines for the BAM and/or Web tier as per the EDG.
    Notes on Dev Environments
    In a dev environment it is acceptable to use a single node (non-RAC) database, but be aware that the config of the data sources is different (no need to use multi-data source in WLS). Typically in a dev environment we will collapse the BAMHOST, SOAHOST and WEBHOST onto a single machine and use a software load balancer. To test a cluster properly we will need at least 2 machines.
    For my test environment I used Oracle Web Cache as a load balancer. I ran it on one of the SOA Suite machines and it load balanced across the Web Servers on both machines. This was easy for me to set up and I could administer it from a web-based console.
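    As a side note, the database preparation steps above can be run together from SQL*Plus. The sketch below simply gathers the statements already quoted in this post; DEV stands in for whatever schema prefix you chose when running the RCU and is a placeholder, not a value from the post.
        -- increase the process limit; scope=spfile means it takes effect after a restart
        alter system set processes=400 scope=spfile;
        -- give the RCU-created soainfra schema access to the transaction tables
        grant select on sys.dba_pending_transactions to DEV_soainfra;
        grant force any transaction to DEV_soainfra;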

    Read the article

  • File transfer problems through VPN when Cisco IPS is enabled

    - by Richard West
    We have a Cisco ASA 5510 firewall with the IPS module installed. We have a customer that we must connect to via VPN to their network to exchange files via FTP. We use the Cisco VPN client (version 5.0.01.0600) on our local workstations, which are behind the firewall and subject to the IPS. The VPN client is successful in connecting to the remote site. However, when we start the FTP file transfer we are able to upload only 150K to 200K of data, then everything stops. A minute later the VPN session is dropped.
    I think I have isolated this to an IPS issue by temporarily disabling the Service Policy on the ASA for the IPS with the following command:
        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 inactive
    After this command was issued I then established the VPN to the remote site and was successful in transferring the entire file. While still connected to the VPN and FTP session I issued the command to re-enable the IPS:
        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0
    The file transfer was tried again and was once again successful, so I closed the FTP session and reopened it, while keeping the same VPN session open. This file transfer was also successful. This told me that nothing with the FTP programs was being filtered or causing the problem. Furthermore, we use FTP to exchange files with many sites every day without issue.
    I then disconnected the original VPN session, which was established while the access-list was inactive, and reconnected the VPN session, now with the access-list active. After starting the FTP transfer the file stopped after 150K. To me this seems like the IPS is blocking, or somehow interfering with, the initial VPN setup to the remote site.
    This only started happening last week after the latest IPS signature updates were applied (sig version 407.0). Our previous sig version was 95 days old because the system was not auto-updating itself. Any ideas on what could be causing this problem?
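    For readers unfamiliar with what is being toggled here: an ACL named IPS like the one above is normally referenced by a class-map/policy-map pair that diverts matching traffic to the AIP-SSM sensor, which is why deactivating the ACL effectively bypasses the IPS. The class-map name and the fail-open mode in this sketch are assumptions for illustration only, not taken from the poster's running configuration:
        access-list IPS extended permit ip any any
        class-map IPS_CLASS
          match access-list IPS
        policy-map global_policy
          class IPS_CLASS
            ips inline fail-open
        service-policy global_policy global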

    Read the article

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator PAM module to provide two-step authentication for ssh. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below:
        # PAM configuration for the Secure Shell service
        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale
        # Standard Un*x authentication.
        @include common-auth
        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so
        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account  required     pam_access.so
        # Standard Un*x authorization.
        @include common-account
        # Standard Un*x session setup and teardown.
        @include common-session
        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]
        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]
        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so
        # Set up SELinux capabilities (need modified pam)
        # session  required     pam_selinux.so multiple
        # Standard Un*x password updating.
        @include common-password
        auth       required     pam_google_authenticator.so
    I've already tried adding an auth sufficient pam_exec.so /etc/pam.d/ip.sh line above the google-authenticator line, but I can't understand how to check an IP address in the bash script.
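    One possible shape for that helper script, offered only as a sketch: pam_exec.so exports the connecting client's address to the child process as the PAM_RHOST environment variable, so the script can compare it against the whitelisted address (203.0.113.10 below is a placeholder) and exit 0 to signal success.
        #!/bin/bash
        # /etc/pam.d/ip.sh - succeed only for the whitelisted source address.
        # pam_exec.so makes the remote host available as $PAM_RHOST.
        WHITELISTED_IP="203.0.113.10"      # placeholder trusted address
        if [ "$PAM_RHOST" = "$WHITELISTED_IP" ]; then
            exit 0                         # PAM treats exit status 0 as success
        fi
        exit 1                             # otherwise fall through to google-authenticator
    With auth sufficient pam_exec.so quiet /etc/pam.d/ip.sh placed immediately above the pam_google_authenticator.so line, a successful match skips the verification-code prompt, while password authentication is still enforced by the earlier @include common-auth block.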

    Read the article

  • Rebooting access point via SSH with pexpect... hangs. Any ideas?

    - by MiniQuark
    When I want to reboot my D-Link DWL-3200-AP access point from my bash shell, I connect to the AP using ssh and I just type reboot in the CLI interface. After about 30 seconds, the AP is rebooted:
        # ssh [email protected]
        [email protected]'s password: ********
        Welcome to Wireless SSH Console!! ['help' or '?' to see commands]
        Wireless Driver Rev 4.0.0.167
        D-Link Access Point
        wlan1 -> reboot
    Sounds great? Well, unfortunately the ssh client process never exits, for some reason (maybe the AP kills the ssh server a bit too fast, I don't know). My ssh client process is completely blocked (even if I wait for several minutes, nothing happens). I always have to wait for the AP to reboot, then open another shell, find the ssh client process ID (using ps aux | grep ssh) then kill the ssh process using kill <pid>. That's quite annoying.
    So I decided to write a python script to reboot the AP. The script connects to the AP's CLI interface via ssh, using python-pexpect, and it tries to launch the "reboot" command. Here's what the script looks like:
        #!/usr/bin/python
        # usage: python reboot_ap.py {host} {user} {password}
        import pexpect
        import sys
        import time

        command = "ssh %(user)s@%(host)s"%{"user":sys.argv[2], "host":sys.argv[1]}
        session = pexpect.spawn(command, timeout=30) # start ssh process
        response = session.expect(r"password:")      # wait for password prompt
        session.sendline(sys.argv[3])                # send password
        session.expect(" -> ")                       # wait for D-Link CLI prompt
        session.sendline("reboot")                   # send the reboot command
        time.sleep(60)                               # make sure the reboot has time to actually take place
        session.close(force=True)                    # kill the ssh process
    The script connects properly to the AP (I tried running some other commands than reboot, they work fine), it sends the reboot command, waits for one minute, then kills the ssh process. The problem is: this time, the AP never reboots! I have no idea why. Any solution, anyone?
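    As a purely illustrative tweak (untested against this particular access point), the fixed sleep at the end of the script could be replaced by waiting for the ssh child to exit on its own, which also makes the hang visible instead of silently waiting a minute:
        # sketch: replaces the time.sleep(60)/close() tail of the script above
        session.sendline("reboot")
        try:
            session.expect(pexpect.EOF, timeout=60)   # wait up to 60s for ssh to exit by itself
        except pexpect.TIMEOUT:
            pass                                      # the AP never closed the connection
        session.close(force=True)                     # kill the lingering ssh process either way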

    Read the article

  • cups log kills ubuntu 12.04 and sudoer permissions changed

    - by peterretief
    I am using Ubuntu 12.04 as a desktop and recently had a weird crash: the log file for cups filled up the entire drive and would not let me back in. Also, /var/lib/sudo had changed owner from root to peter (me). I didn't make this change - I checked the history! I set the sudoers files back to root and capped the max size for the cups log.
    Anyone had a similar experience? It feels like someone is messing around with my settings. Is there any way to trace how the error occurred?
    Logs
    auth.log
        Jan 1 02:04:13 peter-desktop lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Jan 1 02:04:13 peter-desktop lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
        Jan 1 02:06:53 peter-desktop lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
        Jan 1 02:06:53 peter-desktop lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
    syslog
        Jan 1 02:04:13 peter-desktop rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="903" x-info="http://www.rsyslog.com"] start
        Jan 1 02:04:13 peter-desktop rsyslogd: rsyslogd's groupid changed to 103
        Jan 1 02:04:13 peter-desktop rsyslogd: rsyslogd's userid changed to 101
        Jan 1 02:04:13 peter-desktop rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]
        Jan 1 02:04:13 peter-desktop bluetoothd[898]: Failed to init gatt_example plugin
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Initializing cgroup subsys cpuset
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Initializing cgroup subsys cpu
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] Linux version 3.2.0-25-generic-pae (buildd@palmer) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #40-Ubuntu SMP Wed May 23 22:11:24 UTC 2012 (Ubuntu 3.2.0-25.40-generic-pae 3.2.18)
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000] KERNEL supported cpus:
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000]   Intel GenuineIntel
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000]   AMD AuthenticAMD
        Jan 1 02:04:13 peter-desktop kernel: [ 0.000000]   NSC Geode by NSC
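    For what it's worth, one way to cap the CUPS log (a sketch only; the 10m limit and warn level are arbitrary choices, not values from the post) is via the MaxLogSize and LogLevel directives in /etc/cups/cupsd.conf, after which cupsd rotates the log itself once the limit is reached:
        # /etc/cups/cupsd.conf (excerpt)
        MaxLogSize 10m      # rotate error_log/access_log once they reach 10 MB
        LogLevel warn       # avoid debug-level logging filling the disk
    followed by restarting the service with sudo service cups restart.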

    Read the article

  • Remote assistance from Remote Desktop sessions: unable to control

    - by syneticon-dj
    Since Remote Control (aka Session Shadowing) is gone for good in Server 2012 Remote Desktop Session Hosts, I am looking for a replacement to support users in a cross-domain environment. Since Remote Assistance is supposed to work for Remote Desktop sessions as well, I tried leveraging that for support purposes by enabling unsolicited remote assistance for all Remote Desktop Session Hosts via Group Policy.
    All seems to be working well except that the "expert" seems to be unable to actually exercise any mouse or keyboard control when the remote assistance session has been initiated from a Remote Desktop session itself. Mouse clicks and keyboard strokes from the "expert" session (Server 2012) seem to simply be ignored even after the assisted user has acknowledged the request for control.
    I would like to see this working through RD sessions for the support staff for a number of reasons:
    - not every support agent would have the appropriate client system version to support users on a specific terminal server (e.g. an agent might have a Windows Vista or Windows 7 station and thus be unable to offer assistance to users on Server 2012 RDSHs)
    - a support agent would not necessarily have a station which is a member of the specific destination domain (mainly because more than a single domain's users are supported)
    What am I missing?

    Read the article

  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment:
    CORE SERVER - Server 2012 Datacentre (all below are Hyper-V servers)
    - Server1: Cloud-DC01 (Active Directory Domain Services for mycloud.local)
    - Server2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode)
    - Server3: Cloud-SG01 (Remote Desktop Gateway)
    CORE SERVER 2 - Server 2012 Datacentre (all below are Hyper-V servers)
    - Server1: Cloud-DC02 (Active Directory Domain Services for mycloud.local)
    - Server2: Cloud-TS01 (Remote Desktop Session Host for Company A)
    - Server3: Cloud-TS02 (Remote Desktop Session Host for Company B)
    - Server4: Cloud-TS03 (Remote Desktop Session Host for Company C)
    What I thought about doing was setting up each organisation in its own OU (perhaps creating the OU structure based on the Exchange 2010 tenant OU structure so the accounts are linked). Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and authenticates, so they are pushed onto the correct server (based on session collections in 2012).
    I won't lie, this is something I have come up with quite quickly, so there may well be something gapingly obvious that I am missing. Any feedback would be appreciated.
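    To make the session-collection idea concrete, the sketch below shows how one collection per tenant might be created and locked down to a per-company security group using the Server 2012 RemoteDesktop PowerShell module. All names (hosts, broker, groups) are hypothetical and only echo the naming used in the question; in particular, the connection broker role is assumed to live on Cloud-SG01.
        Import-Module RemoteDesktop
        # one session collection per company, each backed by its own session host
        New-RDSessionCollection -CollectionName "CompanyA" -SessionHost "Cloud-TS01.mycloud.local" -ConnectionBroker "Cloud-SG01.mycloud.local"
        New-RDSessionCollection -CollectionName "CompanyB" -SessionHost "Cloud-TS02.mycloud.local" -ConnectionBroker "Cloud-SG01.mycloud.local"
        # restrict each collection to that company's user group
        Set-RDSessionCollectionConfiguration -CollectionName "CompanyA" -UserGroup "MYCLOUD\CompanyA-Users" -ConnectionBroker "Cloud-SG01.mycloud.local"
        Set-RDSessionCollectionConfiguration -CollectionName "CompanyB" -UserGroup "MYCLOUD\CompanyB-Users" -ConnectionBroker "Cloud-SG01.mycloud.local"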

    Read the article

  • Windows 2008 R2 RDS - Double Login

    - by colo_joe
    Issue: Double logins when connecting to RemoteApps or Remote Desktop.
    Environment:
    - Gateway = 1 server, 2008 R2 - Roles = Gateway, Session Broker, Connection Mgr, Session Host Configuration server
    - Session hosts = 2 servers, 2008 R2 - Roles = App Manager and Session host configuration
    Testing: I can get to the url http://RDS.domain.com/rdweb - I get prompted for authentication (1). Pass authentication, get the list of remote apps. Click on a RemoteApp or Remote Desktop, get prompted for authentication again (2). Pass authentication, I get access to the app or RDP.
    Done so far on the session hosts:
    - Signed the rdp files with a cert.
    - Added the following to the custom RDP settings:
      authentication level:i:0 = If server authentication fails, connect to the computer without warning (Connect and don't warn me).
      prompt for credentials on client:i:1 = RDC will prompt for credentials when connecting to a server that does not support server authentication.
      enablecredsspsupport:i:1 = RDP will use CredSSP, if the operating system supports CredSSP.
    - Edited the javascript file as found in http://support.microsoft.com/kb/977507
    - Added the Connection ID, added the Web Access server to the TS Web Access Computers group on the Session host servers, and signed the apps as found in hxxp://blogs.msdn.com/b/rds/archive/2009/08/11/introducing-web-single-sign-on-for-remoteapp-and-desktop-connections.aspx
    Note: This double login happens internally and externally.
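    For reference, the three custom RDP settings quoted above as they would be pasted into the RemoteApp deployment's custom RDP settings box, one property per line in the form the .rdp file format expects:
        authentication level:i:0
        prompt for credentials on client:i:1
        enablecredsspsupport:i:1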

    Read the article

  • PAM with KRB5 to Active Directory - How to prevent update of AD password?

    - by Ex Umbris
    I have a working Fedora 9 system that's set up to authenticate users via PAM - krb5 - Active Directory. I'm migrating this to Fedora 14, and everything works, but it's working too well :-)
    On Fedora 9, if a Linux user updated their password, it did not propagate to their Active Directory account. On Fedora 14, it is changing their A/D password. The problem is I don't want A/D to be updated. Here's my password-auth-ac:
        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        sufficient    pam_krb5.so use_first_pass
        auth        required      pam_deny.so
        account     required      pam_unix.so
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
        account     required      pam_permit.so
        password    requisite     pam_cracklib.so try_first_pass retry=3 type=
        password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
        password    sufficient    pam_krb5.so use_authtok
        password    required      pam_deny.so
        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        -session    optional      pam_systemd.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so
        session     optional      pam_krb5.so
    I tried removing the line
        password    sufficient    pam_krb5.so use_authtok
    But then when attempting to change the Linux password, if they provide their A/D password for the authentication prompt, they get the error:
        passwd: Authentication token manipulation error
    What I want to achieve is:
    - Allow authentication with either the A/D or Linux password (the Linux password is a fall-back for certain sysadmin users in case A/D is unavailable for some reason). This is working now.
    - Allow users to change their Linux passwords without affecting their A/D passwords.
    Is this possible?

    Read the article
