Search Results

Search found 10481 results on 420 pages for 'port mirroring'.

  • OpenVPN: Connection established but can’t connect to server

    - by Maik
    I am trying to set up OpenVPN to allow me to connect a number of laptops to my network in a way that lets the laptops reach specific computers via HTTP (e.g. a server management page) and Windows shares (to access files).

    In the test environment my laptops live in a network with a 192.168.1.x address range. The host network has a 10.66.77.x address range. The server hosting the OpenVPN server has address 10.77.10.20; I need to access some application-server web pages on this machine, accessible on various ports. The server with the Windows shares, as well as some other web-based pages I need to access, is on address 10.66.77.20.

    The config files for server and laptop are attached below. The laptop establishes the VPN connection without problems, but I cannot access any of the machines; even a simple ping fails. Maybe a routing problem? The routing table for the laptop is shown below as well. Every idea is appreciated! Thanks! Maik

    Server config file:

        port 1194
        dev tun
        tls-server
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/projects.crt
        key /etc/openvpn/keys/projects.key
        dh /etc/openvpn/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "route 10.66.77.0 255.255.255.0"
        keepalive 10 60
        inactive 600
        route 10.8.0.1 255.255.255.0
        user openvpn
        group openvpn
        persist-tun
        persist-key
        verb 4

    Client config file:

        dev tun
        proto udp
        remote SERVERADDR 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert accountingLaptop.crt
        key accountingLaptop.key
        ns-cert-type server
        comp-lzo
        verb 3

    Resulting routing table on the client laptop:

        C:\Documents and Settings\User>route print
        ===========================================================================
        Interface List
        0x1 ........................... MS TCP Loopback interface
        0x2 ...00 23 5a 9b 64 9b ...... Atheros AR8132 PCI-E Fast Ethernet Controller - Packet Scheduler Miniport
        0x3 ...00 24 2c 35 c9 6b ...... Dell Wireless 1395 WLAN Mini-Card - Packet Scheduler Miniport
        0x4 ...00 ff 5e 03 43 9b ...... TAP-Win32 Adapter V9 - Packet Scheduler Miniport
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway        Interface      Metric
        0.0.0.0                    0.0.0.0          192.168.1.1    192.168.1.129  25
        10.8.0.1                   255.255.255.255  10.8.0.5       10.8.0.6       1
        10.8.0.4                   255.255.255.252  10.8.0.6       10.8.0.6       30
        10.8.0.6                   255.255.255.255  127.0.0.1      127.0.0.1      30
        10.66.77.0                 255.255.255.0    10.8.0.5       10.8.0.6       1
        10.255.255.255             255.255.255.255  10.8.0.6       10.8.0.6       30
        127.0.0.0                  255.0.0.0        127.0.0.1      127.0.0.1      1
        192.168.1.0                255.255.255.0    192.168.1.129  192.168.1.129  25
        192.168.1.129              255.255.255.255  127.0.0.1      127.0.0.1      25
        192.168.1.255              255.255.255.255  192.168.1.129  192.168.1.129  25
        224.0.0.0                  240.0.0.0        10.8.0.6       10.8.0.6       30
        224.0.0.0                  240.0.0.0        192.168.1.129  192.168.1.129  25
        255.255.255.255            255.255.255.255  10.8.0.6       2              1
        255.255.255.255            255.255.255.255  10.8.0.6       10.8.0.6       1
        255.255.255.255            255.255.255.255  192.168.1.129  192.168.1.129  1
        Default Gateway:       192.168.1.1
        ===========================================================================
        Persistent Routes:
        None
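    A likely culprit (my assumption; nothing above confirms it): the hosts on 10.66.77.0/24 have no return route for the tunnel subnet 10.8.0.0/24, so their replies never make it back to the laptop. A minimal sketch of the usual workaround on the OpenVPN server, run as root and assuming its LAN-facing interface is eth0:

        # Sketch, assuming the OpenVPN server's LAN-facing interface is eth0.
        # Let the kernel forward packets between tun0 and the LAN...
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # ...and hide the 10.8.0.x clients behind the server's own LAN address,
        # so 10.66.77.x hosts can answer without knowing a route to 10.8.0.0/24.
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    The alternative is adding a static route for 10.8.0.0/24 pointing at the OpenVPN server on the 10.66.77.x default gateway.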

  • Apache Mina FTPServer: unable to log in when using the database user manager

    - by piyush
    I am unable to log in to the Apache FTP server while using the database user manager. When I enter the username and password, I get the following error in the log file:

        [ INFO] 2013-02-07 20:51:07,779 [] [0:0:0:0:0:0:0:1] RECEIVED: USER piyush
        [ INFO] 2013-02-07 20:51:07,781 [piyush] [0:0:0:0:0:0:0:1] SENT: 331 User name okay, need password for piyush.
        [ INFO] 2013-02-07 20:51:07,784 [piyush] [0:0:0:0:0:0:0:1] RECEIVED: PASS *****
        [ WARN] 2013-02-07 20:51:07,785 [piyush] [0:0:0:0:0:0:0:1] User failed to log in
        [ WARN] 2013-02-07 20:51:08,285 [piyush] [0:0:0:0:0:0:0:1] Login failure - piyush
        [ INFO] 2013-02-07 20:51:08,286 [piyush] [0:0:0:0:0:0:0:1] SENT: 530 Authentication failed.
        [ INFO] 2013-02-07 20:51:08,286 [piyush] [0:0:0:0:0:0:0:1] RECEIVED: QUIT
        [ INFO] 2013-02-07 20:51:08,290 [piyush] [0:0:0:0:0:0:0:1] SENT: 221 Goodbye.
        [ INFO] 2013-02-07 20:51:08,291 [piyush] [0:0:0:0:0:0:0:1] CLOSED

    Here is my XML file, ftpd-typical.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!--
          Licensed to the Apache Software Foundation (ASF) under one or more
          contributor license agreements. See the NOTICE file distributed with this
          work for additional information regarding copyright ownership. The ASF
          licenses this file to you under the Apache License, Version 2.0 (the
          "License"); you may not use this file except in compliance with the
          License. You may obtain a copy of the License at
          http://www.apache.org/licenses/LICENSE-2.0
          Unless required by applicable law or agreed to in writing, software
          distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
          WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
          License for the specific language governing permissions and limitations
          under the License.
        -->
        <server xmlns="http://mina.apache.org/ftpserver/spring/v1"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:beans="http://www.springframework.org/schema/beans"
                xsi:schemaLocation="http://mina.apache.org/ftpserver/spring/v1 http://mina.apache.org/ftpserver/ftpserver-1.0.xsd"
                id="Prometheus">
          <listeners>
            <nio-listener name="default" port="2121" />
          </listeners>
          <db-user-manager encrypt-passwords="salted">
            <data-source>
              <beans:bean class="org.apache.commons.dbcp.BasicDataSource">
                <beans:property name="driverClassName" value="com.mysql.jdbc.Driver" />
                <beans:property name="url" value="jdbc:mysql://localhost/apache_test" />
                <beans:property name="username" value="amy" />
                <beans:property name="password" value="piyush" />
              </beans:bean>
            </data-source>
            <insert-user>INSERT INTO FTP_USER (userid, userpassword, homedirectory, enableflag, writepermission, idletime, uploadrate, downloadrate) VALUES ('{userid}', '{userpassword}', '{homedirectory}', {enableflag}, {writepermission}, {idletime}, {uploadrate}, {downloadrate})</insert-user>
            <update-user>UPDATE FTP_USER SET userpassword='{userpassword}', homedirectory='{homedirectory}', enableflag={enableflag}, writepermission={writepermission}, idletime={idletime}, uploadrate={uploadrate}, downloadrate={downloadrate} WHERE userid='{userid}'</update-user>
            <delete-user>DELETE FROM FTP_USER WHERE userid = '{userid}'</delete-user>
            <select-user>SELECT userid, userpassword, homedirectory, enableflag, writepermission, idletime, uploadrate, downloadrate, maxloginnumber, maxloginperip FROM FTP_USER WHERE userid = '{userid}'</select-user>
            <select-all-users>SELECT userid FROM FTP_USER ORDER BY userid</select-all-users>
            <is-admin>SELECT userid FROM FTP_USER WHERE userid='{userid}' AND userid='admin'</is-admin>
            <authenticate>SELECT userpassword FROM FTP_USER WHERE userid='{userid}'</authenticate>
          </db-user-manager>
        </server>
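    With encrypt-passwords="salted", FtpServer compares the supplied password against a salted hash it expects to find in FTP_USER.userpassword; rows inserted by hand with plain-text passwords will fail exactly like this (my assumption about the cause, given that the USER/PASS exchange itself looks fine). A quick isolation test, assuming the table currently holds plain-text passwords, is to switch the manager to clear mode:

        <!-- Sketch, for testing only: compare passwords as plain text.
             Keep the same <data-source> and SQL statements as above. -->
        <db-user-manager encrypt-passwords="clear">
          ...
        </db-user-manager>

    If login then succeeds, either keep clear mode or re-create the users through FtpServer's own UserManager API so that the stored values are salted hashes.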

  • How to use Bonjour?

    - by Roman
    First, what exactly does Bonjour do (please check my guesses written below)? Here I found out that Bonjour enables automatic discovery of computers, devices, and services on IP networks. But I thought that it not only "discovers devices on an IP network", it also creates an IP network by assigning IP addresses to the devices where Bonjour is running. Am I right?

    And I still miss the essence. Does it work in the following way? First I connect devices (for example laptops) physically so that they potentially can communicate with each other. Then, let's say, on some laptops I have Bonjour running, and as a consequence these laptops assign IP addresses to themselves automatically. So the laptops where Bonjour is running build an IP network. Does it work that way? Or maybe a computer running Bonjour is not considered a service, and it does not broadcast itself just because Bonjour is running on it. I mean that the applications running on the computers need to use Bonjour to broadcast themselves. So it is applications that broadcast themselves (not computers), and it is not done automatically (an application needs to broadcast itself explicitly). Is that right?

    How exactly can my application broadcast itself? Can I use the command line to register a service (so that all applications using Bonjour know that a new service has appeared)?

    Further, I would like to have an application which uses the IP network created by Bonjour. For that, my application needs to know which devices/services are present in the network. In more detail, my application needs to have a list of services. Each service in the list should have a name, the IP address where it is running, and the port which is used by the application. Can Bonjour provide this information in some way? If so, how exactly does it work? How can my program get this information from Bonjour? Can my program read some file created by Bonjour that contains the above-mentioned information? Can I use commands in the command line to retrieve this information?

    I have a special interest in accessing the information about services from files, environment variables or command-line commands. These options seem to me to be the simplest, since in those cases I do not need any additional libraries to communicate with Bonjour from a particular programming language.

    P.S. Please ask questions if something is not clear in my question. I will try to formulate it more clearly.

    P.P.S. I use Windows 7.

    ADDED: I plan to write my applications in PHP. Every computer should run an Apache web server. And I want to use Bonjour to help the computers discover each other (the computers are working in a local network).
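    To the command-line part of the question: Bonjour for Windows ships a dns-sd tool that can register, browse and resolve services without any library bindings. A sketch (the service name and port here are made up for illustration):

        rem Advertise a hypothetical HTTP service named "My Apache Server" on port 80
        dns-sd -R "My Apache Server" _http._tcp local 80
        rem Browse for all HTTP services on the local network
        dns-sd -B _http._tcp
        rem Resolve one discovered instance to its host name and port
        dns-sd -L "My Apache Server" _http._tcp local

    Bonjour does not write the service list to a file; browsing is a live multicast DNS query, so a PHP application would typically shell out to dns-sd (or use an mDNS library) rather than read a file or environment variable.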

  • FFSERVER - streaming an ASF video as Webm output

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam's ASF live stream through ffserver to output a WebM video format. The server starts successfully, but the ffmpeg command used to feed ffserver fails and generates a core dump.

    Environment:

        Debian 7.5
        ffmpeg 2.2

    Input stream:

        $ ffprobe http://account:password@webcam/videostream.asf
        Input #0, asf, from 'http://account:password@webcam/videostream.asf':
          Duration: N/A, start: 0.000000, bitrate: 32 kb/s
            Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
            Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s

    My ffserver configuration is:

        Port 8091
        RTSPPort 554
        BindAddress 192.168.1.62
        MaxHTTPConnections 1000
        MaxClients 100
        MaxBandwidth 1000
        CustomLog -

        <Feed webcam.ffm>
          File /tmp/webcam.ffm
          FileMaxSize 500M
          ACL allow localhost
          ACL allow 192.168.0.0 192.168.255.255
        </Feed>

        <Stream webcam.webm>          # Output stream URL definition
          Feed webcam.ffm             # Feed from which to receive video
          Format webm

          # Audio settings
          AudioCodec vorbis
          AudioBitRate 64             # Audio bitrate

          # Video settings
          VideoCodec libvpx
          VideoSize 640x480           # Video resolution
          VideoFrameRate 25           # Video FPS
          AVOptionVideo flags +global_header   # Parameters passed to encoder
                                               # (same as ffmpeg command-line parameters)
          AVOptionVideo cpu-used 0
          AVOptionVideo qmin 10
          AVOptionVideo qmax 42
          AVOptionVideo quality good
          AVOptionAudio flags +global_header
          PreRoll 15
          StartSendOnKey
          # VideoBitRate 32           # Video bitrate
        </Stream>

        <Stream status.html>
          Format status
          # Only allow local people to get the status
          ACL allow localhost
          ACL allow 192.168.0.0 192.168.255.255
        </Stream>

    ffmpeg feed: I run the following command, which fails:

        $ ffmpeg -i http://account:password@webcam/videostream.asf http://192.168.1.62:8091/webcam.ffm
        Input #0, asf, from 'http://account:password@webcam/videostream.asf':
          Duration: N/A, start: 0.000000, bitrate: 32 kb/s
            Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
            Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
        [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly
        Segmentation fault

    I tried

        $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://192.168.1.62:8091/webcam.ffm

    but it raises the same error. Thanks for your help.

    Edit: For easier testing (I thought), I tried to publish the whole ASF stream as-is, meaning connecting the webcam's ASF output stream to an ffserver that outputs ASF format too, and thus with mirrored encoding. So I changed the ffserver configuration to:

        ...
        <Stream webcam.asf>
          Feed webcam.ffm
          Format asf
          VideoFrameRate 25
          VideoSize 640X480
          VideoBitRate 256
          VideoBufferSize 1000
          VideoGopSize 30
          AudioBitRate 32
          StartSendOnKey
        </Stream>
        ...

    And the output is now:

        Input #0, asf, from 'http://account:password@webcam/videostream.asf':
          Duration: N/A, start: 0.000000, bitrate: 32 kb/s
            Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc
            Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
        [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly
        Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm':
          Metadata:
            creation_time   : now
            encoder         : Lavf55.40.100
            Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s
            Metadata:
              encoder       : Lavc55.64.100 wmav2
            Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc
            Metadata:
        Stream mapping:
          Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2)
          Stream #0:0 -> #0:1 (mjpeg -> msmpeg4)
        Press [q] to stop, [?] for help
        Segmentation fault

    I can't even forward the stream.
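    Since the crash happens right after the swscaler warning in both runs, one way to narrow it down (a guess, not a confirmed diagnosis) is to take the audio stream and the implicit pixel-format conversion out of the picture when feeding the ffm:

        # Sketch: feed video only, converting the deprecated yuvj422p explicitly first.
        # If this stays up, the adpcm_ima_wav audio transcode is the likely crash site.
        ffmpeg -i http://account:password@webcam/videostream.asf \
               -an -vf format=yuv420p \
               http://192.168.1.62:8091/webcam.ffm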

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to SSH to a server, I'm able to do it, as my id_rsa.pub key is added to the authorized keys on the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what the issue could be if I'm able to SSH but unable to deploy to the same server.

        $ cap deploy:setup
        "no seed data"
        triggering start callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `multistage:ensure'
        *** Defaulting to `development'
        * 13:42:18 == Currently executing `development'
        * 13:42:18 == Currently executing `deploy:setup'
        triggering before callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `db:configure_mongoid'
        * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
        servers: ["dev1.noob.com", "176.9.24.217"]
        Password:

    Cap script:

        # gem install capistrano capistrano-ext capistrano_colors
        begin; require 'capistrano_colors'; rescue LoadError; end
        require "bundler/capistrano"

        # RVM bootstrap
        # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require 'rvm/capistrano'
        set :rvm_ruby_string, 'ruby-1.9.2-p290'
        set :rvm_type, :user

        # Application setup
        default_run_options[:pty] = true   # allow pseudo-terminals
        ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from the git repository)
        ssh_options[:port] = 22

        set :ip, "dev1.noob.com"
        set :application, "flyingbird"
        set :repository, "repo-path"
        set :scm, :git
        set :branch, fetch(:branch, "master")
        set :deploy_via, :remote_cache
        set :rails_env, "production"
        set :use_sudo, false
        set :scm_username, "user"
        set :user, "user1"
        set(:database_username) { application }
        set(:production_database) { application + "_production" }
        set(:staging_database) { application + "_staging" }
        set(:development_database) { application + "_development" }

        role :web, ip                   # Your HTTP server, Apache/etc
        role :app, ip                   # This may be the same as your `Web` server
        role :db, ip, :primary => true  # This is where Rails migrations will run

        # Use multi-staging
        require "capistrano/ext/multistage"
        set :stages, ["development", "staging", "production"]
        set :default_stage, rails_env

        before "deploy:setup", "db:configure_mongoid"

        # Uncomment if you use any of these databases
        after "deploy:update_code", "db:symlink_mongoid"
        after "deploy:update_code", "uploads:configure_shared"
        after "uploads:configure_shared", "uploads:symlink"
        after 'deploy:update_code', 'bundler:symlink_bundled_gems'
        after 'deploy:update_code', 'bundler:install'
        after "deploy:update_code", "rvm:trust_rvmrc"

        # Use this to update crontab if you use the 'whenever' gem
        # after "deploy:symlink", "deploy:update_crontab"

        if ARGV.include?("seed_data")
          after "deploy", "db:seed"
        else
          p "no seed data"
        end

        # Custom tasks to handle resque and redis restart
        before "deploy", "deploy:stop_workers"
        after "deploy", "deploy:restart_redis"
        after "deploy", "deploy:start_workers"
        after "deploy", "deploy:cleanup"

        # Create symlink for public uploads
        namespace :uploads do
          task :symlink do
            run <<-CMD
              rm -rf #{release_path}/public/uploads &&
              mkdir -p #{release_path}/public &&
              ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
            CMD
          end

          task :configure_shared do
            run "mkdir -p #{shared_path}/public"
            run "mkdir -p #{shared_path}/public/uploads"
          end
        end

        namespace :rvm do
          desc 'Trust rvmrc file'
          task :trust_rvmrc do
            run "rvm rvmrc trust #{current_release}"
          end
        end

        namespace :db do
          desc "Create mongoid.yml in shared path"
          task :configure_mongoid do
            db_config = <<-EOF
        defaults: &defaults
          host: localhost

        production:
          <<: *defaults
          database: #{production_database}

        staging:
          <<: *defaults
          database: #{staging_database}
        EOF
            run "mkdir -p #{shared_path}/config"
            put db_config, "#{shared_path}/config/mongoid.yml"
          end

          desc "Make symlink for mongoid.yml"
          task :symlink_mongoid do
            run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
          end

          desc "Fill the database with seed data"
          task :seed do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
          end
        end

        namespace :bundler do
          desc "Symlink bundled gems on each release"
          task :symlink_bundled_gems, :roles => :app do
            run "mkdir -p #{shared_path}/bundled_gems"
            run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
          end

          desc "Install bundled gems"
          task :install, :roles => :app do
            run "cd #{release_path} && bundle install --deployment"
          end
        end

        namespace :deploy do
          task :start, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Restart the app"
          task :restart, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Stop the workers"
          task :stop_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
          end

          desc "Restart Redis server"
          task :restart_redis do
            "/etc/init.d/redis-server restart"
          end

          desc "Start the workers"
          task :start_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
          end
        end
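    Two things worth checking, since plain ssh works but cap doesn't (assumptions based on the output above, which doesn't show the SSH negotiation): the task executes against two servers ("dev1.noob.com" and "176.9.24.217"), and the key may only be authorized on one of them; and Capistrano's Net::SSH session may not be offering the same key as the ssh client. A sketch that pins both down in the Cap script:

        # Sketch: make Capistrano use exactly the key that plain `ssh` uses,
        # and fail loudly instead of falling back to a password prompt.
        ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "id_rsa")]
        ssh_options[:auth_methods] = %w(publickey)

    If the deploy then fails against 176.9.24.217 only, the key simply isn't in that machine's authorized_keys.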

  • Red Hat Yum not working out of the box?

    - by Tucker
    I have a server running Red Hat Enterprise Linux v5.6 in the cloud. My project constraints do not allow me to use another OS. When I created the cloud server, I was able to SSH into it and access the shell. I next ran the command:

        sudo yum update

    but the command failed. About a month ago I created another server with the same machine image and didn't have that error. Why is it failing now? The following is the terminal output:

        sudo yum update
        Loaded plugins: security
        Repository rhel-server is listed more than once in the configuration
        Traceback (most recent call last):
          File "/usr/bin/yum", line 29, in ?
            yummain.user_main(sys.argv[1:], exit_code=True)
          File "/usr/share/yum-cli/yummain.py", line 309, in user_main
            errcode = main(args)
          File "/usr/share/yum-cli/yummain.py", line 178, in main
            result, resultmsgs = base.doCommands()
          File "/usr/share/yum-cli/cli.py", line 345, in doCommands
            self._getTs(needTsRemove)
          File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs
            self._getTsInfo(remove_only)
          File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo
            pkgSack = self.pkgSack
          File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 662, in <lambda>
            pkgSack = property(fget=lambda self: self._getSacks(),
          File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 502, in _getSacks
            self.repos.populateSack(which=repos)
          File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack
            sack.populate(repo, mdtype, callback, cacheonly)
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 168, in populate
            if self._check_db_version(repo, mydbtype):
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 226, in _check_db_version
            return repo._check_db_version(mdtype)
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1233, in _check_db_version
            repoXML = self.repoXML
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1406, in <lambda>
            repoXML = property(fget=lambda self: self._getRepoXML(),
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1398, in _getRepoXML
            self._loadRepoXML(text=self)
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1388, in _loadRepoXML
            return self._groupLoadRepoXML(text, ["primary"])
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1372, in _groupLoadRepoXML
            if self._commonLoadRepoXML(text):
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1208, in _commonLoadRepoXML
            result = self._getFileRepoXML(local, text)
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 989, in _getFileRepoXML
            cache=self.http_caching == 'all')
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 826, in _getFile
            http_headers=headers,
          File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 412, in urlgrab
            return self._mirror_try(func, url, kw)
          File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 398, in _mirror_try
            return func_ref( *(fullurl,), **kwargs )
          File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 936, in urlgrab
            return self._retry(opts, retryfunc, url, filename)
          File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 854, in _retry
            r = apply(func, (opts,) + args, {})
          File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 922, in retryfunc
            fo = URLGrabberFileObject(url, filename, opts)
          File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1010, in __init__
            self._do_open()
          File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1093, in _do_open
            fo, hdr = self._make_request(req, opener)
          File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1202, in _make_request
            fo = opener.open(req)
          File "/usr/lib64/python2.4/urllib2.py", line 358, in open
            response = self._open(req, data)
          File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
            '_open', req)
          File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
            result = func(*args)
          File "/usr/lib64/python2.4/site-packages/M2Crypto/m2urllib2.py", line 82, in https_open
            h.request(req.get_method(), req.get_selector(), req.data, headers)
          File "/usr/lib64/python2.4/httplib.py", line 810, in request
            self._send_request(method, url, body, headers)
          File "/usr/lib64/python2.4/httplib.py", line 833, in _send_request
            self.endheaders()
          File "/usr/lib64/python2.4/httplib.py", line 804, in endheaders
            self._send_output()
          File "/usr/lib64/python2.4/httplib.py", line 685, in _send_output
            self.send(msg)
          File "/usr/lib64/python2.4/httplib.py", line 652, in send
            self.connect()
          File "/usr/lib64/python2.4/site-packages/M2Crypto/httpslib.py", line 47, in connect
            self.sock.connect((self.host, self.port))
          File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 174, in connect
            ret = self.connect_ssl()
          File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 167, in connect_ssl
            return m2.ssl_connect(self.ssl, self._timeout)
        M2Crypto.SSL.SSLError: certificate verify failed
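    The last line is the actual failure: M2Crypto rejects the SSL certificate of the repository host. Common causes (my reading of the traceback, not something it proves) are a badly skewed system clock or a CA/RHN certificate that changed since the older image was built. A couple of checks, sketched:

        # A badly skewed clock makes every certificate look invalid
        date
        # As a diagnostic only: retry with SSL verification off by adding
        # the following line to /etc/yum.conf, then running 'sudo yum update'
        sslverify=0

    If it works with verification off, the fix is updating the CA bundle and RHN client certificates rather than leaving sslverify disabled. The "Repository rhel-server is listed more than once" warning also points at a duplicated .repo entry worth cleaning up in /etc/yum.repos.d/.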

  • Debian, 6rd tunnel, and connection troubles

    - by Chris B
    Long story short, I am having issues with IPv6 using a 6rd tunnel with my ISP, Charter Business. They offer a 6rd tunnel that I think I have properly set up, but the server doesn't reply to every IPv6 request. When the server's network interfaces have been idle with no traffic for about 10 minutes, IPv6 stops accepting inbound connections. To re-allow it, I must go onto the server and make it open an outbound IPv6 connection (normally a ping) to start it back up. What's weird, though, is that if I run iptraf when it's not working, it still shows an inbound IPv6 packet... the server is just not replying, and I can't figure out why.

    Also, if I try to access my server over IPv6 from a house about 1 mile away on the same ISP, it is never able to connect. It always times out, but again iptraf shows an inbound IPv6 packet. Again, it just does not reply. To test whether my server is accessible through IPv6, I always have to use my VZW 4G phone (they use IPv6) or ipv6proxy dot net.

    Here is all of the configuration information my ISP gives for their tunnel server:

        6rd Prefix = 2602:100::/32
        Border Relay Address = 68.114.165.1
        6rd prefix length = 32
        IPv4 mask length = 0

    Here is my /etc/network/interfaces for IPv6 (x's used to mask real addresses):

        auto charterv6
        iface charterv6 inet6 v4tunnel
            address 2602:100:189f:xxxx::1
            netmask 32
            ttl 64
            gateway ::68.114.165.1
            endpoint 68.114.165.1
            local 24.159.218.xxx
            up ip link set mtu 1280 dev charterv6

    Here is my iptables config:

        *filter
        :INPUT DROP [0:0]
        :fail2ban-ssh - [0:0]
        :OUTPUT ACCEPT [0:0]
        :FORWARD DROP [0:0]
        :hold - [0:0]
        -A INPUT -p tcp -m tcp --dport 22 -j fail2ban-ssh
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m multiport -j ACCEPT --dports 80,443,25,465,110,995,143,993,587,465,22
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
        -A fail2ban-ssh -j RETURN
        -A INPUT -p icmp -j ACCEPT
        COMMIT

    And last, here is my ip6tables firewall config:

        *filter
        :INPUT DROP [1653:339023]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [60141:13757903]
        :hold - [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m multiport --dports 80,443,25,465,110,995,143,993,587,465,22 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT
        -A INPUT -p ipv6-icmp -j ACCEPT
        COMMIT

    So, in summary:

    1. iptraf always shows IPv6 traffic, so it's always making it to the server.
    2. The server stops replying on IPv6 after no traffic for a while (10 minutes or so) until an outbound connection is made; then the process repeats.
    3. The server is NEVER accessible via the same ISP (yet iptraf still shows the IPv6 request).

    Notes: when I try to access it from the same ISP from across town, even with iptables and ip6tables allowing ALL inbound traffic, this is what iptraf shows:

        IPv6 (92 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0
        ICMP dest unrch (port) (120 bytes) from 24.159.218.xxx to 97.92.18.xxx on eth1

    It's strange, like it's trying to forward to the LAN? (eth1 is LAN, eth0 is WAN.) And that happens even with the IPv6 address set in the hosts file for the server's domain name. With iptables set up normally with the above configurations, it only shows this:

        IPv6 (100 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0

    I'm REALLY stuck on this, and any help would be GREATLY appreciated.
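    Until the root cause turns up, the 10-minute symptom can be worked around with a keepalive that generates outbound IPv6 traffic before the idle window closes (a sketch; the ping target is arbitrary):

        # /etc/cron.d/ipv6-keepalive -- hypothetical workaround:
        # one outbound ICMPv6 echo every 5 minutes keeps the 6rd path primed
        */5 * * * * root ping6 -c 1 ipv6.google.com >/dev/null 2>&1

    The pattern (inbound packets visible in iptraf, no replies, fixed by any outbound traffic) is consistent with the ISP's border relay losing its mapping for the tunnel endpoint until traffic refreshes it, which a periodic keepalive also masks.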

  • Rsync: how to mount truecrypt on-the-fly on the receiving side?

    - by deepc
    The short version: how can I keep an rsync backup on a TrueCrypt volume? The hard part is to mount/unmount this volume on the fly when it is needed for rsync.

    Details. This is my current backup configuration (which works fairly well for the most part):

    - The backup source is on Win7 64-bit; the destination is a remote Linux box (Debian).
    - The actual data transfer is done by rsync via ssh (cwRsync with cygwin).
    - The rsync daemon is started on demand via ssh.

    On the Linux box the backup is protected by file permissions only. I want to increase security here and put the backup into a TrueCrypt volume. I can fuse-mount that volume manually in the shell. The question is now: how can I make rsync not only open an ssh connection and start the rsync daemon, but also mount the TrueCrypt volume before (and unmount it after)?

    My money is on option --rsync-path, which can be used to pass a command line to ssh, provided that stdin and stdout still work the same. I guess that command would have to be a shell script. Is this possible, and what would the script look like? For reference, here's a quote of that option from the full rsync man page:

        --rsync-path=PROGRAM
            Use this to specify what program is to be run on the remote machine to
            start-up rsync. Often used when rsync is not in the default remote-shell's
            path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run
            with the help of a shell, so it can be any program, script, or command
            sequence you'd care to run, so long as it does not corrupt the standard-in
            & standard-out that rsync is using to communicate. One tricky example is
            to set a different default directory on the remote machine for use with
            the --relative option. For instance:
                rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/

    TrueCrypt volume auto-mount: solved! It turns out this option is actually the key to auto-mounting the TrueCrypt volume on the remote side. The following command line does the trick (one line!):

        rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" \
          --rsync-path="/usr/local/bin/truecrypt -d && /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" \
          $localSourceDir $user:$remoteMountMountDir

    TrueCrypt volume auto-dismount: still open. How can I unmount the volume when rsync is done? Not sure if the following makes sense to anyone, but I'll give it a try... Right now I am unmounting (truecrypt -d), then mounting again, then continuing with rsync. At that point rsync needs to do its thing, but I don't know when it's done. Adding "... rsync && truecrypt -d" to the command line does not work, because then the rsync daemon does not start. This is because rsync starts the daemon with parameter --server on the remote side, and that parameter would go to the final truecrypt -d.
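    One way around the dismount problem (a sketch with made-up paths, untested here) is to point --rsync-path at a wrapper script on the remote side, so the mount, the rsync server, and the dismount all run in one process that knows when rsync exits:

        #!/bin/sh
        # /usr/local/bin/rsync-tc -- hypothetical wrapper for use via --rsync-path.
        # Mounts the volume, runs the rsync server with whatever arguments the
        # client passed (including --server), then dismounts once rsync exits.
        /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$(id -u),umask=0007 \
            --non-interactive -p "$TC_PASSWORD" /path/to/volume /mnt/backup
        /usr/bin/rsync "$@"
        status=$?
        /usr/local/bin/truecrypt -d /mnt/backup
        exit $status

    The client side then shrinks to --rsync-path="/usr/local/bin/rsync-tc". The password has to come from the remote environment or a root-only file ($TC_PASSWORD is a placeholder), since the script, not the client command line, now performs the mount.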

  • Magento installation problem on Nginx in Windows

    - by Nithin
    I am trying to install Magento locally using nginx as the web server instead of Apache. I copied the magento folder into the html directory. When I try to open the magento folder, I get a 404 Not Found error. I am able to access other PHP files set up in the html folder, and PHP is installed. Here is my config file:

        #user  nobody;
        worker_processes  1;

        #error_log  logs/error.log;
        #error_log  logs/error.log  notice;
        #error_log  logs/error.log  info;
        #pid        logs/nginx.pid;

        events {
            worker_connections  1024;
        }

        http {
            include       mime.types;
            default_type  application/octet-stream;

            #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
            #                  '$status $body_bytes_sent "$http_referer" '
            #                  '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log  logs/access.log  main;

            sendfile        on;
            #tcp_nopush     on;

            #keepalive_timeout  0;
            keepalive_timeout  65;

            #gzip  on;

            server {
                listen       8080;
                server_name  localhost;

                #charset koi8-r;
                #access_log  logs/host.access.log  main;

                location / {
                    root   html;
                    index  index.html index.htm index.php;
                }

                #error_page  404              /404.html;

                # redirect server error pages to the static page /50x.html
                error_page   500 502 503 504  /50x.html;
                location = /50x.html {
                    root   html;
                    allow  all;
                }

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #location ~ \.php$ {
                #    proxy_pass   http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                location ~ \.php$ {
                    root           html;
                    fastcgi_pass   127.0.0.1:9000;
                    fastcgi_index  index.php;
                    fastcgi_param  SCRIPT_FILENAME  c:/nginx/html/$fastcgi_script_name;
                    include        fastcgi_params;
                }

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #location ~ /\.ht {
                #    deny  all;
                #}
            }

            # another virtual host using mix of IP-, name-, and port-based configuration
            #server {
            #    listen       8000;
            #    listen       somename:8080;
            #    server_name  somename  alias  another.alias;
            #    location / {
            #        root   html;
            #        index  index.html index.htm;
            #    }
            #}

            # HTTPS server
            #server {
            #    listen       443;
            #    server_name  localhost;
            #    ssl  on;
            #    ssl_certificate      cert.pem;
            #    ssl_certificate_key  cert.key;
            #    ssl_session_timeout  5m;
            #    ssl_protocols  SSLv2 SSLv3 TLSv1;
            #    ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            #    ssl_prefer_server_ciphers  on;
            #    location / {
            #        root   html;
            #        index  index.html index.htm;
            #    }
            #}
        }

    How do I fix this? This is what I found in the error.log file:

        2011/09/06 12:22:35 [error] 5632#0: *1 "/cygdrive/c/nginx/html/magento/index.php/install/index.html"
        is not found (20: Not a directory), client: 127.0.0.1, server: localhost,
        request: "GET /magento/index.php/install/ HTTP/1.1", host: "localhost:8080"
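    The error log gives the mechanism away: the request /magento/index.php/install/ carries PATH_INFO after index.php, and the `location ~ \.php$` block never matches it (the URI doesn't end in .php), so nginx tries to serve it as a file. A sketch of a PHP location that handles trailing path info, assuming the same FastCGI backend as above:

        # Sketch: match .php anywhere in the URI and split off the PATH_INFO
        location ~ ^(.+\.php)(/.+)?$ {
            root           html;
            fastcgi_split_path_info  ^(.+\.php)(/.+)$;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  c:/nginx/html$fastcgi_script_name;
            fastcgi_param  PATH_INFO        $fastcgi_path_info;
            include        fastcgi_params;
        }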

  • Postfix/SMTPD Relay Access Denied when sending outside the network

    - by David
    I asked a very similar question some 4 or 5 months ago but haven't tracked down a suitable answer. I decided to post a new question so that I can (a) post updated info and (b) post my most current postconf -n output.

    When a user sends mail from inside the network (via webmail) to email addresses both inside and outside the network, the email is delivered. When a user with an email account on the system sends mail from outside the network, using the server as the relay, to addresses inside the network, the email is delivered. But [sometimes] when a user connects via SMTPD to send email to an external address, a Relay Access Denied error is returned:

        Feb 25 19:33:49 myers postfix/smtpd[8044]: NOQUEUE: reject: RCPT from host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182]: 554 5.7.1 <host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182]>: Client host rejected: Access denied; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<my-computer-name>
        Feb 25 19:33:52 myers postfix/smtpd[8044]: disconnect from host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182]

    Sending through Microsoft Outlook 2003 generates the above log. However, sending through my iPhone, with the exact same settings, goes through fine:

        Feb 25 19:37:18 myers postfix/qmgr[3619]: A2D861302C9: from=<[email protected]>, size=1382, nrcpt=1 (queue active)
        Feb 25 19:37:18 myers amavis[2799]: (02799-09) FWD via SMTP: <[email protected]> -> <[email protected]>,BODY=7BIT 250 2.0.0 Ok, id=02799-09, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as A2D861302C9
        Feb 25 19:37:18 myers amavis[2799]: (02799-09) Passed CLEAN, [68.169.158.182] [68.169.158.182] <[email protected]> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: yMLvzVQJloFV, Hits: -9.607, size: 897, queued_as: A2D861302C9, 6283 ms
        Feb 25 19:37:18 myers postfix/lmtp[8752]: 2ED3A1302C8: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=6.6, delays=0.25/0.01/0.19/6.1, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=02799-09, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as A2D861302C9)
        Feb 25 19:37:18 myers postfix/qmgr[3619]: 2ED3A1302C8: removed

    Outgoing settings in Outlook 2003 match the settings on my iPhone:

        SMTP server: mail.my-domain.com
        Username: my full email address
        Uses SSL
        Server port 587

    Now, here's postconf -n. I realize the mynetworks parameter is a bit nasty; I have these IP addresses in there for just this reason, as others have been complaining of this problem too:

        alias_database = hash:/etc/postfix/aliases
        alias_maps = $alias_database
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        content_filter = amavisfeed:[127.0.0.1]:10024
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        disable_vrfy_command = yes
        html_directory = no
        inet_interfaces = all
        mail_owner = postfix
        mail_spool_directory = /var/spool/mail
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq
        manpage_directory = /usr/share/man
        message_size_limit = 20480000
        mydestination = $myhostname, localhost, localhost.$mydomain
        mydomain = my-domain.com
        myhostname = myers.my-domain.com
        mynetworks = 127.0.0.0/8, 74.125.113.27, 74.125.82.49, 74.125.79.27, 209.85.161.0/24, 209.85.214.0/24, 209.85.216.0/24, 209.85.212.0/24, 209.85.160.0/24
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        receive_override_options = no_address_mappings
        recipient_delimiter = +
        relay_domains = $mydestination
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail
        setgid_group = postdrop
        smtp_bind_address = my-primary-server's IP address
        smtpd_banner = mail.my-domain.com
        smtpd_helo_required = yes
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/mailserver/postfix.pem
        smtpd_tls_key_file = /etc/ssl/mailserver/private/postfix.pem
        smtpd_tls_loglevel = 3
        smtpd_tls_received_header = no
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 554
        virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf, mysql:/etc/postfix/mysql-email2email.cf
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /var/vmail
        virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
        virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
        virtual_minimum_uid = 5000
        virtual_transport = dovecot
        virtual_uid_maps = static:5000

    If anyone has any ideas and can help me finally solve this issue once and for all, I'd be eternally grateful.
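    One gap stands out in that postconf output (an observation, not a guaranteed fix): there is no smtpd_recipient_restrictions, so nothing explicitly permits SASL-authenticated clients from arbitrary IPs before the relay decision, and whether a client wins or loses then hinges on whether its AUTH actually succeeded. A sketch of the standard arrangement in main.cf:

        # Sketch: allow LAN and authenticated users to relay, reject everyone else
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

    With that in place, the per-client IP entries in mynetworks should become unnecessary. It would also be worth confirming in the log whether Outlook 2003 actually completes STARTTLS and AUTH on port 587 (smtpd_tls_auth_only = yes hides AUTH until TLS is up, and smtpd_tls_loglevel is already 3, so the exchange should be visible); the iPhone succeeding while Outlook fails fits that theory.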

  • Is there really a need for encryption to have true wireless security? [closed]

    - by Cawas
    I welcome better keywording here, both on tags and title. I'm trying to conceive a free, open and secure network environment that would work anywhere, from big enterprises to small home networks of just one machine. Since wireless access points are the biggest, if not the only, true weak point of a Local Area Network (let's not consider every other security aspect of having internet), there would be basically two points to consider here:

    1. Having an open AP for anyone to use the internet through.
    2. Leaving the whole LAN also open for guests, so they can easily read (only) files on it, and even have a place to drop files on.

    Considering these two aspects, once everything is done properly... what's the more secure option: having that, or having just an encrypted, password-protected wifi? Of course "both" would seem "more secure", but the difference shouldn't actually be anything substantial.

    That's the question, but I think it may need more elaboration. If you don't think so, please feel free to skip the next (long) part.

    Elaborating more on the two aspects: I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering. I understand it's a natural way of thinking: with cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what building walls is for the cables, and adding pass-phrases would be adding a door with a key. But cabling without encryption is also insecure: if someone plugs in, all the data is right there. So, while I can see the use for encrypting data, I don't think it's a security measure in wireless networks; it's wasting resources for too little gain.

    I believe we should encrypt only sensitive data, regardless of wires. That's already done with HTTPS, so I don't really need to encrypt my torrents, for instance. They're torrents; they are meant to be freely shared! As for passwords, they should be added to the users, always, not to the wifi. For securing files, truly, the best solution is backup. Sure, all that doesn't happen that often, but I won't consider the situations where people just don't care. I think there are enough situations where we actually use passwords on our OS users, so let's go with that in mind.

    I keep promoting the Fonera concept as an instance. It opens up a free wifi port, if you choose so, and anyone can connect to the internet through it without having any access to your LAN. It also uses QoS, which will never let your bandwidth drop because of that public usage. That's security, and it's open. But it's lacking the second aspect.

    I'll probably be bashed for promoting the non-usage of WPA2 with AES or whatever, but I wanted to know from the more experienced (super)users out there: what do you think?

  • Connection Reset on MySQL query

    - by sunwukung
    OK, I'm flummoxed. (I've asked this question over on Stack too, but I need to get it fixed, so I'm asking here as well; any help is GREATLY appreciated.) I'm trying to execute a query on a database (locally) and I keep getting a connection reset error. I've been using the method below in a generic DAO class to build a query string and pass it to the Zend_Db API.

        public function insert($params)
        {
            $loop = false;
            $keys = $values = '';
            foreach ($params as $k => $v) {
                if ($loop == true) {
                    $keys .= ',';
                    $values .= ',';
                }
                $keys .= $this->db->quoteIdentifier($k);
                $values .= $this->db->quote($v);
                $loop = true;
            }
            $sql = "INSERT INTO " . $this->table_name . " ($keys) VALUES ($values)";
            // formatResult returns an array of info regarding the status and any
            // result sets of the query. I've commented that method call out anyway,
            // so I don't think it's that.
            try {
                $this->db->query($sql);
                return $this->formatResult(array(
                    true,
                    'New record inserted into: ' . $this->table_name
                ));
            } catch (PDOException $e) {
                return $this->formatResult($e);
            }
        }

    So far this has worked fine; the errors have been occurring since we generated new tables to record user input. The insert string looks like this:

        INSERT INTO tablename (`id`,`title`,`summary`,`description`,`keywords`,`type_id`,`categories`)
        VALUES ('5539','Sample Title','Sample content',' \'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue ullamcorper nec. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nullam vel elit libero. Vestibulum in turpis nunc.\'','this,is,a,sample,array',1,'category title')

    Here are the parameters it's getting before assembling the query (var_dump):

        array
          'id' => string '1' (length=4)
          'title' => string 'Sample Title' (length=12)
          'summary' => string 'Sample content' (length=14)
          'description' => string '<p>'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue'... (length=677)
          'keywords' => string 'this,is,a,sample,array' (length=22)
          'type_id' => int 1
          'categories' => string 'category title' (length=43)

    The next port of call was checking the limits on the table, since it seems to insert if the length of "description" is around the 300 mark (it varies between 310 and 330). The field limit is set to VARCHAR(1500), and the validation on this field won't allow anything bigger than 1200 with HTML, 800 without.

    The real kicker is that if I take this SQL string and execute it via the command line, it works fine, so I can't for the life of me figure out what's wrong. I've tried extending the server parameters, i.e.
    http://stackoverflow.com/questions/1964554/unexpected-connection-reset-a-php-or-an-apache-issue

    So, in a nutshell, I'm stumped. Any ideas?
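    Given that the same statement works from the mysql command line but dies from PHP once "description" grows past a few hundred characters, one server-side suspect worth ruling out (an assumption; the symptoms fit but nothing above proves it) is the packet size limit on the connection PHP is using:

        -- Run from the PHP connection to see what limit it actually gets
        SHOW VARIABLES LIKE 'max_allowed_packet';

    and, if it comes back small, raise it in the server config (a sketch):

        [mysqld]
        max_allowed_packet = 16M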

  • Internal drives vs USB 3.0 with external SSD or eSATA with external SSD

    - by normstorm
    I have a need to carry VMware virtual machines with me for work. These are very large files (each VM is 20GB or more) and I carry around about 40 to 50 VMs to simulate different software configurations for different client needs. Key point: they won't fit on the internal hard drive of my current laptop.

    I currently run the VMs from an external 7200 RPM 2.5" USB 2.0 drive and keep copies of the VMs on other 5400 RPM external USB 2.0 drives. The VMs work from this drive, but they are slow, costing me much time and frustration. It can take upwards of 30 minutes just to make a copy of one of the VMs. They can take 10-15 minutes to fully launch, and then they operate sluggishly.

    I am buying a new laptop (Core i7, 8GB RAM and other high-end specs). I intend to buy an SSD for the O/S volume (C:). This SSD will not be large enough to hold the VMs. I have always wanted a second internal hard drive to run the VMs from. To have two hard drives, though, I am finding that I will have to go to a 17" laptop, which would be bulky and heavy. I am instead considering purchasing a 15" laptop with either an eSATA port or USB 3.0 ports and then purchasing two external drives: one might be an external SSD (maybe OCZ brand) for running the VMs, the other a 7200 RPM 1TB hard drive for carrying around the VMs not currently in use.

    The question is which of these options would give me the biggest bang for the buck and the weight:

    1. 2nd internal SSD. This would mean buying a 17" laptop with two drive bays. The first bay would hold an SSD for the C: drive. I would leave the second bay empty from the manufacturer and then purchase/install an aftermarket SSD. This second SSD would have to be very large (256GB), which would be expensive. I would still also need another external hard drive for carrying around the VMs not in use.
    2. 2nd internal hard drive, 7200 RPM. Again, a 17" laptop would be required, but there are models available with an SSD for the C: drive and a second 7200 RPM hard drive. The second drive could probably be large enough to hold the VMs in use as well as those not in use. But would it be fast enough to run the VMs?
    3. USB 3.0 with external SSD. I could buy a 15" laptop with an SSD for the C: drive and a second hard drive for general files. I would run the VMs from an external USB 3.0 SSD and have a third external USB 3.0 7200 RPM drive for holding the VMs not in use.
    4. eSATA with external SSD. Ditto, just eSATA instead of USB 3.0.
    5. USB 3.0 with external 7200 RPM drive. Ditto, but the drive running the VMs would be a USB 3.0-attached 7200 RPM drive rather than an SSD.
    6. eSATA with external 7200 RPM drive. Ditto, but the drive running the VMs would be an eSATA-attached 7200 RPM drive rather than an SSD.

    Any thoughts on this and any creative solutions?

  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary: the H.264 video seems to have a really high codec frame rate that requires a scaling factor to be applied to the duration of video I'm trying to extract (900x lower).

    Body: I'm trying to extract a clip from a movie that I have in MP4 format (created using Handbrake). After trying mencoder and VLC, I decided to give FFmpeg a shot, since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc.; I'm just trying to learn how all this works). Anyway, my command was as follows:

        ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    During the copy, the following comes up:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'outofsight.mp4':
          Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16
        Output #0, mp4, to 'out.mp4':
            Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc
            Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 2591 fps=2349 q=-1.0 size=    8144kB time=101.60 bitrate= 656.7kbits/s
        …

    Instead of a 5:59 clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black-box engineering and decided to scale my -t option, given that 1 second was interpreted as 900 s. For my desired 359 s clip I calculated 0.399 s, and so my ffmpeg command became:

        ffmpeg -ss 01:15.51 -t 00:00:00.399 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)

    45000/25 = 1800, and the observed factor of 900 is exactly half of that, so there must be a relation somewhere. Somehow the obscenely high codec frame rate is causing issues with the timing. How is that frame rate so high? The best part is that the resulting clip.mp4 has the exact same property (due to the copied video codec), and taking further clips from it needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out.

    Appendix: the preamble for ffmpeg on my system (built using the MacPorts ffmpeg port):

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    1. 4. 0 /  1. 4. 0
          libswscale     1. 7. 1 /  1. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Jan 4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. build 5646) (dot 1)

    EDIT: Not sure whether it was a bug or not, but it seems to be fixed now in my current version of ffmpeg, at least for this video (version 0.6.1 from MacPorts).
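    For anyone stuck on the affected 0.5 build, the scaling can at least be applied mechanically rather than by trial and error (a sketch, using the 900x factor measured above; bc truncates at the chosen scale, so the clip may come up a frame or two short):

        # Desired clip length in seconds divided by the observed 900x factor
        ffmpeg -ss 01:15:51 -t 0$(echo "scale=3; 359/900" | bc) \
            -i outofsight.mp4 -acodec copy -vcodec copy clip.mp4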

  • Asterisk SIP digest authentication username mismatch

    - by Matt
    I have an Asterisk system that I'm attempting to get working as a backup for our 3Com system; we already use it for a conference bridge. Our phones are the 3Com 3C10402B, so I don't have the issue of older 3Com phones that come without a SIP image. The 3Com phones are talking SIP with Asterisk, but they are unable to register because they present a digest username value that doesn't match what Asterisk thinks it should be.

    As an example, here are the relevant lines from a successful registration by a soft phone:

        Server sends:   WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="1cac3853"
        Phone responds: Authorization: Digest username="2321", realm="asterisk", nonce="1cac3853", uri="sip:192.168.254.12", algorithm=md5, response="d32df9ec719817282460e7c2625b6120"

    For the 3Com phone, those same lines look like this (and registration fails):

        Server sends:   WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33"
        Phone responds: Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81"

    Basically, Asterisk wants to see a username of 2321 in the digest username field, but the 3Com phone is sending sip:[email protected]. Does anyone know how to tell Asterisk to accept this format of username in the digest authentication?

    Here is the sip.conf entry for that extension:

        [2321]
        deny=0.0.0.0/0.0.0.0
        disallow=all
        type=friend
        secret=1234
        qualify=yes
        port=5060
        permit=0.0.0.0/0.0.0.0
        nat=yes
        mailbox=2321@device
        host=dynamic
        dtmfmode=rfc2833
        dial=SIP/2321
        context=from-internal
        canreinvite=no
        callerid=device <2321>
        allow=ulaw, alaw
        call-limit=50

    ... and for those interested in the grit, here is the debug output of the registration attempt (some header values were eaten by the forum software):

        REGISTER sip:192.168.254.12 SIP/2.0
        v: SIP/2.0/UDP 192.168.254.157:5060
        t:
        f:
        i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
        CSeq: 18580 REGISTER
        Max-Forwards: 70
        m: ;dt=544
        Expires: 3600
        User-Agent: 3Com-SIP-Phone/V8.0.1.3
        X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0;
        --- (11 headers 0 lines) ---
        Using latest REGISTER request as basis request
        Sending to 192.168.254.157 : 5060 (no NAT)

        SIP/2.0 100 Trying
        Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
        From:
        To:
        Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
        CSeq: 18580 REGISTER
        User-Agent: Asterisk PBX
        Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
        Supported: replaces
        Contact:
        Content-Length: 0

        SIP/2.0 401 Unauthorized
        Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
        From:
        To: ;tag=as3fb867e2
        Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
        CSeq: 18580 REGISTER
        User-Agent: Asterisk PBX
        Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
        Supported: replaces
        WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33"
        Content-Length: 0

        Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER)
        confbridge*CLI

        REGISTER sip:192.168.254.12 SIP/2.0
        v: SIP/2.0/UDP 192.168.254.157:5060
        t:
        f:
        i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
        CSeq: 18581 REGISTER
        Max-Forwards: 70
        m: ;dt=544
        Expires: 3600
        User-Agent: 3Com-SIP-Phone/V8.0.1.3
        Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81"
        X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0;
        --- (12 headers 0 lines) ---
        Using latest REGISTER request as basis request
        Sending to 192.168.254.157 : 5060 (NAT)

        SIP/2.0 100 Trying
        Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
        From:
        To:
        Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
        CSeq: 18581 REGISTER
        User-Agent: Asterisk PBX
        Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
        Supported: replaces
        Contact:
        Content-Length: 0

        SIP/2.0 403 Authentication user name does not match account name
        Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
        From:
        To: ;tag=as3fb867e2
        Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
        CSeq: 18581 REGISTER
        User-Agent: Asterisk PBX
        Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
        Supported: replaces
        Content-Length: 0

        Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER)

    Thanks for your input!
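    I'm not aware of a chan_sip option that relaxes the digest-username match itself, but for isolating the problem (an assumption-laden workaround, not a fix): a peer with no secret is never challenged, so the mismatch can't occur. Sketch:

        [2321]
        ; Sketch, for isolation testing only: with no secret set, chan_sip accepts
        ; the REGISTER without issuing a digest challenge, so the 3Com phone's
        ; URI-style username is never compared against the account name.
        ;secret=1234

    If the phone registers and calls work in that state, the remaining options are a firmware setting on the 3Com side for the digest username format, or patching chan_sip to strip the "sip:" and "@host" parts before comparing.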

  • Tunnel is up but cannot ping directly connected network

    - by drmanalo
    We configured a site-to-site VPN, and here is the topology. I control the network on the left but not the one on the right. All devices in our network have public IPs. Server---ASA5505---Cisco887======Internet=====ASA5510---devices I can see the tunnel is up and can do an extended ping using a loopback interface. From the 10.175 and 10.165 networks, they can also ping my loopback address. I can also dial in using a Cisco VPN client and can connect to the devices on the right. #show crypto session Crypto session current status Interface: Vlan3 Profile: xxx-profile Session status: UP-ACTIVE Peer: 213.121.x.x port 500 IKEv1 SA: local 77.245.x.x/500 remote 213.121.x.x/500 Active IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.175.0.0/255.255.128.0 Active SAs: 0, origin: crypto map IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.165.0.0/255.255.192.0 Active SAs: 2, origin: crypto map #ping 10.165.29.39 source loopback 2 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.165.29.39, timeout is 2 seconds: Packet sent with a source address of 10.0.20.1 !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 16/17/20 ms My problem is that the devices on the right cannot reach my server. They can only ping the loopback address and nothing else. I'm pasting some routing-related diagnostics, thinking perhaps routing is my issue. I can paste the full running-config on my side of the network if needed. #show ip int brief Interface IP-Address OK? Method Status Protocol ATM0 unassigned YES NVRAM administratively down down Ethernet0 unassigned YES NVRAM administratively down down FastEthernet0 unassigned YES unset up up connected to ASA FastEthernet1 unassigned YES unset administratively down down FastEthernet2 unassigned YES unset administratively down down FastEthernet3 unassigned YES unset up up Loopback1 10.0.20.65 YES NVRAM up up Loopback2 10.0.20.1 YES NVRAM up up Virtual-Template1 77.245.x.x YES unset up down Virtual-Template2 77.245.x.x YES unset up down Vlan1 unassigned YES unset down down Vlan3 77.245.x.x YES NVRAM up up connected to the Internet #show run | section ip route ip route 0.0.0.0 0.0.0.0 77.245.x.x ip route 213.121.240.36 255.255.255.255 Vlan3 #show access-list Extended IP access list 102 10 permit ip 10.0.20.0 0.0.0.15 10.175.0.0 0.0.127.255 (3332 matches) 20 permit ip 10.0.20.0 0.0.0.15 10.165.0.0 0.0.63.255 (3498 matches) #show vlan-switch VLAN Name Status Ports ---- -------------------------------- --------- ------------------------------- 1 default active 3 VLAN0003 active Fa0, Fa1, Fa2, Fa3 1002 fddi-default act/unsup 1003 token-ring-default act/unsup 1004 fddinet-default act/unsup 1005 trnet-default act/unsup #show ip route Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2 i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, * - candidate default, U - per-user static route o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP + - replicated route, % - next hop override Gateway of last resort is 77.245.x.x to network 0.0.0.0 S* 0.0.0.0/0 [1/0] via 77.245.x.x 10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks C 10.0.20.0/28 is directly connected, Loopback2 L 10.0.20.1/32 is directly connected, Loopback2 C 10.0.20.64/28 is directly connected, Loopback1 L 10.0.20.65/32 is directly connected, Loopback1 S 10.165.0.0/18 [1/0] via 213.121.x.x 77.0.0.0/8 is variably subnetted, 3 subnets, 3 masks S 77.0.0.0/8 [1/0] via 77.245.x.x C 77.245.x.x/29 is directly connected, Vlan3 L 77.245.x.x/32 is directly connected, Vlan3 213.121.x.0/32 is subnetted, 1 subnets S 213.121.x.x is directly connected, Vlan3 I read some posts here that point to a NAT issue (e.g. "CISCO VPN site to site" and "Site-to-Site VPN between two ASA 5505s only working in one direction"), but I'm not sure of my next step. Should I translate my public address to a private one and route it to the loopback address? (Only guessing.) I hope someone can help. Thanks in advance!
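    For what it's worth, the "translate my public address" guess is the usual approach when the crypto ACL only covers a private range: statically NAT the server's public IP to an unused address inside 10.0.20.0/28 so its traffic falls within the encryption domain. A minimal sketch on the Cisco 887, with hypothetical addresses and interface names (77.245.x.10 = the server, 10.0.20.2 = a free address in the protected /28; verify which L3 interfaces are really inside/outside on this box before applying):

    ! mark the tunnel-facing interface as NAT outside
    interface Vlan3
     ip nat outside
    ! mark whichever L3 interface faces the ASA5505/server as NAT inside
    interface Vlan1
     ip nat inside
    ! remote sites then reach the server as 10.0.20.2; replies are
    ! sourced from 10.0.20.2 and so match crypto ACL 102
    ip nat inside source static 77.245.x.10 10.0.20.2

    The remote side should need no changes as long as its mirrored crypto ACL already covers all of 10.0.20.0/28 (it evidently does, since 10.0.20.1 is reachable).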

    Read the article

  • broadcom 5722 NIC not installed on Ubuntu Server, although driver present

    - by Bastien
    Hello, I just installed Ubuntu Server 10.04 LTS, running kernel 2.6.32-24-server, on a brand new Dell T110 server, supposedly fully compatible with Ubuntu Server. I have two NICs: one onboard, the other an additional card on PCI; both of them are Broadcom NetXtreme 5722. On the first boot of the system, I could see both cards as eth0 and eth1 (with ifconfig). I configured eth0 with a static IP (as planned) and did not configure eth1. After rebooting, one of the two NICs "disappeared": it does not appear in ifconfig at all, and the one that disappeared is the onboard one. I investigated a bit and found the following things: the card is SEEN, but not "installed"; it appears as UNCLAIMED in lshw: *-network UNCLAIMED description: Ethernet controller product: NetXtreme BCM5722 Gigabit Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:04:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress cap_list configuration: latency=0 resources: memory:df9f0000-df9fffff *-network description: Ethernet interface product: NetXtreme BCM5722 Gigabit Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:05:00.0 logical name: eth0 version: 00 serial: 00:10:18:60:23:64 size: 100MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.102 duplex=full firmware=5722-v3.09 ip=10.129.167.25 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s resources: irq:35 memory:dfaf0000-dfafffff So I checked dmesg and found a few strange lines showing there actually is a problem bringing up this card: [ 3.737506] tg3: Could not obtain valid ethernet address, aborting. [ 3.737527] tg3 0000:04:00.0: PCI INT A disabled [ 3.737535] tg3: probe of 0000:04:00.0 failed with error -22 [ 3.737553] alloc irq_desc for 17 on node -1 [ 3.737555] alloc kstat_irqs on node -1 [ 3.737560] tg3 0000:05:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 3.737566] tg3 0000:05:00.0: setting latency timer to 64 [ 3.793529] eth0: Tigon3 [partno(BCM95722A2202G) rev a200] (PCI Express) MAC address 00:10:18:60:23:64 [ 3.793532] eth0: attached PHY is 5722/5756 (10/100/1000Base-T Ethernet) (WireSpeed[1]) [ 3.793534] eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1] [ 3.793536] eth0: dma_rwctrl[76180000] dma_mask[64-bit] That shows that one NIC is recognized and the other is not.
    I researched a bit more with lspci -v: 04:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express Flags: fast devsel, IRQ 16 Memory at df9f0000 (64-bit, non-prefetchable) [size=64K] Capabilities: [48] Power Management version 3 Capabilities: [50] Vital Product Data <?> Capabilities: [58] Vendor Specific Information <?> Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable- Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting <?> Capabilities: [13c] Virtual Channel <?> Capabilities: [160] Device Serial Number 00-00-00-fe-ff-00-00-00 Kernel modules: tg3 05:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express Flags: bus master, fast devsel, latency 0, IRQ 35 Memory at dfaf0000 (64-bit, non-prefetchable) [size=64K] Expansion ROM at <ignored> [disabled] Capabilities: [48] Power Management version 3 Capabilities: [50] Vital Product Data <?> Capabilities: [58] Vendor Specific Information <?> Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+ Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting <?> Capabilities: [13c] Virtual Channel <?> Capabilities: [160] Device Serial Number 64-23-60-fe-ff-18-10-00 Capabilities: [16c] Power Budgeting <?> Kernel driver in use: tg3 Kernel modules: tg3 Here I could see that the onboard card's MAC-derived serial number is 00-00-00-fe-ff-00-00-00, which, according to some forum posts on several websites, could be the issue. I've researched everything I could on the net and found several people with slightly comparable issues, but they usually involve different hardware and do not provide a proper explanation or solution. I would appreciate it if anyone around here has some info to share! Thanks
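    A hedged way to sanity-check that theory: the PCIe Device Serial Number is just the burned-in MAC in EUI-64 form (bytes reversed, with fe-ff inserted), so the healthy card's 64-23-60-fe-ff-18-10-00 maps back to MAC 00:10:18:60:23:64, while the onboard card's all-zero serial suggests tg3 finds no valid MAC in NVRAM and aborts the probe, which is exactly what the "Could not obtain valid ethernet address" line says. If so, reloading the driver should fail identically until the NVRAM MAC is reprogrammed (e.g. by a Dell/Broadcom firmware utility):

    # compare the serial numbers of the two cards
    sudo lspci -vv -s 04:00.0 | grep -i serial   # onboard: 00-00-00-fe-ff-00-00-00
    sudo lspci -vv -s 05:00.0 | grep -i serial   # PCI:     64-23-60-fe-ff-18-10-00
    # confirm the probe failure repeats on a driver reload
    sudo modprobe -r tg3 && sudo modprobe tg3
    dmesg | grep tg3 | tail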

    Read the article

  • Cisco ASA 5505 site to site IPSEC VPN won't route from multiple LANs

    - by franklundy
    Hi I've set up a standard site to site VPN between 2 ASA 5505s (using the wizard in ASDM) and have the VPN working fine for traffic between Site A and Site B on the directly connected LANs. But this VPN is actually to be used for data originating on LAN subnets that are one hop away from the directly connected LANs. So actually there is another router connected to each ASA (LAN side) that then route to two completely different LAN ranges, where the clients and servers reside. At the moment, any traffic that gets to the ASA that has not originated from the directly connected LAN gets sent straight to the default gateway, and not through the VPN. I've tried adding the additional subnets to the "Protected Networks" on the VPN, but that has no effect. I have also tried adding a static route to each ASA trying to point the traffic to the other side, but again this hasn't worked. Here is the config for one of the sites. This works for traffic to/from the 192.168.144.x subnets perfectly. What I need is to be able to route traffic from 10.1.0.0/24 to 10.2.0.0/24 for example. ASA Version 8.0(3) ! hostname Site1 enable password ** encrypted names name 192.168.144.4 Site2 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.144.2 255.255.255.252 ! interface Vlan2 nameif outside security-level 0 ip address 10.78.254.70 255.255.255.252 (this is a private WAN circuit) ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! passwd ** encrypted ftp mode passive access-list inside_access_in extended permit ip any any access-list outside_access_in extended permit icmp any any echo-reply access-list outside_1_cryptomap extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252 access-list inside_nat0_outbound extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252 pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-603.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 access-group inside_access_in in interface inside access-group outside_access_in in interface outside route outside 0.0.0.0 0.0.0.0 10.78.254.69 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout uauth 0:05:00 absolute dynamic-access-policy-record DfltAccessPolicy aaa authentication ssh console LOCAL http server enable http 0.0.0.0 0.0.0.0 outside http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto map outside_map 1 match address outside_1_cryptomap crypto map outside_map 1 set pfs crypto map outside_map 1 set peer 10.78.254.66 crypto map outside_map 1 set transform-set ESP-3DES-SHA crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 no crypto isakmp nat-traversal telnet timeout 5 ssh 0.0.0.0 0.0.0.0 outside ssh timeout 5 console timeout 0 
management-access inside threat-detection basic-threat threat-detection statistics port threat-detection statistics protocol threat-detection statistics access-list group-policy DfltGrpPolicy attributes vpn-idle-timeout none username enadmin password * encrypted privilege 15 tunnel-group 10.78.254.66 type ipsec-l2l tunnel-group 10.78.254.66 ipsec-attributes pre-shared-key * ! ! prompt hostname context
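    A sketch of the direction a fix usually takes (next-hop address hypothetical; the ASA's inside /30 suggests the internal router is 192.168.144.1): the ASA needs a static route for each extra subnet, and both the crypto ACL and the NAT-exemption ACL must include them, mirrored at Site 2. For 10.1.0.0/24 behind Site 1 reaching 10.2.0.0/24 behind Site 2:

    ! route the Site 1 LAN range to the internal router
    route inside 10.1.0.0 255.255.255.0 192.168.144.1 1
    ! add the subnets to the interesting-traffic ACL
    access-list outside_1_cryptomap extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0
    ! and to the NAT-exemption ACL so this traffic is not translated
    access-list inside_nat0_outbound extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0

    Site 2 needs the mirror image (its own route and the reversed ACL entries), and the internal routers on each side need routes for the remote ranges pointing back at their ASA.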

    Read the article

  • iptables CLUSTERIP won't work

    - by Rad Akefirad
    We have some requirements, which are explained here, and we have tried to satisfy them without any success, as described. Here is the brief information. Requirements: 1. High Availability 2. Load Balancing Current configuration: Server #1: one static (real) IP, 10.17.243.11 Server #2: one static (real) IP, 10.17.243.12 Cluster (virtual, shared among all servers) IP: 10.17.243.15 I tried to use CLUSTERIP to set up the cluster IP as follows: on server #1 iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 1 on server #2 iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 2 When we try to ping 10.17.243.15 there is no reply, and the web service (Tomcat on port 8080) is not accessible either. However, we did manage to see the packets arrive on both servers by using tcpdump. Some useful information: iptables rules (iptables -L -n -v): Chain INPUT (policy ACCEPT 21775 packets, 1470K bytes) pkts bytes target prot opt in out source destination 0 0 CLUSTERIP all -- eth0 * 0.0.0.0/0 10.17.243.15 CLUSTERIP hashmode=sourceip clustermac=01:00:5E:00:00:20 total_nodes=2 local_node=1 hash_init=0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 14078 packets, 44M bytes) pkts bytes target prot opt in out source destination Log messages: ... kernel: [ 7.329017] e1000e: eth3 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None ... kernel: [ 7.329133] e1000e 0000:05:00.0: eth3: 10/100 speed: disabling TSO ... kernel: [ 7.329567] ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready ... kernel: [ 71.333285] ip_tables: (C) 2000-2006 Netfilter Core Team ... kernel: [ 71.341804] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) ... kernel: [ 71.343168] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully ... kernel: [ 108.456043] device eth0 entered promiscuous mode ... kernel: [ 112.678859] device eth0 left promiscuous mode ... kernel: [ 117.916050] device eth0 entered promiscuous mode ... kernel: [ 140.168848] device eth0 left promiscuous mode tcpdump while pinging: tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 12:11:55.335528 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2390, length 64 12:11:56.335778 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2391, length 64 12:11:57.336010 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2392, length 64 12:11:58.336287 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2393, length 64 And there is no ping reply, as I said. Does anyone know which part I missed? Thanks in advance.
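    One hedged guess at the missing part (assuming eth0 carries the cluster address): the CLUSTERIP target only hashes and filters; the shared address still has to be configured as a local address on every node, otherwise the kernel sees echo requests for an IP that is not its own and drops them silently, which matches the tcpdump above. On both servers:

    # make the cluster IP local to the node
    ip addr add 10.17.243.15/32 dev eth0
    # verify the node claims its hash bucket
    cat /proc/net/ipt_CLUSTERIP/10.17.243.15

    If that is already in place, the next suspect is usually the upstream switch or router refusing ARP replies that map a unicast IP to the multicast MAC 01:00:5E:00:00:20; a static ARP entry for 10.17.243.15 on the upstream router is the common workaround.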

    Read the article

  • apache+mod_wsgi configuration for django project(s) on a quad core

    - by Stefano
    I've been experimenting for quite some time with a "typical" django setup on nginx+apache2+mod_wsgi+memcached(+postgresql) (reading the doc and some questions on SO and SF, see comments). Since I'm still unsatisfied with the behavior (definitely because of some bad misconfiguration on my part) I would like to know what a good configuration would look like with these hypotheses: Quad-Core Xeon 2.8GHz 8 gigs memory several django projects (anything special related to this?) These are excerpts from my current confs: apache2 SetEnv VHOST null #WSGIPythonOptimize 2 <VirtualHost *:8082> ServerName subdomain.domain.com ServerAlias www.domain.com SetEnv VHOST subdomain.domain AddDefaultCharset UTF-8 ServerSignature Off LogFormat "%{X-Real-IP}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" custom ErrorLog /home/project1/var/logs/apache_error.log CustomLog /home/project1/var/logs/apache_access.log custom AllowEncodedSlashes On WSGIDaemonProcess subdomain.domain user=www-data group=www-data threads=25 WSGIScriptAlias / /home/project1/project/wsgi.py WSGIProcessGroup %{ENV:VHOST} </VirtualHost> wsgi.py import os import sys # setting all the right paths.... _realpath = os.path.realpath(os.path.dirname(__file__)) _public_html = os.path.normpath(os.path.join(_realpath, '../')) sys.path.append(_realpath) sys.path.append(os.path.normpath(os.path.join(_realpath, 'apps'))) sys.path.append(os.path.normpath(_public_html)) sys.path.append(os.path.normpath(os.path.join(_public_html, 'libs'))) sys.path.append(os.path.normpath(os.path.join(_public_html, 'django'))) os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' import django.core.handlers.wsgi _application = django.core.handlers.wsgi.WSGIHandler() def application(environ, start_response): """ Launches django passing over some environment (domain name) settings """ application_group = environ['mod_wsgi.application_group'] """ wsgi application group is required. It's also used to generate the HOST.DOMAIN.TLD:PORT parameters to pass over """ assert application_group fields = application_group.replace('|', '').split(':') server_name = fields[0] os.environ['WSGI_APPLICATION_GROUP'] = application_group os.environ['WSGI_SERVER_NAME'] = server_name if len(fields) > 1 : os.environ['WSGI_PORT'] = fields[1] splitted = server_name.rsplit('.', 2) assert len(splitted) >= 2 splitted.reverse() if len(splitted) > 0 : os.environ['WSGI_TLD'] = splitted[0] if len(splitted) > 1 : os.environ['WSGI_DOMAIN'] = splitted[1] if len(splitted) > 2 : os.environ['WSGI_HOST'] = splitted[2] return _application(environ, start_response) folder structure in case it matters (slightly shortened actually) /home/www-data/projectN/var/logs /project (contains manage.py, wsgi.py, settings.py) /project/apps (all the project apps are here) /django /libs Please forgive me in advance if I overlooked something obvious. My main question is about the apache2 wsgi settings. Are those fine? Is 25 threads an /ok/ number with a quad core for only one django project? Is it still ok with several django projects on different virtual hosts? Should I specify 'processes'? Any other directive which I should add? Is there anything really bad in the wsgi.py file? I've been reading about potential issues with the standard wsgi.py file, should I switch to that? Or... should this conf just be running fine, and I should look for issues somewhere else? So, what do I mean by "unsatisfied": well, I often get quite high CPU WAIT; but what is worse is that relatively often apache2 gets stuck.
    It just does not answer anymore and has to be restarted. I have set up monit to take care of that, but it isn't a real solution. I have been wondering if it's an issue with database access (postgresql) under heavy load, but even if it were, why would the apache2 processes get stuck? Besides these two issues, performance is overall great. I even tried New Relic and got very good average results.
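    Not an authoritative answer, but a common starting point on a quad-core box hosting several projects is one mod_wsgi daemon group per virtual host with a few processes and moderate threads, so that one stuck project cannot take the whole server down (values below are illustrative, not tuned for this workload):

    WSGIDaemonProcess subdomain.domain user=www-data group=www-data \
        processes=4 threads=15 display-name=%{GROUP} maximum-requests=10000
    WSGIProcessGroup subdomain.domain
    WSGIApplicationGroup %{GLOBAL}

    maximum-requests recycles workers periodically, which also papers over slow leaks, and WSGIApplicationGroup %{GLOBAL} avoids sub-interpreters, a known source of hangs with some C extensions; note, though, that the wsgi.py above derives its WSGI_* variables from the application group, so that logic would need adjusting first.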

    Read the article

  • Cannot join Win7 workstations to Win2k8 domain

    - by wfaulk
    I am trying to connect a Windows 7 Ultimate machine to a Windows 2k8 domain and it's not working. I get this error: Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt. DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "example.local": The query was for the SRV record for _ldap._tcp.dc._msdcs.example.local The following domain controllers were identified by the query: dc1.example.local dc2.example.local However no domain controllers could be contacted. Common causes of this error include: Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses. Domain controllers registered in DNS are not connected to the network or are not running. The client is in an office connected remotely via MPLS to the data center where our domain controllers exist. I don't seem to have anything blocking connectivity to the DCs, but I don't have total control over the MPLS circuit, so it's possible that there's something blocking connectivity. I have tried multiple clients (Win7 Ultimate and WinXP SP3) in the one office and get the same symptoms on all of them. I have no trouble connecting to either of the domain controllers, though I have, admittedly, not tried every possible port. ICMP, LDAP, DNS, and SMB connections all work fine. Client DNS is pointing to the DCs, and "example.local" resolves to the two IP addresses of the DCs. I get this output from the NetLogon Test command line utility: C:\Windows\System32>nltest /dsgetdc:example.local Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN I have also created a separate network to emulate that office's configuration that's connected to the DC network via LAN-to-LAN VPN instead of MPLS. Joining Windows 7 computers from that remote network works fine. The only difference I can find between the two environments is the intermediate connectivity, but I'm out of ideas as to what to test or how to do it. What further steps should I take? (Note that this isn't actually my client workstation and I have no direct access to it; I'm forced to do remote hands access to it, which makes some of the obvious troubleshooting methods, like packet sniffing, more difficult. If I could just set up a system there that I could remote into, I would, but requests to that effect have gone unanswered.) 2011-08-25 update: I had DCDIAG.EXE run on a client attempting to join the domain: C:\Windows\System32>dcdiag /u:example\adminuser /p:********* /s:dc2.example.local Directory Server Diagnosis Performing initial setup: Ldap search capabality attribute search failed on server dc2.example.local, return value = 81 This sounds like it was able to connect via LDAP, but the thing that it was trying to do failed. But I don't quite follow what it was trying to do, much less how to reproduce it or resolve it. 2011-08-26 update: Using LDP.EXE to try and make an LDAP connection directly to the DCs results in these errors: ld = ldap_open("10.0.0.1", 389); Error <0x51: Fail to connect to 10.0.0.1. ld = ldap_open("10.0.0.2", 389); Error <0x51: Fail to connect to 10.0.0.2. ld = ldap_open("10.0.0.1", 3268); Error <0x51: Fail to connect to 10.0.0.1. ld = ldap_open("10.0.0.2", 3268); Error <0x51: Fail to connect to 10.0.0.2. 
This would seem to point fingers at LDAP connections being blocked somewhere. (And 0x51 == 81, which was the error from DCDIAG.EXE from yesterday's update.) I could swear I tested this using TELNET.EXE weeks ago, but now I'm thinking that I may have assumed that its clearing of the screen was telling me that it was waiting and not that it had connected. I'm tracking down LDAP connectivity problems now. This update may become an answer.
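    For anyone debugging this with only remote hands, PortQry (a free Microsoft command-line tool) can test the DC-locator ports one at a time without a packet capture (DC addresses below are the 10.0.0.1/10.0.0.2 placeholders from the LDP output):

    REM LDAP over TCP (what LDP.EXE and dcdiag are failing on)
    portqry -n 10.0.0.1 -p tcp -e 389
    REM CLDAP "ping" used by the DC locator, over UDP
    portqry -n 10.0.0.1 -p udp -e 389
    REM Kerberos
    portqry -n 10.0.0.1 -p both -e 88
    REM SMB
    portqry -n 10.0.0.1 -p tcp -e 445

    A FILTERED result on 389 across the MPLS circuit, but LISTENING across the VPN, would pin the blame on the carrier's filtering.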

    Read the article

  • Graphics card initialisation problems when booting - requires a "double" boot

    - by DMA57361
    Problem Outline When booting from cold (and my machine is disconnected from main power when off, but leaving it connected doesn't help) the graphics card (single PCI-e card GeForce 460) will not initialise on the first boot, leaving me with the motherboard's on-board graphics (which kick in automatically if no PCI-e card is found). However, if I restart the computer - normally I do this by powering it off just after the numlock lights up on the keyboard (ie, just after POST/BIOS and before Windows takes over), wait for the system to whirr down, and power up again - the graphics card will work correctly. Once double-booted in this manner the system seems to work correctly - with no noticeable problems. This is reproducible every time I try to boot - it has been working like this for about a month now. Background Information Sept 2010 - I suffered a hardware malfunction (crashes in Windows and graphics corruption on BIOS screens). By way of spare hardware I determined that replacing the PSU removed the issue, so I replaced the PSU with a brand new one of slightly higher power (460W replaced with 500W). Oct 2010 - The problem resurfaced. I purchased a new graphics card (GeForce 460), which removed the problem. The new graphics card immediately started having the boot initialisation problems mentioned. I presumed there was a motherboard fault all along, but because the system worked once booted, and I was temporarily out of spare money, I left the system alone and continued to use it. Early/Mid Dec 2010 - In the space of 5 days I received 3 instances of hard drive corruption (seemingly fixed by chkdsk and sfc in each case...). Since I was already under the impression the motherboard was faulty, I purchased a new one ASAP; this also required new RAM (as I dropped from 4 slots to 2 and didn't want to drop mem quantity). Past 3-4 weeks - With a brand new PSU, Graphics Card, Motherboard and RAM I'm suffering the problem outlined above. So, what could be causing this and how can I resolve it? Additional Notes Once double-booted the system seems to work entirely correctly. The graphics card problem has occurred on two entirely different motherboards. I do not have the opportunity to test the graphics card in a different computer (I've only the old motherboard, which is dubious, or a really old desktop that still has an AGP port). Under load (ie, modern games for long enough for temperatures to plateau) the system remains stable and performs as expected. The software that came with the new motherboard and SpeedFan both report all voltages and temperatures are within nominal bounds, when idle and when under load. I've looked over the BIOS settings for my motherboard multiple times and can find nothing that helps. This system is configured to run with everything at standard levels - no overclocking. I've tried booting the system with only the mobo and graphics card connected (thinking maybe my new PSU was too weak for the new gfx card, even though it meets the quoted PSU requirements for the card) but the same problem persists (and really if the PSU was weak I'd have problems with the system under load). When the gfx card does not initialise the fan on its cooling unit is running, possibly slower than otherwise - but this measurement is by eye and so unreliable.

    Read the article

  • DNS request timed out. timeout was 2 seconds

    - by sahil007
    I set up a BIND DNS server on CentOS. From the local LAN it works fine, but when I try nslookup from a remote host I get a reply like "DNS request timed out... timeout was 2 seconds." What is the problem? This is my BIND config: // Red Hat BIND Configuration Tool options { directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; query-source address * port 53; }; controls { inet 127.0.0.1 allow {localhost; } keys {rndckey; }; }; acl internals { 127.0.0.0/8; 192.168.0.0/24; 10.0.0.0/8; }; view "internal" { match-clients { internals; }; recursion yes; zone "mydomain.com" { type master; file "mydomain.com.zone"; }; zone "0.168.192.in-addr.arpa" { type master; file "0.168.192.in-addr.arpa.zone"; }; zone "." IN { type hint; file "named.root"; }; zone "localdomain." IN { type master; file "localdomain.zone"; allow-update { none; }; }; zone "localhost." IN { type master; file "localhost.zone"; allow-update { none; }; }; zone "0.0.127.in-addr.arpa." IN { type master; file "named.local"; allow-update { none; }; }; zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa." IN { type master; file "named.ip6.local"; allow-update { none; }; }; zone "255.in-addr.arpa." IN { type master; file "named.broadcast"; allow-update { none; }; }; zone "0.in-addr.arpa." IN { type master; file "named.zero"; allow-update { none; }; }; }; view "external" { match-clients { any; }; recursion no; zone "mydomain.com" { type master; file "mydomain.com.zone"; // file "/var/named/chroot/var/named/mydomain.com.zone"; }; zone "0.168.192.in-addr.arpa" { type master; file "0.168.192.in-addr.arpa.zone"; }; }; include "/etc/rndc.key";
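    A hedged first round of checks: the external view does answer authoritative queries (recursion is off but match-clients is any), so a timeout from outside usually means packets to port 53 never reach named at all, and the firewall on or in front of the CentOS box is the usual suspect.

    # from a remote host: does an authoritative query get through at all?
    dig @<server-public-ip> mydomain.com A
    # on the server: is port 53 open in iptables?
    iptables -L -n | grep ':53'
    iptables -I INPUT -p udp --dport 53 -j ACCEPT
    iptables -I INPUT -p tcp --dport 53 -j ACCEPT
    # and is named actually listening on the public interface, not just lo?
    netstat -lnup | grep :53

    If named only listens on 127.0.0.1, an explicit listen-on { any; }; in the options block would also be needed.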

    Read the article

  • High Load mysql on Debian server

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory load is always at maximum, with only 500 MB free. Most of the memory is consumed by MySQL; Apache is configured for only 70 clients and the other services have small memory usage. When MySQL uses up all the memory it stops, and nothing works until MySQL is restarted. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why I use InnoDB. Here is my MySQL config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completion [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please help me make it stable. Memory usage: /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem is not memory, but MySQL stops every day. As you can see, about 24 GB of that memory is cache. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe HDD or another problem? Maybe my config is not optimal for 30 GB of InnoDB?
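    A rough memory budget (the usual back-of-the-envelope approximation; real usage varies) shows why 24 GB is not really a cap with this config:

    global buffers ≈ key_buffer_size (2 GB) + innodb_buffer_pool_size (5 GB)
                   + query_cache_size (0.2 GB) ≈ 7.2 GB
    per connection ≈ sort_buffer_size (64M) + join_buffer_size (5M)
                   + read_buffer_size (4M) + thread_stack (8M) ≈ 81 MB
    worst case     ≈ 7.2 GB + max_connections (200) x 81 MB ≈ 7.2 GB + 16 GB ≈ 23 GB

    and that is before tmp_table_size / max_heap_table_size, which at 3000M let any single implicit GROUP BY temp table grab up to 3 GB. Dropping sort_buffer_size to a few MB and the temp-table limits to a few hundred MB would be the usual first move.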

    Read the article

  • Varnish cached 'MISS status' object?

    - by Hesey
    My site uses nginx, Varnish, and JBoss. Some URLs are cached by Varnish, depending on a response header from JBoss. On the first request, JBoss tells Varnish not to cache the URL. On the second request, JBoss tells Varnish to cache it, but Varnish won't cache it. I used varnishstat and found that one object is cached in Varnish; is that the 'MISS status' object? I removed the grace code and the problem still exists. When I PURGE the URL, Varnish works fine and caches it from then on. But I can't PURGE that many URLs at every startup; how can I fix this? The configuration: acl local { "localhost"; } backend default { .host = "localhost"; .port = "8080"; .probe = { .url = "/preload.htm"; .interval = 3s; .timeout = 1s; .window = 5; .threshold = 3; } } sub vcl_deliver { if (req.request == "PURGE") { remove resp.http.X-Varnish; remove resp.http.Via; remove resp.http.Age; remove resp.http.Content-Type; remove resp.http.Server; remove resp.http.Date; remove resp.http.Accept-Ranges; remove resp.http.Connection; set resp.http.keeplive="true"; } else { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } } sub vcl_recv { if(req.url ~ "/check.htm"){ error 404 "N"; } if( req.http.host ~ "store." || req.request == "POST"){ return (pipe); } if (req.backend.healthy) { set req.grace = 30s; } else { set req.grace = 10m; } set req.http.x-cacheKey = "0"; if(req.url ~ "/shop/view_shop.htm" || req.url ~ "/shop/viewShop.htm" || req.url ~ "/index.htm"){ if(req.url ~ "search=y"){ set req.http.x-cacheKey = req.http.host + "/search.htm"; }else if(req.url !~ "bbs=y" && req.url !~ "shopIntro=y" && req.url !~ "shop_intro=y"){ set req.http.x-cacheKey = req.http.host + "/index.htm"; } }else if(req.url ~ "/search"){ set req.http.x-cacheKey = req.http.host + "/search.htm"; } if( req.http.x-cacheKey == "0" && req.url !~ "/i/"){ return (pipe); } if (req.request == "PURGE") { if (client.ip ~ local) { return (lookup); } else { error 405 "Not allowed."; } } if (req.url ~ "/i/") { set req.http.x-shop-url = req.original_url; }else { unset req.http.cookie; } } sub vcl_fetch { set beresp.grace = 10m; #unset beresp.http.x-cacheKey; if (req.url ~ "/i/" || req.url ~ "status" ){ set beresp.ttl = 0s; /* ttl=0 for dynamic content */ } else if(beresp.http.x-varnish-cache != "1"){ set beresp.do_esi = true; /* Do ESI processing */ set beresp.ttl = 0s; unset beresp.http.set-cookie; } else { set beresp.do_esi = true; /* Do ESI processing */ set beresp.ttl = 1800s; unset beresp.http.set-cookie; } } sub vcl_hash { hash_data(req.http.x-cacheKey); return (hash); } sub vcl_error { if (req.request == "PURGE") { return (deliver); } else { set obj.http.Content-Type = "text/html; charset=gbk"; synthetic {"<!--ve-->"}; return (deliver); } } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "N"; } }
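    One hedged guess, assuming this is Varnish 3.x (the beresp/do_esi syntax suggests it): when the first response says "don't cache", setting beresp.ttl = 0s still inserts an object keyed on x-cacheKey, and with grace that stale decision can keep answering lookups instead of letting the second, cacheable response in. The hit-for-pass pattern is the usual way to keep the first decision short-lived; a minimal sketch of the uncacheable branch (the /i/ pass-through branch omitted):

    sub vcl_fetch {
        set beresp.grace = 10m;
        if (beresp.http.x-varnish-cache != "1") {
            # remember "don't cache" only briefly, and let the next
            # backend response be evaluated on its own merits
            set beresp.ttl = 10s;
            return (hit_for_pass);
        }
        set beresp.do_esi = true;
        set beresp.ttl = 1800s;
        unset beresp.http.set-cookie;
    }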

    Read the article

< Previous Page | 396 397 398 399 400 401 402 403 404 405 406 407  | Next Page >