Search Results

Search found 1780 results on 72 pages for '400'.

Page 26 of 72

  • New build won't POST, no video, no beeps

    - by Nate Koppenhaver
    Specs:
        Motherboard: MSI 760GM-P23 FX (integrated graphics on the board)
        CPU: AMD Athlon II X4 640
        RAM: GeIL Pristine 4GB DDR3
        Case/PSU: TOPOWER TP-4107BB-400

    The machine does not POST: no video output, no beeps. When the RAM is removed, I get 3 beeps. I have tried removing and reseating the CPU and all the power cables with no change. Resetting the BIOS (by removing and replacing the battery) did nothing as well. Is there something I'm forgetting (this is my first time building from components), or could one of the components be bad? EDIT: New development: with the CPU and RAM installed correctly, it will turn on lights and fans (still no POST), and after running for a minute or so it will turn off, and the PSU will make a buzzing noise that ceases only when it is unplugged.

    Read the article

  • Puppetmaster doesn't notice changes to site.pp

    - by tore-
    I've just set up a new production environment with Puppet, using 0.25.4 in client/server mode. Ruby is at 1.8.5, CentOS 5.4. I've made a simple manifest for configuring yum-updatesd, but the puppetmaster doesn't seem to notice changes done to site.pp:

        err: Could not parse for environment production: Could not match 'node' at /etc/puppet/manifests/site.pp:1
        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Could not match 'node' at /etc/puppet/manifests/site.pp:1

    Notice, it says line 1. But line 1 contains an import statement:

        # cat -n /etc/puppet/manifests/site.pp
        1  import "update-notification"
        2
        3  node default {
        4    include update-notification
        5    update-notification::configure()
        6  }

    I've tried rebooting the server, deleting and recreating site.pp, and starting and stopping puppetmaster and puppet, with no luck. What am I missing?
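
    One generic way to probe this (not from the original post) is to parse the manifest directly with the same Puppet version, and to check line 1 for invisible bytes such as a UTF-8 BOM, which can make the parser choke on an otherwise valid import:

        # Validate the manifest standalone (0.25.x-era syntax):
        puppet --parseonly /etc/puppet/manifests/site.pp

        # Look for a BOM or stray bytes at the very start of the file:
        head -c 16 /etc/puppet/manifests/site.pp | od -c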

    Read the article

  • "powershell has stopped working" on PS exit after creating IIS User

    - by Nick Jones
    On a Windows Server 2012 box, I'm using PS to create a new IIS user (for automated deployments using MSDeploy). The command itself appears to work fine -- the user is created -- but as soon as I exit my PowerShell session (typing exit or just closing the command window), a dialog is displayed stating "powershell has stopped working", with the following details:

        Problem signature:
        Problem Event Name: PowerShell
        NameOfExe: powershell.exe
        FileVersionOfSystemManagementAutomation: 6.2.9200.16628
        InnermostExceptionType: Runtime.InteropServices.InvalidComObject
        OutermostExceptionType: Runtime.InteropServices.InvalidComObject
        DeepestPowerShellFrame: unknown
        DeepestFrame: System.StubHelpers.StubHelpers.GetCOMIPFromRCW
        ThreadName: unknown
        OS Version: 6.2.9200.2.0.0.400.8
        Locale ID: 1033

    The PS commands in question are:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")
        [Microsoft.Web.Management.Server.ManagementAuthentication]::CreateUser("Foo", "Bar")

    Why is this happening, and how can I avoid it? EDIT: I've also confirmed this to be a problem with PowerShell 4.0, so I've added that tag. I've also submitted it on Connect. UPDATE: It appears that Windows Server 2012 R2 does not have this same bug.

    Read the article

  • Using naked domain in apache, no "www" on domain in httpd.conf

    - by chrsdgtl
    Incredibly, I could not find a good tutorial or easy reference online for using a naked domain (no subdomain) as the primary URI. I'm trying to configure this in my httpd.conf in Apache. Since I'm still a relative newbie to this server stuff, all I managed on my own were some nasty redirect loops and 400 errors. There are plenty of notes for the more common cases -- http:// to https://, naked to www -- and a ton of .htaccess stuff (not interested). What I want is http://www.domain.com redirecting to http://domain.com. The most helpful thing I found was this: Multiple domains (including www-"subdomain") on apache? I ended up using the solution mentioned by ceejayoz in that post, which some folks noted was messy and complicated. It got the desired result, but I'd like to know the best practice for the future. I'd appreciate a nudge in the right direction. Thanks in advance.
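
    For reference, a minimal sketch of the usual pattern (domain names are placeholders): a dedicated vhost catches the www host and issues a permanent redirect, while the naked domain serves the content:

        <VirtualHost *:80>
            ServerName www.domain.com
            # Permanent redirect from www to the naked domain (mod_alias)
            Redirect permanent / http://domain.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /var/www/domain.com/public_html
        </VirtualHost>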

    Read the article

  • Inexpensive Remote Assistance software?

    - by Jess
    Any recommendations for remote assistance software that does not require firewall modification for clients? To assist clients with software problems and perform training, we currently use a tool called Remote Helpdesk to connect to their computers and guide them through the process. This tool was pretty cheap (~$400 one-time for 3 support staff) and worked great: the client's PC actually initiates the connection to us, so there are never any firewall issues (vs. Remote Desktop, VNC software, or many other similar tools). Unfortunately, the product doesn't work well with 64-bit OSes or Vista in general (it slows down by a factor of 10 or so). I am looking for alternatives that provide the same reverse-connection capability to avoid firewall issues. The only solution I've found is WebEx's Remote Support, which is WAY too expensive ($449/month for us). Thanks for all the assistance!

    Read the article

  • Configuring an apache virtualhost to link to another virtualhost's subdomain

    - by Laurent Van Winckel
    I have a domain on my server (on its own virtualhost), e.g. domain.com, which has a lot of subdomains with content on them: sub1.domain.com, sub2.domain.com. This virtualhost looks like this:

        ServerAdmin [email protected]
        ServerName domain.com
        ServerAlias *.domain.com
        DocumentRoot /var/www/domain.com/public_html/

    I want to put other domains on the server and have each of them link to a specific subdomain on domain.com. I don't want it to redirect; I want behavior similar to the [L,PT] flags in mod_rewrite. I've tried this:

        ServerName otherdomain.com
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^otherdomain.com$ [NC]
        RewriteRule ^(.*)$ http://sub1.domain.com/$1 [PT,L]

    But it is giving me a 400 Bad Request error. How would I configure such a virtualhost?
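
    One way to serve another vhost's content without redirecting (a sketch, not from the original post -- it assumes mod_proxy and mod_proxy_http are loaded, since [PT] only passes a rewrite through within the same server) is a reverse proxy:

        <VirtualHost *:80>
            ServerName otherdomain.com
            # Fetch content from the subdomain and serve it under otherdomain.com
            ProxyPass        / http://sub1.domain.com/
            ProxyPassReverse / http://sub1.domain.com/
        </VirtualHost>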

    Read the article

  • Internet speed showing high value but never felt that speed [closed]

    - by Kiran D Pillai
    I use Reliance Netconnect+ on my Toshiba Satellite laptop. My OS is Windows 7 Ultimate. I've seen speeds of up to 1500 kbps when travelling and an average of 400 kbps at my room. Still, I've never felt that much speed while browsing or downloading. Sometimes I think some other programs are also sharing the same network: the meter shows 200 or 500 kbps, but it feels like 25 or 50 kbps. Is there any way to prevent them? I use Google Chrome downloaded from the Google site. Please suggest some solutions.

    Read the article

  • Virtual Machine Manager 2012 CPU Average

    - by Grant
    What exactly is the CPU Average field in VMM 2012 showing me? I'm running Server 2008 R2 with VMM 2012. My server has 2x16 core CPUs installed. An example virtual machine has 4 virtual processors and shows 20% CPU usage. Is that:

        1. 20% of the entire system's available CPU power?
        2. 20% of the CPU power of the 4 assigned cores (of the 32)?
        3. 20% of one core's CPU (in which case it could go as high as 400%)?
        4. Something else entirely?

    How can I tell how much of the entire system's CPU power is being used (all 32 cores)? Edit: Well, I can tell for sure it's not 20% of the entire system's CPU power, since the entire server's CPU averages add up to well over 100% right now.

    Read the article

  • Configuring Nginx SSL alongside non-ssl

    - by user55145
    I'm trying to enable SSL on my current Nginx configuration, which works fine. However, I'm wondering if it's possible to do this alongside HTTP, so that I do not need another server{} section that would just be a replica of the HTTP section. I thought the following would work, but accessing via http:// gives:

        400 Bad Request
        The plain HTTP request was sent to HTTPS port

    Nginx config:

        ssl_certificate     /etc/nginx/ssl/domains.pem;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        server {
            listen 80;
            listen 443;
            # other configuration
        }
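
    For reference, nginx 0.7.14 and later can mark only the 443 listener as SSL inside a single server block, which is the usual way to share one section between HTTP and HTTPS -- a sketch reusing the certificate paths from the question:

        server {
            listen 80;
            listen 443 ssl;    # only this listener speaks TLS
            ssl_certificate     /etc/nginx/ssl/domains.pem;
            ssl_certificate_key /etc/nginx/ssl/server.key;
            # other configuration shared by both ports
        }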

    Read the article

  • Displaying ads over WiFi hotspot

    - by Ahsan
    I have recently extended my WiFi network with high-speed antennas to my area, covering almost 300-400 people. I am not charging them anything, but I would like to generate some revenue through advertisements placed on the websites they visit. Is it possible to display ads from Google? (I know I can redirect advertisements using some cache server or firewall.) It's just like a free VPN, but I would like to have my advertisements shown above the websites they visit so I can recover the cost of the WiFi that I offer. Any suggestions would be great!

    Read the article

  • ISPConfig 3 SSL automatic rewrite

    - by lol
    I was wondering how to get apache2 to redirect http://server.com:8080 to https://server.com:8080. I have an ISPConfig 3 setup, and the http://server.com:8080 virtual host currently prints back a 400 Bad Request error, given that I've tried adding

        RewriteEngine on
        RewriteCond %{HTTPS} !^on$ [NC]
        RewriteRule . https://%{HTTP_HOST}:8080%{REQUEST_URI} [L]

    to the ispconfig.vhost file (and reloading the conf) with no success.

    Edit: I've been playing around with it; adding an 'always redirect to google' rule into the ispconfig vhost works once you've already started talking SSL to it. This means the non-SSL connections are getting 'bad request' errors before the vhost is loaded... but where?

    Edit 2: nope, the SSL is handled exclusively by the virtual host -- if I turn off the SSL engine then the rewriting works perfectly (but obviously there is no SSL at https://). Thanks!
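
    Since mod_ssl rejects a plain-HTTP request on an SSL port before mod_rewrite ever runs, one workaround sometimes used (an assumption, not from the original post) is to let Apache's 400 error handler perform the redirect:

        # Hypothetical placement: inside the SSL vhost listening on 8080
        ErrorDocument 400 https://server.com:8080/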

    Read the article

  • nginx won't respond to monit

    - by Miko
    Although EngineX is running, monit can't seem to figure it out. Here's my monit log:

        [PDT Apr 13 02:19:19] error : HTTP error: Server returned status 400
        [PDT Apr 13 02:19:19] error : 'nginx' failed protocol test [HTTP] at INET[localhost:80] via TCP
        [PDT Apr 13 02:19:19] info  : 'nginx' trying to restart
        [PDT Apr 13 02:19:19] info  : 'nginx' stop: /etc/init.d/nginx
        [PDT Apr 13 02:19:20] info  : 'nginx' start: /etc/init.d/nginx

    The monitrc file contains the following configuration:

        if failed port 80 protocol http
           and request '/ping.txt' # check for response
           with timeout 20 seconds
        then restart

    I can access the file through lynx http://localhost:80/ping.txt without any problems. Why would monit have trouble requesting the file when nginx is running just fine?
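
    One generic way to narrow this down (not from the original post) is to replay monit's probe by hand and watch the exchange -- nginx returns 400 when it dislikes the request itself, for example the Host header:

        # Show the exact request/response exchange:
        curl -v http://localhost:80/ping.txt

        # Same check, forcing a specific Host header (hypothetical value):
        curl -v -H 'Host: localhost' http://localhost:80/ping.txt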

    Read the article

  • hiera_include equivalent for resource types

    - by quickshiftin
    I'm using the yumrepo built-in type. I can get a basic integration to hiera working:

        yumrepo { hiera('yumrepo::name'):
          metadata_expire => hiera('yumrepo::metadata_expire'),
          descr           => hiera('yumrepo::descr'),
          gpgcheck        => hiera('yumrepo::gpgcheck'),
          http_caching    => hiera('yumrepo::http_caching'),
          baseurl         => hiera('yumrepo::baseurl'),
          enabled         => hiera('yumrepo::enabled'),
        }

    If I try to remove that definition and instead go for hiera_include('classes'), here's what I've got in the corresponding yaml backend:

        classes:
          - "yumrepo"
        yumrepo::metadata_expire: 0
        yumrepo::descr: "custom repository"
        yumrepo::gpgcheck: 0
        yumrepo::http_caching: none
        yumrepo::baseurl: "http://myserver/custom-repo/$basearch"
        yumrepo::enabled: 1

    I get this error on an agent:

        Error 400 on SERVER: Could not find class yumrepo

    I guess you can't get away from some sort of minimal node declaration w/ hiera and resource types? Maybe hiera_hash is the way to go? I gave this a shot, but it produces a syntax error:

        yumrepo { 'hnav-development':
          hiera_hash('yumrepo')
        }
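
    hiera_include only pulls in classes, so a resource type does need something to declare it. A common pattern (a sketch, assuming a Puppet version with create_resources, i.e. 2.6.5 or later -- the yumrepos hash name and layout are assumptions) is create_resources fed by hiera_hash:

        # site.pp -- minimal node block
        node default {
          create_resources('yumrepo', hiera_hash('yumrepos', {}))
        }

        # yaml backend -- one key per repo title, parameters nested underneath:
        # yumrepos:
        #   hnav-development:
        #     metadata_expire: 0
        #     descr: "custom repository"
        #     baseurl: "http://myserver/custom-repo/$basearch"
        #     enabled: 1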

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications:

        2 vCPUs
        4G RAM
        160GB disk space
        400Mb/s network
        System image: Ubuntu 12.04 LTS

    I am only running Magento CE 1.7.0.2 on this server, nothing else. Usually the server has a loading time of 4-5 seconds. Recently this has dropped to over 30 seconds, and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running the top command and sorting returns the following:

        top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91
        Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie
        Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st

          PID USER      PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        31901 www-data  20  0  360m  71m 5840 R    7  1.8  1:39.51 apache2
        32084 www-data  20  0  362m  72m 5548 R    7  1.8  1:31.56 apache2
        32089 www-data  20  0  348m  59m 5660 R    7  1.5  1:41.74 apache2
        32295 www-data  20  0  343m  54m 5532 R    7  1.4  2:00.78 apache2
        32303 www-data  20  0  354m  65m 5260 R    7  1.6  1:38.76 apache2
        32304 www-data  20  0  346m  56m 5544 R    7  1.4  1:41.26 apache2
        32305 www-data  20  0  348m  59m 5640 R    7  1.5  1:50.11 apache2
        32291 www-data  20  0  358m  69m 5256 R    6  1.7  1:44.26 apache2
        32517 www-data  20  0  345m  56m 5532 R    6  1.4  1:45.56 apache2
        30473 www-data  20  0  355m  66m 5680 R    6  1.7  2:00.05 apache2
        32093 www-data  20  0  352m  63m 5848 R    6  1.6  1:53.23 apache2
        32302 www-data  20  0  345m  56m 5512 R    6  1.4  1:55.87 apache2
        32433 www-data  20  0  346m  57m 5500 S    6  1.4  1:31.58 apache2
        32638 www-data  20  0  354m  65m 5508 R    6  1.6  1:36.59 apache2
        32230 www-data  20  0  347m  57m 5524 R    6  1.4  1:33.96 apache2
        32231 www-data  20  0  355m  66m 5512 R    6  1.7  1:37.47 apache2
        32233 www-data  20  0  354m  64m 6032 R    6  1.6  1:59.74 apache2
        32300 www-data  20  0  355m  66m 5672 R    6  1.7  1:43.76 apache2
        32510 www-data  20  0  347m  58m 5512 R    6  1.5  1:42.54 apache2
        32521 www-data  20  0  348m  59m 5508 R    6  1.5  1:47.99 apache2
        32639 www-data  20  0  344m  55m 5512 R    6  1.4  1:34.25 apache2
        32083 www-data  20  0  345m  56m 5696 R    5  1.4  1:59.42 apache2
        32085 www-data  20  0  347m  58m 5692 R    5  1.5  1:42.29 apache2
        32293 www-data  20  0  353m  64m 5676 R    5  1.6  1:52.73 apache2
        32301 www-data  20  0  348m  59m 5564 R    5  1.5  1:49.63 apache2
        32528 www-data  20  0  351m  62m 5520 R    5  1.6  1:36.11 apache2
        31523 mysql     20  0 3460m 576m 8288 S    5 14.4  2:06.91 mysqld
        32002 www-data  20  0  345m  55m 5512 R    5  1.4  2:01.88 apache2
        32080 www-data  20  0  357m  68m 5512 S    5  1.7  1:31.30 apache2
        32163 www-data  20  0  347m  58m 5512 S    5  1.5  1:58.68 apache2
        32509 www-data  20  0  345m  56m 5504 R    5  1.4  1:49.54 apache2
        32306 www-data  20  0  358m  68m 5504 S    4  1.7  1:53.29 apache2
        32165 www-data  20  0  344m  55m 5524 S    4  1.4  1:40.71 apache2
        32640 www-data  20  0  345m  56m 5528 R    4  1.4  1:36.49 apache2
        31888 www-data  20  0  359m  70m 5664 R    4  1.8  1:57.07 apache2
        32511 www-data  20  0  357m  67m 5512 S    3  1.7  1:47.00 apache2
        32054 www-data  20  0  357m  68m 5660 S    2  1.7  1:53.10 apache2
            1 root      20  0 24452 2276 1232 S    0  0.1  0:01.58 init

    Moreover, running free -m returns the following:

                     total       used       free     shared    buffers     cached
        Mem:          4003       3919         83          0        118        901
        -/+ buffers/cache:       2899       1103
        Swap:            0          0          0

    To investigate this further, I installed Apache Buddy; it recommended that I reduce the MaxClients setting, which I did. I also installed MySQLTuner, and it suggests that I set innodb_buffer_pool_size to >= 3.0G. However, I cannot do that, since the whole machine only has 4G of memory.

    Here is the output from Apache Buddy:

        ### GENERAL REPORT ###
        Settings considered for this report:
        Your server's physical RAM: 4003MB
        Apache's MaxClients directive: 40
        Apache MPM Model: prefork
        Largest Apache process (by memory): 73.77MB
        [ OK ] Your MaxClients setting is within an acceptable range.
        Max potential memory usage: 2950.8 MB
        Percentage of RAM allocated to Apache: 73.72 %

    And this is the output of MySQLTuner:

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M)
        [--] Reads / Writes: 45% / 55%
        [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 2.5G (64% of installed RAM)
        [OK] Slow queries: 0% (0/675K)
        [OK] Highest usage of available connections: 26% (40/151)
        [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M
        [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads)
        [OK] Query cache efficiency: 92.5% (500K cached / 541K selects)
        [!!] Query cache prunes per day: 302886
        [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts)
        [!!] Joins performed without indexes: 12135
        [OK] Temporary tables created on disk: 25% (8K on disk / 32K total)
        [OK] Thread cache hit rate: 90% (1K created / 12K connections)
        [!!] Table cache hit rate: 17% (400 open / 2K opened)
        [OK] Open file limit used: 12% (123/1K)
        [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks)
        [!!] InnoDB buffer pool / data size: 2.0G/3.5G
        [OK] InnoDB log waits: 0
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            query_cache_size (> 64M)
            join_buffer_size (> 128.0K, or always use indexes with joins)
            table_cache (> 400)
            innodb_buffer_pool_size (>= 3G)

    Last but not least, the server still has more than 60% of its disk space free. Now, based on the above, I have a few questions:

        1. Are these numbers normal? Do they make sense?
        2. Do I need to upgrade the server?
        3. If I don't need to upgrade and my configuration is not correct, how do I optimize it?
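
    Taken together, Apache's worst case (~2.9G) plus MySQL's (~2.5G) can exceed the 4G of RAM. One direction is to cap Apache harder and grow InnoDB only within the remaining budget -- a sketch only, where every value is an assumption to tune against your own load:

        # /etc/apache2/apache2.conf -- prefork sizing sketch
        <IfModule mpm_prefork_module>
            MaxClients    20    # ~74MB largest child x 20 = ~1.5G ceiling for Apache
        </IfModule>

        # /etc/mysql/my.cnf -- keep MySQL inside what is left
        [mysqld]
        innodb_buffer_pool_size = 2G
        query_cache_size        = 64M
        table_cache             = 600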

    Read the article

  • Cisco ASA - Enable communication between same security level

    - by Conor
    I have recently inherited a network with a Cisco ASA (running version 8.2). I am trying to configure it to allow communication between two interfaces configured with the same security level (DMZ-DMZ). "same-security-traffic permit inter-interface" has been set, but hosts are unable to communicate between the interfaces. I am assuming that some NAT settings are causing my issue. Below is my running config:

        ASA Version 8.2(3)
        !
        hostname asa
        enable password XXXXXXXX encrypted
        passwd XXXXXXXX encrypted
        names
        !
        interface Ethernet0/0
         switchport access vlan 400
        !
        interface Ethernet0/1
         switchport access vlan 400
        !
        interface Ethernet0/2
         switchport access vlan 420
        !
        interface Ethernet0/3
         switchport access vlan 420
        !
        interface Ethernet0/4
         switchport access vlan 450
        !
        interface Ethernet0/5
         switchport access vlan 450
        !
        interface Ethernet0/6
         switchport access vlan 500
        !
        interface Ethernet0/7
         switchport access vlan 500
        !
        interface Vlan400
         nameif outside
         security-level 0
         ip address XX.XX.XX.10 255.255.255.248
        !
        interface Vlan420
         nameif public
         security-level 20
         ip address 192.168.20.1 255.255.255.0
        !
        interface Vlan450
         nameif dmz
         security-level 50
         ip address 192.168.10.1 255.255.255.0
        !
        interface Vlan500
         nameif inside
         security-level 100
         ip address 192.168.0.1 255.255.255.0
        !
        ftp mode passive
        clock timezone JST 9
        same-security-traffic permit inter-interface
        same-security-traffic permit intra-interface
        object-group network DM_INLINE_NETWORK_1
         network-object host XX.XX.XX.11
         network-object host XX.XX.XX.13
        object-group service ssh_2220 tcp
         port-object eq 2220
        object-group service ssh_2251 tcp
         port-object eq 2251
        object-group service ssh_2229 tcp
         port-object eq 2229
        object-group service ssh_2210 tcp
         port-object eq 2210
        object-group service DM_INLINE_TCP_1 tcp
         group-object ssh_2210
         group-object ssh_2220
        object-group service zabbix tcp
         port-object range 10050 10051
        object-group service DM_INLINE_TCP_2 tcp
         port-object eq www
         group-object zabbix
        object-group protocol TCPUDP
         protocol-object udp
         protocol-object tcp
        object-group service http_8029 tcp
         port-object eq 8029
        object-group network DM_INLINE_NETWORK_2
         network-object host 192.168.20.10
         network-object host 192.168.20.30
         network-object host 192.168.20.60
        object-group service imaps_993 tcp
         description Secure IMAP
         port-object eq 993
        object-group service public_wifi_group
         description Service allowed on the Public Wifi Group. Allows Web and Email.
         service-object tcp-udp eq domain
         service-object tcp-udp eq www
         service-object tcp eq https
         service-object tcp-udp eq 993
         service-object tcp eq imap4
         service-object tcp eq 587
         service-object tcp eq pop3
         service-object tcp eq smtp
        access-list outside_access_in remark http traffic from outside
        access-list outside_access_in extended permit tcp any object-group DM_INLINE_NETWORK_1 eq www
        access-list outside_access_in remark ssh from outside to web1
        access-list outside_access_in extended permit tcp any host XX.XX.XX.11 object-group ssh_2251
        access-list outside_access_in remark ssh from outside to penguin
        access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group ssh_2229
        access-list outside_access_in remark http from outside to penguin
        access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group http_8029
        access-list outside_access_in remark ssh from outside to internal hosts
        access-list outside_access_in extended permit tcp any host XX.XX.XX.13 object-group DM_INLINE_TCP_1
        access-list outside_access_in remark dns service to internal host
        access-list outside_access_in extended permit object-group TCPUDP any host XX.XX.XX.13 eq domain
        access-list dmz_access_in extended permit ip 192.168.10.0 255.255.255.0 any
        access-list dmz_access_in extended permit tcp any host 192.168.10.29 object-group DM_INLINE_TCP_2
        access-list public_access_in remark Web access to DMZ websites
        access-list public_access_in extended permit object-group TCPUDP any object-group DM_INLINE_NETWORK_2 eq www
        access-list public_access_in remark General web access. (HTTP, DNS & ICMP and Email)
        access-list public_access_in extended permit object-group public_wifi_group any any
        pager lines 24
        logging enable
        logging asdm informational
        mtu outside 1500
        mtu public 1500
        mtu dmz 1500
        mtu inside 1500
        no failover
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 60
        global (outside) 1 interface
        global (dmz) 2 interface
        nat (public) 1 0.0.0.0 0.0.0.0
        nat (dmz) 1 0.0.0.0 0.0.0.0
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) tcp interface 2229 192.168.0.29 2229 netmask 255.255.255.255
        static (inside,outside) tcp interface 8029 192.168.0.29 www netmask 255.255.255.255
        static (dmz,outside) XX.XX.XX.13 192.168.10.10 netmask 255.255.255.255 dns
        static (dmz,outside) XX.XX.XX.11 192.168.10.30 netmask 255.255.255.255 dns
        static (dmz,inside) 192.168.0.29 192.168.10.29 netmask 255.255.255.255
        static (dmz,public) 192.168.20.30 192.168.10.30 netmask 255.255.255.255 dns
        static (dmz,public) 192.168.20.10 192.168.10.10 netmask 255.255.255.255 dns
        static (inside,dmz) 192.168.10.0 192.168.0.0 netmask 255.255.255.0 dns
        access-group outside_access_in in interface outside
        access-group public_access_in in interface public
        access-group dmz_access_in in interface dmz
        route outside 0.0.0.0 0.0.0.0 XX.XX.XX.9 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        dynamic-access-policy-record DfltAccessPolicy
        http server enable
        http 192.168.0.0 255.255.255.0 inside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        telnet timeout 5
        ssh 192.168.0.0 255.255.255.0 inside
        ssh timeout 20
        console timeout 0
        dhcpd dns 61.122.112.97 61.122.112.1
        dhcpd auto_config outside
        !
        dhcpd address 192.168.20.200-192.168.20.254 public
        dhcpd enable public
        !
        dhcpd address 192.168.0.200-192.168.0.254 inside
        dhcpd enable inside
        !
        threat-detection basic-threat
        threat-detection statistics host
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        ntp server 130.54.208.201 source public
        webvpn
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum client auto
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect ip-options
          inspect netbios
          inspect rsh
          inspect rtsp
          inspect skinny
          inspect esmtp
          inspect sqlnet
          inspect sunrpc
          inspect tftp
          inspect sip
          inspect xdmcp
        !
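
    Since every inside/dmz/public network falls under a "nat ... 1 0.0.0.0 0.0.0.0" rule, traffic between the two same-security segments likely fails translation before it is ever permitted. A sketch of a NAT exemption in 8.2 syntax (the subnet pair here is an assumption -- substitute the two networks that share a security level):

        access-list dmz_nonat extended permit ip 192.168.10.0 255.255.255.0 192.168.20.0 255.255.255.0
        nat (dmz) 0 access-list dmz_nonat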

    Read the article

  • Agent admitted failure to sign using the key.

    - by Delirium tremens
    The .ssh dir is chmodded 700, id_rsa.pub 600, id_rsa 400. I ran ssh-keygen -t rsa, imported the key to Launchpad, and ran bzr branch lp:unity, but got this error message:

        Agent admitted failure to sign using the key.
        Permission denied (publickey).
        bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.

    auth.log:

        Nov 28 20:23:13 ubuntu sudo: deltrem : TTY=pts/0 ; PWD=/home/deltrem/Documentos/repositories ; USER=root ; COMMAND=/usr/bin/bzr branch lp:unity
        Nov 28 20:39:01 ubuntu CRON[2959]: pam_unix(cron:session): session opened for user root by (uid=0)
        Nov 28 20:39:01 ubuntu CRON[2959]: pam_unix(cron:session): session closed for user root
        Nov 28 20:41:04 ubuntu gnome-screensaver-dialog: gkr-pam: unlocked login keyring
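
    For what it's worth, the usual first remedy for "Agent admitted failure to sign" (a general fix, not taken from this post) is to load the key into the running agent explicitly:

        ssh-add ~/.ssh/id_rsa    # hand the key to ssh-agent
        ssh-add -l               # confirm the agent now lists its fingerprint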

    Read the article

  • Azure VM with many IPs or SSL certificates

    - by timmah.faase
    I am looking to move our hosting environment to Azure, and to that end have created a sandpit VM to figure things out. We host around 300-400 websites in IIS, and about 2% of these sites have unique, non-wildcard certificates, each requiring a unique public IP in our current setup. Can you get a range of IPs pointing to one VM/endpoint? Or is it possible to create an SSL proxy? I've never built an SSL proxy but like the idea of it; I'd need advice on how to proceed if that is the best option. Sorry if this has been answered, and sorry also if my question isn't worded eloquently.

    Read the article

  • Spamassassin: How to delete all spam messages on the server?

    - by Beck
    I can't find out how to configure SpamAssassin to delete all spam messages. Currently it only marks messages as spam but passes them through IMAP to the client. How do I block them from passing through to IMAP clients? http://spamassassin.apache.org/full/3.3.x/doc/Mail_SpamAssassin_Conf.html

    It's also blocking some of our notification messages:

        -1.4 ALL_TRUSTED            Passed through trusted hosts only via SMTP
         0.0 HTML_MESSAGE           BODY: HTML included in message
         2.4 HTML_IMAGE_ONLY_08     BODY: HTML: images with 400-800 bytes of words
         2.9 TVD_SPACE_RATIO        BODY: TVD_SPACE_RATIO
         1.7 MIME_HTML_ONLY         BODY: Message only has text/html MIME parts
         1.1 HTML_MIME_NO_HTML_TAG  HTML-only message, but there is no HTML tag
         1.1 HTML_SHORT_LINK_IMG_1  HTML is very short with a linked image
        -1.4 AWL                    AWL: From: address is in the auto white-list

    This is what clients are getting in their mail in place of our notification messages. Any idea how to pass those messages through and how to delete incoming spam? Thanks ;) I have this setup: postfix, spamassassin, clamav-daemon, amavis.
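
    With the postfix + amavis + spamassassin stack described, discarding (rather than just tagging) is usually an amavisd-new decision. A sketch of the relevant settings (the file location varies by distro, and the score thresholds are assumptions to tune):

        # e.g. /etc/amavis/conf.d/50-user
        $sa_tag2_level_deflt = 6.2;       # add spam headers at this score
        $sa_kill_level_deflt = 6.2;       # trigger "evasive action" at this score
        $final_spam_destiny  = D_DISCARD; # drop spam instead of delivering it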

    Read the article

  • HTTP downloads slow - FTP of same file very fast - Windows 2003

    - by Paul Hinett
    I am having some issues with download speeds on my site via HTTP: I am averaging around 70kbps downloading a file that is around 70MB. But if I connect to my server via FTP and download the same file on the same computer and connection, I average about 300+kbps. My server has a lot of connections at any one time, probably around 400. It has a 1Gbps connection to the internet, so there is plenty of bandwidth available, as the FTP test proves. I have no throttling of any kind enabled in IIS. If interested, there is a test file here you can download to check the speed: http://filesd.house-mixes.com/test.zip. I am based in the UK and the server is in Washington, USA, if that makes any difference. Paul

    Read the article

  • tag structured Filesystems

    - by A.Rashad
    I hope this is the correct site; I lose my way between the four sister sites :) Let me ask the question this way: all file systems I have seen before are hierarchical -- a root directory, some branched directories, and so on, until files reside in those directories. The exception is the AS/400 file structure, which has the concept of a library that serves somewhat like a directory, but only one level deep. Why not have directory-less filesystems, where files are placed in a single location and file identifiers are referenced by a database of tag/file relationships? This way there would be no need for symbolic links, and one file could relate to multiple subjects instead of being contained by a single parent directory. I hope the idea is clear.
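
    As an illustration of the idea (a sketch, not an existing system), the tag/file relationship could be an ordinary many-to-many schema over a flat file store:

        -- hypothetical schema: files live in one flat store, tags replace directories
        CREATE TABLE files (
            id       INTEGER PRIMARY KEY,
            blob_ref TEXT NOT NULL           -- identifier of the file in the flat store
        );
        CREATE TABLE tags (
            id   INTEGER PRIMARY KEY,
            name TEXT UNIQUE NOT NULL
        );
        CREATE TABLE file_tags (             -- one file may carry many tags, and vice versa
            file_id INTEGER REFERENCES files(id),
            tag_id  INTEGER REFERENCES tags(id),
            PRIMARY KEY (file_id, tag_id)
        );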

    Read the article

  • How do I extract all the files in a VHD to a hard disk including permissions?

    - by Middletone
    I'd like to know the best way to make an exact copy of a VHD image and put it onto my hard disk. I've tried xcopy, but there seem to be a number of issues related to permissions when doing this. Ideally I'd like to copy the bits so that they match exactly on the new drive. I ran into this when trying to restore a Vista backup, only to discover that whoever designed it decided not to let me restore a 400 gig image to a 1 TB drive. I have successfully mounted the drive in Windows 7, which is the environment in which I'm trying to copy these files.
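
    With the VHD mounted (say as X:), one common approach -- a sketch, and the drive letters are assumptions -- is robocopy run from an elevated prompt, which can carry NTFS security along with the data:

        robocopy X:\ D:\restored /E /COPYALL /XJ /R:1 /W:1
        :: /E       copy all subdirectories, including empty ones
        :: /COPYALL copy data, attributes, timestamps, ACLs, owner, and audit info
        :: /XJ      skip junction points to avoid recursion loops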

    Read the article

  • Control truetype font tracking in fontforge

    - by ??O?????
    I'm designing a font intended to be used mainly as a subtitle font. It's a sans-serif font not unlike Helvetica. The em size is 2000 (I know it "should" be 2048 for a TrueType font), and the ascent/descent are 1600/400. Most glyphs don't go higher than 1450, though, with the exception of accented Latin characters. There lies the issue: when my font includes accented Latin characters, the tracking (line-of-text intra-distance, but correct me if I got the term "tracking" wrong) skyrockets. I can't seem to find a way to control that. As my intended design goes, I don't mind if a descender (e.g. "g") above and an accented capital Latin letter below almost touch; it's something that does not happen very often anyway. How can I control tracking? It seems to be auto-calculated.
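
    For reference, the line spacing TrueType renderers use comes mostly from the OS/2 and hhea vertical metrics, which FontForge will otherwise derive from the tallest glyphs. A sketch using FontForge's Python scripting to pin them down (the filename is a placeholder, and the values mirror the 1600/400 ascent/descent from the question):

        import fontforge

        font = fontforge.open("MyFont.sfd")   # hypothetical filename

        # Treat the values below as absolutes, not offsets from computed metrics
        font.os2_winascent_add = False
        font.os2_windescent_add = False
        font.hhea_ascent_add = False
        font.hhea_descent_add = False

        font.os2_winascent   = 1600
        font.os2_windescent  = 400
        font.hhea_ascent     = 1600
        font.hhea_descent    = -400            # hhea descent is negative
        font.os2_typoascent  = 1600
        font.os2_typodescent = -400
        font.os2_typolinegap = 0

        font.generate("MyFont.ttf")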

    Read the article

  • ZFS and SAN -- best practices?

    - by chris
    Most discussions of ZFS suggest that the hardware RAID be turned off and that ZFS should directly talk to the disks and manage the RAID on the host (instead of the RAID controller). This makes sense on a computer with 2-16 or even more local disks, but what about in an environment with a large SAN? For example, the enterprise I work for has what I would consider to be a modest sized SAN with 2 full racks of disks, which is something like 400 spindles. I've seen SAN shelves that are way more dense than ours, and SAN deployments way larger than ours. Do people expose 100 disks directly to big ZFS servers? 300 disks? 3000 disks? Do the SAN management tools facilitate automated management of this sort of thing?

    Read the article

  • .htaccess configuration issue

    - by Hammad Haider
    I am using two websites on one domain, like www.example.com and www.example.com/site2. On site2 there are two folders, folder1 and folder2. My index.php is in folder2, but the definitions of the methods it uses live in folder1. I include the files through .htaccess, but I am unable to load the files that are in folder1 and get errors 500 and 400 in the browser. I am using the following lines, but they are not working in the .htaccess file. This line works fine:

        RedirectMatch ^/$ http://www.example.com.pk/site2/views/

    These do not:

        AllowOverride All
        php_value include_path ".:/home/example/public_html/site2/system"

    Waiting for your quick response. Thanks. Regards, Hammad Haider.
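
    If the goal is simply to make PHP see both folders, one sketch (the paths are assumptions extrapolated from the existing include_path) is to append folder1 to the path rather than replace it -- noting that AllowOverride is a server/vhost-level directive and will itself cause a 500 when placed inside .htaccess:

        # .htaccess -- hypothetical paths
        php_value include_path ".:/home/example/public_html/site2/system:/home/example/public_html/site2/folder1"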

    Read the article
