Search Results

Search found 8501 results on 341 pages for 'status'.


  • Slowdown upon router/modem setup change

    - by Ollie Saunders
    I've been using a Belkin FSD7632-4 modem router to connect to my TalkTalk-provided ADSL internet connection for some time and have been pretty happy with it. Recently, however, the connection has been failing, so I decided to get an ASUS RT-N16 instead, which is also a much more capable router generally. The ASUS RT-N16 doesn't come with a built-in modem, so I purchased a Zoom modem as well. I've set them both up and am using them to post this message. But I'm a bit miffed to find that I get a significantly and consistently slower downstream rate from the new configuration than with the old Belkin:

        Belkin modem router:      downstream 3.45 Mbps, upstream 0.73 Mbps
        ASUS router + Zoom modem: downstream 2.71 Mbps, upstream 0.66 Mbps

    Any ideas why this is? The really weird thing is that the Zoom supports ADSL2 and ADSL2+ but I don't think the old Belkin does. At first I thought it might be due to the Zoom modem being limited to PPPoE instead of PPPoA, which my ISP supports, but then I tried using PPPoE with the Belkin and that still gave a high speed. I'm using VC-Mux encapsulation with both, with a VPI of 0 and a VCI of 38. I pulled this data off the Zoom:

        Mode:             ADSL2
        Line Coding:      Trellis On
        Status:           No Defect
        Link Power State: L0

                                                               Downstream   Upstream
        SNR Margin (dB):                                       12.3         11.8
        Attenuation (dB):                                      43.0         24.9
        Output Power (dBm):                                    12.9         0.0
        Attainable Rate (Kbps):                                3936         844
        Rate (Kbps):                                           3194         840
        MSGc (number of bytes in overhead channel message):    59           10
        B (number of bytes in Mux Data Frame):                 99           14
        M (number of Mux Data Frames in FEC Data Frame):       2            16
        T (Mux Data Frames over sync bytes):                   1            8
        R (number of check bytes in FEC Data Frame):           8            8
        S (ratio of FEC over PMD Data Frame length):           1.9833       9.0594
        L (number of bits in PMD Data Frame):                  839          219
        D (interleaver depth):                                 32           2
        Delay (msec):                                          15           4
        Super Frames:                                          15808        14078
        Super Frame Errors:                                    0            4294967232
        RS Words:                                              513778       111753
        RS Correctable Errors:                                 126          4294967238
        RS Uncorrectable Errors:                               0            N/A
        HEC Errors:                                            0            4294967279
        OCD Errors:                                            0            0
        LCD Errors:                                            0            0
        Total Cells:                                           1920175      237597
        Data Cells:                                            205993       392
        Bit Errors:                                            0            0
        Total ES:                                              0            0
        Total SES:                                             0            0
        Total UAS:                                             34           0

  • MySQL server error #2002

    - by LOIC
    Since this morning, phpMyAdmin has been giving me error #2002, "the server doesn't answer (or the connection to the local MySQL server is not correctly configured)", and a message about control-use. I'm disappointed: it worked until 2 a.m. last night, and now the MySQL engine won't start (it sometimes complains about sockets). XAMPP (LAMPP) is installed on Ubuntu 12.04.

        lm@Famou:~$ sudo service mysqld status
        mysqld: unrecognized service
        lm@Famou:~$ sudo service mysqld start
        mysqld: unrecognized service

    It never works with 'service'! And this is the result of restarting LAMPP:

        root@Famou:/opt/lampp# /opt/lampp/lampp restart
        Stopping XAMPP for Linux 1.7.7...
        XAMPP: Stopping Apache with SSL...
        XAMPP: XAMPP-MySQL is not running.
        XAMPP: Stopping ProFTPD...
        XAMPP stopped.
        Starting XAMPP for Linux 1.7.7...
        XAMPP: Starting Apache with SSL (and PHP5)...
        XAMPP: Starting MySQL...
        XAMPP: Couldn't start MySQL!
        XAMPP: Starting ProFTPD...
        XAMPP for Linux started.
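
    Error #2002 means the client couldn't reach MySQL's Unix socket, so the first step is usually to find out why XAMPP's bundled mysqld dies on startup. A hedged sketch (the error-log location is an assumption; XAMPP's layout varies by version):

        # XAMPP bundles its own MySQL, so the stock 'service mysqld' unit doesn't exist.
        # Read MySQL's own error log for the reason it won't start:
        sudo tail -n 50 /opt/lampp/var/mysql/*.err    # assumed log location
        # Check whether something else is already holding the port or socket:
        sudo netstat -lnp | grep -e 3306 -e mysql.sock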

  • Nginx + PHP-FPM Too Many Resources

    - by user3393046
    My server has the following specs:

        CPU: 6 cores, Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
        RAM: 32 GB

    I have a problem with nginx + php-fpm: they are taking too many resources for an unknown reason. Even if I restart nginx and php-fpm, the freshly started processes use many resources. My nginx config is the following:

        user nginx;
        worker_processes auto;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        worker_rlimit_nofile 300000;

        events {
            worker_connections 6000;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            #gzip on;
            include /etc/nginx/conf.d/*.conf;
        }

    My php-fpm pool config is the following:

        [www]
        user = nginx
        group = nginx
        listen = /var/run/php5-fpm.sock
        listen.owner = nginx
        listen.group = nginx
        listen.allowed_clients = 127.0.0.1
        pm = ondemand
        pm.max_children = 1500
        pm.process_idle_timeout = 5
        chdir = /
        security.limit_extensions = .php

    I'm using pm = ondemand since my website has to support many concurrent connections at the same time and I was unable to do it with dynamic/static. I guess this isn't the problem because, as I said earlier, when I restart nginx and php-fpm at the same time they take too many resources without any requests. Here is the screenshot with the CPU usage: http://s28.postimg.org/v54q25zod/Untitled.png
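
    With pm = ondemand and pm.max_children = 1500, a traffic burst can spawn enough workers to exhaust RAM, and each idle child then lingers for process_idle_timeout seconds. A quick way to size max_children is to measure the average resident memory per php-fpm worker; a sketch (the process name may differ, e.g. php-fpm5 on some distros):

        # Average RSS per php-fpm worker, in MB:
        ps --no-headers -o rss -C php-fpm | awk '{sum+=$1; n++} END {if (n) printf "%.1f MB avg over %d workers\n", sum/n/1024, n}'
        # A safe ceiling is roughly: (RAM reserved for PHP) / (average RSS per worker).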

  • CentOS 5 VPN Server won't work

    - by Miro Markarian
    I have a CentOS 5 server configured as both an L2TP server and a PPTP server, plus a RADIUS server hosting the AAA. My problem is that L2TP works great and I can connect to it, but I can't connect via PPTP: every time, it ends up with error #619 when it gets to the "verifying username and password" stage. Here is the log I got from /var/log/messages:

        Dec 17 07:40:02 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection started
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Starting call (launching pppd, opening GRE)
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin radius.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: RADIUS plugin initialized.
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin radattr.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: RADATTR plugin initialized.
        Dec 17 07:40:03 serverdl pppd[8571]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Dec 17 07:40:03 serverdl pppd[8571]: pptpd-logwtmp: $Version$
        Dec 17 07:40:03 serverdl pppd[8571]: pppd 2.4.4 started by root, uid 0
        Dec 17 07:40:03 serverdl pppd[8571]: Using interface ppp0
        Dec 17 07:40:03 serverdl pppd[8571]: Connect: ppp0 <--> /dev/pts/2
        Dec 17 07:40:03 serverdl pptpd[8570]: GRE: read(fd=7,buffer=80515e0,len=8260) from network failed: status = -1 error = Protocol not available
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: GRE read or PTY write failed (gre,pty)=(7,6)
        Dec 17 07:40:03 serverdl pppd[8571]: Modem hangup
        Dec 17 07:40:03 serverdl pppd[8571]: Connection terminated.
        Dec 17 07:40:03 serverdl pppd[8571]: Exit.
        Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection finished

    Just yesterday, before I had set up L2TP, PPTP was working great. Then I uninstalled it, removed all its config from /etc/*, installed L2TP first, installed PPTP after it, and it stopped working. I believe it must be a radiusclient issue, because both the PPTP and L2TP services use RADIUS to authenticate. The other thing I think could be the issue is the IP assignment for the PPP interfaces; I have the following config. Is that right?

    For L2TP:
        localip 10.10.10.1
        remoteip 10.10.10.2-254
    For PPTP:
        localip 10.10.9.1
        remoteip 10.10.9.2-254
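
    The telling line is "GRE: read ... failed: ... Protocol not available": PPTP needs GRE (IP protocol 47) in addition to TCP 1723, and a firewall rule or missing conntrack helper commonly breaks it after a reinstall. A hedged sketch of things to rule out (module names are the CentOS 5 / kernel 2.6.18 ones):

        sudo iptables -L -n | grep -i gre          # is protocol 47 accepted anywhere?
        sudo iptables -I INPUT -p gre -j ACCEPT    # temporary test rule
        lsmod | egrep 'pptp|gre'                   # PPTP conntrack/NAT helpers loaded?
        sudo modprobe ip_conntrack_pptp            # load the helpers and retest
        sudo modprobe ip_nat_pptp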

  • Apache Bench reports different result with same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up.

    1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns "Time per request: 41ms" but "Document Length: 0 bytes" and "HTML transferred: 0 bytes", with a transfer rate of 7.9 KB/s.
    2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns "Time per request: 83ms" along with the accurate document length and HTML transferred total.

    The APC cache status reports identical hit counts for both tests, and Apache Bench reports no errors in either case. Overall, there are no errors on any test sites and all logs are clean, etc. DirectoryIndex is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
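
    A zero-byte body that ab still counts as success usually means the direct /index.php request returns a redirect or an empty response. Comparing the raw responses for the two URLs will show the difference; a sketch:

        # Compare status line and headers for both URLs:
        curl -sI http://mysite.com/index.php
        curl -sI http://mysite.com/
        # Or let ab itself print response headers for a single request:
        ab -v 4 -n 1 http://mysite.com/index.php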

  • Ubuntu server crashes; need help figuring out why

    - by neezer
    I have a 768 slice at slicehost.com running Ubuntu Server 8.04.2 LTS (Hardy) with a LAMP stack on it that periodically crashes, though I'm not sure why. From what I can tell, a process goes rogue and consumes all the memory on the slice, suffocating all the other running programs until the whole thing comes to a grinding halt, and I have to do a hard reboot of the slice to get it back up and running again. I can't detect any pattern for this (it seems to happen about once a month, more or less). Here's a screenshot of my console during the last crash: [screenshot missing from this listing]

    I would assume that a possible cause might be a PHP script or an Apache configuration rule that causes the crash if triggered? How would I be able to find out which one is the offender? I've checked and rechecked all my PHP scripts, and running them doesn't seem to trigger the crash. I've also been able to log on to my system during a crash and see what's running (with top), but I can't tell how the offending process was started, so I can't trace the root of the problem! I know my description is overly generic, but unfortunately my expertise in tracking down the source of these glitches is very limited. If you need any additional information about my system in order to help me figure this out, please let me know in the comments, and I will append it to the question.

    My only other lead as to the culprit here is WordPress, which we have installed on this server. Here are the details: WordPress 3.0.3 with the following plugins installed and activated: Addmarx - Bookmark/Share/Email Dropdown, Akismet, All in One SEO Pack, Animated Banners, Automatically publish highlights of any website, directly to your Blog, Broken Link Checker, CMS Dashboard, Collapsing Categories, Status Updater, SubHeading, Ultimate Google Analytics, VastSubCat, WP-CMS Post Control, and WP Super Cache.
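
    Since the box stays up long enough to log in, one low-tech way to catch the culprit's origin is to snapshot the top memory consumers, with parent PIDs, every few minutes; after a crash the log shows what grew and what spawned it. A sketch via cron (the file name and log path are arbitrary choices):

        # /etc/cron.d/memwatch -- hypothetical file; logs top memory users every 5 minutes
        */5 * * * * root ps -eo pid,ppid,rss,args --sort=-rss | head -15 >> /var/log/memwatch.log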

  • Trouble with nginx and serving from multiple directories under the same domain

    - by Phase
    I have nginx set up to serve from /usr/share/nginx/html, and it does this fine. I also want it to serve from /home/user/public_html/map on the same domain. So:

        my.domain.com       would get you the files in /usr/share/nginx/html
        my.domain.com/map   would get you the files in /home/user/public_html/map

    With the below configuration (/etc/nginx/nginx.conf), it appears to be going to my.domain.com/map/map, as shown by this:

        2011/03/12 09:50:26 [error] 2626#0: *254 "/home/user/public_html/map/map/index.html" is forbidden (13: Permission denied), client: <edited ip address>, server: _, request: "GET /map/ HTTP/1.1", host: "<edited>"

    I've tried a few things but I'm still not able to get it to cooperate, so any help would be greatly appreciated.

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name _;
                #access_log logs/host.access.log main;

                location / {
                    root /usr/share/nginx/html;
                    index index.html index.htm;
                }

                location /map {
                    root /home/user/public_html/map;
                    index index.html index.htm;
                }

                error_page 404 /404.html;
                location = /404.html {
                    root /usr/share/nginx/html;
                }

                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root /usr/share/nginx/html;
                }
            }

            include /etc/nginx/conf.d/*.conf;
        }
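
    The doubled /map comes from how root works: nginx appends the full request URI to root, so "location /map" with "root /home/user/public_html/map" resolves to /home/user/public_html/map/map/index.html. The usual fix is either to point root one level up, or to use alias, which substitutes the location prefix instead of appending. A sketch:

        location /map/ {
            alias /home/user/public_html/map/;   # alias replaces the /map/ prefix
            index index.html index.htm;
        }
        # equivalently: location /map { root /home/user/public_html; ... }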

  • Internet Explorer / Windows 7 does not want to show HTML file from local network drive

    - by Jaanus
    Setup: I have Windows 7 running inside VirtualBox on a Mac OS X host. I have a shared drive with some HTML files that I am mounting as a local drive W: in Windows, from the VirtualBox server \\VBOXSVR. I want to look at them with a browser in Windows. Chrome in Windows 7 opens and shows those HTML files just fine (file:///W:/welcome.html). But Internet Explorer does not, and shows this error instead of the files:

        Internet Explorer cannot display the web page
        What you can try: [button: Diagnose Connection Problems]
        More information
        This problem can be caused by a variety of issues, including:
          - Internet connectivity has been lost.
          - The website is temporarily unavailable.
          - The Domain Name Server (DNS) is not reachable.
          - The Domain Name Server (DNS) does not have a listing for the website's domain.
          - If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.

    For the internet zone, the status bar shows: Internet | Protected Mode: On. IE settings are a mystery to me, and I could possibly get it to work by tweaking them, but I don't know which ones. How do I make IE show the same files that Chrome is happy to show? (Chrome showing them means the files themselves are fine; there is something about the setup that just makes IE be a diva.)

  • Cannot Start Passenger 3.0.18 Using Mountain Lion (OS X Server) and RVM

    - by LightBe Corp
    I recently did a clean install of Mountain Lion on my Mac mini server. I installed Passenger 3.0.18 from a gem according to the directions on http://www.phusionpassenger.com, with no errors that I could see:

        rvmsudo gem install passenger-enterprise-server-3.0.18.gem
        rvmsudo passenger-install-apache2-module

    Here are my entries in /etc/apache2/httpd.conf, with my username masked:

        LoadModule passenger_module /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18/ext/apache2/mod_passenger.so
        PassengerRoot /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18
        PassengerRuby /Users/username/.rvm/wrappers/ruby-1.9.3-p327/ruby

    I uncommented the following statement:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Here is a sample virtual host entry; I have three of them in the file:

        <VirtualHost *:80>
            ServerName www.mydomain.com
            ServerAlias mydomain.com
            PassengerAppRoot /Users/username/Sites/myfolder/
            DocumentRoot /Users/username/Sites/myfolder/public
            <Directory /Users/username/Sites/myfolder/public>
                Allow from all
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    I have restarted Apache several times. Here is information from my server:

        [~]$ ps -ef | grep Passenger
          501 18804   303   0 12:39PM ttys000    0:00.00 grep Passenger
        [~]$ rvmsudo passenger-status
        Password:
        **ERROR: Phusion Passenger doesn't seem to be running.**
        [~]$ rvmsudo passenger-config --version
        3.0.18

    I have tried searching online for this and was surprised there was not much on this specific error, even though, as I understand it, Passenger has been around for a few years. I have posted this issue on the Phusion Passenger Google Group but have not heard anything. Any help would be appreciated, the sooner the better, LOL. Seriously, I need to have one of my three websites up by tomorrow evening, and this is the only issue stopping that from happening. Thanks again.
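
    Passenger only spins up after Apache has actually loaded the module and served a request for one of its virtual hosts, so the first things to verify are that the config parses and that the module really loads. A sketch (the log path is the OS X default and an assumption):

        sudo apachectl configtest                    # syntax check; catches a bad LoadModule path
        apachectl -M | grep -i passenger             # is passenger_module actually loaded?
        tail -n 50 /private/var/log/apache2/error_log   # assumed log location; look for Passenger startup errors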

  • How do I push my initial snapshot to a subscriber server in SQL Server 2000?

    - by Kev
    I'm configuring transactional replication using the push model. The scenario is:

    - The SQL Servers: SQL01 (publisher) and SQL02 (subscriber), both running SQL 2000 SP4.
    - Both servers are standalone (i.e. not domain members).
    - Both servers have their FQDN and NETBIOS names in their HOSTS files.

    I've managed to configure SQL01 to publish my database, and I configured a push subscription for SQL02 using the Push New Subscription wizard, setting the Distribution Agent to update the subscription continuously. On the Push Subscription wizard's "Initialise Subscription" page I selected "Yes, initialise the schema and data" and ticked the "Start the Snapshot Agent to begin the initialisation process immediately" option. All the required services are running (SQL Agent). When I complete the wizard and browse the Replication - Publications folder, I can see my publication (blue book with arrow). The publication shows the push subscription and its status is Pending. If I look in the C:\Program Files\Microsoft SQL Server\Mssql\Repldata folder, I see a number of T-SQL scripts for each table, e.g. Products.bcp, Products.sch, Products.idx. What should happen now? Should my replicated database now (magically) appear on the subscription server?
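
    Once the snapshot files are in Repldata, it is the Distribution Agent's job to apply them at the subscriber; a subscription stuck at Pending usually means that agent hasn't run or can't connect. A hedged T-SQL sketch for checking from the distributor (the agent job name is hypothetical; copy the real one from the agent's properties):

        -- On the distributor (SQL01), check the Distribution Agent's recent history for errors:
        SELECT TOP 10 time, comments
        FROM distribution.dbo.MSdistribution_history
        ORDER BY time DESC

        -- Start the Distribution Agent job by name:
        EXEC msdb.dbo.sp_start_job @job_name = 'SQL01-MyDB-MyPublication-SQL02-1'  -- hypothetical name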

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services are only listening on one of them, and to check them correctly I'd like to know if it's possible to have multiple IP addresses configured for a single host in Icinga. Here's a minimal example:

    Remote server:
        eth0: 1.2.3.4 (public IP)
        eth1: 10.1.2.3 (private IP, secure tunnel)
        Apache listening on 1.2.3.4:80 (public only)
        OpenSSH listening on 10.1.2.3:22 (internal network only)
        Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:
        eth0: 10.2.3.4 (private IP, internet access)

    Now if I define a host:

        define host {
            use        generic-host
            host_name  server1
            alias      server1.gertvandijk.net
            address    10.1.2.3
        }

    this will not check the HTTP status correctly. And defining an additional host:

        define host {
            use        generic-host
            host_name  server1-public
            alias      server1.gertvandijk.net
            address    1.2.3.4
        }

    will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts to show up as a single host, while keeping an easy configuration to check the services on their proper addresses. What is the most elegant, configuration-line-saving solution to this? I read about several plugins available to work around this, but I can't figure out what the current way to address it is. Solutions go back to 2003, but I'm running Icinga 1.7.1, which is already capable of the address6 option, yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as the one on 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up now. Please share your solution to this. Thanks! :)
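
    One common pattern is to keep a single host object (with the address used for host checks) and override the target IP per service, via a custom variable passed into the check command. A sketch (the _PUBLIC_ADDRESS variable and check_http_addr command are assumptions; check_http's -I flag is standard):

        define host {
            use              generic-host
            host_name        server1
            address          10.1.2.3        ; used for ping/host checks
            _PUBLIC_ADDRESS  1.2.3.4         ; custom variable for public-only services
        }

        define service {
            use                  generic-service
            host_name            server1
            service_description  HTTP
            check_command        check_http_addr!$_HOSTPUBLIC_ADDRESS$
        }

        define command {
            command_name  check_http_addr
            command_line  $USER1$/check_http -I $ARG1$
        }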

  • Print spooler service stops running when a print job is sent

    - by Hanan N.
    Every time I send a print job to the printer, I get no response from the printer, and in the printer's job list the job status shows an error, without any clue as to what the problem could be. After some investigation, I found that every time I send a print job, the print spooler service stops running, then starts again a second or two later (I think this behaviour is related to the spooler's recovery settings, which restart it after it fails). Things I have tried so far:

    - Removed and reinstalled the driver. After removing the driver, I removed the unnecessary registry keys according to this article from Microsoft; these are:
        - Rename all files and folders in c:\windows\system32\spool\drivers\w32x86
        - Remove anything but Drivers and Print Processors in: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Environment\Windows NT x86
        - Remove anything in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Monitors except: BJ Language Monitor, Local Port, Microsoft Document Imaging Writer Monitor, Microsoft Shared Fax Monitor, Standard TCP/IP Port, USB Monitor, WSD Port
    - Disconnected and reconnected the printer.
    - Cleaned the computer of viruses and spyware.

    Currently I am stuck; I have no more things to try. If anybody knows of any kind of solution, please let me know. Since I want to keep this post about the general print-spooler problem, and not just my particular case, I left the Windows version and printer model out of the title; they are (although I don't think it relates to just this model): Windows 7 32-bit, HP Officejet 4500 G510g-m (connected to the computer via USB). Thanks.
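
    When the spooler dies only when a job is submitted, a stuck file in the spool directory or a corrupt queued job is a usual suspect. A hedged cleanup sketch, run from an elevated command prompt:

        net stop spooler
        REM clear any stuck jobs out of the spool directory
        del /Q /F %systemroot%\System32\spool\PRINTERS\*.*
        net start spooler
        REM then check Event Viewer (Application log) for the faulting module if it crashes again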

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's a Ubuntu-specific Stack Exchange website, but I thought I'd ask here because it's a server-specific question. If I'm wrong in my logic... well, you people are better at this than I am! O=) On with the show!

    I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format      crypt(3) salt format
          -g             generate random password
          -h hash        password scheme
          -n             omit trailing newline
          -s secret      new password
          -u             generate RFC2307 values (default)
          -v             increase verbosity
          -T file        read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After many Google searches and much forum trolling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
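
    The slappasswd usage dump in the middle of the postinst is the tell: the package's configure script invoked slappasswd with arguments it rejected, so olcRootPW ended up empty and slapadd refused the generated LDIF. Re-running the package configuration with a fresh admin password often sorts this out; a hedged sketch:

        sudo dpkg-reconfigure slapd     # re-run the initial configuration, set the admin password
        # if the package is wedged half-configured, purge and reinstall instead:
        sudo apt-get purge slapd
        sudo apt-get install slapd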

  • What are the IR codes the new Apple Remote (alu) uses?

    - by index
    I would like to clone the new Apple Remote (infrared, second generation, aluminium), just for fun, with a microcontroller. Most codes of the previous model can be found in the LIRC remote control database (all except the key combinations menu + <<,play, which unpair, change ID, and pair the remote). I also don't know which bit encodes the battery status. It uses a modified 32-bit NEC protocol (reverse the LIRC codes bytewise). But the new Apple Remote uses two additional codes for the play and the new select button. I don't have a Mac, so I can't brute-force test codes either ;-) So if someone possesses such a remote and the ability to record those two new buttons and the three combinations, I'd really appreciate it. If you can't run LIRC (or it gets confused by the new codes) and you don't have an oscilloscope or logic analyser, maybe you could hook up a photodiode to your sound input and record the codes with Audacity? Just hit record, hit each button and combo a few times, hit stop, upload the uncompressed WAV file to a sharing site, done. That'd be great!

  • dig show only answer

    - by Zulakis
    I want dig to show only the answer to my query. Normally, it prints out a lot of additional info, like this:

        ; <<>> DiG 9.7.3 <<>> google.de
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55839
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;google.de.                     IN      A

        ;; ANSWER SECTION:
        google.de.              208     IN      A       173.194.69.94

        ;; Query time: 0 msec
        ;; SERVER: 213.133.99.99#53(213.133.99.99)
        ;; WHEN: Sun Sep 23 10:02:34 2012
        ;; MSG SIZE  rcvd: 43

    I want this reduced to just the answer section. dig has a lot of options; a good combination I found was +noall +answer:

        ; <<>> DiG 9.7.3 <<>> google.de +noall +answer
        ;; global options: +cmd
        google.de.              145     IN      A       173.194.69.94

    It leaves out most of the stuff, but still shows this options header. Any ideas on how to remove it using dig options? I could certainly cut it out using other tools, but an option within dig itself would be the cleanest and nicest.
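
    The two remaining lines are the command echo, which +noall doesn't suppress in some dig versions; +nocmd turns it off explicitly, and +short goes further and prints only the record data. A sketch:

        dig +nocmd google.de A +noall +answer   # answer section only
        dig +short google.de                    # just the bare rdata, e.g. the IP address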

  • AJP Connector Apache-Tomcat with php and java application

    - by Safari
    I have a question about the proxy and AJP modules. On my machine I have an Apache web server and a Tomcat servlet container; my Java web application runs on Tomcat. On Apache I have some services, which I can call this way:

        http://myhost/service1
        http://myhost/service2
        http://myhost/service3

    I would like to configure an AJP connector to reach my Tomcat web application from Apache, so that something like http://myhost calls the Tomcat webapp. So I configured Apache as follows, and I get what I wanted: I can use http://myhost to view the Tomcat webapp through Apache.

        <VirtualHost *:80>
            ProxyRequests off
            ProxyPreserveHost On
            ServerAlias myserveralias
            ErrorLog logs/error.log
            CustomLog logs/access.log common

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPass /server-status !
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1

            <Proxy balancer://mycluster>
                BalancerMember ajp://myIp:8009 min=10 max=100 route=portale loadfactor=1
                ProxySet lbmethod=bytraffic
            </Proxy>

            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from localhost
            </Location>

            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            LogFormat "%{Referer}i -> %U" referer
            LogFormat "%{User-agent}i" agent
        </VirtualHost>

    But now I can't use the Apache services: if I request http://myhost/service1, I get an error because Apache tries to find service1 in my Tomcat webapp. Is there a way to fix this?
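
    ProxyPass directives match in order, and this config already uses the ! exclusion form for /server-status and /balancer-manager; the same trick works for the local services, as long as the exclusions appear before the catch-all ProxyPass /. A sketch:

        ProxyPass /service1 !
        ProxyPass /service2 !
        ProxyPass /service3 !
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1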

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by Trevor Tippins
    Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port.

    When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file... but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job.

    The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling, but this only serves to hang the terminal session too. I've also tried running the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that, but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer), but operationally this isn't going to be an option.

    Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix?

        Remote OS: AIX 5
        Client OS: Windows 7 Pro (32-bit)
        Printer: Generic/Text on a FILE port
        TE Software: MultiView 2000 (32-bit)

    Thanks in advance.

  • Salary Survey Entry Level network position [closed]

    - by will
    Hello, I started interning with a company about 5 months ago, and for the past 7 months I have been a normal part-time employee. This week I have a review, where I am hoping to get a raise. I started at $8 interning, and now I'm up to $13. What I am trying to figure out is how to survey what others are making in similar positions, so I can take a base number to the review. Here are my thoughts on my position right now.

    Review thoughts

    My qualifications:
    - Associate of Applied Science - IT Network Specialist
    - CompTIA A+ certification
    - CompTIA Server+ certification
    - CompTIA Network+ certification
    - Currently pursuing Cisco certifications
    - Junior status at Insert college, pursuing a Bachelor's in Information Technology with an emphasis in networking; 3.4 GPA
    - 1 year of working at Insert Company

    My contributions to Insert Company:
    - Offering near-fulltime work through the semester and fulltime through summer
    - Ability to work after hours and on weekends
    - Developed and support the helpdesk system
    - Set up and maintain an update server to keep desktop clients up to date
    - Deployed and maintain the antivirus solution for end users
    - Assist with main projects such as SAN, virtualization, and network survey
    - Migrating end-user stations to Windows 7 (current project)
    - Developing an imaging solution for desktop PCs (current project)

    Any tips on determining an asking number would help. I was thinking $17-$18; am I way off here?

  • Mailgun Is Not Detecting My New MX Records

    - by Tyler Crompton
    When I issue a dig command to verify my MX records, I get the following output:

        $ dig example.com MX

        ; <<>> DiG 9.9.5-3-Ubuntu <<>> example.com MX
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47700
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 5

        ;; OPT PSEUDOSECTION:
        ; EDNS: version: 0, flags:; udp: 4096
        ;; QUESTION SECTION:
        ;example.com.                   IN      MX

        ;; ANSWER SECTION:
        example.com.            85468   IN      MX      10 mxa.mailgun.org.
        example.com.            85468   IN      MX      10 mxb.mailgun.org.

        ;; REMAINDER OF OUTPUT REMOVED FOR BREVITY

    However, when I click "Check DNS Records Now" on Mailgun, it verifies the changes to the TXT and CNAME records but says that my MX records have not been changed:

        Type | Priority | Enter This Value  | Current Value
        -----+----------+-------------------+--------------------
        MX   | 10       | mxa.mailgun.org   | 10 mail.example.com
        MX   | 10       | mxb.mailgun.org   | 10 mail.example.com

    I updated these records three to four hours ago. I know it said to wait twenty-four to forty-eight hours, but I feel that if it detected the other DNS changes, it should detect the MX record changes. Am I being impatient, or is this a legitimate concern? What do you suggest I do?

    Note: I'd create a Mailgun tag for this, as I feel it'd be appropriate, but I don't have enough reputation to do so.
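
    One thing worth ruling out is resolver caching: the dig above went through a caching resolver (note the odd 85468-second TTL counting down), while Mailgun queries on its own schedule. Querying the domain's authoritative name servers directly shows what every resolver will eventually see; a sketch (the NS host is whatever the first command returns, ns1.example.com here is a placeholder):

        dig example.com NS +short                             # find the authoritative servers
        dig example.com MX @ns1.example.com +noall +answer    # query one of them directly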

  • Windows 7 ignores F6/F8 and will not boot

    - by P.Brian.Mackey
    I have a work PC with Sophos SafeGuard encryption on it, and Windows fails to start. When I boot up, I receive an error saying a recent hardware or software change might be the cause:

        File: \Boot\BCD
        Status: 0xc0000098
        Info: The Windows boot configuration data file does not contain a valid OS entry.

    This began after the PC forced me to run a system recovery. My machine had powered down improperly (power outage?) and simply would not respond to my keyboard input to cancel the option to scan my system. After the scan "repaired" a boot file, my system crashed. Now it tells me I can insert my Windows 7 disk and run recovery. I can't simply do this because of SafeGuard: the system recovery can't see my encrypted drive. I tried hitting F2 to manually log in to SafeGuard and then selected the option to boot from media. The computer prompts me to hit any key to boot from disk... which I do, but once again it is not reading my keyboard input. I can't get F8/F6 to bypass startup files and get me to a command prompt like in the old days. If I could get to a command prompt, I might be able to recover the file Windows broke from its backup location... though I may need to use the Windows recovery disk UI to do this..??? In the past I've been able to plug in a PS/2 keyboard when USB keyboards stop responding like this, but I have no PS/2 keyboard available. Does anyone have any idea how I can undo the damage Windows system recovery has done with SafeGuard installed?
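
    For reference, once a recovery command prompt is reachable through the SafeGuard pre-boot environment (so the decrypted volume is visible), the standard repair for a 0xc0000098 BCD error is bootrec; a sketch, assuming you can get that far:

        REM find Windows installations missing from the BCD store
        bootrec /scanos
        REM rebuild the BCD store from what it finds
        bootrec /rebuildbcd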

  • Failover Cluster Quorum Failing

    - by oruchreis
    Hi, I have two nodes, both of which boot from iSCSI, forming a Windows 2008 failover cluster, and I'm using the disk majority option as quorum, over iSCSI. But when the quorum disk's iSCSI connection fails (perhaps a SAN server reset), the failover cluster fails too. If I reset one of the nodes it can boot, but its system disk goes offline, and I can't bring it back online because it says it's reserved by the failover cluster (the disk is on iSCSI, because of iSCSI boot). The disk then behaves as read-only: nothing on it can be deleted or written. So I can't rejoin the node to the cluster, and I have to reinstall Windows.

    So, what I'm asking is: how can I implement more quorum backup? I mean, can I use both disk majority and file share majority at the same time? AFAIK, every node also keeps a copy of the quorum, but sometimes the SAN servers go offline, and both the quorum's iSCSI connection and the nodes' iSCSI connections get lost. So neither the quorum copy kept on the nodes nor the quorum iSCSI disk is enough to start the cluster again. I want to use both disk majority and file share majority at the same time. Can I do this? Do you have any other suggestions? Regards.
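
    Windows failover clustering allows only one quorum model at a time, so disk majority and file share majority can't be combined; what can be done is to switch the witness to a file share that doesn't depend on the SAN. A sketch using the failover clustering PowerShell module, which assumes Windows Server 2008 R2 or later (the share path is hypothetical):

        Import-Module FailoverClusters
        Get-ClusterQuorum                       # show the current quorum configuration
        Set-ClusterQuorum -NodeAndFileShareMajority \\witness-host\cluster-quorum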

  • S.M.A.R.T. broken sectors

    - by Jeffrey Vandenborne
    Recently I received back my hard drive (LaCie), which I'd sent away for warranty. The disk had failed, and I had used the Palimpsest Disk Utility to check whether anything was wrong in the S.M.A.R.T. status; it said there were a few broken sectors. So the next day I went to the store and told the story. Four weeks later, I actually got my drive back. The first thing I did was plug it in and start the Disk Utility, and weirdly it showed me pretty much the exact same things: even the values of most tests were the same as they were when my drive broke. The serial number is different, though, but it does show a very peculiar value. Now I'm wondering: I'm almost sure it's the exact same drive, and it still says I've got broken sectors. Does it just say that because it has been cached in the drive somewhere, while LaCie DID actually fix it? Or should I run the extended self-test (which seems to take hours) first? Also, I've tried the smartctl command-line tool: it says the drive has SMART support, but it doesn't show anything; it says that SMART is enabled, but then it says that it's disabled, as in the picture below. The picture of the Disk Utility: [screenshot missing from this listing] Thanks in advance.
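
    SMART attributes live in the drive's own firmware, so they travel with the physical disk, not the enclosure; reading the identity and raw attributes should settle whether it's the same disk and whether sectors were actually reallocated. A sketch (the device node /dev/sdb is an assumption; add -d sat if the USB bridge needs it):

        sudo smartctl -i /dev/sdb        # identity: compare model and serial number
        sudo smartctl -s on /dev/sdb     # enable SMART if it reports disabled
        sudo smartctl -A /dev/sdb        # attributes: check 5 Reallocated_Sector_Ct
        sudo smartctl -t long /dev/sdb   # kick off the extended self-test in the background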

  • SharePoint crawl not indexing main site

    - by user22215
    Guys, I'm having some strange search issues going on with my main portal application. First off, let me give you a little background on the problem web app. Our SharePoint environment was originally set up by a consultant who did not follow best practices: she used one web app to house our company's intranet site, SSP, and My Sites. Since then, I have provisioned a new SSP that I have segmented correctly, and I moved all of our other sites over to the new SSP without any problems. However, I could not assign the main portal app to the new SSP, since the portal app housed the SSP site collection. So I deleted the SSP site collection, after that deleted the SSP, and assigned the portal app to my new SSP.

    Now this is where the problem starts: when I attempt to crawl this application, the crawl starts, then stops 5 seconds later with a status of success, reporting that 1 item was successfully crawled. The funny thing is, the main portal app has nearly 30,000 items. I have tracked the problem down to the web app: if I create a test web app and restore the content into it, I have no problem crawling all 30,000 items. Also, all of my other web apps that use the same SSP have no problem completing crawls. I don't see anything in the ULS logs or in Server 2003's event viewer. Also, I'm using a separate dedicated index server that's configured to crawl itself via a HOSTS file configuration. I would like to fix this problem without having to recreate our main portal site, because we have several custom code modifications where DLLs were registered to the IIS bin folder; I don't even want to get into the Silverlight mods that were done. Any help with this problem is much appreciated. Same problem as mine: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/MS-SharePoint/Q_23885820.html
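
    A crawl that "succeeds" after one item, on an index server configured to crawl itself by host name, is the classic symptom of the Windows loopback check blocking the crawler's authentication against the local host name. A commonly cited workaround is disabling the loopback check on the index server (weigh the security implications first); a sketch:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f
        REM reboot (or at least restart IIS and the search service) for the change to take effect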

  • Postfix relay all mail through SES except for one sending domain / address

    - by Kevin
    I'm thinking this is really, really super simple, but I can't figure out what I need to do. I don't mess with Postfix much (I just let it run and do its thing), so I've got no idea where to even start with this. We have Postfix currently configured to relay all mail out through SES using the config below. We need to modify this so that emails sent from one of our domains (domain.com) DO NOT go through SES; everything else should continue to flow out through the SES connection. I'm assuming this is like a one-line thing, but my Google skills are not helping me at all.

        relayhost = email-smtp.us-east-1.amazonaws.com:25
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_use_tls = yes
        smtp_tls_security_level = encrypt
        smtp_tls_note_starttls_offer = yes
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_destination_concurrency_limit = 450

    Update: I have created a sender_transport file in /etc/postfix. In it is:

        @domain.com smtp:

    I then ran this through postmap and placed

        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

    above the block of code shown earlier, and restarted Postfix, but still all email is going out through SES.

    Log after sending:

        Oct 22 14:38:48 web postfix/smtp[19446]: 4B19D640002: to=<[email protected]>, relay=email-smtp.us-east-1.amazonaws.com[54.243.47.187]:25, delay=1.4, delays=0.01/0/0.92/0.44, dsn=2.0.0, status=sent (250 Ok 00000141e21b181f-ee6f7c4f-f0f5-4b0f-ba69-2db146a4f988-000000)
        Oct 22 14:38:48 web postfix/qmgr[19435]: 4B19D640002: removed

    I don't think this log is what you're looking for, but it's the only thing that is logged when mail goes out, and this is with me running /usr/sbin/postfix -v start manually and not with the init script.
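
    For the record, sender_dependent_default_transport_maps keys on the envelope sender, and the map's right-hand side names a transport; "smtp:" with an empty nexthop means "deliver directly by MX lookup", which is what's wanted here. Note the parameter requires Postfix 2.7 or later. A sketch of verifying each piece of the round trip:

        postconf sender_dependent_default_transport_maps   # confirm Postfix actually sees the setting
        cat /etc/postfix/sender_transport                  # should contain: @domain.com smtp:
        postmap /etc/postfix/sender_transport              # rebuild the .db after any edit
        postmap -q @domain.com hash:/etc/postfix/sender_transport   # should print: smtp:
        sudo postfix reload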

  • Mac updated just now, postgres now broken

    - by user52224
    I run PostgreSQL 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the OS X software update message as I have a few times before; my system downloaded approx. 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my thin server (mirroring Heroku cedar) as normal and then browse to my Rails app, I get:

        PG::Error
        could not connect to server: Permission denied
        Is the server running locally and accepting
        connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:

    - Running psql from the command line gives the same error
    - I can run pgAdmin 3, connect via it, and run SQL with no problems
    - Running which psql shows the version as /usr/bin/psql
    - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well. I know it's rubbish, but apart from a note on passwords, I don't have any extra info on how I configured Postgres, though the obvious implication is that I did not use the _postgres user.

    Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.

    Edit: Playing around based on this question and answer (http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x), see this string of commands:

        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit
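
    The server is clearly still running (the postmaster at PID 68 holds the data directory), so the failure is on the client side: the OS X update likely restored Apple's /usr/bin/psql, which looks for its socket in /var/pgsql_socket, a directory the PostgreSQL 9.1 install doesn't write to (hence "Permission denied"). Pointing the client at TCP, or at the 9.1 binaries, usually confirms it; a sketch:

        which -a psql                                    # list every psql on the PATH
        psql -h localhost -U postgres                    # force TCP instead of Apple's socket dir
        /Library/PostgreSQL/9.1/bin/psql -U postgres     # or use the installer's own client directly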
