Search Results

Search found 51790 results on 2072 pages for 'long running'.


  • Why doesn't Apache start from xampp control panel after changes to vhosts config?

    - by Grafica
    I'm running XAMPP on my local server and want to host multiple sites, so I changed the httpd-vhosts.conf file. Will somebody let me know if there is something wrong with my config? Apache was running while I had only one site in the config, but after I added another site I stopped Apache, and now I'm not able to restart it. Here is the file:

        #
        # Virtual Hosts
        #
        # If you want to maintain multiple domains/hostnames on your
        # machine you can setup VirtualHost containers for them. Most configurations
        # use only name-based virtual hosts so the server doesn't need to worry about
        # IP addresses. This is indicated by the asterisks in the directives below.
        #
        # Please see the documentation at
        # <URL:http://httpd.apache.org/docs/2.2/vhosts/>
        # for further details before you try to setup virtual hosts.
        #
        # You may use the command line option '-S' to verify your virtual host
        # configuration.
        #
        # Use name-based virtual hosting.
        #
        ##NameVirtualHost *:80
        #
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        #
        ##<VirtualHost *:80>
        ##ServerAdmin [email protected]
        ##DocumentRoot "C:/xampp/htdocs/dummy-host.localhost"
        ##ServerName dummy-host.localhost
        ##ServerAlias www.dummy-host.localhost
        ##ErrorLog "logs/dummy-host.localhost-error.log"
        ##CustomLog "logs/dummy-host.localhost-access.log" combined
        ##</VirtualHost>
        ##<VirtualHost *:80>
        ##ServerAdmin [email protected]
        ##DocumentRoot "C:/xampp/htdocs/dummy-host2.localhost"
        ##ServerName dummy-host2.localhost
        ##ServerAlias www.dummy-host2.localhost
        ##ErrorLog "logs/dummy-host2.localhost-error.log"
        ##CustomLog "logs/dummy-host2.localhost-access.log" combined
        ##</VirtualHost>

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs"
            ServerName evamagnus.com
            <Directory "C:\xampp\htdocs\">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs2\"
            ServerName mygrafica.com
            <Directory "C:\xampp\htdocs2\">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Here is what it says in the control panel:

        2:17:37 PM  [apache]  Starting apache service...
        2:17:38 PM  [apache]  Status change detected: running
        2:17:39 PM  [apache]  Status change detected: stopped

    Thanks in advance.
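
    A quick way to narrow this down is to let Apache itself check the syntax and dump its virtual-host setup. From a command prompt on the XAMPP machine, something along these lines (the httpd.exe path is the usual XAMPP layout and may differ on your install):

        C:\xampp\apache\bin\httpd.exe -t
        C:\xampp\apache\bin\httpd.exe -S

    Apache on Windows is generally happier with forward slashes in paths, and a trailing backslash right before a closing quote (as in "C:\xampp\htdocs2\") is a common source of vhost syntax errors, so that is worth checking if -t reports a problem.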

    Read the article

  • Cisco 6500 crash when enabling NetFlow

    - by bleomycin
    Hello everyone, I have a Cisco 6503 running IOS 12.2(33)SXI5 and I'm trying to enable NetFlow, following the instructions here: http://www.manageengine.com/products/netflow/help/cisco-netflow/cisco-ios-netflow.html After enabling it for interface VLAN 3, shortly after issuing ip flow-export version 5 the console outputs: CPU_MONITOR-6-NOT_HEARD: CPU monitor messages have not been heard for 30 seconds (crash log here: http://pastebin.com/Niv2H8xD). It then writes a crash log and reloads the router. Has anyone else experienced anything like this before? Here is my running config prior to adding the options from the link above: http://pastebin.com/AgNb1ahG Thank you for any help!

    Read the article

  • Losing WLAN connections but maintaining internet connections on Windows 7 Workgroup

    - by Di
    I have 4 computers, all running Windows 7, networked in a workgroup through a Billion 7404VGP-M wireless router. All drivers and firmware for the wireless adapters and the router are up to date. Windows Firewall and Defender are disabled, IPv6 is disconnected, and all machines run NOD32 antivirus software. Each has its own static IP address, 192.XXX.X.XXX. When I reset the router, all computers have Internet and LAN access for about 1 hour, and then they lose the LAN connection but maintain the Internet connection. Resetting the wireless adapters or restarting the computers does nothing to fix this, but resetting the router does. What is causing this and how do I fix it? Thanks, Di

    Read the article

  • prevent domain controller using wpad for windows update

    - by BeowulfNode42
    We have a 2012 domain controller in an environment where we are running a web proxy auto discovery (WPAD) setup for client devices, and that proxy server requires authentication. However, Windows Update does not support proxy servers that require authentication, so we want to prevent Windows Update on our servers from using the WPAD proxy settings. On a domain member server we can log in to the local administrator account (not the domain admin) and un-tick the "Auto detect proxy settings" option in IE's Internet Options, and that fixes the issue on those servers. But a domain controller does not have a local admin account, as that account is the domain admin account, and doing this for the domain admin account on the DC does not prevent it from using WPAD. Our whole purpose in running a proxy server that requires authentication is so we can identify what the users on our session-based remote desktop servers are doing on the internet. See this MS KB article for some info about Windows Update and proxy servers: "How the Windows Update client determines which proxy server to use to connect to the Windows Update Web site" - http://support.microsoft.com/kb/900935

    Read the article

  • Postfix SMTP server down on Ubuntu

    - by Paddington
    I have a Plesk server running Postfix on Ubuntu 10.04, and the SMTP service on port 25 is down. When I stop and then start Postfix, the service comes up for only a minute and then goes down again. I have checked the load on the server and it is low, as shown:

        top - 04:29:33 up 19 days, 3:25, 4 users, load average: 1.47, 1.78, 2.34
        Tasks: 936 total, 1 running, 935 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.7%us, 0.3%sy, 0.0%ni, 86.6%id, 11.7%wa, 0.6%hi, 0.1%si, 0.0%st
        Mem: 6110496k total, 6072988k used, 37508k free, 251244k buffers
        Swap: 12000544k total, 95264k used, 11905280k free, 4370432k cached

    IMAP clients are not experiencing a problem, and there are no issues with receiving email over either POP or IMAP. Only SMTP (port 25) is a problem. If I ask clients to use the submission port (587), messages are delivered. netstat -lnt shows the following, so it's not a port issue:

        tcp        0      0 0.0.0.0:25      0.0.0.0:*       LISTEN
        tcp        0      0 0.0.0.0:8443    0.0.0.0:*       LISTEN
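
    When Postfix dies shortly after starting, the fatal error is normally logged; a quick way to catch it (a generic sketch; log file names vary between setups):

        # Watch the mail log while restarting Postfix to catch the fatal error
        sudo tail -f /var/log/mail.log /var/log/mail.err &
        sudo /etc/init.d/postfix restart

        # Sanity-check the configuration and confirm something is listening on port 25
        sudo postfix check
        sudo netstat -lnpt | grep ':25'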

    Read the article

  • Is it possible to run two servers on one system?

    - by srikanth
    Hi, I have a small problem while trying to run the WAMP server. At present my system is already running an Apache server, and I have a PHP application for which I am trying to install WAMP, but the WAMP server will not start. I changed the WAMP server's port: in C:\wamp\bin\apache\Apache2.2.11\conf\ there is an httpd.conf file, and in it I changed the listen port and the host name to another port number, but it still does not work. Can anyone help me? Please...
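
    A likely first check is whether the already-running Apache is still holding the port WAMP is trying to bind. From a Windows command prompt (1234 below is a placeholder; use the PID that netstat actually reports):

        netstat -ano | findstr :80
        tasklist /FI "PID eq 1234"

    If port 80 is taken, point WAMP's Apache at a free port in httpd.conf, for example Listen 8080 together with ServerName localhost:8080, and then restart the WAMP Apache service.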

    Read the article

  • Windows Server 2008: force Kerberos setting

    - by ftiaronsem
    I am currently facing the problem that a Linux machine running Ubuntu 10.04 LTS with Samba and winbindd installed is unable to join a domain that is managed by a Windows 2008 DC. The Linux config is probably all right, since I have successfully used it at multiple sites running both 2008 and 2003 DCs. The error I get ("libads/kerberos.c: Join to domain is not valid. Client credentials have been revoked") indicates that there is a Kerberos problem. Normally the Linux box is supposed to authenticate via NTLM and is configured that way. The only reason I can imagine it would try Kerberos is that the DC is forcing it. Do you know whether there is any setting in the security policies of a Windows 2008 server that would completely block NTLM, forcing Kerberos? If so, where can I find this setting?

    Read the article

  • PHP shared extensions on Linux

    - by F21
    I am running Ubuntu Server 12.04 and prefer to compile PHP myself as opposed to installing it using apt-get. PHP is running as PHP-FPM. When compiling extensions, I can set it to be compiled as a shared extension using something like --with-bcmath=shared and so on. Are there any benefits to compiling the extensions as shared? I also noticed that the extensions are compiled into a pretty convoluted folder. On my system (my php prefix is /usr/local/php-5.4.9) the extensions end up in /usr/local/php-5.4.9/lib/php/extensions/no-debug-non-zts-20100525. Is there a global way to set a folder so that all shared extensions will be compiled in there? I understand that I can do something like --with-foobar=shared,/usr/local/foobar/ but having to set the extension folder for each shared extension is inefficient and error-prone.
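
    On the extension directory question, there is no single configure flag I can point to with certainty, but two things to try are the EXTENSION_DIR environment variable at configure time (an assumption worth verifying against ./configure --help for your PHP version) and the extension_dir setting in php.ini, which always takes effect at runtime. A sketch (/usr/local/php-extensions is just a placeholder path):

        # Build-time: suggest a default extension directory for the whole build
        EXTENSION_DIR=/usr/local/php-extensions \
        ./configure --prefix=/usr/local/php-5.4.9 --enable-fpm --with-bcmath=shared

        # Check what the compiled-in default actually ended up being
        /usr/local/php-5.4.9/bin/php-config --extension-dir

        # Run-time (php.ini): point PHP at the directory and load each shared extension
        #   extension_dir = /usr/local/php-extensions
        #   extension     = bcmath.so

    The main practical benefit of building extensions as shared objects is that they can be enabled or disabled per php.ini (or per FPM pool) without recompiling PHP; statically compiled extensions are always loaded.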

    Read the article

  • Why can't I change the domain when connecting to network shares in Windows 8?

    - by Nathan Osman
    I have a computer running Windows 8.1 Pro. I also have Ubuntu 12.04.3 Server running within Hyper-V on the same machine, and the Ubuntu server has Samba installed. Both the host and guest OS are connected to the workgroup WORKGROUP. When I open "Network" in Windows 8.1 and attempt to connect to the server (both by name and by IP address), I receive a credentials prompt, but entering the correct username and password for an account on the server fails. /etc/samba/smb.conf has the following share definitions set:

        [homes]
        comment = Home Directories
        browseable = no
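
    One thing worth ruling out (an assumption, since the excerpt doesn't show it) is that the Unix account has no separate Samba password; Samba keeps its own password database. A quick sketch of checks on the Ubuntu guest (youruser is a placeholder):

        # Validate smb.conf and show the effective share definitions
        testparm -s

        # Give the Unix user a Samba password (stored separately from the login password)
        sudo smbpasswd -a youruser

        # Test authentication locally before retrying from Windows
        smbclient -L //localhost -U youruser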

    Read the article

  • Internet Connection Sharing/FTP issues

    - by SirSkidmore
    I am currently using a Linux Mint desktop along with a Windows 8 netbook that shares its Internet connection with the desktop via Internet Connection Sharing. On my desktop I can't access FTP sites, but my laptop can, so I think it might be a port issue. I can ping the server from Mint, so I know it's up and running, but I can't access it via telnet. On my Windows 8 netbook I have every protocol checked, including FTP. Originally, the FTP server indicated that "Scotty" (my netbook) was hosting the service, so I tried entering the IP of my router, 192.168.1.1, to no avail. Any ideas?

    Read the article

  • Puppet file transfer slow

    - by Noodles
    I have a puppet master and slaves in different datacenters. The latency between them is ~40ms. When I run "puppet agent --test" on a slave to apply the latest manifest, it takes ~360 seconds to finish. After doing some digging I can see the main cause of the slowdown is file transfers: it seems to take ~10 seconds to transfer each file. The files are only small (configuration files), so I can't understand why they take so long. This is an example of a file in my manifest:

        file { "/etc/rsyncd.conf":
            owner  => "root",
            group  => "root",
            mode   => 644,
            source => "puppet:///files/rsyncd/rsyncd.conf",
        }

    Running puppet-profiler I see this:

        10.21s - File[/etc/rsyncd.conf]

    It also seems I cannot update more than one server at once using puppet; if I run two servers at the same time then puppet takes twice as long. I have changed the puppet master from using WEBrick to Mongrel, but this doesn't seem to help. This is making deploying changes painful. A simple config change can take an hour to roll out to all servers.
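
    A rough way to see where the time goes per resource (a sketch; these options exist on the Puppet 2.x/3.x agents of that era, but verify against your version):

        # Time a full run and log each resource as it is evaluated
        time puppet agent --test --evaltrace --summarize

        # For comparison, time a run restricted to file resources only
        time puppet agent --test --tags file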

    Read the article

  • My dedicated server keeps getting so slow that it fails to load the application

    - by server
    I have an application running on Windows Server 2008, with IIS 7.5, SQL Server 2008 and 4GB RAM, hosted at Brinkster. The problem is that every couple of days, at around the same 10,000 calls, the system becomes very slow and stops operating properly, and after 30 minutes of that it just fails to load. I try to access the server over Remote Desktop Connection but I can't get in. The only way I can get it working again is to call support at Brinkster and have them do a manual reboot of the server. After that it works well for some time, and then it crashes again. Support over there is not helping a lot.

    Read the article

  • How can I tell what user account is being used by a service to access a network share on a Windows 2008 server?

    - by Mike B
    I've got a third-party app/service running on a Windows 2003 SP2 server that is trying to fetch something from a network share on Windows 2008 box. Both boxes are members of an AD domain. For some reason, the app is complaining about having insufficient permissions to read/write to the store. The app itself doesn't have any special options for acting on the authority of another user account. It just asks for a UNC path. The service is running with a "log on as" setting of Local System account. I'd like to confirm what account it's using when trying to communicate with the network share. Conversely, I'd also like more details on if/why it's being rejected by the Windows 2008 network share. Are there server-side logs on 2008 that could tell me exactly why a connection attempt to a share was rejected?

    Read the article

  • MSSQL instance shuts down

    - by citronas
    I'm currently developing a new ASP.NET project hosted on a Windows Server 2008 R2 machine with an MSSQL 2008 Express database. I have three SQL instances (for different purposes) running, which currently each contain a single database. For apparently no reason, these instances tend to shut down after a few days. There might be little or no traffic to these instances, because there can be several days in a row when I can't develop. It has now happened several times that one or two of the three instances just shut down, so that I can't access the database without manually starting the instance again. I can't seem to find an event log entry for the shutdown, which is most likely because I only just enabled logging (why is the default setting off?). So the questions are:
    * Why does a SQL Server instance shut down? Is there such a thing as a "shut down instance after 3 days of inactivity" setting?
    * How can I ensure that the instances keep running 24/7?

    Read the article

  • When Citrix desktop disconnects SAP Client holds session and can't log back in

    - by Stradas
    We have a fairly large Citrix implementation and have just pushed out the SAP desktop client to all of the desktops. Everything else is working fine except for the following problem: if a user disconnects their session while it is running the SAP client (logging off works fine), the user cannot reconnect and log back in. We have a script on the server that terminates the session as a workaround. We can see on the server that it is the SAP client that is holding on and still running. This is at a large office, but the SAP servers are in another hemisphere. As is the custom, Citrix says it's SAP and SAP says it's Citrix. I don't like using a PowerShell script as a substitute for a system configuration solution.

    Read the article

  • How to get top command output to show rake arguments?

    - by wbharding
    In the past, all of our servers have automatically shown the command arguments passed to rake when we view them in top. But on this particular server we only get the program name (the screenshot shows top running with the rake command visible but none of the arguments that had been passed to rake). Both servers are running Ubuntu (though the server without rake arguments is a newer release). Both run rake through Ruby Enterprise Edition (via RVM). I can't seem to find any documentation on how top chooses what to show in the "command" column, other than the obvious "more data/less data" toggle (all screenshots were taken with the extra data enabled). Has anyone encountered anything similar to this?
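
    For what it's worth, procps top toggles between the program name and the full command line with the c key, and can be started that way directly; a quick sketch (<pid> is a placeholder):

        # Show full command lines (same as pressing 'c' inside a running top)
        top -c

        # Or read one process's argument vector straight from procfs
        tr '\0' ' ' < /proc/<pid>/cmdline; echo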

    Read the article

  • Quitting dozens of the same process in OS X Terminal

    - by Artur Sapek
    Whenever I'm testing a Python class I'm working on, I start and restart python a lot to pick up the changes I make to the code. When I close the Terminal window later, I get a dialog saying I am about to quit a LOT of running instances of python. Is this a bug on Terminal's part, or am I really running all of those? I Ctrl-Z out of the interpreter each time, and it always says something like "[8]+ Stopped python", where the 8 keeps incrementing and often gets into the 20s and 30s. Am I doing something stupid?
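
    Ctrl-Z only suspends the interpreter, so each suspended copy stays alive in that shell's job table; exiting Python with Ctrl-D (or exit()) avoids the buildup. To see and clean up the ones already there, a quick sketch:

        # List the suspended interpreters in the current shell
        jobs -l

        # Bring one back to the foreground and exit it cleanly (Ctrl-D or exit())
        fg %8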

    Read the article

  • Why does 1080p through a VGA cable fit my HDTV but is oversized when through an HDMI cable?

    - by GraemeF
    I have put together a new PC with a XFX GeForce GTX 260 graphics card and have it connected to my HDTV. First, I used an old VGA cable with a DVI to VGA adapter and plugged it in to my HDTV's VGA port. Running at 1920x1080 it fit the screen perfectly. Now, to avoid running another cable across the room, I have connected it with a DVI to HDMI cable to my TV's HDMI port, and the desktop at 1920x1080 is cropped by the edge of the screen. I have "fixed" the cropping by using NVIDIA's "Adjust desktop size and position" tool, which created a screen resolution of 1814x1022 to fit the screen, but this is no longer the TV's native resolution and confuses some software (e.g. WoW). Why does VGA work as expected, but HDMI is scaled up? Can it be avoided?

    Read the article

  • LAMP stack security question - uploading files to server

    - by morpheous
    I am running Ubuntu 9.10 desktop on my home machine. I need to upload files from my local machine to my web server on a periodic basis. The server is running Ubuntu Server LTS. I want the server to be secure and to run only the LAMP stack and possibly an email server. Ideally, I do not want to run FTP or anything else that could let (more) knowledgeable attackers into the server. Can anyone recommend how I can send files from my local machine to the server? This may seem an easy/trivial question, but I am relatively new to Linux - and I had my previous Windows server machine seriously hacked in the past, hence the move to Linux, and that's why I am so security conscious.
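
    Since the server presumably already runs SSH for administration, copying over SSH avoids adding an FTP daemon at all. A minimal sketch (user, host, and paths are placeholders):

        # One-off copy
        scp -r ~/site/ user@yourserver:/var/www/

        # Or keep a directory in sync; only changed files are transferred
        rsync -avz -e ssh ~/site/ user@yourserver:/var/www/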

    Read the article

  • using flock in bash, why does killing a child process kill the parent process too?

    - by Robby
    In the code snippet below, I want the script to be limited to only running one copy at a time, and for it to restart server.x if it dies for any reason. Without flock involved, the loop correctly restarts if I kill the server process, but once I use flock to ensure the script is only running once, if I kill server.x it also kills the parent process. How can I ensure that killing the child process in a flock script keeps the parent around?

        #!/bin/bash
        set -e
        (
            flock -x -n 200
            while true
            do
                ./server.x $1
            done
        ) 200>/var/lock/.m_rst.$1.lock
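
    One plausible interaction worth ruling out (an assumption, not something stated in the question) is set -e: when server.x is killed it exits non-zero, errexit aborts the subshell, and the failing subshell in turn aborts the parent script. A sketch with that interaction removed:

        #!/bin/bash
        (
            # Give up immediately if another copy already holds the lock
            flock -x -n 200 || exit 1
            while true
            do
                # Tolerate a non-zero exit so the loop keeps running after a kill
                ./server.x "$1" || true
                sleep 1   # avoid a tight respawn loop if server.x dies instantly
            done
        ) 200>/var/lock/.m_rst."$1".lock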

    Read the article

  • Installing the Perforce visual client (P4V) on Linux

    - by Manish
    I come from a Mac background and am trying my hand at installing the Perforce visual client (P4V) on my Linux box. For this I downloaded the correct version here and untarred the files. Then I cd into the directory ~/Desktop/p4v-2012-blah-blah/bin and also run chmod +x p4*. After this I try running p4v (by double-clicking) but I don't see anything. The file type is shown as a "text executable", but I don't know why it is not running. On the Mac I had done the same thing - just clicked on p4v and the client would show up (where I filled in the server address and everything) - but I'm not sure what is going wrong here. Can someone give me directions? FWIW, I did check out this link.
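
    Launching it from a terminal usually surfaces the real error (missing shared libraries are a common cause of a silent failure); a quick sketch:

        # Run the launcher in the foreground so any error message is visible
        cd ~/Desktop/p4v-2012-blah-blah/bin
        ./p4v

        # If it exits silently, see whether it is a script or a binary, and
        # check for unresolved libraries (exact file names inside the package may differ)
        file ./p4v
        ldd ./p4v 2>&1 | grep 'not found'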

    Read the article

  • Linux process management

    - by tanascius
    Hello, I started a long-running background process (dd reading from /dev/urandom) in my SSH console. Later I had to disconnect. When I logged in again (this time directly, without SSH), the process still seemed to be running. I am not sure what happened - I did not use disown. When I logged in later, the process was not listed in top at first, but after a while it reclaimed a high CPU percentage, as I expected, so I assume dd is still running. Now I'd like to see its progress. I use kill -USR1 <pid> but nothing is printed. Is there any way to get the output again?
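
    dd writes its USR1 statistics to its own stderr, which still points at the terminal of the dead SSH session, so the output has nowhere visible to go. Progress can be estimated from procfs instead (a sketch; it assumes a single dd process, and fd 1 as the output descriptor is an assumption - check the ls output first):

        pid=$(pgrep -x dd)

        # See which files the descriptors point at
        ls -l /proc/$pid/fd

        # Current byte offset within a given descriptor
        cat /proc/$pid/fdinfo/1

        # Cumulative bytes read and written by the process
        cat /proc/$pid/io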

    Read the article

  • FTP transfer hangs for random files

    - by hoffmandirt
    I've been stuck on this FTP issue for a while now. I have IIS 7 set up with an IIS 6 FTP server running on a Windows Server 2008 box. The problem I am running into is that I can't download certain files from the FTP server, even though I uploaded those files to it myself. The connection times out after 120 seconds. I have used Wireshark and checked the log files, and the only message I see is the timeout message. The first thing that came to mind was a permissions issue; however, I have probably tried every combination of permissions that I can think of, with the end goal of making the permissions identical for the files that work and the files that do not. With the list of files I have now, I can download the zip, war, and msi files, but not the txt or sql files. It almost seems like a binary thing, but I've changed the transfer mode on the FTP client and also toggled the Active/Passive options around.
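
    It may help to take the usual FTP client out of the picture by fetching one good and one bad file with a bare command-line client and toggling passive versus active mode explicitly (host, credentials, and paths are placeholders):

        # curl transfers FTP files in binary mode and passive mode by default
        curl -v -o works.zip ftp://user:pass@ftphost/path/works.zip
        curl -v -o fails.sql ftp://user:pass@ftphost/path/fails.sql

        # Retry the failing file using active (PORT) mode for comparison
        curl -v -P - -o fails.sql ftp://user:pass@ftphost/path/fails.sql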

    Read the article

  • Detach the current session and attach to another session with one script - can I?

    - by Jimm Chen
    After reading the somewhat vague official documentation for GNU screen (http://www.gnu.org/software/screen/manual/screen.html) and asking quite a few questions on this site, I still cannot figure out how to accomplish the following task with a shell script. It takes a few words to describe. Assume I'm using PuTTY to telnet into my Linux server.

    STEP 1: Launch two telnet connections. From PuTTY window 1 (PTWIN1), telnet into a Linux Bash shell, execute screen -RR to launch a screen session, and get the session name 21385.pts-4.linux-ic37. From PuTTY window 2 (PTWIN2), do the same as in PTWIN1, but this time I get the session name 22041.pts-9.linux-ic37. Now we have two screen sessions running simultaneously. We can check this:

        $ screen -ls
        There are screens on:
                22041.pts-9.linux-ic37  (Attached)
                21385.pts-4.linux-ic37  (Attached)
        2 Sockets in /var/run/uscreens/S-chj2.

    STEP 2: Assume that for some reason PTWIN1's TCP connection is lost abnormally (but the server doesn't know that), and an urgent piece of work is pending in session 21385 that I want to quickly regain control of. Fortunately, we know the 21385 session is still there, so I want to have PTWIN2 attach to session 21385. Because I hate having to remember screen's esoteric options all the time, I decide to write a script called sttach. I hope that sttach 21385.pts-4.linux-ic37 can let me attach to session 21385 (from PTWIN2). Now, let's say sttach works well and I take control of 21385 in PTWIN2.

    STEP 3: Some minutes later, I want to go back to work in session 22041. Here, please allow me to have PTWIN2 remain associated with session 21385. What I would like to do is launch another PuTTY window (PTWIN3), telnet into the server, and execute sttach 22041.pts-9.linux-ic37 in the hope that I can resume session 22041 in PTWIN3. You can see the benefit of sttach: as long as I know the target session name, I can call it to have my PuTTY window switch to that session, regardless of whether the target session is "(Attached)" or "(Detached)", and regardless of whether the calling context is inside a screen session or not.

    Now the question: how do I write the (Bash) script sttach? I mean, run screen with the appropriate options in sttach to accomplish the goal. Waiting for your kind answer. Thank you.

    My previous questions regarding GNU screen:
    GNU screen, how to get current sessionname programmatically
    Is it possible to change GNU screen session name after created?
    How do I know I'm running inside a linux "screen" or not?

    My env: openSUSE Linux 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06
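
    A minimal sketch of sttach for the simple case, assuming the calling shell is not already inside a screen session:

        #!/bin/bash
        # sttach - attach this terminal to the named screen session,
        # detaching it from wherever else it is currently attached.
        # Usage: sttach <session-name>
        target="$1"
        if [ -z "$target" ]; then
            echo "usage: $0 <session-name>" >&2
            exit 1
        fi
        # -d -R: detach the session elsewhere if necessary, then reattach it here
        exec screen -d -R "$target"

    Run from inside an existing screen session, this nests a new screen rather than switching the outer window, so the "switch from within screen" part of the requirement in STEP 3 still needs its own handling.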

    Read the article

  • Snap Server 18000 connection help!

    - by sicko666
    I wonder if anyone here can help me. I have a home server setup made up of old secondhand computers: two servers running Windows Server 2003, one workstation running Windows 7, a 16-port switch and an ADSL Ethernet modem. These all connect and talk to each other fine, but then I got a "Snap Server 18000" and a "Snap Disk 30sa" SATA array. When I turn the Snap on, it boots past the BIOS, runs a kernel, then displays: "This device cannot be managed via the video/kbd/mouse interface. The video is now disabled. You may access the management functions from your web browser." Only, none of the other PCs detect it, so no browser can find it! I have checked all the cables, and all the LEDs indicate there's a connection. I have installed the Windows "iscsi" initiator and the Adaptec "Snap Server Manager" on all the PCs, but it's still not detected. I don't know what else to do, please advise!

    Read the article
