Search Results

Search found 15209 results on 609 pages for 'configuration'.

  • How should I configure my Apache hosts file to serve a different site for localhost than for my domain/public IP?

    - by rofls
    I'm trying to test out a LAMP (with PHP5 specifically) setup with Django already serving a website. I want to do the PHP stuff on localhost for now, so that when I do something like curl http://localhost/database/script.php?var=1, I get a response from the PHP server. Right now I'm getting a Django error. I tried something like this in the default file in sites-available:

        Listen 80
        <VirtualHost aaa.bbb.ccc.ddd>
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

    where aaa.bbb.ccc.ddd is the local IP address, and changing my actual site's settings to specify the public IP, like this:

        Listen 80
        <VirtualHost www.xxx.yyy.zzz>
            ServerName mysite.com
            DocumentRoot /srv/www/mysite
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>

    but then I start getting all kinds of errors when I start Apache, such as port ::[80] is already in use. I noticed that the hosts file that's located in /etc/apache2/ is apparently pointing everything to mysite.com, including my local IP as well as 127.0.0.1 and 127.0.1.1; do I need to change the configuration there too?
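    A minimal sketch of the name-based virtual host setup this calls for - both vhosts bind to *:80 and Apache picks one by ServerName, so the duplicated Listen directives (the likely source of the "port 80 already in use" errors) go away. Paths and names are taken from the question; the NameVirtualHost line is assumed for Apache 2.2:

        # Listen 80 belongs in ports.conf on Ubuntu and must appear only once
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot /srv/www/mysite
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>

    With this, curl against localhost hits the PHP DocumentRoot while requests for mysite.com still reach Django, since name-based selection goes by the Host header rather than by IP address.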

  • Set up shared internet connection on VirtualBox with fixed IP

    - by Tom
    I am a web developer and until recently I have been using Ubuntu as my OS. For many reasons, I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I have a little struggle with the networking. Since I am working in different places and going around to clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on my host and a static IP on my guest. That way I can access the server from my host (by adding a record to the hosts file). I also have an internet connection on the guest. But once I change networks, it does not work (assuming the new network has a different configuration). My question is: how do I set up host-guest networking so that, no matter what network I connect to, I can keep the static IP on the guest, which is registered in the hosts file on my host, so I can access the webserver and also have an internet connection on the guest? Hope it makes sense. Thank you
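    A common way to get this with VirtualBox (a sketch, assuming the VM is named "devserver" and a host-only network is available) is to give the guest two adapters: NAT for internet access, which follows whatever network the host happens to be on, and host-only for a host-to-guest address that never changes:

        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1
        VBoxManage modifyvm "devserver" --nic1 nat
        VBoxManage modifyvm "devserver" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"

    Inside the guest, the second interface gets a static address such as 192.168.56.10, and that is the address to register in the host's hosts file; it stays reachable no matter which client network the laptop joins.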

  • Install mod_perl2 on Apache 2.2.14 (Ubuntu 10.04)

    - by MICADO
    Hi guys, I have installed the libapache2-mod-perl2 package via Synaptic. I tried this line in httpd.conf:

        LoadModule perl_module modules/mod_perl.so

    Apache tells me when I reload the server: "[warn] module perl_module is already loaded, skipping". Well, OK, but when I browse to a directory I don't have access to, Apache sends me the error:

        Forbidden
        You don't have permission to access /cgi-bin/ on this server.
        Apache/2.2.14 (Ubuntu) Server at 192.168.0.10 Port 90

    The server signature should show that mod_perl is installed, and that's not the case... I would like the virtual host that follows to run with mod_perl2:

        <VirtualHost v1:80>
            ServerAdmin webmaster@localhost
            ServerName v1
            DocumentRoot /var/www/v1
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/v1/html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /var/www/v1/cgi-bin/
            <Directory "/var/www/v1/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I'd like to know how to configure mod_perl2. Do I have to change something in the Apache configuration file to make my cgi-bin directory work with mod_perl2? Thanks for any help!
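    For reference, a minimal sketch of what running those scripts under mod_perl2 usually looks like - ModPerl::Registry is the stock way to run CGI-style scripts under mod_perl, and this <Directory> block would replace the plain-CGI one above (untested against this exact vhost):

        Alias /cgi-bin/ /var/www/v1/cgi-bin/
        <Directory "/var/www/v1/cgi-bin">
            SetHandler perl-script
            PerlResponseHandler ModPerl::Registry
            PerlOptions +ParseHeaders
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    Note that the Ubuntu package loads the module via /etc/apache2/mods-enabled/, so the manual LoadModule line in httpd.conf is redundant - which is exactly what the "already loaded, skipping" warning is saying.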

  • How to change the password on a RAR archive w/o modifying archived files' attributes (modified/created)?

    - by Larry78
    How do I change the password of a .RAR archive without changing the date/time attributes of the files in the archive? Unfortunately you can't directly change the password of the archive with WinRAR; you have to extract the files and then make a new archive with the new password, so the created/modified attributes of the files in the archive get changed. I know you can manually change the attributes of a file with available utilities - but there are hundreds of files in the archive, each with unique attributes, so it would take a very long time to "fix" each file before re-archiving it. I'm using WinRAR 3.51, the last free version, on Windows XP Pro SP3.

    Update: I don't care if the output is a .RAR file or a ZIP file. IZArc 4.1 will convert the RAR to a ZIP, and it keeps the dates. The problem is that it compresses the files - there isn't a "store" option, and setting the default to store in the main configuration doesn't affect conversions. The RAR contains uncompressed files. None of the other archiving programs will even do a conversion. A couple claim to, or try to, but the errors returned indicate a very lousy application. So far I've tried PeaZip, 7-Zip, FilZip, TugZip, SimplyZipSE, QuickZip, and WinShrink (from downloads.cnet.com). WinRAR gives the error "skipping encrypted archive" when I try the conversion. It asks for the password first, and I know it's right, as I opened the archive and can read/view all the files in it. It works on non-encrypted files.
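    If the command-line rar.exe is available, one hedged approach is to do the round trip with RAR's own time switches, which store and restore modification, creation, and access times in the archive (a sketch - verify the exact -ts syntax for WinRAR 3.51 with rar /?; -m0 means store without compression):

        rar x -pOLDPASS old.rar extracted\
        rar a -pNEWPASS -m0 -tsm -tsc -tsa new.rar extracted\*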

  • Serve web application error messages from HTTP server

    - by licorna
    I have nginx as an HTTP server with Tomcat as a backend (using proxy_pass). It works great, but I want to define my own error pages (404, 500, etc.) and have them served by nginx and not Tomcat. For example I have the following resource: https://domain.com/resource, which doesn't exist. If I [GET] that URL then I get a Not Found message from Tomcat and not from nginx. What I want is that every time Tomcat responds with a 404 (or any other error), nginx itself sends a message to the user: some HTML file accessible by nginx. The way I have my nginx server configured is very simple, just:

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
        }

    And I've configured port 8080, which is Tomcat, as not accessible from outside this machine. I don't think that using different location directives in the nginx configuration will work, because there are some resources that depend on the URL: [GET] https://domain.com/customer/<non-existent-customer-name>/ will always return 404 (or another error), while [GET] https://domain.com/customer/<existent-customer>/ will return something other than 404 (the customer exists). Is there any way of serving Tomcat (application server) error messages with nginx (HTTP server)? To check the status code returned through the proxy_pass directive and act upon it?
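    nginx has a directive for exactly this case: proxy_intercept_errors tells nginx to intercept any upstream response with a status code of 300 or higher and serve its own error_page instead, regardless of which URL produced it. A sketch (the error page paths are assumptions):

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
            proxy_intercept_errors on;
        }

        error_page 404 /errors/404.html;
        error_page 500 502 503 504 /errors/50x.html;

        location /errors/ {
            root /var/www/static;   # assumed location of the custom pages
            internal;               # not directly requestable from outside
        }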

  • Apache APC (Windows): Can I optimize these APC settings more?

    - by ar099968
    I would like to optimize APC some more, but I am not sure where I could do something. First, here are the stats after 1 week of running with the current configuration:

        General Cache Information
            APC Version            3.1.9
            PHP Version            5.4.4
            APC Host               XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
            Server Software        Apache
            Shared Memory          1 Segment(s) with 128.0 MBytes
                                   (IPC shared memory, Windows Slim RWLOCK (native) locking)
            Start Time             2014/06/08 05:00:00
            Uptime                 6 days, 11 hours and 55 minutes
            File Upload Support    1

        Host Status Diagrams
            Memory Usage           Free: 99.7 MBytes (77.9%)   Used: 28.3 MBytes (22.1%)
            Hits & Misses          Hits: 510818 (99.9%)        Misses: 608 (0.1%)
            Fragmentation          0.60% (609.8 KBytes out of 99.7 MBytes in 83 fragments)

        File Cache Information
            Cached Files                 693 (35.4 MBytes)
            Hits                         5143359
            Misses                       1087
            Request Rate (hits, misses)  13.24 cache requests/second
            Hit Rate                     13.24 cache requests/second
            Miss Rate                    0.00 cache requests/second
            Insert Rate                  0.01 cache requests/second
            Cache full count             0

        User Cache Information
            Cached Variables             0 (0.0 Bytes)
            Hits                         0
            Misses                       0
            Request Rate (hits, misses)  0.00 cache requests/second
            Hit Rate                     0.00 cache requests/second
            Miss Rate                    0.00 cache requests/second
            Insert Rate                  0.00 cache requests/second
            Cache full count             0

        Runtime Settings
            apc.cache_by_default        1
            apc.canonicalize            1
            apc.coredump_unmap          0
            apc.enable_cli              0
            apc.enabled                 1
            apc.file_md5                0
            apc.file_update_protection  2
            apc.filters                 -/apc.php$, -/apc_clean.php$, -.tpl.cache.php$,
                                        -.tpl.php$, -.string.cache.php$, -.string.php$
            apc.gc_ttl                  3600
            apc.include_once_override   0
            apc.lazy_classes            0
            apc.lazy_functions          0
            apc.max_file_size           2M
            apc.num_files_hint          7000
            apc.preload_path
            apc.report_autofilter       0
            apc.rfc1867                 0
            apc.rfc1867_freq            0
            apc.rfc1867_name            APC_UPLOAD_PROGRESS
            apc.rfc1867_prefix          upload_
            apc.rfc1867_ttl             3600
            apc.serializer              default
            apc.shm_segments            1
            apc.shm_size                128M
            apc.shm_strings_buffer      4M
            apc.slam_defense            0
            apc.stat                    1
            apc.stat_ctime              0
            apc.ttl                     7200
            apc.use_request_time        1
            apc.user_entries_hint       4096
            apc.user_ttl                7200
            apc.write_lock              1
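    Reading those numbers: the hit rate is already 99.9%, the user cache is completely unused, and only ~28 of the 128 MBytes are occupied, so there is little headroom left to exploit. Two hedged tweaks, assuming the PHP files on this host only change on deployments:

        apc.stat = 0        ; skip the per-request stat() of every file; requires
                            ; clearing the cache (or restarting Apache) on deploys
        apc.shm_size = 64M  ; the cache holds ~28 MBytes, so a smaller segment
                            ; suffices and returns memory to the system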

  • Ubuntu + lighttpd: Server requests taking ages

    - by ctrl_freak
    I've had an issue since upgrading my distro a couple of weeks ago from Hardy: receiving data after making a request stalls for increasing intervals of nothing, as you can see from the picture below. http://i49.tinypic.com/2w5lvr9.png I have since reinstalled fresh from an Ubuntu 10.04 Server (i386) disk, but am still having the same issues. I'm running a lighttpd, MySQL, PHP5 stack. The surprising thing is that local browsing using lynx is super fast, as expected. Initially, after reinstalling, I copied over the old configuration files from the previous installation, but have since reinstalled lighttpd and rebuilt the config file from scratch. The only correlation I could find was that I attempted installation of ionCube and Zend Optimizer for a script I was testing; however, I would think that they could no longer have an impact seeing as I had reinstalled the OS. I have also removed Suhosin just in case, but it had no impact. I'm thinking it possibly has something to do with networking, but I wouldn't know where to start. The server is manually assigned an IP by its MAC address on the router. The fact that the time seems to be exponential (to a point) worries me. I've tried strace'ing the lighttpd and MySQL processes, but I couldn't see anything obvious - not that I'd really know what I'm looking for. RAM and CPU usage don't seem to be out of the ordinary, but I can't say it's perfect. I'm hoping someone has experienced the same, or can point me in a direction, as searching has proved fruitless since I don't know anything specific. Config files can be posted if requested.
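    One generic way to narrow down where the stalls happen is curl's timing breakdown, which separates DNS lookup, TCP connect, time to first byte, and total time ("your-server" is a placeholder):

        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://your-server/

    If connect is fast but the first byte is slow, the delay is inside the lighttpd/PHP/MySQL stack; if connect itself stalls, it points at the network path, which would match lynx on localhost being fast.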

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem, shortly: I can successfully ssh to it from machines in the same LAN (192.168.1.0) but not from others (in 10.23.0.0). The SuSE has SSH server openssh-4.2p1-18.12. I have ruled out the firewall and the hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say. On the client:

        $ ssh -vvv 192.168.1.5
        OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22.
        debug1: Connection established.
        debug1: identity file /home/nbuild/.ssh/identity type -1
        debug1: identity file /home/nbuild/.ssh/identity-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1

    On the server:

        Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK
        Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739.
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3
        Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address
        Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340

    The above log on the server is with DEBUG3 log level enabled. However, with the default log level (INFO), the only thing the server logs is this:

        Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11

    Any hints? I feel I've tried everything already.
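    The TCP connection clearly completes (the client prints "Connection established" and the server logs the connection), yet the SSH identification banner never crosses the link, which points at something on the inter-subnet path (MTU/fragmentation, a transparent proxy, or an IDS) rather than at sshd itself. Two hedged checks from a 10.23.0.0 machine:

        # the SSH banner should print immediately; if this hangs, the problem
        # is in the network path, not in sshd's configuration
        nc 192.168.1.5 22

        # probe the path MTU with non-fragmentable packets (1400 is an example)
        ping -M do -s 1400 192.168.1.5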

  • Disable all the idiot-checking in Mac OS X

    - by Fake Name
    I am a Windows/Linux user who is learning Mac OS X out of interest in doing dev work for the iPad, which I recently purchased. However, OS X is driving me nuts by trying to protect all its system files, hiding all of the important OS components I want to tweak, and generally making it impossible to do any modification to the OS to make it more usable. Therefore, is there a way to turn off all the idiot-checking in Finder? On XP, I can disable "Hide protected operating system files" and set "Show hidden files". On Linux, there really aren't many hidden files, and changing the configuration for dotfiles is easy enough in GNOME and XFCE. How can I set up OS X in a similar way? I am not new to computers, and I am fully aware that deleting system files can damage or even irreparably disable an OS install. Therefore, if I intentionally try to delete a file or move something, it's probably intentional, and I am willing to accept the consequences in any case. At this point, I have fallen back to doing everything through the command line (which takes forever), because Finder is practically unusable. (As for what I am attempting to do, I also asked about GUI changes here.)
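    For the Finder side, the usual switches on OS X of that era (Finder has to be relaunched for the first one to take effect):

        # show hidden files and dotfiles in Finder
        defaults write com.apple.finder AppleShowAllFiles TRUE
        killall Finder

        # per-file: clear the "hidden" flag the system sets on some paths
        sudo chflags nohidden /path/to/file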

  • Outlook 2010 Crashing Unpredictably

    - by cbkadel
    Very often when I open up Outlook 2010 and start doing things in it, it will hang and become unresponsive. I have tried letting it finish, but it never comes back (up to 20 minutes of letting it try). I generally have to restart Outlook and try again. Usually after about an hour of doing this, Outlook somehow snaps out of it and works for the rest of the day. It's generally in the morning (though I doubt that's the key variable). Generally, the emails that cause problems are HTML/formatted, but not always.

    What I've done so far to troubleshoot:

    - installed the latest Outlook hotfix (I think Dec 14, 2010)
    - started Outlook in safe mode

    Neither of those steps seems to make a difference. Usually, after about 10-15 restarts of Outlook on any given day, it starts working thereafter. My next step is to uninstall/reinstall Office 2010, but I'm hoping someone has seen this and knows what to do about it. My configuration is like this:

    - Microsoft Online Services (using Microsoft's Sign In app), connecting to Exchange
    - two other Exchange accounts in this profile (new feature in 2010) connected through Outlook Anywhere
    - Live Meeting conferencing add-in
    - the People tab/add-in disabled
    - the "Send to Bluetooth" add-in disabled

    Not sure what else to do?
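    Before reinstalling, a few hedged resets that often clear hang-on-open problems; these are documented Outlook command-line switches (Start > Run):

        outlook.exe /cleanviews     - resets corrupted folder views
        outlook.exe /resetnavpane   - rebuilds the navigation pane
        outlook.exe /safe           - no add-ins (already tried above)

    It may also be worth running scanpst.exe (the Inbox Repair Tool that ships with Office) against the local .pst/.ost files, since damaged local stores produce exactly this kind of intermittent hang.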

  • Can next hop address be same as destination address?

    - by Raj
    Like if the host address is 100.0.0.1, the next hop address is 100.0.0.2, and the destination IP address is also 100.0.0.2 - is this a valid use case? Any real-life usage?

                 <dest ip>                 <next hop>
        ip route 100.0.0.2 255.255.255.255 100.0.0.2 weight 1 next-hop-vrf GlobalRouter

    Above is the command on a router inside a VRF. 100.0.0.2 is pingable from the host. 100.0.0.1 and 100.0.0.2 are IP addresses assigned to a VLAN on the host and destination respectively. On a Linux box, such a configuration is valid:

        [root]# netstat -r -n
        Kernel IP routing table
        Destination   Gateway       Genmask          Flags   MSS Window  irtt Iface
        55.55.55.55   55.55.55.55   255.255.255.255  UGH       0 0          0 eth0

        [root]# ip route show
        55.55.55.55 via 55.55.55.55 dev eth0

    As per my understanding, if a destination IP is reachable (i.e. in the same subnet as the host IP) we don't need a next hop. I came across one application of using a next hop for a destination IP in the same subnet (i.e. for VPN). See this: Will packets sent to the same subnet go through routers? If next hop != destination IP but they are in the same subnet as the host, that is a valid scenario for VPN; so I am wondering, what are the applications of next_hop == dest_ip with the subnet the same as the host's? This is my first post on Super User. Extremely happy with the quick and warm response.

  • Windows 2008 R2 DNS can't resolve its own SOA

    - by user46742
    We have two domain controllers for our network. They both run DHCP, DNS, and AD DS. They are both VMs sitting on MS Hyper-V Server 2008 on separate physical hosts. We had our primary DC go down a week ago. I upgraded an already existing VM to primary DC and built a new VM for the secondary. Both DNS servers are running, and the SOA is configured correctly for primary DC 1. However, when I run the Best Practices Analyzer, it states the server cannot resolve its own SOA and says to check the configuration in the adapter. I checked, and they are configured properly. I also went through the DNS entries thoroughly and made sure there were no records of the previous DC that went down. NSLOOKUP resolves the domain and primary DC fine. I also checked the firewalls on the machines and our physical firewall for any denied packets. Any suggestions? I appreciate any help!
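    Three hedged checks that usually pin down SOA resolution problems on a 2008 R2 DC (replace example.local with the real zone name):

        nslookup -type=SOA example.local 127.0.0.1
        dcdiag /test:dns /v
        ipconfig /registerdns

    The first shows which server the zone's SOA actually names; if it still names the DC that went down, the SOA record needs correcting even though the rest of the zone looks clean. The BPA warning also commonly fires when the adapter's DNS list points only at the partner DC instead of including the server itself.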

  • Windows 7 Ultimate 64-bit won't connect to my wired/wireless networks.

    - by A302
    Windows 7 Ultimate, 64-bit. Everything was working fine and then just stopped working. The NIC (Realtek PCIe GBE Family Controller) is enabled but does not connect to my router (cables and router ports are good). The wireless card (Atheros AR5007EG) is enabled, but the connection is limited (encryption type/key have been verified). A laptop running XP can connect both wired and wireless. The SSID is not being broadcast; "connect to network if it is not broadcasting" is checked. I have checked services.msc for Bonjour and did not see it listed. Network and Sharing Center does not list any active networks. Device Manager lists both devices as functioning properly. The router configuration has not been changed. A virus scan has not found anything. I would like to fix this rather than using Acronis to do a system restore. Thanks in advance for any advice offered in solving this.

    26 Jan: the NIC and wireless are working using a PCLinuxOS live CD. It appears that the problem is Windows 7 related.
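    Since the hardware demonstrably works under a live CD, the Windows network stack itself is the prime suspect. The standard hedged reset sequence, from an elevated command prompt, followed by a reboot:

        netsh winsock reset
        netsh int ip reset
        ipconfig /release
        ipconfig /renew
        ipconfig /flushdns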

  • Linux on MacBook Air

    - by enduser
    I'm thinking of getting a MacBook Air. The answers to this post will help me make my decision. My questions, and my understanding of the current solutions, are:

    - How difficult is it to install a Linux-based OS (like Fedora or Ubuntu)? I've heard a little about rEFIt, but am not sure what to make of it. Is it completely necessary? Do I still need it if I don't plan to dual boot with Mac OS X? A dual boot isn't necessary; I'd just like to run Fedora/Ubuntu by itself, but I'm curious to know if a dual boot is simple.
    - Does everything 'just work'? In my current laptop I need to add a wireless driver (Broadcom card). I've heard Macs use Broadcom wireless cards. Will this be an issue? How about graphics/touchpad (& multitouch)/sound?

    I'm aware there are tutorials out there on how to install some older version of some OS on your Mac, but my questions are a bit more general: will it be easy to use (install and configure drivers for) recent Linux distributions with a new MacBook Air? Note: I don't mind extra configuration, but would like to know where it'll be necessary, because if it's too much of a hassle I'll look at other hardware.

  • Windows XP - Power surge on hub port

    - by Swift-Tuttle
    Hi, for the last few weeks I constantly get this error as a status bar balloon: "Power surge on hub port - A USB device has exceeded the power limits of its hub port." Because of this I am now unable to access any USB devices properly; they get disconnected intermittently. I did quite a few things to resolve this problem, firstly, obviously, through Windows help. I even tried all the things suggested on the Microsoft website (which essentially say to check and update the driver), but in vain. One suggestion I found when I googled was to disable the USB2 controller through Device Manager, but since doing that, System Configuration comes up at every startup complaining that it has been changed, etc. (on that same site it is mentioned to ignore this message). But after everything I still can't solve this problem. Any help is much appreciated. The system has Windows XP Service Pack 3 installed and all the updates up to last month. Please let me know if any other hardware info is required.

    UPDATE: My laptop is about 5 years old now; it's an HP with a Celeron 1.4G processor, Windows XP SP3 and all the latest Windows updates installed, and 2 USB ports. The BIOS is HP 68DTD ver F.0A. Do I need to update my BIOS from somewhere, or is this a hardware problem altogether?

  • Problem creating ODBC connection to SQL Server 2008 with Vista

    - by earlz
    Well, I'm trying to get a database schema thing working. First I tried just doing it in Linux, where I'm more comfortable, but ODBC seems to be a hack there and I couldn't get it to work. So I figured it shouldn't be too hard in Windows. OK, so I created a SQL Server client alias so that I can simply use the name windowsserver to refer to my SQL server. Then, I went to the ODBC configuration in Control Panel. I clicked Add in the User DSN section. I chose SQL Server Native Client 10.0, and then clicked Next. Then I typed a short name and a description and gave the server name as windowsserver/SQLEXPRESS. Then I click Next, give it my user name and password, and click Next. Then, after like 2 minutes, it says "Login Timeout Expired". What can be wrong here? I know the server is configured, because I have SQL Server Management Studio opened up with that server in it. I'm also just trying to connect over regular TCP/IP, and my firewall is disabled.
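    One thing worth verifying as an assumption: SQL Server named instances are normally written with a backslash (windowsserver\SQLEXPRESS), not a forward slash, and SQL Server Express does not accept TCP/IP connections out of the box. A hedged check from the command line:

        sqlcmd -S windowsserver\SQLEXPRESS -U youruser -P yourpassword

    If sqlcmd times out as well, enable TCP/IP for the instance in SQL Server Configuration Manager (and start the SQL Server Browser service, which maps instance names to ports), then restart the SQL Server service.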

  • What can an inexperienced admin expect after a server setup that seemingly completed fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is the very first time he is the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, ProFTPD, and MTA & MDA software; configured VirtualHosts properly (facts, because he calls himself admin); done user management and various configuration settings with respect to security recommendations; and... everything is fine for now. For now. If you were directing a horror movie about the server admin described above, what would you cast as the boogeyman that shows up and starts to pursue him? Omitting hardware disaster cases, for which one cannot do anything 'from remote', what are the most common causes of significant server (or part-of-server, or server-related) failure when the server is managed by an inexperienced admin? I have in mind something that newbie admins very often miss, leading to the later intervention of someone with experience. Might that be some uncontrolled CPU-eating leftover process, a memory-related glitch, a widely-used feature that messes up something unexpected, or anything like that? The newbie admin for now only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips regarding what's probably going to happen to his server over time.

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual

    - by Tchalvak
    Background: Virtual Private Server

    I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop.

    The problem: retain control

    Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example.

    I need him not to be able to:

    - take away other admin permissions
    - change the root password
    - have control over other security/administrative functions

    I would like him to still be able to:

    - install software (through apt-get)
    - restart apache
    - access mysql
    - configure mysql/apache
    - reboot
    - edit web development configuration type files in /etc/

    Other standard setups would be happily considered

    I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know that that setup can be done. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
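    A minimal sketch of a sudoers policy along these lines - "webdev" is a hypothetical username, paths assume Debian/Ubuntu, and it should be edited with visudo. One honest caveat: unrestricted apt-get is itself a root-equivalent escape hatch (a package can run arbitrary scripts), so this limits rather than eliminates trust:

        # /etc/sudoers.d/webdev -- a sketch, not a hardened policy
        Cmnd_Alias WEB_SOFT  = /usr/bin/apt-get update, /usr/bin/apt-get install *
        Cmnd_Alias WEB_SVC   = /etc/init.d/apache2 *, /etc/init.d/mysql *
        Cmnd_Alias WEB_POWER = /sbin/reboot
        Cmnd_Alias WEB_EDIT  = sudoedit /etc/apache2/*, sudoedit /etc/mysql/*

        webdev ALL = (root) WEB_SOFT, WEB_SVC, WEB_POWER, WEB_EDIT

    Since webdev is never granted a shell as root, passwd, or usermod, the account cannot change the root password or touch other admins' privileges, which covers the "retain control" list above.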

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical, it sends a mail, but when a server is down no mail is sent. Even if all the services go to critical state, no mail is sent. I have the following configuration:

        define command {
            command_name    notify-host-by-email
            command_line    python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
        }

        define command {
            command_name    notify-service-by-email
            command_line    python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
        }

    The Python script sends a mail. It works if I execute it from the command line, but it doesn't send an email from Nagios. What am I doing wrong?

    UPDATE: The contact data is:

        define contact {
            contact_name                    root
            alias                           Root
            service_notification_period     24x7
            host_notification_period        24x7
            service_notification_options    w,u,c,r
            host_notification_options       d,r
            service_notification_commands   notify-service-by-email
            host_notification_commands      notify-host-by-email
            email                           [email protected]
        }

        define contactgroup {
            contactgroup_name   admins
            alias               Nagios Administrators
            members             root
        }
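    The contact side looks consistent, so a hedged guess: host notifications also depend on the host definitions themselves. If the host (or the template it inherits from) is missing contact_groups, has notifications_enabled set to 0, or has empty notification_options, service alerts still flow while host DOWN alerts silently don't. Something to compare against (a sketch; "myserver" is a placeholder):

        define host {
            host_name               myserver
            alias                   My Server
            address                 192.0.2.10
            max_check_attempts      5
            notification_period     24x7
            notification_options    d,u,r       ; down, unreachable, recovery
            notifications_enabled   1
            contact_groups          admins      ; must reference the group above
        }

    Running the notify command by hand as the nagios user (su - nagios, then the exact command_line with test arguments) also rules out a permissions or PATH difference between your shell and the Nagios daemon.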

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content, and insertion date; and one table storing tags and their values associated to an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids over the two tables. To sum it up:

    - 2 tables
    - lots of INSERT
    - no UPDATE
    - some DELETE, once a day at most
    - some user-generated SELECT with JOIN
    - huge data set

    What would an optimal server configuration (software and hardware; I assume for example that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes...) if needed.
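    A hedged starting point for the postgresql.conf side, assuming a dedicated machine with 16 GB of RAM and a PostgreSQL of this question's era (8.x/9.0; parameter names have changed since, and checkpoint_segments in particular is gone in 9.5+):

        shared_buffers = 4GB            # ~25% of RAM on a dedicated server
        effective_cache_size = 12GB     # what the OS page cache can realistically hold
        work_mem = 64MB                 # per-sort/hash; helps the JOIN-heavy SELECTs
        maintenance_work_mem = 512MB    # faster index builds and vacuums
        checkpoint_segments = 32        # absorb the sustained INSERT load
        wal_buffers = 16MB
        random_page_cost = 2.0          # below the default 4.0, assuming RAID10

    Schema choices likely matter more than any memory knob here: indexes on the tag table's (tag, value) pair and on both id columns serve the JOINs, and partitioning the entries table by insertion date turns the daily DELETE into a cheap DROP of the oldest partition.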

  • Connecting to network device behind NAT from local LAN using the external port and IP

    - by lumbric
    I have noticed the following phenomenon at several different LANs connected to the Internet through a NAT. There is a server in the LAN, and there is a port forwarding to reach this server from outside the LAN through the NAT. E.g. consider a LAN with the address range 192.168.0.* and an SSH server at 192.168.0.2 port 22, with a forwarding from port 2222 at the NAT 192.168.0.1 to 192.168.0.2:22. If the NAT's external IP is 44.33.22.11, one can connect to the SSH server through 44.33.22.11:2222. Surprisingly, this works only from outside the LAN. If one tries to connect to 44.33.22.11:2222 from behind the NAT, there is no answer. Of course one could simply use 192.168.0.2:22, but often it is simpler to use the external IP. The typical use case for me is the configuration on a laptop computer: usually the user uses an arbitrary Internet connection to connect to his home or office server, but sometimes he will also use the LAN to connect to it, and it would be annoying to have two different configurations or bookmarks. Why does it fail to connect from inside the LAN? Is there any good workaround?
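    What is missing is usually NAT loopback (hairpin NAT): the router rewrites destinations for packets arriving on its WAN side, but not for packets coming from the LAN; and even when it does, the reply would go straight back to the client with the server's LAN source address and be dropped. If the NAT device happens to be Linux-based and configurable, a sketch of the two rules involved, using the addresses from the question:

        # DNAT the forwarded port for LAN clients too, not just WAN arrivals
        iptables -t nat -A PREROUTING -d 44.33.22.11 -p tcp --dport 2222 \
            -j DNAT --to-destination 192.168.0.2:22
        # masquerade hairpinned LAN-to-LAN traffic so replies return via the router
        iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.2 \
            -p tcp --dport 22 -j MASQUERADE

    On consumer routers that cannot do this, split-horizon DNS (a hostname resolving to 192.168.0.2 inside the LAN and to 44.33.22.11 outside) is the usual workaround, and it keeps a single bookmark working everywhere.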

  • lighttpd domains and url matching

    - by Manuel
    I'm trying to configure lighttpd so that:

    - www.domain1.org/admin uses config1
    - any other URL on www.domain1.org uses config2
    - any URL on www.domain2.org uses config2

    So basically, domain1 and domain2 should use the same configuration except for when domain1 is accessed via a URL that starts with /admin. I tried so far a number of variations, including this one:

        $HTTP["host"] =~ "domain1.org" {
            $HTTP["url"] =~ "^/admin" {
                // config1
                alias.url = ("/media/admin" => "/usr/share...",
                             "/static" => "/var/www/...")
                url.rewrite-once = (
                    "^(/media/admin.*)$" => "$1",
                    "^(/static.*)$" => "$1",
                    "^/favicon\.ico$" => "/media/favicon.ico",
                    "^(/.*)$" => "/application.fcgi$1",
                )
                server.document-root = "/var/www/application"
                fastcgi.debug = 1
                fastcgi.server = (
                    "/application.fcgi" => (
                        "main" => (
                            "socket" => "/var/www/application/application.sock",
                            "check-local" => "disable",
                        )
                    ),
                )
            } else $HTTP["url"] !~ "^/admin" {
                // config2
            }
        }
        $HTTP["host"] !~ "domain1.org" {
            // config2
        }

    But no matter what, accessing domain1.org/admin yields a 404. Is there anything that I am missing?
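    One hedged way to avoid maintaining config2 twice is to factor it into its own file and pull it in from both branches with lighttpd's include directive, which does plain textual inclusion ("config2.conf" is a hypothetical filename):

        $HTTP["host"] =~ "(^|\.)domain1\.org$" {
            $HTTP["url"] =~ "^/admin" {
                # config1 ...
            }
            else $HTTP["url"] !~ "^/admin" {
                include "config2.conf"
            }
        }
        $HTTP["host"] !~ "domain1\.org" {
            include "config2.conf"
        }

    Anchoring the host regex also prevents accidental matches against names that merely contain "domain1.org". As for the 404 itself, one thing to check is whether the rewrite to /application.fcgi fires before the alias.url lookup; the fastcgi.debug = 1 output in the error log should show which path lighttpd actually tried.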

  • Proper approach to debug PC startup problems (POST)

    - by saurabhj
    My CPU was heating up to around 65 deg C, and the last time this happened (about a year ago), I got thermal paste put between the CPU and heat sink, which managed to get it down to about 45-50 degrees. This time, I got some thermal paste and applied it myself. However, my PC is not showing the POST display and not starting up. This is what happens:

    - LEDs light up
    - HDDs spin
    - Mouse is getting power
    - All fans, including the processor fan, start
    - No display on monitor
    - No diagnostic beep sounds (no sounds at all)

    What I have tried:

    - Removing everything including RAM, HDD, PCI cards, AGP card
    - Booting up the machine: no change from the first state

    Note (might be important): when I removed the heat sink, the processor came out with it (it was stuck to it in spite of the processor latch being on). I had to pry them apart with a screwdriver.

    Configuration:

    - Pentium 4, 2.8 GHz with HT (very old, I know)
    - Original Intel motherboard with onboard sound and graphics (GB series)
    - 2x512 MB DDR RAM
    - 2 SATA disks (320 GB / 250 GB)
    - DVD writer
    - Creative sound card
    - Network card

    What steps can I take to figure out where the problem lies? Any help would be appreciated. Thanks!

  • Is there a way to use VirtualBox without using its resource registry?

    - by Catskul
    Summary: VirtualBox seems to want everything to be "registered", which makes it much more annoying to work with on the command line. I'm attempting to create an automated script which will create, move, start, stop, and destroy virtual machines and virtual disks. Requiring registration complicates the task for the following reasons:

    - it leaves state information around that can cause unpredicted edge cases, causing scripts to fail
    - it creates potential namespace collisions when multiple processes create VMs with the same name
    - moving/copying resources on the same machine is more complicated, because references in the registry need to be updated
    - copying resources (disk + VM combination) to another machine requires reconfiguration once they reach their target machine, and requires the transfer of extra metadata to do the reconfiguration
    - if something unexpectedly fails, and an unregister thus fails to happen, leftover configuration information can cause problems in subsequent runs

    Use case: my specific use case is a continuous integration server which creates and destroys VMs and disk images, potentially with the same name, and would require extra logic to deal with the registry's statefulness.

    Imaginary example: it seems that I should just be able to, for example (using some imaginary and/or incorrect commands):

        mkdir foobar
        customdiskimg_script ./foo/foo.vdi
        vboxmanage createvm --name "foo" --ostype Linux --basefolder ./foo/foo.xml
        vboxmanage storagectl ./foo/foo.xml --name foo --add ide
        vboxmanage storageattach --storagectl foo --medium ./foo/foo.vdi ./foo/foo.xml
        vboxmanage startvm ./foo/foo.xml

    TLDR: Is there a way to use VirtualBox without "registering" hard disks and VMs?
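    As far as I know there is no unregistered mode, but the churn can at least be made self-cleaning: unregistervm --delete removes the VM, its .vbox file, and the attached disk images in one step, so a CI run that always tears down leaves no state behind. A hedged sketch ("ci-vm-$BUILD_ID" is a hypothetical naming scheme that sidesteps the name-collision problem):

        #!/bin/sh
        # create a uniquely named VM, run it, then tear everything down
        NAME="ci-vm-$BUILD_ID"
        VBoxManage createvm --name "$NAME" --ostype Linux --register
        VBoxManage storagectl "$NAME" --name ide0 --add ide
        VBoxManage storageattach "$NAME" --storagectl ide0 \
            --port 0 --device 0 --type hdd --medium ./foo/foo.vdi
        VBoxManage startvm "$NAME" --type headless
        # ... run the tests ...
        VBoxManage controlvm "$NAME" poweroff
        VBoxManage unregistervm "$NAME" --delete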

  • /etc/hosts: What is loghost? (fresh install of Solaris 10 update 9)

    - by cjavapro
        #
        # Internet host table
        #
        ::1             localhost
        127.0.0.1       localhost
        XX.XX.XX.XX     myserver loghost

    What is the purpose of loghost? If it were not for loghost being in there, all the /etc/hosts files on all the servers in this particular network could be identical.

    Edit: I looked at /etc/syslog.conf:

        #ident  "@(#)syslog.conf        1.5     98/12/14 SMI"   /* SunOS 5.0 */
        #
        # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
        # All rights reserved.
        #
        # syslog configuration file.
        #
        # This file is processed by m4 so be careful to quote (`') names
        # that match m4 reserved words. Also, within ifdef's, arguments
        # containing commas must be quoted.
        #
        *.err;kern.notice;auth.notice                   /dev/sysmsg
        *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
        *.alert;kern.err;daemon.err                     operator
        *.alert                                         root
        *.emerg                                         *
        # if a non-loghost machine chooses to have authentication messages
        # sent to the loghost machine, un-comment out the following line:
        #auth.notice                    ifdef(`LOGHOST', /var/log/authlog, @loghost)
        mail.debug                      ifdef(`LOGHOST', /var/log/syslog, @loghost)
        #
        # non-loghost machines will use the following lines to cause "user"
        # log messages to be logged locally.
        #
        ifdef(`LOGHOST', ,
        user.err                                        /dev/sysmsg
        user.err                                        /var/adm/messages
        user.alert                                      `root, operator'
        user.emerg                                      *
        )

    Very interesting. When shutting down, alerts go to all users, probably through the `*.emerg *` rule. Looking at ifdef, it seems that the first parameter checks whether the current machine is a loghost, the second parameter is what to do if it is, and the third parameter is what to do if it is not.

    Edit: If you want to test a logging rule, you can use svcadm restart system-log to restart the logging service and then logger -p notice "test" to send a test log message, where notice can be replaced with any facility.level such as user.err, auth.notice, etc.
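    To make the purpose concrete: loghost is the alias that syslog.conf resolves (through the m4 LOGHOST ifdef above) to decide whether log messages stay local or go to a central machine. On the log server the alias points at itself; on every other machine it can point at the central server, which is exactly why the files differ. A sketch, assuming 192.168.1.10 is the designated log server:

        # /etc/hosts on the log server itself
        XX.XX.XX.XX     myserver loghost

        # /etc/hosts on every other machine
        192.168.1.10    logserver loghost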
