Search Results

Search found 17314 results on 693 pages for 'vpn setup'.


  • Setting up shared connection

    - by Calvin Froedge
    I have a network that is connected to the internet via a switch connected to a router. I have it set up like this so I can work on the new network without causing problems on the old. Anyway, I'm trying to enable internet connection sharing. Internet comes to the server like this: Modem - Router - Switch - Ubuntu 11.10 (eth0). I want to share the connection through eth1 (eth1 - Managed Switch - Clients). I have a DHCP server running on eth1. Here is its config:
      ddns-update-style none;
      option domain-name "myserver.local";
      option domain-name-servers 192.168.1.2, 8.8.8.8;
      default-lease-time 600;
      max-lease-time 7200;
      authoritative;
      subnet 192.168.1.0 netmask 255.255.255.0 {
        interface eth1;
        range 192.168.1.3 192.168.1.254;
        option routers 192.168.1.1;
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.1.255;
      }
    Here is /etc/network/interfaces:
      # The loopback network interface
      auto lo
      iface lo inet loopback
      # The primary network interface
      auto eth0
      iface eth0 inet dhcp
      # Used for internal network
      auto eth1
      iface eth1 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        broadcast 192.168.1.255
        network 192.168.1.0
    Here is /etc/hosts:
      127.0.0.1 localhost
      127.0.1.1 myserver.isp.com server
      192.168.1.2 server.myserver.local server myserver.local
    In /etc/sysctl.conf, I've set the following:
      net.ipv4.ip_forward=1
    Finally, in /etc/rc.local, I've set the following:
      /sbin/iptables -P FORWARD ACCEPT
      /sbin/iptables --table nat -A POSTROUTING -o eth1 -j MASQUERADE
    When I ping 8.8.8.8 (Google's DNS) from a client that has been given a lease by my DHCP server (a local IP like 192.168.1.10), I get a timeout. How can I debug this further to figure out where my problem is?
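
    A minimal debugging sketch for this kind of setup, assuming eth0 is the WAN-facing interface and eth1 the LAN side as in the post (the rules and commands below are standard iptables/tcpdump usage, not taken from the original config). The main thing to check is that masquerading happens on the outgoing (WAN) interface rather than the LAN one:
      # 1. Confirm forwarding is actually live (sysctl.conf is only read at boot)
      sysctl net.ipv4.ip_forward            # expect: net.ipv4.ip_forward = 1
      # 2. Masquerade on the WAN-facing interface (eth0 here), not the LAN side
      /sbin/iptables -t nat -F POSTROUTING
      /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
      # 3. While a client pings 8.8.8.8, watch where the packets stop
      tcpdump -ni eth1 icmp     # echo requests should arrive from the client here
      tcpdump -ni eth0 icmp     # ...and leave here rewritten to the server's WAN address
    It is also worth checking that the address handed out as the default gateway (option routers) matches the address actually configured on eth1.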

    Read the article

  • Windows 2008 DHCP service fails - "...failed to see a directory server for authorization."

    - by ewwhite
    I have a small environment running Windows 2008 R2 where the DHCP service on the domain controller fails every two weeks. The most visible error is Event ID 1059, and the Event Viewer message is: "The DHCP service failed to see a directory server for authorization." The setup features two domain controllers and the usual services and roles (file, print, Exchange). Restarting the service fails for a variety of reasons. I've had the following messages at different times: "Not enough storage is available to complete this operation". "Unable to determine the DHCP Server version for the Server 192.168.x.x". "The DHCP service has detected that it is running on a DC and has no credentials configured for use with Dynamic DNS registrations initiated by the DHCP service." A reboot of the domain controller resolves the issue for ~2 weeks. The systems are virtualized and there are no network connectivity issues. Any ideas what's happening here?

    Read the article

  • POP Forums v10 beta posted for ASP.NET MVC 4

    - by Jeff
    Finally got some momentum and replaced the beta formerly known as v9.3. You can get it here, where you'll find the information below. You can also read my previous post on why I ditched jQuery Mobile. This is the beta for POP Forums v10, with the mobile special sauce. It requires ASP.NET MVC 4 RC, which you can download here. Of course, feel free to submit bugs to the issue tracker. See a live demo here: http://popforums.com/Forums
    What's new?
    - Uses a very lightweight CSS and JavaScript package to provide a touch-friendly interface for mobile devices.
    - Numbers are formatted (sensitive to culture) when 1,000 or higher.
    - CSS is more integration friendly, and specific to the ForumContainer element.
    - Mail delivery from queue is now parallel, so you can specify a sending interval, and the number of messages to process on each interval.
    - Background "services" refactored, and will only run with a call on app start to PopForumsActivation.StartServices(). This is partly to facilitate future use in Web farms/multiple Web roles.
    - Update to jQuery v1.7.1.
    - Replaced use of .live() with .on() in script, pursuant to the jQuery update, which deprecates .live().
    - FIX: Bug in topic repository around caching keys for single-server data layer.
    - FIX: Pager links on recent topics pointed to incorrect route.
    - FIX: Deleting a post didn't update last user/post time.
    - FIX: Ditched attempt at writing to event log with super failures, since almost no one has permission in production.
    - FIX: Bug in grayed-out fields in admin mail setup.
    - FIX: Weird color profiles would break loading of images for resize.
    - FIX: TOS text on account sign-up was double encoded.
    Known issues: None yet, but ditching jQuery Mobile from the previous beta turned out to be a good decision.

    Read the article

  • Skinning with DotNetNuke 5 Super Stylesheets Layouts - 12 Videos

    In this tutorial we demonstrate how to use Super Stylesheets in DotNetNuke for quickly and easily designing the layout of your DotNetNuke skin. Super Stylesheets are ideal for both beginner and experienced skin designers; the advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. We show you how to build a skin from the very beginning using Super Stylesheets. The videos contain:
    Video 1 - Introduction to the Super Stylesheets DNN Layouts and Initial Setup
    Video 2 - Setting Up the Skin Layout Template Code
    Video 3 - Using the ThreeCol-Portal Layout Template for a Skin
    Video 4 - How to Add Tokens to the Skin
    Video 5 - Setting Background Colors for Content Panes and Creating CSS Containers
    Video 6 - How to Create a Footer Area and Reset the Default Styles
    Video 7 - How to Style the Text in the Content, Left and Right Panes
    Video 8 - SEO Skin Layouts for DotNetNuke Tokens
    Video 9 - Creating Several Skin Layouts Using the Layout Templates
    Video 10 - Further Layout Templates and MultiLayout Templates
    Video 11 - SEO Layout Template Skins
    Video 12 - Final SEO Positioning of the Skin Code
    Total Time Length: 97min 53secs
    Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • If fiber runs 1gig fine, are there any concerns when considering upgrading to 10gig transceivers?

    - by Eric
    We had fiber installed (connecting ~10 buildings) around 5 years ago and it has been working great. The initial setup involved ProCurve 2848 and 2824 switches with 1gig transceivers. Lately we have been considering upgrading our network both to increase bandwidth and possibly add VOIP. However, a lot of this assumes that we can just pop the existing fiber into 10gig XFP transceivers in better switches and call it a day. If the fiber works fine at 1gig, does that mean it should be fine for 10gig? If not, how can we confirm that our existing fiber trunks will work, preferably in an affordable fashion?

    Read the article

  • Torrentflux broke after upgrade

    - by parker.sikand
    A working torrentflux setup seems to have broken after upgrading PHP to 5.3 and Postgres to 9.2beta3 on a FreeBSD 8.2 server. The login screen shows up fine, but after clicking the login button I get an error:
      Fatal error: Call to undefined function pg_escape_string() in
      /usr/home/parker/tf/html/inc/lib/adodb/drivers/adodb-postgres64.inc.php on line 241
    It seems to be an error with PHP and the pgsql PHP package. The PHP pgsql package itself is not totally broken, because I'm using it to host database-driven apps on this server. This is the first and only error I'm seeing from it. How might I go about fixing this problem?
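
    A short sketch of how this is often narrowed down on FreeBSD, assuming the pgsql extension simply needs rebuilding against the new PHP 5.3 and PostgreSQL client libraries (the port and file names below are the usual ones, not taken from the post):
      # Check whether the pgsql extension is loaded by the PHP the web server uses
      php -m | grep -i pgsql
      # Rebuild the extension from ports against the current PHP/PostgreSQL
      cd /usr/ports/databases/php5-pgsql && make deinstall reinstall clean
      # Confirm extension=pgsql.so is enabled, then restart the web server
      grep -R pgsql /usr/local/etc/php/
      /usr/local/etc/rc.d/apache22 restart      # or whatever serves torrentflux
    If the CLI php loads pgsql but the web SAPI does not, the two are usually reading different php.ini/extension directories.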

    Read the article

  • Password problem while creating domain

    - by Murdock
    Hi, I'm a freshman so far in server management, but this seems to be clearly against logic. After updating my Windows Server 2008 Standard 32bit and installing the DNS server and AD DS roles, I wanted to create a domain using CMD and the dcpromo.exe setup. But no matter whether I disable the demand for a complex password in the password policies or create a password which fully complies with the requirements for a strong and complex password, I still can't get any further and it says that my password doesn't meet requirements. I'm also asked there to activate the password requirement with NET USER -passwordreq:yes, and when I do so, this password doesn't work any more and I have to remove it from the other admin account to at least be able to log in with the proper Administrator account.

    Read the article

  • Using Lighttpd: apache proxy or direct connection?

    - by Halfgaar
    Hi, I'm optimizing a site by using lighttpd for the static media. I've found that a recommended solution is to use an Apache proxy that points to the lighttpd server. But does that use up an Apache thread/process per request? In my setup, I've noticed that all my processes are used up, even though they aren't doing anything, CPU-wise. To free up Apache processes, I've configured lighttpd, and Munin shows the number of processes needed has dropped significantly. However, I've set it up to connect directly to lighty, to prevent Apache workers from being occupied by serving static media. My question is: when using Apache as a proxy, does that also use up a process/worker per request?

    Read the article

  • Is anyone else using OpenBSD as a router in the enterprise? What hardware are you running it on?

    - by Kamil Kisiel
    We have an OpenBSD router at each of our locations, currently running on generic "homebrew" PC hardware in a 4U server case. Due to reliability concerns and space considerations we're looking at upgrading them to some proper server-grade hardware with support etc. These boxes serve as the routers, gateways, and firewalls at each site. At this point we're quite familiar with OpenBSD and PF, so we're hesitant to move away from that setup to something else such as dedicated Cisco hardware. I'm currently thinking of moving the systems to some HP DL-series 1U machines (model yet to be determined). I'm curious to hear if other people use a setup like this in their business, or have migrated to or away from one.

    Read the article

  • Windows Remote-App Server 2012 Office 2013 User Settings not saved

    - by dave
    I have a Windows Server 2012 with RemoteApp enabled, running the latest patches etc. It has Office 2013 installed, and Excel and Word are shared to all users. The problem is that after each reboot all user settings are lost. I have a few users who pin previously opened documents so they don't need to remember all the paths, and those pins are all gone after a reboot. The last-opened documents list is also empty, and after a server reboot Office 2013 brings up the first-time setup window asking whether you want to connect to SkyDrive and all that. In the RemoteApp collection I enabled a 100GB user profile disk on drive E: for storing user profile data. There is a domain of course, and there is no GPO preventing users from storing settings. We also have an older Terminal Server 2003 in the same domain where this is not happening. Any ideas why all the settings are lost after a reboot?

    Read the article

  • Running 12.04 as a gateway - resolvconf, dhclient and dnsmasq integration

    - by Adam
    I have a gateway server which was originally set up with Ubuntu desktop 12.04 - perhaps a mistake, I don't know, something to bear in mind. I ripped out network-manager and now want to get resolvconf, dhclient and dnsmasq to play well together. dhclient gets the gateway's eth0 WAN IP address and the ISP DNS name server from the modem. dnsmasq needs to serve DHCP to the rest of the LAN on eth1 and act as a DNS cache both for the LAN and for the gateway machine. I also set up iptables as a firewall. Right now, the gateway's /etc/resolv.conf shows only nameserver 127.0.0.1, which is correct AFAIK. However, I don't think that dhclient is giving dnsmasq the ISP DNS name server, nor is dnsmasq picking up the OpenDNS and Google name servers I specified in /etc/network/interfaces - at the moment look-ups, i.e. ping or surfing, don't work unless I manually edit /etc/resolv.conf to put in an upstream name server like 8.8.8.8. So I removed the resolvconf package. Now I'm not getting DHCP on my LAN and I'm not able to do DNS look-ups on the host itself - I can surf and ping on the net, but not via 127.0.0.1. Where do I go from here? This setup, with the same config for dhclient and dnsmasq and the same resolv.conf and hosts files, worked on my old Debian box.
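
    A minimal sketch of one way to wire this up, letting dnsmasq own both the LAN DHCP and all upstream forwarding instead of juggling resolvconf (the addresses and DHCP range below are illustrative, not taken from the post):
      # /etc/dnsmasq.conf
      interface=eth1                  # serve DHCP/DNS on the LAN side only
      listen-address=127.0.0.1        # ...and answer the gateway's own lookups
      no-resolv                       # ignore /etc/resolv.conf for upstream servers
      server=208.67.222.222           # OpenDNS
      server=8.8.8.8                  # Google
      dhcp-range=192.168.0.50,192.168.0.150,12h
      # With /etc/resolv.conf containing only "nameserver 127.0.0.1", test both halves:
      dig serverfault.com @127.0.0.1
      tail -f /var/log/syslog | grep dnsmasq     # watch DHCP offers go out on eth1
    With no-resolv set, nothing dhclient writes (or fails to write) to resolv.conf affects the upstream servers dnsmasq uses.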

    Read the article

  • Not able to connect to perforce server outside of localhost

    - by bobber205
    My setup is a Qwest PK5000 router with a Linksys router running Tomato behind it. I have the Qwest's DMZ pointed at the Linksys (the Perforce server is on the Tomato router). For my applications that open up sockets, and for uTorrent (port 6883), I ended up having to do advanced port forwarding and forward specific ports in addition to having the DMZ in place. The problem is that I cannot connect to Perforce from another machine, whether on the LAN or off it. Any ideas? :) Thanks!
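
    A couple of quick reachability checks, assuming the default Perforce port 1666 (substitute the real port and the server's LAN address; these are standard p4/netstat commands, not taken from the post):
      # On the Perforce server: is p4d listening on all interfaces, or only on localhost?
      netstat -an | grep 1666
      # From another LAN machine: can the port be reached, and does p4d answer?
      telnet <server-lan-ip> 1666
      p4 -p <server-lan-ip>:1666 info
    If the LAN test fails, the router forwarding isn't the issue yet - p4d is likely bound to localhost or blocked by a local firewall.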

    Read the article

  • Can't perform ODBC connection to MySQL server on local network

    - by Emmanuel
    I have a WAMP server running on LAN IP address 192.168.1.101. From the browser on my PC, which is on the LAN, I can access the web server, and I have also set up the phpmyadmin.conf file to be able to access the phpMyAdmin interface. This works smoothly. On the WAMP server I have a database which I need to access from any PC on the LAN using the MySQL Connector/ODBC. The problem is that I can't manage to set up the connection correctly. Here are the parameters I use:
      Data Source Name: test_connection
      Description: test connection
      Server: 192.168.1.101
      Port: 3306
      User: root
      Password:
      Database:
    The error message I get is the following:
      Connection Failed: [HY000][MySQL][ODBC 5.1 Driver]Can't connect to MySQL server on '192.168.1.101' (10060)
    Would anybody have a hint on how to set up the connection correctly?
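
    Error 10060 is a plain TCP connection timeout, so the usual suspects are the Windows firewall, MySQL only listening on 127.0.0.1, or the account not being allowed in from other hosts. A hedged sketch of the checks (database name and password below are illustrative):
      # On the WAMP box: confirm MySQL listens on the LAN address, not only 127.0.0.1
      #   (check my.ini for bind-address / skip-networking, and open TCP 3306 in the Windows firewall)
      netstat -an | find "3306"
      # Allow the account in from the LAN - root@localhost alone is not enough
      mysql -u root -p -e "GRANT ALL ON test_db.* TO 'root'@'192.168.1.%' IDENTIFIED BY 'secret';"
      # From the client PC, a raw reachability test before blaming the ODBC driver
      telnet 192.168.1.101 3306
    If telnet can't open the port, it's a firewall or bind-address problem; if it can but ODBC still fails, it's usually the grant.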

    Read the article

  • Puppet nodes can't find master, EC2 public versus internal IP addresses and hosts files

    - by Blankman
    If I set up my hosts files such that they reference all other EC2 nodes using the internal IP addresses, will this work or do I have to use the external IP addresses? Do I need to specify anything in my security group to get internal IP addresses to work? e.g. in /etc/hosts:
      ip-10-11-12-13.internal some_node_name
    If I do this, can I reference some_node_name anywhere in my scripts where I would have used the IP address previously? On my puppet agent servers, I have a reference to my puppet master like:
      public-ip-here puppet
    When I reboot my puppet agents, syslog shows they couldn't find the master with the message: getaddrinfo: name or service not known. I did get it to work by updating /etc/default/puppet, adding to the options: --server=public-ip-here. From what I read, puppet will by default try using 'puppet', and I set this in my hosts file, so why wouldn't it be picking this up?
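
    A sketch of the wiring being described, with illustrative values (the host alias and config paths are the usual ones, not taken from the post). Internal addresses work fine between instances in the same EC2 region, as long as the security group allows the traffic between them:
      # /etc/hosts on each agent - an internal address, with 'puppet' as an alias
      10.11.12.13   puppetmaster.internal puppet
      # /etc/puppet/puppet.conf - an explicit server beats relying on the 'puppet' default
      [agent]
      server = puppet
      # Sanity checks from an agent
      getent hosts puppet                   # does the name resolve to the internal IP?
      puppet agent --test --server puppet   # one-off run against that master
    If getent resolves the name but the agent still logs getaddrinfo errors at boot, the agent may simply be starting before the network (or the hosts entry) is in place.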

    Read the article

  • Any good PostgreSQL client for linux?

    - by senotrusov
    Stack Overflow marked this as "belongs-on-serverfault", so crossposting. I am frustrated at not having a good Linux GUI administration and development tool for PostgreSQL. pgAdmin III is a buggy and unusable piece of... hmm, software, compared to the Windows-only PostgreSQL Maestro and EMS PostgreSQL Manager. phpPgAdmin does not look promising. EMS PostgreSQL Manager can work under Wine, but such a setup has a number of issues. Requirements are:
    - Table data editing and browsing for large tables (1M+ rows), able to jump by FK or do some master-slave editing, GUI filtering and so on
    - ER diagrams with in-place schema editing
    - Schema editing and browsing with all useful GUI support
    - A schema change log to put into DB versioning (migration scripts)
    - A tabbed interface to be able to work with a number of tables and SQL queries at once
    And so on. Any ideas?

    Read the article

  • How to ...set up new Java environment - largely interfaces...

    - by Chris Kimpton
    Hi, Looks like I need to set up a new Java environment for some interfaces we need to build. Say our system is X and we need interfaces to systems A, B and C. Then we will be writing interfaces X-A, X-B, X-C. Our system has a bus within it, so the publishing on our side will be to the bus, and the interface processes will be taking from the bus and mapping to the destination system. It's for a vendor-based system - so most of the core code we can't touch. Currently thinking we will have several processes, one per interface we need to do. The question is how to structure things. Several of the APIs we need to work with are Java-based. We could go EJB, but prefer to keep it simple, one process per interface, so that we can restart them individually. Similarly SOA seems overkill, although I am probably mixing my thoughts about implementations of it with the concepts behind it... Currently thinking that something Spring-based is the way to go. In true "leverage a new tech if possible" style, I am thinking maybe we can shoehorn some JRuby into this, perhaps to make the APIs more readable, perhaps event-machine-like, and to make the interface code more business-friendly, perhaps even storing the mapping code in the DB as Ruby snippets that get mixed in... but that's an aside... So, any comments/thoughts on the Spring approach - anything more up-to-date/relevant these days? EDIT: Looking at JRuby further, I am tempted to write it fully in JRuby... in which case do we need any frameworks at all, perhaps some gems to make things clearer... Thanks in advance, Chris

    Read the article

  • How to get a Sun Ray to load a firmware from elsewhere

    - by vdiozguy
    I run a Sun Ray/VDI demo environment internally within the company - and because it's not a public service, I need to tell my Sun Rays to connect to it directly so that I don't get redirected to the corporate servers. To get any new Sun Ray to connect to *my* setup I usually pull out my laptop so that the Sun Ray can load the new version of the F/W along with the permission to pull up the management GUI via STOP-S. But there is a better way if you have another Sun Ray server handy:
    1) Allow your Sun Ray to connect to the default corporate server.
    2) Log in to a "regular" session, that is, a Solaris or Linux desktop on the Sun Ray server itself.
    3) In a terminal, utswitch to your server (/opt/SUNWut/bin/utswitch -h myserver).
    4) Again, log in to a regular session there.
    5) In a terminal, issue "/opt/SUNWut/lib/utload -S myserver -w".
    6) Watch your firmware load and wait.
    7) The Sun Ray will reboot and connect to the first server again. Repeat steps 2-4.
    8) Issue "/opt/SUNWut/lib/utload -S myserver -f SunRay.enableGUI".
    9) Press STOP-S and be merry.
    NOTE: I'm sure there is even yet a better way - this is totally unsupported, most likely a figment of my imagination. In any case, this post will self-destruct in BOOM.

    Read the article

  • Problem with Jumbo Frames

    - by Spookyone
    Hello, I am trying to set up jumbo frames on my gigabit home LAN but no luck so far. My setup is:
    - D-Link DIR-655 router, HW revision A3, firmware 1.21 EU
    - Synology DS107+, firmware 3.0-1337
    - Laptop w/ Win7 x64, external PCIx NIC managed by "Generic Marvell Yukon 88E8053 based Ethernet Controller"
    The router is supposed to support jumbo frames but doesn't feature any relevant setting. I set the Jumbo Packet value to 9000 on both the NIC and the Synology box, but it doesn't work; ping -f -l 8972 says "Packet needs to be fragmented but DF set". Is there any other setting I overlooked, does the DIR-655 not actually support jumbo frames, or what else could be the problem?
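
    One way to confirm end-to-end jumbo support from the Linux side (the Synology has an SSH shell), since a single device or switch left at an MTU of 1500 is enough to make the DF ping fail. The interface name and addresses are illustrative, and if the Synology's busybox ping lacks -M, the same test can be run from any Linux box on the LAN:
      # Check the NIC itself is set for jumbo frames
      ip link show eth0 | grep mtu          # expect mtu 9000
      # Don't-fragment ping at full jumbo payload (8972 + 28 bytes of headers = 9000)
      ping -M do -s 8972 <laptop-ip>
      # If that fails but a 1472-byte DF ping works, something in the path is still at 1500
      ping -M do -s 1472 <laptop-ip>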

    Read the article

  • PSQL 64bit driver error

    - by Alex Holsgrove
    I have an Ubuntu 12.04 64bit server set up under Hyper-V. I have installed the Pervasive 64bit SQL drivers so that a stock-updater script can run daily (it updates an external MySQL database from another local server running Exchequer software / a PSQL database). These drivers seem to conflict, as I found out when trying to run any apt-get commands:
      apt-get update
      apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by apt-get)
      apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by apt-get)
      apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by apt-get)
      apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)
      apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)
      apt-get: /usr/local/psql/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.4.12)
    Any help would be great.
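
    The errors suggest the dynamic loader is picking up the old libstdc++ shipped in the Pervasive tree for every program, not just the stock-updater. A hedged sketch of confirming and containing that (the updater path below is a placeholder):
      # Which libstdc++ does the loader resolve for apt-get?
      ldd /usr/bin/apt-get | grep libstdc++
      # Look for the Pervasive lib dir in the global loader configuration
      grep -R "psql/lib64" /etc/ld.so.conf /etc/ld.so.conf.d/ 2>/dev/null
      grep -R LD_LIBRARY_PATH /etc/profile /etc/profile.d/ /etc/environment 2>/dev/null
      # Remove it from the global path, rebuild the cache...
      sudo ldconfig
      # ...and hand the Pervasive libraries only to the job that needs them
      LD_LIBRARY_PATH=/usr/local/psql/lib64 /path/to/stock-updater
    Keeping /usr/local/psql/lib64 out of ld.so.conf and out of the system-wide LD_LIBRARY_PATH lets apt-get use the distribution's own libstdc++ again.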

    Read the article

  • Creating Persistent Drive Labels With UDEV Using /dev/disk/by-path

    - by Matt
    I have a new BackBlaze Pod (BackBlaze Pod 2.0). It has 45 3TB drives, and when I first set it up they were labeled /dev/sda through /dev/sdz and /dev/sdaa through /dev/sdas. I used mdadm to set up three really big 15-drive RAID6 arrays. However, since the first setup a few weeks ago I've had a couple of the hard drives fail on me. I've replaced them, but now the arrays are complaining because they can't find the missing drives. When I list the disks...
      ls -l /dev/sd*
    I see that /dev/sda /dev/sdf /dev/sdk /dev/sdp no longer appear, and now there are 4 new ones... /dev/sdau /dev/sdav /dev/sdaw /dev/sdax. I also just found that I can do this...
      ls -l /dev/disk/by-path/
      total 0
      lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-0:0:0:0 -> ../../sdau
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:1:0:0 -> ../../sdb
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:2:0:0 -> ../../sdc
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:3:0:0 -> ../../sdd
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-0:4:0:0 -> ../../sde
      lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-2:0:0:0 -> ../../sdae
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:1:0:0 -> ../../sdg
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:2:0:0 -> ../../sdh
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:3:0:0 -> ../../sdi
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-2:4:0:0 -> ../../sdj
      lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:02:04.0-scsi-3:0:0:0 -> ../../sdav
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:1:0:0 -> ../../sdl
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:2:0:0 -> ../../sdm
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:3:0:0 -> ../../sdn
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:02:04.0-scsi-3:4:0:0 -> ../../sdo
      lrwxrwxrwx 1 root root 10 Sep 19 18:08 pci-0000:04:04.0-scsi-0:0:0:0 -> ../../sdax
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:1:0:0 -> ../../sdq
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:2:0:0 -> ../../sdr
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:3:0:0 -> ../../sds
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-0:4:0:0 -> ../../sdt
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:0:0:0 -> ../../sdu
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:1:0:0 -> ../../sdv
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:2:0:0 -> ../../sdw
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:3:0:0 -> ../../sdx
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-2:4:0:0 -> ../../sdy
      lrwxrwxrwx 1 root root 9 Sep 19 18:08 pci-0000:04:04.0-scsi-3:0:0:0 -> ../../sdz
    I didn't list them all... you can see the problem above. They're sorted by SCSI ID here, but sda is missing... replaced by sdau... etc. So obviously the arrays are complaining. Is it possible to get Linux to reread the drive labels in the correct order, or am I screwed? My initial design with 15-drive arrays is not ideal. With 3TB drives the rebuild times were taking 3 or 4 days... maybe more. I'm scrapping the whole design and I think I am going to go with six 7-disk RAID5 arrays and 3 hot spares, to make the arrays a bit easier to manage and shorten the rebuild times. But I'd like to clean up the drive labels so they aren't out of order. I haven't figured out how to do this yet. Does anyone know how to get this straightened out? Thanks, Matt
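
    One way to get stable per-bay names is to key udev rules on the same by-path IDs shown above, so a replacement disk dropped into a bay inherits the old symlink. The rule file and symlink names below are illustrative, and note that mdadm identifies array members by UUID rather than device name, so assembly itself survives the renames:
      # /etc/udev/rules.d/99-pod-bays.rules - one line per bay
      KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="pci-0000:02:04.0-scsi-0:0:0:0", SYMLINK+="pod/bay01"
      KERNEL=="sd*", ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="pci-0000:02:04.0-scsi-0:1:0:0", SYMLINK+="pod/bay02"
      # ...repeat for the remaining bays...
      # Reload and retrigger; the stable names then appear under /dev/pod/
      udevadm control --reload-rules
      udevadm trigger --subsystem-match=block
      # Arrays can still be assembled regardless of the kernel's sdX names
      mdadm --assemble --scan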

    Read the article

  • Natively boot VirtualBox image

    - by isync
    I am faced with a Windows hardware/software problem left over from another person. It's on me to resolve, and it's a mission-critical setup. The situation is: I've got a physical server machine with:
    - Disk C:\ (one disk) containing a basic install of Windows Server 2008 R2, formerly Win Vista Pro, now gone.
    - Disk D:\ (software RAID) containing a VirtualBox disk image of a configured Windows Server 2008 R2 running SQL Server R2 among others.
    What shall I do now? Migrate all the stuff from the configured VM to the basic but natively installed C:\ Windows Server 2008 R2 (with the possibility of breaking stuff)? Or set up the machine to "natively boot" the VM with the help of bcdedit.exe (something I've read about but have never done, and I don't know whether it works, whether it hurts performance, or whether it is stable for production)? For me, being old skool, I am in the process of de-virtualising everything (option 1). But I'd be happy if someone suggests I am OK to go down the "natively boot" route.

    Read the article

  • Apache + SuExec + php-fpm - how to set them up?

    - by FractalizeR
    Hello. I wonder if there is a good guide on how to set up Apache + SuExec + php-fpm? I have a server on which I am going to run several separate websites, so I need PHP to run as the site-owner user. As far as I can see, php-fpm is a little different from php-fcgi. Is there a need for mod_fcgid on the Apache side in this case? How do I set this all up? For now my site is running Apache + mod_suphp + php-cgi, so... it's good, but a little slow. I want to preserve security and gain the ability to use APC.
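
    A hedged sketch of the usual shape of this on Apache 2.2: php-fpm runs its own pools (one per site, each as that site's owner), so suexec/mod_fcgid drop out of the PHP picture and Apache just hands .php requests to the pool, typically via mod_fastcgi. Pool names, ports and paths below are illustrative:
      # php-fpm pool, e.g. /etc/php5/fpm/pool.d/site1.conf - runs as the site owner
      [site1]
      user = site1owner
      group = site1owner
      listen = 127.0.0.1:9001
      pm = dynamic
      pm.max_children = 10
      # Apache 2.2 vhost for that site - hand .php to the pool via mod_fastcgi
      FastCgiExternalServer /var/www/site1/php5-fcgi -host 127.0.0.1:9001
      Alias /php5-fcgi /var/www/site1/php5-fcgi
      AddHandler php5-fcgi .php
      Action php5-fcgi /php5-fcgi
    Each pool keeps its own APC cache inside long-lived PHP processes, which is where the speedup over mod_suphp/php-cgi comes from.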

    Read the article

  • PHP Connection Strings

    - by Campo
    I have set up mirroring on my MSSQL server; it is an automatic failover. Let's say the SQL server goes down. I have found connection strings that reconnect the site to the mirror database for MSSQL 2008:
      Data Source=myServerAddress;Failover Partner=myMirrorServerAddress;Initial Catalog=myDataBase;Integrated Security=True;
    or
      Provider=SQLNCLI10;Data Source=myServerAddress;Failover Partner=myMirrorServerAddress;Initial Catalog=myDataBase;Integrated Security=True;
    or
      Driver={SQL Server Native Client 10.0};Server=myServerAddress;Failover_Partner=myMirrorServerAddress;Database=myDataBase;Trusted_Connection=yes;
    Is there something similar I can use from PHP to do the same sort of thing? That way, if only the database goes down, the site instantly fails over to the mirror database as soon as it is online. Thoughts/suggestions/comments all appreciated. I checked connectionstring.com but did not find a section for PHP.

    Read the article

  • Advice about performance for local or remote SQL Server?

    - by TruMan1
    I currently have my web server and SQL Express / MySQL server on the same server. It is on a VPS. I have been having problems with my hosting so I am thinking of separating the web and db server into 2 VPS servers. Does anyone recommend this? I am worried that changing my setup from a local DB server to a remote one will degrade performance heavily. They will not be on the same network, but will reference each other via an IP address. Anything I should be aware of?

    Read the article

  • Confusion about HSRP Groups

    - by Kyle Brandt
    If I have a router that has several LANs on it, and each of these LANs is also attached to a second router, do I need to use a different HSRP group for each LAN? With this setup, each virtual gateway will be on its own Layer 2 segment, and within a router no interface will have multiple gateways. So, for example:
      Router 1:
        F0/0: ip address 192.168.1.2 255.255.255.0
              standby ip 192.168.1.1
        F2/0: ip address 192.168.2.2 255.255.255.0
              standby ip 192.168.2.1
      Router 2:
        F0/0: ip address 192.168.1.3 255.255.255.0
              standby ip 192.168.1.1
        F2/0: ip address 192.168.2.3 255.255.255.0
              standby ip 192.168.2.1
    Will this work, or do I need standby 1 ip 192.168.2.1 on the F2/0 interfaces? Since, according to the RFC, the group number is carried in the HSRP multicast packets, my guess is that I don't need different groups, and that multiple groups are only needed when they are all on the same Layer 2 segment. However, I haven't been able to find this setup...

    Read the article
