Search Results


  • How can I use two Internet connections in Ubuntu?

    - by Martin
    My goal is to be able to do something like this:

        curl google.com --interface ppp0
        curl google.com --interface p2p2

    ppp0 is a DSL connection, and p2p2 is a separate direct Internet connection. Currently I can only get one of these to work at a time. When I enable one, the other one stops working.

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback
        # DSL
        auto p2p1
        iface p2p1 inet manual
        auto dsl-provider
        iface dsl-provider inet ppp
        pre-up /sbin/ifconfig p2p1 up # line maintained by pppoeconf
        provider dsl-provider
        # DIRECT
        auto p2p2
        iface p2p2 inet dhcp

    ifconfig:

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING MTU:65536 Metric:1
        p2p1  Link encap:Ethernet
              inet6 addr: fe80::20a:ebff:fe21:99c6/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        p2p2  Link encap:Ethernet
              inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0
              inet6 addr: fe80::20a:ebff:fe17:1249/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        ppp0  Link encap:Point-to-Point Protocol
              inet addr:53.193.231.167 P-t-P:53.193.224.1 Mask:255.255.255.255
              UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1492 Metric:1

    route -n:

        Kernel IP routing table
        Destination     Gateway          Genmask          Flags Metric Ref Use Iface
        0.0.0.0         0.0.0.0          0.0.0.0          U     0      0   0   ppp0
        10.0.10.0       0.0.0.0          255.255.255.0    U     0      0   0   eth2
        53.193.224.1    0.0.0.0          255.255.255.255  UH    0      0   0   ppp0
        192.168.1.0     0.0.0.0          255.255.255.0    U     0      0   0   p2p2

    By default, only ppp0 works. If I run "route add default gw 192.168.1.1 p2p2" then I can use p2p2 but ppp0 stops working. If I then run "route add default gw 53.193.224.1 ppp0" then I can use ppp0 again but p2p2 stops working. What can I do to be able to use both interfaces selectively?
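
    A minimal sketch of one common approach to this kind of setup, iproute2 source-based policy routing. The table names and numbers are arbitrary, and the addresses are taken from the output above (the ppp0 address will change whenever the DSL session reconnects):

        # register two extra routing tables (names/numbers are arbitrary)
        echo "100 dsl"    >> /etc/iproute2/rt_tables
        echo "101 direct" >> /etc/iproute2/rt_tables

        # one default route per table
        ip route add default dev ppp0 table dsl
        ip route add default via 192.168.1.1 dev p2p2 table direct

        # choose the table based on the source address the application binds to
        ip rule add from 53.193.231.167 table dsl
        ip rule add from 192.168.1.101 table direct
        ip route flush cache

    With rules like these, curl --interface binds the matching source address and each request leaves through the corresponding link.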

    Read the article

  • Debian/OVH: How to configure multiple Failover IP on the same Xen (Debian) Virtual Machine?

    - by D.S.
    I have a problem on a Xen virtual machine (running the latest Debian) when I try to configure a second failover IP address. OVH reports that my IP is misconfigured and they complain that they receive a massive quantity of ARP packets from these IPs, so they are going to block my IP unless I fix this issue. I suspect there's a routing issue, but I don't know (and can't find any useful info on the provider's website, and their support doesn't provide me a valid solution, just bounces me to their online - useless - guides). My /etc/network/interfaces looks like this: # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet static address AAA.AAA.AAA.AAA netmask 255.255.255.255 broadcast AAA.AAA.AAA.AAA post-up route add 000.000.000.254 dev eth0 post-up route add default default gw 000.000.000.254 dev eth0 # Secondary NIC auto eth0:0 iface eth0:0 inet static address BBB.BBB.BBB.BBB netmask 255.255.255.255 broadcast BBB.BBB.BBB.BBB And the routing table is: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 000.000.000.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 0.0.0.0 000.000.000.254 0.0.0.0 UG 0 0 0 eth0 In these examples (true IP addresses are replaced by fake ones, guess why :)), 000.000.000.000 is my main server's IP address (dom0), 000.000.000.254 is the default gateway OVH recommends, AAA.AAA.AAA.AAA is the first failover IP and BBB.BBB.BBB.BBB is the second one. I need both AAA.AAA.AAA.AAA and BBB.BBB.BBB.BBB to be publicly reachable from the Internet and point to my domU, and to be able to access the Internet from inside the virtual machine (domU). I am using eth0 and eth0:0 because, according to OVH support, I have to assign both IPs to the same MAC address and then create a virtual eth0:0 interface for the second IP. Any suggestion? What am I doing wrong? How can I stop OVH complaining about ARP flood? Many thanks in advance, DS
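
    For illustration only, one way to attach both failover IPs to the domU without an eth0:0 alias, keeping a single default route (a sketch reusing the placeholder addresses above; whether this satisfies OVH's expected IP-to-MAC binding is something their support would need to confirm):

        # add both failover addresses directly to eth0
        ip addr add AAA.AAA.AAA.AAA/32 dev eth0
        ip addr add BBB.BBB.BBB.BBB/32 dev eth0

        # reach the recommended gateway via a host route, then use it as the single default
        ip route add 000.000.000.254 dev eth0
        ip route add default via 000.000.000.254 dev eth0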

    Read the article

  • How do I fix a custom Event Viewer Log that merges automatically with the Application log?

    - by NightOwl888
    I am trying to create a custom event log for a Windows Service on Windows Server 2003. I would like to name the custom log "(ML) Startup Commands". However, when I add a registry key with that name to HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\, it adds a log but shows the exact same events that are in the Application log when looking in the event viewer. If I add a registry key with the name "(ML) Startup Commands 2" to the event log, it shows a blank event log as expected. In fact, any other name will work correctly except for the one I want. I have searched through the registry for other keys with the string "(ML)" and removed all other references to this key name; however, I continue to get merged results in the viewer when I create a key with this name. My question is, how can I fix the server so I can create a custom event log with this name that shows only the events from my application, not the events from the default Application event log that is installed with Windows? Update: I rebooted the server and wouldn't you know it, the log started acting normally. I got a strange error message in the Application log: The EventSystem sub system is suppressing duplicate event log entries for a duration of 86400 seconds. The suppression timeout can be controlled by a REG_DWORD value named SuppressDuplicateDuration under the following registry key: HKLM\Software\Microsoft\EventSystem\EventLog. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. I can only hope this error doesn't mean the problem will come back after 86400 seconds. I guess I will have to wait and see.

    Read the article

  • Debian network bridge configuration - /etc/network/interfaces

    - by Mathias
    I'm running a Lenny Xen dom0 hosting multiple virtual machines in a routed IP setup. To get an additional private subnet, I created the bridge xenbr0 in the dom0 with the following commands: brctl addbr xenbr0 ifconfig xenbr0 10.0.0.1 netmask 255.255.255.0 ifconfig xenbr0 up This works as expected, and domU interfaces are added to the bridge by Xen on VM start. My only problem is: how the heck do I specify this configuration in /etc/network/interfaces so that it remains permanent and the bridge is available after a reboot? I tried the following config as found in a lot of tutorials: auto xenbr0 iface xenbr0 inet static address 10.0.0.1 netmask 255.255.255.0 network 10.0.0.0 broadcast 10.0.0.255 bridge_stp no I get 2 different errors, depending on whether the bridge already exists or not. If it doesn't exist: root@dom0:~# brctl show bridge name bridge id STP enabled interfaces root@dom0:~# /etc/init.d/networking restart Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning). SIOCSIFADDR: No such device xenbr0: ERROR while getting interface flags: No such device SIOCSIFNETMASK: No such device SIOCSIFBRDADDR: No such device xenbr0: ERROR while getting interface flags: No such device xenbr0: ERROR while getting interface flags: No such device Failed to bring up xenbr0. done. And if it exists: root@dom0:~# brctl show bridge name bridge id STP enabled interfaces xenbr0 8000.000000000000 no root@dom0:~# /etc/init.d/networking restart Reconfiguring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface xenbr0 before doing NFS mounts (warning). RTNETLINK answers: File exists Failed to bring up xenbr0. done. Could anyone point me in the right direction please? The bridge works fine when created manually; I just need the right config file entries. Most of the tutorials I found add some devices to the bridge in the config - could that be why it is not working? I don't have any interfaces I want to add to the bridge on creation as they get added later on VM start... Thanks, Mathias
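
    A sketch of the interfaces stanza usually suggested for an empty, persistent bridge; it assumes the bridge-utils package is installed, since its ifupdown hooks interpret the bridge_ports line:

        auto xenbr0
        iface xenbr0 inet static
            address 10.0.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off

    With bridge_ports none the bridge is created empty at boot and Xen can still attach domU vifs to it later; deleting any manually created xenbr0 before restarting networking avoids the "RTNETLINK answers: File exists" error.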

    Read the article

  • NFS headaches with FreeBSD 4.9

    - by Ernie
    Once upon a time, this used to work, and I kept the configuration the same, but... now nothing. I'm just trying to get an NFS server set up on a FreeBSD 4.9 server. The process should be about as complicated as this: Add this entry to /etc/exports: /var/home /var/vpopmail/domains -maproot=root XXX.XX.XX.XXX Execute this: portmap nfsd -u -t -n 4 mountd -r Then this should work, regardless of network and firewall issues: showmount -e localhost But showmount -e localhost fails with the following error: RPC: Port mapper failure showmount: can't do exports rpc And even if I kill off the NFS daemon, and try a rpcinfo -p localhost, I get this error: rpcinfo: can't contact portmapper: rpcinfo: RPC: Unable to receive; errno = Connection reset by peer The portmapper is still running, so why the heck does everything act as if it isn't? Edit to add: FYI, sockstat gives me this: $ sockstat |egrep "(nfsd|portmap)" root nfsd 86310 3 udp4 *:2049 *:* root nfsd 86310 4 udp4 *:973 *:* root portmap 45920 0 tcp4 *:111 *:* Then, at a later time (say, 5 minutes) it's as if nfsd isn't acting as a server: $ sockstat |egrep "(nfsd|portmap)" root portmap 45920 0 tcp4 *:111 *:* But the nfs daemon is still running: $ ps ax |grep nfsd 86311 ?? I 0:00.00 nfsd: server (nfsd) 86312 ?? I 0:00.00 nfsd: server (nfsd) 86313 ?? I 0:00.00 nfsd: server (nfsd) 86314 ?? I 0:00.00 nfsd: server (nfsd)
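
    A sketch of the rc.conf route on FreeBSD 4.x, so the daemons come up at boot in the expected order (portmap before nfsd and mountd) instead of being started by hand; the variable names assume the 4.x rc system:

        # /etc/rc.conf
        portmap_enable="YES"
        nfs_server_enable="YES"
        nfs_server_flags="-u -t -n 4"
        mountd_flags="-r"

    After a reboot (or after starting portmap first, then nfsd and mountd), rpcinfo -p localhost should list the nfs and mountd registrations before showmount is tried again.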

    Read the article

  • Copying email with qmail and Plesk

    - by Greg
    I need to keep a copy of all outgoing and incoming email (for a single domain if possible) using qmail or Plesk. I can't recompile qmail, so qmailtap is out of the question, as is setting QUEUE_EXTRA in extra.h. I'm pretty sure it should be possible with Plesk's mailmng utility, aka Mail Handlers, but I'm having trouble getting them to work. I've registered 2 hooks:

    Incoming hook:

        ./mailmng --add-handler --handler-name=incoming --recipient-domain=example.com --executable=/xxx/incoming.sh --context=/xxx/incoming/ --hook=before-local

    incoming.sh:

        #!/bin/bash
        # The email is passed on stdin - grab it to a variable
        e=`cat -`
        # $1 = context (/xxx/incoming)
        # $3 = recipient ([email protected])
        # Create /xxx/incoming/[email protected]
        mkdir -p $1$3
        # Save the email to /xxx/incoming/[email protected]/0123456789.txt
        echo "$e" > $1$3/`date +%s%N`.txt
        # Echo PASS to stderr
        echo 'PASS' >&2
        # Echo the email to stdout
        echo "$e"

    Outgoing hook:

        ./mailmng --add-handler --handler-name=outgoing --sender-domain=holidaysplease.com --executable=/xxx/outgoing.sh --context=/xxx/outgoing/ --hook=before-remote

    The outgoing.sh file is the same as incoming.sh, except replace $3 (recipient) with $2 (sender). The incoming hook does work, but saves 2 copies of each email - one before and one after SpamAssassin has run. The outgoing hook doesn't seem to get called at all. So finally, my questions are: How can I make the incoming hook save only a single copy (preferably after SpamAssassin has run)? How can I get the outgoing hook to work?

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array, and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like a RAID-Z1 would be a good fit, but evidently I would only be able to add whole additional RAID5 (RAID-Z1) sets, rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup are welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
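
    A minimal sketch of the layered setup described above (the pool name is a placeholder). With its default checksums ZFS can detect blocks corrupted by the write hole, but on a single-vdev pool it can only repair them if it has its own redundant copies, hence copies=2 at roughly double the space cost:

        # single ZFS vdev on top of the md RAID5 array
        zpool create tank /dev/md0

        # keep two copies of every block so ZFS can self-heal detected corruption
        # (applies only to data written after the property is set)
        zfs set copies=2 tank

        # periodic scrubs surface latent corruption early
        zpool scrub tank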

    Read the article

  • adding or routing additional domain email addresses

    - by Mustafa Ismail Mustafa
    We have exchange 2007 and we bought a new domain name and we're still keeping the old one so that we can wean everyone off of the old emails. Now, I'm wondering how to go about this. I need to add the new domain as accepted and authoritative by the exchange server. Emails on the new domain need to get routed to the inbox and ditto the old emails, however, I want to be able to change the reply-to in the header to the new email address automatically. I also want to set the new email addresses as the defaults. Ideally, I'd like to be able to add a message at the bottom of every externally outgoing email saying that the new email is [email protected]. But this is a nice to have, certainly not a must have. I've added the new domain as authoritative, and managed to change the primary smtp email addresses to the new one, but sent emails are not being routed to them and neither are the old email addresses! Now how the heck would I go about fixing all of that? I'm completely stumped! TIA

    Read the article

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC network access on)

    - by SocialAddict
    I'm having problems with my ASP.NET web forms system. It worked on our test server, but now that we are putting it live, one of the servers is within a DMZ and the SQL server is outside of that (still on our network, though on a different subnet). I have opened up the firewall completely between these two boxes to see if that was the issue, and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try and use the "TransactionScope". We can access the data for retrieval; it's just transactions that break it. We have also used msdtc ping to test the connection and, with the amendments on the firewall, that pings successfully, but the same error occurs! How do I resolve this error? Any help would be great as we have a system to go live today. Panic :) Edit: I have created a more straightforward test page with a transaction as below and this works fine. Could a nested transaction cause this kind of error, and if so why would this only cause an issue when using a live box in a DMZ with a firewall? AuditRepository auditRepository = new AuditRepository(); try { using (TransactionScope scope = new TransactionScope()) { auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1); auditRepository.Save(); auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1); auditRepository.Save(); scope.Complete(); } } catch (Exception ex) { Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace); }

    Read the article

  • Changing Mac OS X 10.6 Routing after VPN'd In

    - by Matt Rogish
    I have a coffee shop around the corner that I use to do some work when I want to get away from home. They offer free wi-fi and I then use my Mac 10.6 VPN to log into my work network. I have "Send all traffic over VPN connection" checked. Before, their network was 10.0.0.x. I think they got a new router because it's now 192.168.2.x However, this interferes with one of the subnets at work so now I can't visit 192.168.2.x at work. So: 1) Office network: VPN gives IPs as 192.168.1.x. Another network is 192.168.2.x 2) Coffee network: Gives IPs as 192.168.2.x I think if I set a route to send all 2.x traffic over the tunnel, it would blow up my routing to their system, right? What should I do? I know the individual IPs of the servers I want... Maybe I could add each one, or can I add all of them minus the default gateway of their router? How do I set that up "temporarily" in my Mac? Thanks!!
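
    A sketch of the per-host route idea on OS X; the addresses are placeholders for the work servers' IPs, and ppp0 is assumed to be the VPN's interface name (ifconfig would confirm it):

        # send only the specific work hosts over the VPN tunnel
        sudo route -n add -host 192.168.2.10 -interface ppp0
        sudo route -n add -host 192.168.2.11 -interface ppp0

        # remove them again when leaving the coffee shop
        sudo route -n delete -host 192.168.2.10
        sudo route -n delete -host 192.168.2.11

    A /32 host route is more specific than the coffee shop's connected 192.168.2.0/24 route, so only those addresses get pulled back through the tunnel.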

    Read the article

  • Windows 8 - no internet connection to some hosts while VPN is active

    - by HTD
    I use VPN to access the servers at work. When VPN is used, all network traffic to the Internet passes through my company network. It worked without any problems on Windows 7; now on Windows 8, some sites suddenly became inaccessible. Please note - I don't try to connect them over RDP, they are public Internet addresses, outside the company network. They are inaccessible using any protocol. Ping returns "General failure.". I know it could be a misconfiguration on my company's server side, but it's very strange, since the same VPN connection used on Windows 7 works properly. What's wrong? Is it a Windows 8 bug, or is there something I could do on my company servers to make VPN work as expected with Windows 8? My company network works on Windows Server 2008 R2 and uses Microsoft TMG firewall. I couldn't find any rules blocking the traffic to the mentioned sites; all network traffic for VPN users is passed through for all IPs and protocols. Any clues? UPDATE: Important - one whole day it worked. I hibernated and restarted the computer, connected and disconnected VPN - nothing could break my connection. Today it broke again, and restarting Windows didn't help. And now the solution: route add -p 0.0.0.0 MASK 255.255.255.255 192.168.1.1 Oh, OK, I know what it did, added my default gateway to the routing table. But it still didn't work sometimes. So I removed my main network gateway route with: route delete -p 0.0.0.0 MASK 0.0.0.0 192.168.0.1 And added a modified one with: route add -p 0.0.0.0 MASK 255.255.255.255 192.168.0.1 And it works. Now. But I don't trust this. I don't know what really happened.

    Read the article

  • Increasing Java's heapspace in Tomcat startup script

    - by Ankur
    I want to increase my heap size when using Tomcat. I was told to add the line "export CATALINA_OPTS=-Xms16m -Xmx256m;" to the startup.sh script. I did so (at the beginning) but got the error:

        export: 24: -Xmx256m: bad variable name

    Where am I supposed to add it? Am I doing something else wrong?

        export CATALINA_OPTS=-Xms16m -Xmx256m;

        # Better OS/400 detection: see Bugzilla 31132
        os400=false
        darwin=false
        case "`uname`" in
        CYGWIN*) cygwin=true;;
        OS400*) os400=true;;
        Darwin*) darwin=true;;
        esac

        # resolve links - $0 may be a softlink
        PRG="$0"
        while [ -h "$PRG" ] ; do
          ls=`ls -ld "$PRG"`
          link=`expr "$ls" : '.*-> \(.*\)$'`
          if expr "$link" : '/.*' > /dev/null; then
            PRG="$link"
          else
            PRG=`dirname "$PRG"`/"$link"
          fi
        done

        PRGDIR=`dirname "$PRG"`
        EXECUTABLE=catalina.sh

        # Check that target executable exists
        if $os400; then
          # -x will Only work on the os400 if the files are:
          # 1. owned by the user
          # 2. owned by the PRIMARY group of the user
          # this will not work if the user belongs in secondary groups
          eval
        else
          if [ ! -x "$PRGDIR"/"$EXECUTABLE" ]; then
            echo "Cannot find $PRGDIR/$EXECUTABLE"
            echo "This file is needed to run this program"
            exit 1
          fi
        fi

        exec "$PRGDIR"/"$EXECUTABLE" start "$@"
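
    The "bad variable name" error comes from the missing quotes: without them the shell treats -Xmx256m as a second name to export rather than part of the value. A sketch of the usual fix (the setenv.sh location assumes a Tomcat version whose catalina.sh sources it; otherwise the quoted line can stay near the top of startup.sh):

        # $CATALINA_HOME/bin/setenv.sh (create the file if it does not exist)
        export CATALINA_OPTS="-Xms16m -Xmx256m"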

    Read the article

  • Are Windows Domain Service Accounts Really Necessary?

    - by Zach Bonham
    One of the biggest problems we have in automating application deployments is the idea that running IIS AppPools and Windows Services under domain service accounts is a 'best practice'. Unfortunately, this best practice sometimes causes deployment headaches in that either we need to provision a new domain level service account quickly, or once we have the account, we now need to manage the account credentials. I had a great conversation about not making domain level service accounts a requirement and effectively taking one of two approaches: Secure at the node level using machine account(domain\machine$) and add the node to appropriate ActiveDirectory/Sql groups/roles Create local app specific accounts on each machine (machine\myapp) and add that account to appropriate ActiveDirectory/Sql groups/roles (the password here can change per deployment, it doesn't need to be stored) In both cases, it seems that its easier to manage either adding an account to appropriate group/role, or even stand up new, local account, than it is to have to provision a new domain level account and manage those credentials. This would hopefully ease the management burden on ActiveDirectory, Sql Server and Operations teams as there would be no more password management. We've not actually been able to implement this in practice yet. I am coming from a development background, so I'm curious as to how many ways this approach could go wrong? Can we really get rid of domain level service accounts with this direction? I'd appreciate any thoughts from anyone who has taken this path! Thanks! Zach

    Read the article

  • Struggling to set-up NLB cluster

    - by Chris W
    I'm trying to set up NLB on a couple of Windows 2008 R2 virtual servers running on top of Hyper-V R2. The servers each have a single vNIC for LAN access (and a second vNIC for SAN access). I'm setting up the cluster to use Multicast mode. The vNICs are each set to allow MAC spoofing. Essentially I'm finding that I can add SERVER1 as a host and it will pick up and respond to the cluster IP from a remote subnet. If I then 'stop' the node in NLB manager it still responds when I would expect it to stop answering on that IP. If I recreate the cluster and add SERVER2 as the first host, the wizard completes correctly and an IPCONFIG on the server shows that it now has the cluster IP, but I can't ping the cluster IP from a remote subnet, though I can from another machine on the same subnet. As a final test - with both servers in the cluster, pinging from another machine on the same subnet I still get a response from the cluster IP when both nodes are stopped according to the NLB manager. The two VMs are sat on the same physical blade and are built up exactly the same as they'll be used as SharePoint web front end servers. I'm at a loss as to what could be wrong with the second VM that prevents it taking on the address just as the sole node in the cluster, never mind the strange behaviour of the cluster when I stop/start nodes.

    Read the article

  • location of index.html CentOS 6

    - by user2118559
    Based on this tutorial, http://www.servermom.com/how-to-add-new-site-into-your-apache-based-centos-server/454/, I installed an Apache-based CentOS server. Using putty.exe and vi as my editor, I modified the very bottom of /etc/httpd/conf/httpd.conf to: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/fikitipis.com/public_html ServerName www.fikitipis.com ServerAlias fikitipis.com ErrorLog /var/www/fikitipis.com/error.log CustomLog /var/www/fikitipis.com/requests.log common </VirtualHost> So I expect the index to be at /var/www/fikitipis.com/public_html. When I type the server's IP address in a browser, I see "Apache 2 Test Page powered by CentOS", "You may now add content to the directory /var/www/html/", and so on. Then [root@vps ~]# ls /var/www/ shows cgi-bin domain.com error fikitipis.com html icons. Checking the contents of the directories, ls /var/www/domain.com/public_html, ls /var/www/fikitipis.com/public_html and /var/www/html/ are all empty. Where is index.html? I did touch /var/www/fikitipis.com/public_html/index1.html, then vi /var/www/fikitipis.com/public_html/index1.html, pressed "a", wrote some text in the file, then Escape and Shift+ZZ. Then in the browser http://111.111.11.111/index1.html shows what I had written. So far everything seems to work.
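
    There is no index.html until one is created; Apache only shows its test page while the DocumentRoot is empty. A short sketch of creating one for the vhost above (the page content is an arbitrary placeholder):

        mkdir -p /var/www/fikitipis.com/public_html
        echo '<html><body>fikitipis.com works</body></html>' > /var/www/fikitipis.com/public_html/index.html
        # optional: hand the tree to the apache user; world-readable files work too
        chown -R apache:apache /var/www/fikitipis.com
        service httpd reload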

    Read the article

  • What is the best storage server infrastructure? DAS/NAS/SAN or installing GlusterFS/Lustre/HDFS/RBDB

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It would be something like a file-sharing/downloading project (like Rapidshare); I would need large storage sizes and good scalability, and I would add new storage nodes as the project grows. I have come up with 3 solutions for my project, using Lustre, GlusterFS, HDFS or RBDB. To start, I would have 2 servers: one server for the GlusterFS client + web server + DB server + a streaming server, and the other server as a Gluster storage node. (After some time, I would be adding more node servers and client servers - I don't know how many new client servers to add yet, I will see later.) So I am thinking of working with GlusterFS. But I really wonder whether I have to use high-performance servers with large storage sizes, or average/slow servers with large storage sizes? Or are NAS/DAS/SAN solutions better for GlusterFS storage nodes? I might buy a NAS and install GlusterFS onto it. I would be happy to hear your recommendations for the server properties (for both clients and nodes). I really don't know if I need a high amount of RAM and good CPUs for the nodes; I am sure I need them for the client servers. The files would be streamed as well, so automatic file replication is important: my system should work like a cloud, and when high traffic demands it, the storage nodes should copy the most-demanded files to be streamed, which would help me avoid scalability problems and let my visitors stream/download those files. I am also open to your experiences/thoughts about any good solution. Lustre, HDFS and RBDB are the other options, and I would be happy to hear your thoughts on them. I would be very happy to hear back from anyone who can comment on anything I have written here. Thanks
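
    For reference, a minimal sketch of what a two-node replicated GlusterFS volume looks like from the CLI (host names, brick paths and the volume name are placeholders; replica 2 keeps a full copy of every file on each node, which is the automatic replication mentioned above):

        # on the first storage node, after installing glusterfs-server on both
        gluster peer probe storage2
        gluster volume create shared replica 2 storage1:/export/brick1 storage2:/export/brick1
        gluster volume start shared

        # on the client/web server
        mount -t glusterfs storage1:/shared /mnt/shared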

    Read the article

  • How do you automatically close 3rd party applications when LiberKey is shut down?

    - by NoCatharsis
    Within LiberKey, I have added my own portable applications that are not included within the LiberKey library. When you go into the Properties menu for the app in the LiberKey UI, the Advanced tab has an option for Autoexecute. This dropdown menu seems to have no visible effect, at least on my current installation. I found that I could right click within the primary GUI and select "Add software group", add all 3rd party applications, then go to the Advanced tab within THAT Properties screen and select Autoexecute - "Always on startup". This solved the problem for starting the apps when LiberKey starts. However, now I'm having the same issue when closing out LiberKey. I have created a new 3rd party app that calls the same .exe, but sends the Parameter "/close". I then went to the Advanced tab and selected Autoexecute - "Always on shutdown". Seems pretty logical right? But the apps will not close on LiberKey shutdown. I cannot handle the app close-outs in the same way with a software group, as I did with the startup issue because the Autoexecute drop-down does not have an "Always on shutdown" option. Unfortunately, many of the Q&A forums on liberkey.com are in French and I took Spanish in high school. Otherwise I've not been able to find a workable answer. Any suggestions?

    Read the article

  • Primary zone will not transfer to secondary zone

    - by Matt Beckman
    Using DNS on Windows Server 2008, there is a constant struggle with adding primary and secondary zones. I will add a primary zone to NS1 for a new domain, edit it as needed, and when it's ready add the secondary zone to NS2. However, MOST of the time, the secondary zone remains in an error state and will never acquire the primary zone data. I have gone back to domains a few weeks after adding them to find out that Windows never propagated the change. Annoying. Anyway, I recently updated SP1 to SP2 thinking this would help, but it hasn't. I added two new domains today, and spent an hour because the secondary zone would just not sync. During that time, the only error in the logs I had seen was for one of them, where DNS complained about not being authoritative. In order to eventually resolve the issue, I ended up deleting the primary zone, creating a new primary zone, and hitting "Apply" after each and every field change. For example, after modifying the serial number from "1" to a date-appropriate "2010093001", I hit apply, and then the Primary Server (apply), Responsible Person (apply), and finally Name Servers (apply). After I did this, the secondary zone didn't waste any time getting the data. Ideas?

    Read the article

  • Ubuntu root privs installation issue

    - by Pam
    I am a fairly new Ubuntu user (and Linux user, for that matter) and I just downloaded a program whose installer was a .sh file. Not thinking, I copied the installer to an /opt subdirectory, thinking that I was going to install the application there: sudo cp ~/Downloads/fooInstaller.sh /opt/someDir I can't remember, but I either had to use sudo because /opt required it, or I just used it without thinking, but in any case, I prefixed the command with sudo. Once in /opt/someDir, I executed the installer again, using sudo: sudo sh fooInstaller.sh The terminal went crazy, and a few seconds later, a graphical install wizard popped up that guided me through the rest of the process. At the end of the wizard I was prompted to launch the program, and I did, and everything was great. Until... I closed the program, and attempted to add it to my Ubuntu "panel" (the icon panel at the top of the screen). The program was installed to /usr/local/foo/theProgram, and so I specified that URL as the command in the custom app launcher. When I open the program through the panel/launcher (at the top of the screen), the program doesn't load or operate correctly. I get a lot of error messages complaining about being denied permissions. I'm assuming that this is a "superuser/installation/privs" issue, and not a problem with the application (hence this post at superuser.com instead of the application's forums), because when I launch the program from the terminal with sudo, it opens and executes perfectly fine, just like it did the first time around after the install wizard finished. I realize I'm probably going to have to uninstall the program completely, and re-install it differently. Finally, my question: After uninstalling, can I avoid all these issues by just running the installer (sh fooInstaller.sh) right out of my Downloads directory, sans the sudo prefix? If not, how do I get the program to install without root privs so that I can add it to my panel/launcher and get it executing correctly? Sorry for the long post but I didn't want to omit any details because, as I'm sure you can tell, I'm not really sure I know what I'm doing. Thanks for any help here!
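
    A sketch of the sudo-free route raised in the question, plus a fallback if the program has to stay under /usr/local (the /usr/local/foo path is taken from above; whether the installer itself needs root for other steps depends on the program):

        # run the installer from the Downloads directory as your own user
        sh ~/Downloads/fooInstaller.sh

        # or, if it must remain installed system-wide, hand the installed tree to your user
        sudo chown -R "$USER" /usr/local/foo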

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no report library template and no report center site template; the only additional functionality available is the report viewer web part. Background: a 2-server WSS 3.0 farm with CA (Central Admin), WFE (web front end) and the Reporting Services add-in installed on one, and SQL05 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated it a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within the 'site settings' I have a 'Reporting Services' heading with a 'manage shared schedules' item/link; I'm not sure if there should be other options. I was of the understanding that to view reports within SharePoint I could either create a new site using the 'report center' template or add a report library to an existing site, neither of which seems to be available. I am at a loss as to what to do, as all the online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • Routing for Two Hosts Behind a IPSec Tunnel

    - by Brent
    Network A: 10.110.15.0/24 - firewall is .1, Host A is .2
    Network B: 10.110.16.0/24 - firewall is .1, Host B is .2

    Two Cisco ASAs. IPSec tunnel with a crypto map that secures 10.110.15.0/24 <-> 10.110.16.0/24. Let's say two hosts, 10.110.15.2 and 10.110.16.2, need to talk to each other. Normally I have to enter a persistent static route on each host along the lines of: route add 10.110.16.0 mask 255.255.255.0 10.110.15.1 metric 1 -p (on the "A" box) I also have to enter another persistent static route on the .16 host in order for the traffic to know how to get back to the .15 network. Note that the default for each machine IS the firewall, so .1. I have no problem adding persistent routes on Windows/ESX/*nix machines, but what about a smart switch in the .16 network that I want to manage from the .15 network? Do I need to run a routing protocol? Do I need to have Reverse Route Injection enabled on both ends of the IPSec tunnel? Should I add a route on the firewall? If so, how do you formulate it? Does it get a metric of 1 and my default route 0.0.0.0 a metric of 2?

    Read the article

  • MSE updating fails, no warning or error message.

    - by WebDevHobo
    I'm running Windows 7 Ultimate, 32-bit. For the last couple of days, MSE has failed to update, remaining stuck at version 1.75.119. I presume that an error log or an event log is created somewhere, but I don't know where to find those. It just says "connection failed". Tried it at home, at work and at friends' places, but it never works. Restarted the computer a lot of times now, checked for Microsoft Updates in general, but nothing shows up. EDIT: I've opened a bounty for this, because I really don't know what to do anymore. The oldest answer (the long post) here did not work. Besides this problem, I'm having trouble using MSI installers too. I've had to add the SYSTEM group to a lot of folders and give it full control, but shouldn't SYSTEM already be there? Also, I had to remove the "read-only" attribute from the ProgramData and Users folders, add the SYSTEM group there too and give it full control. Only then will the MSI install work, and even then it says I don't have the rights to create a shortcut on the desktop. Don't know what I need to modify, and where, for that. I'm saying this because I don't know how MSE updates, but if it uses MSI files to do that, that might explain things. The SYSTEM group remains added, but every time I take away the read-only attribute, click OK and check the settings again, read-only is still active... That's all I know. Screenshot, all those updates were manual:

    Read the article

  • sudo or acl or setuid/setgid ?

    - by Xavier Maillard
    Hi, for a reason I do not really understand, everyone wants sudo for anything and everything. At work we even have as many sudo entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is self-defeating here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know what the best practices are before starting. Our servers have %admin, %production, %dba, %users - i.e. many groups and many users. Each service (mysql, apache, ...) has its own way to install privileges, but members of the %production group must be able to consult configuration files or even log files. There is still the solution of adding them to the right groups (mysql, ...) and setting the right permissions. But I do not want to usermod all users, and I do not want to modify standard permissions since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution. What do you think about this? Taking the mysql example, that would look like this: setfacl d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permissions system? Thank you
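
    For comparison, a sketch of the same idea with setfacl's usual flags (-m modifies the access ACL, -d -m the default ACL inherited by new files, and a capital X grants execute only on directories); the group name matches the example above:

        # let %production read logs and configuration, both now and for newly created files
        setfacl -R -m g:production:rX /var/log/mysql /etc/mysql
        setfacl -R -d -m g:production:rX /var/log/mysql /etc/mysql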

    Read the article

  • OS X Keeps prompting me for SSH private key passphrase (OS X 10.6.8)

    - by Danny Englander
    I have a private key to SSH into my server and the connection works. In my SSH config file I have: Host myhost HostName xxx.xxx.xxx.xx GlobalKnownHostsFile ~/.ssh/known_hosts port 22 User myuser IdentityFile ~/.ssh/mykey_dsa IdentitiesOnly yes ...and then I type ssh myhost. Every time I connect, I get the Mac OS X keychain prompt and I tell OS X to remember the passphrase, but then when I disconnect from SSH and re-connect, I am prompted to add the passphrase to the keychain again. This is only a recent problem, so I suspect an issue with Keychain? To be clear, I can 're-add' it to the keychain every time and connect, but this defeats the purpose. The permissions on my dsa key are set at 600, or -rw-------@. I tried repairing disk permissions but that did no good. My Google-fu is also failing me; nothing of use came up. So I am not sure if this is an OS X / keychain issue or an SSH issue. Update: When I try ssh -vvv myhost, I think it reveals the issue: debug1: Trying private key: /Users/danny/.ssh/mykey_dsa debug1: PEM_read_PrivateKey failed debug1: read PEM private key done: type <unknown> debug3: Not a RSA1 key file /Users/danny/.ssh/mykey_dsa. debug1: read PEM private key done: type DSA Identity added: /Users/danny/.ssh/mykey_dsa (/Users/danny/.ssh/mykey_dsa) debug1: read PEM private key done: type DSA debug3: sign_and_send_pubkey debug2: we sent a publickey packet, wait for reply debug1: Authentication succeeded (publickey). ... and after that I get connected. I think the crux of the matter is: PEM_read_PrivateKey failed
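
    A small sketch of the usual workaround on OS X, using Apple's patched ssh-add to push the passphrase into the keychain directly (the key path matches the config above):

        # store the passphrase for this key in the OS X keychain
        ssh-add -K ~/.ssh/mykey_dsa

        # confirm the key is loaded into the agent
        ssh-add -l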

    Read the article

  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + nginx setup and I plan to offer free non-commercial hosting (or in fact on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with: server { ... passenger_base_uri /app1; passenger_base_uri /app2; passenger_base_uri /app3; } Now this is not inherently bad as, in theory, I could allow a user to run just one app on his webspace, but even in this case I need to create a new server entry in nginx, e.g. user.domain.com. As this will mainly be used to deploy apps, the behavior I am looking for is the possibility to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger. As such I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx and lighttpd + FastCGI? Thanks in advance. Scenario details: on nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server. Desired behavior with the new setup: a user can add a new folder with a new app and it would run on lighttpd + FastCGI without any extra configuration of the web server.

    Read the article
