Search Results

Search found 16899 results on 676 pages for 'local'.

  • Stop duplicate ICMP echo replies when bridging to a dummy interface?

    - by mbrownnyc
    I recently configured a bridge br0 with members eth0 (a real interface) and dummy0 (a dummy.ko interface). When I ping this machine, I receive duplicate replies:

        # ping SERVERA
        PING SERVERA.domain.local (192.168.100.115) 56(84) bytes of data.
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=113 ms
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=114 ms (DUP!)
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms (DUP!)

    Using tcpdump on SERVERA, I was able to see ICMP echo replies being sent from both eth0 and br0 itself, as follows (oddly, two echo request packets arrive "from" my Windows box myhost):

        23:19:05.324192 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
        23:19:05.324212 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324217 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
        23:19:05.324221 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324264 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324272 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40

    It's worth noting that hosts on the same physical switch do not see duplicate ICMP echo replies, while a host on the same VLAN on another switch does. I've read that this could be due to the ARP table of a switch, but I can't find any information directly related to bridges, only bonds. I have a feeling the problem lies in the Linux stack, not the switch, but I am open to any suggestions. The system is running CentOS 6 (EL6), kernel 2.6.32-71.29.1.el6.i686. How do I stop ICMP echo replies from being sent in duplicate when dealing with a bridge interface/bridged interfaces? Thanks, Matt

    [edit] Quick note: it was recommended in #linux to set arp_ignore to 1 and arp_announce to 2 for the dummy interface:

        [08:57] <lkeijser> mbrownnyc: what happens if you set arp_ignore to 1 for the dummy interface?
        [08:59] <lkeijser> also set arp_announce to 2 for that interface
        [09:24] <mbrownnyc> lkeijser: I set arp_announce to 2, arp_ignore to 2 in /etc/sysctl.conf and rebooted the machine... verifying that the bits are set after boot... the problem is still present

    I did this and came up empty: same dup problem. I will be moving away from including the dummy interface in the bridge, in light of this exchange in #Netfilter:

        [09:31] <mbrownnyc> Hello all... I'm wondering, is it correct that even with an interface in PROMISC that the kernel will drop /some/ packets before they reach applications?
        [09:31] <whaffle> What would make you think so?
        [09:32] <mbrownnyc> I ask because I am receiving ICMP echo replies after configuring a bridge with a dummy interface in order for ipt_netflow to see all packets, as reported in its documentation: http://ipt-netflow.git.sourceforge.net/git/gitweb.cgi?p=ipt-netflow/ipt-netflow;a=blob;f=README.promisc
        [09:32] <mbrownnyc> but I do not know if PROMISC will do the same job
        [09:33] <whaffle> The following conditions need to be met: PROMISC is enabled (bridges and applications like tcpdump will do this automatically, otherwise they won't function).
        [09:34] <whaffle> If an interface is part of a bridge, then all packets that enter the bridge should already be visible in the raw table.
        [09:35] <mbrownnyc> thanks whaffle. PROMISC must be set manually for ipt_netflow to function, but
        [09:36] <whaffle> promisc does not need to be set manually, because the bridge will do it for you. When you do not have a bridge, you can easily create one, thereby rendering any kernel patches moot.
        [09:36] <whaffle> It is perfectly valid to have a "half-bridge" with only a single interface in it.
        [09:36] <mbrownnyc> whaffle: I am unfamiliar with the raw table. Does this mean that PROMISC allows the raw table to be populated with packets the same as if the interface was part of a bridge?
        [09:37] <whaffle> Promisc mode will cause packets with a dst MAC address that does not equal the interface's MAC address to be delivered from the NIC into the kernel nevertheless.
        [09:37] <mbrownnyc> whaffle: I suppose I mean to clearly ask: what benefit would creating a bridge have over setting an interface PROMISC? From your last answer I feel that the answer to my question is "none," is this correct?
        [09:39] <whaffle> Furthermore, the Linux kernel itself has a check for packets with a non-local MAC address, so that packets that will not enter a bridge will be discarded as well, even in the face of PROMISC.
        [09:46] <mbrownnyc> whaffle: so this last bit of information is quite clearly why I would need and want a bridge in my situation. The ICMP echo reply duplicate issue is likely out of the realm of this channel, but I sincerely appreciate the info on the kernel's inner workings.
        [09:52] <whaffle> mbrownnyc: either the kernel patch, or a bridge with an interface. Since the latter is quicker, yes

    [edit2] After removing the bridge and the dummy kernel module, I had only a single interface left, chilling out, lonely. I still received duplicate ICMP echo replies... in fact, I received a random number of them: http://pastebin.com/2LNs0GM8 The same thing doesn't happen on a few other hosts on the same switch, so it has to do with the Linux box itself. I'll likely end up rebuilding it next week. Then... you know... this same thing will occur again.

    [edit3] Guess what? I rebuilt the box, and I'm still receiving duplicate ICMP echo replies. It must be the network infrastructure, although the ARP tables do not contain multiple entries.

    [edit4] How ridiculous. The machine was a network probe, so I was mirroring an uplink port (ingress and egress) to the node that was the NIC. So the flow must have gone like this: an ICMP echo request comes in through the mirrored uplink port; the real ICMP echo request is received by the NIC; the mirrored ICMP echo request is also received by the NIC; an ICMP echo reply is sent for both. I'm ashamed of myself, but now I know. It was suggested on #networking to either isolate the mirrored traffic to an interface that does not have IP enabled, or tag the mirrored packets with dot1q.
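
    The closing suggestion from #networking can be sketched as follows: a minimal, hypothetical example (the interface name eth1 is an assumption) of dedicating a capture interface with no IP address bound, so the stack never answers anything arriving on the mirror port:

        # sketch: dedicate eth1 to mirrored traffic, with no IP stack on it
        ip addr flush dev eth1           # remove any configured addresses
        ip link set dev eth1 arp off     # don't answer ARP on the mirror port
        ip link set dev eth1 promisc on  # receive frames for any MAC
        ip link set dev eth1 up

    With no address bound and ARP off, the box can still capture on eth1 (with tcpdump, or ipt_netflow behind a half-bridge as discussed above), but it will not generate echo replies for mirrored requests.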

  • Possible to host CentOS netinstall files on a local HTTP/FTP server?

    - by garlicman
    I'm running XenServer on a Dell R610 and am running into a catch-22. During install from DVD, CentOS can't find the DVD package catalogue. It's a reported error for some: XenServer + CentOS 6 + DVD install in some hardware configurations = failed install. Yes, I checked the MD5 and let the disc test pass. In every reported case, the netinstall was the solution. The issue is that my net access is required to go through a web proxy that prompts before you can download a file, which naturally breaks any download automation. I've been waiting on our IT to put in an exception rule to allow my lab to bypass the prompt, but it's been over 3 weeks now and they don't seem responsive (I've been working on this a day or two a week). I want to try to host the netinstall files locally in my Xen network. Right now I only have a bunch of Windows-based VMs; CentOS won't install, so I don't have any Linux tools. I had tried simply hosting all the DVD contents off one of the Windows servers using Mongoose (I didn't want to set up IIS). I copied them to a hosted sub-directory similar to all the mirrors out there (e.g. http:///centos/6.2/os/i386/) with no auth or anything, and in the netinstall I correctly pointed to it. I now realize just copying the DVD files over won't work: the repodata will point to a local device, not the site I'm hosting (e.g. the DVD repodata includes XML that points to where the packages are), and clearly I'm hosting them over HTTP, not from a DVD. Is there an easy way to sort this out? I'm just trying to install CentOS 6 on Xen. If there's a turnkey downloadable Xen image with CentOS 6.2 on it, or a downloadable repo image, I'll take that too! Thank you in advance!
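
    If a Linux box (or VM) ever becomes available, the usual way to make a copied DVD tree servable over HTTP is to regenerate the repository metadata with createrepo. A minimal sketch, with hypothetical paths:

        # copy the DVD contents into the web root, then rebuild repodata
        cp -r /mnt/dvd/. /var/www/centos/6.2/os/i386/
        createrepo /var/www/centos/6.2/os/i386/

    Alternatively, pointing the netinstall at an official mirror's os/ directory (a path ending in /centos/6.2/os/i386/) avoids hosting anything at all, though that runs into the proxy prompt again.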

  • How to make a local Apache server public/visible? [closed]

    - by George
    Hello. I am running an Apache2 server on Fedora 13. I'd like to make it publicly accessible (visible). For example, I'd like it so that when somebody types http://my.ip.numbers/ they would see what I have in my document root folder, just for a presentation of course work at university. Permissions are set to 755. The user owning the document root is apache. SELinux is temporarily disabled. But port 80 is closed. I tried to open it by adding an entry to iptables and restarting them; no change. I guess I am missing something big here. Help would be greatly appreciated.
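
    For reference, opening port 80 in iptables on a Fedora-era system usually looks like the sketch below (service names and rule position are assumptions; the rule must sit above any REJECT rule to take effect):

        # insert an ACCEPT rule for HTTP at the top of the INPUT chain
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        # persist it and reload
        service iptables save
        service iptables restart

    If the rule was appended with -A instead of inserted with -I, it may sit below the catch-all REJECT that Fedora ships by default, which would explain why adding the entry appeared to change nothing.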

  • Apache using mod_auth_kerb always asks for the password twice

    - by DrStalker
    (Debian Squeeze) I'm trying to set up Apache to use Kerberos authentication to allow AD users to log in. It is working, but it prompts the user twice for a username and password, and the first prompt is ignored no matter what is entered. Only the second prompt includes the AuthName string from the config (i.e. the first window is a generic username/password one; the second includes the title "Kerberos Login"). I'm not worried about integrated Windows authentication working at this stage; I just want users to be able to log in with their AD account so we don't need to set up a second repository of user accounts. How do I fix this to eliminate that first useless prompt? The directives in the apache2.conf file:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms ONEVUE.COM.AU.LOCAL
            Krb5KeyTab /etc/krb5.keytab
            KrbServiceName HTTP/[email protected].LOCAL
            require valid-user
        </Directory>

    krb5.conf:

        [libdefaults]
            default_realm = ONEVUE.COM.AU.LOCAL
        [realms]
            ONEVUE.COM.AU.LOCAL = {
                kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                master_kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                admin_server = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                default_domain = ONEVUE.COM.AU.LOCAL
            }
        [login]
            krb4_convert = true
            krb4_get_tickets = false

    The access log when accessing the secured directory (note the two separate 401s):

        192.168.10.115 - - [24/Aug/2012:15:52:01 +1000] "GET /kerberos/ HTTP/1.1" 401 710 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - - [24/Aug/2012:15:52:06 +1000] "GET /kerberos/ HTTP/1.1" 401 680 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - myaccount.lastname@MYDOMAIN.LOCAL [24/Aug/2012:15:52:10 +1000] "GET /kerberos/ HTTP/1.1" 200 375 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"

    And one line in error.log:

        [Fri Aug 24 15:52:06 2012] [error] [client 192.168.0.115] gss_accept_sec_context(2) failed: An unsupported mechanism was requested (, Unknown error)
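
    The gss_accept_sec_context "unsupported mechanism" error suggests the first prompt comes from a failed Negotiate round trip (the browser answers the Negotiate challenge with a token mod_auth_kerb cannot accept, such as NTLM). If Negotiate/SPNEGO single sign-on is genuinely not needed, one hedged workaround is to advertise only Basic password authentication; a sketch:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate Off
            KrbMethodK5Passwd On
            KrbAuthRealms ONEVUE.COM.AU.LOCAL
            Krb5KeyTab /etc/krb5.keytab
            require valid-user
        </Directory>

    With KrbMethodNegotiate Off, only a single Basic challenge is sent, so the generic first dialog should disappear; the trade-off is losing any chance of silent login from domain-joined clients.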

  • Github Workflow: Pushing small fix branches to remote, or keeping them local?

    - by Isaac Hodes
    In Scott Chacon's workflow (explained, e.g., in this SO answer), with essentially two silos (development and master), if I have a small bug to fix (e.g. one that can be fixed with a few characters), which is the optimal way of doing it: a) branch off of development into a branch called e.g. fix_123, push this branch to origin as I work on it, and when it's done, code-reviewed, whatever, merge it into development and push development to origin; or b) the same as above, but without pushing fix_123 to origin?
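
    For concreteness, option (a) amounts to something like this sketch (branch names as in the question; the remote name origin is an assumption):

        git checkout development
        git checkout -b fix_123            # branch off development
        # ...commit the fix...
        git push -u origin fix_123         # publish the branch for review
        git checkout development
        git merge --no-ff fix_123          # merge once reviewed
        git push origin development
        git branch -d fix_123              # optionally delete the local branch
        git push origin --delete fix_123   # and the remote one

    Option (b) is the same with the two branch-publishing pushes omitted.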

  • Windows Server 2008 R2 DFS Root Namespace Required?

    - by caleban
    I would prefer to set up our DFS like this:

        \\domain.local\users
        \\domain.local\customers
        \\domain.local\support

    Is this a problem? Do I need to instead set all of the above folders as targets under a single root, such as:

        \\domain.local\files\users
        \\domain.local\files\customers
        \\domain.local\files\support

    Other than the path being shorter in the first example, which is what I would prefer, is there a difference in functionality in Windows DFS between the two layouts? Thanks in advance.

  • Apache virtual hosts - only one works

    - by user1811829
    I need a couple of virtual hosts on my local dev machine. Unfortunately it needs to be Windows. httpd-vhost.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs/manadom.local/public"
            ServerName manadom.local
            ErrorLog "logs/manadom.local-error.log"
            CustomLog "logs/manadom.local-access.log" combined
        </VirtualHost>
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs/galeriabiznesu"
            ServerName gb.loc
            ErrorLog "logs/gb.loc-error.log"
            CustomLog "logs/gb.loc-access.log" combined
        </VirtualHost>

    And the hosts file:

        127.0.0.1 localhost
        127.0.0.1 manadom.local
        127.0.0.1 gb.loc

    The problem: localhost points to C:/xampp/htdocs/manadom.local/public, manadom.local points to C:/xampp/htdocs/manadom.local/public too, and gb.loc also points to C:/xampp/htdocs/manadom.local/public. I have no idea what's wrong. Please help me; I'm not an admin, but I've read a lot about this and I don't know what I could possibly be doing wrong.
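
    When every name lands in a single DocumentRoot like this, the usual culprit in Apache 2.2 (which XAMPP shipped at the time) is that name-based virtual hosting was never switched on, so Apache ignores ServerName and serves everything from one vhost. A sketch of the missing directive:

        # must appear once, before the <VirtualHost *:80> blocks
        NameVirtualHost *:80

    (Apache 2.4 later removed this directive and enables name-based matching automatically.) It is also worth checking that httpd-vhost.conf is actually Include'd from httpd.conf; some XAMPP versions ship that Include line commented out.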

  • How to run a local and external website on the same computer with 2 NICs, 2 routers and 3 separate networks?

    - by CandN
    Hello, and hopefully I can get some answers to my question, though I think I'm making it more complicated for myself than it has to be. My business is a used auto dealership, and I'm in the process of connecting it to the world: via ethernet from the business server (running Xubuntu) to the ISP's ethernet router/modem, so that I can host our own website (no more than 5-10 people probably visiting at any time, mainly paying their bill), as well as setting up a web-based internal intranet site, via a DD-WRT router on the 2nd NIC of the business server, that'll be accessed over WiFi from employees' personal devices. On top of this, I'm trying to offer free WiFi to customers that is completely separate from the two networks mentioned above. Quick rundown:

    1. Web site for customers to access. I'm going to use no-ip.org for DNS for the moment being, so I'll have a site that customers can access from anywhere in the world at "mybiz.no-ip.org". This will be forwarded to NIC #1 on the server, possibly at an address like "108.69..", as it is being provided an IP from the ISP's modem/router, which is from Time Warner and allows NO! configuration options.

    2. Web site for employees to access. I'm trying not to use the server too much as a desktop, only for critical situations, so having a backend that's separate from the front-facing website is critical. This will be the DD-WRT router hardwired into NIC #2 on the server. This WiFi will be password accessible.

    3. Public WiFi for customers. The DD-WRT can separate networks if I'm correct; I just can't seem to understand how to separate the two and still have internet access on both. I've done it before, but the "public" WiFi (with no password set to connect) kept dropping the connection like a problem was happening that I couldn't figure out.

    So if I could do a little drawing, this is how it would/should possibly look:

        ISP -- [Sends Public Facing IP of 108.69.*.1/8] -- ISP Modem Router
        ISP Modem Router (Ethernet Only) -- [Gives Private IP 108.69.*.2] -- Server NIC #1
        Server NIC #1 -- [Gives Private IP 108.69.*.3] -- DD-WRT Router
        DD-WRT Router -- [DHCP Enabled Giving IP's 172.16.0.0/16] -- Employees Network
                      |
                      --------- [DHCP Enabled Giving IP's 192.168.1.0/24] -- Public WIFI

    Hope it's not too confusing, but if anyone could give me some good direct tutorials on how to accomplish this, or if YOU know, then it'll be a lot of help. Thanks to all in advance. Need anything else to be explained? Don't hesitate to ask!

    * Using the LAMP stack with Webmin/Virtualmin
    - The customer site is located in /var/www2/
    - The private employees' site is located in /var/www/
    - Using no-ip.org's dynamic client updater
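
    A minimal sketch of the two-NIC side on the Xubuntu server, using /etc/network/interfaces (the 172.16.0.0/16 addressing is taken from the drawing; interface names and the .1 address are assumptions):

        # NIC #1: toward the ISP modem/router (gets the 108.69.x address via DHCP)
        auto eth0
        iface eth0 inet dhcp

        # NIC #2: toward the DD-WRT router serving the employee network
        auto eth1
        iface eth1 inet static
            address 172.16.0.1
            netmask 255.255.0.0

    With this in place, the web server can listen on both interfaces, while the DD-WRT handles DHCP for employees and a separate guest (virtual) AP on 192.168.1.0/24 for the public WiFi.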

  • SVN Authentication with LDAP and Active Directory

    - by Alex Holsgrove
    I am having a few problems getting SVN authentication to work with LDAP / Active Directory. My SVN installation works fine, but after enabling LDAP in my Apache vhost, I just can't get my users to authenticate. I can use a selection of LDAP browsers to successfully connect to Active Directory, but just can't seem to get this to work. SVN is set up in /var/local/svn. The server is svn.domain.local. For testing, my repository is /var/local/svn/test. My vhost file is as follows:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerAlias svn.domain.local
            ServerName svn.domain.local
            DocumentRoot /var/www/svn/
            <Location /test>
                DAV svn
                #SVNListParentPath On
                SVNPath /var/local/svn/test
                AuthzSVNAccessFile /var/local/svn/svnaccess
                AuthzLDAPAuthoritative off
                AuthType Basic
                AuthName "SVN Server"
                AuthBasicProvider ldap
                AuthLDAPBindDN "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local"
                AuthLDAPBindPassword "admin password"
                AuthLDAPURL "ldap://192.168.1.6:389/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local?sAMAccountName?sub?(objectClass=*)"
                Require valid-user
            </Location>
            CustomLog /var/log/apache2/svn/access.log combined
            ErrorLog /var/log/apache2/svn/error.log
        </VirtualHost>

    In my error.log, I don't seem to get any bind errors (should I be looking elsewhere?), just the following:

        [Thu Jun 21 09:51:38 2012] [error] [client 192.168.1.142] user alex: authentication failure for "/test/": Password Mismatch, referer: http://svn.domain.local/test/

    At the end of AuthLDAPURL, I have seen people using TLS and NONE, but neither seems to help in my case. I have the LDAP modules loaded and have checked as much as I know, so any help would be most welcome. Thanks
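
    "Password Mismatch" from mod_authnz_ldap usually means a user entry was located but the bind with the supplied password failed, or the search matched in an unexpected place. One hedged thing to try is widening the search base so accounts outside OU=SBSUsers are found too, e.g.:

        AuthLDAPURL "ldap://192.168.1.6:389/DC=domain,DC=local?sAMAccountName?sub?(objectClass=user)" NONE

    (searching from the domain root with sub scope, filtering to user objects; NONE explicitly disables TLS on port 389). Verifying the same credentials with a manual bind, for example ldapsearch -x -H ldap://192.168.1.6 -D "user@domain.local" -W, helps separate a bad DN/filter from a genuinely failing bind.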

  • Does using structured data semantic LocalBusiness schema markup work for local EMD URLs?

    - by ElHaix
    Based on what I have read about Google's recent Panda and Penguin updates, I'm getting the impression that using semantic markup may help improve SEO results. On an EMD (exact match domain) site that may have been hit, we list location-based products. We are now going to be adding itemtype="http://schema.org/Product" to each product, with relevant details. However, a product may be available in Los Angeles and also appear on a Seattle results page. We could add a LocalBusiness item type on each geo page to define the geo location for that page. The definition states: "A particular physical business or branch of an organization. Examples of LocalBusiness include a restaurant, a particular branch of a restaurant chain, a branch of a bank, a medical practice, a club, a bowling alley, etc." We could also use the location property, which would simply include the city/state details. I realize that this looks like it is meant for a physical location; however, could this be done without seeming black-hat?
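
    For reference, the Product part alone (which is uncontroversial) looks something like this microdata sketch; the names and values are made up:

        <div itemscope itemtype="http://schema.org/Product">
          <span itemprop="name">Example Widget</span>
          <span itemprop="description">Available in the Los Angeles area</span>
        </div>

    The open question is whether additionally wrapping the page in itemscope itemtype="http://schema.org/LocalBusiness" with a location naming the city/state is legitimate when there is no storefront there; the schema.org definition quoted above suggests it is intended for an actual physical branch.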

  • How can I get Haproxy to not log local requests?

    - by coneybeare
    I am trying to clean out some of the log clutter from my machines, starting by removing requests that are generated by the servers themselves. I have cache warmers running around the clock and I don't want these polluting the logs. I was able to get Apache to stop logging local requests by adding a dontlog for the local IP:

        SetEnvIf Remote_Addr "RE\.DA\.CT\.ED" dontlog
        CustomLog "|logger -p local3.info -t http" combined env=!dontlog

    Now I am looking for something similar to put in a configuration for the HAProxy log. How can I prevent 127.0.0.1 requests from writing to the HAProxy log?

    UPDATE 2/15/11: I use the excellent Loggly service to pull out logs in the cloud, but I am seeing tons of logs like this:

        2011 Feb 15 06:09:42.000 ip-10-251-194-96 http: RE.DA.CT.ED - - [15/Feb/2011:06:09:42 -0500] "HEAD /search/Nevad/predictive/txt HTTP/1.0" 200 - "-" "Wget/1.10.2 (Red Hat modified)"
        2011 Feb 15 06:09:42.000 127.0.0.1 haproxy[10390]: 127.0.0.1:58408 [15/Feb/2011:06:09:42] www i-5dd7a331.0 0/0/0/8/8 200 210 - - --NI 0/0/0 0/0 "HEAD /search/Nevad/predictive/txt HTTP/1.1"

    and I want them gone. This question focuses on how to prevent that HAProxy log line from being written to the server-side log in the first place.
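
    If HAProxy itself cannot be taught to skip these requests, the filtering can be done where the log line is received. A hedged rsyslog sketch (the file name is an assumption; HAProxy must be logging via the local syslog socket):

        # /etc/rsyslog.d/haproxy.conf (sketch)
        # discard haproxy messages whose body shows a 127.0.0.1 client
        :msg, contains, "127.0.0.1:" ~

    The legacy ~ action discards the message (newer rsyslog spells it "stop"). This keeps the cache-warmer noise out of the stored log without touching HAProxy's own config, at the cost of filtering by string match rather than by real source address.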

  • How come I cannot install plugins on my local WordPress install?

    - by classer
    Hello, I got WordPress up and running fine on Ubuntu 10.04 by using this source, except that when I try to update and install themes/plugins I get the following error message on the wp-admin page:

        Installing Plugin: WordPress.com Stats 1.8.1
        Downloading install package from http://downloads.wordpress.org/plugin/stats.1.8.1.zip…
        Unpacking the package…
        Could not create directory. /var/www/wordpress/wp-content/upgrade/stats.tmp/stats
        Actions: Return to Plugin Installer

    At first I thought I had to set up an FTP account, but after searching more I found information saying that I need to change the permissions of the wp-content folder, which is located at /var/www/wordpress/wp-content. I tried changing it with:

        sudo chmod -R 777 wp-content/

    but when I tried installing a plugin I got the same error message. I also tried 755 as a permission but still got the same thing. I settled on 755 because I have read it is more secure. How can I solve this problem safely and securely?
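
    Permission bits alone don't help if the files are owned by the wrong user: with 755, only the owner can write, and the owner needs to be the account Apache runs as. A sketch for Ubuntu, where Apache typically runs as www-data:

        # give the web server user ownership of wp-content, keep modest modes
        sudo chown -R www-data:www-data /var/www/wordpress/wp-content
        sudo find /var/www/wordpress/wp-content -type d -exec chmod 755 {} \;
        sudo find /var/www/wordpress/wp-content -type f -exec chmod 644 {} \;

    With ownership correct, WordPress can write directly and stops falling back to the FTP prompt; forcing that behavior explicitly with define('FS_METHOD', 'direct'); in wp-config.php is a common companion step.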

  • Service Layer - how broad should it be, and should it also be used in the local application?

    - by BornToCode
    Background: I need to build a main application with some operations (CRUD and more) in WinForms, and I need to make another application which will re-use some of the functions of the main application, in WebForms. I understood that using a service layer is the best approach here. If I understood correctly, the service should be calling the functions on the BL layer (correct me if I'm wrong). The dilemma: In my main WinForms UI, should I call the functions from the BL or from the service? (Please explain why.) Should I create a service for every single function on the BL, even if I need some of the functions only in one UI? For example, should I create services for all the CRUD operations, even though I need to re-use only the update operation in the WebForms app? YOUR HELP IS MUCH APPRECIATED

  • How can I manage changes between a local config file and a remote config file in a mobile application?

    - by hib
    I have an application with a configuration file that is stored in the application bundle. This config file stores the names of images on a remote server. Whenever the application is started, I download the configuration file from the server and see if there are any changes or updates. If there are changes, I iterate over the array of configuration settings and download the changed images to the user's iPhone. I think that I will first list all of the name changes in an array, and after that start loading the changed images. However, I'm wondering if there is a better approach to solving this problem.

  • How to use update-java-alternatives with a local installation of the JDK?

    - by user827992
    I have Ubuntu 12.04 amd64 installed on my machine. On previous versions of Ubuntu this was dead easy; now there is this command update-java-alternatives, with a really bad man page. I just have my JDK unpacked on a mounted partition like /media/mydisk/jdk. How can I force the use of that JDK instead of the one that comes from the Ubuntu repository? What is the logic behind update-java-alternatives?
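
    update-java-alternatives is a convenience wrapper that flips a whole set of alternatives (java, javac, javadoc, ...) at once, driven by a .jinfo file that packaged JDKs install under /usr/lib/jvm; a plain unpacked JDK has no .jinfo, so the wrapper can't see it. The underlying mechanism still works directly. A sketch registering just the two main tools (the priority 2000 is an arbitrary assumption):

        sudo update-alternatives --install /usr/bin/java java /media/mydisk/jdk/bin/java 2000
        sudo update-alternatives --install /usr/bin/javac javac /media/mydisk/jdk/bin/javac 2000
        # then pick them interactively if another JDK outranks them
        sudo update-alternatives --config java
        sudo update-alternatives --config javac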

  • Secure, efficient, version-preserving, filename-hiding backup implemented in this way?

    - by barrycarter
    I tried writing a "perfect" backup program (below), but ran into problems (also below). Is there an efficient/working version of this?

    Assumptions: you're backing up from 'local', which you own and which has limited disk space, to 'remote', which has infinite disk space and belongs to someone else, so you need encryption. Network bandwidth is finite. 'local' keeps a db of backed-up files with this data for each file:

    - filename, including full path
    - file's last modified time (mtime)
    - sha1sum of the file's unencrypted contents
    - sha1sum of the file's encrypted contents

    Given a list of files to back up (some perhaps already backed up), the program runs 'find' and gets the full path/mtime for each file (this is fairly efficient; conversely, computing the sha1sum of each file would NOT be efficient). The program discards files whose filename and mtime are in the 'local' db. The program then computes the sha1sum of the (unencrypted) contents of each remaining file. If the sha1sum matches one in the 'local' db, we create a special entry in the 'local' db that points this file/mtime to the file/mtime of the existing entry. Effectively, we're saying "we have a backup of this file's contents, but under another filename, so no need to back it up again". For each remaining file, we encrypt the file, take the sha1sum of the encrypted file's contents, and rsync the file to a path derived from its sha1sum. Example: if the encrypted file's sha1sum was da39a3ee5e6b4b0d3255bfef95601890afd80709, we'd rsync it to /some/path/da/39/a3/da39a3ee5e6b4b0d3255bfef95601890afd80709 on 'remote'. Once the step above succeeds, we add the file to the 'local' db. Note that we efficiently avoid computing sha1sums and encrypting unless absolutely necessary. Note: I don't specify the encryption method; this would be the user's choice.

    The problems:

    - We must encrypt and back up the 'local' db regularly. However, the 'local' db grows quickly and rsync'ing encrypted files is inefficient, since a small change in the 'local' db means a big change in the encrypted version of the 'local' db.
    - We create a file on 'remote' for each file on 'local', which is ugly and excessive.
    - We query the 'local' db frequently. Even with indexes, these queries are slow, since we're often making one query for each file. It would be nice to speed this up by batching queries or something.
    - Probably other problems that I've now forgotten.
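
    The per-file "encrypt, hash, content-address" step can be sketched in shell (gpg as the encryption method, and the REMOTE host and recipient names, are assumptions, since the scheme leaves encryption to the user):

        #!/bin/sh
        # sketch: back up one file under its encrypted content hash
        f="$1"
        enc=$(mktemp)
        gpg --batch --yes --encrypt --recipient backup@example.invalid --output "$enc" "$f"
        h=$(sha1sum "$enc" | cut -c1-40)
        # shard into da/39/a3/<full-hash> as in the example above
        # (assumes the shard directories already exist on the remote)
        rsync "$enc" "REMOTE:/backups/$(echo "$h" | cut -c1-2)/$(echo "$h" | cut -c3-4)/$(echo "$h" | cut -c5-6)/$h"
        rm -f "$enc"

    One wrinkle worth noting: gpg encryption is not deterministic (it uses a random session key), so the encrypted sha1sum changes on every run. The db therefore has to record the hash from the original upload rather than recompute it later, or a deterministic encryption scheme must be chosen.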

  • How do I configure 2 LAN cards in a Windows 7/8 PC, one to connect to the Internet and the other to the local network?

    - by Maharshi Raval
    I am about to install a dedicated VOIP server in our office: a 3CX PBX system on a Windows 7/8 machine. The environment currently is Windows SBS 2011 with 8 client machines. I want to use a dedicated broadband connection for the PBX (3CX) box, but the box also needs to be accessible on the local network, as we will be using IP phones and software IP phones. How do I configure two network cards on the PBX box so that one is always used to connect to our SIP host over the Internet, and the other is connected to the local network, accessible from the other client PCs that connect to the PBX system? It must be noted that the Windows SBS 2011 currently acts as the Primary Domain Controller and gateway for all the client machines. I cannot use a load balancer, as it would conflict and cause issues within the current setup of our SBS 2011, which is also our Exchange server. Any input is much appreciated. Thanks in advance
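
    The usual pattern for a dual-homed Windows box is: give only the Internet-facing NIC a default gateway, and reach the internal subnets via persistent static routes on the LAN NIC. A sketch from an elevated command prompt (all addresses are assumptions standing in for the real LAN addressing):

        rem LAN NIC: static IP and mask, with NO default gateway in its properties
        rem then route the internal network via the LAN-side router:
        route -p add 192.168.0.0 mask 255.255.255.0 192.168.0.1

    Two default gateways on one Windows machine cause exactly the kind of flaky routing this setup needs to avoid, which is why the LAN side gets routes instead of a gateway.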

  • What is wrong with those crontabs?

    - by Guillaume Boudreau
    I want my projectors to power on before the mall opens, and power off when the mall closes. So I created crontab entries (which I placed in /etc/cron.d/mall), but today (Thu Nov 22 18:58:29 EST 2012 is the current date on that server), the power-off.sh script got executed at 17:20 (see the syslog excerpt below). Being Thu, Nov 22, I would have expected the power-off.sh script to be executed at 21:20, per the 4th crontab line below. Why did power-off.sh execute at 17:20? What is wrong with my crontab entries? Content of /etc/cron.d/mall:

        40 9  22-30 Nov Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 22-30 Nov Sun     myuser /usr/local/projectors/power-on.sh
        20 18 22-30 Nov Mon-Wed myuser /usr/local/projectors/power-off.sh
        20 21 22-30 Nov Thu-Fri myuser /usr/local/projectors/power-off.sh
        20 17 22-30 Nov Sat-Sun myuser /usr/local/projectors/power-off.sh
        40 9  1-22  Dec Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 1-22  Dec Sun     myuser /usr/local/projectors/power-on.sh
        20 21 1-22  Dec Mon-Fri myuser /usr/local/projectors/power-off.sh
        20 17 1-22  Dec Sat-Sun myuser /usr/local/projectors/power-off.sh

    Syslog excerpt:

        $ grep power-off.sh /var/log/syslog
        Nov 22 17:20:01 lanner-ubu-c2d CRON[23007]: (myuser) CMD (/usr/local/projectors/power-off.sh)
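
    A likely explanation, worth checking against crontab(5): when both the day-of-month and day-of-week fields are restricted (neither is *), cron runs the job when EITHER field matches. So "20 17 22-30 Nov Sat-Sun" fires at 17:20 on every day from Nov 22 to 30 regardless of weekday, which matches the observed Thursday run. One hedged way around it is to restrict only one field and guard the other inside the command; a sketch:

        # fire on Nov 22-30, but only actually power off on Sat/Sun
        # (% must be escaped in crontab entries)
        20 17 22-30 Nov * myuser case "$(date +\%a)" in Sat|Sun) /usr/local/projectors/power-off.sh ;; esac

    The same OR behavior affects every line above that restricts both fields.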

  • OpenBSD init script for SSH VPN tunnel

    - by manthis
    I have a server hosting SSH tunnels, and OpenBSD 4.5 clients connecting to it. Things work just fine, but I need to automate the connection from the client to the server, so that if the client is accidentally rebooted, the connection is re-established unattended. It should be as straightforward as including the ssh connection in an init script, but I have miserably failed to do so by including it in /etc/rc.local, which is the file I usually do this sort of thing in. Right now I am using autossh to restart the connection if necessary, and the script that I put in /etc/rc.local follows:

        #!/bin/sh
        #
        # Example script to start up tunnel with autossh.
        #
        # This script will tunnel 2200 from the remote host
        # to 22 on the local host. On remote host do:
        # ssh -p 2200 localhost
        #
        # $Id: autossh.host,v 1.6 2004/01/24 05:53:09 harding Exp $

        ID=root
        HOST=example.com

        #AUTOSSH_POLL=600
        #AUTOSSH_PORT=20000
        #AUTOSSH_GATETIME=30
        #AUTOSSH_LOGFILE=$HOST.log
        #AUTOSSH_DEBUG=yes
        #AUTOSSH_PATH=/usr/local/bin/ssh
        export AUTOSSH_POLL AUTOSSH_LOGFILE AUTOSSH_DEBUG AUTOSSH_PATH AUTOSSH_GATETIME AUTOSSH_PORT

        autossh -2 -f -M 20000 ${ID}@${HOST}

    The script detaches just fine when run manually, so I call it from /etc/rc.local:

        echo -n 'starting local daemons:'
        if [ -x /usr/local/sbin/autossh.sh ]; then
                echo -n ' ssh tunnel'
                /usr/local/sbin/autossh.sh
        fi
        echo '.'

    I have also tried calling it from /etc/hostname.tun0, in case there are issues with /etc/rc.local not being called at the right time, when network connections are ready:

        inet 10.254.254.2 255.255.255.252 10.254.254.1
        !/usr/local/sbin/autossh.sh

    Your input is highly appreciated.
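
    One common reason a script like this works from a shell but not from rc.local is the environment: at boot it runs as root with no HOME and a minimal PATH, so ssh cannot find its keys or known_hosts, and autossh may not be on the PATH at all. A hedged variant of the rc.local stanza that pins both down (paths are assumptions):

        echo -n 'starting local daemons:'
        if [ -x /usr/local/sbin/autossh.sh ]; then
                echo -n ' ssh tunnel'
                HOME=/root PATH=/usr/local/bin:$PATH /usr/local/sbin/autossh.sh
        fi
        echo '.'

    Logging to a file (uncommenting AUTOSSH_LOGFILE, or setting AUTOSSH_DEBUG=yes) would confirm whether autossh starts at boot and why ssh exits if it does.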

  • How do you set up Postfix/Dovecot/MySQL to not look for local accounts?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here goes: I have an Ubuntu server set up using virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up on another one of my sub-domains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email is returned to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain, because I can ping/dig it while SSH'd into the server. I can also log into Postfix via telnet and send an email to an ex.example.com mailbox. I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the parent domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the full name (mail.example.com) and, if it doesn't find it, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
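
    The guess can be confirmed from the command line: if ex.example.com (or a parent/wildcard entry covering it) shows up in any of the "this domain is mine" settings, Postfix attempts local or virtual delivery instead of consulting MX records. A sketch of the check and the usual fix, assuming the stock parameter names:

        # see which address classes claim the domain
        postconf mydestination virtual_alias_domains virtual_mailbox_domains relay_domains
        # if ex.example.com or example.com appears where it shouldn't, remove it, e.g.:
        postconf -e 'mydestination = mail.example.com, localhost'
        postfix reload

    The exact bounce wording in /var/log/mail.log is also worth checking: if "unknown host" comes from a chrooted smtp client that cannot read its copy of resolv.conf, DNS would fail inside Postfix even though ping and dig work from the shell.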

  • $PATH is driving me nuts

    - by Chris4d
    OK, apologies if this is something dumb, but I'm running out of ideas.

    Goal: prepend /usr/local/bin to $PATH.
    Problem: $PATH won't do what I want or expect.

    How I got here: I want to start learning to program, so I'm getting comfortable messing around under the hood, but I don't have a lot of experience. I installed the fish shell (because it's friendly!) using Homebrew and set it as my default shell (under System Prefs > Users & Groups > Advanced). At some point, I ran brew doctor to see if my installs were all kosher, and it suggested I move /usr/local/bin to the front of $PATH so that I could use my installation of git rather than the system copy. Fine, but between path_helper and fish, something was happening to $PATH that was out of my control, and I could never get the paths arranged in the right way.

    Environment: OS X 10.8.2, upgraded from 10.7-ish, with Xcode and dev tools installed, plus X11, Homebrew, and fish.

    More info: I've set my user's default shell back to bash and tried a variety of shells through Terminal.app: bash, fish, sh. I moved /usr/local/bin to the top of /etc/paths, but it didn't change anything. I looked through the various config.fish files and commented out stuff that might mess with $PATH; that didn't help. I have the following files in /etc/paths.d/:

        ./10-homebrew  containing /usr/local/bin
        ./20-fish      containing /usr/local/Cellar/fish/1.23.1/bin
        ./40-XQuartz   containing /opt/X11/bin

    I added set -x to my profile, and when I start Terminal.app I get:

        Last login: Mon Oct 1 13:31:06 on ttys000
        + '[' -x /usr/libexec/path_helper ']'
        + eval '/usr/libexec/path_helper -s'
        ++ /usr/libexec/path_helper -s
        PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Cellar/fish/1.23.1/bin:/opt/X11/bin"; export PATH;
        + '[' /bin/bash '!=' no ']'
        + '[' -r /etc/bashrc ']'
        + . /etc/bashrc
        ++ '[' -z '\s-\v\$ ' ']'
        ++ PS1='\h:\W \u\$ '
        ++ shopt -s checkwinsize
        ++ '[' Apple_Terminal == Apple_Terminal ']'
        ++ '[' -z '' ']'
        ++ PROMPT_COMMAND='update_terminal_cwd; '
        ++ update_terminal_cwd
        ++ local 'SEARCH= '
        ++ local REPLACE=%20
        ++ local PWD_URL=file://Chriss-iMac.local/Users/c4
        ++ printf '\e]7;%s\a' file://Chriss-iMac.local/Users/c4
        Chriss-iMac:~ c4$

    So it looks like path_helper runs, but then running echo $PATH nets me /usr/bin:/bin:/usr/sbin:/sbin. So it looks like path_helper isn't even doing what it's supposed to anymore? I'm sure there is some well-defined behavior here that I don't understand, or I borked something while trying to fix it. Please help!
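
    The trace itself contains a strong clue. A stock OS X /etc/profile reads:

        if [ -x /usr/libexec/path_helper ]; then
                eval `/usr/libexec/path_helper -s`
        fi

    with backticks, so path_helper's output (the PATH="..."; export PATH; line) is captured and evaluated. In the trace above, that PATH=... line appears as plain output with no + prefix, which is consistent with the backticks having been lost: eval '/usr/libexec/path_helper -s' merely runs the program and discards its output, leaving $PATH untouched. If /etc/profile was edited during troubleshooting, restoring the backticks should make path_helper effective again; on the fish side, /usr/local/bin can then be prepended with set -gx PATH /usr/local/bin $PATH in config.fish.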

  • Why does HP Update on a remote system trigger RDP printing on the local system?

    - by lcbrevard
    This is obscure. When connected via RDP to another system that has HP Update installed on it, either directly running HP Update or having the notification pop up asking if you want to run HP Update causes the local system to try to print something to a peculiarly chosen local printer.

    Case 1: Desktop Win 7 Ultimate system RDP-connected to an HP laptop Win 7 Ultimate system. When HP Update runs on the laptop, a dialog for XPS Writer Save As... appears on the desktop system. Even if you put in a name, nothing gets generated and the dialog repeats. And repeats. Until you (a) close the RDP connection and (b) clean out the queued entries. If the HP Update pop-up asking to run the update appears while you are away from the desk, there can be dozens of queued requests for this bogus printing. NOTE: the XPS Writer is not selected as a default printer on either system.

    Case 2: A (different) HP laptop Win 7 Ultimate system RDP-connected to an XP Pro "brand X" desktop system, but with HP printer drivers installed. If the request to run HP Update pops up on the XP system, dozens of attempts to print, in this case to a Versa Check printer driver, are queued. Dismissing the HP request, closing RDP, and cleaning out the queue are required to stop this. NOTE: the Versa Check Writer is not selected as a default printer on either system.

    THE QUESTION: What the heck is going on here? Some kind of scripting or COM activity that is misdirected?
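
    Whatever HP Update is doing, it can only reach the local machine's printers through RDP printer redirection, which maps the local print queues into the remote session. A hedged way to test (and stop) this is to turn redirection off for the connection, either in mstsc's Local Resources tab or in the saved .rdp file:

        redirectprinters:i:0

    If the bogus queue entries stop with redirection disabled, the misbehaving component is enumerating and printing to every redirected queue in the session rather than only to a real default printer.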

  • Best strategy to discover a web service in a local network?

    - by Ucodia
    I am currently doing some research for a project. The setup is simple: I have a computer running a service on my home network, and any device connected to that same network should be able to discover the service automatically and use it. I have no specific technology requirement, whether on the server or the client side. The client knows the service definition. Other than that, I have no idea what strategy to use, what technology to look at, or whether I should go for a SOAP or an HTTP-based service. I think going with HTTP and a REST API is best for targeting all devices, but I am open to any suggestions. Thanks.
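
    The established answer to LAN discovery is zero-configuration networking: mDNS/DNS-SD (Bonjour/Avahi), which most phones, Macs, and Linux boxes speak natively, or alternatively SSDP/UPnP. As a sketch of the mDNS route on a Linux server (the service name, type, port, and TXT record are made-up values):

        # advertise an HTTP service on the LAN
        avahi-publish-service "My Home Service" _http._tcp 8080 path=/api
        # and from another machine, discover it
        avahi-browse --resolve _http._tcp

    A client that knows the service type can resolve the advertised host and port at runtime, which pairs naturally with the HTTP/REST approach: discovery finds the base URL, and the already-known service definition does the rest.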
