Search Results

Search found 17958 results on 719 pages for 'local delivery'.

Page 267/719

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail: anything tagged in Gmail will appear in a folder related to that tag, in the "All Mail" folder, and possibly in the "Inbox" and "Sent Mail" folders too. A mail with multiple tags could thus be stored more than four times in a local Thunderbird cache. This can make searching difficult, and it is obviously wasteful of disk space. The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates the extra copy in the inbox. Second, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate that functionality. In this way, most of the duplicates are removed, and only tagged mail is stored locally more than once. Ideally, however, I'd like only one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem - but that wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally and efficiently?

    Read the article

  • Run Python script at startup using upstart

    - by MarcusMaximus
    I'm trying to create an upstart script to run a python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped.
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec python /usr/local/scripts/script.py
        end script

    The test script I want it to run is currently a simple python script that runs without any issue from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "UpstartTest ", i, time.asctime(), " - Username: ", os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.
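
    Two details in the job above are worth a second look, offered as a sketch rather than a confirmed fix: Upstart only reads job files from /etc/init/*.conf (a file under /etc/init.d is ignored), and "expect fork" tells Upstart to wait for a fork that this Python process never performs, which breaks start/stop tracking. A minimal job under those assumptions:

        # /etc/init/pythonupstart.conf -- sketch, assumes Upstart 1.x on Ubuntu
        description "python script service"
        start on runlevel [2345]
        stop on runlevel [016]
        respawn
        # no 'expect fork': the script stays in the foreground
        exec /usr/bin/python2 /usr/local/scripts/script.py

    After moving the file, reload and start it:

        sudo initctl reload-configuration
        sudo start pythonupstart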

    Read the article

  • On Ubuntu get: "-bash: ./flume No such file or directory" BUT flume is there and executable. Same binary OK on RHEL

    - by lcbrevard
    This is already posted on Server Fault - and may be more appropriate there. Reworked a bit from the original posting. We have a product built on CentOS 4 32-bit Linux that runs unmodified on 32- and 64-bit CentOS/RHEL 4 and 5 and SLES 10. It also runs unmodified on SLES 9 64-bit. [SLES 9 32-bit requires a different libstdc++.] The name of the main binary executable is 'flume'. Yesterday we tried to put this on 64-bit Ubuntu 10 and, even though the file is there and the right size, we get: -bash: ./flume: No such file or directory. 'file flume' shows it to be a 32-bit ELF (can't remember the exact output, and the system is on an isolated network). If put into /usr/local/bin, then 'which flume' returns: /usr/local/bin/flume. The file is marked as executable (did 'chmod +x flume'), and lsattr shows no problems with attribute bits. I have not been able to try 'ldd flume' yet, nor 'strace flume' - currently I am dealing with an air conditioning failure. [It's been that kind of week!] I now suspect that some library is not there. This is a profoundly unhelpful message and one I have never seen before. Is this peculiar to Ubuntu, or perhaps just to this installation? We gave up and moved to a RHEL 4 system and everything is fine. But I sure would like to know what causes this.
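
    For what it's worth, "No such file or directory" for a file that plainly exists is the classic symptom of a 32-bit binary whose ELF interpreter is missing on a 64-bit system: the "missing file" is /lib/ld-linux.so.2, not flume itself. A hedged check, assuming the package names of Ubuntu 10.04:

        file flume                      # should report "ELF 32-bit LSB executable"
        ls -l /lib/ld-linux.so.2        # absent => the 32-bit loader is not installed
        sudo apt-get install ia32-libs  # pulls in the 32-bit compatibility libraries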

    Read the article

  • How to set up bindings for a development IIS 7.5 with a lot of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host a lot of sites on a newly installed Windows Server 2008 R2 machine. These sites are test "clones" of the sites on the "real" web server and should be accessible only on the local network (domain). Developers add new sites for new customers, project managers use this server to check progress and test new sites and features, and QA people test everything here before we copy it to the "real" web server. Developers have access to the IIS console; in fact they can RDP to the test server ("tester") with their developer domain credentials and permissions, and they are local admins on that machine. On our previous server I used a different port number for each site. That worked, but I don't like the solution; I would prefer subdomains. But here are the problems: manually adding DNS records is not an option, because we don't want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically? I tried adding a wildcard DNS record for subdomains of the test server (*.tester), and that seemed to work for a while, but the change caused some bad problems in our domain network, and the admin forbade me to mess with DNS. He said I have to add a DNS record for every subdomain manually and cannot use wildcards, and there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative - outsourced from a different organization - and I can't do anything about that. Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester DNS server? So please, I need some advice. What should I do? Are different port numbers the only option left? Thanks!
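
    If the admin ever delegates even limited rights, record creation can be scripted so developers never touch the DNS console. This is only a sketch - the server, zone, and host names below are placeholders:

        rem run with delegated rights on the DNS server
        dnscmd dns01 /RecordAdd ourdomain.local newsite.tester A 10.0.0.42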

    Read the article

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    I'm trying to set up an OpenVPN VPN which will carry some (but not all) traffic from the clients to the internet via the OpenVPN server. My OpenVPN server has a public IP on eth0, and is using tap0 to create a local network, 192.168.2.x. I have a client which connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran:

        iptables -A INPUT -i tap+ -j ACCEPT
        iptables -A FORWARD -i tap+ -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On the client, the default remains to route via 192.168.1.1. In order to point it to 192.168.2.1 for HTTP, I ran:

        ip rule add fwmark 0x50 table 200
        ip route add table 200 default via 192.168.2.1
        iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80

    Now, if I try accessing a website on the client (say, wget google.com), it just hangs there. On the server, I can see:

        $ sudo tcpdump -n -i tap0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes
        05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5>
        05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5>

    where 74.125.67.100 is the IP it gets for google.com. Why isn't the MASQUERADE working? More precisely, I see the source showing up as 192.168.1.101 - shouldn't there be something to indicate that it came from the VPN? Edit: some routes [from the client]:

        $ ip route show table main
        192.168.2.0/24 dev tap0  proto kernel  scope link  src 192.168.2.4
        192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.101  metric 2
        169.254.0.0/16 dev wlan0  scope link  metric 1000
        default via 192.168.1.1 dev wlan0  proto static

        $ ip route show table 200
        default via 192.168.2.1 dev tap0
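
    One hedged explanation: the kernel picks the source address (192.168.1.101, from wlan0) during the normal route lookup, before the fwmark rule redirects the packet out tap0, so the server's MASQUERADE rule - which only matches -s 192.168.2.0/24 - never fires. Rewriting the source on the client is one sketch of a fix (the VPN address below is an assumption taken from the description):

        # on the client: give marked traffic leaving via the VPN the VPN source address
        iptables -t nat -A POSTROUTING -o tap0 -j SNAT --to-source 192.168.2.3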

    Read the article

  • Alias for Drupal "Sites" folder with Apache on Windows Server 2008

    - by sgtbeano
    I'm having to move a number of sites from a LAMP stack to a WAMP one, provided by Zend, and I've hit a problem. Our architecture is a number of load-balanced web servers which have their own local webapp drives, kept in sync with one server acting as the master copy. There is then a separate DFS share provided to all web servers from our Pillar SAN. Under our LAMP cluster, a Drupal install would have the main Drupal web app in a local htdocs mount on each server, and the sites directory within Drupal would then be symlinked out to the DFS or NFS share, so that there is a common files and tmp directory. The problem I'm having is that there seems to be no equivalent of symlinks on Windows Server 2008; shortcuts have a .lnk extension, making Apache see them as a distinct file. So I've tried using an Alias in the vhost file like this:

        <Location /drupal-626/sites>
            Order deny,allow
            Allow from all
        </Location>
        Alias /drupal-626/sites "Z:\Path to alternate sites directory"

    The root for this test is http://main-domain-url/drupal-626/. Unfortunately this isn't working, so I'm wondering if any of you have a solution that would work? Many thanks for taking the time to read this.
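
    NTFS on Server 2008 does have real symbolic links via mklink, which Apache follows like any other directory, so recreating the Linux layout may be possible without the Alias. A sketch with placeholder paths:

        mklink /D "C:\htdocs\drupal-626\sites" "Z:\path\to\alternate\sites"
        rem if the target is a remote share, link evaluation may need enabling:
        rem fsutil behavior set SymlinkEvaluation L2R:1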

    Read the article

  • pam_ldap.so before pam_unix.so? Is it ever possible?

    - by user1075993
    We have a couple of servers with PAM+LDAP. The configuration is standard (see http://arthurdejong.org/nss-pam-ldapd/setup or http://wiki.debian.org/LDAP/PAM). For example, /etc/pam.d/common-auth contains:

        auth    sufficient    pam_unix.so nullok_secure
        auth    requisite     pam_succeed_if.so uid >= 1000 quiet
        auth    sufficient    pam_ldap.so use_first_pass
        auth    required      pam_deny.so

    And, of course, it works for both LDAP and local users. But every login goes first to pam_unix.so, fails, and only then tries pam_ldap.so successfully. As a result, we get the well-known failure message for every single LDAP user login:

        pam_unix(<some_service>:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=<some_host> user=<some_user>

    I have up to 60,000 such log messages per day, and I want to change the configuration so that PAM tries LDAP authentication first, and only if that fails tries pam_unix.so (I think it can improve the I/O performance of the server). But if I change common-auth to the following:

        auth    sufficient    pam_ldap.so use_first_pass
        auth    sufficient    pam_unix.so nullok_secure
        auth    required      pam_deny.so

    then I simply can't log in anymore with a local (non-LDAP) user (e.g., via ssh). Does somebody know the right configuration? Why do Debian and nss-pam-ldapd put pam_unix.so first by default? Is there really no way to change it? Thank you in advance. P.S. I don't want to disable logs; I want to put LDAP authentication in first place.
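
    One likely reason the reordered stack locks out local users: use_first_pass on the first module means pam_ldap has no previously typed password to reuse. Debian's own config generator solves the ordering with jump counts instead of "sufficient"; a sketch of that style, with LDAP first:

        # /etc/pam.d/common-auth -- sketch, jump-based ordering
        auth    [success=2 default=ignore]  pam_ldap.so
        auth    [success=1 default=ignore]  pam_unix.so nullok_secure try_first_pass
        auth    requisite                   pam_deny.so
        auth    required                    pam_permit.so

    A success in pam_ldap.so skips the next two modules and lands on pam_permit.so; a failure falls through to pam_unix.so, so local logins keep working.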

    Read the article

  • Export files to remote server using TortoiseSVN

    - by Matt
    I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of which files have changed and upload them to my server using FTP. Here's my workflow:

        1. Edit files on the local computer (e.g. files in C:\Users\Me\web).
        2. Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit.
        3. Open FileZilla (FTP client) and upload the changed files to the remote server.

    I was wondering if there is a way to omit step 3 from my workflow; basically, I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my environment: Windows 7 Ultimate x64 with TortoiseSVN x64; Notepad++ text editor; files edited are PHP, CSS, JS, HTML, etc. The server runs Linux with PHP 5.2 and MySQL, and FileZilla is used to upload files. I can connect to the server via SSH if that is needed. Thank you in advance.
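
    TortoiseSVN has client-side hook scripts (Settings > Hook Scripts) that can run a command after each commit; pairing a post-commit hook with WinSCP's scripting mode is one sketch of a way to drop step 3. Paths, host, and script names here are placeholders:

        rem post-commit hook command (assumes WinSCP is installed)
        "C:\Program Files (x86)\WinSCP\winscp.com" /script=C:\Users\Me\upload.txt

    where upload.txt contains something like:

        open sftp://user@yourserver/
        synchronize remote C:\Users\Me\web /var/www/site
        exit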

    Read the article

  • Vim autocommand on BufDelete prevents opening help window

    - by Kyle Strand
    I have the autocommand described here in my .vimrc: http://superuser.com/a/669463/199803 EDIT: copied into the body of the question for convenience:

        function CountListedBuffers()
            let cnt = 0
            for nr in range(1, bufnr("$"))
                if buflisted(nr) && ! empty(bufname(nr))
                    let cnt += 1
                endif
            endfor
            return cnt
        endfunction

        function QuitIfLastBuffer()
            if CountListedBuffers() == 1
                :q
            endif
        endfunction

        autocmd BufDelete * :call QuitIfLastBuffer()

    Bizarrely, though, it seems that if I have exactly one listed buffer and I try to use the :help command, the help window fails to open (or perhaps opens and is immediately closed). If I comment out that autocommand line in my .vimrc, this behavior no longer occurs. Why is this happening, and how can I fix it? Why would :h even trigger the BufDelete event? EDIT: setting verbosity to level 12, I can see that the BufDelete event is indeed occurring. Here's the portion of the output that looks relevant to me:

        Executing BufAdd Auto commands for "*"
        autocommand call <SID>BMAdd()
        calling function <SNR>6_BMAdd
        calling function <SNR>6_BMAdd..<SNR>6_BMFilename
        calling function <SNR>6_BMAdd..<SNR>6_BMFilename..<SNR>6_BMMunge
        calling function <SNR>6_BMAdd..<SNR>6_BMFilename..<SNR>6_BMMunge..<SNR>6_BMTruncName
        function <SNR>6_BMAdd..<SNR>6_BMFilename..<SNR>6_BMMunge..<SNR>6_BMTruncName returning '/usr/local/share/vim/vim74/doc'
        continuing in function <SNR>6_BMAdd..<SNR>6_BMFilename..<SNR>6_BMMunge
        function <SNR>6_BMAdd..<SNR>6_BMFilename..<SNR>6_BMMunge returning 'help\.txt\ (4)\^I/usr/local/share/vim/vim74/doc'
        continuing in function <SNR>6_BMAdd..<SNR>6_BMFilename
        calling function <SNR>6_BMAdd..<SNR>6_BMFilename..<SNR>6_BMHash
        function <SNR>6_BMAdd..<SNR>6_BMFilename..<SNR>6_BMHash returning #340582286
        continuing in function <SNR>6_BMAdd..<SNR>6_BMFilename
        function <SNR>6_BMAdd..<SNR>6_BMFilename returning #0
        continuing in function <SNR>6_BMAdd
        function <SNR>6_BMAdd returning #0
        continuing in BufAdd Auto commands for "*"
        Executing BufDelete Auto commands for "*"
        autocommand :call QuitIfLastBuffer()
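
    A guess from the log: opening help recycles a buffer, the BufDelete handler sees one listed buffer, and its :q closes the freshly created help window. Guarding on whether the buffer being deleted is itself listed (help buffers are unlisted) is a plausible fix, sketched here:

        " sketch: only run the quit logic when a listed buffer is being deleted
        autocmd BufDelete * if buflisted(str2nr(expand('<abuf>'))) | call QuitIfLastBuffer() | endif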

    Read the article

  • Postfix not delivering from external senders and not logging anything

    - by simendsjo
    Some semi-recent upgrades must have broken my postfix+dovecot configuration, but I'm having trouble finding the cause. My domain is simendsjo.me, with the MX record mail.simendsjo.me. I can send mail to both local and external recipients, and mail from internal mailboxes is delivered. The problem is that mail from external senders isn't delivered, and nothing is logged at all. The external sender doesn't receive any error either. I have no idea where to start looking, as nothing at all is logged when external mail is sent to my server. So the first issue is: how can I turn on some debug messages for postfix? I've tried:

        debug_peer_level = 2
        debug_peer_list = simendsjo.me

    and also _level = 999 and _list = gmail.com, where I'm trying to send mail from, but nothing is logged. When sending mail from a local mailbox (but from an outside computer, not localhost), a lot is logged. I don't have any rules in iptables either. Any ideas how I can get some debug messages for postfix?
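
    Before reaching for debug flags, it may be worth confirming that outside connections reach Postfix at all - "nothing logged" usually means the TCP connection never arrives (provider blocking port 25, inet_interfaces bound to loopback, or a stale MX). A few hedged checks:

        postconf inet_interfaces mynetworks   # bound to the public interface?
        netstat -ltnp | grep :25              # is anything listening on port 25?
        dig +short MX simendsjo.me            # does the MX still point at this host?
        # and from an outside machine:
        telnet mail.simendsjo.me 25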

    Read the article

  • extra managed+unmanaged switches @ home/office -- best (mis)usage scenario? what would you do?

    - by locuse
    up front -- definitely NOT a mission-critical kind of question. after a 'spring cleaning' of my local office, i've ended up with two 'spare' GigE switches at my home/office -- one managed, capable of VLANs, QoS, etc., and the other unmanaged. i've got more ports than i need. in fact EACH switch has more total ports than i need. but, since i can't have these just sitting around not doing SOMETHING ... ;-) i'm interested in ideas for best combined use of these switches. my local topology is simple:

        [ net ] -- [ adsl2 modem ] -- [ linux firewall/router/DNS ]
                                                  |
                       [ some arrangement of the 2 GigE switches ]
                                                  |
                                  ( ... stuff on the lan ... )
                       [WAP1] [voip ATA] [printer]
                       [desktop1] [desktop2] [desktop3] [desktop4]
                       [mail server]
                       [Xen server (mostly dev, + file server + media server)]

    the MailServer is a production mail server; the XenServer serves some low volume to the 'net; the MediaServer guest serves ONLY to the LAN. is there, e.g., any performance value in segmenting off any of the LAN using the managed switch (VLAN? QoS tagging? something?), feeding the rest into the connected unmanaged switch? or should i simply use one of the switches & be done with it, and use the other for a coffee-cup stand?

    Read the article

  • Trouble with Remote Desktop pulling through printers. Drive redirection works, and the ports are created, but not the printers

    - by Windex
    I've run out of things to look into. All the support documents have been gone through and still provide no resolution. I've checked the service permissions (sc sdshow spooler); they all match up with other systems and with what is output in the support documents. I'm nearly positive the issue can't be permissions anyway, as the software requires all users to be administrators, so all users are local administrators. (I haven't looked into why yet, but it's on the list; I was only recently brought into this team, and we've put procedures in place for quick recovery.) We've applied hotfixes relating to RDS and printing, though I'm not sure which ones they were. I've combed through Group Policy, and nowhere is printer redirection disabled. It's set up with all default values regarding the use and redirection of printers, and a quick install of W2k8 R2 shows that it works by default. This dev install was joined to the same domain, placed in the same OU, shows the same policies applied, etc., etc. The server generates all the correct redirected ports, but no printers are created. It will also redirect drives without issue, which would seem to rule out the usermode service that handles redirects being broken. No related events are logged, and there are no events from the TerminalServices-Printer source. There were local printers set up. I didn't think it would matter, but as I was running out of ideas I tried deleting them all, with no change. The TS was configured for the software it will be running before we checked the redirection of printers, so the other team, responsible for setting up new servers, wants to find a fix instead of reloading a new server. I'm not sure where or what else to look for. Any ideas?

    Read the article

  • Distributing Microsoft Office Template or Macro over the network

    - by zfranciscus
    We have around 400 users who use Word, and we want to make their lives easier by distributing templates and macros over the network. The easiest way to do this, of course, is to set up a shared network folder and let them get the appropriate templates and macros there. But each user then has to know where to copy these files on their local PC, and we have to rely on constant email communication to tell them about newer versions of the macros and templates. The next alternative is to ask them to configure Word to point to the network folder, but then any disruption to the network means disruption to their work. We are thinking of setting up a synchronization mechanism that downloads new templates to their local machines. We are also thinking of making this sync tool prompt users before it downloads new templates - just to give them visibility that they are receiving changes. We are wondering what approach people usually use in their workplaces? Is there a specific tool that can make this task easier?
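
    A common low-tech pattern is a logon script that mirrors a read-only share into a local folder, with Word's "Workgroup templates" path (Options > File Locations) pointed at that folder, so users pick up new versions at every logon without knowing where anything lives. A sketch with placeholder names:

        rem logon script sketch -- share name and local folder are placeholders
        robocopy \\fileserver\WordTemplates "%LOCALAPPDATA%\CompanyTemplates" /MIR /R:1 /W:1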

    Read the article

  • IIS 7: One Page Works, All Others Fail With "Error code: ssl_error_rx_record_too_long"

    - by Michael
    On my local machine, I have a second site bound to port 81. Within that site is a certain page which I can browse from other machines with no problems, but all other pages fail with "Error code: ssl_error_rx_record_too_long". Each of the failing pages (as well as the lone working page) works with localhost. So, from any machine, local or remote:

        http://cmwmach01.mydomain.biz:81/RD/SS/SS.aspx   (works)
        http://cmwmach01:81/RD/SS/SS.aspx                (works)
        http://cmwmach01.mydomain.biz:81/RD/POV/SC.aspx  (fails - gets changed to https)
        http://cmwmach01:81/RD/POV/SC.aspx               (fails - gets changed to https)

    Everything works with localhost (locally, of course). I've tagged this question with SSL because, at one point, it would warn about an SSL cert issue (maybe this was self-signed at one point?), but now it doesn't. While there may be an issue around that, I don't see how it could cause the problem I am seeing. I am way out of my depth here in trying to figure out why that one page works (or why the others don't), so that I can make them all work. Any ideas?
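
    For context, ssl_error_rx_record_too_long almost always means the browser spoke HTTPS to a port that answered in plain HTTP - consistent with the failing URLs "getting changed to https" while port 81 serves HTTP only. Capturing the redirect itself would narrow it down; a hedged first step:

        curl -v http://cmwmach01.mydomain.biz:81/RD/POV/SC.aspx
        # look for a 3xx response with a "Location: https://..." header --
        # often application code or a rewrite rule forcing SSL on that path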

    Read the article

  • Scanning to network share

    - by tking
    In our environment we have several printers with the scan-to-folder functionality set up, and it worked pretty well. Long story short: I needed to reboot our file server one evening (the server whose shares are where all the scans go; Windows Server 2003). Upon boot-up I received a message stating that the name of the file server (i.e. AcmeFS1) could not be used because there was a duplicate name on the network. I found that somehow another printer on the network was using the name of the file server. Weird, I know. People could no longer scan to their folders. I ran nbtstat on the file server and found that the local NetBIOS name table was empty. I renamed the offending printer and rebooted the file server once more. This time there was no error, and when I ran nbtstat again, I found the correct name and domain in the local NetBIOS name table. Problem is, scan-to-folder is still not working, and I know it was working before I rebooted the file server the first time. Anyone have any idea what is going on and how to fix it? Thanks. FIXED: Not sure why the reboot didn't fix it, but I restarted the "Server" service on the file server and the problem went away.

    Read the article

  • wireless clients not getting correct dhcp addresses

    - by szeli
    I apologise first if this is a stupid problem. I'm new to Cisco networking, and I need some help with an existing configuration done by my vendor. Environment:

        1. Core switch - Catalyst 6509E, VLANs configured:
           a. vlan 50 (wired clients)     10.0.50.x/24    interface IP 10.0.50.20
           b. vlan 70 (wireless clients)  10.0.70.x/24    interface IP 10.0.70.20
           c. vlan 192 (guest clients)    192.168.1.x/24  interface IP 192.168.1.20
           d. trunk port for WLC: native vlan 70, allowed vlans 50, 70, 192
        2. Cisco 4402 WLC interfaces:
           a. management   untagged  IP 10.0.70.10
           b. ap-manager   untagged  IP 10.0.70.11
           c. service-port n/a       IP 192.168.10.1
           d. virtual      n/a       IP 1.1.1.1
           e. guestwlan    vlan 192  IP 192.168.1.100
        3. Cisco AIR-LAP1142N-S-K9 access points:
           LAP01 (WLAN local, interface: management) IP 10.0.70.21/24, GW 10.0.70.20,
                 DHCP server 10.0.50.10 (scope 10.0.70.101 to 200)
           LAP02 (WLAN guest, interface: guestwlan)  IP 192.168.1.21/24, GW 192.168.1.20,
                 DHCP server 192.168.1.10 (scope 192.168.1.101 to 200)

    Here's the problem: wireless clients connected to the guest WLAN keep getting DHCP leases from the local WLAN's server, 10.0.50.10 (scope 10.0.70.101 to 200). Can anyone please help? Thanks!
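
    On a 4402, the DHCP server a client is offered comes from the WLAN's dynamic interface (or a per-WLAN override), so one hedged line of attack is confirming that the guest WLAN really maps to the guestwlan interface. A sketch of AireOS commands, with the WLAN id as a placeholder:

        show wlan summary
        show wlan 2        # check the Interface and DHCP Server fields
        config wlan disable 2
        config wlan interface 2 guestwlan
        config interface dhcp dynamic-interface guestwlan primary 192.168.1.10
        config wlan enable 2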

    Read the article

  • Weblogic WLST classpath

    - by user43736
    When I run the WLST .sh script to set the env as follows, why can't I see the updated path when I do echo?

        [linbox2 bin]$ ./setWLSEnv.sh
        CLASSPATH=/directory/ols_wls/patch_wlss1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_oepe1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_ocm1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/lib/tools.jar:
        /directory/ols_wls/utils/config/10.3/config-launch.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic_sp.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic.jar:
        /directory/ols_wls/modules/features/weblogic.server.modules_10.3.2.0.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/webservices.jar:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/lib/ant-all.jar:
        /directory/ols_wls/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:
        PATH=/directory/ols_wls/wlserver_10.3/server/bin:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/jre/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/bin:
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
        Your environment has been set.
        [linbox2 bin]$ export CLASSPATH
        [linbox2 bin]$ export PATH
        [linbox2 bin]$ echo $PATH
        /usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/java/j2sdk1.4.2_11/bin/bin:/home/oracle/bin:/directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:/directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
        [linbox2 bin]$
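
    The likely explanation: ./setWLSEnv.sh runs in a child shell, so its exports die with that shell; the later "export CLASSPATH" merely re-exports the old, unchanged values. Sourcing the script into the current shell is the usual fix:

        . ./setWLSEnv.sh     # note the leading dot (or: source ./setWLSEnv.sh)
        echo $CLASSPATH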

    Read the article

  • Run a shell script using cron

    - by Blanca
    Hi! I have this FeedIndexer.sh:

        #!/bin/sh
        java -jar FeedIndexer.jar

    It just runs FeedIndexer.jar, which is in the same directory as the .sh. I would like to run it using crontab, so I did this:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        01 01   * * *   root    run-parts --report /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
        #

    But I don't know how to run it. Have I made a mistake? Thank you!
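
    Two probable hitches, offered as a sketch: run-parts expects a directory, not a file (and skips names containing dots anyway), and the relative path to FeedIndexer.jar breaks under cron's working directory. So the crontab line could call the script directly:

        01 01 * * * root /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh

    with the script made robust against the working directory:

        #!/bin/sh
        cd "$(dirname "$0")" && java -jar FeedIndexer.jar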

    Read the article

  • How do I set up postfix to store e-mail in a file instead of relaying it?

    - by GomoX
    I want to run a staging copy of a production server in a local environment. The system runs a PHP application which sends e-mail to customers in various scenarios, and I want to make sure no e-mail is ever sent from the staging environment. I could tweak the code to use a dummy e-mail sender, but I'd like to run exactly the same code as the production environment. I could use a different MTA (Postfix is just what we use in production), but I'd like something that is easy to set up under Debian/Ubuntu :) So, I'd like to set up the local Postfix install to store all e-mail in (one or more) files instead of relaying it. Actually, I don't really care how it's stored, as long as it's feasible to check the e-mail that was sent. Even a setup option that tells Postfix to keep the e-mail in the mail queue would work (I can purge the queue when I reload the staging server with a copy from production). I know this is possible; I just haven't found any good solution online for what seems like a fairly common need. Thanks!
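
    One hedged way to get exactly this with stock Postfix is the header_checks HOLD trick: every message is parked in the hold queue instead of being delivered, and can be inspected or purged later. A sketch:

        postconf -e 'header_checks = regexp:/etc/postfix/header_checks'
        echo '/^./ HOLD' > /etc/postfix/header_checks
        postfix reload
        # inspect and purge later:
        mailq
        postcat -q QUEUEID        # QUEUEID as shown by mailq
        postsuper -d ALL hold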

    Read the article

  • How to reliably mount a shared folder /volume/folder at boot up

    - by Tanmay
    Following is my sample.sh in /usr/local/bin/:

        #!/bin/sh
        mkdir -p /Volumes/folder
        mount -t afp -o rw afp://user:password@server_name/folder_name /Volumes/folder

    Following is my com.apple.sample.plist in /Library/LaunchAgents/:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.apple.sample</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/bin/sample.sh</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    I am able to run sample.sh on its own, and it works fine, but the mount does not happen reliably at boot. I have also tried using launchd.conf with:

        mkdir -p /Volumes/folder
        mount -t afp -o rw afp://user:[email protected]/testsuites /Volumes/folder

    Still not working.
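
    Two hedged observations: anything under /Library/LaunchAgents only runs at user login, so a mount that should happen at boot normally belongs in /Library/LaunchDaemons, and loading the job by hand surfaces errors immediately. A sketch:

        chmod +x /usr/local/bin/sample.sh
        launchctl load /Library/LaunchAgents/com.apple.sample.plist
        launchctl list | grep com.apple.sample   # is the job actually loaded?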

    Read the article

  • Windows - Decrypt encrypted file when user account is destroyed

    - by dc2
    I have a virtual machine running on my Windows Server 2008 computer that I originally received encrypted, as the builder of the VM did it on a Mac, which decrypts files by default. I never thought to decrypt these files, since they automatically 'decrypt' when you have permission over them, so the VM has been running for over a year despite the encryption. I just upgraded my computer to a domain controller (dcpromo.exe). Now when I try to access/run the VM, I can't, because I don't have permission to decrypt the files; they were encrypted under another logon (the local administrator), and now I am the domain administrator. Apparently the local admin account is totally nuked when you upgrade to domain controller. I have tried EVERYTHING: taking ownership of the files, which works but doesn't get me anywhere, and adding full control for everyone on the files. Under File Properties > Advanced > Details (under encryption) > "Users who can access this file", the only user is administrator@localcomputername, with a certificate number. I try adding a new cert - I don't have permission. I don't have permission to decrypt the file (access is denied) or to copy it to another computer (access denied). I am totally stumped, and this VM is a production machine that needs to get up right now. Does anyone have any ideas?
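
    A caution, sketched rather than certain: EFS decryption requires the original account's certificate and private key - ownership and NTFS permissions cannot substitute for it. If the old local administrator's profile directory survived dcpromo, the key material may still be recoverable:

        cipher /c "D:\VMs\machine.vhd"   :: placeholder path; lists certs that can decrypt
        :: if it survived, the key lives under the old profile, roughly:
        :: C:\Users\Administrator\AppData\Roaming\Microsoft\Crypto\RSA\<SID>\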

    Read the article

  • No Network Connection in WinXP image from Microsoft running on VirtualBox 3.1.6 OSE (Ubuntu 10.04) due to missing CD Rom

    - by Bevor
    I'd like to test local websites in IE7 and IE8. To do that, I thought about using the free Microsoft images: http://www.microsoft.com/windowsxp/using/networking/setup/default.mspx I converted the VHDs to VDIs to make them run in VirtualBox ( http://www.qc4blog.com/?p=721 ). This works fine. The problem is that in this Windows XP installation there is no network adapter configured. Actually, nothing at all is configured, because that requires the Windows XP CD-ROM - and if I had a Windows XP CD-ROM, I would not need to run the Microsoft image. So is there some kind of workaround to get an internet connection? Meanwhile I set "bridged" in VirtualBox, but this doesn't help, because "ipconfig /all" in the guest system doesn't show any data, since nothing is configured. How can I get a connection to my local Apache (on the host system)? http://localhost would be enough. By the way: I can't install the "Guest Additions". When I do, the 3-day trial period of the guest system is suddenly gone, so I can't use it anymore and it is pointless. Any ideas? Update: I've tried the Vista image and it gets an internet connection. From the Vista image I can reach my site with 192.168.1.3/mywebsite in the browser URL. So I don't really care about the WinXP issue anymore, but I would be glad if anyone still knows a solution.

    Read the article

  • How should I configure my Apache Hosts File to serve a different site for localhost than for my domain/publicip?

    - by rofls
    I'm trying to test out a LAMP setup (with PHP5, specifically) with Django already serving a website. I want to do the PHP stuff on localhost for now, so that when I do something like curl http://localhost/database/script.php?var=1, I get a response from the PHP server. Right now I'm getting a Django error. I tried something like this in the default file in sites-available:

        Listen 80
        <VirtualHost aaa.bbb.ccc.ddd>
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

    where aaa.bbb.ccc.ddd is the local IP address, and changing my actual site's settings to specify the public IP, like this:

        Listen 80
        <VirtualHost www.xxx.yyy.zzz>
            ServerName mysite.com
            DocumentRoot /srv/www/mysite
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>

    but then I start getting all kinds of errors when I start Apache, such as "port ::[80] is already in use". I noticed that the hosts file located in /etc/apache2/ is apparently pointing everything to mysite.com, including my local IP as well as 127.0.0.1 and 127.0.1.1. Do I need to change the configuration there too?
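
    The "port 80 already in use" error is consistent with Listen 80 appearing in more than one file - on Ubuntu it belongs once, in /etc/apache2/ports.conf - and matching vhosts on raw IPs defeats name-based selection. A sketch in Apache 2.2 style:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /home/phpsite
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot /srv/www/mysite
            WSGIScriptAlias / /srv/www/mysite.wsgi
        </VirtualHost>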

    Read the article

  • Determine which user initiated call in Asterisk

    - by adaptive
    I had the following code in my extensions.conf file:

        [local]
        exten => _NXXNXXXXXX,1,Set(CALLERID(name)=${OUTGOING_NAME})
        exten => _NXXNXXXXXX,n,Set(CALLERID(num)=${OUTGOING_NUMBER})

    Now I want to change this code to set the caller ID and number based on the user/extension that is making the call. In fact, I have four (4) users/extensions in my sip.conf, and only one of them (the one I use for business) is supposed to send a different caller ID/number. Everything is in the same context (for simplicity), since all lines need to be able to pick up an incoming call. The only difference is that when line1 makes a call, it has to send a different caller ID/number and use a different provider. This is what I have so far:

        [local]
        exten => _NXXNXXXXXX,1,Set(line=${SIP_HEADER(From)})
        exten => _NXXNXXXXXX,n,Verbose(line variable is <${line}>)
        exten => _NXXNXXXXXX,n,Set(CALLERID(name)=${IF($[ ${line} = line1 ]?${COMPANY_NAME}:${FAMILY_NAME})})
        exten => _NXXNXXXXXX,n,Set(CALLERID(num)=${IF($[ ${line} = line1 ]?${COMPANY_NUMBER}:${FAMILY_NUMBER})})
        exten => _NXXNXXXXXX,n,Dial(${IF($[ ${line} = line1 ]?SIP/${EXTEN}@${COMPANY_PROVIDER}:SIP/${EXTEN}@${FAMILY_PROVIDER})})

    I really don't know if this is correct, and I'm afraid to commit these changes to my extensions.conf before validating. Any help will be greatly appreciated.
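
    Two hedged notes on the draft: ${SIP_HEADER(From)} returns the whole header (e.g. "Name" <sip:line1@host>), which will never equal the bare string line1, and the $[ ] string comparison needs quoting to survive empty values. For SIP channels the peer name is available directly; a sketch:

        ; sketch: key off the SIP peer instead of parsing the From header
        exten => _NXXNXXXXXX,1,Set(line=${CHANNEL(peername)})
        exten => _NXXNXXXXXX,n,Set(CALLERID(name)=${IF($["${line}" = "line1"]?${COMPANY_NAME}:${FAMILY_NAME})})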

    Read the article

  • SSL certificates: how to use it?

    - by Rod
    I have a central server, and I want to purchase an SSL certificate for it. The architecture is based on this central server and many connected web servers which sit on the client side (one for each user). A client can access both the main server and its local server, and the two servers exchange data between them. I would like the client's web browser to trust all servers, always activating HTTPS and a secure connection when connecting to them. Assuming I can name all servers under the same domain name (I was thinking about a wildcard certificate anyway), which kind of certificate, or which use of it, can make these secure connections work? There is the possibility that the main server and a client-side server are not connected for a while; is it possible to activate an HTTPS connection from a client to its local server in this case? When I need to renew or change the certificate, I would like to change it just on the main server, avoiding the need to touch all the servers on the clients' side. Can I do that in some way?

    Read the article
