Search Results

Search found 20761 results on 831 pages for 'chef client'.


  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client in a private LAN behind a Netgear wgr614v7 router, and a remote Debian server outside. I'd like to tunnel all traffic (or specified ports/protocols) through this outside server, so that when I'm on the Windows machine and request serverfault.com, the request appears to come not from the wgr614v7's public IP but from the server. It's not only about HTTP traffic; it's basically about everything: other TCP ports, even UDP, etc. It must be transparent to applications, i.e. they shouldn't be aware of it. All their requests should simply appear to come from the server, with the tunnel between the two machines taking care of the packets.

    I'm aware of e.g. PuTTY and forwarding individual ports or using it as a SOCKS proxy; however, not many applications support this, and support in Windows itself looks non-existent to me. I might add it should be reasonably easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common solutions for Linux (OpenSwan and StrongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex; on the other hand, maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open (source) solutions. What other options do I have, and which direction should I head in?
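
    If the OpenVPN route is chosen, a minimal client config sketch looks like this; the server name and certificate paths are placeholders, not settings from the question, and a matching OpenVPN server on the Debian box is assumed:

        # client.ovpn -- minimal sketch; hostname and cert/key paths are placeholders
        client
        dev tun
        proto udp
        remote debian-server.example.com 1194
        redirect-gateway def1       # move the default route into the tunnel
        ca ca.crt
        cert client.crt
        key client.key

    The redirect-gateway directive is what makes this transparent to applications: every protocol simply follows the default route into the tunnel, with no per-application proxy setup.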

    Read the article

  • AT&T Upload Filtering?

    - by xpda
    Using AT&T DSL, I cannot FTP upload or download a few files out of a large set of 1500. The problem is the file name: I can change a few characters of the name and the file uploads fine, and I can change the name from upper to lower case and it uploads fine, but if I change back to the original name it will not upload again. When it fails, the transfer starts, moves about 5% of a 5-10 MB file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via FTP. It will download in a browser, and it will FTP download just fine under a different name; it just will not transfer over FTP under the original name.

    I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it from a non-AT&T ISP client, it works OK. Here is a file that did not upload until I had renamed it (I have changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png"

    This problem is unrelated to connection, speed, and file content. The only things I can see that make a difference are the file name and AT&T DSL. Does AT&T have some kind of FTP file filtering? Is there anything else that could cause this behavior?
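
    A packet capture is one way to see whether the transfer stalls at the client or somewhere upstream; a diagnostic sketch only, with the interface name, server address, and credentials as placeholders:

        # capture the FTP session while retrying the failing name
        sudo tcpdump -i eth0 -n host 203.0.113.10 -w ftp-stall.pcap
        curl -T "11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" \
             ftp://203.0.113.10/ --user myname:mypassword

    Comparing where the last packet originates in the failing and renamed cases would show whether something in the path is killing the session.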

    Read the article

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    I'm serving static pages from a Sinatra application using Nginx. I've implemented basic authentication for one page on the site using NginxHttpAuthBasicModule. The authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage"
        failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com,
        request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.
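
    One thing worth trying (an assumption about the cause, not a confirmed fix): a custom location block may stop Passenger from handling the request, so Nginx falls back to serving /mypage as a static file from root, which matches the open() error above. Repeating the directive inside the block would rule that out:

        location /mypage {
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
            passenger_enabled on;   # assumption: not inherited into this location block
        }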

    Read the article

  • File upload folder permissions with FastCGI - how do I make it writable?

    - by user6595
    I am using CentOS 5.7 with cPanel WHM running FastCGI/suEXEC. I am trying to make a particular folder writable to allow a script to upload files, but seem to be having problems. The folder (and all recursive folders) I want to be writable is /home/mydomain/public_html/uploads, and I want only scripts run by the user "songbanc" to be able to write to this directory. I have tried the following:

        chown -R songbanc /home/mydomain/public_html/uploads
        chmod -R 755 /home/mydomain/public_html/uploads

    But it still doesn't seem to work. The script will only upload files if I set the permissions manually via an FTP client to 777. I assume I am misunderstanding how to set permissions for users with FastCGI, and hopefully someone can help me. Thanks in advance.

    EDIT: Running getfacl on one of the scripts or folders gives the following:

        # file: home/mydomain/public_html/ripples/1.jpg
        # owner: songbanc
        # group: songbanc

    So it appears that the owner is correct? I'm now totally confused!

    EDIT 2: The plot thickens... lsattr and chattr are returning

        Inappropriate ioctl for device While reading flags on...
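
    A quick sanity check, as a sketch: confirm which user the FastCGI processes actually run as, and what the directory's effective owner and mode are, since suEXEC only helps if the two line up:

        # which user do the PHP/FastCGI processes run as?
        ps aux | grep -E 'php|fcgi' | grep -v grep
        # owner, group and mode of the upload directory
        stat -c '%U %G %a' /home/mydomain/public_html/uploads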

    Read the article

  • Route all traffic of home network through VPN [migrated]

    - by user436118
    I have a typical semi-advanced home network scenario:

        - A cable modem (eth)
        - A wireless router (Netgear N600, eth and wlan)
        - A home server (running Ubuntu 12.04 LTS, connected over wlan)
        - A bunch of wireless clients (wlan)

    Lying around I have another, cheaper wlan router and two different USB wlan NICs that are known to work with Linux. ACTA struck. I want to route ALL of my WAN traffic through a remote server over a VPN. For the sake of completeness, let's say there is a remote server running Debian Squeeze where a VPN server is to be installed. The network is then to behave so that if the VPN is not operative, it is separated from the outside world.

    I am familiar with general system/network practices, but lack the specific detailed knowledge to accomplish this. Please suggest the right approach, packages and configurations you'd use to reach this solution. I've also envisioned the following network configuration; please improve it if you see fit:

        Client:          ip 10.1.1.x,    nm 255.0.0.0,     gw 10.1.1.1,    reached via WLAN
        Wlan router 1:   ip 10.1.1.1,    nm 255.0.0.0,     gw 10.10.10.1,  reached via ETH
        Homeserver eth0: ip 10.10.10.1,  nm 0.0.0.0,       gw 192.168.0.1, reached via WLAN  <<< VPN is initiated here; the other endpoint is somewhere on the internet
        Homeserver wlan0: ip 192.168.0.2, nm 255.255.255.0, gw 192.168.0.1, reached via WLAN
        Wlan router 2:   ip 192.168.0.1, nm 0.0.0.0,       gw set via DHCP; uplink connector: cable modem
        Cable modem:     remote DHCP; has an on-board DHCP server for the ethernet device that connects to it, and only works this way.

    All this WLAN fussery is because my home server is located in a part of the house where a cable link isn't possible, unfortunately.
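
    One way to get the "no VPN, no internet" behaviour, sketched under the assumption that the home server runs an OpenVPN client with tun0 as the tunnel device; addresses follow the layout above:

        # on the home server: forward the LAN, NAT it out through the tunnel only
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o tun0 -j MASQUERADE
        iptables -A FORWARD -s 10.0.0.0/8 -o tun0 -j ACCEPT
        # no fallback out of the physical uplink: LAN traffic stops when the VPN is down
        iptables -A FORWARD -s 10.0.0.0/8 -o wlan0 -j DROP

    The last rule is the kill switch: with the tunnel down there is simply no permitted path to the WAN.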

    Read the article

  • Local traffic through VPN, global traffic through WAN

    - by ikonoma
    I have an issue with my internet connection. I am using a VPN (Aventail client) to access local resources. When connected to the VPN, internet traffic passes through it, not through my LAN or Wi-Fi network. I would like to change the routing table to use the PC's Wi-Fi adapter for WAN traffic. I have a routing file which works very well and routes the traffic this way, but only when I am physically connected to the local network over LAN. I can't get it to work with the VPN connection, because I have no gateway when I am connected to it; the gateway address in the first route command below is exactly the part that is missing. What to do?

        route change 0.0.0.0 mask 0.0.0.0 172.16.76.1 metric 200 if 12
        route change 0.0.0.0 mask 0.0.0.0 10.44.2.1 metric 400 if 11
        route add 150.251.0.0 mask 255.255.0.0 10.44.2.1 metric 100 if 11
        route add 10.0.0.0 mask 255.0.0.0 10.44.2.1 metric 100 if 11
        pause
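
    A sketch of how one might find a usable gateway and interface index for the VPN adapter on Windows; the placeholder values in angle brackets are hypothetical and will differ per machine:

        rem list interfaces with their "if" indexes, plus the current routes
        route print
        rem show each adapter's address and gateway (the VPN adapter may have none)
        ipconfig /all
        rem if the VPN adapter truly has no gateway, a route can still be bound to it
        rem by interface index, using the adapter's own address as the gateway:
        route add 150.251.0.0 mask 255.255.0.0 <vpn_adapter_ip> metric 100 if <vpn_if>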

    Read the article

  • Can I use squid (or anything) to do this?

    - by user269334
    I have a really crappy VPS, and a really good computer at my office (with a really good internet connection), but behind a NAT. Is it possible to expose the good computer by doing this?

        1. The good computer connects to the VPS (and keeps the connection alive).
        2. A user connects to the VPS and sends http(s) requests to it.
        3. The VPS passes those http(s) requests on to the good computer (including some identification, so the servers can distinguish connections).
        4. The good computer passes the http(s) response back to the VPS.
        5. In turn, the VPS receives the http(s) response and passes it back to the client.

    Is it possible to do this? (By the way, the VPS and the good computer are located in different countries.) And also, is this a "reverse proxy"? I heard that a reverse proxy is for protecting the internal network by putting a server in the middle. And will this affect SSL configurations (or make SSL impossible)? I'm intending to run nginx on the good computer. Thanks in advance : )
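
    A minimal sketch of the same idea using an SSH reverse tunnel instead of Squid; the VPS hostname and ports are placeholders:

        # run on the good computer; keeps a tunnel open from the VPS back to nginx
        ssh -N -R 0.0.0.0:8080:localhost:80 user@vps.example.com
        # users then browse http://vps.example.com:8080/ and reach the good computer
        # (the VPS sshd needs "GatewayPorts yes" for the public 0.0.0.0 bind)

    Since the tunnel just carries bytes, TLS would terminate at nginx on the good computer, so the certificate lives there rather than on the VPS.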

    Read the article

  • How do I get Tomcat 7 to start up faster in Linux CentOS kernel version 2.6.18?

    - by user1786833
    I am experiencing a problem with slow start-up times for Tomcat 7. I have done some testing by tweaking configuration parameters, both on Linux CentOS kernel version 2.6.18 and on Windows 7, using this link as my primary guide (http://wiki.apache.org/tomcat/HowTo/FasterStartUp), and managed only a modest improvement. The improvements seemed to come when I added the metadata-complete="true" attribute to the <web-app> element of my WEB-INF/web.xml file, and when I added the names of almost all the jars we use for our application to the tomcat.util.scan.DefaultJarScanner.jarsToSkip property in conf/catalina.properties. I've also used these JAVA_OPTS in the setenv.sh file:

        JAVA_OPTS="$JAVA_OPTS -server -Xms1536m -Xmx1536m -XX:MaxPermSize=256m -XX:NewRatio=2 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true"

    but actually saw my start-up times increase slightly. Our QA and production environments are on Linux CentOS, so I'm hoping to get more information on improving Tomcat 7 start-up times in that environment. My primary role is Java developer and I don't have much system administration experience, so I appreciate any input. Thank you for your time and suggestions.
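
    For reference, a sketch of the catalina.properties change described above; the jar name patterns here are illustrative, not the questioner's actual list:

        # conf/catalina.properties -- skip annotation/TLD scanning for known jars
        tomcat.util.scan.DefaultJarScanner.jarsToSkip=\
        spring-*.jar,hibernate-*.jar,mylib-*.jar

    Each jar Tomcat can skip is one less archive it opens and scans for annotations at start-up, which is why this property features so prominently in the FasterStartUp guide.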

    Read the article

  • OSX server setup suggestions

    - by Tom
    I am looking into the possibility of setting up an OSX server for my employees, and would like some input on the best approach to meet my needs, and perhaps some suggestions if I am moving in the wrong direction. I am thinking of a Mac Mini OSX server, and am not sure whether my needs will be met and what possibilities are out there. I want these capabilities:

        - Groups/users managed on the server
        - Shared folders and private folders for users/groups
        - Access to activated services
        - Server hosting software for the users (developing tools etc.)
        - Something similar to Windows Terminal Server: a virtual desktop environment (both local and over internet/VPN)
        - Possible to access through Mac and Windows

    The reason I am looking at an OSX server is that my employees almost only work in an OSX environment, and I want to offer the ability to log on to the server through some kind of terminal software and have full access to their OSX work environment and software from their Mac or PC, anywhere they might be, instead of having multiple setups and needing to spend a lot of time installing and setting up software on every client.

    This is a small business, where some work on the local network and others from the internet, preferably through VPN. A terminal server solution that is fast and easy to manage would be perfect for our needs. So if anyone has any experience with a similar setup, please let me know what you did and your experiences with it.

    Read the article

  • SQL Server: Network pauses after installing cheap SATA card: Is there a solution?

    - by samsmith
    At the risk of being assigned to the "bad DBA" club... I did something desperate, and may have to undo it.

    Problem: After installing a low-cost eSATA board, my SQL Server is intermittently unresponsive (seemingly when there is a lot of IO to the eSATA drive).

    Questions: 1) Is there a solution to the intermittent unresponsiveness that allows me to keep the eSATA in place? 2) Whether or not (1==true): what is a decent, low-cost way to add 1-3 TB of storage to SQL Server for non-critical DBs?

    Detail: Our SAN is full, and expanding it is costly and will take a month. I have a pressing need to add 1-3 TB for some development DBs (e.g. not mission critical; data loss is OK). As a bandaid, I threw a $20 eSATA PCI board into the Dell 1950 server and attached an external 2 TB eSATA drive. This seemed to work fine, but I notice that our production SQL DBs, and even remote desktop, now experience network "pauses" that they never did before (with both SQL client apps and remote desktop throwing "networking problem" errors). This SQL Server has lots of memory, and runs an instance of SQL 2005 (where all line-of-business apps reside) and an instance of SQL 2008 (for development DBs). SQL Server RAM has been appropriately configured, and this setup has run great for years. The server is:

        Dell 1950
        Win2003 x64
        14GB RAM
        PERC controller, 2 mirrored HDs internal
        Dell SAN over Gbit ethernet, dual homed
        2 PCIx slots (1 used by NIC for SAN, 1 now in use for eSATA board)

    Thank you for suggestions!

    Read the article

  • Windows 7: access denied to ONE server from ONE computer

    - by Gregory MOUSSAT
    We have a domain served by some Windows 2003 servers, and several Windows 7 Pro clients. ONE client computer can't access ONE member Windows 2003 server. The other computers can access all servers, and the same computer can access the other servers. With Explorer, the message says the account is not activated. With the command line, the message says the account is locked:

        net use X: \\server\share                                      --> several seconds' delay, then an error saying the account is locked
        net use X: \\server\share /USER:current_username               --> okay
        net use X: \\server\share /USER:domain_name\current_username   --> okay

    From the same computer, the user can access other servers. From another computer, the same user can access any server, including the one denied from the original computer. Already done: unjoined then rejoined the client to the domain; checked the logs on the server: nothing about the failed attempts (?!). Is there any user mapping I'm not aware of?
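
    One thing worth checking (a guess, not a diagnosis): a stale stored credential on the failing client would produce exactly this asymmetry, since the /USER: forms bypass the credential store:

        rem list stored credentials on the failing client
        cmdkey /list
        rem if an entry for the problem server shows up, remove it and retry
        cmdkey /delete:server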

    Read the article

  • Swapping RAID sets in and out of the same controller

    - by hazymat
    This is a really simple question, and the answer is probably encoded in various Wikipedia articles; however, my question is reasonably specific, and I need a bulletproof answer! I'm not sure if my question pertains to hardware RAID in general or to the specific RAID controller I'm working on. Either way, it is the Dell SAS 6/iR (this is an LSI sas1068e chipset). I simply want to:

        1. Remove a set of striped (RAID 0) disks from this RAID controller in a server
        2. Put in another set of disks and create a RAID 1 array (or create a new 'virtual disk', as they call it in the SAS 6/iR manual)
        3. Do stuff with the new RAID 1 array
        4. Have the option of putting back the old set of disks (the RAID 0 striped ones)

    I am quite sure this is possible, but I need some form of reliable, evidence-based answer, as it's for a client of mine and I need to migrate their data safely. The question: can I actually do the above? Does the RAID configuration get stored on the disks themselves, or in the hardware controller? Is any data stored in the hardware controller? If there is any chance I cannot completely restore operation of the first set of disks I removed, then I need to know about it! The manual alludes to the answer to this question (see page 45 of this document) and talks about activating an array of disks. I just need someone to confirm I can definitely do the above. See, simple question, right? :)

    Read the article

  • Install KVM based Windows 2008 remotely over SSH on a headless, no graphics Ubuntu 10.04 server?

    - by taazaa
    I have a Dell server at a remote data center with Ubuntu 10.04 as the host. It is a minimal install with the necessary virtualization packages. There is no X and the machine is headless. I have the Win2008 DVD in the machine and want to install it remotely. I tried:

        virt-install --connect qemu:///system -n vmwin2k8 -r 1024 \
            --disk path=server2k8.qcow2,size=50 --cdrom /dev/sr0 \
            --vnc --noautoconsole --os-type windows --os-variant win2k8

    The qcow2 image gets created. However, I don't understand how to connect to see the install via VNC. This is my first time doing it, so it may be trivial or may not be possible. Remotely I have a Win 7 machine with PuTTY and the RealVNC viewer. Where is the graphic output of VNC going? Do I have to have a VNC server running on the host or some other machine and then connect to it from my VNC client? Please let me know or point me in the right direction. I have been searching the web for several days to figure out how this is supposed to work. Thanks!
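
    A sketch of how the VNC side usually comes together with libvirt; the display number is an example, and the SSH user/host are placeholders:

        # on the Ubuntu host: ask libvirt which VNC display the guest was given
        virsh --connect qemu:///system vncdisplay vmwin2k8   # prints e.g. ":0" (port 5900)
        # qemu runs the VNC server on the host itself, so from the Win 7 machine
        # forward that port (PuTTY or ssh) and point RealVNC at localhost:5900
        ssh -N -L 5900:localhost:5900 user@remote-host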

    Read the article

  • Lenovo Thinkpad T430 not booting from HDD if there is a USB modem connected

    - by user93353
    I have a Lenovo Thinkpad T430 running Win7. I use a ZTE USB modem (something like this) for my internet connection. I usually keep the modem plugged into the USB port even when the laptop is shut down or hibernating. This worked fine on my earlier laptops, but with the Lenovo, my laptop doesn't boot if the modem is in the USB port. It shows the initial character-based screen where it gives the Thinkpad message and BIOS details, and then waits. If I pull out the modem, it goes ahead. I have disabled USB as a boot option in my BIOS settings, but even then this happens sometimes (but not all the time), likewise while resuming from hibernation. The USB modem also carries drivers and an ISP connection client, which get installed the first time you use it on any machine. I have used multiple laptops (HP, Dell, Acer, Gateway) but never faced this problem before. I have friends who use other Thinkpad models but haven't faced this issue. Any resolutions or workarounds for this?

    Read the article

  • rsync via cron. How do I enable logging?

    - by tetranz
    I'm backing up a remote server to another computer using rsync. In cron.daily I have a file with this:

        rsync -avz -e ssh [email protected]:/ /mybackup/

    It uses a public/private key pair to log in. This seems to work well most of the time; however, I've (foolishly) only ever really checked it by looking at the dates on some important files (MySQL dumps) that I know change every day. Obviously, an error could occur after that file. Sometimes it fails: when I run it manually, something like "client reset" sometimes happens. What is the best way to log it so that I can check with certainty whether it completed or not? The cron log doesn't indicate any errors. I haven't tried it, but the rsync man page on the oldish version of CentOS on the backup machine doesn't show the --log-file option. I guess I could redirect stdout with >, but I don't really want to know about every file; I just want to know if it all worked or not. Thanks
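
    A sketch for the cron.daily file: drop -v so the per-file chatter disappears, capture stderr, and record the exit status. The log path is a placeholder, and the source address mirrors the redacted one above:

        rsync -az -e ssh [email protected]:/ /mybackup/ >> /var/log/rsync-backup.log 2>&1
        echo "$(date): rsync exited with status $?" >> /var/log/rsync-backup.log

    With -v gone, the log stays small: errors such as the "client reset" plus one status line per run, where 0 means the transfer completed.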

    Read the article

  • Exchange 2013 really slow outside of localhost

    - by ItsJustJP
    We've got a 12-core Xeon server with 24GB of RAM running Windows Server 2012. We've recently migrated from Exchange 2010 (which was on another server) to Exchange 2013, which resides on our new 12-core server. Accessing OWA on the Exchange server itself is fine; it's very quick and responsive. However, accessing it from any other computer connected to the domain via a 1 Gbps connection takes 10-15 seconds to load. Also running slow are public calendars that people in my place need to access: again taking 10-15 seconds, and they can sometimes cause Outlook to stop responding. Further to that, we have phones that connect via the internet (of course) to the Exchange server so people can get work emails when they are out of the office. Guess what: this is also running slow. I have searched for many solutions and have tried changing Outlook authentication methods, but there is no change in speed. The old Exchange 2010 server no longer exists, but there was no problem before the migration. Has anyone got any suggestions? Thanks :) I should also mention that the Server 2012 machine that Exchange 2013 is installed on is also the DC.

    Update: It would appear that any connection via https is slow. It took more than 15 minutes for an Outlook client to download 50MB of emails (Outlook Anywhere).

    Read the article

  • Notepad++ FTP best solution

    - by yoda
    I've used Notepad++ for more than 2 years now, and there's only one thing it needs to be perfect for me: an actually-working FTP plugin. It has an FTP plugin, written by someone who left the project a long time ago, and since then nobody has had the courage to improve it. The problem is that it doesn't handle connections very well: sometimes it loses the connection with the server and literally "blocks", other times it doesn't save files properly, or it only loads half of the FTP files, etc.

    My question is: is there a way to use FTP with Notepad++ (without using its built-in FTP plugin or an FTP client like FileZilla)? I've tried using NetDrive, but it gets stuck sometimes (makes the editor crash), and every time the temporary file is refreshed by Windows/NetDrive, it loads the new file without asking and skips the pointer to the end of the file (very, very annoying). In case you know how to make the built-in Notepad++ FTP plugin work 100%, I'd be even happier! I'd like to have some feedback from you guys :) (I'm using Windows Vista.) Thanks in advance!

    Read the article

  • SMTP Server setting on Windows 2008 R2

    - by user223298
    I am very new to this and just trying to configure an SMTP virtual server. I have followed a few threads to get it all running, but the mails are not being delivered. What I have done so far:

        1. Installed the SMTP server.
        2. SMTP server properties:
           General tab - IP address is set to 'All Unassigned'.
           Access tab - Authentication is anonymous access. Everything else is left at the default settings.
           Delivery tab - Outbound security is anonymous access. In the Advanced section, entered the domain name in the FQDN field and localhost in the Smart host field.
        3. Created an inbound rule for the SMTP service to allow connections to port 25.

    When I try to telnet, everything works up until the point the mail has to be sent. The sender's domain is different from the receiver's domain; I'm not sure if settings have to be changed to allow that. I had set relay restrictions on the SMTP server, but because I couldn't send the mails, I thought I might as well make it work without the relay first. The error I see while sending the mail is:

        451 Timeout waiting for client input

    I used to get some other error before, when I had relay restrictions on. Can anyone please point me in the right direction? Please let me know if you need more information. Thanks.
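
    For reference, the telnet exchange being described looks roughly like this (the addresses are placeholders; the 451 would appear at whichever step the session stalls):

        telnet localhost 25
        HELO example.com
        MAIL FROM:<sender@example.com>
        RCPT TO:<recipient@example.org>
        DATA
        Subject: test message

        body text
        .
        QUIT

    Noting exactly which command draws the 451 narrows down whether it is the virtual server, the smart host, or the remote side that is timing out.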

    Read the article

  • Executing a git command using remote PowerShell results in a NativeCommandError

    - by user204777
    I am getting an error while executing a remote PowerShell script. From my local machine I am running a PowerShell script that uses Invoke-Command to cd into a directory on a remote Amazon Windows Server instance, and a subsequent Invoke-Command to execute a script that lives on that server instance. The script on the server is trying to git clone a repository from GitHub. I can successfully do things in the server script like "ls" or even "git --version". However, git clone, git pull, etc. result in the following error:

        Cloning into 'MyRepo'...
            + CategoryInfo          : NotSpecified: (Cloning into 'MyRepo'...:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    This is my first time using PowerShell or a Windows server. Can anyone provide some direction on this problem? The client script:

        $s = new-pssession -computername $server -credential $user
        invoke-command -session $s -scriptblock { cd C:\Repos; ls }
        invoke-command -session $s -scriptblock { param ($repo, $branch) & '.\clone.ps1' -repository $repo -branch $branch } -ArgumentList $repository, $branch
        exit-pssession

    The server script:

        param([string]$repository = "repository", [string]$branch = "branch")
        git --version
        start-process -FilePath git -ArgumentList ("clone", "-b $branch https://github.com/MyGithub/$repository.git") -Wait

    I've changed the server script to use start-process, and it no longer throws the exception. It creates the new repository directory and the .git directory, but doesn't write any of the files from the GitHub repository. This smells like a permissions issue. Once again, invoking the script manually (remote desktop into the Amazon box and execute it from PowerShell) works like a charm.
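
    A detail worth knowing when testing this (an observation about PowerShell remoting in general, not a confirmed fix for this case): git writes its progress messages to stderr, and remoting surfaces any stderr output from a native command as a NativeCommandError even when the command succeeds. Redirecting stderr in the server script is one way to test that theory:

        # clone.ps1 variant (sketch): merge git's stderr into the output stream so
        # remoting does not turn progress messages into a RemoteException
        git clone -b $branch "https://github.com/MyGithub/$repository.git" 2>&1 | Out-String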

    Read the article

  • Best way to say "sync all system clocks to this server, when and ONLY when I say so?" Mixed setup of Windows+Linux servers.

    - by twblamer
    The title pretty much explains it. Let's say there are 100 servers, various versions of Windows and Linux, and one Windows server is the "master clock." I did look at this question: How do I synchronize clocks between Linux and Windows? It hints that ntp can do what I want if I run "ntpd -q" on a client (?). If I install ntp, I also need to guarantee that it will only sync the times when I force it to. Even better if I have a log that tells me every time a sync was performed. I'm doing benchmark runs, and I need to be able to say something like this: "Clocks were synced on all the benchmark systems at 09:42:01am on the master. A benchmark run was then initiated and allowed to run for six hours. None of the system clocks were altered during this time interval." I understand there is subsequent clock drift, but for now that's the way we're doing things, and I'm doing it with a manual process. I'd rather at least automate the one-time sync.
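
    A sketch of a one-shot, on-demand sync against the master, with a log line per sync; the hostname is a placeholder, and no ntp daemon is left running on either platform:

        # Linux clients
        ntpdate -u master.example.com && logger -t clocksync "synced against master"
        # Windows clients (w32tm ships with the OS; the sync is logged in the event log)
        w32tm /config /manualpeerlist:master.example.com /syncfromflags:manual /update
        w32tm /resync

    Because nothing runs as a daemon, the clocks are only ever touched when these commands are issued, which matches the "when and ONLY when I say so" requirement.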

    Read the article

  • Mail server DNS fails to resolve for Mac clients

    - by Concordus Applications
    We have two internal DNS servers: one is located on a Linux server box, and the other is the router's DNS management. We set the Linux box as primary DNS via DHCP and the router as secondary. We have a few Mac clients that access our internal mail server (hostname "mail" internally). When using IMAP or SMTP against the mail server internally, the Mac boxes will sometimes fail to locate the server. If I use nslookup, I can see that "mail" points to the correct IP address and is resolved by the correct DNS server, but if I ping "mail" it fails.

        ~ (bash)$ nslookup mail
        Server:   254.254.254.206
        Address:  254.254.254.206#53

        Name:     mail.example.com
        Address:  254.254.254.205

    Note: I replaced our actual internal IP addresses with 254.254.254.*. If I wait a few minutes (3-5 minutes), somehow it resolves itself and sends successfully. This happens multiple times a day. The /etc/hosts file on the Mac boxes is the default config:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Is there something about Mac clients I should know to prevent this failed DNS resolution? Client boxes are OSX 10.7.4, 8GB RAM, i5 MacBooks. The server is Ubuntu 12.04 Server.
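
    One diagnostic worth running on a failing Mac, as a sketch: nslookup queries the DNS server directly, while ping and the mail client go through the system resolver, so the two can disagree:

        # ask the system resolver (what IMAP/SMTP actually use) for the name
        dscacheutil -q host -a name mail
        # flush the resolver cache on 10.7 to see whether the stall clears
        sudo killall -HUP mDNSResponder

    If dscacheutil fails while nslookup succeeds, the problem sits in the Mac's resolver behaviour (search domains, cache) rather than in the DNS servers themselves.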

    Read the article

  • How do I tunnel an HTTPS proxy through a virtual machine (VMWare)

    - by Kyle
    I have a personal setup at home using VMWare Workstation. I also have a set of virtual private machines that run Squid, and therefore provide me HTTPS proxy tunnels. Using Proxifier, I can tunnel all traffic for given applications through these tunnels. However, I also have a few virtual machines for dev/staging/experimentation/etc. I generally just use NAT to provide internet access to the machines, and if I need to use these proxies, I can just set up Proxifier (or a Linux equivalent) to pipe the traffic through them. No problem. But... I got to thinking: wouldn't it be great if I could assign these proxy tunnels to a virtual machine, so that when I start up the VM, it has instant-on access through the tunnel and not my local connection? (EDIT: Of course, it would USE my local connection, but it would tunnel traffic through the proxy.)

    To be more clear: I want a solution that binds the proxy to a VM, so that when I start the VM, I don't have to use a proxy client to connect to the tunnel; all traffic from that VM is already being piped through that proxy. I did a bit of searching, and the closest thing I could find was this: How to route public static IP to a virtual machine on a vmware ESXi host? Which wasn't all that applicable. The proxies are protected by user/pass but do not filter by IP. Again, they are HTTPS proxies set up through Squid. Any ideas on how to make this happen? Thanks a ton.
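
    One possible shape for this (a sketch under assumptions, not a tested recipe): run a transparent redirector such as redsocks inside a Linux VM, so every TCP connection the VM makes is pushed through the Squid CONNECT proxy from the moment the VM boots. The proxy address and credentials are placeholders:

        # /etc/redsocks.conf (sketch)
        base {
            log_info = on;
            daemon = on;
            redirector = iptables;
        }
        redsocks {
            local_ip = 127.0.0.1;
            local_port = 12345;
            ip = proxy.example.com;      # the Squid tunnel endpoint
            port = 3128;
            type = http-connect;         # CONNECT-style HTTPS proxy
            login = "user";
            password = "pass";
        }

        # then redirect the VM's outbound TCP into redsocks (run at boot);
        # traffic to the proxy itself is exempted first to avoid a loop
        iptables -t nat -A OUTPUT -p tcp -d proxy.example.com -j RETURN
        iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports 12345

    Because the redirection lives inside the guest, the binding travels with the VM: start it and the tunnel is simply there, with no proxy client involved.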

    Read the article

  • Downloading Emails locally with Thunderbird

    - by r_honey
    I have been using Gmail (the web interface) for some time now, and have well over 20 labels and some thousand mails there, archived under different labels. Now I want a local copy of all my mail, and the following points are important in this context:

        1. The primary mail access mechanism would continue to be the Gmail web interface for me; I just want a backup of my mail account locally.
        2. Ideally the mails should download locally into folders named after the Gmail labels (I know this is possible via IMAP, but probably not via POP).
        3. After all my mails are available locally, I will delete most of them in Gmail to free up space, and because I want to archive them. The mails should continue to exist locally and should not be deleted when I delete them from the Gmail web interface.
        4. I would be syncing my Gmail account locally, let's say, every month. The new mails that I have sent/received during this period should come over to my local mailbox, into the folders named after the Gmail labels.

    I do understand that Gmail maintains a single copy of an email that has 2 different labels, and that such an email would get duplicated locally in the 2 folders; I am okay with that. Essentially, as you can see, I just want to archive my mails from the Gmail server to a local backup and then sync (one way, from Gmail to local) new mails at regular intervals. For some points above POP seems to be the option, while IMAP seems right for the others. I am really confused and need help deciding which of POP or IMAP would suit me best. I have currently chosen Thunderbird as my local email client, but would have no problem switching to Outlook or anything else as long as I get my desired archiving functionality.

    Read the article

  • Fedora 17 transparent Ethernet Bridge not forwarding IP traffic

    - by mcdoomington
    I am running Fedora 17 with the latest ebtables and have been trying to set up a transparent bridge. Using the following script, I send a ping through the bridged host and only see the requests on the bridge (among other traffic from eth0), BUT arps and arp replies are making it through. My host is set up as:

        Client 192.168.1.10 <-- eth0 -- eth2 --> 192.168.1.20

    The bridge script:

        #!/bin/sh
        brctl addbr br0
        brctl stp br0 on
        brctl addif br0 eth0
        brctl addif br0 eth2
        (ifdown eth0 1>/dev/null 2>&1)
        (ifdown eth2 1>/dev/null 2>&1)
        ifconfig eth0 0.0.0.0 up
        ifconfig eth2 0.0.0.0 up
        echo "1" > /proc/sys/net/ipv4/ip_forward

        ebtables -P INPUT DROP
        ebtables -P FORWARD DROP
        ebtables -P OUTPUT DROP
        ebtables -A FORWARD -p ipv4 -j ACCEPT
        ebtables -A FORWARD -p arp -j ACCEPT

    Any assistance would be great!
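
    One thing to check (an assumption about the cause, not a confirmed diagnosis): with the bridge-netfilter code active, bridged IPv4 frames are also handed to iptables' FORWARD chain, which can silently drop them even when ebtables accepts them:

        # see whether bridged frames are being passed to iptables (1 = yes)
        cat /proc/sys/net/bridge/bridge-nf-call-iptables
        # temporarily keep bridged traffic out of iptables and retest the ping
        echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables

    ARP making it through while IPv4 does not fits this pattern, since only IP traffic takes the bridge-nf detour into iptables.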

    Read the article

  • Softfail / Failure Notice on SMTP

    - by pascal1954
    I've been searching for an answer for about 24 hours now and I still can't find any really useful help. The problem appears as follows: I'm running a Debian server with Plesk 9.5.3 and qmail. For a few weeks I have not been able to send mail to some particular servers (like web.de, aol.com); for these I get failure notices like "Sorry, I wasn't able to establish an SMTP connection." But when I try to send mail to gmail.com, it works! Gmail only reports a softfail in the mail header, like so:

        Received: from h1600XXX.?none? (DOMAIN2.TLD [XX.XXX.XX.XX])
        Received-SPF: softfail (google.com: best guess record for domain of transitioning
            [email protected] does not designate 85.XXX.XX.XX as permitted sender)
            client-ip=85.XXX.XX.XX

    This sounds like a DNS problem to me, but I can't get an answer for that. What makes me wonder is:

        - h1600XXX is correct, but it should look like h1600XXX.stratoserver.net, not ?none?
        - DOMAIN2.TLD (first line) is different from DOMAIN1 (second line). Both are hosted on this machine, but is this correct?
        - DOMAIN1 is the one I send this mail from.

    Hopefully someone can help me! If you need more specific information, let me know. Thanks in advance!!! Best regards
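
    Since Google falls back to a "best guess record" only when a domain publishes no SPF at all, publishing one for the sending domain would address the softfail; a zone-file sketch, with the domain and IP as the placeholders from the headers above:

        ; authorize the server's address to send mail for DOMAIN1.TLD
        DOMAIN1.TLD.    IN TXT    "v=spf1 a mx ip4:85.XXX.XX.XX ~all"

    The refused SMTP connections at web.de and aol.com may be a separate issue (reverse DNS or IP reputation), since those servers are rejecting the connection before any SPF check happens.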

    Read the article
