Search Results

Search found 11280 results on 452 pages for 'zend newbie dev'.


  • Basic OpenVPN setup not working

    - by WalterJ89
    I am attempting to connect two Windows 7 computers (one x64, one x32; there will be four in total) using OpenVPN. Right now they are on the same network, but the intention is to be able to access the clients remotely regardless of their location. The problem I am having is that I am unable to ping or tracert between the two computers. They seem to be on different subnets even though I have the mask set to 255.255.255.0. The server ends up as 10.8.0.1 with mask 255.255.255.252, the client as 10.8.0.6 with mask 255.255.255.252, and a third ends up as 10.8.0.10. I don't know if this is a Windows 7 problem or something I have wrong in my config. It's a very simple setup; I'm not connecting two LANs.

    This is the server config (extra lines removed):

        port 1194
        proto udp
        dev tun
        ca keys/ca.crt
        cert keys/server.crt
        key keys/server.key  # This file should be kept secret
        dh keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 6

    This is the client config:

        client
        dev tun
        proto udp
        remote thisdomainis.random.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca keys/ca.crt
        cert keys/client.crt
        key keys/client.key
        ns-cert-type server
        comp-lzo
        verb 6

    Is there anything I missed in this? The keys are all correct and the VPNs connect fine; it's just a subnet or route issue. Thank you.

    EDIT: It seems the server's openvpn-status.log has the routes for the client:

        OpenVPN CLIENT LIST
        Updated,Wed May 19 18:26:32 2010
        Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
        client,192.168.10.102:50517,19157,20208,Wed May 19 17:38:25 2010
        ROUTING TABLE
        Virtual Address,Common Name,Real Address,Last Ref
        10.8.0.6,client,192.168.10.102:50517,Wed May 19 17:38:56 2010
        GLOBAL STATS
        Max bcast/mcast queue length,0
        END

    Also, this is from the client log file, and it seems to be correct:

        C:\WINDOWS\system32\route.exe ADD 10.8.0.0 MASK 255.255.255.0 10.8.0.5

    Another EDIT: 'route print' on the server shows the route:

        Destination   Mask             Gateway    Interface
        10.8.0.0      255.255.255.0    10.8.0.2   10.8.0.1

    The same on the client shows:

        10.8.0.0      255.255.255.0    10.8.0.5   10.8.0.6

    So the routes are there. What can the problem be? Is there anything wrong with my configs? Why would OpenVPN be having problems communicating?
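
    A detail worth checking here - this is a sketch of a likely cause, not a confirmed diagnosis: the 255.255.255.252 masks and the 10.8.0.1 / 10.8.0.6 / 10.8.0.10 spacing are exactly what OpenVPN's default "net30" topology produces, handing each Windows client its own /30. On OpenVPN 2.1 or later, the flat subnet topology puts everyone in one /24:

        # server config - assumes OpenVPN 2.1+ on both server and clients
        topology subnet
        server 10.8.0.0 255.255.255.0
        # clients now receive plain /24 addresses (10.8.0.2, 10.8.0.3, ...)
        # instead of one /30 network per client

    Even with routes in place, note that the Windows 7 firewall drops inbound ICMP echo by default, so pings between clients can fail while the tunnel itself is fine; allowing ICMP is worth testing before touching the VPN config.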

  • Domino 8.5.3: Modify Subject of Incoming Email

    - by Void
    I am a newbie in managing and developing for Domino. Recently, I received a request from other teams at work to set up a filter or agent for incoming mail. The requirements:

    1. Look for incoming mail addressed to #CRITICAL (a multipurpose, internal group containing a list of engineers)
    2. For mail matching point 1, prepend "For Immediate Action: " to the Subject

    Some restrictions I have:

    - Only the Domino server is under my charge; I am not to touch the network side or other servers
    - No 3rd-party software may be installed

    I have gone through the configurations on the Domino server, and the closest thing I have to filtering email is the Router/SMTP Restrictions... Rules. But this cannot fulfill point 2 in any way. Is this even possible using just Domino server settings, or through agents?

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500 GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly, to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
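
    For reference on the first question: with S3 versioning enabled, a DELETE only inserts a delete marker, and the previously stored version remains retrievable even for an object that was uploaded once and never modified. A minimal sketch with the AWS CLI (the bucket names are placeholders, not from the original question):

        # one-time: turn on versioning for the uploads bucket
        aws s3api put-bucket-versioning \
            --bucket my-uploads-bucket \
            --versioning-configuration Status=Enabled

        # nightly: server-side bucket-to-bucket copy; the data is copied
        # within S3 and is not downloaded to the client running the command
        aws s3 sync s3://my-uploads-bucket s3://my-uploads-backup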

  • Unencrypted Image of a TrueCrypt-Encrypted System Partition

    - by Dexter
    The general tenor around the internet seems to be that you can't create images of system partitions that have been encrypted (with TrueCrypt) other than with dd or similar sector-by-sector copy tools. These files, however, are very impractical given their size (and are obviously incompressible), which makes keeping multiple states/backups of your system partition rather expensive (especially considering current HDD prices). The problem is that backup tools (like Acronis True Image, Clonezilla, etc.) won't give you the option to create an image of (mounted/opened) TrueCrypt partitions, or there is no recovery environment for restoring the backup that would allow running TrueCrypt before doing any actual restoring.

    After some trial and error, however, I believe I have found a very simple way. Since TrueCrypt (running in Linux) creates a virtual block device that it uses for mounting the unencrypted partitions into the file system, partclone can be used for creating/restoring images. What I did:

    1. Boot up a Linux live disk
    2. Mount/open the drive/device/partition in TrueCrypt
    3. Unmount the filesystem mount point again, like so: umount /media/truecryptX ("X" being the partition number assigned by TrueCrypt)
    4. Use partclone (this is what Clonezilla would do too, except that Clonezilla only offers to back up real drive partitions, not virtual block devices): partclone.ntfs -c -s /dev/mapper/truecryptX -o nameOfBackupFile

    For restoring, steps 1-3 remain the same, and step 4 is: partclone.ntfs -r -s nameOfBackupFile -o /dev/mapper/truecryptX

    A backup and test restore of the system (with this method) seems to have worked fine (and the changed settings were reverted to the backup state). The backup file is ~40 GB (and compressible down to <8 GB with 7zip/LZMA2 on the "fast" setting). I can't quite believe that I'm the only one who wants to create images of encrypted drives but doesn't want to waste 100 GB on the backup of one single system state.

    So my question now is: given how simple this was, and that no one seems to mention anywhere that this is possible - did I miss something? Or did I do something wrong? Is there any situation that I didn't think of where this method will fail? Obviously, the backup file needs to be stored in some other encrypted place in order to remain confidential, since it is unencrypted. Also, in order to do a full "bare metal" restore, one would have to actually first (re-)install Windows, encrypt it, and only then restore the backup file. The funny thing, however, is that you won't need to back up any partition tables, etc., since the reinstall will effectively take care of that. Is there anything else? This is IMHO still a lot better than having sector-by-sector images.
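
    For convenience, the backup step can be wrapped in a small script. This is only a sketch built from the steps above; the device-mapper slot and output path are placeholders to adjust:

        #!/bin/sh
        # back up a TrueCrypt-opened NTFS system partition with partclone
        TC_DEV=/dev/mapper/truecrypt1       # adjust to the slot TrueCrypt assigned
        OUT=/mnt/backup/sysimage.partclone  # store somewhere itself encrypted

        # the filesystem must not be mounted while partclone reads it
        umount /media/truecrypt1 || exit 1
        partclone.ntfs -c -s "$TC_DEV" -o "$OUT"   # -c = clone partition to image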

  • Hybrid Gmail MX + Postfix for local accounts

    - by krunk
    Here's the setup: We have a domain, mydomain.com. Everything is on our own server, except general email accounts, which are through Gmail. Currently Gmail is set as the MX record. The server also has various email aliases it needs to support for bug trackers and such, e.g.:

        [email protected] |/path/to/issuetracker.script

    I'm struggling with a setup that allows the following, both locally and from users' email clients:

    - guser1 - has a Gmail account and a local account
    - guser2 - only has a Gmail account
    - bugs - has a pipe alias in /etc/aliases for the issue tracker

    Scenarios:

    - mail to [email protected] from the local host (crons and such) needs to go to the Gmail account
    - mail to [email protected] from the local host
    - mail to [email protected] needs to be piped to the local issue tracker script

    The first stab was creating a transport map. In this scenario, our server would be set as the MX and guser*-destined emails are sent to Gmail. Put the Gmail users in a map like so:

        [email protected] smtp:gmailsmtp:25
        [email protected] smtp:gmailsmtp:25

    Problems:

    - Ignores extensions such as [email protected]
    - Only works if append_at_myorigin = no (if set to yes, Gmail refuses to connect with: E4C7E3E09BA3: to=, relay=none, delay=0.05, delays=0.02/0.01/0.02/0, dsn=4.4.1, status=deferred (connect to gmail-smtp-in.l.google.com[209.85.222.57]:25: Connection refused))
    - Since append_at_myorigin is set to no, all received emails have (unknown sender)

    The second stab was to set explicit localhost aliases in /etc/aliases and do a domain-wide forward on mydomain. This too requires setting the local server as the MX:

        # /etc/aliases
        root: root@localhost

        # transport
        mydomain.com smtp:gmailsmtp:25

    Problems: If I create a transport map for a domain that matches "$myhostname", the aliases file is never parsed. So when a local user (or daemon) sends an email like:

        mail -s "testing" root < text.txt

    Postfix ignores the /etc/aliases entry, maps it to [email protected], and attempts to send it through the Gmail transport mapping.

    Third stab: Create a subdomain for the bugs, something like bugs.mydomain.com. Set the MX for this domain to the local server and leave the MX for mydomain.com pointing to the Gmail server. Problems: this does not solve the issue with local accounts. So when the bug tracker responds to an email from [email protected], it uses a local transport and the user never receives the email.

    % postconf -n

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_at_myorigin = no
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = $myhostname, localhost.$myhostname, localhost
        myhostname = mydomain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_cert_file = /etc/ssl/certs/kspace.pem
        smtp_tls_enforce_peername = no
        smtp_tls_key_file = /etc/ssl/certs/kspace.pem
        smtp_tls_note_starttls_offer = yes
        smtp_tls_scert_verifydepth = 5
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_recipient_restrictions = permit_mynetworks, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_destination
        smtpd_tls_ask_ccert = yes
        smtpd_tls_req_ccert = no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
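
    On the extension problem from the first stab, one possible direction - a sketch only, reusing the poster's placeholder hostname gmailsmtp and assuming Postfix's pcre map support is installed - is a regexp-based transport map, which can match the +extension forms that a hash map misses:

        # /etc/postfix/transport.pcre
        /^guser1(\+.*)?@mydomain\.com$/   smtp:gmailsmtp:25
        /^guser2(\+.*)?@mydomain\.com$/   smtp:gmailsmtp:25

        # main.cf (pcre tables need no postmap step)
        transport_maps = pcre:/etc/postfix/transport.pcre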

  • Install the latest kernel on Ubuntu and have GRUB recognize the newly installed kernel

    - by Phuong Nguyen
    I have an installed Ubuntu distro (Karmic 9.10) already. However, due to some problems with the xorg ATI driver, I cannot put my computer into standby. Someone suggested that I try the latest version of the xorg driver, which in turn requires a newer Linux kernel (2.6.33) than the newest release available from the Ubuntu central repository. I have searched through several articles on how to install a custom Linux kernel, but these articles are from around 2004/2005 and they talk about LILO (???). Because of that, I'm afraid I cannot make GRUB recognize the new Linux kernel properly (I'm just a newbie to Linux). I would love to know how to install the kernel into Ubuntu and have GRUB acknowledge the newly installed kernel.
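
    For what it's worth, installing a packaged kernel generally takes care of the boot loader automatically. A sketch, assuming .deb packages downloaded from Ubuntu's mainline kernel build archive (kernel.ubuntu.com/~kernel-ppa/mainline):

        # install the kernel image and headers packages
        sudo dpkg -i linux-image-2.6.33-*.deb linux-headers-2.6.33-*.deb

        # the package hooks normally refresh the GRUB menu on their own,
        # but it can also be run explicitly
        sudo update-grub

    On reboot, the new kernel should appear as the default menu entry, with the old kernels still selectable as a fallback.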

  • What is the garbage text being printed by wvdial in the terminal?

    - by Hrishi
    When I dial using wvdial, it sometimes prints some garbage text to the terminal. This does not happen every time, but within the garbage I can see some readable strings, often IRC logs (from XChat) or GET requests from the browser. One of my friends told me that this is probably something it's reading from /dev/random for random entropy, but I couldn't find any supporting information. What is this text, and why is it being printed to the terminal?

  • Windows XP Professional Version 2002 SP3 restarts when I try to Hibernate

    - by dario_ramos
    I googled this and tried some solutions, but nothing seems to work. In the past, hibernate worked fine. Someone told me this can be caused by specific hardware, and I have lots of that, because this is a dev machine we use to develop modules that interact with many kinds of hardware. Moreover, this machine uses RAID, which might have something to do with it. Is there any way to troubleshoot this by looking at some log file or using some tool?

  • Routing and ARP

    - by adolgarev
    If I have:

        > ip ro
        192.168.14.0/24 dev eth0

    another host can obtain my MAC address. But if I flush the routing info:

        > ip ro flush table main

    ARP resolution no longer works. Broadcast packets "Who has 192.168.14.149" reach eth0, but the OS (Linux) doesn't respond, even though eth0 has the address 192.168.14.149. What connection exists between routing and ARP resolution?
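
    A quick way to observe the relationship - a demonstration sketch assuming root on the affected host: the usual explanation is that the kernel validates the ARP requester's address with a route lookup before replying, so with the table flushed that lookup fails and the request is silently ignored, and restoring the connected route brings replies back:

        # terminal 1: watch ARP on the wire
        tcpdump -n -i eth0 arp

        # terminal 2: drop and restore the connected route
        ip route flush table main               # requests still arrive, no replies sent
        ip route add 192.168.14.0/24 dev eth0   # replies resume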

  • Advice on moving a machine room to a new location?

    - by MikeJ
    Our company is moving to new offices in a couple of months, and I am responsible for looking after the move of the company's development servers. Most of the dev equipment is in five 42U cabinets, plus a rack for switching/routing equipment. How do most people do this sort of thing? Move the cabinets whole, or extract the individual components and move the racks empty? Any advice on prep and shutdown before the move would be welcome.

  • Shouldn't emesene themes change the emesene indicator icon?

    - by D Connors
    As it stands right now, I can't get the emesene icon in the indicator applet (or notification area, or whatever it's named) to change. No matter what icon theme I choose, even after installing new themes, the icon is always the default one. Is that standard behavior, or am I hitting a bug? I'm running the latest version of emesene from their PPA (1.6-dev) on Lucid Lynx.

  • Why is the PDF format used?

    - by dan_vitch
    I will admit that I am new to the tech/dev field. It seems to have become a trend that every time I have to work with PDFs, a part of me dies. Why is this format as ubiquitous as it seems to be? Is it just non-tech people who prefer PDFs?

  • HTTP benchmarking?

    - by Sam Williams
    I'm running Varnish + nginx (php-fpm) and I'm using ab, but it keeps messing up:

        [root@localhost src]# ab -k -n 100000 -c 750 http://192.168.135.12/index.php
        This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Copyright 2006 The Apache Software Foundation, http://www.apache.org/

        Benchmarking 192.168.135.12 (be patient)
        apr_socket_recv: Connection reset by peer (104)

    Is there anything else I can use? Or am I doing it wrong?
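
    Two things that might help - a sketch, not a guaranteed fix: newer ApacheBench builds have an -r flag that keeps the run going after socket receive errors instead of aborting (the 2.0.40-dev build above may predate it), and siege is a common alternative that tolerates connection resets:

        # ab with -r, where the installed build supports it
        ab -r -k -n 100000 -c 750 http://192.168.135.12/index.php

        # siege: 750 concurrent users for 60 seconds
        siege -c 750 -t 60S http://192.168.135.12/index.php

    The reset itself often originates on the server side (listen backlog, php-fpm worker limits), so lowering -c is a useful sanity check too.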

  • Managing Hyper-V remotely

    - by Péter
    I'll say in advance that I'm a newbie on this topic. I have a Win8 (home) machine with Hyper-V installed, behind a router. The router has a public IP with a domain attached. I have another Win8 (work) machine, also with Hyper-V installed. I want to access my home Hyper-V via Hyper-V Manager so I can manage my virtual machines from work. I found this article but I don't know if it's applicable to me. I thought that simple port forwarding should work: I would only need to point the work Hyper-V Manager at my domain and the port I choose, and if a login form pops up, fill in the user credentials of my home computer. How can I solve this? My thoughts revolve around:

    - Port forwarding - set domain+port and use my home user account
    - Setting up a VPN and using the local IP address of my home computer (this looks a little cumbersome, and my router only supports PPTP)

    I'm open to any other solution too. Thanks, Péter

  • How to tune TCP TIME_WAIT timeout on Solaris?

    - by Hongli Lai
    I'm trying to change the TCP TIME_WAIT timeout on Solaris. According to some Google results, I need to run this command:

        ndd -set /dev/tcp tcp_time_wait_interval 60000

    However I get:

        operation failed: Not owner

    What am I doing wrong? I'm already running ndd as root. Is there another way to tune TIME_WAIT?
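
    One known cause of that exact message - offered as a sketch of a likely explanation, not a confirmed diagnosis: in a shared-IP non-global Solaris zone, the TCP/IP stack belongs to the global zone, so stack-wide tunables cannot be set from inside the zone even as root. From the global zone the same command works:

        # run in the global zone; the value is in milliseconds
        ndd -set /dev/tcp tcp_time_wait_interval 60000

        # confirm the new value
        ndd -get /dev/tcp tcp_time_wait_interval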

  • Could not upload .htaccess

    - by syalam
    I am using Springloops to automatically take my SVN repo and deploy it onto my server. I am getting the following error:

        Could not upload .htaccess
        Could not upload .htaccess using BINARY transfer
        ----------------------------------------------------
        Connecting to dev.convrrt.com
        Logging in as convrrt
        Entering destination directory ~/
        Entering passive mode
        REVISION: 1 -> 30
        Getting changes
        Deleting files
        Removing directories
        Creating directories and files
        Extracting file: .htaccess...OK
        Uploading file: .htaccess [644] R: interrupted

    How can I diagnose this?
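
    One way to narrow it down - a sketch using the hostname and user shown in the log above: repeat the upload by hand with a verbose command-line client and watch exactly where the transfer is cut off:

        # verbose manual FTP upload of the same file
        curl -v -T .htaccess ftp://convrrt@dev.convrrt.com/

    If the manual transfer is also interrupted, the problem is on the server or firewall side (passive-mode data ports are a frequent culprit) rather than in Springloops itself.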

  • RHEL json gem installation requires make?

    - by Salahuddin559
    When I try to install the json gem (gem install json), at first it fails because of a dev package issue. After fixing that, it fails again, saying "sh: make: command not found" and "ERROR: Error installing json: ERROR: Failed to build gem native extension.". Why is it failing on make? Note this is not a Mac; this is RHEL 5 (4 or 5, not sure). Why is it not able to build the gem native extension?
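
    The json gem compiles a C extension at install time, which is what invokes make. A sketch of the usual fix on RHEL - package names may vary slightly between RHEL 4 and 5:

        # install the build toolchain and ruby headers
        sudo yum install make gcc ruby-devel

        # or pull in the full toolchain in one go
        sudo yum groupinstall "Development Tools"

        gem install json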

  • Use a bootable VHD with VirtualBox

    - by user18151
    I want to create a bootable Windows 7 VHD using the steps mentioned at: http://www.microsoft.com/downloads/details.aspx?FamilyID=80ede31d-3509-407b-a896-0beea8705589&displaylang=en However, I want to know whether I will also be able to access the VHD using VirtualBox. I intend to install VS2008 in the VM and use it in VirtualBox when doing quick work, and on native hardware when doing a lot of work. I don't want to mess up my actual Win7 installation with VS2008 dev work.
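
    VirtualBox can use VHD files natively, so attaching the same file to a VM is possible in principle. A sketch - the VM name, controller name, and path below are placeholders, and running one Windows installation both natively and virtualized has its own driver and activation caveats:

        # attach an existing VHD to a VM's storage controller
        VBoxManage storageattach "Win7Dev" \
            --storagectl "SATA Controller" \
            --port 0 --device 0 --type hdd \
            --medium /path/to/win7.vhd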

  • Cross-platform restart of Apache

    - by l0b0
    I'd like to have a single command that'll restart Apache on any *nix OS. Currently I'm working with Ubuntu, which has:

    - /usr/sbin/apache2ctl
    - /usr/sbin/service
    - no apachectl, no httpd

    and Scientific Linux CERN 5, which has:

    - /usr/sbin/apachectl
    - /etc/init.d/httpd
    - no apache2ctl, no service

    I'd like to avoid using a hack like:

        which service 2>/dev/null || which /etc/init.d/httpd
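
    A small wrapper that probes the known locations is one way to reduce this to a single command - a sketch that simply generalizes the probing above:

        #!/bin/sh
        # restart-apache.sh: try the common Apache control commands in order
        for cmd in apache2ctl apachectl; do
            if command -v "$cmd" >/dev/null 2>&1; then
                exec "$cmd" restart
            fi
        done
        [ -x /etc/init.d/httpd ]   && exec /etc/init.d/httpd restart
        [ -x /etc/init.d/apache2 ] && exec /etc/init.d/apache2 restart
        echo "no known Apache control command found" >&2
        exit 1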

  • Mount Windows share on Linux boot

    - by Delameko
    I'm running VirtualBox on Windows, with Linux (Ubuntu 10.04) installed as a VM. Whenever I log in, I have to run the following command to mount my shared Windows web dev folder:

        sudo mount.vboxsf web_apps /mnt/web_apps

    Where can I put this line (minus the sudo) so that it runs once when Linux boots up? I'm guessing there must be a root .profile or .login script that runs at some point?
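
    Two standard places for a boot-time mount - a sketch, assuming the vboxsf kernel module is available at boot (it is once the Guest Additions are installed):

        # option 1: a line in /etc/fstab, mounted automatically at boot
        web_apps  /mnt/web_apps  vboxsf  defaults  0  0

        # option 2: a line in /etc/rc.local, above its final "exit 0";
        # rc.local runs once as root at the end of boot
        mount.vboxsf web_apps /mnt/web_apps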

  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with Nagios. I've created the following command definition and shell script, but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable.

        [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh

    Command definition:

        define command {
            command_name custom_check_gitlab
            command_line /var/lib/nagios/custom_plugins/check_gitlab.sh
        }

    Shell script:

        #! /bin/sh
        # [...]
        RAILS_ENV="production"

        # Script variable names should be lower-case not to conflict with
        # internal /bin/sh variables such as PATH, EDITOR or SHELL.
        app_root="/home/git/gitlab"
        app_user="git"
        unicorn_conf="$app_root/config/unicorn.rb"
        pid_path="$app_root/tmp/pids"
        socket_path="$app_root/tmp/sockets"
        web_server_pid_path="$pid_path/unicorn.pid"
        sidekiq_pid_path="$pid_path/sidekiq.pid"

        ### Here ends user configuration ###

        # Switch to the app_user if it is not he/she who is running the script.
        if [ "$USER" != "$app_user" ]; then
          sudo -u "$app_user" -H -i $0 "$@"; exit;
        fi

        # Switch to the gitlab path, if it fails exit with an error.
        if ! cd "$app_root" ; then
          echo "Failed to cd into $app_root, exiting!"; exit 1
        fi

        ### Init Script functions

        check_pids(){
          if ! mkdir -p "$pid_path"; then
            echo "Could not create the path $pid_path needed to store the pids."
            exit 1
          fi
          # If there exists a file which should hold the value of the Unicorn pid: read it.
          if [ -f "$web_server_pid_path" ]; then
            wpid=$(cat "$web_server_pid_path")
          else
            wpid=0
          fi
          if [ -f "$sidekiq_pid_path" ]; then
            spid=$(cat "$sidekiq_pid_path")
          else
            spid=0
          fi
        }

        # Checks whether the different parts of the service are already running or not.
        check_status(){
          check_pids
          # If the web server is running kill -0 $wpid returns true, or rather 0.
          # Checks of *_status should only check for == 0 or != 0, never anything else.
          if [ $wpid -ne 0 ]; then
            kill -0 "$wpid" 2>/dev/null
            web_status="$?"
          else
            web_status="-1"
          fi
          if [ $spid -ne 0 ]; then
            kill -0 "$spid" 2>/dev/null
            sidekiq_status="$?"
          else
            sidekiq_status="-1"
          fi
        }

        check_pids
        check_status

        if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then
          echo "GitLab is not running."
          exit 2
        fi
        if [ "$web_status" != "0" ]; then
          printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n"
          exit 1
        fi
        if [ "$sidekiq_status" != "0" ]; then
          printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n"
          exit 1
        fi
        if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then
          printf "GitLab and all its components are \033[32mup and running\033[0m.\n"
          exit 0
        fi
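
    That e-mail is sudo logging failed password prompts: the script re-executes itself as the git user via sudo, and the nagios user has neither a password nor sudo rights. One possible fix - a sketch, with the caveat that the rule must match the command exactly as sudo logs it (here it runs through a login shell, hence the /bin/bash -c form in the log) - is a sudoers rule letting nagios run this plugin as git without a password:

        # /etc/sudoers.d/nagios-gitlab  (edit with: visudo -f /etc/sudoers.d/nagios-gitlab)
        nagios ALL=(git) NOPASSWD: /var/lib/nagios/custom_plugins/check_gitlab.sh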
