Search Results

Search found 10107 results on 405 pages for 'remote backups'.


  • VDR - Trouble writing to destination volume, error -1020 (sharing violation)

    - by woodwarp
    Using VMware Data Recovery 1.1, backing up to a CIFS share, I am getting this error:

        1/18/2010 8:55:31 AM: Performing incremental back up of disk "Lun VM/VM-DB1-flat.vmdk" using "SCSI Hot-Add"
        1/18/2010 8:55:32 AM: Trouble writing to destination volume, error -1020 (sharing violation)

    Integrity checks of the destination complete successfully, and I tried rebooting the VDR appliance just in case. To work around the issue I removed the share from VDR, pointed the backups at other destinations, renamed the VMware Data Recovery subfolder in the destination, then re-added the share and pointed the backups back at it; this of course creates a new Backup Store. Does anyone have any idea why this error is occurring? It means I can no longer back up into this Backup Store.

    Read the article

  • GRE keepalive with Linux and RouterOS

    - by eri
    I have a Linux host and a couple of RouterBoards. I created a GRE tunnel, but Linux does not answer keepalive packets, so the router marks the GRE connection as unreachable and I can't reach the Linux host from the router's subnet. If Linux sends something into the tunnel (ping, etc.), RouterOS marks the connection as reachable, and subsequent packets are routed fine until one minute of idle (no traffic). I create the tunnel on Linux this way:

        remote=x.x.x.x
        dev=gre21
        network=10.21.0.0/16
        ip tunnel add ${dev} mode gre remote ${remote} ttl 255
        ip addr add 172.16.1.1/24 peer 172.16.1.21 dev ${dev}
        ip link set ${dev} up
        ip route add ${network} dev ${dev}

    And ip l shows:

        14: gre21: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1476 qdisc noqueue state UNKNOWN
            link/gre 0.0.0.0 peer 109.60.170.15

    How do I set the state to "running"? How do I keep the tunnel alive? Ping in cron?
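    A minimal keepalive sketch, on the assumption that a periodic ping from the Linux end to the RouterOS tunnel address (172.16.1.21, from the config above) is acceptable:

        # /etc/cron.d/gre21-keepalive -- hypothetical file; sends a few pings every minute
        # so the tunnel never sits idle long enough for RouterOS to mark it unreachable.
        * * * * * root ping -c 3 -I gre21 172.16.1.21 >/dev/null 2>&1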

    Read the article

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 R2 IIS 7.5 box. From remote clients it gives this error: "401 - Unauthorized: Access is denied due to invalid credentials." (Remote = desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly. The application uses Anonymous Authentication and is written in ASP.NET 4.0 using the MVC framework. Sysinternals Procmon returns these two results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have 2 other MVC apps running fine on the same server. I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing related to this shows up in the event log on the server. Pulling my hair out here; any troubleshooting tips?

    Read the article

  • Apache SSLProxyMachineCertificateFile does not work

    - by Serge - appTranslator
    I'm setting up an Apache reverse proxy that presents a client certificate to the remote host. I do it using:

        SSLProxyMachineCertificateFile /etc/tls/pki/certandkey.pem

    Problem: the remote host does not recognize the client certificate. Notes: certandkey.pem contains the unencrypted key and the cert. From the proxy box, curl -E /etc/tls/pki/certandkey.pem https://www.remote.com works fine. It's a GoDaddy SSL certificate, bundled with a gd_bundle.crt. Should I use SSLProxyMachineCertificateChainFile? I'm on CentOS 6.3 with Apache 2.2.15, where SSLProxyMachineCertificateChainFile is not available.
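    One thing worth trying is making sure the GoDaddy intermediates are part of the PEM the proxy presents, since this Apache 2.2 build has no separate chain directive for proxy client certificates. A hedged sketch (the server.key/server.crt file names are assumptions, and whether 2.2.15 actually forwards the intermediates this way is not guaranteed):

        # Rebuild the all-in-one PEM with key, certificate and the gd_bundle intermediates,
        # then verify the chain locally before reloading Apache.
        cat /etc/tls/pki/server.key /etc/tls/pki/server.crt /etc/tls/pki/gd_bundle.crt \
            > /etc/tls/pki/certandkey.pem
        openssl verify -CAfile /etc/tls/pki/gd_bundle.crt /etc/tls/pki/server.crt
        service httpd reload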

    Read the article

  • Installing Xen 4.0.1 from Source on Ubuntu 10.10

    - by markus
    I'm trying to install Xen 4.0.1 from source on Ubuntu 10.10 Server Edition. I started with a clean machine and followed the instructions from https://help.ubuntu.com/community/Xen, so I installed the packages mentioned there with:

        sudo apt-get install gettext bin86 bcc libc6-dev-i386 iasl texinfo git

    When building the source with make world I get this error:

        + git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp
        Initialized empty Git repository in /home/homer/xen/linux-2.6-pvops.git.tmp/.git/
        remote: Counting objects: 1855434, done.
        remote: Compressing objects: 100% (291939/291939), done.
        Receiving objects: 100% (1855434/1855434), 368.49 MiB | 11.00 MiB/s, done.
        remote: Total 1855434 (delta 1553214), reused 1847760 (delta 1546656)
        Resolving deltas: 100% (1553214/1553214), done.
        + cd linux-2.6-pvops.git.tmp
        + git checkout -b xen/stable-2.6.32.x xen/xen/stable-2.6.32.x
        fatal: git checkout: branch xen/stable-2.6.32.x already exists
        make[3]: *** [linux-2.6-pvops.git/.valid-src] Error 128

    Does anybody have an idea what I can do?
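    The "branch already exists" failure usually points at a leftover, half-finished pvops checkout from an earlier make world attempt. A hedged recovery sketch (directory names are taken from the error output above; adjust to your tree, and note this throws away the partial clone):

        cd ~/xen
        rm -rf linux-2.6-pvops.git linux-2.6-pvops.git.tmp
        make world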

    Read the article

  • Error when reloading supervisord: unix:///tmp/supervisor.sock no such file

    - by Yarin
    I'm running supervisord on my CentOS 6 box like so:

        /usr/bin/supervisord -c /etc/supervisord.conf

    When I launch supervisorctl, all process statuses show fine, but if I try to reload using supervisorctl I get:

        unix:///tmp/supervisor.sock no such file

    I'm using the same config file I've used successfully on other boxes, and I'm running everything as root. I can't understand what the problem is... Config file:

        ; Sample supervisor config file.

        [unix_http_server]
        file=/tmp/supervisor.sock ; (the path to the socket file)
        ;chmod=0700 ; socket file mode (default 0700)
        ;chown=nobody:nogroup ; socket file uid:gid owner
        ;username=user ; (default is no username (open server))
        ;password=123 ; (default is no password (open server))

        ;[inet_http_server] ; inet (TCP) server disabled by default
        ;port=127.0.0.1:9001 ; (ip_address:port specifier, *:port for all iface)
        ;username=user ; (default is no username (open server))
        ;password=123 ; (default is no password (open server))

        [supervisord]
        logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
        logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
        logfile_backups=10 ; (num of main logfile rotation backups;default 10)
        loglevel=info ; (log level;default info; others: debug,warn,trace)
        pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
        nodaemon=false ; (start in foreground if true;default false)
        minfds=1024 ; (min. avail startup file descriptors;default 1024)
        minprocs=200 ; (min. avail process descriptors;default 200)
        ;umask=022 ; (process file creation umask;default 022)
        ;user=chrism ; (default is current user, required if root)
        ;identifier=supervisor ; (supervisord identifier, default is 'supervisor')
        ;directory=/tmp ; (default is not to cd during start)
        ;nocleanup=true ; (don't clean up tempfiles at start;default false)
        ;childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP)
        ;environment=KEY=value ; (key value pairs to add to environment)
        ;strip_ansi=false ; (strip ansi escape codes in logs; def. false)

        ; the below section must remain in the config file for RPC
        ; (supervisorctl/web interface) to work, additional interfaces may be
        ; added by defining them in separate rpcinterface: sections
        [rpcinterface:supervisor]
        supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

        [supervisorctl]
        serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
        ;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket
        ;username=chris ; should be same as http_username if set
        ;password=123 ; should be same as http_password if set
        ;prompt=mysupervisor ; cmd line prompt (default "supervisor")
        ;history_file=~/.sc_history ; use readline history if available

        ; The below sample program section shows all possible program subsection values,
        ; create one or more 'real' program: sections to be able to control them under
        ; supervisor.
        ;[program:foo]
        ;command=/bin/cat

        [program:embed_scheduler]
        command=/opt/web-apps/mywebsite/custom_process.py
        process_name=%(program_name)s_%(process_num)d
        numprocs=3

        ;[program:theprogramname]
        ;command=/bin/cat ; the program (relative uses PATH, can take args)
        ;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
        ;numprocs=1 ; number of processes copies to start (def 1)
        ;directory=/tmp ; directory to cwd to before exec (def no cwd)
        ;umask=022 ; umask for process (default None)
        ;priority=999 ; the relative start priority (default 999)
        ;autostart=true ; start at supervisord start (default: true)
        ;autorestart=unexpected ; whether/when to restart (default: unexpected)
        ;startsecs=1 ; number of secs prog must stay running (def. 1)
        ;startretries=3 ; max # of serial start failures (default 3)
        ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
        ;stopsignal=QUIT ; signal used to kill process (default TERM)
        ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
        ;killasgroup=false ; SIGKILL the UNIX process group (def false)
        ;user=chrism ; setuid to this UNIX account to run the program
        ;redirect_stderr=true ; redirect proc stderr to stdout (default false)
        ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO
        ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
        ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10)
        ;stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
        ;stdout_events_enabled=false ; emit events on stdout writes (default false)
        ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO
        ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
        ;stderr_logfile_backups=10 ; # of stderr logfile backups (default 10)
        ;stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
        ;stderr_events_enabled=false ; emit events on stderr writes (default false)
        ;environment=A=1,B=2 ; process environment additions (def no adds)
        ;serverurl=AUTO ; override serverurl computation (childutils)

        ; The below sample eventlistener section shows all possible
        ; eventlistener subsection values, create one or more 'real'
        ; eventlistener: sections to be able to handle event notifications
        ; sent by supervisor.
        ;[eventlistener:theeventlistenername]
        ;command=/bin/eventlistener ; the program (relative uses PATH, can take args)
        ;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
        ;numprocs=1 ; number of processes copies to start (def 1)
        ;events=EVENT ; event notif. types to subscribe to (req'd)
        ;buffer_size=10 ; event buffer queue size (default 10)
        ;directory=/tmp ; directory to cwd to before exec (def no cwd)
        ;umask=022 ; umask for process (default None)
        ;priority=-1 ; the relative start priority (default -1)
        ;autostart=true ; start at supervisord start (default: true)
        ;autorestart=unexpected ; whether/when to restart (default: unexpected)
        ;startsecs=1 ; number of secs prog must stay running (def. 1)
        ;startretries=3 ; max # of serial start failures (default 3)
        ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
        ;stopsignal=QUIT ; signal used to kill process (default TERM)
        ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
        ;killasgroup=false ; SIGKILL the UNIX process group (def false)
        ;user=chrism ; setuid to this UNIX account to run the program
        ;redirect_stderr=true ; redirect proc stderr to stdout (default false)
        ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO
        ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
        ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10)
        ;stdout_events_enabled=false ; emit events on stdout writes (default false)
        ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO
        ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
        ;stderr_logfile_backups ; # of stderr logfile backups (default 10)
        ;stderr_events_enabled=false ; emit events on stderr writes (default false)
        ;environment=A=1,B=2 ; process environment additions
        ;serverurl=AUTO ; override serverurl computation (childutils)

        ; The below sample group section shows all possible group values,
        ; create one or more 'real' group: sections to create "heterogeneous"
        ; process groups.
        ;[group:thegroupname]
        ;programs=progname1,progname2 ; each refers to 'x' in [program:x] definitions
        ;priority=999 ; the relative start priority (default 999)

        ; The [include] section can just contain the "files" setting. This
        ; setting can list multiple files (separated by whitespace or
        ; newlines). It can also contain wildcards. The filenames are
        ; interpreted as relative to this file. Included files *cannot*
        ; include files themselves.
        ;[include]
        ;files = relative/directory/*.ini
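    If something is periodically cleaning /tmp (tmpwatch is common on CentOS), the socket and pidfile can disappear while supervisord keeps running, which matches this symptom. A hedged workaround sketch that relocates the runtime files somewhere more durable (the /var/run paths are assumptions; supervisorctl must point at the same socket path):

        [unix_http_server]
        file=/var/run/supervisor.sock

        [supervisord]
        pidfile=/var/run/supervisord.pid

        [supervisorctl]
        serverurl=unix:///var/run/supervisor.sock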

    Read the article

  • TeamViewer cannot connect

    - by Cetin Sert
    Last week I decided to use TeamViewer VPN to administer software on a server behind a firewall using Remote Desktop. It was easy to configure it to start with the system and make the VPN available on the other side, but now it fails to connect (at the step shown in a screenshot that is not included here). The remote machine is running Windows Server 2008 R2. Is there a native way to circumvent the external firewall, using a server role or feature, so that Windows Server does the VPN work itself? Do people have better / more reliable experiences with other products such as Hamachi? The requirements are as follows:
      - Start at remote system start-up time
      - Make VPN connections to the remote machine possible

    Read the article

  • How can I connect JConsole to WebLogic using the WL SSL Listen Port

    - by Mircea Vutcovici
    I would like to be able to use JConsole against remote WebLogic servers via the multiplexer port over SSL. Is this possible without making any configuration changes to WebLogic, only by adding some jars (e.g. wljmxclient.jar) or parameters to JConsole? I've tried variations of the following command without success:

        $JAVA_HOME/bin/jconsole \
            -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar \
            -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote \
            -debug \
            service:jmx:rmi:///jndi/iiop://server_name:7441/jmxrmi

    I think part of the problem is that SSL is not enabled in JConsole.

    Read the article

  • What backup solution for Windows 2008 R2 servers on XenServer 5.0?

    - by Niels R.
    A friend of mine is hosting a lot of Linux VMs on his servers using XenServer 5.0, and he uses rdiff-backup to make daily backups. I'm trying to convince him to host some Windows VMs (Windows 2008 R2 Web Edition) too, so he could provide (me) .NET hosting. The main problem at the moment is a backup strategy for these Windows VMs. I would like to see something like a weekly full backup (snapshot of the VM?) with daily incrementals. I've looked at Windows Backup, but because the backups are made onto network shares it doesn't provide incrementals (from what I understand). Does anyone have any experience with this situation? How did you solve it in a "not-too-hard-to-install/maintain" way?
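    For reference, a hedged sketch of a scheduled full backup from inside the guest with Windows Server Backup's command-line tool (the share path is a placeholder, and as noted above, backing up to a share keeps only the latest version rather than incrementals):

        rem Hypothetical one-shot full backup from inside the Windows VM
        wbadmin start backup -backupTarget:\\backuphost\winvm-backups -include:C: -allCritical -vssFull -quiet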

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for opinions and starting points on a good network/storage layout that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, and I am willing to invest fairly heavily in a solution that can last me a while. I am a bit of a tech nerd and I have a moderate tolerance for setup of the solution. I would prefer if maintenance of the system is somewhat low once it is set up, but I am willing to accept some tradeoffs.

    Existing equipment:
      - Router - Netgear WNDR3700 (gigabit)
      - Router - DLink Gamerlounge DGL-4300 (gigabit)
      - Switch - 16 port Trendnet green switch (gigabit)
      - Switch - 5 port Trendnet green (gigabit)
      - Computer - i7-950 office computer (gigabit ethernet)
      - Computer - Q6600 quad core media center, hooked up to TV, records shows (gigabit ethernet)
      - Computer - Acer 1810T ultraportable laptop (gigabit and N ethernet)
      - NAS - Intel SS4200-E (gigabit)
      - External hard drive - 2TB WD Green drive (eSATA)
      - All kinds of miscellaneous network-connected TV, Blu-ray, Verizon network extender, HDHomeRun TV tuners, etc.

    Requirements:
      - Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
      - Central location for all users' files, while also keeping them secure from each other
      - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
      - Possibility to host files to friends and family easily

    Nice to have:
      - Backup of terabytes of movie backups

    Intriguing possibilities:
      - Capability to have users' Windows desktops and files look the same from all network computers

    I am not sure if the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up off site by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques of this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of irreplaceable files.

    Read the article

  • FTPS SSH Host Key after IP Address Change

    - by David George
    I have a Secure FTP (FTPS) server that my remote sites upload files to daily via scripted routines. I have had issues in the past when upgrading hardware and deploying new servers, because the RSA fingerprint changes for that server; then none of my remote sites can connect until I have the old key removed (usually via ssh-keygen -R myserver.com). I now have to change the IP address for myserver.com, and I wondered if there is any way to proactively handle the host keys so that when the server address changes, all my FTPS client remote sites don't break?
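    Two hedged approaches, assuming OpenSSH on both ends (hostnames and paths are placeholders). The first keeps the fingerprint stable by moving the existing host keys to the replacement server; the second pre-loads the clients before the cutover so nothing breaks when the address changes:

        # Option 1: carry the existing host keys over, so clients keep seeing the same fingerprint.
        rsync -a /etc/ssh/ssh_host_* newserver:/etc/ssh/
        ssh newserver 'service sshd restart'

        # Option 2: on each client, add the server's key to known_hosts ahead of time.
        ssh-keyscan -H myserver.com >> ~/.ssh/known_hosts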

    Read the article

  • rsync set group owner, group permission

    - by ChrisInEdmonton
    I want to use rsync to transfer files from my computer to a remote Linux system. Regardless of the local files' group ownership, I want to set these values on the remote side. If I were on the remote Linux system, I could create the directory and set the ownership and permissions as:

        mkdir my_directory
        chown :my_group my_directory
        chmod 775 my_directory

    If I create the directory locally and then use rsync (remember, I don't have my_group locally), I do:

        rsync -ae ssh --chmod=ug+rw,Dug+rwx my_directory remoteserver:dest

    That works, but I cannot figure out how to set the group owner through rsync. If I do a chmod g+s dest, my_directory has the correct group owner but all of the files inside have the incorrect group owner.
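    A hedged sketch, assuming rsync 3.1.0 or newer on both ends and that the remote user is a member of my_group (or root); the mapping happens on the receiving side:

        # Map every transferred file's group to my_group (owner left untouched).
        rsync -ae ssh --chmod=ug+rw,Dug+rwx --chown=:my_group my_directory remoteserver:dest

        # Equivalent using the lower-level option.
        rsync -ae ssh --chmod=ug+rw,Dug+rwx --groupmap='*:my_group' my_directory remoteserver:dest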

    Read the article

  • Archiving to Tape

    - by Bruno
    This is not about backups; this is about archiving. For argument's sake, let's say I have a 2TB 7z file that I would like to archive to tape. I have 4 LTO-5 tapes (1.5TB each). This may be a stupid question, but what setup would I need that would allow me to drag and drop those files directly onto those tapes, automatically splitting each copy across 2 tapes like so:

        ------------------
        |     Copy 1     |
        |     1.5TB      |
        ------------------
        ------------------
        |     Copy 1     |
        |     0.5TB      |
        ------------------
        ------------------
        |     Copy 2     |
        |     1.5TB      |
        ------------------
        ------------------
        |     Copy 2     |
        |     0.5TB      |
        ------------------

    I just want to be able to specify which files go on which tapes, as opposed to backups where the tapes just rotate. Thanks.
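    There's no real drag-and-drop here, but for reference, a hedged command-line sketch using GNU tar's multi-volume mode (the tape device path and the --tape-length value, roughly 1.5 TB expressed in 1024-byte units, are assumptions):

        # Write one copy of the archive across two tapes; tar pauses and asks for the
        # next volume when the first tape fills. Repeat the whole run for the second copy.
        tar --create --multi-volume --tape-length=1464843750 --file=/dev/nst0 archive.7z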

    Read the article

  • How to enable connection security for WMI firewall rules when using VAMT 2.0?

    - by Ondrej Tucny
    I want to use VAMT 2.0 to install product keys and activate software on remote machines. Everything works fine as long as the ASync-In, DCOM-In, and WMI-In Windows Firewall rules are enabled and the action is set to "Allow the connection". However, when I try using "Allow the connection if it is secure" (regardless of the connection security option chosen), VAMT won't connect to the remote machine. I tried using wbemtest and the error is always "The RPC server is unavailable", error code 0x800706ba. How do I set up at least some level of connection security for remote WMI access so that VAMT still works? I googled for the correct VAMT setup and read the Volume Activation 2.0 Step-by-Step guide, but had no luck finding anything about connection security.

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

        /         vgA/lvRoot                     [7.5G/50G]
        /tmp      vgB/lvTmp                      [195M/30G]
        /var      vgB/lvVar                      [780M/30G]
        swap      vgB/lvSwap                     [16.00 GB]
        /media1   vgC/lvMedia1                   [400G/975G]
        /media2   vgC/lvMedia2                   [75G/295G]
        /boot     partition (no volume group)    [95M/200M]
        /video    partition (no volume group)    [450G/950G]
        /backups  vgD/lvBackupTarget             [800G/925G]
        /home     vgE/lvHome                     [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive; I actually have a second external USB drive if needed.) I'd like to back up /, /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots

    My questions are:
    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
       2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
       2.b. Would it be advisable to exactly mirror the source PV/LV layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd?

    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step cookbook-style help because I don't do much server admin work.

    (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up to recently failed, so I'm replacing it with the external USB drive(s), and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, and that would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

    EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1 — "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:
    1. establish the LV mirror on the external drive
    2. break the link with the mirror
    3. create a (persistent) snapshot on the mirror
    4. after a week, resync the mirror with the original source and update the mirror
    5. break the link and create another snapshot on the mirror

    Obviously, the mirror will be like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time-efficient, it would give a nice full & differential type of backup on the external drive, based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?

    EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots — this article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask that last bit as a separate question. I will leave my original question in place because I still want feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots", but that might be wrong.
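    For the snapshot-and-copy step, a minimal sketch using the volume names from the list above (snapshot size, mount point and destination path are assumptions):

        # Snapshot /home, copy it to the USB drive, then drop the snapshot.
        lvcreate --size 10G --snapshot --name lvHome_snap /dev/vgE/lvHome
        mkdir -p /mnt/snap
        mount -o ro /dev/vgE/lvHome_snap /mnt/snap
        rsync -aH --delete /mnt/snap/ /media/usb-backup/home/
        umount /mnt/snap
        lvremove -f /dev/vgE/lvHome_snap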

    Read the article

  • How can I capture a one-time full backup of a server using AMANDA?

    - by Daemon
    Suppose I have a preconfigured AMANDA server running automated network backups of the directories specified in my disklist file. Normally, AMANDA backs up target disks to /dumps/amanda. Is there any single command or method to perform a manual, one-time, full backup dump to another destination drive? I ask because I'm investigating the possibility of introducing rotating external hard-drive backups for offsite storage, and I want to leverage our existing backup strategy wherever possible. Ideally, a full restore should be achievable from any one of these offsite backup disks on its own.
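    As far as I know there is no single built-in command for this, but one hedged approach is to clone the existing config into a one-off config whose holding/vtape directories live on the external drive, force level-0 (full) dumps, and run it once. The config name, hostname and paths below are assumptions:

        cp -r /etc/amanda/DailySet1 /etc/amanda/OffsiteFull   # then edit tapedev/holdingdisk to point at the USB drive
        amadmin OffsiteFull force myfileserver                # force full (level 0) dumps on the next run
        amdump OffsiteFull                                    # run the one-time backup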

    Read the article

  • Configuring Backup Exec 2012 using USB hard drives as media

    - by SydxPages
    I have found some information on this, but not exact answers to these questions.

    Background:
      - I have installed Backup Exec 2012 (with agents for databases)
      - I have configured a storage pool with 2 USB drives (1TB)
      - The backups are configured to back up to one of the 2 drives (depending on which one is connected)

    I have 2 questions:
    1. How do I get Backup Exec to tell me which disk to insert? I have used tapes before and it told me then which tape to use; I was hoping this was available for disks too. (Whilst there are only 2 at the moment, there will be more.)
    2. How do I get Backup Exec to delete old backups when the disk is full?

    Read the article

  • RDP and New Accounts

    - by leeand00
    I created a new user account on the domain and added it to the Remote Desktop Users group. I could log in just fine locally, but when I logged in remotely I was basically told that I could not log in from there using that user. I could log in just fine as the administrator or anybody else other than that new account. So I researched it a bit more, looked at the Remote Desktop setting on the local machine (screenshot not shown), and changed it to "Allow connections only from computers running Remote Desktop with Network Level Authentication (NLA)". When I tried this down at my office, I connected with RDP just fine from another computer. But lo and behold, when I got home and simply tried to connect to the machine, I got an error message (screenshot not shown). There has to be some kind of in-between setting, or an additional setting on the user, that allows me to connect directly via Remote Desktop over the VPN. At the moment I can connect by connecting to another computer on the network and then RDPing from there into my machine, but this is not ideal.

    Read the article

  • Windows 7 Multi Monitor RDC Problem [closed]

    - by Peter Stegnar
    Possible Duplicate: Windows remote control for dual monitor setup
    I would like to use all my monitors for the remote session (an option in the RDC dialogue). If I connect (from Windows 7) to one server running Windows Server 2008 R2, it works OK: I get the remote session on all my monitors. When I connect to another machine running Windows 7, it just would not use all my monitors, only one (full-screen mode). What am I missing here? Some setting on the Windows 7 "server" side? Basically my question is: how do I establish a multi-monitor RDC connection from Windows 7 to another computer running Windows 7?

    Read the article

  • Best Way to Archive Digital Photos and Avoid Duplicate File Names

    - by user31575
    This problem pertains to archiving digital pictures taken with multiple cameras. Answers here covered the general topic of the mechanics of backups: How do you archive digital photos and videos? I, however, face another problem. Having multiple (Canon) cameras and multiple SD cards (mixed and matched at random), I have found that different SD cards hold different photos with the same file name, i.e. two different photos both named IMG_3141.JPG. Additionally, for better or worse, I've backed up the files to multiple places and need to consolidate my backups. I want to eliminate duplicates, but not clobber files. The only way I can think of is to append a hash (md5 or sha1) to the file name, i.e. IMG_3141.JPG becomes IMG_3141_KT229QZ31415926ASDF.JPG, and then sort them out. Are there any better ways? (Note: this "open letter" addresses the duplicate-file-name concern: http://photofocus.com/2010/09/13/an-open-letter-to-digital-camera-manufacturers-regarding-camera-file-naming/ )
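    A hedged sketch of the hash-suffix idea in shell (the filename pattern and the 8-character prefix length are assumptions). Files that are true duplicates end up with identical hash suffixes, which also makes them easy to spot later:

        # Append the first 8 characters of each file's MD5 to its name so identical
        # names from different cards no longer collide; -n never overwrites anything.
        for f in IMG_*.JPG; do
            h=$(md5sum "$f" | cut -c1-8)
            mv -n "$f" "${f%.JPG}_${h}.JPG"
        done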

    Read the article

  • DNS Server Spoofed Request Amplification DDoS - Prevention

    - by Shackrock
    I've been conducting security scans, and a new finding popped up for me: "DNS Server Spoofed Request Amplification DDoS". The remote DNS server answers any request. It is possible to query the name servers (NS) of the root zone ('.') and get an answer which is bigger than the original request. By spoofing the source IP address, a remote attacker can leverage this 'amplification' to launch a denial of service attack against a third-party host using the remote DNS server. General solution: restrict access to your DNS server from the public network, or reconfigure it to reject such queries. I'm hosting my own DNS for my website. I'm not sure what the solution is here... I'm really looking for some concrete, detailed steps to patch this, but haven't found any yet. Any ideas? CentOS 5 with WHM and cPanel. Also see: http://securitytnt.com/dns-amplification-attack/
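    A hedged configuration sketch, assuming the server runs BIND (named), as is typical on a cPanel box; the "trusted" ACL contents are placeholders for your own networks:

        // /etc/named.conf excerpt -- stop answering recursive queries for the world at large.
        acl "trusted" { 127.0.0.1; 192.168.1.0/24; };
        options {
            // Either make the server authoritative-only:
            recursion no;
            // ...or keep recursion, but only for trusted clients:
            // allow-recursion { "trusted"; };
            // allow-query-cache { "trusted"; };
        };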

    Read the article

  • Amazon EC2 Creating Tunnel with OpenVPN

    - by nocode
    I have followed these instructions: http://aws.amazon.com/articles/0639686206802544 I can ping the VPN endpoints, and I have the corresponding VPC CIDR pointing to the EC2 instance in the route table. Here is my config:

        port 1194
        proto udp
        dev tun

        # Remote peer and network
        remote Elastic_IP
        route 10.0.0.0/16

        # Configure local and remote VPN endpoints
        ifconfig 169.254.255.1 169.254.255.2

        # The pre-shared static key
        secret /etc/openvpn/ovpn.key

        keepalive 10 120
        persist-key
        persist-tun
        log /var/log/openvpn.log
        verb 3

    When I look at my logs, I get this error:

        RESOLVE: Cannot resolve host address: 10.0.0.0/16: Name or service not known
        OpenVPN ROUTE: failed to parse/resolve route for host/network: 10.0.0.0/16

    In VPC1 the CIDR is 172.31.0.0/16, which targets the EC2 instance also running OpenVPN; I'm getting the same error from the instance in VPC2 with the corresponding CIDR. Just for testing, I stopped the iptables service. I am running the Amazon Linux AMI image (x64) as specified in the article I linked.
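    Worth noting (a hedged observation, not a confirmed fix): OpenVPN's route directive expects a network and a netmask as separate arguments rather than CIDR notation, which is exactly what the "failed to parse/resolve route" message complains about. The equivalent config line would be:

        # network and netmask given separately instead of 10.0.0.0/16
        route 10.0.0.0 255.255.0.0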

    Read the article

  • All traffic is passed through OpenVPN although not requested

    - by BFH
    I have a bash script on an Ubuntu box which finds the fastest OpenVPN server, connects, and binds one program to the tun0 interface. Unfortunately, all traffic is being passed through the VPN. Does anybody know what's going on? The relevant line follows:

        openvpn --daemon --config $cfile --auth-user-pass ipvanish.pass --status openvpn-status.log

    There don't seem to be any entries in iptables when I enter sudo iptables --list. The config files look like this:

        client
        dev tun
        proto tcp
        remote nyc-a04.ipvanish.com 443
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        persist-remote-ip
        ca ca.ipvanish.com.crt
        tls-remote nyc-a04.ipvanish.com
        auth-user-pass
        comp-lzo
        verb 3
        auth SHA256
        cipher AES-256-CBC
        keysize 256
        tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA

    There is nothing in there that would direct everything through tun0, so maybe it's a new vagary of Ubuntu? I don't remember this happening in the past.
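    One hedged explanation: the provider's server is probably pushing a redirect-gateway (default route) to the client, which nothing in the local config shows. If so, adding this single directive to the client config makes OpenVPN ignore server-pushed routes, so only traffic explicitly bound or routed to tun0 uses the VPN:

        # ignore routes pushed by the server (including redirect-gateway)
        route-nopull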

    Read the article

  • Using QoS to prioritize IP addresses

    - by Tristan
    I have a Western Digital N900 router. I was hoping I'd be able to throttle users based on their MAC address with it, which sadly isn't possible, even though it seems simple in principle. The battle against bandwidth-hogging roommates rages on. Could I just set the local IP range to their IP, set the local port range to every single port in existence, and then give their IP a lower priority than mine? Will this work? What are all the ports? And what's the difference between local and remote IPs or ports?

        Name: Roommate
        Priority: Low
        Protocol: TCP or UDP ??
        Local IP Range: .101 to .101
        Local Port Range: 0 to infinity
        Remote IP Range: ? to ?
        Remote Port Range: ? to ?

    Read the article

  • Proper use of disk to disk to tape backup using de-duplication and LTO5

    - by Michael
    I currently have ~12TB of data for a full disk-to-tape (LTO-3) backup. Needless to say, it now requires over 16 tapes, so I'm looking at other solutions. Here is what I've come up with; I'd like to hear the community's thoughts.

    Server for disk-to-disk:
      - Backup Exec 2010 using de-duplication technology
      - 20+TB worth of SATA drives
      - LTO-5 robotic library connected via SAS
      - 1Gbps NIC connected to the network

    What I envision is doing a full backup of my entire network, which will initially take a long time over the 1Gbps NIC, but once the de-duplication kicks in backups should be quick. I will then use the LTO-5 library to make disk-to-tape backups and archive those accordingly. What does everyone think? Is there any faster way of doing the initial full backup over the 1Gbps NIC? What will be my pain points? Is there a better way of achieving what I'm trying to do?

    Read the article
