Search Results

Search found 11259 results on 451 pages for 'remote registry'.

Page 94/451 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • How to access remote network resource from local machine

    - by jerluc
    I just configured VPN access successfully, so I can now connect to my workstation at work from my personal Linux box at home. The problem is that all of the dev files for a server I'm running locally are on my personal box and cannot be transferred to my workstation (at least not in any timely manner over this connection, given the amount of data, in addition to the many reconfigurations which would be required for the server to run even if I could somehow get the files across). So essentially, I am able to run my server locally on my personal computer; however, the data sources required for the back end are accessible only from within the office's network. Is there some way for me to access the data sources directly through the VPN connection, or, if it needs to be a bit more convoluted, to connect via VPN to my workstation and then somehow reach the data sources through my workstation from my personal computer? And here I couldn't care less about the speed of the connection from my server to the data sources, since they will probably only be fetched a few times every hour or so. Thanks! Sorry if this is a stupid question and/or doesn't make any sense! (And sorry to anyone who read this on Stack Overflow; I posted it in the wrong area.)
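
    A minimal sketch of the second, "convoluted" route, assuming the workstation is reachable over the VPN by SSH and the data source is a database on an internal host (all hostnames, ports and the user name below are placeholder assumptions): forward a local port through the workstation and point the local server at it.

      # Forward local port 15432 through the VPN-reachable workstation to the
      # internal database host; run from the personal Linux box.
      ssh -N -L 15432:db.internal.example:5432 me@workstation.example

      # The locally running server then uses 127.0.0.1:15432 as its
      # data-source address; traffic is relayed via the workstation.

    Speed is limited by the VPN, but as noted above that should not matter for a few fetches per hour.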

    Read the article

  • SSH into remote server using Public-private keys

    - by maria
    Hi, I have recently set up SSH on two Linux machines (let's call them server-a and client-b). I have generated an SSH key pair on client-b using ssh-keygen and can see both the public and private files in the .ssh dir. I have named them 'example' and 'example.pub'. I then added example.pub to server-a's auth file. When I try to ssh into server-a it still requests password authentication, whereas I want a passwordless login (the private key on client-b is set up without a passphrase). When I ssh with '-v' I get the following output:

      debug1: Next authentication method: publickey
      debug1: Trying private key: /Users/abc/.ssh/identity
      debug1: Offering public key: /Users/abc/.ssh/id_rsa
      debug2: we sent a publickey packet, wait for reply
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: Offering public key: /Users/abc/.ssh/id_dsa
      debug2: we sent a publickey packet, wait for reply
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug2: we did not send a packet, disable method
      debug1: Next authentication method: keyboard-interactive
      debug2: userauth_kbdint
      debug2: we sent a keyboard-interactive packet, wait for reply
      debug2: input_userauth_info_req
      debug2: input_userauth_info_req: num_prompts 1
      Password:

    Please help.
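
    The debug output shows the client only offering its default keys (identity, id_rsa, id_dsa), never the pair named 'example'. A minimal sketch of the likely fix, with the host and user names as placeholders: tell ssh about the non-default key explicitly.

      # One-off, on client-b:
      ssh -i ~/.ssh/example user@server-a

      # Or permanently, in ~/.ssh/config on client-b:
      Host server-a
          HostName server-a.example.com    # assumed address
          User user                        # assumed login name
          IdentityFile ~/.ssh/example

    It is also worth confirming that on server-a the key landed in ~/.ssh/authorized_keys, and that ~/.ssh is mode 700 and authorized_keys mode 600, since sshd silently ignores the file otherwise.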

    Read the article

  • Accessing an internal server (e.g. 192.168.10.10) without using remote desktop

    - by bergin
    Hi there. My boss has an intranet he wants his employees to be able to access from the WWW. There's a SharePoint server running on 192.168.10.10, and SBS can be seen from a website at 81.244.232.22 (some numbers like this). When you access it, there's a default internal SharePoint site, "companyweb", but we don't want to use that; we want the main SharePoint site, which has all the business on it. Is this possible? Currently we have to connect to a computer over remote desktop, choose the server and then get in that way. Any ideas?

    Read the article

  • Timeout settings for Remote Desktop Sessions to lock

    - by atroon
    Our office uses a Windows 2003 server to provide access to an accounting application. Recently I was asked to increase the amount of time it takes for the session to lock itself and require the entry of the user's password to resume. That seems to be about ten minutes, at present. I am familiar with group policy and have tweaked those settings to scavenge sessions (and thereby licenses) from sessions that have been disconnected (by the user closing the mstsc.exe client or by a network issue). That's simple and straightforward. But I can't find anything in GP to allow a longer time period before the RDP client window goes black and then, when clicked upon, requires a username and password to resume the session. I must admit this would be nice personally as well, since most of my time is spent documenting the application and/or monitoring its database, so I usually have a window open to the terminal server along with the rest of the staff in the accounting center, but I interact with it very little. I usually enter my password 10-15 times per workday, but I'm pretty good at it by now. ;) So, can this timeout period be adjusted, or are we out of luck?
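
    For what it's worth, a hedged note: on a 2003 terminal server the "goes black, asks for a password" behaviour usually comes from the password-protected screen saver applied to the session rather than from an RDP-specific setting, so the timeout is normally adjusted through the screen-saver policy (the paths below assume a standard domain GPO layout; the registry line is the per-user equivalent).

      REM Group Policy: User Configuration > Administrative Templates >
      REM   Control Panel > Display > "Screen Saver timeout"  (value in seconds)
      REM   and "Password protect the screen saver"
      REM Per-user registry equivalent, e.g. a 30-minute timeout:
      reg add "HKCU\Control Panel\Desktop" /v ScreenSaveTimeOut /t REG_SZ /d 1800 /f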

    Read the article

  • Cannot access any remote resource after connecting to Cisco VPN on Vista

    - by Deepak Singh Rawat
    I have installed Cisco VPN client version 5.0.07.0290 on Vista Business SP2. I am able to connect to the VPN successfully, but after connecting I am not able to access any resource in the VPN (like databases, other computers in the network, etc.). I have tried the following without any success:
    - older versions of the client
    - other VPN clients like Shrew Soft: same issue as the Cisco VPN client
    - disabling the Internet Connection Sharing service
    - installing the client in the root administrator account
    - running the installer as administrator
    - running vpngui and ipsecdialer in XP compatibility mode and as administrator
    I am not sure how to troubleshoot this issue. Can somebody please help me troubleshoot it? P.S.: I have the ZoneAlarm firewall; could that be an issue?
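
    A hedged troubleshooting sketch for narrowing this down once the tunnel is up (the addresses are placeholders for hosts on the office network): check whether the client ever installed routes and an address for the VPN subnets, then test by IP before involving DNS. Temporarily disabling ZoneAlarm for one test is the quickest way to rule the firewall in or out.

      REM Did the VPN adapter get an address from the remote pool?
      ipconfig /all

      REM Are there routes for the office subnets via the VPN adapter?
      route print

      REM Test a known internal host by IP first, then by name
      ping 10.0.0.10
      nslookup intranet.example.local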

    Read the article

  • How to setup Mercurial on Mac OS X 10.5.8 for remote access

    - by Abhic
    Hello, I have a stable hackintosh box running 10.5.8 at home with Python 2.5.1 and Mercurial 1.5.2 set up successfully. I do not have any ports opened on this box or the associated router, nor do I have any web server running on this system. What steps do I have to take to set this machine up as a remotely available Mercurial repository? Thank you, folks.
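
    A hedged sketch of the two usual options, assuming a repository at ~/repos/myproject (path, port and user name are placeholders). Either way, the chosen port (8000 or 22) still has to be forwarded on the router to this box.

      # Option 1: Mercurial's built-in web server
      cd ~/repos/myproject && hg serve -p 8000
      # remote clients:  hg clone http://your-public-ip:8000/ myproject

      # Option 2: plain SSH, no web server required
      # (enable Remote Login in System Preferences > Sharing first)
      # remote clients:  hg clone ssh://user@your-public-ip/repos/myproject

    Note that hg serve does not accept pushes unless that is explicitly enabled, so the SSH route is usually simpler for a personal setup.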

    Read the article

  • BackupExec 2012 File System Archiving - Access is denied to Remote Agent

    - by AllisZero
    Gentlemen, I've been struggling with a trial version of Symantec Backup Exec 2012 for about a week now. It was installed as an upgrade to our 12.5 license, and the setup completed with no issues. The reason I upgraded is solely for the File System Archiving option, as I'm working to reduce the amount of live data on my servers. Backups work A-OK, and I have followed the instructions in the admin manual to make sure I had met all the requirements. The account BE is running under is a member of the local Administrators group, as required, and has been added to the test share that I'm using to evaluate the archiving function. Testing the credentials in the job setup window always works fine, and I am able to add both regular and Admin$ shares to my archive selection. However, every time I run the archive job, I get the following message: https://dl.dropbox.com/u/59540229/BEXec.png I've already tried to troubleshoot DNS resolution issues as suggested in the Symantec KB, to no avail. The only thing I can think of, at this point, is that a trial license doesn't allow me to use the archiving function, although that would seem silly on their part. Appreciate any assistance or information. Thanks.

    Read the article

  • Synchronising a remote folder with a local one.

    - by Workshop Alex
    I am using a network disk (connected to my router by USB) to store several data files. A simple .NET application that I've created is supposed to read and modify these data files. However, some security issues are preventing this application from accessing these files directly. (Actually, these have been built into my application on purpose, since it's not going to support NAS disks.) Since this disk is shared with several computers, I just want a simple synchronisation method which will copy the files to a local folder where my application can access them. And, once they are modified, it should send the modified files back to the NAS disk again. I have two options:
    1) Build a second application to do my own synchronisation.
    2) Find some built-in function inside Windows 7 Ultimate which can do this for me.
    Option 2 is preferred. Option 1 is something I can do easily, if need be. I don't need third-party tools. (Still, feel free to add some references to good tools, although I won't accept them as answers.) Basically, is this possible with Windows 7, and if so, how?
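
    A hedged sketch of a lightweight version of option 1 that needs no second application, only two scheduled commands (the share and local paths are placeholder assumptions); Windows 7's Offline Files feature is the closest built-in equivalent to option 2.

      REM Pull newer copies from the NAS before the application runs
      robocopy \\router\data C:\LocalData /E /XO

      REM Push locally modified files back to the NAS afterwards
      robocopy C:\LocalData \\router\data /E /XO

    The /XO switch copies a file only when the source copy is newer, which gives a crude two-way sync as long as the same file is not edited in both places between runs.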

    Read the article

  • Remote RAID Control in ESXi on a Dell PowerEdge 2950 Using OpenManage

    - by yoyomommy
    I was wondering how one can add a drive to an existing RAID array while ESXi is still running. I have read that you are able to use Dell OpenManage to do this. I have installed OMSA 7.0 on the VMware ESXi host (5.0 and fully updated) and I've installed OpenManage Essentials on a Windows Server 2008 R2 guest. The issue I'm having is that OpenManage is unable to see my RAID controller. I have seen videos and photos in online guides on how to do this, so I would assume that the functionality exists and I just have it set up wrong.

    Read the article

  • Kernel-mode Authentication: 401 errors when accessing site from remote machines

    - by CJM
    I have several Classic ASP sites that use Integrated Windows Authentication and Kerberos delegation. They work OK on the live servers (recently moved to Server 2008/IIS7 servers), but do not work fully on my development PC or my development server. IIS on both machines was configured through an IIS Web Deployment Tool package which was exported from an old machine; the deployment didn't work perfectly, and I had to tinker a bit to get the sites working. When accessing the apps locally on either machine, they work fine; when accessing from another machine, the user is prompted with a username/password dialog, and regardless of what you enter, it ultimately results in a 401 (Unauthorized) error. I've tried comparing the configuration of these machines against similar live servers (which all work fine), and they seem broadly comparable (given that none of the live servers is yet on IIS 7.5, i.e. Windows 7/Server 2008 R2). These applications run in a common application pool which uses a special domain user as its identity; this user has similar permissions on the live and development machines. On IIS6 platforms, to enable Kerberos delegation, I needed to set up some SPNs for this user, and they are still in place (even though I don't believe they are needed any longer for IIS7+ due to kernel-mode authentication). Furthermore, this account is enabled for Kerberos delegation in Active Directory, as is each machine I am dealing with. I'm considering the possibility that the deployment might have made changes, or failed to make changes, to the IIS configuration, thus causing this problem. Perhaps a complete rebuild (minus another web deployment attempt) would solve the problem, but I'd rather fix (and thus understand) the current problem. Any ideas so far? I've just had another attempt at fixing this issue and made some progress, but I don't have a complete fix yet. I've discovered that if I access the sites via IP address (rather than via NetBIOS name), I get the same dialog, except that it accepts my credentials and the application works; not quite a fix, but a useful step. More interestingly, I discovered that if I disable kernel-mode authentication (in IIS Manager, Website > Authentication > Advanced Settings), the applications work perfectly. My foggy understanding is that this is effectively working in the pre-IIS7 way. A reasonable short-term solution, but consider the following explicit advice from IIS on this issue: "By default, IIS enables kernel-mode authentication, which may improve authentication performance and prevent authentication problems with application pools configured to use a custom identity. As a best practice, do not disable this setting if Kerberos authentication is used in your environment and the application pool is configured to use a custom identity." Clearly, this is not the way my applications should be working. So what is the issue?
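
    For reference, a hedged sketch of the setting that usually reconciles kernel-mode authentication with a custom application-pool identity: useAppPoolCredentials, which makes kernel-mode Kerberos use the pool's domain account (the one the SPNs are registered against) instead of the machine account. The site name below is a placeholder.

      REM Run on the development machines, then restart the site/app pool
      %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
          -section:system.webServer/security/authentication/windowsAuthentication ^
          /useAppPoolCredentials:true /commit:apphost

    That would also be consistent with the symptom that access by IP address works: IP access tends to fall back to NTLM, which does not care which account holds the SPN.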

    Read the article

  • Automatically mount a remote folder on boot

    - by Andrew
    I'm trying to mount a Windows folder on my Ubuntu machine at start up. I've tried following this page here, modifying /etc/fstab and appending

      sshfs#my_user@remote_host:/path/to/directory <local_mount_point> fuse user 0 0

    to it, but it fails; on start up, I get an error saying that the mounting failed, and I can press S to skip or M to recover manually. I also tried following this page here, appending

      /usr/bin/sshfs -o idmap=user my_user@remote_host:/path/to/directory <local_mount_point>

    to the /etc/rc.local file, but this doesn't help either; Ubuntu just boots up normally without mounting. I have Cygwin installed on my Windows machine, and I can run everything smoothly, such as sshing without passwords, and mounting manually. I've also tried running the modified rc.local file by hand ($ /etc/rc.local), and it works perfectly, but I just can't seem to get the folder mounted on start up. Can someone help me?
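
    A hedged sketch of the two usual culprits and a workaround, keeping the placeholders from the question: at boot the mount is performed by root, so the key that root offers (not my_user's desktop key) must be authorized on the Cygwin side, and the network may not be up yet when fstab is processed.

      # /etc/fstab -- single line; IdentityFile names a key root can read that is
      # authorized on remote_host, _netdev marks the mount as network-dependent
      sshfs#my_user@remote_host:/path/to/directory <local_mount_point> fuse _netdev,reconnect,IdentityFile=/root/.ssh/id_rsa 0 0

    The same key consideration applies to the rc.local approach: at boot rc.local runs as root without my_user's SSH agent, which is the usual reason it works when run by hand but not during start up.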

    Read the article

  • Simple EXPECT script to execute remote command and display output

    - by s.mihai
    I am trying to connect to a network router and execute show status on it. Currently I am using:

      spawn ssh -o StrictHostKeyChecking=no [email protected]
      expect " * ? password:*\r"
      send -- "secretPassword\r"
      sleep 5
      send -- "show status\r"
      sleep 10
      send -- "exit\r"

    It doesn't work; I get stuck at [email protected]'s password: and when I try entering the password I get:

      server1:~# secretPassword
      -bash: server1: command not found
      server1:~#

    What am I doing so wrong here?
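
    Those lines only work when fed to the expect interpreter; pasted into a bash prompt they are run as shell commands, which is exactly the "command not found" symptom above. A hedged sketch of a complete script (host and password kept from the question; the "#" prompt pattern is a guess that may need adjusting for the router):

      #!/usr/bin/expect -f
      # save as showstatus.exp and run:  expect showstatus.exp
      set timeout 30
      spawn ssh -o StrictHostKeyChecking=no [email protected]
      expect "*assword:*"
      send -- "secretPassword\r"
      expect "*#*"
      send -- "show status\r"
      expect "*#*"            ;# wait for the prompt so the output is printed
      send -- "exit\r"
      expect eof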

    Read the article

  • using nmap to guess remote OS and probe service details on a single port only

    - by WoJ
    I am looking at scanning a large network with nmap in order to:
    - identify the OS of devices (-O --osscan-limit)
    - probe for details of a service on a single port (I would have used -sV for all open ports)
    The problem is that -sV will probe all the ports (which I do not want to do for performance reasons), and I cannot use -p to limit the scan to the one port I am interested in, as this impacts the OS fingerprinting. I could not find anything in the manual to limit the service probing. Thank you for any ideas (including other approaches outside of nmap, though I would prefer to stick with nmap).
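
    A hedged sketch of one workaround: split the job into two passes, so OS detection keeps the wide port sample it wants while version detection only ever touches the single port of interest (the network, port and output names are placeholders).

      # Pass 1: OS fingerprinting only, no version probes
      nmap -O --osscan-limit -oA osscan 192.168.0.0/24

      # Pass 2: service/version detection restricted to one port
      nmap -sV -p 443 -oA versionscan 192.168.0.0/24

    The results can then be joined per host from the two output files.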

    Read the article

  • Wamp virtualhost with support for remote access

    - by Farid
    To cut a long story short, I've set up a WAMP server with a local virtual host for a domain like sample.dev. Now I've bound my static IP and port 80 to Apache and asked the client to make some changes in his hosts file and add x.x.x.x sample.dev. I've also configured my httpd virtual host like this:

      <VirtualHost *:80>
          ServerAlias sample.dev
          DocumentRoot 'webroot_directory'
      </VirtualHost>

    The client can reach my web server using direct access by IP address, but when he tries using the sample domain it looks like he gets into some infinite loop. The firewall is off too. What could the problem be? Thanks.
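
    A hedged guess at the cause: with no ServerName, name-based matching may not behave as expected and requests for sample.dev can fall through to the default WAMP vhost (which often redirects, hence the loop). A sketch for Apache 2.2-era WAMP, with an assumed document root:

      # httpd-vhosts.conf
      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName   sample.dev
          ServerAlias  www.sample.dev
          DocumentRoot "C:/wamp/www/sample"      # assumed path
          <Directory "C:/wamp/www/sample">
              Order Allow,Deny
              Allow from all                     # WAMP often allows localhost only
          </Directory>
      </VirtualHost>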

    Read the article

  • remote symbolic link / junction

    - by Blueberry
    Might be a pretty obvious one, but I have had some trouble finding solid answers. I have a directory on a Windows network share containing different versions of an application. I would like to have a link to one of these called 'current': a symbolic link sitting beside all the other versions and pointing to one of them. Creating this link seems to be more of an issue than I would have thought. It looks like a symlink only shows the link on the same machine where it was created (which is not going to work, for obvious reasons), and the junction tool needs to be run on the server, which is practically impossible due to various restrictions. What would be the best way to go about this? Would I just need to copy the files twice, or can I have a symbolic link which can be created and accessed remotely?
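
    A hedged sketch of the usual recipe, with placeholder paths: a directory symlink (unlike a junction) may point at a UNC path and can be created from any machine with write access, but clients will only follow links stored on a remote volume if remote-to-remote symlink evaluation is enabled, which it is not by default.

      REM Create the link next to the version directories (run as admin)
      mklink /D \\fileserver\apps\current \\fileserver\apps\1.2.3

      REM On each client that must follow the link across the share:
      fsutil behavior set SymlinkEvaluation R2R:1 R2L:1

    If touching every client is not practical, a DFS namespace link, or simply a share named 'current' that gets re-pointed on each release, are common workarounds.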

    Read the article

  • Thomson router reboots unexpectedly with an apparent remote connection attempt

    - by ChrisF
    I've got a weird problem. Every so often my router (a Thomson TG585 v8 running version 8.2.7.8 of its firmware) reboots itself. It seems to be associated with this message in the event log:

      FIREWALL replay check (1 of 2): Protocol: ICMP
      Src ip: 183.178.144.177  Dst ip: xxx.xxx.xxx.xxx
      Type: Destination Unreachable  Code: Host Unreacheable

    xxx.xxx.xxx.xxx is my external IP address, and 183.178.144.177 resolves to 183178144177.ctinets.com. We've got a student from Hong Kong staying with us at the moment, and the reboots seem coincidental with him starting up his laptop. I say this because a check on ctinets.com shows it to be based in Hong Kong, though our guest's laptop doesn't appear to have any software related to this company installed. I say "apparently" as he is running the Chinese version of Windows and his English doesn't cover technical subjects like this. I know this is an incoming message, but I was assuming that it was in response to something on the student's laptop, which is why my first thought was malware; however, we've got antivirus on all the other machines and have run Malwarebytes on his with a negative result, so I don't think the problem is due to a virus or (known) trojan. What else can I do to stop this and identify the cause?

    Read the article

  • MySQL timeout only from office network to remote server, but other connections are fine

    - by Adam
    I've been developing these apps just fine on a local machine, as has my co-worker. We recently moved our work desks, so we're now on a different floor of the building, but we only have one router that we're connected to. Since then, connecting to this one server appears to time out more often than not. Occasionally I get through, and the loading is instantaneous. Anyhow, these are the connections we tested:
    1. my computer -> office network -> php pdo -> mysql server A - timeout
    2. my computer -> office network -> mysql cli -> mysql server A - timeout
    3. my computer -> office network -> mysql cli -> mysql server A - timeout
    4. another pc -> office network -> mysql cli -> mysql server A - timeout
    5. my computer -> mobile network -> mysql cli -> mysql server A - ok
    6. my computer -> office network -> ssh server A -> mysql server A - ok
    7. my computer -> office network -> ssh server B -> mysql server A - ok
    8. server B web app -> php pdo -> mysql server A - ok
    9. my computer -> office network -> php pdo -> mysql server B - ok
    10. my computer -> office network -> mysql cli -> mysql server B - ok
    This has really stumped me.
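
    A hedged diagnostic sketch, given that server A works over SSH and from the mobile network but not over plain MySQL from the office: compare the network path and check for path-MTU or reverse-DNS stalls (hostnames are placeholders; the ping syntax is Linux).

      # From the office: how does the route to server A differ from server B's?
      traceroute mysql-a.example.com

      # Probe for a path-MTU black hole; lower 1472 until the ping succeeds
      ping -M do -s 1472 mysql-a.example.com

      # On server A, a common cause of hanging connects is the reverse DNS
      # lookup of each client; adding this under [mysqld] in my.cnf avoids it:
      #   skip-name-resolve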

    Read the article

  • accidentally concatenate a large file on a remote system

    - by Dan
    Every once in a while, on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to just log out and then log back in; since it can go 100x faster when I'm logged out, it'll finish in the short time it takes me to type my password in again. Is there a better way, either involving the fact that I'm in a screen session, or a way to do this within SSH? What doesn't work:
    - detaching from the screen session (doesn't respond until the file is done outputting)
    - the command to move to a different window in the screen session (also doesn't respond)
    - typing Ctrl+C to kill the cat command (also doesn't respond, probably because the command is done and the buffers just have to catch up)
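
    Not a fix after the fact, but a hedged sketch of habits that avoid the flood in the first place, since once the output is buffered the terminal mostly just has to drain it:

      # Page instead of dumping; q quits immediately with nothing left to drain
      less largefile.txt

      # Or check what you are dealing with before printing it
      ls -lh largefile.txt
      head -n 50 largefile.txt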

    Read the article

  • OS X Sending syslog to a remote box

    - by skarface
    For some reason I have a hard time wrapping my head around how OS X handles things like init, cron, and "normal" daemon maintenance. Too many years spent doing *nix work. How do I configure syslogd on a 10.6 OS X box to send logs to a syslog server?
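
    A hedged sketch of the classic approach on 10.6, assuming the collector listens on UDP 514 at 192.168.1.50 (a placeholder address): 10.6 still honours the BSD-style /etc/syslog.conf for forwarding, and syslogd itself is managed by launchd rather than an init script.

      # /etc/syslog.conf -- add a forwarding line (facility.level, then @host)
      *.*     @192.168.1.50

      # Restart syslogd via launchd so it rereads the configuration
      sudo launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
      sudo launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist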

    Read the article

  • Routing and Remote Access Service won't start after full disk

    - by NKCSS
    The HDD of the server ran out of disk space, and after a reboot RRAS won't start any more on my 2008 R2 server. Error details:

      Log Name: System
      Source: RemoteAccess
      Date: 2/5/2012 9:39:52 PM
      Event ID: 20153
      Task Category: None
      Level: Error
      Keywords: Classic
      User: N/A
      Computer: Windows14111.<snip>
      Description: The currently configured accounting provider failed to load
      and initialize successfully. The connection was prevented because of a
      policy configured on your RAS/VPN server. Specifically, the authentication
      method used by the server to verify your username and password may not
      match the authentication method configured in your connection profile.
      Please contact the Administrator of the RAS server and notify them of
      this error.

      Event Xml:
      <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
        <System>
          <Provider Name="RemoteAccess" />
          <EventID Qualifiers="0">20153</EventID>
          <Level>2</Level>
          <Task>0</Task>
          <Keywords>0x80000000000000</Keywords>
          <TimeCreated SystemTime="2012-02-05T20:39:52.000Z" />
          <EventRecordID>12148869</EventRecordID>
          <Channel>System</Channel>
          <Computer>Windows14111.<snip></Computer>
          <Security />
        </System>
        <EventData>
          <Data>The connection was prevented because of a policy configured on your RAS/VPN server. Specifically, the authentication method used by the server to verify your username and password may not match the authentication method configured in your connection profile. Please contact the Administrator of the RAS server and notify them of this error.</Data>
          <Binary>2C030000</Binary>
        </EventData>
      </Event>

    I think it has something to do with a corrupt config file, but I am unsure what to do. I removed the RRAS role, rebooted, and re-added it, but it keeps failing with the same error. Thanks in advance. UPDATE: If I set the accounting provider from 'Windows' to '' the service starts, but VPN won't work. Any ideas how this can be repaired?

    Read the article

  • How to set up a GIT repo on a server that needs a working dir (non-bare)

    - by OrangeTux
    I want to configure a GIT repo for a website. Multiple users will have a clone of the repo on their local machines, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository: the idea is that the working dir of the repository will be the root folder for the website, so at the end of each day all changes will be visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client is trying to push some files:

      git push origin master
      [email protected]'s password:
      Counting objects: 3, done.
      Writing objects: 100% (3/3), 227 bytes, done.
      Total 3 (delta 0), reused 0 (delta 0)
      remote: error: refusing to update checked out branch: refs/heads/master
      remote: error: By default, updating the current branch in a non-bare repository
      remote: error: is denied, because it will make the index and work tree inconsistent
      remote: error: with what you pushed, and will require 'git reset --hard' to match
      remote: error: the work tree to HEAD.
      remote: error:
      remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
      remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
      remote: error: its current branch; however, this is not recommended unless you
      remote: error: arranged to update its work tree to match what you pushed in some
      remote: error: other way.
      remote: error:
      remote: error: To squelch this message and still keep the default behaviour, set
      remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
      To ssh://[email protected]/home/orangetux/www/
       ! [remote rejected] master -> master (branch is currently checked out)
      error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

    So I'm wondering whether this is the right way to set up a GIT repo for a website, and if so, how do I do it? If not, what is a better way to set up a GIT repo for the development of a website? EDIT: "you can't push to a non-bare repository". OK, clear. But what's the way to solve my problem? Create a bare repository on the server and have a clone of that repo on the same server in the htdocs folder? That looks a bit clumsy to me: to see the result of a commit I'd have to clone the repository each time.
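
    For what it's worth, a hedged sketch of the usual pattern for exactly this setup: push to a bare repository and let a post-receive hook check the pushed branch out into the web root, so the site updates itself on every push (the paths below reuse the ones from the question and may need adjusting).

      # On the server: a bare repository to push to
      git init --bare /home/orangetux/site.git

      # Put these two lines into /home/orangetux/site.git/hooks/post-receive
      # and make the file executable (chmod +x); each push then refreshes the web root:
      #     #!/bin/sh
      #     GIT_WORK_TREE=/home/orangetux/www git checkout -f master

      # On each developer's machine:
      git push ssh://[email protected]/home/orangetux/site.git master

    This keeps the web root as a plain checkout that nobody edits directly, which avoids the inconsistent index the error message warns about.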

    Read the article

  • Interface to collect successful remote backups status

    - by Aseques
    I would like to deploy into our infrastructure a web interface that could register when the copies are finished and when, for some reason, they haven't. The current issue is that we are doing on-site backups for customers, and for each backup a mail is sent at the end of the backup; the problem is that sometimes the mail isn't sent, for a variety of reasons:
    - the system doesn't have internet access
    - the backup system crashed before sending the mail
    - etc.
    What I'd like to do is to have a web interface that the backup software can visit after doing the backup (whether it's a success or a failure), which acknowledges that the backup has finished; after some time, I'd like to receive a report of the machines that haven't done their backup. Is there anything remotely similar to this that I could use or adapt to our environment? UPDATE: Just found paessler.com, which seems to be a proprietary version of what I intended.
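
    A hedged sketch of the dead-man's-switch idea behind such an interface (the URL, client name and run_backup placeholder are assumptions): each backup job ends by pinging a per-client URL with its outcome, and the reporting side simply lists every client that has not pinged within its expected window.

      # Appended to each customer's backup script; run_backup stands in for
      # the real backup command.
      if run_backup; then
          curl -fsS "https://backupcheck.example.com/ping/client-042?status=ok"
      else
          curl -fsS "https://backupcheck.example.com/ping/client-042?status=fail"
      fi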

    Read the article
