Search Results

Search found 12472 results on 499 pages for 'remote debugging'.

  • VPN iptables Forwarding: Net-to-net

    - by Mike Holler
    I've tried to look elsewhere on this site but couldn't find anything matching this problem. Right now I have an IPsec tunnel open between our local network and a remote network. Currently, the local box running Openswan IPsec with the tunnel open can ping the remote IPsec box and any of the other computers in the remote network. When logged into one of the remote computers, I can ping any box in our local network. That's what works; this is what doesn't: I can't ping any of the remote computers from a local machine that is not the IPsec box. Here's a diagram of our network:

        [local ipsec box] ----------\
                                     \
        [arbitrary local computer] -- [local gateway/router] -- [internet] -- [remote ipsec box] -- [arbitrary remote computer]

    The local IPsec box and the arbitrary local computer have no direct contact; instead they communicate through the gateway/router. The router has been set up to forward requests from local computers for the remote subnet to the IPsec box. This works. The problem is that the IPsec box doesn't forward anything. Whenever an arbitrary local computer pings something on the remote subnet, this is the response:

        [user@localhost ~]# ping 172.16.53.12
        PING 172.16.53.12 (172.16.53.12) 56(84) bytes of data.
        From 10.31.14.16 icmp_seq=1 Destination Host Prohibited
        From 10.31.14.16 icmp_seq=2 Destination Host Prohibited
        From 10.31.14.16 icmp_seq=3 Destination Host Prohibited

    Here's the traceroute:

        [root@localhost ~]# traceroute 172.16.53.12
        traceroute to 172.16.53.12 (172.16.53.12), 30 hops max, 60 byte packets
         1  router.address.net (10.31.14.1)  0.374 ms  0.566 ms  0.651 ms
         2  10.31.14.16 (10.31.14.16)  2.068 ms  2.081 ms  2.100 ms
         3  10.31.14.16 (10.31.14.16)  2.132 ms !X  2.272 ms !X  2.312 ms !X

    That's the IP of our IPsec box that it's reaching, but nothing is being forwarded past it. On the IPsec box I have enabled IP forwarding in /etc/sysctl.conf:

        net.ipv4.ip_forward = 1

    And I have tried to set up iptables to forward:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [759:71213]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 25 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 500 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 4500 -j ACCEPT
        -A INPUT -m policy --dir in --pol ipsec -j ACCEPT
        -A INPUT -p esp -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -s 10.31.14.0/24 -d 172.16.53.0/24 -j ACCEPT
        -A FORWARD -m policy --dir in --pol ipsec -j ACCEPT
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    Am I missing a rule in iptables? Is there something I forgot? NOTE: All the machines are running CentOS 6.x. Edit: Note 2: eth1 is the only network interface on the local ipsec box.
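
    As a hedged aside for this kind of single-NIC Openswan gateway, two standard iptables checks (a sketch only, not a confirmed fix for this particular case):

        # allow forwarded reply traffic explicitly, ahead of the REJECT rule
        iptables -I FORWARD 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        # confirm which FORWARD rules are actually loaded, and in what order
        iptables -L FORWARD -v -n --line-numbers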

    Read the article

  • pfsense multi-site VPN VOIP deployment

    - by sysconfig
    The main office pfSense firewall is configured with these local networks: WAN - internet; LAN - local network; VOIP - IP phones. I need to connect remote offices (multiple users) and single remote users (from home), using IPsec or OpenVPN to build "permanent", automatically connecting tunnels from each remote location to the main location. In the remote locations the network will look like this: WAN - internet; LAN - local network with multiple users; VOIP - multiple IP phones. In order for the IP phones to work they have to be able to "see" the VOIP network and the VOIP server back at the main office. For single remote users (like from home) the setup will be similar, but with only one phone and one computer. So, the questions: What is the best way to tie the networks together, IPsec or OpenVPN? Can this be set up to connect automatically? Are there any issues with or suggestions for that design/topology? Any QoS or other issues with running the VOIP traffic over a VPN (throughput, quality, etc.)? That obviously depends on the remote locations' connections to some degree.

    Read the article

  • .htaccess working on remote server but does not work on localhost. Getting 404 errors on localhost

    - by Afsheen Khosravian
    MY PROBLEM: When I visit localhost the site does not work. It shows some text from the site, but it seems the server cannot locate any other files. Here is a snippet of the errors from Firebug:

        "NetworkError: 404 Not Found - localhost/css/popup.css"
        "NetworkError: 404 Not Found - localhost/css/style.css"
        "NetworkError: 404 Not Found - localhost/css/player.css"
        "NetworkError: 404 Not Found - localhost/css/ui-lightness/jquery-ui-1.8.11.custom.css"
        "NetworkError: 404 Not Found - localhost/js/jquery.js"

    It seems my server is looking for the files in the wrong places. For example, localhost/css/popup.css is actually located at localhost/app/webroot/css/popup.css. I have the site set up on a remote server with the exact same configuration and it works perfectly fine; I am only having this issue trying to run the site on my laptop at localhost. I edited the DocumentRoot in my VirtualHosts file to /home/user/public_html/site.com/public/app/webroot/ and this reduces some errors, but I feel that this is wrong and sort of a hack, since I didn't use those settings on my production server, which works. The last note I want to make is that the website uses dynamic URLs; I don't know if that has anything to do with it. For example, on the production server the URLs are: site.com/#hello/12321.

    HERE'S WHAT I AM WORKING WITH: I have a LAMP server set up on my laptop, which runs Ubuntu 11.10. I have enabled mod_rewrite:

        sudo a2enmod rewrite

    Then I edited my VirtualHosts file:

        <VirtualHost *:80>
            ServerName localhost
            DirectoryIndex index.php
            DocumentRoot /home/user/public_html/site.com/public
            <Directory /home/user/public_html/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Then I restarted Apache. The website uses CakePHP. This is the directory structure of the website: "/home/user/public_html/site.com/public" contains:

        index.php
        app
        cake
        plugins
        vendors

    These are my .htaccess files:

    /home/user/public_html/site.com/public/app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ webroot/ [L]
            RewriteRule (.*) webroot/$1 [L]
        </IfModule>

    /home/user/public_html/site.com/public/app/webroot/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]
        </IfModule>
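
    As a hedged aside: a stock CakePHP layout like the one above normally also carries a third .htaccess in the document root itself (here /home/user/public_html/site.com/public/), which is what rewrites css/ and js/ requests on to app/webroot/. A sketch of what that file usually looks like; verify it against the CakePHP skeleton this site was built from:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule    ^$    app/webroot/    [L]
            RewriteRule    (.*)  app/webroot/$1  [L]
        </IfModule>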

    Read the article

  • SSH tunnel RDP through gateway server outside the network?

    - by Mike
    I need to access a PC via RDP that is behind a firewall. There's no way to connect to it directly that I know of. What I'd like to do is SSH from that remote PC to my home Ubuntu server, then connect to the remote PC using my home PC with the Ubuntu server as a gateway. I've tried SSH from remote PC to Ubuntu server, tunneling remote port 3389 to 127.0.0.1:3389, then SSH from home PC to Ubuntu server, tunneling local port 13389 to remote port 3389. At that point I try to RDP into: 127.0.0.1:13389, 127.0.0.2:13389, :3389 - no dice. I suppose I could simply set up an SSH server on my home PC and SSH from remote PC into home PC and then establish the tunnel that way, but I'd rather not go through the hassle of installing and configuring an ssh server on my home PC. I know LogMeIn would work here, but I don't want to go that route for various reasons. Any ideas? Thanks!
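
    A hedged sketch of the double tunnel described above, with placeholder hostnames; worth knowing is that ssh only prints a warning (it does not abort) if the remote end fails to bind port 3389, which would quietly break the chain:

        # on the remote PC (behind the firewall): publish its RDP port on the Ubuntu server
        ssh -R 3389:127.0.0.1:3389 user@ubuntu-server

        # on the home PC: pull that port back down
        ssh -L 13389:127.0.0.1:3389 user@ubuntu-server

        # then point the RDP client on the home PC at 127.0.0.1:13389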

    Read the article

  • rsync -c -i flags identical files as different

    - by Scott
    My goal: given a list of files on the local server, show any differences from the files with the same absolute path on the remote server; e.g. compare local /etc/init.d/apache to the same file on the remote server. "Difference" for me means a different checksum. I don't care about file modification times. I also do not want to sync the files (yet); only show the diffs. I have rsync 3.0.6 on both the local and remote servers, which should be able to do what I want. However, it is claiming that local and remote files, even with identical checksums, are still different. Here's the command line:

        $ rsync --dry-run -avi --checksum --files-from=/home/me/test.txt --rsync-path="cd / && rsync" / me@remote:/

    where "me" = my username, "remote" = the remote server hostname, the current working directory is '/', and test.txt contains one line reading "/etc/init.d/apache". OS: Linux 2.6.9. Running cksum on /etc/init.d/apache on both servers yields the same result. The files are the same. However, the rsync output is:

        me@remote's password:
        building file list ... done
        .d..t...... etc/
        cd+++++++++ etc/init.d/
        <f+++++++++ etc/init.d/apache

        sent 93 bytes  received 21 bytes  20.73 bytes/sec
        total size is 2374  speedup is 20.82 (DRY RUN)

    The output codes (see http://www.samba.org/ftp/rsync/rsync.html) mean that rsync thinks /etc is identical except for mod time, /etc/init.d needs to be changed, and /etc/init.d/apache will be sent to the remote server. I don't understand how, with the --checksum option and the files having identical checksums, rsync can think they're different. (I've tried with other files having identical mod times, and those files are not flagged as different.) I did run this in /, and made sure (AFAIK) that it runs remotely in /, so even relative pathnames will still be correct. I ran rsync with -avvvi for more debug info, but saw nothing remarkable. I'm wondering: Is rsync still looking at file mod times, even with --checksum? Am I somehow not setting up the path(s) right? What am I doing wrong?
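
    A hedged out-of-band sanity check (assumes ssh access and md5sum on both ends) to confirm whether the file contents really are identical, independently of what rsync's itemized output claims:

        md5sum /etc/init.d/apache
        ssh me@remote 'md5sum /etc/init.d/apache'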

    Read the article

  • How to switch between screens when running screen inside screen?

    - by André Andrade
    I have to work inside two environments: one Windows (local) and one Linux (remote). I've installed the screen utility in both. I open a screen session on my Windows machine, then in one window I open an SSH connection to the remote Linux box and start another screen session there. Sample layout:

        linux   -- |0 linux remote 0|  1 linux remote 1
        windows -- |0 linux |  9 windows

    I can switch between "linux remote 0" and "linux remote 1" using Alt+<number>; this is configured in .screenrc (bindkey "^[0" select 0). How can I switch to "9 windows"?
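
    A hedged sketch of the usual ways to handle nested screen sessions; 'escape' and the double-prefix behaviour are standard GNU screen features, though the exact key chosen here is only an example:

        # in the inner (remote) ~/.screenrc: give the inner screen its own escape key,
        # e.g. Ctrl-b, so the outer screen keeps Ctrl-a and its Alt+<number> bindings
        escape ^Bb

        # alternatively, press the prefix twice: Ctrl-a a sends a literal Ctrl-a
        # through to the inner screen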

    Read the article

  • Can I pass HTTPS traffic from one port to another?

    - by Kit Sunde
    I'm doing a proxy_pass in nginx from port 80 to 8000 on my remote server, and then a port forward from 8000 to 80 from the remote to my localhost. This works great, but I'd also like to do it with HTTPS; however, it seems like nginx needs a valid cert to pass the traffic on. Is there a way for my remote server to simply forward the traffic from port 443 to, say, 8443 (and then I'll forward remote 8443 to local 443), and terminate SSL on my development machine instead of needing to do it on the remote server? My remote runs Ubuntu and my localhost runs OS X.
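
    A hedged sketch of one way to hand the TLS stream off unterminated on the Ubuntu remote (standard iptables NAT usage; it assumes the existing SSH tunnel then carries 8443 back to the development machine):

        # redirect incoming TCP 443 to local port 8443 on the remote server
        sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443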

    Read the article

  • MySql transfer / update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone can help. Facts: a 30 GB MySQL database on a remote server (about 20,000,000 rows); the data is updated once weekly on the local network (MySQL); I need to transfer the updated local database and replace the remote one with it; the connection is about 2 MB (real megabytes, not megabits) up/down. The point is that I can't have downtime on the remote MySQL server. Until now I have tried: Navicat data sync - OK, but takes about 3 days to finish; dbForge - OK, but needs 5 days to finish; a mysqldump transferred to the remote server and executed there - about a day, but a lot of downtime; rsync of the database folder /mysql/lib/MY_DATABASE - 4 hours, but after that I always have to run a repair on the remote server, which takes about 2 hours, and there is a lot of downtime; a mysqldump piped from the command line directly to the server - still not satisfied, many problems (I could list more things that I tried); MySQL replication - slow. Anyway, what is the best way to refresh the remote MySQL weekly while having zero downtime and no huge server load? If you have any ideas, please share.
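
    A hedged sketch of the compressed single-pipe transfer variant (host and database names are placeholders; --single-transaction assumes InnoDB tables). On its own it only shrinks the transfer; keeping remote downtime near zero usually also means loading into a staging database and swapping it in afterwards, since RENAME TABLE in MySQL is atomic:

        # dump locally, compress on the fly, load into a staging database on the remote
        mysqldump --single-transaction --quick my_database \
          | gzip -c \
          | ssh user@remote.example.com 'gunzip -c | mysql my_database_staging'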

    Read the article

  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires to connect permanently to the customer's network through VPN. They are using a so-called SSL VPN. As I am using OpenVPN since more than 5 years within my company's network I was quite curious about their solution and how it would actually be different from OpenVPN. Well, short version: It is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary). WatchGuard Firebox SSL - About dialog Borrowing some files from a Windows client installation Initially, I didn't know about the product, so therefore I went through the installation on Windows 8. No obstacles (and no restart despite installation of TAP device drivers!) here and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated from both parties - customer and me. Of course, this whole client package and my long year approved and stable installation ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this is years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files: Application folder below user profile with configuration and certificate files From there we are going to borrow four files, namely: ca.crt client.crt client.ovpn client.pem and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated. Configuration of OpenVPN (console) Depending on your distribution the following steps might be a little different but in general you should be able to get the important information from it. I'm going to describe the steps in Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's what it is necessary to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement: $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome Just to be on the safe side. The four above mentioned files from your Windows machine could be copied anywhere but either you place them below your own user directory or you put them (as root) below the default directory: /etc/openvpn At this stage you would be able to do a test run already. Just in case, run the following command and check the output (it's the similar information you would get from the 'View Logs...' context menu entry in Windows: $ sudo openvpn --config client.ovpn Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client. 
    Remote server and user authentication to establish the VPN

    Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.

    Simplifying your life - authentication file

    In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension on Linux systems for OpenVPN, and then open the client configuration file in order to extend an existing line.

        $ sudo mv client.ovpn client.conf
        $ sudo nano client.conf

    You should have content similar to this:

        dev tun
        client
        proto tcp-client
        ca ca.crt
        cert client.crt
        key client.pem
        tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
        remote-cert-eku "TLS Web Server Authentication"
        remote 1.2.3.4 443
        persist-key
        persist-tun
        verb 3
        mute 20
        keepalive 10 60
        cipher AES-256-CBC
        auth SHA1
        float 1
        reneg-sec 3660
        nobind
        mute-replay-warnings
        auth-user-pass auth.txt

    Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the 'auth-user-pass auth.txt' line, and we have to create a new authentication file 'auth.txt'. You can give the 'auth-user-pass' directive any file name you'd like. Due to my existing OpenVPN infrastructure my setup differs completely from the content written above, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt':

        $ sudo nano auth.txt

    and just put two lines of information in it - the username on the first, and the password on the second line, like so:

        myvpnusername
        verysecretpassword

    Store the file, change permissions, and call openvpn with your configuration file again:

        $ sudo chmod 0600 auth.txt
        $ sudo openvpn --config client.conf

    This should now work without being prompted to enter a username and password. In case you placed your files below the system-wide location /etc/openvpn, you can also operate your VPNs via the service command, like so:

        $ sudo service openvpn start client
        $ sudo service openvpn stop client

    Using Network Manager

    For newer Linux users, or the ones with 'console-phobia', I'm now going to describe how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs..., which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.

    Network connections overview in Ubuntu

    Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'

    Choose connection type to import VPN configuration

    Now navigate to the folder where you put the client files from the Windows system and open the 'client.ovpn' file.
    Next, on the 'VPN' tab proceed with the following steps (directives from the configuration file are referenced in parentheses):

    General
        Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup)

    Authentication
        Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
        Enter the User name to access your client keys (Auth Name: myvpnusername)
        Enter the Password (Auth Password: verysecretpassword) and choose your password handling
        Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
        Browse for your CA Certificate ('ca' - should be filled as ca.crt)
        Specify your Private Key ('key' - here: client.pem)

    Then click on the 'Advanced...' button and check the following values:

        Use custom gateway port: 443 (second value of the 'remote' directive)
        Check the selected value of Cipher ('cipher')
        Check HMAC Authentication ('auth')
        Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')

    Finally, confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry of your Network Manager in the systray. It is advised that you keep an eye on the syslog to see whether there are any problematic issues that would require some additional attention.

    Advanced topic: routing

    As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to get access to machines on the remote side there are two possibilities: proper routing on both sides of the connection, which enables access in both directions, or network masquerading on the 'client side' of the connection. In the following, I'm going to describe the second option in a little more detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on how to set up a Linux gateway/router - just use Google. OK, back to the actual modifications. First, we need some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we have established the OpenVPN channel, like so:

        $ sudo tail -n20 /var/log/syslog

    Or, if your system is quite busy with logging, like so:

        $ sudo less /var/log/syslog | grep ovpn

    The output should contain a PUSH received message similar to the following one:

        Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'

    The interesting part for us is the route entry in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: the IP address range on both sides of the connection has to be different, otherwise you will have to shuffle IPs or increase your netmask. After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly.
    I created a shell script to take care of those steps:

        #!/bin/sh -e
        IPTABLES=/sbin/iptables
        DEV_LAN=eth0
        DEV_VPNS=tun+
        VPN=192.168.1.0/24

        $IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
        $IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
        $IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE

    I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only.

    Simplifying your life - automatic connect on boot

    Now that the client connection works flawlessly and the configuration of routing and iptables is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via the init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly.

        $ sudo nano /etc/default/openvpn

    It should have content similar to this:

        # This is the configuration file for /etc/init.d/openvpn
        #
        # Start only these VPNs automatically via init script.
        # Allowed values are "all", "none" or space separated list of
        # names of the VPNs. If empty, "all" is assumed.
        # The VPN name refers to the VPN configutation file name.
        # i.e. "home" would be /etc/openvpn/home.conf
        #
        #AUTOSTART="all"
        #AUTOSTART="none"
        #AUTOSTART="home office"
        #
        # ... more information which remains unmodified ...

    With the OpenVPN client configuration as described above, you would set AUTOSTART either to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established, too. You can easily test this configuration without a reboot, like so:

        $ sudo service openvpn restart

    Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server. Cheers, JoKi

    Read the article

  • NTP service, offset increasing after sync

    - by Ajay
    I have installed Ubuntu 12.10 on my PC. I am running the NTP service with a GPS unit as the NTP server. I found that when the NTP service is started with the ntp start command, the PC is able to sync with GPS: I get the '*' symbol before the GPS IP when I run the ntpq -p command. This stays good for some time, and then the '*' symbol is removed, which means the PC is no longer synchronized to that server. Running ntpq -p then shows that all parameters look OK, but since the '*' is gone, the offset slowly keeps increasing:

             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
        *192.168.100.33  .GPS.            1 u    7   16    1    2.333   23.799   0.808

             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
        *192.168.100.33  .GPS.            1 u   14   16    3    2.333   23.799   0.879

             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
        *192.168.100.33  .GPS.            1 u   11   16    7    2.333   23.799   1.500

             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
        *192.168.100.33  .GPS.            1 u    8   16   17    2.333   23.799   2.177

    Below is the ntpq status after sync with GPS is lost:

        ==============================================================================
         192.168.100.33  .GPS.            1 u    1   16  377    2.404  1169.94   1.735

             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
         192.168.100.33  .GPS.            1 u    -   16  377    2.513  1171.80   0.898

             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
         192.168.100.33  .GPS.            1 u   15   16  377    2.513  1171.80   0.898

    Since the GPS is still available, the PC never re-synchronizes itself to GPS later on. I have to restart the NTP service, and then the PC synchronizes to GPS and the '*' symbol comes back.
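
    A hedged set of diagnostics for the next time the peer gets demoted (standard ntpq/ntpstat usage; the peer's 'flash' codes usually say why ntpd stopped trusting the source):

        ntpstat                   # overall sync state
        ntpq -c "rv 0"            # system variables: stratum, root delay/dispersion
        ntpq -c as -c "rv &1"     # peer variables, including the 'flash' reject codes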

    Read the article

  • Using Computer Management (MMC) with the Solaris CIFS Service (August 25, 2009)

    - by user12612012
    One of our goals for the Solaris CIFS Service is to provide seamless Windows interoperability: not just to deliver ubiquitous, multi-protocol file sharing, which is obviously a major part of this project, but to support Windows services at a fundamental level.  It's an ongoing mission and our latest update includes support for Windows remote management. Remote management is extremely important to Windows administrators and one of the mainstay tools is Computer Management. Computer Management is a Windows administration application, actually a collection of Microsoft Management Console (MMC) tools, that can be used to configure, monitor and manage local and remote services and resources.  The MMC is an extensible framework of registered components, known as snap-ins, which allows Computer Management to provide comprehensive management features for both the local system and remote systems on the network. Supported Computer Management features include:

    Share Management
        Support for share management is relatively complete.  You can create, delete, list and configure shares.  It's not yet possible to change the maximum allowed or number of users properties, but other properties, including the Share Permissions, can be managed via the MMC.

    Users, Groups and Connections
        You can view local SMB users and groups, monitor user connections and see the list of open files. If necessary, you can also disconnect users and/or close files.

    Services
        You can view the SMF services running on an OpenSolaris system.  This is a read-only view - we don't support service management (the ability to start or stop SMF services) from Computer Management (yet).

    To ensure that only the appropriate users have access to administrative operations there are some access restrictions on these remote management features.

    Regular users can:
        List shares

    Only members of the Administrators or Power Users groups can:
        Manage shares
        List connections

    Only members of the Administrators group can:
        List open files and close files
        Disconnect users
        View SMF services
        View the EventLog

    Here's a screenshot from when I was using Computer Management and Server Manager (another Windows remote management application) on Windows XP to view some open files on an OpenSolaris system, to prepare a slide presentation on MMC support.
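
    For context, a hedged sketch of what the same administration looks like from the OpenSolaris side of that era; the commands are from the CIFS service documentation of the time and the exact options can vary by release:

        # share a ZFS dataset over SMB (the share name 'docs' is an example)
        zfs set sharesmb=name=docs tank/docs
        # list the shares that Computer Management would display
        sharemgr show -vp
        # join a workgroup (a domain join would use 'smbadm join -u administrator example.com')
        smbadm join -w WORKGROUP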

    Read the article

  • [Get Proactive!] Oracle Service Tools Bundle (STB)

    - by aiyoku
    [Japanese-language post; the body text did not survive the page's character encoding, so only the English fragments are recoverable.] The post introduces the Oracle Service Tools Bundle (STB) for Solaris and the diagnostic data collectors it bundles: Oracle Explorer Data Collector, Oracle Remote Diagnostic Agent (RDA), and Oracle Autonomous Crashdump Tool (ACT). Referenced My Oracle Support notes: Oracle Services Tools Bundle (STB) - RDA/Explorer, SNEEP, ACT (Doc ID 1496381.1); Oracle Explorer Data Collector (Doc ID 1571154.1); Remote Diagnostic Agent (RDA) - Getting Started (Doc ID 314422.1); Solaris crash dump on x86/x64 (Doc ID 1515734.1); Oracle Autonomous Crashdump Tool (ACT) (Doc ID 1596529.1).

    Read the article

  • Reuse remote ssh connections and reduce command/session logging verbosity?

    - by ewwhite
    I have a number of systems that rely on application-level mirroring to a secondary server. The secondary server pulls data by means of a series of remote SSH commands executed on the primary. The application is a bit of a black box, and I may not be able to make modifications to the scripts that are used. My issue is that the logging in /var/log/secure is absolutely flooded with requests from the service user, admin. These commands occur many times per second and have a corresponding impact on logs. They rely on passphrase-less key exchange. The OS involved is EL5 and EL6. Example below. Is there any way to reduce the amount of logging from these actions. (By user? By source?) Is there a cleaner way for the developers to perform these ssh executions without spawning so many sessions? Seems inefficient. Can I reuse the existing connections? Example log output: Jul 24 19:08:54 Cantaloupe sshd[46367]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46446]: Accepted publickey for admin from 172.30.27.32 port 33526 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46475]: Accepted publickey for admin from 172.30.27.32 port 33527 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46504]: Accepted publickey for admin from 172.30.27.32 port 33528 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46583]: Accepted publickey for admin from 172.30.27.32 port 33529 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:54 Cantaloupe sshd[46612]: Accepted publickey for admin from 172.30.27.32 port 33530 ssh2 Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46641]: Accepted publickey for admin from 172.30.27.32 port 33531 ssh2 Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46720]: Accepted publickey for admin from 172.30.27.32 port 33532 ssh2 Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46749]: Accepted publickey for admin from 172.30.27.32 port 33533 ssh2 Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session opened for user admin by (uid=0) Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46778]: Accepted publickey for admin from 172.30.27.32 port 33534 ssh2 Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): 
session opened for user admin by (uid=0) Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session closed for user admin Jul 24 19:08:55 Cantaloupe sshd[46857]: Accepted publickey for admin from 172.30.27.32 port 33535 ssh2
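
    A hedged sketch of SSH connection multiplexing, which is the usual answer to the "can I reuse the existing connections" part; ControlMaster and ControlPath are long-standing OpenSSH client options, though whether the black-box scripts honour ~/.ssh/config, and whether the OpenSSH builds shipped with EL5/EL6 also support ControlPersist, would need checking:

        # ~/.ssh/config for the admin user on the secondary (pulling) server
        Host primary-server                  # placeholder alias
            HostName primary.example.com     # placeholder address
            User admin
            ControlMaster auto
            ControlPath ~/.ssh/cm-%r@%h:%p

    With multiplexing, the repeated per-command logins ride over one established SSH connection instead of opening a fresh one each time; any remaining noise in /var/log/secure could then be trimmed with a syslog filter rather than by touching sshd.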

    Read the article

  • MMC Snapin development: 'MMC not responding' - Is there a timeout that can be set for debugging?

    - by Ed Sykes
    So I'm writing a snap-in for MMC 3. Quite often I have the debugger attached and I'm stepping through some code. MMC has some kind of fail-safe for misbehaving snap-ins that automatically unloads them after a timeout. The message is 'This snap-in is not responding'. After that, MMC can behave as though your snap-in has been unloaded. Fair enough - it's not responding because I'm stepping through it in the debugger. However, the timeout is very short for development. Does anyone know of a way of increasing it? I googled and couldn't find anything. I checked the MMC registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MMC and couldn't see anything there. I also checked the snap-in registry location under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MMC\SnapIns. No luck. I feel like there must be a way of increasing the timeout, as this is something the devs at Microsoft must have encountered themselves.

    Read the article

  • Visual Studio 2010 and javascript debugging in external javascript files (embedded and minified).

    - by OKB
    Hi, The asp.net web application I'm working on is written in asp.net 3.5, the web app solution is upgraded from VS 2008 (don't know if that matter). The solution had javascript in the aspx files before I moved the javascript to external files. Now what I have done is to set all the javascript files to be embedded resource (except the jquery.js file) and I want to minify them when building for release by using the MS Ajax Minifier. I want to use the minified javascript files when I'm in the RELEASE mode and when I'm in DEBUG mode I want to use the "normal" versions. My problem now is that I'm unable to debug the javascript code in debug mode. When I set a break point a javascript function, VS is not breaking at all when the function is executed. I have added this entry in my web.config: <system.web> <compilation defaultLanguage="c#" debug="true" /> </system.web> Here how I register the jquery in an aspx-file: <asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server"> <Scripts> <asp:ScriptReference Path="~/Javascript/jquery.js"/> </Scripts> </asp:ScriptManagerProxy> External javascript registration in the code-behind: #if DEBUG [assembly: WebResource("braArkivWeb.Javascript.jquery.js", "text/javascript")] [assembly: WebResource(braArkivWeb.ArkivdelSearch.JavaScriptResource, "text/javascript")] #else [assembly: WebResource("braArkivWeb.Javascript.jquery.min.js", "text/javascript")] [assembly: WebResource(braArkivWeb.ArkivdelSearch.JavaScriptMinResource, "text/javascript")] #endif public partial class ArkivdelSearch : Page { public const string JavaScriptResource = "braArkivWeb.ArkivdelSearch.js"; public const string JavaScriptMinResource = "braArkivWeb.ArkivdelSearch.min.js"; protected void Page_Init(object sender, EventArgs e) { InitPageClientScript(); } private void InitPageClientScript() { #if DEBUG this.Page.ClientScript.RegisterClientScriptResource(typeof(ArkivdelSearch), "braArkivWeb.Javascript.jquery.js"); this.Page.ClientScript.RegisterClientScriptResource(typeof(ArkivdelSearch), JavaScriptResource); #else this.Page.ClientScript.RegisterClientScriptResource(typeof(ArkivdelSearch), "braArkivWeb.Javascript.jquery.min.js"); this.Page.ClientScript.RegisterClientScriptResource(typeof(ArkivdelSearch), JavaScriptMinResource); #endif StringBuilder sb = new StringBuilder(); Page.ClientScript.RegisterStartupScript(typeof(ArkivdelSearch), "initArkivdelSearch", sb.ToString(), true); } } In the project file I have added this code to minify the javascripts: <!-- Minify all JavaScript files that were embedded as resources --> <UsingTask TaskName="AjaxMin" AssemblyFile="$(MSBuildProjectDirectory)\..\..\SharedLib\AjaxMinTask.dll" /> <PropertyGroup> <ResGenDependsOn> MinifyJavaScript; $(ResGenDependsOn) </ResGenDependsOn> </PropertyGroup> <Target Name="MinifyJavaScript" Condition=" '$(ConfigurationName)'=='Release' "> <Copy SourceFiles="@(EmbeddedResource)" DestinationFolder="$(IntermediateOutputPath)" Condition="'%(Extension)'=='.js'"> <Output TaskParameter="DestinationFiles" ItemName="EmbeddedJavaScriptResource" /> </Copy> <AjaxMin JsSourceFiles="@(EmbeddedJavaScriptResource)" JsSourceExtensionPattern="\.js$" JsTargetExtension=".js" /> <ItemGroup> <EmbeddedResource Remove="@(EmbeddedResource)" Condition="'%(Extension)'=='.js'" /> <EmbeddedResource Include="@(EmbeddedJavaScriptResource)" /> <FileWrites Include="@(EmbeddedJavaScriptResource)" /> </ItemGroup> </Target> Do you see what I'm doing wrong? Or what I'm missing in order to be able to debug my javascript code? Best Regards, OKB

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application       Developers are “spoilt” persons who expect to be able to have easy debugging experiences for every technique they work with. So they also expect it when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself. Remote debugging prerequisites The prerequisite for these steps are to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information how to install this at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose for the option to run the remote debugger on the build server from a file share. Debugging symbols prerequisites To be able to start the debugging, you need to have the pdb files on the buildserver together with the assembly. The pdb must have been build with Full Debug Info. Steps In my setup I have a development machine and a build server. To setup the remote debugging, I performed the following steps Locate on your development machine the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger Create a share for the Remote Debugger folder. Make sure that the share (and the folder) has the correct permissions so the user on the build server has access to the share. On the build server go to the shared “Remote Debugger” folder Start msvsmon.exe which is located in the folder that represents the platform of the build server. This will open a winform application like   Go back to your development machine and open the BuildProcess solution. Start the Attach to process command (Ctrl+Alt+P) Type in the Qualifier the name of the build server. In my case the user account that has started the msvsmon is another user then the user on my development machine. In that case you have to type the qualifier in the format that is shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter> Since the build service is running with other credentials, check the option “Show processes from all users”. Now the Attach to process dialog shows the TFSBuildServiceHost process Set the breakpoint in the activity you want to debug and kick of a build. Be aware that when you attach to the TFSBuildServiceHost that you debug every single build that is run by this windows service, so make sure you don’t debug the build server that is in production! You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Visual Studio confused when there are multiple system.web sections in your web.config

    - by Jeff Widmer
    I am trying to start debugging in Visual Studio for the website I am currently working on but Visual Studio is telling me that I have to enable debugging in the web.config to continue: But I clearly have debugging enabled: At first I chose the option to Modify the Web.config file to enable debugging but then I started receiving the following exception on my site: HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid. Config section 'system.web/compilation' already defined. Sections must only appear once per config file. See the help topic <location> for exceptions   So what is going on here?  I already have debug=”true”, Visual Studio tells me I do not, and then when I give Visual Studio permission to fix the problem, I get a configuration error. Eventually I tracked it down to having two <system.web> sections. I had defined customErrors higher in the web.config: And then had a second system.web section with compilation debug=”true” further down in the web.config.  This is valid in the web.config and my site was not complaining but I guess Visual Studio does not know how to handle it and sees the first system.web, does not see the debug=”true” and thinks your site is not set up for debugging. To fix this so that Visual Studio was not going to complain, I removed the duplicate system.web declaration and moved the customErrors statement down.

    Read the article

  • Is there a way to simulate a remote AMF call with WCAT?

    - by Lieven Cardoen
    I managed to use WCAT to test a URL for an aspx page that fetches an asset. Now I would like to use it to test a remote call to WebORB from Flex. The problem is I don't know what to put in RequestData, RequestHeader, etc. If I look with Charles (an HTTP monitor), it's impossible to copy-paste the request data (it contains squares and all kinds of weird characters that can't be copy-pasted into my txt file).

    Read the article

  • Can I programmatically get hold of the Autos/local variables that are shown when debugging?

    - by Stefan
    I'm trying to build an error logger that logs the running values that are active in the function that caused the error (just for fun, so it's not a critical problem). When going into break mode and looking at the Locals and Autos tabs you can see all active variables (name, type and value); it would be useful to get hold of that for logging purposes when an error occurs, and on some other occasions. For my example, I just want to find all local variables of type string and integer and store their names and values. Is this possible with reflection? Any tips or pointers that get me closer to my goal would be much appreciated. I have toyed with using expressions on a specific object (a structure) to create an automapper against a dataset, but I have not done anything like what I ask for above, so please make me happy and say it's possible. Thanks.

    Read the article

  • Help with debugging COM errors? (.mdi to .pdf file conversions using Microsoft Office Document Imagi

    - by RyanW
    I thought I had a working solution for converting .mdi files to PDF using the Microsoft Office Document Imaging object model. The solution is in a Windows Service, but now I'm running into some errors that I'm having trouble tracking down info on. The exception I get is: The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT)) System.Runtime.InteropServices.COMException (0x80010105): The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT)) at MODI.DocumentClass.Create(String FileOpen) at DocumentStore.Mdi2PDF(String path, String newPath) Then, in the Event Viewer there is the following Application error: Faulting application MyWindowsServiceName.exe, version 1.0.0.0, time stamp 0x4b97f185, faulting module mso.dll, version 12.0.6425.1000, time stamp 0x49d65443, exception code 0xc0000005, fault offset 0x0000bd8e, process id 0xa5c, application start time 0x01cac08cf032914b. Here's the method that is doing the conversion: private int? Mdi2PDF(String path, String newPath) { int? pageCount = null; string tmpTif = Path.GetTempFileName(); MODI.Document mdiDoc = new MODI.Document(); mdiDoc.Create(path); mdiDoc.SaveAs(tmpTif, MODI.MiFILE_FORMAT.miFILE_FORMAT_TIFF_LOSSLESS, MODI.MiCOMP_LEVEL.miCOMP_LEVEL_HIGH); mdiDoc.Close(false); pageCount = Tiff2PDF(tmpTif, newPath); if (File.Exists(tmpTif)) File.Delete(tmpTif); return pageCount; } I removed all threading from the service invoking this, so that only the primary thread was initializing the MODI object, but still got the error, so it doesn't appear to be threading related. I also built a a console apps converting hundreds of documents and DID NOT get the exception. So, it seems to be caused by creating too many instances of the MODI object, but only instantiated within a Service? Doesn't quite make sense. Anybody have any clues about these errors and how to debug them further?

    Read the article

  • How to set NSZombieEnabled for debugging EXC_BAD_ACCESS on release target for an iPhone app?

    - by Bobby Moretti
    I'm developing an iPhone application. I have an EXC_BAD_ACCESS that occurs only in the release target; when I build the debug target the exception does not occur. However, when I set the NSZombieEnabled environment variable to YES, I still get the EXC_BAD_ACCESS with no further information. Is it even possible for NSZombieEnabled to work when executing the release target? I don't see why not, since gdb is running in both cases...

    Read the article
