Search Results

Search found 14544 results on 582 pages for 'ssh config'.


  • Cannot connect to Open WiFi hotspot created by Android

    - by Bibhas
    I'm trying to share my 3G data connection via a WiFi hotspot. I have an open hotspot running on my phone (Xperia Neo V - MT11i - Android 2.3.4), but I cannot connect to it from my Ubuntu system. Here is the syslog while I try to connect to it:

        NetworkManager[1077]: <info> Activation (wlan0) starting connection 'TheNeo'
        NetworkManager[1077]: <info> (wlan0): device state change: disconnected -> prepare (reason 'none') [30 40 0]
        NetworkManager[1077]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) scheduled...
        NetworkManager[1077]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) started...
        NetworkManager[1077]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) scheduled...
        NetworkManager[1077]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) complete.
        NetworkManager[1077]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) starting...
        NetworkManager[1077]: <info> (wlan0): device state change: prepare -> config (reason 'none') [40 50 0]
        NetworkManager[1077]: <info> Activation (wlan0/wireless): connection 'TheNeo' requires no security. No secrets needed.
        NetworkManager[1077]: <info> Config: added 'ssid' value 'TheNeo'
        NetworkManager[1077]: <info> Config: added 'scan_ssid' value '1'
        NetworkManager[1077]: <info> Config: added 'key_mgmt' value 'NONE'
        NetworkManager[1077]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) complete.
        NetworkManager[1077]: <info> Config: set interface ap_scan to 1
        NetworkManager[1077]: <info> (wlan0): supplicant interface state: inactive -> scanning
        wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
        NetworkManager[1077]: <info> (wlan0): supplicant interface state: scanning -> authenticating
        kernel: [17498.113553] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
        kernel: [17498.310138] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
        kernel: [17498.510069] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
        kernel: [17498.710083] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out

    The authenticate / direct probe / timed out cycle then repeats eight more times (timestamps 17504 through 17552) before NetworkManager gives up:

        NetworkManager[1077]: <warn> Activation (wlan0/wireless): association took too long, failing activation.
        NetworkManager[1077]: <info> (wlan0): device state change: config -> failed (reason 'supplicant-timeout') [50 120 11]
        NetworkManager[1077]: <warn> Activation (wlan0) failed for access point (TheNeo)
        NetworkManager[1077]: <warn> Activation (wlan0) failed.
        NetworkManager[1077]: <info> (wlan0): device state change: failed -> disconnected (reason 'none') [120 30 0]
        NetworkManager[1077]: <info> (wlan0): deactivating device (reason 'none') [0]
        NetworkManager[1077]: <info> (wlan0): supplicant interface state: authenticating -> disconnected
        NetworkManager[1077]: <warn> Couldn't disconnect supplicant interface: This interface is not connected.

    Why does the direct probe to 5c:b5:24:2f:d1:2f time out? Any idea?
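
    A quick way to take NetworkManager out of the equation is to associate by hand with wpa_supplicant. This is only a sketch; it assumes the interface is wlan0 and the nl80211 driver (some older chipsets need -Dwext instead):

        sudo service network-manager stop
        sudo ip link set wlan0 up
        sudo iw dev wlan0 scan | grep -B5 TheNeo                      # confirm the hotspot is actually visible
        printf 'network={\n    ssid="TheNeo"\n    key_mgmt=NONE\n}\n' > /tmp/open.conf
        sudo wpa_supplicant -Dnl80211 -iwlan0 -c/tmp/open.conf -d     # -d prints the association exchange
        sudo dhclient wlan0                                           # run in a second terminal once associated

    If the manual attempt shows the same probe timeouts, the problem is below NetworkManager (the wireless driver or the phone's soft-AP), not the connection profile.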


  • How can I autoclean my gnome main menu?

    - by Bruce Connor
    I like to experiment with lots of different software in my Ubuntu install. Then, every time Ubuntu reaches a new release cycle, I simply do a clean install (instead of upgrading) to get rid of all the extra software (and their respective config files/folders). The only things I always back up and carry over to the next install (besides personal files) are the config files for GNOME, so my desktop is always the way I like it. =) The problem with that is that the different packages I test out never get properly uninstalled, so my GNOME main menu is full of broken links referring to software I had in previous installations (which got carried over because I kept the GNOME config files). Is there any automated way to go through my GNOME main menu and remove any broken links? I know how to manually edit the menu, and I could go through it myself, but I'm looking for a script or package that will clean it for me so I don't have to do it manually every release cycle.
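
    I'm not aware of a stock tool that prunes stale launchers automatically, but a small script can at least report them. A sketch (it assumes each entry's Exec= line starts with the program name, i.e. no env wrappers, and that user-created entries live in ~/.local/share/applications):

        #!/bin/bash
        # List .desktop menu entries whose Exec target can no longer be found.
        for f in ~/.local/share/applications/*.desktop /usr/share/applications/*.desktop; do
            [ -e "$f" ] || continue
            exe=$(grep -m1 '^Exec=' "$f" | sed 's/^Exec=//; s/ .*//')   # first word of the Exec line
            [ -n "$exe" ] || continue
            if ! command -v "$exe" >/dev/null 2>&1; then
                echo "broken: $f (missing: $exe)"
                # mv "$f" ~/menu-quarantine/   # uncomment to move instead of just listing
            fi
        done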


  • How to package a file into a .deb?

    - by Fluffy
    I'm trying to make a simple .deb package which would basically edit a config file of another package that I listed as a dependency. I added the required manipulations to the postinstall file. The problem is that I can't find a way to package an example config, which should be copied and edited from the postinstall script. At the moment I just have a folder with the sample config, of which I create a tar.gz and orig.tar.gz, then run dh_make in that folder, edit the generated files and run debuild. However, if I open the resulting .deb file with an archive manager, I can see that the sample file was not included at all.
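
    dh_make only generates the packaging skeleton; nothing ships in the .deb unless a debian/ file says where it should go. A minimal sketch using debian/install (the file and package names here are made up):

        # inside the unpacked source directory, next to debian/
        echo "myconf.sample usr/share/mypackage/" > debian/install    # dh_install copies it at build time
        debuild -us -uc
        dpkg-deb -c ../mypackage_0.1-1_all.deb | grep myconf.sample   # confirm the file made it into the package

    The postinstall (postinst) script can then copy /usr/share/mypackage/myconf.sample to wherever the dependency expects it and apply the edits.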


  • How can IIS 7.5 have the error pages for a site reset to the default configuration?

    - by Sn3akyP3t3
    A mishap occurred while editing web.config to accommodate a sub-site. I made use of “<location path="." inheritInChildApplications="false">”; essentially it was a workaround for nested web.config files that were causing a conflict. The result was that error pages were no longer handled properly: error 500 was returned to the client for every type of error encountered. Removing the offending inheritInChildApplications tag from the root web.config restored most of the error handling, but for some reason, when a 503 occurs the response header is correct yet IIS performs the custom action configured for error 403.4, which is a redirect to HTTPS. I'm looking to restore the default error-page configuration so the standard behavior returns; I can then re-add my customizations to the error pages.


  • How to configure path or set environment variables for installation?

    - by Orr22
    I'm aiming to install APE (a simple code for pseudopotential generation) on Ubuntu 14.04 LTS. I get this error message while running ./configure:

        checking for gsl-config... no
        checking for GSL - version >= 1.0... no
        *** The gsl-config script installed by GSL could not be found
        *** If GSL was installed in PREFIX, make sure PREFIX/bin is in
        *** your path, or set the GSL_CONFIG environment variable to the
        *** full path to gsl-config.
        configure: error: could not find required gsl library

    I checked, and I already have GSL installed:

        :~/Programas/ape-2.2.0$ dpkg -l | grep gsl
        ii  libgsl0ldbl  1.16+dfsg-1ubuntu1  i386  GNU Scientific Library (GSL) -- library package

    So I have the library, but the program's installation isn't finding it. Any help? Thanks in advance.
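
    libgsl0ldbl is only the runtime shared library; the gsl-config script that ./configure looks for ships in the development package. On 14.04 something along these lines should work (package name assumed from that release):

        sudo apt-get install libgsl0-dev
        which gsl-config                         # should now report /usr/bin/gsl-config
        ./configure
        # or, for a GSL installed in a non-standard prefix:
        GSL_CONFIG=/opt/gsl/bin/gsl-config ./configure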



  • xubuntu 12.04 restarts after suspend - only from my account

    - by Yoav Aner
    After installing a clean Xubuntu 12.04 I noticed that when I suspend, the computer suspends and turns itself off (the lights go off and there is a click from the HD or fans), but about 2 seconds later it turns itself back on again... The odd things are:

    - It doesn't happen when booting from the live CD.
    - I created another user account. When I log onto that account I can suspend fine; the computer stays off until I press the power button.
    - When I remove my .config folder so it is recreated clean, I can also suspend without problems on my own account.

    So it seems that something in my user config is causing this, but I can't work out what it might be. I tried diffing the two .config folders, and also comparing all processes running under one account with the other (ps -ef | grep <username>), but couldn't find anything obvious that might be causing it...
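
    One brute-force way to narrow it down is to bisect ~/.config rather than diff it. A sketch (assumes pm-utils is installed, so suspend can be triggered from a terminal):

        cd ~ && mkdir -p config-quarantine
        # move half of the per-application config directories aside, then test a suspend:
        ls .config | head -n 15 | while read d; do mv ".config/$d" config-quarantine/; done
        sudo pm-suspend
        # if the machine now stays asleep, the culprit is in config-quarantine;
        # move directories back a few at a time until the spurious wake-up returns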


  • How to maintain different settings files in TFS

    - by aggietech
    I'm currently working on integrating the TFS source control system at my work, and I've run into one small problem: I need different versions of web.config (among other config files) for different branches, due to the environments we release the web application to. For example, I don't want to merge the web.config file every time, even though there are differences. Is there a good way to keep track of that (instead of manually diff-ing the files)? Thanks!


  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on an FC8 Amazon EC2 image. On occasion Xvfb will crash (unable at the moment to find out the reason for the crash), and after crashing the TCP port will appear to be orphaned. I'm unable to get a PID to kill any process that may be using it. I'm starting Xvfb with:

        Xvfb :7 -screen 0 1024x768x24 &

    Examples of what I'm working with are below; the Xvfb port is (was) 6007:

        # netstat -ap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address                Foreign Address              State        PID/Program name
        tcp        0      0 *:ssh                        *:*                          LISTEN       1894/sshd
        tcp        0      0 *:6007                       *:*                          LISTEN       -
        tcp        0    352 ip-10-84-69-165.ec2.int:ssh  c-71-194-253-238.hsd1:51689  ESTABLISHED  2981/0
        udp        0      0 *:bootpc                     *:*                                       1817/dhclient
        udp        0      0 *:bootpc                     *:*                                       1463/dhclient
        Active UNIX domain sockets (servers and established)
        Proto RefCnt Flags     Type    State      I-Node PID/Program name     Path
        unix  2      [ ]       DGRAM              871    668/udevd            @/org/kernel/udev/udevd
        unix  2      [ ACC ]   STREAM  LISTENING  5385   1880/dbus-daemon     /var/run/dbus/system_bus_socket
        unix  6      [ ]       DGRAM              5353   1867/rsyslogd        /dev/log
        unix  2      [ ]       DGRAM              11861  2981/0
        unix  2      [ ]       DGRAM              5461   1974/crond
        unix  2      [ ]       DGRAM              5451   1904/console-kit-da
        unix  3      [ ]       STREAM  CONNECTED  5438   1880/dbus-daemon     /var/run/dbus/system_bus_socket
        unix  3      [ ]       STREAM  CONNECTED  5437   1904/console-kit-da
        unix  3      [ ]       STREAM  CONNECTED  5396   1880/dbus-daemon
        unix  3      [ ]       STREAM  CONNECTED  5395   1880/dbus-daemon
        unix  2      [ ]       DGRAM              5361   1871/rklogd

        # lsof -i
        COMMAND   PID USER FD  TYPE DEVICE SIZE NODE NAME
        dhclient 1463 root  3u IPv4   4704      UDP  *:bootpc
        dhclient 1817 root  4u IPv4   5173      UDP  *:bootpc
        sshd     1894 root  3u IPv4   5414      TCP  *:ssh (LISTEN)
        sshd     2981 root  3u IPv4  11825      TCP  ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED)

    Attempting to force the port closed with iptables doesn't seem to work either:

        iptables -A INPUT -p tcp --dport 6007 -j DROP

    I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?
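
    A couple of things worth trying before resorting to a reboot; this is only a sketch and assumes root access on the instance. If nothing at all claims the socket, it is most likely held by the kernel on behalf of the crashed process, and only a reboot, or simply a different display number, gets around it:

        sudo lsof -nP -iTCP:6007 -sTCP:LISTEN      # look specifically at the listener on 6007
        sudo fuser -v -n tcp 6007                  # prints the PID holding the port, if there is one
        sudo fuser -k -n tcp 6007                  # kill that holder (use with care)
        Xvfb :8 -screen 0 1024x768x24 &            # or sidestep the fight entirely with a new display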


  • D-LINK 2540U DSL router: Port forwarding forwards to the modem itself, not the specified IP

    - by axk
    I found a similar question, but it has no satisfactory answers. I have a D-LINK 2540U DSL router. It has a basic port forwarding configuration (under DNS - Virtual Servers) in the administration panel where you specify the external port range, protocol, internal port range and server IP address, and it is supposed to forward that port to that IP address. When I first set it up for a Real VNC connection it worked fine, just as I expected. Then I added a DynDNS configuration entry in the router's 'Dynamic DNS' section and added an additional SSH (22) forwarding rule. The SSH forwarding also worked fine (now with the dynamic hostname, but I suppose that makes no difference as far as SSH is concerned). Then I removed the SSH rule, and after that the VNC forwarding stopped working, with the VNC client failing to connect (I tried to connect with telnet and it also failed, so it wasn't a VNC problem). After adding a rule for port 80, it turned out it would forward on port 80, though not to the specified server IP but to the modem itself. At least that is what it looks like, because it gives me the administration panel when I connect to my external IP (both using a browser and plain telnet, in which case I can see that it is mini_httpd sitting on the port, which is obviously the modem's administration panel). Has anybody encountered a similar problem with port forwarding? I have tried a reset through the administration panel and restoring a backup of the settings made before I started playing with port forwarding, but it didn't help. Should I do a 'hard' reset with the button on the modem? Is it any different from the administration panel's reset (Restore default)?
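
    One way to pin down who is actually answering is to probe the forwarded ports from a host outside your own network (a phone on 3G, a VPS, etc.), since testing from inside the LAN can hit the router's own services. A sketch; EXTERNAL_IP is a placeholder:

        curl -sI http://EXTERNAL_IP/ | head -n 5   # if the Server: header names the router's httpd, the router answered, not the LAN host
        nc -vz EXTERNAL_IP 5900                    # is the VNC port reachable from outside at all?
        nc -vz EXTERNAL_IP 22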


  • Usage of putty in command line from Hudson

    - by kij
    Hi, I'm trying to use PuTTY from the command line in a Hudson job. The command is the following one:

        putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build keeps running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. PuTTY) from a Hudson job? PS: I tried the SSH plugin, but it's not really a good plugin (pre/post build only, failure status of the launched commands not caught by Hudson, etc.). Thanks in advance for your help. Best regards, kij

    EDIT: These are the build logs:

        [workspace] $ cmd /c call C:\WINDOWS\TEMP\hudson7429256014041663539.bat
        C:\Hudson\jobs\Artifact deployer\workspace>putty -ssh -2 -P 22 USER@SERV_ADD -pw PASS -m com.txt
        Le build a été annulé
        Finished: ABORTED

    And the Hudson.err.log file at the same time (after a stop):

        3 juin 2010 18:27:28 hudson.model.Run run
        INFO: Artifact deployer #6 aborted
        java.lang.InterruptedException
            at java.lang.ProcessImpl.waitFor(Native Method)
            at hudson.Proc$LocalProc.join(Proc.java:179)
            at hudson.Launcher$ProcStarter.join(Launcher.java:278)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
            at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
            at hudson.model.Build$RunnerImpl.build(Build.java:174)
            at hudson.model.Build$RunnerImpl.doRun(Build.java:138)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416)
            at hudson.model.Run.run(Run.java:1241)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:124)

    My shell script only writes "hello" to a "hello.txt" file on the server, and nothing is done.
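
    For what it's worth, putty.exe is the GUI client, so when Hudson runs it as a service there is no desktop session for its window and the process just hangs until the build is killed. The command-line counterpart shipped with PuTTY is plink, which writes to stdout/stderr and returns an exit code the job can evaluate. A sketch with the same arguments (the server's host key must already be cached, because -batch suppresses interactive prompts):

        plink -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt -batch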


  • Connection refused after installing vsftp on Ubuntu 8.04 with fail2ban

    - by Patrick
    I have been using an Ubuntu 8.04 server with fail2ban for a while now (12+ months) and using FTP over SSH without any problems. I have a new user that needs to put files onto the server from an IP modem. I have installed vsftp (sudo apt-get install vsftp) and everything installed correctly. I have created an ftp user on the server following this guide. Whenever I try to connect to the server with my FTP program (FileZilla) I get an immediate response of:

        Connection attempt failed with "ECONNREFUSED - Connection refused by server".

    I have looked into fail2ban and cannot find any problems. The iptables setup is:

        Chain INPUT (policy ACCEPT)
        target        prot opt source    destination
        fail2ban-ssh  tcp  --  anywhere  anywhere     multiport dports ssh

        Chain FORWARD (policy ACCEPT)
        target        prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target        prot opt source    destination

        Chain fail2ban-ssh (1 references)
        target        prot opt source    destination
        RETURN        all  --  anywhere  anywhere

    vsftpd config file (commented lines removed):

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=[username]
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

    Any ideas on what is preventing access to the server?
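
    ECONNREFUSED usually means nothing is listening on port 21 rather than a firewall drop, and the iptables rules above only touch SSH. Some quick checks, as a sketch; note the Ubuntu package is normally named vsftpd, so it is worth confirming the right daemon actually got installed and started:

        ps aux | grep '[v]sftpd'             # is the daemon running at all?
        sudo netstat -tlnp | grep ':21 '     # is anything listening on the FTP port?
        tail -n 50 /var/log/syslog           # vsftpd logs a reason when it refuses to start (bad option, port already taken, ...)
        sudo /etc/init.d/vsftpd restart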


  • Host name change breaking http? Fedora

    - by Dave
    OK, so I have been messing around on my development server. It has been a while since I have had my head in Linux and I suspect I have broken something. I have SSH running and that is working fine. I also have HTTP, and I had FTP running also. Earlier today I decided I wanted to rename the machine, so I updated the /etc/hosts file and /etc/sysconfig/network. I also changed the server name in httpd.conf. I rebooted the machine and reconnected to SSH fine. Later I was messing around with the FTP service (trying to tighten up the user security), and when I tried to connect remotely to FTP, no joy: it said it cannot connect. I thought that was weird, but I had planned to remove FTP anyway as we will be using GitHub, so I removed FTP and moved on. Then I tried to connect to the website: major fail, even connecting to the IP address fails. I used lynx to connect to localhost and there was my site, so something is going on at the server level. I thought maybe something was up with iptables, but I have not changed them; I tried adding http but still no joy. The machine is:

        Fedora release 17 (Beefy Miracle)
        NAME=Fedora
        VERSION="17 (Beefy Miracle)"
        ID=fedora
        VERSION_ID=17
        PRETTY_NAME="Fedora 17 (Beefy Miracle)"
        ANSI_COLOR="0;34"
        CPE_NAME="cpe:/o:fedoraproject:fedora:17"
        Linux version 3.3.4-5.fc17.x86_64 ([email protected]) (gcc version 4.7.0 20120504 (Red Hat 4.7.0-4) (GCC) ) #1 SMP Mon May 7 17:29:34 UTC 2012

    This is my iptables -L:

        Chain INPUT (policy ACCEPT)
        target  prot opt source    destination
        ACCEPT  all  --  anywhere  anywhere     state RELATED,ESTABLISHED
        ACCEPT  icmp --  anywhere  anywhere
        ACCEPT  all  --  anywhere  anywhere
        ACCEPT  tcp  --  anywhere  anywhere     state NEW tcp dpt:ssh
        REJECT  all  --  anywhere  anywhere     reject-with icmp-host-prohibited

        Chain FORWARD (policy ACCEPT)
        target  prot opt source    destination
        REJECT  all  --  anywhere  anywhere     reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source    destination

    Like I say, I can use SSH with no issue, but HTTP, although running, is a no-go from a remote computer. Any ideas?
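
    Assuming the iptables listing above is current, the INPUT chain only accepts SSH explicitly and then ends in a blanket REJECT, so remote HTTP requests are bounced before they reach Apache (which fits lynx on localhost working, since the third ACCEPT rule is probably the loopback rule, even though -L without -v hides the interface). A sketch of opening port 80 and keeping it across reboots, assuming the classic iptables service rather than firewalld:

        sudo iptables -I INPUT 5 -p tcp --dport 80 -m state --state NEW -j ACCEPT   # insert above the final REJECT
        sudo iptables -L INPUT -n --line-numbers                                    # verify the ordering
        sudo sh -c 'iptables-save > /etc/sysconfig/iptables'                        # persist the rule set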


  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and it was suggested that I ask it here. We have a self-hosted git server (gitolite) on a VPS account (CPU: 2.68 GHz, RAM: 1824 MB). The same VPS is also used to publish our under-development web apps for client demos (very little traffic), so the main use of the server is as a git server only. This git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we frequently get this error message:

        ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
        fatal: The remote end hung up unexpectedly

    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also note that if I access other files hosted on the server through HTTP, it works fine. I found a couple of questions on Stack Overflow and on other sites regarding this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin SSH. However, I don't think this is the problem in our case, as we see this behavior on Windows (using msysgit only) as well as on Mac machines. Also, if it were an SSH configuration issue, it shouldn't work at all, but in our case it works after 10-15 minutes. I think in our case it might be too many simultaneous connections to the same server (or same repo) or something like that. Does there exist a setting or a conf file that needs to be modified to solve this problem? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.
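
    If the failures really do track the number of simultaneous users, one server-side knob worth checking is sshd's MaxStartups, which caps concurrent connections that have not yet authenticated (the stock limit is around 10) and makes sshd drop anything above it. A sketch, assuming OpenSSH on the VPS:

        grep -Ei 'maxstartups|maxsessions' /etc/ssh/sshd_config     # see what is configured now
        echo 'MaxStartups 50:30:200' | sudo tee -a /etc/ssh/sshd_config
        sudo /etc/init.d/ssh restart                                # the service may be named sshd, depending on the distro

    If raising it changes nothing, the VPS provider's connection limiting or a firewall between the clients and the server is the next suspect.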


  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another. The program should be able to do authentication, but it should not do encryption. The reason behind the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The single server is an HP ProLiant DL360 G7, with a 10 Gbps Ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400 MB/sec to the storage (that's about what I want), but through ssh with arcfour I can only do ~100 MB/sec while having 100% CPU usage. That's why I want file transfers not to be encrypted. The alternatives that I found are not really suitable:

    - rcp: no authentication, forget it.
    - FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a method to force any FTP daemon to encrypt the control channel (for the authentication) while not encrypting the data channel (for data transfers).
    - SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption. The best you can do is use the arcfour cipher for the encryption, but it still uses too much CPU power for my needs.
    - rsync over ssh: same problems as with SCP/SFTP.
    - plain rsync: from the documentation of rsyncd: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go.

    Is there a protocol/program that can do exactly what I want? (A big plus would be if it could work on Windows as well and/or if it would support rsync-style copying/synchronization, e.g. copy only the differences.)
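
    For what it's worth, if the MD4 challenge-response in rsyncd turns out to be acceptable after all (it avoids sending the password in the clear, leaves the data stream unencrypted, and gives delta transfers), the daemon-mode setup is small. A sketch with made-up module and user names:

        # on the backup server: /etc/rsyncd.conf
        #   [backups]
        #       path = /srv/backups
        #       read only = false
        #       auth users = backupuser
        #       secrets file = /etc/rsyncd.secrets
        sudo sh -c 'echo "backupuser:longrandompassword" > /etc/rsyncd.secrets && chmod 600 /etc/rsyncd.secrets'
        sudo rsync --daemon

        # on each client (the password file contains only the password):
        rsync -a --password-file=/etc/rsyncd.pass /data/ backupuser@backupserver::backups/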


  • PHP / SSH2 Multi-threading

    - by Asad Moeen
    I'm basically done using SSH2 with PHP. Some may already know that, while using it, the PHP code waits for all the listed commands to be executed over SSH, and only when everything is done does it give back the results. That is fine for the work I am doing, but I need some commands to be multi-threaded:

        $cmd = "MyCommand";
        echo $ssh->exec($cmd);

    I just want this to run in parallel two times. I googled some stuff but didn't get anywhere with it. As a basic approach, I came across this way posted by someone, but it didn't work out for me:

        for ($i = 0; $i < 2; $i += 1) {
            // this executes test_1.php in the background and moves to the next iteration
            // immediately, without waiting for the script to finish; $i is passed as an argument
            exec("php test_1.php $i > test.txt &");
        }

    I tried to put it this way in the loop:

        exec("echo $ssh->exec($cmd) $i > test.txt &");

    but either it never entered the loop or the echo of $ssh->exec failed. I don't really need very neat multi-threading; even a single second of delay would be fine. Thank you.


  • Cloning a git repository from a machine running OS X

    - by Mike
    Hi folks, I'm trying to host a git repository from my home OS X machine, and I'm stuck on the last step of cloning the repository from a remote system. Here's what I've done so far:

    1. On the OS X (10.6.6) machine (heretofore dubbed the "server") I created a new admin user.
    2. Logged into the new user's account.
    3. Installed git.
    4. Created an empty git repository via "git init".
    5. Turned on remote login.
    6. Set port mapping on my router (AirPort Extreme) to send ssh traffic to the server.
    7. Added a ".ssh" directory to the user's home directory.
    8. From the remote machine (also an OS X 10.6.6 machine), I sent that machine's public key to the server using scp and the login credentials of the user created in step 1.
    9. To test that the server would use the remote machine's public key, I ssh'd to the server using the username of the user created in step 1 and indeed was able to connect successfully without being asked for a password.
    10. I installed git on the remote machine.
    11. From the remote machine I attempted to "git clone ssh://[email protected]:myrepo" (where "user", "my.server.address", and "myrepo" are all replaced by the actual username, server address and repo folder name, respectively).

    However, every time I try the command in step 11, I get asked to confirm the server's RSA fingerprint, then I'm asked for a password, but the password for the user I set up for that machine never works. Any advice on how to make this work would be greatly appreciated!
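
    One detail that jumps out is the clone URL itself: in git's ssh:// form, whatever follows the colon after the host is parsed as a port number, so ssh://user@host:myrepo will not do what the scp-style user@host:myrepo shorthand does. Both of these are sketches with placeholder names:

        # scp-style shorthand: the colon separates host and path (relative to the login user's home)
        git clone user@my.server.address:myrepo

        # ssh:// form: the colon is reserved for a port, so spell out the path instead
        git clone ssh://user@my.server.address/~/myrepo
        git clone ssh://user@my.server.address:22/~/myrepo

    Running ssh -v user@my.server.address from the remote machine also shows which identities are offered, which helps explain an unexpected password prompt.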


  • Install multiport module on iptables

    - by tarteauxfraises
    I'm trying to install fail2ban on Cubidebian, a Debian port for the Cubieboard (a Raspberry Pi-like board). The following rule fails because of the "-m multiport --dports ssh" options (it works when I run the command manually without the multiport options):

        $ iptables -I INPUT -p tcp -m multiport --dports ssh -j fail2ban-ssh
        iptables: No chain/target/match by that name.

    When I cat /proc/net/ip_tables_matches, I see that the multiport module is not loaded:

        $ cat /proc/net/ip_tables_matches
        u32
        time
        string
        statistic
        state
        owner
        pkttype
        mac
        limit
        helper
        connmark
        mark
        ah
        icmp
        socket
        socket
        quota2
        policy
        length
        iprange
        ttl
        hashlimit
        ecn
        udplite
        udp
        tcp

    The result of the iptables -L -n -v command:

        $ iptables -L -n -v
        Chain INPUT (policy ACCEPT 6 packets, 456 bytes)
         pkts bytes target  prot opt in  out  source     destination

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source     destination

        Chain OUTPUT (policy ACCEPT 3 packets, 396 bytes)
         pkts bytes target  prot opt in  out  source     destination

        Chain fail2ban-apache (0 references)
         pkts bytes target  prot opt in  out  source     destination
            0     0 RETURN  all  --  *   *    0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-ssh (0 references)
         pkts bytes target  prot opt in  out  source     destination
            0     0 RETURN  all  --  *   *    0.0.0.0/0  0.0.0.0/0

    What can I do to compile or enable the multiport module? Thanks in advance for your help.
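
    The match lives in its own kernel module (xt_multiport), so the question is whether the Cubieboard kernel was built with it. A few checks, as a sketch (the /proc/config.gz check only works if the kernel exposes its config):

        sudo modprobe xt_multiport && echo "multiport loaded"
        find /lib/modules/$(uname -r) -name 'xt_multiport*'    # is the module shipped at all?
        zcat /proc/config.gz | grep -i multiport               # want CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y or =m
        # until the module is available, a workaround is to drop multiport from the fail2ban action:
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-ssh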


  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines. Why use DCOM? In GlassFish 3.1 SSH is used as the standard way to run commands on remote nodes for clustering.  It is very difficult for users to get SSH configured properly on Windows.  SSH does not come with Windows so we have to depend on third party tools.  And then the user is forced to install and configure these tools -- which can be tricky. DCOM is available on all supported platforms.  It is built-in to Windows. The idea is to use DCOM to communicate with remote Windows nodes.  This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes. Implementation HighlightsTwo open Source Libraries have been added to GlassFish: Jcifs – a SAMBA implementation in Java J-interop – A Java implementation for making DCOM calls to remote Windows computers.   Note that any supported platform can use DCOM to work with Windows nodes -- not just Windows.E.g. you can have a Linux DAS work with Windows remote instances.All existing SSH commands now have a corresponding DCOM command – except for setup-ssh which isn’t needed for DCOM.  validate-dcom is an all new command. New DCOM Commands create-node-dcom delete-node-dcom install-node-dcom list-nodes-dcom ping-node-dcom uninstall-node-dcom update-node-dcom validate-dcom setup-local-dcom (This is only available via Update Center for GlassFish 3.1.2) These commands are in-place in the trunk (4.0).  And in the branch (3.1.2) Windows Configuration Challenges There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service-pack, special drivers, software, configuration etc.  Later versions of Windows err on the side of tightening security be default.  This means that the Windows host may need to have configuration changes made.These configuration changes mostly need to be made by the user.  setup-local-dcom will assist you in making required changes to the Windows Registry.  See the reference blogs for details. The validate-dcom Command validate-dcom is a crucial command.  It should be run before any other commands.  If it does not run successfully then there is no point in running other commands.The validate-dcom command must be used from a DAS machine to test a different Windows machine.  If  validate-dcom runs successfully you can be confident that all the DCOM commands will work.  Conversely, the opposite is also true:  If validate-dcom fails, then no DCOM commands will work. What validate-dcom does Verify that the remote host is not the local machine. Resolves the remote host name Checks that the remote DCOM port is being listened on (135, 139) Checks that the remote host’s File Sharing is enabled (port 445) It copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct It runs a script that it copied on-the-fly to the remote host. Tips and Tricks The bread and butter commands that use DCOM are existing commands like create-instance, start-instance etc.   All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command.  This means that you can track these commands easily on the remote machine with the usual tools.  E.g. using AS_LOGFILE, looking at log files, etc.  It’s easy to attach a debugger to the remote asadmin process, “just in time”, if necessary. 
How to debug the remote commands:Edit the asadmin.bat file that is in the glassfish/bin folder.  Use glassfish/lib/nadmin.bat in GlassFish 4.0+Add these options to the java call:-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234  Now if you run, say start-instance on DAS, you can attach your debugger, at your leisure, to the remote machines port 1234.  It will be running start-local-instance and patiently waiting for you to attach.


  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l. Response.RedirectPermanent ASP.NET 4.0 includes features to issue a permanent redirect that issues as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect respond. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and Status code properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly in ASP.NET 4.0, you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serverAutoStartEnabled on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.  The configuration settings need to be made in applicationhost.config: <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders> Hooking up a warm up provider is optional so you can omit the provider definition and reference. If you do define it here's what it looks like: public class PreWarmupProvider System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } } This code fires and while it's running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag. 
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  


  • WCF REST Service Activation Errors when AspNetCompatibility is enabled

    - by Rick Strahl
    I’m struggling with an interesting problem with WCF REST since last night and I haven’t been able to track this down. I have a WCF REST Service set up and when accessing the .SVC file it crashes with a version mismatch for System.ServiceModel: Server Error in '/AspNetClient' Application. Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.] System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0 System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) +95 System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) +54 System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65 System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +69 System.Web.Configuration.HandlerFactoryCache.GetTypeWithAssert(String type) +38 System.Web.Configuration.HandlerFactoryCache.GetHandlerType(String type) +13 System.Web.Configuration.HandlerFactoryCache..ctor(String type) +19 System.Web.HttpApplication.GetFactory(String type) +81 System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +223 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184 Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1 What’s really odd about this is that it crashes only if it runs inside of IIS (it works fine in Cassini) and only if ASP.NET Compatibility is enabled in web.config:<serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> Arrrgh!!!!! After some experimenting and some help from Glenn Block and his team mates I was able to track down the problem in ApplicationHost.config. Specifically the problem was that there were multiple *.svc mappings in the ApplicationHost.Config file and the older 2.0 runtime specific versions weren’t marked for the proper runtime. Because these handlers show up at the top of the list they execute first resulting in assembly load errors for the wrong version assembly. To fix this problem I ended up making a couple changes in applicationhost.config. 
On the machine level root’s Handler mappings I had an entry that looked like this:<add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode" /> and it needs to be changed to this:<add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode,runtimeVersionv2.0" />Notice the explicit runtime version assignment in the preCondition attribute which is key to keep ASP.NET 4.0 from executing that handler. The key here is that the runtime version needs to be set explicitly so that the various *.svc handlers don’t fire only in the order defined which in case of a .NET 4.0 app with the original setting would result in an incompatible version of System.ComponentModel to load.What was really hard to track this down is that even when looking in the debugger when launching the Web app, the AppDomain assembly loads showed System.ServiceModel V4.0 starting up just fine. Apparently the ASP.NET runtime load occurs at a different point and that’s when things break.So how did this break? According to the Microsoft folks it’s some older tools that got installed that change the default service handlers. There’s a blog entry that points at this problem with more detail:http://blogs.iis.net/webtopics/archive/2010/04/28/system-typeloadexception-for-system-servicemodel-activation-httpmodule-in-asp-net-4.aspxNote that I tried running aspnet_regiis and that did not fix the problem for me. I had to manually change the entries in applicationhost.config.   © Rick Strahl, West Wind Technologies, 2005-2011Posted in AJAX   ASP.NET  WCF  


  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed URL, IIS would just throw up a 404 error. This is an IIS error, not an ASP.NET error, so it doesn’t actually come from ASP.NET’s routing engine but from IIS’s handling of extensionless URLs. Note that it’s clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to parse the extensionless URL itself, failing, and never handing it off to ASP.NET. As I mentioned, on my local machine this all worked fine, and to make sure local and live setups matched I re-copied my Web.config, double-checked handler mappings in IIS and re-copied the actual application assemblies to the server. It all looked exactly matched. However, no workey on the server with IIS 7.0! Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true: <system.webServer> <modules runAllManagedModulesForAllRequests="true"> <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" /> </modules> </system.webServer> And lo and behold, Routing started working on the live server and IIS 7.0! This seems really obvious now of course, but the really tricky thing is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set, while on IIS 7.0 on my live server the same missing setting broke Routing. On IIS 7.0 this key must be present or Routing will not work. Oddly, on IIS 7.5 it appears that you can’t even turn off the behavior – setting runAllManagedModulesForAllRequests="false" had no effect at all and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected. Kind of disappointing too that Windows Server 2008 (R1) can’t be upgraded to IIS 7.5. It sure seems like that should have been possible, since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like, with the release of IIS Express, Microsoft has taken some steps to untie some of those tight OS links from IIS. Let’s hope that’s the case for the future – it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out. Moral of the story – never assume that your dev setup will work as is on the live setup. It took me forever to figure this out because I assumed that, since my web.config on the local machine was fine and I had copied all relevant web.config data to the server, it couldn’t be the configuration settings. I looked everywhere but in the .config file before getting desperate and remembering the flag when I accidentally checked the IntelliSense options on the modules key. Never assume anything. The other moral is: try to keep your dev machine and server OSes in sync whenever possible. Maybe it’s time to upgrade to Windows Server 2008 R2 after all. More info on extensionless URLs in IIS: want to find out exactly how extensionless URLs work on IIS 7? Then check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article – a great linked reference for this topic! © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows
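    For context, a minimal sketch (not from Rick's post; the route and page names are hypothetical) of the kind of ASP.NET 4.0 route registration whose extensionless URLs fall through to the StaticFile handler on IIS 7.0 unless the flag above is set:

      using System;
      using System.Web;
      using System.Web.Routing;

      public class Global : HttpApplication
      {
          protected void Application_Start(object sender, EventArgs e)
          {
              // A request to the extensionless URL /products/42 is what IIS 7.0
              // hands to the StaticFile handler (and 404s) unless
              // runAllManagedModulesForAllRequests="true" is present.
              RouteTable.Routes.MapPageRoute(
                  "ProductRoute",            // hypothetical route name
                  "products/{id}",           // extensionless URL pattern
                  "~/ProductDetails.aspx");  // hypothetical Web Forms page
          }
      }

    With the modules flag set as shown in the excerpt, the same request reaches this route on IIS 7.0 just as it does on IIS 7.5.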

    Read the article

  • The type 'XXX' is defined in an assembly that is not referenced exception after upgrade to ASP.NET 4

    - by imran_ku07
    Introduction: I found two posts in the ASP.NET MVC forums complaining that they are getting the exception "The type XXX is defined in an assembly that is not referenced" after upgrading their application to Visual Studio 2010 and .NET Framework 4.0, here and here. Description: The reason they are getting the above exception is the new, clean web.config, which does not reference the assemblies that were present in the ASP.NET 3.5 web.config. The quick solution for this problem is to add the old assemblies to the new web.config: <assemblies> <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> </assemblies> How it works: I have not yet tested the above scenario in ASP.NET 4.0 because I do not have it yet, but it can easily be tested and verified in VS 2008. Just open the root web.config and remove <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>. Even if you add a reference to System.Core in your project, you will still get the above exception, because aspx pages are compiled into a separate assembly. You can check this yourself via Show Detailed Compiler Output in the yellow screen of death, where you will find something like /out:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\e907aee4\5fa0acc8\App_Web_y5rd6bdg.dll" This shows that aspx pages are compiled into a separate assembly under Temporary ASP.NET Files. Summary: After getting the above exception, make sure to add the assemblies in web.config or add the Assembly directive at page level (see the sketch below). Hopefully this helps to solve your problem.
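    For the page-level alternative mentioned in the summary, a hedged sketch (the page itself is hypothetical; only the @ Assembly directive is the point) of referencing one of the missing assemblies from a single aspx page instead of web.config:

      <%@ Page Language="C#" %>
      <%@ Assembly Name="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" %>
      <%-- The @ Assembly directive above makes the assembly available to this
           page's dynamic compilation only; the rest of the page is omitted. --%>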

    Read the article

  • Using T4 to generate Configuration classes

    - by Justin Hoffman
    I wanted to try to use T4 to read a web.config and generate all of the appSettings and connectionStrings as properties of a class. I elected in this template only to output appSettings and connectionStrings, but you can see it would be easily adapted for app-specific settings, bindings etc. This allows for quick access to config values as well as removing the potential for typos when accessing values from the ConfigurationManager. One caveat: a developer would need to remember to run the .tt file after adding an entry to the web.config. However, one would quickly notice when trying to access the property from the generated class (it wouldn't be there). Additionally, there are other options as noted here. The first step was to create the .tt file. Note that this is a basic example; it could be extended even further, I'm sure. In this example I just manually input the path to the web.config file. <#@ template debug="false" hostspecific="true" language="C#" #><#@ output extension=".cs" #><#@ assembly name="System.Configuration" #><#@ assembly name="System.Xml" #><#@ assembly name="System.Xml.Linq" #><#@ assembly name="System.Net" #><#@ assembly name="System" #><#@ import namespace="System.Configuration" #><#@ import namespace="System.Xml" #><#@ import namespace="System.Net" #><#@ import namespace="Microsoft.VisualStudio.TextTemplating" #><#@ import namespace="System.Xml.Linq" #>using System;using System.Configuration;using System.Xml;using System.Xml.Linq;using System.Linq;namespace MyProject.Web { public partial class Configurator { <# var xDocument = XDocument.Load(@"G:\MySolution\MyProject\Web.config"); var results = xDocument.Descendants("appSettings"); const string key = "key"; const string name = "name"; foreach (var xElement in results.Descendants()) {#> public string <#= xElement.Attribute(key).Value#>{get {return ConfigurationManager.AppSettings[<#= string.Format("{0}{1}{2}","\"" , xElement.Attribute(key).Value, "\"")#>];}} <#}#> <# var connectionStrings = xDocument.Descendants("connectionStrings"); foreach(var connString in connectionStrings.Descendants()) {#> public string <#= connString.Attribute(name).Value#>{get {return ConfigurationManager.ConnectionStrings[<#= string.Format("{0}{1}{2}","\"" , connString.Attribute(name).Value, "\"")#>].ConnectionString;}} <#} #> }} The resulting .cs file: using System;using System.Configuration;using System.Xml;using System.Xml.Linq;using System.Linq;namespace MyProject.Web { public partial class Configurator { public string ClientValidationEnabled{get {return ConfigurationManager.AppSettings["ClientValidationEnabled"];}} public string UnobtrusiveJavaScriptEnabled{get {return ConfigurationManager.AppSettings["UnobtrusiveJavaScriptEnabled"];}} public string ServiceUri{get {return ConfigurationManager.AppSettings["ServiceUri"];}} public string TestConnection{get {return ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString;}} public string SecondTestConnection{get {return ConfigurationManager.ConnectionStrings["SecondTestConnection"].ConnectionString;}} }} Next, I extended the partial class for easy access to the Configuration. However, you could just use the generated class file itself. 
using System;using System.Linq;using System.Xml.Linq;namespace MyProject.Web{ public partial class Configurator { private static readonly Configurator Instance = new Configurator(); public static Configurator For { get { return Instance; } } }} Finally, in my example, I used the Configurator class like so: [TestMethod] public void Test_Web_Config() { var result = Configurator.For.ServiceUri; Assert.AreEqual(result, "http://localhost:30237/Service1/"); }
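    For reference, a hedged sketch of the web.config entries the template would read to produce the Configurator shown above (the keys and connection-string names come from the generated output; the values, apart from ServiceUri, are invented):

      <appSettings>
        <add key="ClientValidationEnabled" value="true" />
        <add key="UnobtrusiveJavaScriptEnabled" value="true" />
        <add key="ServiceUri" value="http://localhost:30237/Service1/" />
      </appSettings>
      <connectionStrings>
        <add name="TestConnection" connectionString="Data Source=.;Initial Catalog=Test;Integrated Security=True" />
        <add name="SecondTestConnection" connectionString="Data Source=.;Initial Catalog=Test2;Integrated Security=True" />
      </connectionStrings>

    Each <add> element's key (or name) attribute becomes a property on the generated class, which is why a mistyped setting name surfaces as a compile error instead of a null from ConfigurationManager at runtime.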

    Read the article

  • Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

    - by Max Williams
    ADMISSION: I've posted this same question on Stack Overflow before realising it's probably better suited to Super User, but it kind of depends on the answer: if it turns out to be a problem in my nginx config, it's definitely Super User; if it turns out to be a problem in my Rails config (or code) then it's arguably Stack Overflow. I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except the javascript and css files all fail to load: looking in the Network tab in Chrome's dev tools, I can see that it is trying to load them via an https url. E.g., one of the non-working file urls is https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216 I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for css and js files. If I click on this in the Network tab, it takes me to the above url, which redirects to the http version. So the redirect seems to be working in some sense, but not when the files are loaded by an https page. Like I say, I thought I had this covered in the second try_files directive in my config below, but maybe not. Can anyone see what I'm doing wrong? Thanks, Max. Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block: server { listen 443 ssl; keepalive_timeout 70; ssl_certificate /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt; ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk; root /home/max/work/charanga/elearn_container/elearn; # ensure that we serve css, js, other statics when requested # as SSL, but if the files don't exist (i.e. 
any non /basket controller) # then redirect to the non-https version location / { try_files $uri @non-ssl-redirect; } # securely serve everything under /basket (/basket/checkout etc) # we need general too, because of the email/username checking location ~ ^/(basket|general|cmw/account/check_username_availability) { # make sure cached copies are revalidated once they're stale add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # this serves Rails static files that exist without running # other rewrite tests try_files $uri @rails-ssl; expires 1h; } location @non-ssl-redirect { return 301 http://$host$request_uri; } location @rails-ssl { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 180; proxy_next_upstream off; proxy_pass http://127.0.0.1:3000; expires 0d; } } #upstream elrs { # server 127.0.0.1:3000; #} server { listen 80; server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk; root /home/max/work/charanga/elearn_container/elearn; access_log /home/max/work/charanga/elearn_container/elearn/log/access.log; error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug; client_max_body_size 50M; index index.html index.htm; # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6 (i.e. those *without* SV1 in their user-agent string) gzip on; gzip_http_version 1.1; gzip_vary on; gzip_comp_level 6; gzip_proxied any; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html # make sure gzip does not lose large gzipped js or css files # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl gzip_buffers 16 8k; # Disable gzip for certain browsers. #gzip_disable "MSIE [1-6].(?!.*SV1)"; gzip_disable "MSIE [1-6]"; # blank gif like it's 1995 location = /images/blank.gif { empty_gif; } # don't serve files beginning with dots location ~ /\. 
{ access_log off; log_not_found off; deny all; } # we don't care if these are missing location = /robots.txt { log_not_found off; } location = /favicon.ico { log_not_found off; } location ~ affiliate.xml { log_not_found off; } location ~ copyright.xml { log_not_found off; } # convert urls with multiple slashes to a single / if ($request ~ /+ ) { rewrite ^(/)+(.*) /$2 break; } # X-Accel-Redirect # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead location /zips { internal; root /var/www/apps/e_learning_resource/shared/assets; } location /tmp { internal; root /; } location /mnt{ root /; } # resource library thumbnails should be served as usual location ~ ^/resource_library/.*/*thumbnail.jpg$ { if (!-f $request_filename) { rewrite ^(.*)$ /images/no-thumb.png break; } expires 1m; } # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here location ~ "lesson viewer.dcr" { rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break; } # we need this rule so we don't serve the older lessonviewer when the rule below is matched location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf { rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break; } location ~ v6lessonViewer.swf { rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break; } location ~ lessonViewer.swf { rewrite ^(.*)$ /assets/players/lessonViewer.swf break; } location ~ lgn111.dat { empty_gif; } # try to get autocomplete school names from memcache first, then # fallback to rails when we can't location /schools/autocomplete { set $memcached_key $uri?q=$arg_q; memcached_pass 127.0.0.1:11211; default_type text/html; error_page 404 =200 @rails; # 404 not really! Hand off to rails } location / { # make sure cached copies are revalidated once they're stale add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # this serves Rails static files that exist without running other rewrite tests try_files $uri @rails; expires 1h; } location @rails { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 180; proxy_next_upstream off; proxy_pass http://127.0.0.1:3000; expires 0d; } }
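    For readability, the two blocks from the ssl server above that the question hinges on, restated with explanatory comments (this is just the poster's own config, not a proposed fix):

      # Anything not matched by the /basket|general|... location is looked up
      # on disk first, then handed to @non-ssl-redirect.
      location / {
          try_files $uri @non-ssl-redirect;
      }

      # Every such request - including /stylesheets/... and /javascripts/... -
      # is 301-redirected back to the plain-http site, which is the redirect
      # seen when clicking the css/js URLs in the Network tab.
      location @non-ssl-redirect {
          return 301 http://$host$request_uri;
      }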

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >