Search Results

Search found 6376 results on 256 pages for 'stream wrapper'.


  • How do I troubleshoot nginx not recognizing passenger?

    - by Jade
    Issue: nginx does not seem to recognize my Rails application.

    Symptoms: when the server starts up, it shows the "Welcome to nginx!" message instead of my Rails application. Nginx seems to be using the default nginx root instead of the Rails root I specified:

        2010/04/18 06:29:06 [error] 783#0: *1 "/usr/local/nginx/html/blog/index.html" is not found (2: No such file or directory), client: 1.2.3.4, server: www.farmerjade.com, request: "GET /blog/ HTTP/1.1", host: "www.farmerjade.com"

    I used the guide "RVM and Passenger Setup on NGINX" to install nginx and Passenger on a virtual machine. Here is my nginx configuration:

        user farmerjade;
        worker_processes 1;
        ...
        http {
            include mime.types;
            default_type application/octet-stream;
            passenger_ruby /home/farmerjade/.rvm/bin/passenger_ruby;
            passenger_root /home/farmerjade/.rvm/gems/ree-1.8.7-head/gems/passenger-2.2.11;
            ...
            server {
                listen 80;
                server_name www.farmerjade.com;
                root /home/farmerjade/farmerjade/public;
                passenger_enabled on;
                rails_env development;
                ...

    I'd appreciate any help anyone has to offer -- I'm quite new to nginx.
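
    A "Welcome to nginx!" page served out of /usr/local/nginx/html usually means the running nginx binary is not reading the configuration file that was edited. A quick first check (a troubleshooting sketch; the /opt/nginx path is an assumption, since the Passenger guide may have installed to a different prefix):

        # Which config file was this binary compiled to use?
        nginx -V 2>&1 | tr ' ' '\n' | grep -E 'prefix|conf-path'

        # Validate the intended config explicitly, then reload with it
        sudo nginx -t -c /opt/nginx/conf/nginx.conf
        sudo nginx -s reload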

  • How to download video from a website that uses flash player but

    - by TPR
    Possible Duplicate: Download Flash video file from any video site?

    Livestream.com seems to be using a Flash player to show both live streams and archived/recorded streams (meaning previously shown streams). I want to download the archived streams; I am assuming it should be much easier to download archived video than the live stream. Here is a sample video (I am not interested in this particular video, just an example): http://www.livestream.com/copanamericana/video?clipId=pla_6f9f4d97-e48f-4b04-bcaa-18e281341b0f&utm_source=lslibrary&utm_medium=ui-thumb Firefox plugins like DownloadHelper do not work. Any suggestions? If I look at the browser cache, no matter what the website plays, all the cached files have the same size! If I open them, of course no video gets played. So something clever/funny is going on with the Flash player on livestream.com (yes, even for the archived videos), so it is definitely not the same as downloading videos from YouTube. However, ads played on livestream.com videos are properly stored in the browser cache.
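
    Identically sized cache files that won't play suggest the video is not a progressive HTTP download at all but a streaming protocol such as RTMP, which never lands in the cache as a playable file -- that would also explain why cache-scraping plugins fail while ordinary HTTP-served ads cache normally. If that guess is right (an assumption; the real stream URL would have to be dug out of the player's traffic with a sniffer), a stream-capture tool like rtmpdump is the usual route; the URL below is a placeholder, not a real stream address:

        rtmpdump -r "rtmp://example.livestream.com/path/to/stream" -o clip.flv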

  • server_name seems to be ignored in nginx

    - by user46171
    I have two domains set up in nginx.conf. Both are using SSL with their own certificates, and both proxy to Apache. However, the second domain is completely ignored, and nginx always resolves to the first domain. I can't see what the issue is with this configuration, having set the server_name in each case correctly (as far as I can see):

        http {
            include mime.types;
            default_type application/octet-stream;
            keepalive_timeout 65;

            upstream site {
                # real IP addresses masked
                server xx.xxx.x.xxx;
                server xx.xxx.x.xxx;
            }

            server {
                # this domain always works
                listen 443;
                server_name *.first-site.com;
                ssl on;
                ssl_certificate /var/ssl/first-site.crt;
                ssl_certificate_key /var/ssl/first-site.key;

                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }

            server {
                # this domain is ignored, always resolves to first-site.com
                listen 443;
                server_name *.second-site.com;
                ssl on;
                ssl_certificate /var/ssl/second-site.crt;
                ssl_certificate_key /var/ssl/second-site.key;

                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }
        }
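
    With two HTTPS server blocks on the same IP and port, nginx can only pick the right one if it sees the requested hostname inside the TLS handshake, i.e. if both the nginx build and the client support SNI; without SNI, every connection on 443 goes to the first (default) server block, which matches this symptom exactly. Both sides can be checked with standard tools:

        # The build info should say "TLS SNI support enabled"
        nginx -V

        # Compare the certificate served without and with SNI
        openssl s_client -connect www.second-site.com:443
        openssl s_client -connect www.second-site.com:443 -servername www.second-site.com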

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling nginx and the Rails application. The application itself uses about 185976 KB of memory per instance (RSS). Our traffic is < 1000 visits per day and the pages are mostly cached, so there are fewer hits to the Rails app itself. My question is: how can I calculate optimal nginx config settings for my app? Below is the current config:

        worker_processes 1;

        # pid of nginx master process
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

            include mime.types;
            default_type application/octet-stream;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }

    Any advice would be appreciated! :)
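
    A rule-of-thumb sketch rather than tuned numbers: worker_processes is normally set to the number of CPU cores, and on a Passenger setup the real memory ceiling is the application pool rather than worker_connections, since each Rails instance costs ~180 MB while an nginx worker is tiny by comparison:

        # cores available -> a reasonable worker_processes value
        grep -c ^processor /proc/cpuinfo

        # rough Passenger ceiling for a 2048 MB box:
        #   (2048 MB - ~300 MB for OS + nginx + headroom) / ~182 MB per instance ~= 9
        # so e.g. passenger_max_pool_size 8; (a Passenger 3 directive) is a safe cap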

  • What differences are there between "home" switches and "professional" switches?

    - by pjreddie
    Our radio station uses a PtP wireless system to stream our radio and TV signals from our studio up a hill to our transmitter. We have been having problems with warbly sound and dropouts that come from some point in this system. An engineer who occasionally visits the station thinks it could be the switches we use on each side of the PtP wireless system to connect the PtP devices to the encoders and decoders, and wants us to get two of these switches: http://www.amazon.com/Netgear-JGS516-ProSafe-16-Port-Ethernet/dp/B0002CWPOK/ref=dp_return_1 The encoder/decoder setup only streams 8 Mbps total, so it seems like the switches we have should not be stressed, unless they are adding enough latency to degrade the performance of the encoder/decoder. At each end of the connection we only have 4 connections; is there any reason we couldn't get a cheaper, "home" quality switch like this: http://www.amazon.com/D-Link-DGS-1005G-5-Port-Gigabit-Desktop/dp/tech-data/B003X7TRWE/ref=de_a_smtd Is there a significant difference that we would notice in terms of latency between these two switches? How much does the quality of the switch actually matter in this scenario? Any help is appreciated; feel free to ask questions if anything needs clarification. Thanks

  • Cannot access new folders created in my Apache2 document root

    - by user235101
    I have tried to create a new folder 'test' in the document root of my Apache2 installation; however, whenever I try to access it from a web browser it gives me a 403 (Forbidden) error. My virtual hosts file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName REMOVED
            DocumentRoot /var/www/

            <Directory />
                Options FollowSymLinks
                AllowOverride All
                AuthType Digest
                AuthName "documentroot"
                AuthDigestProvider file
                AuthUserFile /etc/apache2/htpasswd
                Require user REMOVED
                AllowOverride Indexes
            </Directory>

            <Directory /var/www/>
                Options FollowSymLinks
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            <Directory /var/www/share/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/REMOVED/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/stream/>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            <Directory /var/www/test>
                Order Deny,Allow
                Allow from all
                Satisfy any
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            <Directory /var/www/REMOVED>
                AuthType Digest
                AuthName "rutorrent"
                AuthDigestDomain /var/www/REMOVED/ http://46.105.127.19/REMOVED
                AuthDigestProvider file
                AuthUserFile /etc/apache2/htpasswd
                Require valid-user
                SetEnv R_ENV "/var/www/REMOVED"
                AllowOverride Indexes
            </Directory>
        </VirtualHost>

    [Image of the permissions]

    Other information: if I create a new folder (and use chmod --reference to ensure it has the same permissions as an accessible folder), I get a 403 client-side. If I copy the folder 'rapidleech' to the name 'rapidleech1', I can access 'rapidleech1' but no longer 'rapidleech', until I delete the copy. I found nothing in error.log, and access.log only shows that a 403 was delivered. All the appropriate users are members of www-data.
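
    Two things worth ruling out before touching the Apache config (a checklist sketch; the www-data ownership target is the Debian/Ubuntu default and may not match this setup): the Apache user needs traversal (execute) permission on every path component, and a directory with Options -Indexes and no index file returns 403 by design.

        # Check every path component for traversal (x) permission
        namei -l /var/www/test

        # Typical fix on Debian/Ubuntu if ownership is wrong
        sudo chown -R www-data:www-data /var/www/test
        sudo chmod 755 /var/www/test

        # Rule out the missing-index case
        touch /var/www/test/index.html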

  • How to set up Windows 7 Professional as a NAS

    - by Enyalius
    I searched and didn't find any answers, so please forgive me if this is a repeat. Anyway, I have an older computer that I'm using as an HTPC, and I was hoping I could use it as a NAS/multimedia server as well. My primary uses would include accessing content on my PS3 (same LAN), accessing content from other computers on my home network, and (if I can) accessing content from my Android phone over the internet. I have used SubSonic to stream music to my Android phone and other computers before, but I would really like to find a way to do this natively if possible. I know I can buy external hard disk cases that plug into the USB port of my router, or get a Drobo or another network storage solution, but I would really rather not spend the money (especially considering that I already have a computer that I should be able to use). Hardware involved:

        Apple AirPort Extreme base station router (most recent revision)
        Home theater PC: Core 2 Duo @ 2.4 GHz, 8 GB DDR2 RAM, ~3.5 TB hard drive space
        Sony PlayStation 3 Slim, 120 GB
        HTC Thunderbolt (I have 4G coverage), rooted and running Android 2.2.1
        Various Apple laptops
        Various Windows 7 desktops/laptops

    Thanks in advance! Note: I have looked at open-source NAS software, but I would like to preserve the Windows Media Center functionality in Windows 7, so other NAS software is not an option for me currently.
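
    For the LAN side, Windows 7 covers this natively, so no extra NAS software is needed (a sketch of the built-in options): DLNA streaming to the PS3 is the "Turn on media streaming" option in the Network and Sharing Center (Windows Media Player acts as the server, which also leaves Media Center intact), and file access for the laptops and desktops is an ordinary SMB share, e.g. (the path is hypothetical):

        net share Media=C:\HTPC\Media /grant:Everyone,READ

    The phone-over-the-internet part is what Windows doesn't do out of the box; that still calls for something like SubSonic or a VPN into the home LAN.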

  • Dropbox picture sync: Skip RAW files?

    - by Steven Lu
    I like the convenience of having Dropbox keep track of my photos because it tends to work with my devices over 3G (I am often tethering to my phone with my iPad and Macbook) as well as Wifi, but it's a waste of network traffic to sync the raw files from my camera or memory card. It clutters up the dropbox list and the files are just huge. Is there a way to configure the Dropbox client so that it ignores a certain file extension for the picture sync? Also, I suspect that if I just go and delete the raw files, that the next time I plug in the memory card and tell Dropbox to sync, it will re-download the raw files. Which would be terribad. I could switch to iCloud for Photo Stream, I suppose, but there will be no access via 3G that way. And I've already got years of experience with Dropbox so I know it's going to just work. I think any method that works for filtering files to exclude from sync on Dropbox in general should work here too. Edit: Wow there are 19k votes for this exact request.
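
    Dropbox's selective sync works per folder, not per file extension, so there is no built-in filter for this (true of the classic desktop client; an assumption that newer builds haven't changed it). A workaround is to sweep RAW files out of the synced folder into a local-only one before the client uploads them -- a minimal sketch, with hypothetical paths:

        import shutil
        from pathlib import Path

        SYNCED = Path.home() / "Dropbox" / "Camera Uploads"    # hypothetical location
        LOCAL_ONLY = Path.home() / "Pictures" / "raw-staging"  # outside Dropbox
        RAW_EXTS = {".cr2", ".nef", ".arw", ".dng", ".raf"}

        LOCAL_ONLY.mkdir(parents=True, exist_ok=True)
        for f in SYNCED.iterdir():
            # Move anything with a RAW extension before the client syncs it
            if f.is_file() and f.suffix.lower() in RAW_EXTS:
                shutil.move(str(f), str(LOCAL_ONLY / f.name))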

  • Can Windows Media Player create playlists based on folder structure?

    - by Chaulky
    Over the years I've carefully molded my digital media collection into a series of folders that make it easy for me to find what I'm looking for. I recently discovered the awesomeness that is streaming video from Windows 7 Media Player to the PS3 so I can watch it on the big screen without all the hassle of hooking the computer up to the TV. The problem is, I totally lose my carefully crafted folder structure and all my videos become one giant mess again... not cool! As a temporary solution, I've created a few playlists for my favorites (Dexter Season 4, Dexter Season 5, Breaking Bad Season 1, etc.). This is a HUGE pain in the a$$. So, is there a way to get Windows Media Player (on Windows 7) to maintain some sort of folder structure based on the location of the actual video files? So if I have my videos sorted into folders by show and season, Media Player will pick that up and let me browse it in the same way. As an alternative answer, I'll accept suggestions for a program that can also stream to PS3 and has this "folder organization" feature.
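
    WMP's library is tag-driven, so folder structure is mostly invisible to it -- but since WPL playlists are just XML files, one playlist per folder can be generated automatically and gives the same browsing effect without hand-building each one. A rough sketch (the .wpl structure below is the minimal form I believe WMP accepts; try it on a copy of one folder first):

        import os
        from xml.sax.saxutils import escape

        VIDEO_ROOT = r"C:\Videos"          # hypothetical library root
        EXTS = (".avi", ".mkv", ".mp4", ".wmv")

        for folder, _, files in os.walk(VIDEO_ROOT):
            vids = [f for f in files if f.lower().endswith(EXTS)]
            if not vids:
                continue
            # e.g. "Dexter - Season 4" from Dexter\Season 4
            name = os.path.relpath(folder, VIDEO_ROOT).replace(os.sep, " - ")
            items = "\n".join(
                '      <media src="%s"/>'
                % escape(os.path.join(folder, f), {'"': "&quot;"})
                for f in sorted(vids))
            wpl = ('<?wpl version="1.0"?>\n<smil>\n'
                   '  <head><title>%s</title></head>\n'
                   '  <body>\n    <seq>\n%s\n    </seq>\n  </body>\n</smil>\n'
                   % (escape(name), items))
            with open(os.path.join(folder, name + ".wpl"), "w") as out:
                out.write(wpl)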

  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer in my network to another computer on the same network over a 100 MBit/s line. For this I did

        dd if=/local/path of=/remote/path/in/local/network/backup.img

    which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly to make the amount to transfer smaller. So I did

        dd if=/local/folder | gzip > /remote/path/in/local/network/backup.img.gz

    But now I get something like 1 MB/s network transfer speed, so a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files and it was always the same. Why does piping dd through gzip also increase the transfer rate by a large factor, instead of only reducing the byte length of the stream by a large factor? I'd have expected even a small decrease in transfer rates, due to the higher CPU consumption while compressing, but now I get a double plus. Not that I'm not happy, but I'm just wondering. ;)
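
    A likely factor (an assumption about this setup, but consistent with the symptoms): without a bs= option, dd copies in 512-byte blocks, and on a network filesystem each tiny write can become its own round trip, while the gzip pipe decouples dd's reads from the output and delivers it in larger chunks. Whether block size alone explains the difference can be tested without gzip:

        # Same copy, but with 1 MiB blocks -- if this is fast too,
        # the 512-byte default was the bottleneck, not the data volume
        dd if=/local/path of=/remote/path/in/local/network/backup.img bs=1M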

  • MSMQ on Win2008 R2 won't receive messages from older clients

    - by Graffen
    I'm battling a really weird problem here. I have a Windows 2008 R2 server with Message Queueing installed. On another machine, running Windows 2003 is a service that is set up to send messages to a public queue on the 2008 server. However, messages never show up on the server. I've written a small console app that just sends a "Hello World" message to a test queue on the 2008 machine. Running this app on XP or 2003 results in absolutely nothing. However, when I try running the app on my Windows 7 machine, a message is delivered just fine. I've been through all sorts of security settings, disabled firewalls on all machines etc. The event log shows nothing of interest, and no exceptions are being thrown on the clients. Running a packet sniffer (WireShark) on the server reveals only a little. When trying to send a message from XP or 2003 I only see an ICMP error "Port Unreachable" on port 3527 (which I gather is an MQPing packet?). After that, silence. Wireshark shows a nice little stream of packets when I try from my Win7 client (as expected - messages get delivered just fine from Win7). I've enabled MSMQ End2End logging on the server, but only entries from the messages sent from my Win7 machine are appearing in the log. So somehow it seems that messages are being dropped silently somewhere along the route from XP or 2003 to my 2008 server. Does anyone have any clues as to what might be causing this mysterious behaviour? -- Jesper
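
    Since Win7-to-2008 works and XP/2003-to-2008 dies right after the MQPing ICMP error, one early check is whether the older clients can reach MSMQ's delivery port at all, and whether the 2008 R2 firewall has its inbound MSMQ rules enabled (the port numbers below are the standard MSMQ assignments; this is a diagnostic sketch, not a guaranteed cause):

        rem From the XP/2003 client: MSMQ delivers messages over TCP 1801
        telnet server2008 1801

        rem On the 2008 R2 server: list firewall rules mentioning MSMQ
        netsh advfirewall firewall show rule name=all | findstr /i msmq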

  • Windows 7 Sharing issue on RAID 5 Array(s)

    - by K.A.I.N
    Greetings all, I'm having a very odd error with a Windows 7 Ultimate x64 system. The network setup is as follows:

        2x XP Pro 32-bit machines
        1x Vista Ultimate x64 machine
        2x Windows 7 Ultimate x64 machines

    all chained into one 16-port Netgear ProSafe gigabit switch; the Windows 7 and Vista machines are duplexed. There is also a router (Netgear RangeMax) chained off the switch. I am basically using one of the Windows 7 machines to host storage and stream media to the other machines. To this end I have put two 3 TB hardware RAID 5 arrays in it, plus assorted other spare disks, and I have shared the roots of all of them. The unusual problem: I get "Access denied, please contact administrator for permission" (blah blah blah) when trying to access both of the RAID 5 arrays, but not the other standalone drives. I have checked the permission settings, I have added Everyone to the read permission for the root, and I have tried moving things into subdirectories and then sharing them. I have tried various setting combinations in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, always with the same result. I have tried flushing caches all round, and disabling and re-enabling shares (and sharing again after a restart), as well as several other things, and the result is always the same: no problem on the individual drives, but access denied on both RAID arrays from the XP, Vista, and Windows 7 machines. One interesting quirk that may lead to an answer: when you select the RAID 5s from a Windows 7 machine there is no "offline status" information for the folders, yet there is on the normal drives, which say they are online. It is as if the RAID is present but turned off or spun down, but as far as I was aware Windows will spin an array back up on a network request, and on the machine itself the drives seem to be online and can be accessed. Have to admit this has me stumped. Any suggestions, anyone? Thanks in advance for any fellow geek assistance. K.A.I.N
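
    When shares on some volumes work and others return Access Denied for everyone, comparing the effective NTFS ACLs of a working root and a failing root is usually faster than toggling share settings (standard Windows tools; the drive letters below are placeholders):

        rem Compare NTFS ACLs of a working drive and a RAID root
        icacls D:\
        icacls R:\

        rem Grant read/execute to Everyone on the RAID root if it is missing
        icacls R:\ /grant "Everyone:(OI)(CI)RX"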

  • How to change the MAC address in Win 8 to spoof a Roku Player through a WiFi splash page?

    - by luser droog
    My Linux laptop died yesterday, and now I can't watch TV. Let me explain. I use a Roku player to stream Netflix shows to my television, and a year or two ago the Internet service provided in my apartment complex added a splash page to get through the router and onto the net. After not too many days, I remembered that Internet devices identify themselves with a MAC address. So I delved into the manpage of ifconfig and discovered that I could persuade my laptop to pretend to be the Roku player, connect, click through the splash page, disconnect, and change it back. This would allow the Roku to connect for about 24 hours, after which I would have to do it again. But the laptop died yesterday during my smoke break, so during lunch I ran to OfficeMax and got a new one. But I don't know where to begin looking for where to change the MAC address (assuming it's possible). I know I can try dual-boot, or a keychain OS, or possibly other things to resurrect my old method. But is it possible to get Windows to do it?
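
    Windows does support this for many adapters (a sketch; whether it takes effect depends on the NIC driver): the override lives in the adapter's "Network Address" property, reachable via Device Manager (adapter Properties -> Advanced -> Network Address) or directly in the registry under the network adapter class key. Two caveats I believe apply: the instance subkey (0007 below) varies per adapter, and many Wi-Fi drivers only accept a spoofed address whose second hex digit is 2, 6, A, or E.

        rem 0007 is a placeholder -- find the subkey whose DriverDesc matches the Wi-Fi adapter
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E11E-11CE-BFC1-08002BE10318}\0007" /v NetworkAddress /d 02AABBCCDDEE /f

        rem Then disable and re-enable the adapter for the change to take effect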

  • Allow connections to only a specific URL via HTTPS with iptables, -m recent (potentially) and -m string (definitely)

    - by The Consumer
    Hello. Let's say that, for example, I want to allow connections only to subdomain.mydomain.com. I have it partially working, but it sometimes gets into a freaky loop with the client key exchange once the Client Hello is allowed. Ah, and to make it even more annoying: it's a self-signed certificate, the page requires authentication, and HTTPS is listening on a non-standard port, so the TCP/SSL handshake experience will differ greatly for many users. Is -m recent the right route? Is there a more graceful method to allow the complete TCP stream once the string is seen? Here's what I have so far:

        #iptables -N SSL
        #iptables -A INPUT -i eth0 -p tcp -j SSL
        #iptables -A SSL -m recent --set -p tcp --syn --dport 400
        #iptables -A SSL -m recent --update -p tcp --tcp-flags PSH,SYN,ACK SYN,ACK --sport 400
        #iptables -A SSL -m recent --update -p tcp --tcp-flags PSH,SYN,ACK ACK --dport 400
        #iptables -A SSL -m recent --remove -p tcp --tcp-flags PSH,ACK PSH,ACK --dport 400 -m string --algo kmp --string "subdomain.mydomain.com" -j ACCEPT

    Yes, I have tried to get around this with nginx tweaks, but I can't get nginx to return a 444 or an abrupt disconnect before the Client Hello. If you can think of a way to achieve that instead, I'm all ears -- err, eyes. (As suggested by a user, bringing this inquiry over from http://stackoverflow.com/questions/4628157/allow-connections-to-only-a-specific-url-via-https-with-iptables-m-recent-pote)
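
    -m recent tracks per-source state, which is awkward for "allow the rest of this flow"; connection marks are the more natural fit. A sketch under two assumptions -- the kernel has the connmark/CONNMARK modules, and the hostname actually appears in cleartext in the ClientHello (e.g. via SNI):

        # Mark the whole connection once the hostname is seen in the handshake
        iptables -A SSL -p tcp --dport 400 -m string --algo kmp --string "subdomain.mydomain.com" -j CONNMARK --set-mark 1

        # Let the handshake begin so the ClientHello can arrive at all
        iptables -A SSL -p tcp --syn --dport 400 -j ACCEPT

        # Accept every later packet of a marked connection, drop the rest
        iptables -A SSL -m connmark --mark 1 -j ACCEPT
        iptables -A SSL -p tcp --dport 400 -j DROP

        # Note: the empty final ACK of the 3-way handshake is unmarked and gets
        # dropped by the last rule; TCP tolerates this because the ClientHello
        # itself carries the ACK flag and completes the handshake.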

  • Cisco FWSM -> ASA upgrade broke our mail server

    - by Mike Pennington
    We send mail with Unicode Asian characters to our mail server on the other side of our WAN. Immediately after upgrading from an FWSM running 2.3(2) to an ASA5550 running 8.2(5), we saw failures on mail jobs that contained Unicode. The symptoms are pretty clear: using the ASA's packet capture utility, we snagged the traffic before and after it left the ASA...

        access-list PCAP line 1 extended permit tcp any host 192.0.2.25 eq 25
        capture pcap_inside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface inside
        capture pcap_outside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface WAN

    I downloaded the pcaps from the ASA by going to https://<fw_addr>/pcap_inside/pcap and https://<fw_addr>/pcap_outside/pcap. When I looked at them with Wireshark's Follow TCP Stream, the inside traffic going into the ASA looks like this:

        EHLO metabike
        AUTH LOGIN
        YzFwbUlciXNlck==
        cZUplCVyXzRw

    But the same mail leaving the ASA on the outside interface looks like this:

        EHLO metabike
        AUTH LOGIN
        YzFwbUlciXNlck==
        XXXXXXXXXXXX

    The XXXX characters are concerning... I fixed the issue by disabling ESMTP inspection:

        wan-fw1(config)# policy-map global_policy
        wan-fw1(config-pmap)# class inspection_default
        wan-fw1(config-pmap-c)# no inspect esmtp
        wan-fw1(config-pmap-c)# end

    The $5 question: our old FWSM used SMTP fixup without issues, and mail went down at the exact moment we brought the new ASAs online. What specifically is different about the ASA that it is now breaking this mail? Note: usernames / passwords / app names were changed... don't bother trying to Base64-decode this text.

  • Using multiple USB webcams in Linux

    - by rachelderp
    Running more than one USB webcam in Debian/Linux results in the following error:

        libv4l2: error turning on stream: No space left on device
        VIDIOC_STREAMON: No space left on device

    What initially seemed to be a programming issue in OpenCV turned into a quest for a mysterious hardware/software problem after the same errors were produced by running cheese and xawtv. Apparently it's caused by webcams requesting all the available bandwidth on the USB host controller. With that in mind, I decided to run wireshark and capinfos to see just how much bandwidth a single camera used:

        4 megabits per second at 320x240
        14 megabits per second at 640x480
        32 megabits per second at 1920x1080

    Interesting! That might explain why two cameras at 320x240 work but any higher resolution fails. It's as if my USB controller is only operating at USB 1 speeds, yet lsusb shows both webcams belonging to a device which supposedly supports 480 megabits per second. One proposed solution was to force the webcams to calculate their bandwidth usage instead of requesting their maximum, by running the following commands:

        sudo rmmod uvcvideo
        sudo modprobe uvcvideo quirks=128

    Unfortunately that made no difference, so I decided to try another solution. A post on Stack Overflow suggested telling my webcams to use a lower FPS or a compressed video format like MJPEG, but after running v4lctl list it doesn't appear that either of my webcams supports changing its video mode. And that's where I'm stuck. Why would two webcams operating well below the maximum speed of USB 2 produce this error?

    ps: It's not a disk space issue; df displays no change when the webcams are started.
    pps: If it makes a difference, here's the output of lsusb
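
    Isochronous bandwidth is reserved per host controller, not per port, and uncompressed UVC cameras reserve their worst-case per-frame bandwidth up front rather than their measured average -- so which cameras share a controller matters more than the combined bitrate wireshark reports. The topology and the negotiated speed of each device can be checked directly (standard usbutils):

        # Tree view: one line per bus (= host controller), with speeds;
        # "480M" means the device really enumerated at USB 2 high speed
        lsusb -t

        # If both cameras hang off the same bus, moving one to a port on a
        # different bus (often the ports on the other side of the machine,
        # or an add-in USB card) gives it its own bandwidth budget.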

  • How to disable my laptop's defective keyboard?

    - by Kolink
    I have an Alienware M17x R2, and after a couple years of use I started having problems with the keyboard: Intermittently, and without warning or apparent cause (even if I'm not touching the computer), the S key (and more recently the D key) will start firing incessant signals until I hit the offending key. I've been using a USB keyboard, which is conveniently exactly the right size to fit over the built-in keyboard without touching any buttons. However, even as I type the defective keyboard will fire its stream of signals. I have contacted Dell about this, twice in fact. The first time, I was just bluntly told that they don't ship replacement keyboards. The second time I was told that they did, but they were out of stock and they would contact me when they were in stock again. They haven't contacted me since. I can't seem to work out which device to disable in the device manager. Here's a screenshot of what I have: None of the keyboard devices have "Disable" in their context menus, only "Uninstall"... so you'll understand if I'm hesitant to try anything myself. Any advice on how to fix this?
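
    On most laptops the built-in keyboard is a PS/2 device driven by i8042prt, which is why Device Manager offers no Disable entry for it. Preventing that driver from starting disables the internal keyboard while leaving USB keyboards (handled by kbdhid) alone -- a known but blunt approach, so have the USB keyboard plugged in before rebooting:

        rem Record the current start type first, in an elevated prompt
        sc qc i8042prt

        rem Disable the PS/2 keyboard driver, then reboot
        sc config i8042prt start= disabled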

  • Windows Server 2003 setup

    - by Barracksbuilder
    I work at a university maintaining the computer science department server, and I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server is running a domain, though no students physically log into a terminal on it (no computers are part of my domain). Creating the accounts is a manual process. I did write a PHP script to query the university's AD, copy the information, and write it to my AD. I then have to create, basically, the user's home directory. I tried having AD do it, but since the user never physically logs in, it never creates the directory. Permissions on this folder are set to: User - full, Instructors (group) - full, Users (group) - read, IUSR - read. Inside the user's folder there is a "Private" folder with permissions: User - full, Instructors (group) - full. Next step is IIS: I create a virtual directory in the default web site pointed to the user's home directory, so they have a website. The same goes for an FTP virtual directory in the default FTP configuration, to allow the users to upload files to their website. MySQL: I have to create a user and password, then create a MySQL schema (database) with full access for the user and full access for the instructors' account, so instructors can reach the students' databases. All of this is done manually and takes me a week. The closest description is maybe a shared hosting environment. Is there a better way to do this -- scripting-wise, or a better structural setup?
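
    Every step in that list has a scriptable CLI on Server 2003, so the whole provisioning pass can collapse into one batch file fed a username and password (a sketch; the OU, paths, and group names are placeholders for this domain's real values):

        @echo off
        set U=%1
        set P=%2

        rem 1. AD account (dsadd ships with Server 2003)
        dsadd user "CN=%U%,OU=Students,DC=cs,DC=example,DC=edu" -samid %U% -pwd %P%

        rem 2. Home directory + NTFS ACLs
        mkdir D:\students\%U%\Private
        cacls D:\students\%U% /E /G %U%:F Instructors:F Users:R IUSR:R
        cacls D:\students\%U%\Private /E /G %U%:F Instructors:F

        rem 3. IIS and FTP virtual directories (IIS 6 admin scripts)
        cscript %windir%\system32\iisvdir.vbs /create "Default Web Site" %U% D:\students\%U%
        cscript %windir%\system32\iisftpdr.vbs /create "Default FTP Site" %U% D:\students\%U%

        rem 4. MySQL user + schema (%% escapes a literal % in a batch file)
        mysql -u root -p -e "CREATE DATABASE %U%; GRANT ALL ON %U%.* TO '%U%'@'%%' IDENTIFIED BY '%P%';"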

  • Configuring nginx for slow connections to avoid corrupted downloads

    - by user1850273
    We have a Windows 2003 server running nginx 1.3.8. Our problem is users with slow connections, around 10 kB/s. Our server serves our program's update files, and when these users download from our server, the downloaded file is incomplete or corrupted. (Users cannot download the file with a download manager, and the problem is in IE.) For example, on a slow connection a 25 MB file finishes after only 2 MB have been downloaded. On high-speed connections there is no problem. Also, when we redirect these slow connections to another port, e.g. 50005, with the same config, the download is much better, but still not as good as on other servers. What config should we apply to avoid these download stops or corrupted downloads on slow connections? This is our server config:

        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent '
                            '"$http_user_agent"';

            access_log logs/access.log main;

            sendfile off;
            keepalive_timeout 60;

            server {
                listen 80;
                server_name localhost;

                location / {
                    root html;
                    deny 127.0.0.3;
                    index index.html index.htm;
                }
            }

            server_tokens off;
        }

    Our server uses htaccess password accounting, so we cannot use IIS on Windows. Which solution do you think is better: IIS with an extension to use Apache htaccess, or Apache for Windows instead of nginx? Thank you.
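
    One nginx knob matches this symptom directly (a sketch, not a verified fix for this server): send_timeout is the allowed gap between two successive writes to the client, and when a slow client stalls longer than that, nginx closes the connection mid-file -- which looks exactly like a 25 MB download truncated at 2 MB. Raising it is cheap to try:

        http {
            # gap allowed between two writes to a slow client (default 60s)
            send_timeout 600s;

            # keep slow keep-alive clients around longer
            keepalive_timeout 120s;

            # smaller output buffers smooth writes to very slow links
            output_buffers 2 32k;
        }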

  • "No such file or directory"?

    - by user1509541
    Ok, so I have a VDS laying around, and I thought I would turn it into a TF2 game server. I connect to my server through PuTTY and use wget to download the package "hldsupdatetool.bin" from steampowered.com. When I go to run it, it says "No such file or directory". When I use ls to see what files are in the directory, it lists hldsupdatetool.bin as being there. So why is it saying it's not there? This has been a headache for the past 2 days. It's returning:

        root@10004:~# wget http://www.steampowered.com/download/hldsupdatetool.bin
        --2012-07-08 06:04:49--  http://www.steampowered.com/download/hldsupdatetool.bin
        Resolving www.steampowered.com... 208.64.202.68
        Connecting to www.steampowered.com|208.64.202.68|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 3513408 (3.4M) [application/octet-stream]
        Saving to: “hldsupdatetool.bin.3”

        100%[======================================>] 3,513,408   2.45M/s   in 1.4s

        2012-07-08 06:04:51 (2.45 MB/s) - “hldsupdatetool.bin.3” saved [3513408/3513408]

        root@10004:~# chmod +x hldsupdatetool.bin.3
        root@10004:~# ./hldsupdatetool.bin.3
        -bash: ./hldsupdatetool.bin.3: No such file or directory
        root@10004:~#

    More:

        root@10004:~# ls
        ffmpeg-packages  hldsupdatetool.bin.1  hldsupdatetool.bin.3
        hldsupdatetool.bin  hldsupdatetool.bin.2  setup.sh
        root@10004:~# ls -la
        total 13828
        drwx------  4 root root    4096 Jul  8 06:04 .
        drwxr-xr-x 21 root root    4096 Jul  8 05:57 ..
        -rw-------  1 root root    8799 Jul  8 06:26 .bash_history
        -rw-r--r--  1 root root     570 Jan 31  2010 .bashrc
        -rw-r--r--  1 root root       4 Jul  2 19:39 .custombuild
        drwxr-xr-x  2 root root    4096 Jul  4 18:49 ffmpeg-packages
        ---x--xrwx  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin
        -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.1
        -rw-r--r--  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.2
        -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.3
        -rw-r--r--  1 root root     140 Nov 19  2007 .profile
        -rw-------  1 root root    1024 Jul  2 19:49 .rnd
        -rwxr-xr-x  1 root root   38866 May 23 22:02 setup.sh
        drwxr-xr-x  2 root root    4096 Jul  2 19:44 .ssh
        root@10004:~#
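
    "No such file or directory" for a file that plainly exists is the classic signature of a 32-bit executable on a 64-bit system with no 32-bit loader installed: the "missing file" is the ELF interpreter (/lib/ld-linux.so.2), not the binary itself. Checking and fixing that looks like this (the package names are the Debian/Ubuntu ones of that era; other distros differ):

        # Confirm it's a 32-bit executable on a 64-bit kernel
        file hldsupdatetool.bin.3
        uname -m

        # Install 32-bit runtime support
        apt-get install ia32-libs lib32gcc1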

  • Spring Boot JAR as a Windows service

    - by roblovelock
    I am trying to wrap a Spring Boot "uber JAR" with procrun. Running the following works as expected:

        java -jar my.jar

    I need my Spring Boot JAR to automatically start on Windows boot. The nicest solution would be to run the JAR as a service (same as a standalone Tomcat). When I try to run this, I get "Commons Daemon procrun failed with exit value: 3". Looking at the spring-boot source, it looks as if it uses a custom classloader: https://github.com/spring-projects/spring-boot/blob/master/spring-boot-tools/spring-boot-loader/src/main/java/org/springframework/boot/loader/JarLauncher.java

    I also get a ClassNotFoundException when trying to run my main method directly:

        java -cp my.jar my.MainClass

    Is there a method I can use to run my main method in a Spring Boot JAR (not via JarLauncher)? Has anyone successfully integrated Spring Boot with procrun? I am aware of http://wrapper.tanukisoftware.com/; however, due to their licence I can't use it.

    UPDATE: I have now managed to start the service using procrun:

        set SERVICE_NAME=MyService
        set BASE_DIR=C:\MyService\Path
        set PR_INSTALL=%BASE_DIR%prunsrv.exe

        REM Service log configuration
        set PR_LOGPREFIX=%SERVICE_NAME%
        set PR_LOGPATH=%BASE_DIR%
        set PR_STDOUTPUT=%BASE_DIR%stdout.txt
        set PR_STDERROR=%BASE_DIR%stderr.txt
        set PR_LOGLEVEL=Error

        REM Path to java installation
        set PR_JVM=auto
        set PR_CLASSPATH=%BASE_DIR%%SERVICE_NAME%.jar

        REM Startup configuration
        set PR_STARTUP=auto
        set PR_STARTIMAGE=c:\Program Files\Java\jre7\bin\java.exe
        set PR_STARTMODE=exe
        set PR_STARTPARAMS=-jar#%PR_CLASSPATH%

        REM Shutdown configuration
        set PR_STOPMODE=java
        set PR_STOPCLASS=TODO
        set PR_STOPMETHOD=stop

        REM JVM configuration
        set PR_JVMMS=64
        set PR_JVMMX=256

        REM Install service
        %PR_INSTALL% //IS//%SERVICE_NAME%

    I now just need to work out how to stop the service. I am thinking of doing something with the Spring Boot actuator shutdown JMX bean. What happens when I stop the service at the moment: Windows fails to stop the service (but marks it as stopped), the service is still running (I can browse to localhost), and there is no mention of the process in Task Manager (not very good, unless I am being blind).
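
    For the PR_STOPCLASS=TODO gap, one approach that fits this exe-mode setup (a sketch; it assumes the actuator shutdown endpoint is enabled with endpoints.shutdown.enabled=true in the Spring Boot 1.x style, and that the app listens on 8080) is a tiny stop class that POSTs to /shutdown, which procrun can invoke in java stop mode:

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ServiceStopper {
            // procrun calls this static method when the service is asked to stop
            public static void stop(String[] args) throws Exception {
                URL url = new URL("http://localhost:8080/shutdown"); // assumed port/endpoint
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.getResponseCode(); // fires the request; Boot shuts itself down
                conn.disconnect();
            }
        }

    Then set PR_STOPCLASS=ServiceStopper, keep PR_STOPMODE=java and PR_STOPMETHOD=stop, and append this class (packed in its own small JAR) to PR_CLASSPATH.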

  • Byte array serialization in JSON.NET

    - by Daniel Earwicker
    Given this simple class:

        class HasBytes {
            public byte[] Bytes { get; set; }
        }

    I can round-trip it through JSON using JSON.NET such that the byte array is base-64 encoded:

        var bytes = new HasBytes { Bytes = new byte[] { 1, 2, 3, 4 } };

        // turn it into a JSON string
        var json = JsonConvert.SerializeObject(bytes);

        // get back a new instance of HasBytes
        var result1 = JsonConvert.DeserializeObject<HasBytes>(json);

        // all is well
        Debug.Assert(bytes.Bytes.SequenceEqual(result1.Bytes));

    But if I deserialize it this way:

        var result2 = (HasBytes)new JsonSerializer().Deserialize(
            new JTokenReader(
                JToken.ReadFrom(new JsonTextReader(
                    new StringReader(json)))),
            typeof(HasBytes));

    ...it throws an exception, "Expected bytes but got string". What other options/flags/whatever would need to be added to the "complicated" version to make it properly decode the base-64 string and initialize the byte array? Obviously I'd prefer to use the simple version, but I'm trying to work with a CouchDB wrapper library called Divan, which sadly uses the complicated version, with the responsibilities for tokenizing/deserializing widely separated, and I want to make the simplest possible patch to how it currently works.
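
    If patching Divan itself is off the table, a custom converter registered on the serializer makes the JTokenReader path tolerate the base-64 string token -- a sketch against the public JsonConverter API:

        using System;
        using Newtonsoft.Json;

        class ByteArrayConverter : JsonConverter
        {
            public override bool CanConvert(Type objectType)
            {
                return objectType == typeof(byte[]);
            }

            public override object ReadJson(JsonReader reader, Type objectType,
                                            object existingValue, JsonSerializer serializer)
            {
                // JTokenReader surfaces the base-64 payload as a string token
                if (reader.TokenType == JsonToken.String)
                    return Convert.FromBase64String((string)reader.Value);
                if (reader.TokenType == JsonToken.Bytes)
                    return (byte[])reader.Value;
                if (reader.TokenType == JsonToken.Null)
                    return null;
                throw new JsonSerializationException(
                    "Unexpected token for byte[]: " + reader.TokenType);
            }

            public override void WriteJson(JsonWriter writer, object value,
                                           JsonSerializer serializer)
            {
                writer.WriteValue(Convert.ToBase64String((byte[])value));
            }
        }

    Registered with serializer.Converters.Add(new ByteArrayConverter()) before the Deserialize call, which keeps the patch to a single line at the deserialization site.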

  • SSL authentication error: RemoteCertificateChainErrors on ASP.NET on Ubuntu

    - by Frank Krueger
    I am trying to access Gmail's SMTP service from an ASP.NET MVC site running under Mono 2.4.2.3, but I keep getting this error:

        System.InvalidOperationException: SSL authentication error: RemoteCertificateChainErrors
          at System.Net.Mail.SmtpClient.m__3 (System.Object sender, System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Security.Cryptography.X509Certificates.X509Chain chain, SslPolicyErrors sslPolicyErrors) [0x00000]
          at System.Net.Security.SslStream+c__AnonStorey9.m__9 (System.Security.Cryptography.X509Certificates.X509Certificate cert, System.Int32[] certErrors) [0x00000]
          at Mono.Security.Protocol.Tls.SslClientStream.OnRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000]
          at Mono.Security.Protocol.Tls.SslStreamBase.RaiseRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000]
          at Mono.Security.Protocol.Tls.SslClientStream.RaiseServerCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] certificateErrors) [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.validateCertificates (Mono.Security.X509.X509CertificateCollection certificates) [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.ProcessAsTls1 () [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.HandshakeMessage.Process () [0x00000]
          at (wrapper remoting-invoke-with-check) Mono.Security.Protocol.Tls.Handshake.HandshakeMessage:Process ()
          at Mono.Security.Protocol.Tls.ClientRecordProtocol.ProcessHandshakeMessage (Mono.Security.Protocol.Tls.TlsStream handMsg) [0x00000]
          at Mono.Security.Protocol.Tls.RecordProtocol.InternalReceiveRecordCallback (IAsyncResult asyncResult) [0x00000]

    I have installed certificates using:

        certmgr -ssl -m smtps://smtp.gmail.com:465

    with this output:

        Mono Certificate Manager - version 2.4.2.3
        Manage X.509 certificates and CRL from stores.
        Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

        X.509 Certificate v3
           Issued from: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
           Issued to:   C=US, O=Google Inc, CN=Google Internet Authority
           Valid from:  06/08/2009 20:43:27
           Valid until: 06/07/2013 19:43:27
           *** WARNING: Certificate signature is INVALID ***

        Import this certificate into the CA store ? yes

        X.509 Certificate v3
           Issued from: C=US, O=Google Inc, CN=Google Internet Authority
           Issued to:   C=US, S=California, L=Mountain View, O=Google Inc, CN=smtp.gmail.com
           Valid from:  04/22/2010 20:02:45
           Valid until: 04/22/2011 20:12:45

        Import this certificate into the AddressBook store ? yes

        2 certificates added to the stores.

    In fact, this worked for a month but mysteriously stopped working on May 5. I installed these new certs today, but I am still getting these errors.
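
    The certmgr output itself flags the likely culprit: the intermediate "Google Internet Authority" certificate was imported with an INVALID signature, so the chain still cannot be validated up to a trusted root. Two standard Mono-side moves (the first is the usual fix; the second is a diagnostic bypass only):

        # Import Mozilla's root CA bundle into Mono's certificate store
        mozroots --import --sync

    and, to confirm that chain validation is the only problem, a temporary trust-all callback in code (do not ship this):

        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, errors) => true; // diagnostic only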

  • Detect user logout / shutdown in Python / GTK under Linux

    - by Ivo Wetzel
    OK, this is presumably a hard one. I've got a pyGTK application that has random crashes due to X Window errors that I can't catch/control, so I created a wrapper that restarts the app as soon as it detects a crash. Now comes the problem: when the user logs out or shuts down the system, the app exits with status 1 -- but on some X errors it does so too. So I tried literally anything to catch the shutdown/logout, with no success. Here's what I've tried:

        import pygtk
        import gtk
        import sys

        class Test(gtk.Window):
            def delete_event(self, widget, event, data=None):
                open("delete_event", "wb")

            def destroy_event(self, widget, data=None):
                open("destroy_event", "wb")

            def destroy_event2(self, widget, event, data=None):
                open("destroy_event2", "wb")

            def __init__(self):
                gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)
                self.show()
                self.connect("delete_event", self.delete_event)
                self.connect("destroy", self.destroy_event)
                self.connect("destroy-event", self.destroy_event2)

        def foo():
            open("add_event", "wb")

        def ex():
            open("sys_event", "wb")

        from signal import *

        def clean(sig):
            f = open("sig_event", "wb")
            f.write(str(sig))
            f.close()
            exit(0)

        for sig in (SIGABRT, SIGILL, SIGINT, SIGSEGV, SIGTERM):
            signal(sig, lambda *args: clean(sig))

        def at():
            open("at_event", "wb")

        import atexit
        atexit.register(at)

        f = Test()
        sys.exitfunc = ex
        gtk.quit_add(gtk.main_level(), foo)
        gtk.main()

        open("exit_event", "wb")

    Not one of these succeeds. Is there any low-level way to detect the system shutdown? Google didn't find anything related to it. I guess there must be a way, am I right? :/
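
    When the session ends, the X server typically just disappears, so process-level hooks (atexit, sys.exitfunc) never get a chance to run; what distinguishes a logout/shutdown from a random X crash is asking the session or login manager directly. A sketch using the logind D-Bus signal, which assumes a systemd-logind system (older setups would need the XSMP/gnome-session route instead); it plugs into the existing gtk.main() loop:

        import dbus
        from dbus.mainloop.glib import DBusGMainLoop

        DBusGMainLoop(set_as_default=True)
        bus = dbus.SystemBus()

        def on_prepare_for_shutdown(shutting_down):
            # Fired before the system shuts down or reboots; write a flag file
            # the restart wrapper can check to decide not to respawn the app.
            if shutting_down:
                open("shutdown_flag", "wb").close()

        bus.add_signal_receiver(
            on_prepare_for_shutdown,
            signal_name="PrepareForShutdown",
            dbus_interface="org.freedesktop.login1.Manager")

    Plain logout (without shutdown) is not covered by this signal; that case would need the session manager's own EndSession handshake (e.g. org.gnome.SessionManager).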
