Search Results

Search found 10550 results on 422 pages for 'syntax rules'.


  • Lighttpd not starting - no error

    - by Furism
    I recently installed Lighttpd on Ubuntu Server 10.04 x86_64 and created several websites. What I do is include /etc/lighttpd/vhost.d/*.conf and put a configuration file for each website in that directory. The problem is that when I run "service lighttpd start" I am told the service started, with no error message:

        root@178-33-104-210:~# service lighttpd start
        Syntax OK
        * Starting web server lighttpd    [ OK ]

    But if I then look at the listening services, Lighttpd is nowhere to be seen:

        root@178-33-104-210:~# netstat -tap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address     Foreign Address  State   PID/Program name
        tcp        0      0 localhost:mysql   *:*              LISTEN  829/mysqld
        tcp        0      0 *:ftp             *:*              LISTEN  737/vsftpd
        tcp        0      0 *:ssh             *:*              LISTEN  739/sshd
        tcp6       0      0 [::]:ssh          [::]:*           LISTEN  739/sshd

    So I'm looking for ways to troubleshoot this. I checked /var/log/lighttpd/error.log and there's nothing in it. Edit: Sorry, I originally indicated I use CentOS, but it's actually Ubuntu Server (I usually use CentOS but had to go with Ubuntu for this one).
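    A quick way to narrow this down is to bypass the init script and run the daemon against the same configuration in the foreground; a minimal sketch, assuming the stock Ubuntu config path:

        # check the configuration, including everything pulled in by the vhost.d include
        lighttpd -t -f /etc/lighttpd/lighttpd.conf

        # run in the foreground (no daemonizing) so any startup error lands on the terminal
        lighttpd -D -f /etc/lighttpd/lighttpd.conf

    If the foreground run stays up and the port binds, the problem is more likely in the init script or in a server.bind/server.port directive coming from one of the included vhost files.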

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from GitHub (https://github.com/puppetlabs/puppetlabs-apache):

        Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
        Warning: Not using cache on failed catalog
        Error: Could not retrieve catalog; skipping run

    My node config looks like:

        node 'zordon.mydomain.com' {
          include template::common
          include template::puppetagent
          include template::lamp
          User::Create
          sudo::conf { 'joe':
            priority => 60,
            content  => 'joe ALL=(ALL) NOPASSWD: ALL',
            require  => User::Create['joe'],
          }
        }

    The template::lamp class is what uses the apache module:

        class template::lamp {
          include myfirewall
          Firewall
          Firewall
          class { 'apache': }
          class { 'apache::mod::php': }
          class { 'apache::mod::ssl': }
          class { 'mysql::server': }
        }

    It looks like Server Fault markup is garbling my Puppet realize statements; the User::Create and Firewall lines are just realizing a user and two firewall rules. I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and that its MD5 matches the server's. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
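    One thing worth ruling out (an assumption based on the error, not a confirmed fix): an older copy of the apache module's a2mod type sitting elsewhere in the master's modulepath and shadowing the updated one in the environment. A rough sketch, with paths taken from the question and the master service name assumed:

        # on the master: look for more than one copy of the custom type
        find /etc/puppet -name a2mod.rb

        # restart whatever serves the master (webrick, passenger, ...) so cached
        # type definitions from a previous compile are dropped
        sudo service puppetmaster restart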

    Read the article

  • Where do I start?

    - by Panthe
    Brief history: I just graduated high school, learned a bit of Python and C++, and have no friends with any helpful computer knowledge at all. Of anyone I met in my school years I was probably the biggest nerd, but no one really knew. I consider myself to have a vaster knowledge of computers and tech than the average person: I've built and fixed tons of computers and can troubleshoot pretty much any problem I come across. Now that high school is over, I've really been thinking about my career. Having loved and lived computers for the past 15 years of my life, I decided to take my abilities and try to learn computer programming; why I didn't start earlier I don't know, which seems to be a big mistake on my part... Doing some research, I concluded that Python was the first programming language I should learn, since it is high level and easier to understand than C++ and Java. I also knew that to become good at this I would need to know more than just two or three languages, which didn't seem like a big problem, considering that once I learned the way Python worked, mainly the syntax would change and the rest would come naturally. I watched a couple of YouTube videos, downloaded some book PDFs and snooped around some tutorials here and there to get the hang of what to do. Two solid weeks have passed of trying to understand the syntax, creating small programs that use the basic functions, and understanding how it all works, and I think I have the hang of it. It breaks down into what I've been dealing with all this time (although I kind of knew): input, output, loops, functions and other things derived from 0s and 1s storing data and recalling it, etc. (a very basic idea). I've been able to create small programs (Hangman, file storing, temperature conversion, Caesar cipher encoding and decoding, the Fibonacci sequence and more), which I can build and understand how each works. Being two weeks into this, I have learned a lot, yet nothing compared to what I should be learning in the years to come if I get a grip on what I'm doing. While working on these programs I won't stop until I've finished a practice problem in a book, which, embarrassingly enough, can take me a couple of hours depending on its complexity. I absolutely will not put aside a challenge until it's complete, WHICH CAN BE EXTREMELY DRAINING. I've tried most problems without cheating and reached success, which makes me feel extremely proud of myself after completing something through much trial and error. After all this I have met the demon: algorithms, which seem to be the key to efficient code. I can't seem to wrap my head around some of the code people put out there using numbers, and sometimes even basic functions. I have been able to understand it after a while, but I know there are a lot more complex things to come; even though I consider myself smart, functions that require complex code actually hurt my brain. Nothing in life has ever hurt my brain before, not even math classes in high school. Trying to understand some of the stuff people put out there makes me feel like I have a mental disadvantage, lol... I still walk forward though, crossing my fingers that understanding will come with time. Sorry if this is long; I just hope someone takes all these things into consideration when answering my question.
    Even through all these downsides I'm still pushing through and continuing to try to get good at this. I know reading these tutorials won't make me any good unless I can become creative, make my own programs and understand other people's, so this leads me to the simple question I could have asked at the beginning: WHERE IN THE WORLD DO I START? I've been trying to find out how to understand some of the open source projects, and how I can work with experienced coders to learn from them and help them, but I don't think that's even possible given how far ahead their knowledge is compared to mine, and I have no friends I can learn from. Can someone help me and guide me in the right direction? I have a huge motivation to get good at coding; any information would be extremely helpful.

    Read the article

  • Iptables based router inside KVM virtual machine

    - by Anton
    I have a KVM virtual machine (CentOS 6.2 x64) with 2 NICs:

        eth0 - real external IP 1.2.3.4 (simplified example instead of the real one)
        eth1 - local internal IP 172.16.0.1

    Now I'm trying to map port 1.2.3.4:80 to 172.16.0.2:80. Current iptables rules:

        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A PREROUTING -p tcp -m tcp -d 1.2.3.4 --dport 80 -j DNAT --to-destination 172.16.0.2:80
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012
        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *mangle
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012
        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012

    But nothing works; the port is not forwarded. A similar configuration without virtualization seems to work. What am I missing? Thanks!
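    One common gap in a setup like this (a guess from the information given, not a confirmed diagnosis) is that IP forwarding is switched off in the guest's kernel, so the DNATed packets are never routed out of eth1. A minimal check-and-enable sketch:

        # 0 means the kernel is not routing between interfaces at all
        sysctl net.ipv4.ip_forward

        # enable it now, and persist it across reboots
        sysctl -w net.ipv4.ip_forward=1
        echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf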

    Read the article

  • Which iptables rule do you think is a 'must have'?

    - by Saif Bechan
    I have some basic iptables rules set up on my VPS now; they just block everything except a few default ports: 80, 21, 22, 443. I do get brute-forced a lot. I have heard that iptables is very powerful, but I have not seen many use cases. Can you give me an example of a rule (or some rules) you always use, with a small explanation of why? I cannot find a general best-practice post here on SF; if there is one I would like the link. If this is a duplicate I am sorry and it can be closed.
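    For concreteness, here is a hedged sketch of the kind of rules that commonly come up in answers to this sort of question (it assumes the INPUT policy and open ports described above and is not a complete ruleset):

        # accept replies to connections this host initiated, plus related traffic
        iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

        # drop malformed/invalid packets early
        iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

        # slow down SSH brute force: at most 4 new connections per minute per source IP
        iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --set --name SSH
        iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP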

    Read the article

  • Domino 8.5.3: Modify Subject of Incoming Email

    - by Void
    I am a newbie at managing and developing for Domino. Recently I got a request from other teams at work to set up a filter or agent for incoming mail. These are the requirements:

    - Look for incoming mail addressed to #CRITICAL (a multipurpose internal group containing a list of engineers)
    - For mail matching point 1, prefix the Subject with "For Immediate Action: "

    Some restrictions I have:

    - Only the Domino server is under my charge; I am not to touch the network side or other servers
    - No third-party software may be installed

    I have gone through the configuration of the Domino server, and the closest thing I have found to filtering email is the Router/SMTP Restrictions... Rules, but that cannot fulfill point 2 in any way. Is this even possible using just Domino server settings, or through agents?
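    For what it's worth, this kind of subject rewriting is usually done with a mail pre-delivery agent rather than router rules. A rough LotusScript sketch, untested: the agent would live in the recipients' mail databases with the trigger "Before new mail arrives", and the way the #CRITICAL group shows up in the SendTo item is an assumption:

        Sub Initialize
            Dim session As New NotesSession
            Dim doc As NotesDocument
            ' DocumentContext is the incoming, not-yet-delivered message
            Set doc = session.DocumentContext
            Dim sendTo As Variant
            sendTo = doc.GetItemValue("SendTo")
            Dim i As Integer
            For i = LBound(sendTo) To UBound(sendTo)
                ' assumption: the group appears as an address containing "#critical"
                If InStr(LCase(sendTo(i)), "#critical") > 0 Then
                    Call doc.ReplaceItemValue("Subject", "For Immediate Action: " & doc.GetItemValue("Subject")(0))
                    Exit For
                End If
            Next
        End Sub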

    Read the article

  • Make shortened and long URLs play together on the same domain (RewriteRule)

    - by Renato Renato
    Long story short, I want both example.com/aJ5 and example.com/any-other-url to work together. I'm using Apache and I'm not very good at writing regular expressions. I already have a global RewriteRule which sends everything to the app entry point. What I need is to tell Apache: if length($path) is <= 5 characters, then rewrite to another location. I know I can use {1,5}-style syntax in a regex, but I don't really know if that's what I'm looking for. I'd like to implement this at the web-server level rather than in PHP. Any help is appreciated.
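    A minimal sketch of the idea, assuming the short codes are 1 to 5 alphanumeric characters and that /shorturl.php is a hypothetical handler (both the character class and the target are assumptions, not taken from the question):

        RewriteEngine On
        # anything that is just 1-5 alphanumerics goes to the short-URL handler...
        RewriteRule ^/?([A-Za-z0-9]{1,5})$ /shorturl.php?code=$1 [L,QSA]
        # ...and the existing catch-all rule for the app entry point stays below this line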

    Read the article

  • Sun Java keytool importing EV certificates into a single keystore

    - by ss0
    At my current job we are using Tomcat; customers have custom web portals set up on their own local machines. EV certs are new to me: they have a two-part intermediate plus a primary certificate. For our product to work, it appears I need all three parts installed under a single keystore entry. How can I roll all three parts into a single X.509-compliant file for import? The syntax I am using is as follows:

        /blah/system/j2sdk/bin/keytool -import -alias foo -keystore /zix/system/jdk1.5.0_06/jre/lib/security/cacerts -file certname.pem -trustcacerts

    where foo is the keystore alias and certname.pem is the main cert. I have tried importing the intermediate certs under their own names into the keystore, and I don't know if it's just the product I have to work with (not vanilla Tomcat) or what, but it doesn't see those. I have seen a working system where all three certs were under a single keystore alias. Anyone have any ideas?
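    One approach that often comes up for this (a sketch with the file names assumed; it presumes PEM files ordered server certificate first, then the two intermediates):

        # build one PEM bundle: server cert first, then the intermediates
        cat certname.pem intermediate1.pem intermediate2.pem > chain.pem

        # import the bundle under the single alias
        /blah/system/j2sdk/bin/keytool -import -alias foo -trustcacerts \
            -keystore /zix/system/jdk1.5.0_06/jre/lib/security/cacerts -file chain.pem

    Note that this generally only behaves as hoped when the alias already holds the matching private key entry; for plain trusted-certificate entries keytool expects one certificate per alias.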

    Read the article

  • pfSense CARP - WAN failure on firewall

    - by eldblz
    I have recently configured two firewalls (on two Dell PowerEdge R210 II servers with ESXi 5.1) with pfSense. We have several LANs and 2 WANs. Everything is running fine, but I see a strange behaviour: I can access the internet from all LANs but not from the firewall itself. For example, the firewall cannot retrieve package information, and if I set up a gateway monitor IP (like Google's 8.8.8.8) it fails. These are screenshots of the firewall configuration: http://imgur.com/a/LNuMz#0 For the moment I have kept the firewall rules to a minimum to avoid problems or conflicts. Any ideas how to solve the problem? Thank you in advance.

    Read the article

  • Forwarding ports with ssh on Linux

    - by Patrick Klingemann
    I have a database server; let's call it dbserver. I have a web server with access to dbserver; let's call it webserver. I have a development machine that I'd like to use to access a database on dbserver; let's call it dev. dbserver has a firewall rule set to allow TCP requests from webserver to dbserver:1433. I'd like to set up a tunnel from dev:1433 to dbserver:1433, so all requests to port 1433 on dev are passed along to dbserver:1433. My sshd_config on webserver has the following set:

        AllowTcpForwarding yes
        GatewayPorts yes

    This is what I've tried:

        ssh -v -L localhost:1433:dbserver:1433 webserver

    In another terminal:

        telnet localhost 1433

    Results in:

        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.

    Any idea what I'm doing wrong here? Thanks in advance!
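    A couple of checks that usually narrow this down (a sketch; the hostnames come from the question, everything else is generic):

        # 1) from webserver, confirm the database port is actually reachable from there;
        #    note that "dbserver" is resolved on webserver, not on dev
        nc -vz dbserver 1433

        # 2) keep the verbose tunnel in the foreground and watch it while connecting from dev;
        #    a "channel ... open failed" line means webserver could not reach dbserver:1433
        ssh -v -N -L 1433:dbserver:1433 webserver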

    Read the article

  • NetBeans 7.2 MinGW installing for OpenCV

    - by Gligorijevic
    I have installed MinGW on my PC according to http://netbeans.org/community/releases/72/cpp-setup-instructions.html, and I have "restored defaults" in NetBeans 7.2, which found all the necessary files. But when I built a test sample C++ app I got the following error:

        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -ladvapi32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -lshell32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -luser32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -lkernel32
        collect2: ld returned 1 exit status
        make[2]: *** [dist/Debug/MinGW-Windows/welcome_1.exe] Error 1
        make[1]: *** [.build-conf] Error 2
        make: *** [.build-impl] Error 2

    Can anyone give me a hand with installing OpenCV and MinGW for NetBeans? The generated Makefile goes like this:

        # CMAKE generated file: DO NOT EDIT!
        # Generated by "MinGW Makefiles" Generator, CMake Version 2.8

        # Default target executed when no arguments are given to make.
        default_target: all
        .PHONY : default_target

        #=============================================================================
        # Special targets provided by cmake.

        # Disable implicit rules so canonical targets will work.
        .SUFFIXES:

        # Remove some rules from gmake that .SUFFIXES does not remove.
        SUFFIXES =

        .SUFFIXES: .hpux_make_needs_suffix_list

        # Suppress display of executed commands.
        $(VERBOSE).SILENT:

        # A target that is always out of date.
        cmake_force:
        .PHONY : cmake_force

        #=============================================================================
        # Set environment variables for the build.

        SHELL = cmd.exe

        # The CMake executable.
        CMAKE_COMMAND = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake.exe"

        # The command to remove a file.
        RM = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake.exe" -E remove -f

        # Escaping for special characters.
        EQUALS = =

        # The program to use to edit the cache.
        CMAKE_EDIT_COMMAND = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake-gui.exe"

        # The top-level source directory on which CMake was run.
        CMAKE_SOURCE_DIR = C:\msys\1.0\src\opencv

        # The top-level build directory on which CMake was run.
        CMAKE_BINARY_DIR = C:\msys\1.0\src\opencv\build\mingw

        #=============================================================================
        # Targets provided globally by CMake.

        # Special rule for the target edit_cache
        edit_cache:
            @$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Running CMake cache editor..."
            "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake-gui.exe" -H$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR)
        .PHONY : edit_cache

        # Special rule for the target edit_cache
        edit_cache/fast: edit_cache
        .PHONY : edit_cache/fast

    Read the article

  • Nginx rewrite with Simple Machines Forum

    - by Kevin Worthington
    I am running Nginx 1.5.6 and I use the Simple Machines Forum software. Most rewrite rules seem to work properly, with the exception of the RSS feeds. In my Nginx configuration, I have the following line which is supposed to handle URLs which contain ".xml":

        rewrite ^/forum/(\.xml|xmlhttp)/?$ "/forum/index.php?pretty;action=$1" last;

    The above rule produces the following URL for the main forum, which returns a 403 Error:

        http://www.mydomain.com/forum/.xml/?type=rss

    I would like the rewrite rule to produce this type of URL, which returns code 200 (a real page):

        http://www.mydomain.com/forum/?type=rss;action=.xml

    Here is the entire block pertaining to the forum rewrites: http://pastebin.com/raw.php?i=tZkAibW3 I would really appreciate some help to create a rewrite rule to do that. Thanks.

    Read the article

  • kvm -net only passing broadcast, multicast, and guest destination traffic

    - by user52874
    Figured this out just last week, but I can't find it now. Even printed it out. Can't find that either. Frustrating... so... help! I configured a 'monitoring' NIC on a KVM guest (running Security Onion, if it matters). I read (somewhere) that the default NIC configuration for a KVM guest is to only pass broadcast traffic, multicast traffic, and traffic with the guest's MAC as the destination. There is an option to override this behaviour and pass all traffic. It's something like --mac-filtering=no, or --mac-restriction=no, or something like that. It worked beautifully. Does this look at all familiar to anyone who can clue me in on the exact option syntax? Thanks.

    Read the article

  • Set up simple reverse proxy using IIS

    - by Ropstah
    I would like to reverse proxy my Jira installation on a Windows Server 2008 machine. Jira is running under http://jira.domain.com:8080/ and is accessible as such. The machine also runs IIS, hosting several ASP.NET websites. I followed the instructions here: http://blogs.iis.net/carlosag/archive/2010/04/01/setting-up-a-reverse-proxy-using-iis-url-rewrite-and-arr.aspx and installed URL Rewrite and ARR. I now have a "Web Farm" node in my IIS instance, but I've got no idea how to proceed. I tried adding some rules, but this made the rest of my IIS websites stop responding. Is there a simple way to say:

    1. Forward http://jira.domain.com to http://localhost:8080
    2. Ignore other domains and route them as usual

    Any help is greatly appreciated!
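    A hedged sketch of the usual shape of such a rule: a URL Rewrite rule placed in the web.config of one site (the names below are placeholders), with a host-header condition so the other IIS sites are left alone. ARR's proxy mode still has to be enabled at the server level for the Rewrite action to proxy.

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="JiraReverseProxy" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <!-- only requests whose Host header is jira.domain.com get proxied -->
                  <add input="{HTTP_HOST}" pattern="^jira\.domain\.com$" />
                </conditions>
                <action type="Rewrite" url="http://localhost:8080/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>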

    Read the article

  • FTP "PUT" fails from Virtual Machine, but not host PC: 504 Command not implemented for that paramete

    - by BrianH
    I have an FTP script I'm using to automate a file transfer. The transfer works fine on my PC (XP SP2), but when I try to run it in a VM on my PC (also XP SP2), the "put" command gives:

        504 Command not implemented for that parameter.

    FTP file:

        open [ftp site]
        [username]
        [password]
        cd [directory on FTP server]
        binary
        hash
        put ..\[subfolder1]\[Subfolder2]\[subfolder3]\[filename]
        bye

    The FTP site/server is around the world and not under my control. From what I understand of a 504, it means the command should NEVER work, but since the same script DOES work on my PC (hosting the VM), that eliminates syntax, file naming, etc. The put command, when triggered from the VM, actually creates a 0-length file on the target FTP server but doesn't populate it.
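    One way to see exactly which command and parameter the server is objecting to is to run the same script with the Windows ftp client's debug switch, which echoes every command sent and every reply received (the script file name is a placeholder):

        rem -d prints each command and server reply, so the line behind the 504 shows up;
        rem -s runs the existing script unchanged
        ftp -d -s:ftpscript.txt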

    Read the article

  • Nginx no static files after update

    - by SomeoneS
    First, I must say that I am not an expert in server administration; my site was set up by the hosting admins (whom I cannot contact anymore). A few days ago I updated Nginx to the latest version (the admin told me it was safe to do). But after that, my site serves only HTML content: no CSS, images or JS. If I try to open some image I get the message "Welcome to nginx!" (same thing if I try to open static.mysitedomain.com). More details: the site has a static. subdomain, but the static files are in the same directory as they used to be before setting up static files. I was googling for solutions and tried changing some things in /etc/nginx/, but no luck. I feel this is some minor configuration problem; any ideas? EDIT: Here is the /etc/nginx/nginx.conf file content:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Here is the /etc/nginx/sites-enabled/default file content:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            # Only for nginx-naxsi : process denied requests
            #location /RequestDenied {
                # For example, return an error code
                #return 418;
            #}

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            #location ~ \.php$ {
            #    fastcgi_split_path_info ^(.+\.php)(/.+)$;
            #    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            #
            #    # With php5-cgi alone:
            #    fastcgi_pass 127.0.0.1:9000;
            #    # With php5-fpm:
            #    fastcgi_pass unix:/var/run/php5-fpm.sock;
            #    fastcgi_index index.php;
            #    include fastcgi_params;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

        # another virtual host using mix of IP-, name-, and port-based configuration
        #
        #server {
        #    listen 8000;
        #    listen somename:8080;
        #    server_name somename alias another.alias;
        #    root html;
        #    index index.html index.htm;
        #
        #    location / {
        #        try_files $uri $uri/ /index.html;
        #    }
        #}

        # HTTPS server
        #
        #server {
        #    listen 443;
        #    server_name localhost;
        #
        #    root html;
        #    index index.html index.htm;
        #
        #    ssl on;
        #    ssl_certificate cert.pem;
        #    ssl_certificate_key cert.key;
        #
        #    ssl_session_timeout 5m;
        #
        #    ssl_protocols SSLv3 TLSv1;
        #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        #    ssl_prefer_server_ciphers on;
        #
        #    location / {
        #        try_files $uri $uri/ /index.html;
        #    }
        #}

    Read the article

  • Multiple VLANs routed on one NIC: Trunk, General, or Access?

    - by Aceth
    OK, for the last week I've been racking my head around this... I have an SRW208P with 802.1Q support, and a virtual Endian appliance. I would like to have 3 VLANs, with everything routed through the Endian appliance. The virtual server has 2 NICs bridged to the switch. This is where I'm getting confused: on the 8-port switch I've got the 3 VLANs set up OK (all untagged, as the devices are not going to be VLAN-aware); it's the port connecting the Endian firewall to the switch I'm having trouble with (the second NIC goes to the ADSL modem and is NAT'd). Is it meant to be a Trunk, "General", or "Access" port, and then untagged or tagged? The end goal is to have VLAN traffic routed through the single NIC and have Endian route VLAN traffic according to the rules. Does anyone have any ideas on the Cisco Small Business stuff? Thanks

    Read the article

  • Public DNS Server fails on Windows Amazon EC2

    - by Adroidist
    I have started a new Windows server instance on Amazon EC2. The security group has the following rules:

        Ports  Protocol  Source
        22     tcp       0.0.0.0/0
        80     tcp       0.0.0.0/0
        443    tcp       0.0.0.0/0
        3389   tcp       0.0.0.0/0
        53     udp       0.0.0.0/0
        -1     icmp      0.0.0.0/0

    I am able to ping the public DNS name of the machine, and I can connect to it using Windows Remote Desktop Connection. However, when I put the public DNS name into my web browser, it fails to connect. Moreover, I tried FileZilla and PuTTY (and in both I loaded the private key .pem) but I get "connection timed out". I disabled the firewall on both my PC and the instance (which I accessed using Remote Desktop Connection). Can you please tell me what I am missing?
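    With the security group that open, the next thing usually checked is whether anything on the instance is listening on those ports at all; a small sketch, run inside the instance over RDP (nothing EC2-specific about it):

        rem is anything bound to ports 80 and 22?
        netstat -an | findstr ":80 "
        netstat -an | findstr ":22 "

    Also note that a stock Windows AMI does not run an SSH or FTP server, so PuTTY and FileZilla timing out against it is expected unless one was installed.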

    Read the article

  • Running a home mail server using dynamic dns [closed]

    - by Anand
    Hi, is it possible to run an email server on my home box using dynamic DNS? The scenario is: I want to auto-CC all incoming and outgoing email from one of my accounts to another, via some server-side configuration instead of configuring email client rules. I have tried Google Apps Mail, but it doesn't allow auto-CC of outgoing email. After reading tons of blogs, forum messages, etc. (I hope I have been reading the correct info :) ), the only option to achieve what I need seems to be setting up my own mail server, but the cost of getting a static IP doesn't fit my budget. Please can someone point me in the correct direction? Platform doesn't matter; I can set up a Windows or Linux server. Many thanks

    Read the article

  • Forward emails from specific domain in Exchange

    - by neildeadman
    Our Exchange server handles email for @ourdomain.com (for example). We have multiple clients that send email to our [email protected] address, and we want to configure server-side rules that forward email from each client's domain to a different address within our Exchange organization. For example:

    - [email protected] sends an email to [email protected] and we forward it to [email protected]
    - [email protected] sends an email to [email protected] and we forward it to [email protected]

    ...and so on. It would be nice if we could additionally stop the email arriving in the [email protected] mailbox, but that is not a specific requirement. We have a rule set up in Outlook that sort of works, but it doesn't match everything from a domain, only specific email addresses. It does work when Outlook is not running, which is a start. I realise it would be easier to give each client a particular email address and have them email straight to that, rather than all using the same one, but this is what I have been asked to set up... :S
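    Depending on the Exchange version, this is normally a transport rule rather than an Outlook rule. A hedged sketch for the Exchange Management Shell; the addresses are placeholders, and the parameters shown (in particular -SenderDomainIs) exist in Exchange 2013 and later, so check Get-Help New-TransportRule for the equivalents on your version:

        New-TransportRule -Name "Forward clientdomain.com mail" `
            -SenderDomainIs "clientdomain.com" `
            -SentTo "info@ourdomain.com" `
            -RedirectMessageTo "clientdomain@ourdomain.com"

    Using a redirect action rather than a copy action also keeps the message out of the shared mailbox, which covers the "nice to have" above.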

    Read the article

  • What are your "must have", free (gratis), programs?

    - by flybywire
    Poll: What software must you always keep handy? I don't care if it is open source, freeware, or a demo, as long as its price is $0. Neither do I care if it is for desktops, handhelds, netbooks, the web, or cellphones. If it is free to use – and essential to your happiness and well-being – put it in this list. Rules: Please list only ONE application per answer, so that people can vote up the items they prefer. Please do not post applications that have already been posted; instead, up-vote the existing answer.

    Read the article

  • Configuring port forwarding on Fortigate 50B

    - by GomoX
    I can't for the life of me get port forwarding to work on my FortiGate 50B. I followed the setup tips described in this other SF thread, with no success. The only relevant difference I can find is that we are using load balancing across two different internet uplinks. Is there any caveat specific to this scenario that I might be missing? If you need any additional information please ask, because I think I have checked everything:

    - Virtual IP mapping on the external interface wan1
    - ACCEPT all from any on wan1 to the corresponding server on the internal interface
    - No seemingly offending firewall rules (any specific pitfalls I might want to check for?)

    Read the article

  • Sonicwall Enhanced With One-To-One NAT, Firewall Blocking Everything

    - by Justin
    Hello, I just migrated from a SonicWALL TZ 180 (Standard) to a SonicWALL TZ 200 (Enhanced). Everything is working except that the firewall rules are blocking everything. All hosts are online and being assigned correct IP addresses, and I can browse the internet from the hosts. I am using one-to-one NAT to translate public IP addresses to private ones:

        64.87.28.98 -> 192.168.1.2
        64.87.28.99 -> 192.168.1.3
        etc.

    First order of business is to get ping working. My rule in the new firewall is (from WAN to LAN):

        SOURCE  DESTINATION               SERVICE  ACTION  USERS
        ANY     192.168.1.2-192.168.1.6   PING     ALLOW   ALL

    This should be working, but it is not. I even tried changing the destination to the public IP addresses, but still no luck:

        SOURCE  DESTINATION               SERVICE  ACTION  USERS
        ANY     64.87.28.98-64.87.28.106  PING     ALLOW   ALL

    Any ideas what I am doing wrong?

    Read the article

  • Windows Event Viewer - XML Custom Filter

    - by Frank
    <QueryList>
      <Query Id="0" Path="Application">
        <Select Path="Application">
          *[EventData[Data and (Data="Error")]]
        </Select>
      </Query>
    </QueryList>

    I believe the above XML custom filter would work if I wanted to check for events where "Data" equals the word "Error". However, what I want to express is that I want the events where Data CONTAINS the word "Error"... how do I express that? I've googled around, but I can find no references to regular-expression-like pattern matching in the Event Viewer. XPath has "contains", but if Event Viewer supports it, I cannot seem to figure out the syntax for invoking it.
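    If the Event Log's XPath subset turns out not to accept contains() here, one workaround is to do the substring match in PowerShell instead of in the filter; a minimal sketch (the log name is taken from the query above):

        # -MaxEvents just keeps the example fast on a large log
        Get-WinEvent -LogName Application -MaxEvents 1000 |
            Where-Object { $_.Message -match 'Error' } |
            Select-Object TimeCreated, Id, ProviderName, Message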

    Read the article

  • tracd multiple projects+nginx reverse proxy

    - by Xeross
    I am trying to set up nginx as a reverse proxy to tracd, but I only want to run one tracd. First, here is my config for this domain:

        server {
            listen 80;
            server_name bugs.XXXXXXXX.com;
            access_log /var/log/nginx/XXXXXXXX-bugtracker.access.log proxy;

            location / {
                rewrite ^/bugtracker/(.*)$ /$1;
                rewrite ^/bugtracker$ /;
                proxy_pass http://127.0.0.1:81/bugtracker/;
                proxy_redirect default;
                proxy_set_header Host $host;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    As you can see, there are the rewrite rules, because for some reason all the URLs that tracd spews out look like /bugtracker/something. This is caused by tracd just sending URLs as it normally would; however, Trac is at bugs.XXXXXXXX.com/ and not at bugs.XXXXXXXX.com/bugtracker. So how can I make tracd/Trac produce (in this case) the correct URLs?
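    One direction worth sketching (an assumption, not a verified fix): instead of stripping the prefix with rewrites, let the public URL keep the same /bugtracker prefix the backend uses, so the links tracd generates already match. Alternatively, if this is a single environment, tracd's -s option serves it at the root path, which would make the existing rewrites unnecessary.

        location /bugtracker/ {
            proxy_pass http://127.0.0.1:81/bugtracker/;
            proxy_set_header Host $host;
        }

        # optional: send the bare root to the project
        location = / {
            return 301 /bugtracker/;
        }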

    Read the article
