Search Results

Search found 12439 results on 498 pages for 'bad practice'.

Page 27 of 498

  • Javascript Prototype Best Practice Event Handlers

    - by nahum
    Hi, this is more of a best-practice question. Sometimes when I'm building a complete Ajax application I add elements dynamically. For example, when adding a list of items, I do something like:

        var template = new Template("<li id='list#{id}'>#{value}</li>");
        var arrayTemplate = [];
        arrayOfItem.each(function(item, index) {
            arrayTemplate.push(template.evaluate({ id: index, value: item }));
        });

    and after this, one of two options adds the list via "update" or "insert":

        $("elementToUpdate").update("<ul>" + arrayTemplate.join("") + "</ul>");

    The question is how I can attach the event handlers without making a second pass over the array. If you try to attach an event before the update or insert, you get an error because the element isn't in the DOM yet. So what I'm doing for now, after the insert or update, is:

        arrayOfItem.each(function(item, index) {
            $("list" + index).observe("click", function() {
                alert("I see the world");
            });
        });

    So the question is: does a better way of doing this exist?

    Read the article

  • Best practice for web server user/group permissions

    - by Poe
    What's the best practice for setting up users, groups and permissions in a secure manner? Here's what we currently have: the web server runs as www/www, FastCGI PHP runs as www/www, and the user's shell/FTP account is username/username. We want the user to have full access to all files from the shell or FTP, including those created by the web server as 'www'. Similarly, we want the scripts run by FastCGI/PHP to be able to create files in user-created directories and to modify user-created files.

    Read the article

  • Best Practice for creating Web Services

    - by Holograham
    To preface, I am new to web development. I am looking at creating a core set of RESTful web services around a valuable document library of sorts (initial CRUD abilities). In doing so I would, in theory, be creating a perfectly re-usable and scalable back end that unanticipated applications could use in the future. My question centers on the best practice for doing this. My initial requirement also has me creating a unique front end. Should I make the front end and back end completely separate projects to enhance re-usability, even though it would increase overhead? I'm looking at using a GWT, Restlet and JEE technology stack, if that influences the setup at all.
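
    For what it's worth, a resource-oriented CRUD interface over such a document library might look roughly like the sketch below. The question mentions Restlet, but this uses JAX-RS annotations purely for illustration, and the resource path, the in-memory map and all class names are assumptions rather than anything from the original question.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicLong;
        import javax.ws.rs.*;
        import javax.ws.rs.core.MediaType;
        import javax.ws.rs.core.Response;

        // Hypothetical document resource; the in-memory map stands in for the real library.
        @Path("/documents")
        @Produces(MediaType.TEXT_PLAIN)
        @Consumes(MediaType.TEXT_PLAIN)
        public class DocumentResource {
            private static final Map<Long, String> DOCS = new ConcurrentHashMap<Long, String>();
            private static final AtomicLong IDS = new AtomicLong();

            @POST
            public Response create(String body) {
                long id = IDS.incrementAndGet();
                DOCS.put(id, body);
                return Response.status(Response.Status.CREATED).entity(Long.toString(id)).build();
            }

            @GET @Path("{id}")
            public String read(@PathParam("id") long id) {
                String doc = DOCS.get(id);
                if (doc == null) throw new WebApplicationException(Response.Status.NOT_FOUND);
                return doc;
            }

            @PUT @Path("{id}")
            public Response update(@PathParam("id") long id, String body) {
                DOCS.put(id, body);
                return Response.noContent().build();
            }

            @DELETE @Path("{id}")
            public Response delete(@PathParam("id") long id) {
                DOCS.remove(id);
                return Response.noContent().build();
            }
        }

    Keeping an interface like this in its own back-end project, with the GWT front end as just one client of it, is one common way to get the re-usability the question is after.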

    Read the article

  • Would this constructor be acceptable practice?

    - by Robb
    Let's assume I have a C++ class that has a properly implemented copy constructor and an overloaded = operator. By properly implemented I mean they work and perform a deep copy:

        Class1::Class1(const Class1 &class1)
        {
            // Perform copy
        }

        Class1& Class1::operator=(const Class1 *class1)
        {
            // perform copy
            return *this;
        }

    Now let's say I have this constructor as well:

        Class1::Class1(Class1 *class1)
        {
            *this = *class1;
        }

    My question is: would the above constructor be acceptable practice? This is code that I've inherited and am maintaining.

    Read the article

  • Best practice for passing configuration to each GUI object

    - by Laimoncijus
    I am writing an application with a few different windows, where each window is a separate class, and I need to pass a single configuration object to all of them. My GUI is designed so that there is one main window, which may create child windows of its own, and those child windows can have their own children (so there is no way to create all the windows during initialization and feed the config object to them from the very beginning)... What would be the best practice for sharing this configuration object between them? Always passing it via the constructor, or making it public static final somewhere and letting each window access it when needed? Thanks
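
    As a rough illustration of the two options mentioned, here is a minimal Java sketch; AppConfig, ConfigHolder and the window classes are made-up names, not from the original question.

        import javax.swing.JFrame;

        // Hypothetical immutable configuration object.
        final class AppConfig {
            final String serverUrl;
            AppConfig(String serverUrl) { this.serverUrl = serverUrl; }
        }

        // Option 1: constructor injection - each window receives the config explicitly
        // and hands it on when it creates its own child windows.
        class ChildWindow extends JFrame {
            private final AppConfig config;
            ChildWindow(AppConfig config) {
                this.config = config;
                setTitle("Connected to " + config.serverUrl);
            }
        }

        class MainWindow extends JFrame {
            private final AppConfig config;
            MainWindow(AppConfig config) { this.config = config; }

            void openChild() {
                new ChildWindow(config).setVisible(true);  // the dependency stays visible
            }

            public static void main(String[] args) {
                new MainWindow(new AppConfig("https://example.org")).openChild();
            }
        }

        // Option 2: a global holder - convenient, but it hides the dependency and makes
        // running two windows with different configurations (e.g. in tests) harder.
        final class ConfigHolder {
            public static final AppConfig INSTANCE = new AppConfig("https://example.org");
            private ConfigHolder() {}
        }

    Constructor injection keeps the dependency explicit at the cost of some ceremony; the static holder trades that explicitness for convenience.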

    Read the article

  • postgresql duplicate table names best practice

    - by veilig
    My company has a handful of apps that we deploy on the websites we build. Recently a very old app needed to be included alongside a newer app, and there was a conflict: both apps needed to use the same table name. We are now in the process of updating an old app and there will be some DB updates, so I'm curious what people consider best practice (or how you do it) to help ensure these name collisions don't happen. I've looked at schemas but I'm not sure that's the path we want to take. As the documentation advises, I don't want to "wire" a particular schema name into an application, and if I add schemas to the user's search path, how would it know which table I was referring to if two schemas have a table with the same name? Although maybe I'm reading too much into this. Any insights or words of wisdom would be greatly appreciated!

    Read the article

  • MVC best practice when adding functions to controller?

    - by Patrick
    Hi, I'm writing a function in my controller that is supposed to take in a form, process it, register the user in the DB, send a confirmation email, and so on. To keep this function from getting too cluttered, I was thinking of calling some sub-functions, e.g.:

        function registration() {
            // process form..
            _insertInDb($formdata);
            _send_mail($address);
            // load confirmation view..
        }

        function _send_mail($to) {
            // code here
        }

        function _insertInDb($formdata) {
            // other code here...
        }

    I'm not sure whether writing all these functions in the controller is best practice - maybe I should put all the 'supporting' functions (e.g. send_mail and insertInDb in this example) in another file and then import them? That would probably make the controller much more readable. What's your view?

    Read the article

  • Java 7 API design best practice - return Array or return Collection

    - by Shengjie
    I know this question was asked before generics came out; arrays won out a bit back then because an array enforces the element type, so it was more type-safe. But now, with the latest JDK 7, every time I design this kind of API:

        public String[] getElements(String type)

    vs

        public List<String> getElements(String type)

    I struggle to think of good reasons to return a collection over an array, or the other way around. What's the best practice when choosing String[] or List<String> as an API's return type? Or is it horses for courses? I don't have a specific case in mind; I'm more looking for a general pros/cons comparison.
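
    As a small illustration of why the List form is usually preferred for a public API, here is a self-contained sketch; the ElementRegistry class and its method names are invented for the example.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.Collections;
        import java.util.List;

        public class ElementRegistry {
            private final List<String> elements = new ArrayList<String>();

            public void add(String element) {
                elements.add(element);
            }

            // Callers get a List (richer API, plays well with generics and for-each)
            // but cannot mutate the registry's internal state through it.
            public List<String> getElements() {
                return Collections.unmodifiableList(elements);
            }

            // If an array really is required, return a copy so callers cannot alias internals.
            public String[] getElementsAsArray() {
                return elements.toArray(new String[0]);
            }

            public static void main(String[] args) {
                ElementRegistry registry = new ElementRegistry();
                registry.add("alpha");
                registry.add("beta");
                System.out.println(registry.getElements());                    // [alpha, beta]
                System.out.println(Arrays.toString(registry.getElementsAsArray()));
            }
        }

    An array forces a choice between exposing internal storage and copying it on every call; a List lets you hand back an unmodifiable view and later change the underlying collection type without breaking callers.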

    Read the article

  • best-practice on for loop's condition

    - by guest
    What is considered best practice in this case?

        for (i = 0; i < array.length(); ++i)

    or

        for (i = array.length(); i > 0; --i)

    Assume I don't care about the iteration direction; I just want to loop over the whole length of the array. Also, I don't plan to alter the array's size in the loop body. So, will the array.length() call be treated as a constant at compile time? If not, then the second approach should be the one to go for..
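
    The question doesn't name a language, so as an assumption here is the idea in Java: a compiler generally can't treat a length method call as a constant, but you can hoist it into the loop initializer yourself and keep the natural forward iteration.

        import java.util.Arrays;
        import java.util.List;

        public class LoopLength {
            public static void main(String[] args) {
                List<String> items = Arrays.asList("a", "b", "c");

                // Naive form: items.size() is evaluated on every iteration.
                for (int i = 0; i < items.size(); ++i) {
                    System.out.println(items.get(i));
                }

                // Hoisted form: evaluate the length once, then iterate forward as usual.
                for (int i = 0, n = items.size(); i < n; ++i) {
                    System.out.println(items.get(i));
                }
            }
        }

    (For a plain Java array, length is a field rather than a method call, and the JIT will typically hoist it itself, so the reversed loop buys little there.)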

    Read the article

  • What is good practice for writing web applications that control daemons (and their config files)

    - by Jones R
    Can someone suggest some basic advice on dealing with web applications that interact with configuration files like httpd.conf, BIND zone files, etc.? I understand that it's bad practice, in fact very dangerous, to allow arbitrary execution of code without fully validating it, and so on. But say you are tasked with writing a small app that lets someone add vhosts to an Apache configuration. Do you have your code execute with full privileges? Do you write the pending values into a database and have a cron job (with full privileges) run a script that pulls the values from the database and drops them into a template config file? Some thoughts and contributions on this issue would be appreciated. tl;dr - how can you securely write a web app to update/create entries in a config file like Apache's httpd.conf, etc.

    Read the article

  • Best practice for error handling in codeigniter / php apps

    - by Industrial
    Hi everyone, we're working on a new CodeIgniter-based application that cross-references different PHP functions back and forth across various libraries, models and such. We're running PHP 5 on the server and are trying to find a good way to manage the errors and status reports that arise from using our functions. When a function hits a return statement its execution ends, so nothing more can be sent back, right? What's the best practice for returning status information or an error code when a function finishes? Should we look into using exceptions or some other approach? http://us.php.net/manual/en/language.exceptions.php

    Read the article

  • Performing periodic audits and best practice

    - by DTown
    I'm writing a Windows Forms app and would like an audit task to run every 30 seconds. The audit essentially checks a series of services on remote computers and reports their status back into a RichTextBox. Currently I have this running in an endless background thread and use an invoker to update the RichTextBox on the main form. Is this best practice? If I put an endless loop in my main form it would prevent any of my buttons from working, correct? I'm just curious whether, every time I want a periodic audit check, I have to create a new thread that checks the status or file or what have you.
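
    The question is about WinForms/C#, so purely as an illustration of the underlying pattern (a scheduled background task that marshals its result back onto the UI thread), here is the analogous sketch in Java Swing; in .NET the corresponding pieces would be a timer or background worker plus Control.Invoke. The class and method names are invented for the example.

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import javax.swing.JFrame;
        import javax.swing.JScrollPane;
        import javax.swing.JTextArea;
        import javax.swing.SwingUtilities;

        public class AuditDemo {
            public static void main(String[] args) {
                final JTextArea output = new JTextArea(20, 60);
                JFrame frame = new JFrame("Audit");
                frame.add(new JScrollPane(output));
                frame.pack();
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);

                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                scheduler.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        // Slow remote checks happen off the UI thread...
                        final String status = checkRemoteServices();
                        // ...and only the UI update is marshalled back onto it.
                        SwingUtilities.invokeLater(new Runnable() {
                            public void run() {
                                output.append(status + "\n");
                            }
                        });
                    }
                }, 0, 30, TimeUnit.SECONDS);
            }

            // Stand-in for the real remote service checks.
            private static String checkRemoteServices() {
                return "All monitored services responding at " + new java.util.Date();
            }
        }

    Either way, the key point is that neither the periodic wait nor the status checks run on the UI thread; only the final display update does.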

    Read the article

  • Design best practice - best way to handle user selection

    - by user1457227
    I'm an experienced developer (WPF) moving over to Android development. My question: an app I am developing allows the user to browse their local storage (such as the SD card) and select a file. Now, should I simply create a new Activity (after the user has made a selection) to handle whatever the app should do with the chosen file, or is the better approach to pass the path/name of the selected file back to the main Activity and let it launch the next Activity? In other words, is it better practice to have the main Activity launch the other (support) activities, or is it perfectly OK and normal to have one activity chain to another, and on and on? Thanks!
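
    A rough sketch of the second approach, where the picker hands the chosen path back and the main Activity stays in charge of navigation; FileBrowserActivity, ViewerActivity and the extra key are made-up names, not part of the original question.

        import android.app.Activity;
        import android.content.Intent;

        public class MainActivity extends Activity {
            private static final int PICK_FILE = 1;

            void browseForFile() {
                startActivityForResult(new Intent(this, FileBrowserActivity.class), PICK_FILE);
            }

            @Override
            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                super.onActivityResult(requestCode, resultCode, data);
                if (requestCode == PICK_FILE && resultCode == RESULT_OK) {
                    String path = data.getStringExtra("selected_path");
                    Intent view = new Intent(this, ViewerActivity.class);
                    view.putExtra("selected_path", path);
                    startActivity(view);  // the main Activity decides what happens next
                }
            }
        }

        // Inside the hypothetical picker, when the user taps a file:
        //     Intent result = new Intent();
        //     result.putExtra("selected_path", file.getAbsolutePath());
        //     setResult(RESULT_OK, result);
        //     finish();

    Both patterns are common on Android; returning the result just keeps the main Activity in charge of the overall flow.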

    Read the article

  • iphone best practice, how to load multiple high quality images

    - by bennythemink
    Hi guys and girls, I have about 20 high-quality images (~3840x5800 px) that I need to load in a simple gallery-type app. The user clicks a button and the next image is loaded into the UIImageView. I currently use [UIImage imageWithContentsOfFile:], which takes about 6 seconds to load each image in the simulator :( If I use [UIImage imageNamed:] it takes even longer to load, but it caches the images, which makes it quicker if the user wants to see the same images again. However, it may cause memory problems later, with all that caching crashing my app. I want to know what the best practice is for loading these. I'm experimenting with reducing the image file size as much as possible, but I really need them to stay high quality for the purpose of the app (zoomable, etc.). Thanks for any advice

    Read the article

  • The best practice to setup hierarchy data

    - by eski
    I'm trying to figure out the best practice for taking hierarchical data from a database and showing it in some control that displays the hierarchy. Basically this would look like a normal tree, but when you press the items under "chapters" you get a link to another page. I have these tables, linked together with references to each other in a linear way: Period, Courses, Subjects, Chapters. I select a Period from a DropDownBox and then I want all the courses in that period to line up; under each course would be its subjects, and under them the chapters - a typical hierarchy. I have tried using a TreeView to show this, but I don't understand how to do it. I thought I could emit <ul><li> tags and build it at runtime - would a Repeater or DataList be possible? Is it better to do this with data binding in XAML or in code?

    Read the article

  • Best practice for creating objects used in for/foreach loops

    - by StupidDeveloper
    Hi! What's the best practice for dealing with objects in for or foreach loops? Should we declare one variable outside the loop and reassign it each time (using new...), or declare a new one for every loop iteration? Example:

        foreach (var a in collection)
        {
            SomeClass sc = new SomeClass();
            sc.id = a;
            sc.Insert();
        }

    or

        SomeClass sc = null;
        foreach (var a in collection)
        {
            sc = new SomeClass();
            sc.id = a;
            sc.Insert();
        }

    Which is better?
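
    Note that both variants allocate a new object on every iteration; the only real difference is the scope of the local variable. A minimal Java sketch of the same thing (the question uses C#-style syntax; the made-up Item class stands in for SomeClass):

        import java.util.Arrays;
        import java.util.List;

        public class LoopScope {
            static class Item {
                int id;
                void insert() { System.out.println("inserted " + id); }
            }

            public static void main(String[] args) {
                List<Integer> collection = Arrays.asList(1, 2, 3);

                // Preferred: the variable exists only where it is needed.
                for (int a : collection) {
                    Item item = new Item();
                    item.id = a;
                    item.insert();
                }

                // Works, but leaks the variable into the surrounding scope for no benefit.
                Item leaked = null;
                for (int a : collection) {
                    leaked = new Item();
                    leaked.id = a;
                    leaked.insert();
                }
            }
        }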

    Read the article

  • Best practice: Define form field name in backend or the template

    - by AbcAeffchen
    If you're designing a web page you should separate the backend from the frontend. But if you use forms you have to name the fields, so where should that name be defined? E.g. in PHP:

        $fieldName = 'email';
        $template->setVar('field_name', $fieldName);
        ...
        if (!empty($_POST)) validate($_POST[$fieldName]);

    Template:

        <input type="text" name="{$field_name}">

    Or just PHP:

        if (!empty($_POST)) validate($_POST['email']);

    Template:

        <input type="text" name="email">

    Or should I write a function that can be called from the template and converts an array of field data (name, type, value, id, class, ...) into HTML code? Is there a best practice for where to define field names (types, etc.)? Note: I used PHP and Smarty as pseudocode (and tags), but it's a general question.

    Read the article

  • global counter in application: bad practice?

    - by Martin
    In my C++ application I sometimes create different output files for troubleshooting purposes. Each file is created at a different step of our pipelined operation, and it's hard to know which file came before which (the file timestamps all show the same date). I'm thinking of adding a global counter to my application, plus a function (with multithreading protection) that increments and returns that counter; I can then use the counter as part of the filenames I create. Is this considered bad practice? Many different modules need to create files and they're not necessarily connected to each other.
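
    The question is about C++, where std::atomic would be the natural tool, but as a language-neutral illustration of the idea here is a Java sketch of a process-wide, thread-safe counter used to sequence debug file names; the class and method names are invented.

        import java.util.concurrent.atomic.AtomicLong;

        public final class DebugFileSequencer {
            private static final AtomicLong COUNTER = new AtomicLong();

            private DebugFileSequencer() {}

            // Thread-safe: each caller gets a unique, monotonically increasing number.
            public static String nextFileName(String stage) {
                return String.format("%06d_%s.log", COUNTER.incrementAndGet(), stage);
            }

            public static void main(String[] args) {
                System.out.println(nextFileName("parser"));     // 000001_parser.log
                System.out.println(nextFileName("optimizer"));  // 000002_optimizer.log
            }
        }

    A zero-padded sequence number at the front of each file name also makes the files sort in creation order, which is usually the point of the exercise.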

    Read the article

  • DKIM error: dkim=neutral (bad version) header.i=

    - by GBC
    Ive been struggling the last couple of hours with setting up DKIM on my Postfix/CentOS 5.3 server. It finally sends and signs the emails, but apparently Google still does not like it. The errors I'm getting are: dkim=neutral (bad version) [email protected] from googles "show original" interface. This is what my DKIM-signature header look like: v=1; a=rsa-sha1; c=simple/simple; d=mydomain.com.au; s=default; t=1267326852; bh=0wHpkjkf7ZEiP2VZXAse+46PC1c=; h=Date:From:Message-Id:To:Subject; b=IFBaqfXmFjEojWXI/WQk4OzqglNjBWYk3jlFC8sHLLRAcADj6ScX3bzd+No7zos6i KppG9ifwYmvrudgEF+n1VviBnel7vcVT6dg5cxOTu7y31kUApR59dRU5nPR/to0E9l dXMaBoYPG8edyiM+soXo7rYNtlzk+0wd5glgFP1I= Very appreciative of any suggestions as to how I can solve this problem! Btw, here is exactly how I installed dkim-milter in CentOS 5.3 for postfix, if anyone is interested (based on this guide): mkdir dkim-milter cd dkim-milter wget http://www.topdog-software.com/oss/dkim-milter/dkim-milter-2.8.3-1.x86_64.rpm ======S====== Newest version: http://www.topdog-software.com/oss/dkim-milter/ ======E====== rpm -Uvh dkim-milter-2.8.3-1.x86_64.rpm /usr/bin/dkim-genkey -r -d mydomain.com.au ======S====== add contents of default.txt to DNS as TXT _ssp._domainkey TXT dkim=unknown _adsp._domainkey TXT dkim=unknown default._domainkey TXT v=DKIM1; g=*; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GWETBNiQKBgQC5KT1eN2lqCRQGDX+20I4liM2mktrtjWkV6mW9WX7q46cZAYgNrus53vgfl2z1Y/95mBv6Bx9WOS56OAVBQw62+ksXPT5cRUAUN9GkENPdOoPdpvrU1KdAMW5c3zmGOvEOa4jAlB4/wYTV5RkLq/1XLxXfTKNy58v+CKETLQS/eQIDAQAB ======E====== mv default.private default mkdir /etc/mail/dkim/keys/mydomain.com.au mv default /etc/mail/dkim/keys/mydomain.com.au chmod 600 /etc/mail/dkim/keys/mydomain.com.au/default chown dkim-milt.dkim-milt /etc/mail/dkim/keys/mydomain.com.au/default vim /etc/dkim-filter.conf ======S====== ADSPDiscard yes ADSPNoSuchDomain yes AllowSHA1Only no AlwaysAddARHeader no AutoRestart yes AutoRestartRate 10/1h BaseDirectory /var/run/dkim-milter Canonicalization simple/simple Domain mydomain.com.au #add all your domains here and seperate them with comma ExternalIgnoreList /etc/mail/dkim/trusted-hosts InternalHosts /etc/mail/dkim/trusted-hosts KeyList /etc/mail/dkim/keylist LocalADSP /etc/mail/dkim/local-adsp-rules Mode sv MTA MSA On-Default reject On-BadSignature reject On-DNSError tempfail On-InternalError accept On-NoSignature accept On-Security discard PidFile /var/run/dkim-milter/dkim-milter.pid QueryCache yes RemoveOldSignatures yes Selector default SignatureAlgorithm rsa-sha1 Socket inet:20209@localhost Syslog yes SyslogSuccess yes TemporaryDirectory /var/tmp UMask 022 UserID dkim-milt:dkim-milt X-Header yes ======E====== vim /etc/mail/dkim/keylist ======S====== *@mydomain.com.au:mydomain.com.au:/etc/mail/dkim/keys/mydomain.com.au/default ======E====== vim /etc/postfix/main.cf ======S====== Add: smtpd_milters = inet:localhost:20209 non_smtpd_milters = inet:localhost:20209 milter_protocol = 2 milter_default_action = accept ======E====== vim /etc/mail/dkim/trusted-hosts ======S====== localhost 127.0.0.1 ======E====== /etc/mail/local-host-names ======S====== localhost 127.0.0.1 ======E====== /sbin/chkconfig dkim-milter on /etc/init.d/dkim-milter start /etc/init.d/postfix restart

    Read the article

  • tomcat5 HTTP 400 BAd Request

    - by Oneiroi
    OS is centOS 5.5 x64, rpm's are as follows: tomcat5-jsp-2.0-api-5.5.23-0jpp.9.el5_5 tomcat5-common-lib-5.5.23-0jpp.9.el5_5 tomcat5-servlet-2.4-api-5.5.23-0jpp.9.el5_5 tomcat5-server-lib-5.5.23-0jpp.9.el5_5 tomcat5-5.5.23-0jpp.9.el5_5 tomcat5-jasper-5.5.23-0jpp.9.el5_5 telnet localhost 8080 Trying 127.0.0.1... Connected to localhost.localdomain (127.0.0.1). Escape character is '^]'. GET / HTTP/1.0 Host: localhost HTTP/1.1 400 Bad Request Server: Apache-Coyote/1.1 Date: Thu, 16 Sep 2010 15:06:21 GMT Connection: close alternatives --display java output: alternatives --display java java - status is manual. link currently points to /usr/lib/jvm/jre1.6.0_21/bin/java /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java - priority 16000 slave keytool: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/keytool slave orbd: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/orbd slave pack200: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/pack200 slave rmid: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/rmid slave rmiregistry: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/rmiregistry slave servertool: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/servertool slave tnameserv: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/tnameserv slave unpack200: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/unpack200 slave jre_exports: /usr/lib/jvm-exports/jre-1.6.0-openjdk.x86_64 slave jre: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64 slave java.1.gz: /usr/share/man/man1/java-java-1.6.0-openjdk.1.gz slave keytool.1.gz: /usr/share/man/man1/keytool-java-1.6.0-openjdk.1.gz slave orbd.1.gz: /usr/share/man/man1/orbd-java-1.6.0-openjdk.1.gz slave pack200.1.gz: /usr/share/man/man1/pack200-java-1.6.0-openjdk.1.gz slave rmid.1.gz: /usr/share/man/man1/rmid-java-1.6.0-openjdk.1.gz slave rmiregistry.1.gz: /usr/share/man/man1/rmiregistry-java-1.6.0-openjdk.1.gz slave servertool.1.gz: /usr/share/man/man1/servertool-java-1.6.0-openjdk.1.gz slave tnameserv.1.gz: /usr/share/man/man1/tnameserv-java-1.6.0-openjdk.1.gz slave unpack200.1.gz: /usr/share/man/man1/unpack200-java-1.6.0-openjdk.1.gz /usr/lib/jvm/jre-1.4.2-gcj/bin/java - priority 1420 slave keytool: /usr/lib/jvm/jre-1.4.2-gcj/bin/keytool slave orbd: (null) slave pack200: (null) slave rmid: (null) slave rmiregistry: /usr/lib/jvm/jre-1.4.2-gcj/bin/rmiregistry slave servertool: (null) slave tnameserv: (null) slave unpack200: (null) slave jre_exports: /usr/lib/jvm-exports/jre-1.4.2-gcj slave jre: /usr/lib/jvm/jre-1.4.2-gcj slave java.1.gz: (null) slave keytool.1.gz: (null) slave orbd.1.gz: (null) slave pack200.1.gz: (null) slave rmid.1.gz: (null) slave rmiregistry.1.gz: (null) slave servertool.1.gz: (null) slave tnameserv.1.gz: (null) slave unpack200.1.gz: (null) /usr/lib/jvm/jre1.6.0_21/bin/java - priority 2 slave keytool: (null) slave orbd: (null) slave pack200: (null) slave rmid: (null) slave rmiregistry: (null) slave servertool: (null) slave tnameserv: (null) slave unpack200: (null) slave jre_exports: (null) slave jre: (null) slave java.1.gz: (null) slave keytool.1.gz: (null) slave orbd.1.gz: (null) slave pack200.1.gz: (null) slave rmid.1.gz: (null) slave rmiregistry.1.gz: (null) slave servertool.1.gz: (null) slave tnameserv.1.gz: (null) slave unpack200.1.gz: (null) Current `best' version is /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java. Same occurs trying HTTP/1.1, and I am at a complete loss as to why.

    Read the article

  • Providing DNS redirection to honeypot server for known bad domains

    - by syn-
    Currently running BIND on RHEL 5.4 and looking for a more efficient way of providing DNS redirection to a honeypot server for a large (30,000+) list of forbidden domains. Our current solution is to include, in named.conf, a file containing a zone master declaration for each blocked domain. Each of these zone declarations points to the same zone file, which resolves all hosts in that domain to our honeypot servers. ...basically this allows us to capture any "phone home" attempts by malware that may infiltrate the internal systems. The problem with this configuration is the large amount of time taken to load all 30,000+ domains, as well as management of the domain list configuration file itself... if any errors creep into this file, the BIND server will fail to start, which makes automating the process a little frightening. So I'm looking for something more efficient and potentially less error-prone.

    named.conf entry:

        include "blackholes.conf";

    blackholes.conf entry example:

        zone "bad-domain.com" IN {
            type master;
            file "/var/named/blackhole.zone";
            allow-query { any; };
            notify no;
        };

    blackhole.zone entries:

        $INCLUDE std.soa
        @   NS   ns1.ourdomain.com.
        @   NS   ns2.ourdomain.com.
        @   NS   ns3.ourdomain.com.
            IN   A    192.168.0.99
        *   IN   A    192.168.0.99

    Read the article

  • phpMyAdmin causes php-fpm worker to restart (502 Bad Gateway)

    - by rndbit
    I am trying to set up a test site for myself. Everything works fine except phpMyAdmin. php installation loads my test site scripts, they work fine, however trying to load phpMyAdmin i get 502 Bad Gateway error. Judging from logs (that are not too helpful) it seems that php-fpm worker is crashing each time phpmyadmin is being accessed. No clue how or why.. Does anyone have any idea? nginx log: *3 recv() failed (104: Connection reset by peer) while reading response header from upstream, And php-fpm log: [07-Jun-2012 14:19:51] WARNING: [pool www] child 32179 exited on signal 11 (SIGSEGV) after 3.217902 seconds from start [07-Jun-2012 14:19:51] NOTICE: [pool www] child 32351 started My nginx conf: user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; fastcgi_buffers 8 16k; fastcgi_buffer_size 32k; include /etc/nginx/conf.d/*.conf; server { listen 443 ssl; listen 80; server_name testsite.net www.testsite.net; ssl on; ssl_certificate /var/www/html/admin/ssl/certificate.pem; ssl_certificate_key /var/www/html/admin/ssl/privatekey.pem; ssl_session_timeout 1m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5:!kEDH; ssl_prefer_server_ciphers on; access_log off; location ~ \.php$ { root /var/www/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location / { root /var/www/html; index index.php; } } } php.ini is standard, with cgi.fix_pathinfo=0 php-fpm.conf: include=/etc/php-fpm.d/*.conf [global] pid = /var/run/php-fpm/php-fpm.pid error_log = /var/log/php-fpm/error.log log_level = notice php-fpm.d/www.conf: [www] listen = 127.0.0.1:9000 listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 10 pm.start_servers = 1 pm.min_spare_servers = 1 pm.max_spare_servers = 10 slowlog = /var/log/php-fpm/www-slow.log php_flag[display_errors] = on php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on

    Read the article

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on stackoverflow, but it may be better suited for serverfault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy and I'm hoping someone might have the answer. I've scoured google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question: Nginx upstream subversion_hosts { server 127.0.0.1:80; } server { listen x.x.x.x:80; server_name hostname; access_log /srv/log/nginx/http.access_log main; error_log /srv/log/nginx/http.error_log info; # redirect all requests to https rewrite ^/(.*)$ https://hostname/$1 redirect; } # HTTPS server server { listen x.x.x.x:443; server_name hostname; passenger_enabled on; root /path/to/rails/root; access_log /srv/log/nginx/ssl.access_log main; error_log /srv/log/nginx/ssl.error_log info; ssl on; ssl_certificate server.crt; ssl_certificate_key server.key; add_header Front-End-Https on; location /svn { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; set $fixed_destination $http_destination; if ( $http_destination ~* ^https(.*)$ ) { set $fixed_destination http$1; } proxy_set_header Destination $fixed_destination; proxy_pass http://subversion_hosts; } } Apache Listen 127.0.0.1:80 <VirtualHost *:80> # in order to support COPY and MOVE, etc - over https (443), # ServerName _must_ be the same as the nginx servername # http://trac.edgewall.org/wiki/TracNginxRecipe ServerName hostname UseCanonicalName on <Location /svn> DAV svn SVNParentPath "/srv/svn" Order deny,allow Deny from all Satisfy any # Some config omitted ... </Location> ErrorLog /var/log/apache2/subversion_error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/subversion_access.log combined </VirtualHost> From what I could tell while researching this problem, the server name has to match on both the apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.

    Read the article

  • php-fpm or nginx: bad gateway

    - by John Tate
    I'm getting a bad gateway error all the sudden for a site. I didn't change the configuration for the site, I just added a new server config where I put them under /etc/nginx/servers and it stopped working. The new server works, and there is no conflict between the php-fpm listen addresses. server { listen 80; server_name obfuscated.onion; location = / { root /var/www/sites/obfuse; index index.php; } location / { root /var/www/sites/obfuse; index index.php; if (!-f $request_filename) { rewrite ^(.*)$ /index.php?q=$1 last; break; } if (!-d $request_filename) { rewrite ^(.*)$ /index.php?q=$1 last; break; } } error_page 404 /index.php; location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ { root /var/www/sites/obfuse; access_log off; expires 30d; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/sites/obfuse$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param PATH_INFO $fastcgi_script_name; include fastcgi_params; } } There is nothing unusual in php-fpm's log even when I raised the level to debug. [24-Jun-2013 09:10:37.357943] DEBUG: pid 6756, fpm_scoreboard_init_main(), line 40: got clock tick '100' [24-Jun-2013 09:10:37.358950] DEBUG: pid 6756, fpm_event_init_main(), line 333: event module is kqueue and 1 fds have been reserved [24-Jun-2013 09:10:37.358978] NOTICE: pid 6756, fpm_init(), line 83: fpm is running, pid 6756 [24-Jun-2013 09:10:37.359009] DEBUG: pid 6756, main(), line 1832: Sending "1" (OK) to parent via fd=5 [24-Jun-2013 09:10:37.389215] DEBUG: pid 6756, fpm_children_make(), line 421: [pool cyruserv] child 22288 started [24-Jun-2013 09:10:37.391343] DEBUG: pid 6756, fpm_children_make(), line 421: [pool cyruserv] child 21911 started [24-Jun-2013 09:10:37.391914] DEBUG: pid 6756, fpm_event_loop(), line 362: 5776 bytes have been reserved in SHM [24-Jun-2013 09:10:37.391941] NOTICE: pid 6756, fpm_event_loop(), line 363: ready to handle connections [24-Jun-2013 09:10:38.393048] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1 [24-Jun-2013 09:10:39.403032] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1 [24-Jun-2013 09:10:40.413070] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1 I don't know why this has started happening, but the logs are not telling me anything. Please ask for more information than this, you'll probably need it.

    Read the article

  • 40k Event Log Errors an hour Unknown Username or bad password

    - by ErocM
    I am getting about 200k of these an hour: An account failed to log on. Subject: Security ID: SYSTEM Account Name: TGSERVER$ Account Domain: WORKGROUP Logon ID: 0x3e7 Logon Type: 4 Account For Which Logon Failed: Security ID: NULL SID Account Name: administrator Account Domain: TGSERVER Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc0000064 Process Information: Caller Process ID: 0x334 Caller Process Name: C:\Windows\System32\svchost.exe Network Information: Workstation Name: TGSERVER Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: Advapi Authentication Package: Negotiate Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols. - Key length indicates the length of the generated session key. This will be 0 if no session key was requested. On my server... I changed my adminstrative username to something else and since then I've been inidated with these messages. I found on http://technet.microsoft.com/en-us/library/cc787567(v=WS.10).aspx that the 4 means "Batch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention." which really doesn't shed any light on it for me. I checked the services and they are all logging in as local system or network service. Nothing for administrator. Anyone have any idea how I tell where these are coming from? I would assume this is a program that is crapping out... Thanks in advance!

    Read the article
