Search Results

Search found 15377 results on 616 pages for 'socket programming'.


  • Why can't my Java servlet get PayPal IPN messages every time?

    - by Frank
    I have a Java servlet running on my notebook with Windows Vista. I set up a static IP, did port forwarding, and registered for a free DDNS service; the servlet is now running. I gave the URL to PayPal so it can send me IPN messages, then went to its sandbox site's test tools page and tried to send test messages by clicking the "Send IPN" button. Most of the time it fails with the error: "IPN delivery failed. Unable to connect to the specified URL. Please verify the URL and try again." But maybe 1 in 10 times it succeeds, my servlet gets the message, and the messages I receive are in the correct format. So I called PayPal to ask why. The support person said I shouldn't run the servlet on my notebook and should run it on a web server instead, but my ISP doesn't support Java on their servers, and since I did all the steps above, shouldn't running it on my notebook work the same way? He said his test showed he couldn't reach my servlet, but then why does it get through about 1 in 10 times? If something were fundamentally wrong with running it on my notebook, it should fail 100% of the time, shouldn't it? Anyway, he said that was all he could do and that I should troubleshoot it myself. The servlet looks like this:

        import java.io.*;
        import java.net.*;
        import javax.servlet.*;
        import javax.servlet.http.*;
        import java.util.*;

        public class PayPal_Servlet extends HttpServlet {
            static boolean Debug = true;
            static String PayPal_Url = "https://www.paypal.com/cgi-bin/webscr",
                          Sandbox_Url = "https://www.sandbox.paypal.com/cgi-bin/webscr",
                          Dir_License_Messages = "C:/Dir_License_Messages/";
            static TransparencyExample Transparency_Example;
            static PayPal_Message_To_License_File_Worker PayPal_message_to_license_file_worker;

            // Initializes the servlet.
            public void init(ServletConfig config) throws ServletException {
                super.init(config);
                if (!new File(Dir_License_Messages).exists()) new File(Dir_License_Messages).mkdirs();
                System.gc();
            }

            /** Processes requests for both HTTP <code>GET</code> and <code>POST</code> methods.
             *  @param request  servlet request
             *  @param response servlet response */
            protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                // Read post from PayPal system and add 'cmd'
                Enumeration en = request.getParameterNames();
                String str = "cmd=_notify-validate";
                while (en.hasMoreElements()) {
                    String paramName = (String) en.nextElement();
                    String paramValue = request.getParameter(paramName);
                    str = str + "&" + paramName + "=" + URLEncoder.encode(paramValue);
                }
                // Post back to PayPal system to validate.
                // NOTE: change http: to https: in the following URL to verify using SSL (for increased security).
                // Using HTTPS requires either Java 1.4 or greater, or the Java Secure Socket Extension (JSSE)
                // installed and configured for older versions.
                URL u = new URL(Debug ? Sandbox_Url : PayPal_Url);
                URLConnection uc = u.openConnection();
                uc.setDoOutput(true);
                uc.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
                PrintWriter pw = new PrintWriter(uc.getOutputStream());
                pw.println(str);
                pw.close();
                BufferedReader in = new BufferedReader(new InputStreamReader(uc.getInputStream()));
                String res = in.readLine();
                in.close();
                // Assign posted variables to local variables
                String itemName = request.getParameter("item_name");
                String itemNumber = request.getParameter("item_number");
                String paymentStatus = request.getParameter("payment_status");
                String paymentAmount = request.getParameter("mc_gross");
                String paymentCurrency = request.getParameter("mc_currency");
                String txnId = request.getParameter("txn_id");
                String receiverEmail = request.getParameter("receiver_email");
                String payerEmail = request.getParameter("payer_email");
                if (res.equals("VERIFIED")) {        // Check notification validation
                    // check that paymentStatus=Completed
                    // check that txnId has not been previously processed
                    // check that receiverEmail is your Primary PayPal email
                    // check that paymentAmount/paymentCurrency are correct
                    // process payment
                } else if (res.equals("INVALID")) {  // Log for investigation
                } else {                             // Log for error
                }
                // ===========================================================================
                if (txnId != null) {
                    Write_File_Safe_Fast(Dir_License_Messages + txnId + ".txt",
                                         new StringBuffer(str.replace("&", "\n")), false);
                }
                // ===========================================================================
                String Message_File_List[] = Tool_Lib.Get_File_List_From_Dir(Dir_License_Messages);
                response.setContentType("text/html");
                PrintWriter out = response.getWriter();
                String title = "Reading All Request Parameters", Name = "", Value;
                out.println("<Html><Head><Title>" + title + "</Title></Head>\n<Body Bgcolor=\"#FDF5E6\">\n" +
                            "<H1 Align=Center>" + title + "</H1>\n<Table Border=1 Align=Center>\n" +
                            "<Tr Bgcolor=\"#FFAD00\"><Th>Parameter Name</Th><Th>Parameter Value(s) Messages = " +
                            Message_File_List.length + "</Th></Tr>");
                Enumeration paramNames = request.getParameterNames();
                while (paramNames.hasMoreElements()) {
                    String paramName = (String) paramNames.nextElement();
                    out.print("<Tr><Td>" + paramName + "</Td><Td>");
                    String[] paramValues = request.getParameterValues(paramName);
                    if (paramValues.length == 1) {
                        String paramValue = paramValues[0];
                        if (paramValue.length() == 0) out.print("<I>No Value</I>");
                        else {
                            out.println(paramValue + "</Td></Tr>");
                            // Out("paramName = " + paramName + " paramValue = " + paramValue);
                            // if (paramName.startsWith("Name")) Name = paramValue;
                            // else if (paramName.startsWith("Value"))
                            //     Write_File_Safe_Fast("C:/Dir_Data/" + Name, new StringBuffer(paramValue), false);
                        }
                    } else {
                        out.println("<Ul>");
                        for (int i = 0; i < paramValues.length; i++) out.println("<Li>" + paramValues[i]);
                        out.println("</Ul></Td></Tr>");
                    }
                }
                out.println("</Table>\n</Body></Html>");
            }

            /** Handles the HTTP <code>GET</code> method. */
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                processRequest(request, response);
            }

            /** Handles the HTTP <code>POST</code> method. */
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                processRequest(request, response);
            }

            // Returns a short description of the servlet.
            public String getServletInfo() { return "Short description"; }

            // Destroys the servlet.
            public void destroy() { System.gc(); }

            public static void Write_File_Safe_Fast(String File_Path, StringBuffer Str_Buf, boolean Append) {
                FileOutputStream fos = null;
                BufferedOutputStream bos = null;
                try {
                    fos = new FileOutputStream(File_Path, Append);
                    bos = new BufferedOutputStream(fos);
                    for (int j = 0; j < Str_Buf.length(); j++) bos.write(Str_Buf.charAt(j));
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    try {
                        if (bos != null) { bos.close(); bos = null; }
                        if (fos != null) { fos.close(); fos = null; }
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
                System.gc();
            }
        }

    I use NetBeans 6.7 to develop the servlet, and the code came from PayPal's JSP sample code. What can I do to debug the problem?
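
    A note on the code itself, independent of the connectivity question: the postback above never sets a timeout, so a slow PayPal response can pin the container's request thread, and the one-argument URLEncoder.encode(String) encodes with the platform default charset (that overload is deprecated). A hardened sketch of just the verification postback, assuming only the JDK classes the servlet already imports (the method name verifyWithPayPal is made up for illustration):

        // Hedged sketch: post the parameter string back to PayPal with explicit
        // timeouts; returns "VERIFIED", "INVALID", or whatever single line PayPal sent.
        private static String verifyWithPayPal(String postbackUrl, String paramString) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(postbackUrl).openConnection();
            conn.setConnectTimeout(10000); // fail fast instead of hanging the container thread
            conn.setReadTimeout(10000);
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            OutputStream os = conn.getOutputStream();
            os.write(("cmd=_notify-validate&" + paramString).getBytes("UTF-8"));
            os.close();
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
            try {
                return in.readLine(); // PayPal's answer is a single line
            } finally {
                in.close();
            }
        }

    As for the 9-in-10 failures: an intermittent "Unable to connect" from PayPal points at the path to the listener (DDNS resolution, the port forward, the notebook sleeping, or the ISP throttling inbound connections) rather than at the servlet code. Requesting the public DDNS URL repeatedly from outside the LAN - a phone on mobile data works - should reproduce the same intermittent failures if so.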


  • Can't figure out where a race condition is occurring

    - by Nik
    I'm using Valgrind --tool=drd to check my application, which uses Boost::thread. Basically, the application populates a set of "Book" values with "Kehai" values based on input arriving through a socket connection. On a separate thread, a user can connect and have the books sent to them. It's fairly simple, so I figured a boost::mutex::scoped_lock at the location that serializes a book and at the location that clears out the book's data should suffice to prevent any race conditions. Here is the code:

        void Book::clear() {
            boost::mutex::scoped_lock lock(dataMutex);
            for (int i = NUM_KEHAI - 1; i >= 0; --i) {
                bid[i].clear();
                ask[i].clear();
            }
        }

        int Book::copyChangedKehaiToString(char* dst) const {
            boost::mutex::scoped_lock lock(dataMutex);
            sprintf(dst, "%-4s%-13s", market.c_str(), meigara.c_str());
            int loc = 17;
            for (int i = 0; i < Book::NUM_KEHAI; ++i) {
                if (ask[i].changed > 0) {
                    sprintf(dst + loc, "A%i%-21s%-21s%-21s%-8s%-4s", i, ask[i].price.c_str(), ask[i].volume.c_str(),
                            ask[i].number.c_str(), ask[i].postTime.c_str(), ask[i].status.c_str());
                    loc += 77;
                }
            }
            for (int i = 0; i < Book::NUM_KEHAI; ++i) {
                if (bid[i].changed > 0) {
                    sprintf(dst + loc, "B%i%-21s%-21s%-21s%-8s%-4s", i, bid[i].price.c_str(), bid[i].volume.c_str(),
                            bid[i].number.c_str(), bid[i].postTime.c_str(), bid[i].status.c_str());
                    loc += 77;
                }
            }
            return loc;
        }

    The clear() function and the copyChangedKehaiToString() function are called in the data-getting thread and the data-sending thread, respectively. For reference, here is the Book class:

        struct Book {
        private:
            Book(const Book&);
            Book& operator=(const Book&);
        public:
            static const int NUM_KEHAI = 10;
            struct Kehai;
            friend struct Book::Kehai;
            struct Kehai {
            private:
                Kehai& operator=(const Kehai&);
            public:
                std::string price;
                std::string volume;
                std::string number;
                std::string postTime;
                std::string status;
                int changed;
                Kehai();
                void copyFrom(const Kehai& other);
                Kehai(const Kehai& other);
                inline void clear() {
                    price.assign("");
                    volume.assign("");
                    number.assign("");
                    postTime.assign("");
                    status.assign("");
                    changed = -1;
                }
            };
            std::vector<Kehai> bid;
            std::vector<Kehai> ask;
            tm recTime;
            mutable boost::mutex dataMutex;
            Book();
            void clear();
            int copyChangedKehaiToString(char* dst) const;
        };

    When running valgrind --tool=drd, I get race condition errors such as the one below:

        ==26330== Conflicting store by thread 1 at 0x0658fbb0 size 4
        ==26330==    at 0x653AE68: std::string::_M_mutate(unsigned int, unsigned int, unsigned int) (in /usr/lib/libstdc++.so.6.0.8)
        ==26330==    by 0x653AFC9: std::string::_M_replace_safe(unsigned int, unsigned int, char const*, unsigned int) (in /usr/lib/libstdc++.so.6.0.8)
        ==26330==    by 0x653B064: std::string::assign(char const*, unsigned int) (in /usr/lib/libstdc++.so.6.0.8)
        ==26330==    by 0x653B134: std::string::assign(char const*) (in /usr/lib/libstdc++.so.6.0.8)
        ==26330==    by 0x8055D64: Book::Kehai::clear() (Book.h:50)
        ==26330==    by 0x8094A29: Book::clear() (Book.cpp:78)
        ==26330==    by 0x808537E: RealKernel::start() (RealKernel.cpp:86)
        ==26330==    by 0x804D15A: main (main.cpp:164)
        ==26330== Allocation context: BSS section of /usr/lib/libstdc++.so.6.0.8
        ==26330== Other segment start (thread 2)
        ==26330==    at 0x400BB59: pthread_mutex_unlock (drd_pthread_intercepts.c:633)
        ==26330==    by 0xC59565: pthread_mutex_unlock (in /lib/libc-2.5.so)
        ==26330==    by 0x805477C: boost::mutex::unlock() (mutex.hpp:56)
        ==26330==    by 0x80547C9: boost::unique_lock<boost::mutex>::~unique_lock() (locks.hpp:340)
        ==26330==    by 0x80949BA: Book::copyChangedKehaiToString(char*) const (Book.cpp:134)
        ==26330==    by 0x80937EE: BookSerializer::serializeBook(Book const&, std::string const&) (BookSerializer.cpp:41)
        ==26330==    by 0x8092D05: BookSnapshotManager::getSnaphotDataList() (BookSnapshotManager.cpp:72)
        ==26330==    by 0x8088179: SnapshotServer::getDataList() (SnapshotServer.cpp:246)
        ==26330==    by 0x808870F: SnapshotServer::run() (SnapshotServer.cpp:183)
        ==26330==    by 0x808BAF5: boost::_mfi::mf0<void, RealThread>::operator()(RealThread*) const (mem_fn_template.hpp:49)
        ==26330==    by 0x808BB4D: void boost::_bi::list1<boost::_bi::value<RealThread*> >::operator()<boost::_mfi::mf0<void, RealThread>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf0<void, RealThread>&, boost::_bi::list0&, int) (bind.hpp:253)
        ==26330==    by 0x808BB90: boost::_bi::bind_t<void, boost::_mfi::mf0<void, RealThread>, boost::_bi::list1<boost::_bi::value<RealThread*> > >::operator()() (bind_template.hpp:20)
        ==26330== Other segment end (thread 2)
        ==26330==    at 0x400B62A: pthread_mutex_lock (drd_pthread_intercepts.c:580)
        ==26330==    by 0xC59535: pthread_mutex_lock (in /lib/libc-2.5.so)
        ==26330==    by 0x80546B8: boost::mutex::lock() (mutex.hpp:51)
        ==26330==    by 0x805473B: boost::unique_lock<boost::mutex>::lock() (locks.hpp:349)
        ==26330==    by 0x8054769: boost::unique_lock<boost::mutex>::unique_lock(boost::mutex&) (locks.hpp:227)
        ==26330==    by 0x8094711: Book::copyChangedKehaiToString(char*) const (Book.cpp:113)
        ==26330==    by 0x80937EE: BookSerializer::serializeBook(Book const&, std::string const&) (BookSerializer.cpp:41)
        ==26330==    by 0x808870F: SnapshotServer::run() (SnapshotServer.cpp:183)
        ==26330==    by 0x808BAF5: boost::_mfi::mf0<void, RealThread>::operator()(RealThread*) const (mem_fn_template.hpp:49)
        ==26330==    by 0x808BB4D: void boost::_bi::list1<boost::_bi::value<RealThread*> >::operator()<boost::_mfi::mf0<void, RealThread>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf0<void, RealThread>&, boost::_bi::list0&, int) (bind.hpp:253)

    For the life of me, I can't figure out where the race condition is. As far as I can tell, the kehai are cleared only after taking the mutex, and the same holds true for copying them to a string. Does anyone have any ideas what could be causing this, or where I should look? Thank you kindly.
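
    One hint in the report is the allocation context: "BSS section of /usr/lib/libstdc++.so.6.0.8". That libstdc++ (6.0.8 is the GCC 4.1-era library) implements std::string with copy-on-write, and the empty-string representation touched by assign("") is a single shared object inside the library itself. Its reference count is updated with atomic operations rather than with dataMutex, so DRD can report the update as a conflicting store even when it is benign; COW string reference counts are a known source of false positives under drd/helgrind. The flip side is that COW also means two Kehai objects - possibly in two different Books guarded by two different mutexes - can silently share one string representation, in which case neither lock protects it. If that sharing is what is being flagged, forcing deep copies wherever Kehai values cross threads removes it. A speculative sketch (the body of copyFrom() is not shown in the question, so this is an assumption about where the copy happens):

        // Hedged sketch: assign via data()+size() so the target string must
        // allocate its own buffer instead of sharing the source's COW rep.
        void Book::Kehai::copyFrom(const Kehai& other) {
            price.assign(other.price.data(), other.price.size());
            volume.assign(other.volume.data(), other.volume.size());
            number.assign(other.number.data(), other.number.size());
            postTime.assign(other.postTime.data(), other.postTime.size());
            status.assign(other.status.data(), other.status.size());
            changed = other.changed;
        }

    If reports persist after that, a DRD suppression file can separate genuine races from the COW reference-count noise.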


  • Need help setting up lighttpd on Ubuntu 9.10

    - by hap497
    Hi, I am trying to run lighttpd on Ubuntu 9.10. I get the conf file from the doc directory of lighttpd source. $ sudo ./lighttpd -f lighttpd.conf $ ps -ef | grep lighttpd root 2094 1 0 19:40 ? 00:00:00 ./lighttpd -f lighttpd.conf This is my lighttpd.conf: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", # "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) #server.port = 81 ## bind to localhost (default: all interfaces) #server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) 
#### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 When I go to browser and hit 'http://127.0.0.1', I get link not found. Any idea?
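
    One basic thing to rule out first: with the pasted config, lighttpd serves server.document-root = "/srv/www/htdocs/" and only knows the files listed in index-file.names, so a bare request to http://127.0.0.1 will fail if that directory is missing or empty - which it typically is on a fresh Ubuntu machine when the conf came straight from the source tarball's doc directory. A quick sanity check, assuming the paths from the config above:

        # create the docroot and log directory the config expects, plus a test page
        sudo mkdir -p /srv/www/htdocs /var/log/lighttpd
        echo '<h1>it works</h1>' | sudo tee /srv/www/htdocs/index.html

        # -t parses the config and exits; -D runs in the foreground so errors are visible
        sudo ./lighttpd -t -f lighttpd.conf
        sudo ./lighttpd -D -f lighttpd.conf

    If the test page loads, the error was just the empty docroot; if not, /var/log/lighttpd/error.log should now say why.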


  • Fedora 16 can connect to a Samba share using smbclient, but not in Nautilus 3.2.1

    - by Nathan Jones
    I have a machine running Ubuntu 11.10 Server acting as a Samba server to share my home directory. Everything works fine on my Windows 7 machine, but on my Fedora 16 laptop, if I use Nautilus to try to access the share using smb://192.168.0.8/nathan in the location bar, it just has the loading cursor and does nothing. It never shows any errors, nothing. Using smbclient works just fine, but I'd like to get it working in Nautilus. I know that there can be problems with SELinux and Samba, so I created a file called booleans.local that contains samba_enable_home_dirs=1. My smb.conf file looks like this: # For Unix password sync to work on a Debian GNU/Linux system, the following # parameters must be set (thanks to Ian Kahan <<[email protected]> for # sending the correct chat script for the passwd program in Debian Sarge). passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . # This boolean controls whether PAM will be used for password changes # when requested by an SMB client instead of the program listed in # 'passwd program'. The default is 'no'. pam password change = yes # This option controls how unsuccessful authentication attempts are mapped # to anonymous connections map to guest = bad user ########## Domains ########### # Is this machine able to authenticate users. Both PDC and BDC # must have this setting enabled. If you are the BDC you must # change the 'domain master' setting to no # ; domain logons = yes # # The following setting only takes effect if 'domain logons' is set # It specifies the location of the user's profile directory # from the client point of view) # The following required a [profiles] share to be setup on the # samba server (see below) ; logon path = \\%N\profiles\%U # Another common choice is storing the profile in the user's home directory # (this is Samba's default) # logon path = \\%N\%U\profile # The following setting only takes effect if 'domain logons' is set # It specifies the location of a user's home directory (from the client # point of view) ; logon drive = H: # logon home = \\%N\%U # The following setting only takes effect if 'domain logons' is set # It specifies the script to run during logon. The script must be stored # in the [netlogon] share # NOTE: Must be store in 'DOS' file format convention ; logon script = logon.cmd # This allows Unix users to be created on the domain controller via the SAMR # RPC pipe. The example command creates a user account with a disabled Unix # password; please adapt to your needs ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u # This allows machine accounts to be created on the domain controller via the # SAMR RPC pipe. # The following assumes a "machines" group exists on the system ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u # This allows Unix groups to be created on the domain controller via the SAMR # RPC pipe. ; add group script = /usr/sbin/addgroup --force-badname %g ########## Printing ########## # If you want to automatically load your printer list rather # than setting them up individually then you'll need this # load printers = yes # lpr(ng) printing. You may wish to override the location of the # printcap file ; printing = bsd ; printcap name = /etc/printcap # CUPS printing. See also the cupsaddsmb(8) manpage in the # cupsys-client package. 
; printing = cups ; printcap name = cups ############ Misc ############ # Using the following line enables you to customise your configuration # on a per machine basis. The %m gets replaced with the netbios name # of the machine that is connecting ; include = /home/samba/etc/smb.conf.%m # Most people will find that this option gives better performance. # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html # for details # You may want to add the following on a Linux system: # SO_RCVBUF=8192 SO_SNDBUF=8192 # socket options = TCP_NODELAY # The following parameter is useful only if you have the linpopup package # installed. The samba maintainer and the linpopup maintainer are # working to ease installation and configuration of linpopup and samba. ; message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' & # Domain Master specifies Samba to be the Domain Master Browser. If this # machine will be configured as a BDC (a secondary logon server), you # must set this to 'no'; otherwise, the default behavior is recommended. # domain master = auto # Some defaults for winbind (make sure you're not using the ranges # for something else.) ; idmap uid = 10000-20000 ; idmap gid = 10000-20000 ; template shell = /bin/bash # The following was the default behaviour in sarge, # but samba upstream reverted the default because it might induce # performance issues in large organizations. # See Debian bug #368251 for some of the consequences of *not* # having this setting and smb.conf(5) for details. ; winbind enum groups = yes ; winbind enum users = yes # Setup usershare options to enable non-root users to share folders # with the net usershare command. # Maximum number of usershare. 0 (default) means that usershare is disabled. ; usershare max shares = 100 # Allow users who've been granted usershare privileges to create # public shares, not just authenticated ones usershare allow guests = yes #======================= Share Definitions ======================= # Un-comment the following (and tweak the other settings below to suit) # to enable the default home directory shares. This will share each # user's home director as \\server\username [homes] comment = Home Directories browseable = yes # By default, the home directories are exported read-only. Change the # next parameter to 'no' if you want to be able to write to them. read only = no # File creation mask is set to 0700 for security reasons. If you want to # create files with group=rw permissions, set next parameter to 0775. ; create mask = 0775 # Directory creation mask is set to 0700 for security reasons. If you want to # create dirs. with group=rw permissions, set next parameter to 0775. ; directory mask = 0775 # By default, \\server\username shares can be connected to by anyone # with access to the samba server. Un-comment the following parameter # to make sure that only "username" can connect to \\server\username # The following parameter makes sure that only "username" can connect # # This might need tweaking when using external authentication schemes valid users = %S # Un-comment the following and create the netlogon directory for Domain Logons # (you need to configure Samba to act as a domain controller too.) ;[netlogon] ; comment = Network Logon Service ; path = /home/samba/netlogon ; guest ok = yes ; read only = yes # Un-comment the following and create the profiles directory to store # users profiles (see the "logon path" option above) # (you need to configure Samba to act as a domain controller too.) 
# The path below should be writable by all users so that their # profile directory may be created the first time they log on ;[profiles] ; comment = Users profiles ; path = /home/samba/profiles ; guest ok = no ; browseable = no ; create mask = 0600 ; directory mask = 0700 [printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = no create mask = 0700 # Windows clients look for this share name as a source of downloadable # printer drivers [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = no # Uncomment to allow remote administration of Windows print drivers. # You may need to replace 'lpadmin' with the name of the group your # admin users are members of. # Please note that you also need to set appropriate Unix permissions # to the drivers directory for these users to have write rights in it ; write list = root, @lpadmin # A sample share for sharing your CD-ROM with others. ;[cdrom] ; comment = Samba server's CD-ROM ; read only = yes ; locking = no ; path = /cdrom ; guest ok = yes # The next two parameters show how to auto-mount a CD-ROM when the # cdrom share is accesed. For this to work /etc/fstab must contain # an entry like this: # # /dev/scd0 /cdrom iso9660 defaults,noauto,ro,user 0 0 # # The CD-ROM gets unmounted automatically after the connection to the # # If you don't want to use auto-mounting/unmounting make sure the CD # is mounted on /cdrom # ; preexec = /bin/mount /cdrom ; postexec = /bin/umount /cdrom smbusers: <nathan> = <"nathan"> Any help would be very much appreciated! Thanks!
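
    A way to narrow this down is to perform the same SMB mount Nautilus does, but on the command line, where gvfs actually prints its error instead of spinning. smbclient talks to the server directly, while Nautilus goes through gvfs, so when only Nautilus hangs the fault is usually on the gvfs side or in name resolution rather than in smb.conf. A sketch, assuming a Fedora 16-era gvfs and the server address from the question:

        # run the exact mount Nautilus attempts; errors are printed here
        gvfs-mount smb://192.168.0.8/nathan

        # meanwhile, watch the server side on the Ubuntu box
        # (the log path varies with the "log file" setting in smb.conf)
        sudo tail -f /var/log/samba/log.smbd

    On the SELinux point: a booleans.local file only matters if something loads it; the usual way to set a boolean persistently is setsebool -P samba_enable_home_dirs on. Note, too, that the boolean applies on the machine sharing the directories - here the Ubuntu server, where SELinux is not normally enabled anyway - not on the Fedora client doing the mounting.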


  • I need advice: a small-memory-footprint Linux mail server with spam filtering

    - by petermolnar
    I have a VPS that was originally intended to be a webserver, but some minimal mail capabilities need to be deployed as well, including sending and receiving as a standalone server. The current setup is the following:

    - Postfix receives the mail; the users are in virtual tables, stored in MySQL
    - on connection, all servers are tested with the policyd-weight service against some DNSBLs
    - all mail runs through SpamAssassin's spamd with the help of the spamc client
    - the mail is then delivered with Dovecot 2's LDA (local delivery agent), with virtual users as well

    As you saw, there's no virus scanner running, and that's for a reason: ClamAV eats all the memory possible, and virus mails are all filtered out with this setup anyway (I tested the same setup with ClamAV enabled for 1.5 years; no virus mail ever even got to ClamAV). I don't use amavisd and I really don't want to. You only need that monster if you have plenty of memory and lots of simultaneous scanners, and it's also a nightmare to fine-tune by hand. I run policyd-weight instead of policyd and native DNSBLs in Postfix, because I don't like to send someone away just because a single service listed them.

    Important statement: everything works fine. I receive a very small amount of spam, nearly never get a false positive, and most of the bad mail is stopped by policyd-weight. The only "problem" is that I feel the services use a bit much memory altogether. I've already cut the modules of SpamAssassin (see below), but I'd really like to hear some advice on how to cut the memory footprint as low as possible, mostly: which plugins does SpamAssassin really need, and which are more or less useless, with regard to my current Postfix & policyd-weight setup? SpamAssassin rules are also compiled with sa-compile (sa-update runs once a week from cron; compile runs right after that).

    These are some of the current configurations that may matter; please tell me if you need anything more.

    postfix/master.cf (parts only)

        dovecot unix - n n - - pipe
          flags=DRhu user=vmail:vmail argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender}

    postfix/main.cf (parts only)

        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, reject_invalid_hostname, permit
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_invalid_hostname,
            reject_non_fqdn_hostname, reject_non_fqdn_recipient, reject_unknown_recipient_domain,
            reject_unauth_pipelining, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525, permit

    policyd-weight.conf (parts only)

        $REJECTMSG = "550 Mail appeared to be SPAM or forged. Ask your Mail/DNS-Administrator to correct HELO and DNS MX settings or to get removed from DNSBLs";
        $REJECTLEVEL = 4;
        $DEFER_STRING = 'IN_SPAMCOP= BOGUS_MX=';
        $DEFER_ACTION = '450';
        $DEFER_LEVEL = 5;
        $DNSERRMSG = '450 No DNS entries for your MTA, HELO and Domain.
Contact YOUR administrator'; # 1: ON, 0: OFF (default) # If ON request that ALL clients are only checked against RBLs $dnsbl_checks_only = 0; # 1: ON (default), 0: OFF # When set to ON it logs only RBLs which affect scoring (positive or negative) $LOG_BAD_RBL_ONLY = 1; ## DNSBL settings @dnsbl_score = ( # host, hit, miss, log name 'dnsbl.ahbl.org', 3, -1, 'dnsbl.ahbl.org', 'dnsbl.njabl.org', 3, -1, 'dnsbl.njabl.org', 'dnsbl.sorbs.net', 3, -1, 'dnsbl.sorbs.net', 'bl.spamcop.net', 3, -1, 'bl.spamcop.net', 'zen.spamhaus.org', 3, -1, 'zen.spamhaus.org', 'pbl.spamhaus.org', 3, -1, 'pbl.spamhaus.org', 'cbl.abuseat.org', 3, -1, 'cbl.abuseat.org', 'list.dsbl.org', 3, -1, 'list.dsbl.org', ); # If Client IP is listed in MORE DNSBLS than this var, it gets REJECTed immediately $MAXDNSBLHITS = 3; # alternatively, if the score of DNSBLs is ABOVE this level, reject immediately $MAXDNSBLSCORE = 9; $MAXDNSBLMSG = '550 Az levelezoszerveruk IP cime tul sok spamlistan talahato, kerjuk ellenorizze! / Your MTA is listed in too many DNSBLs; please check.'; ## RHSBL settings @rhsbl_score = ( 'multi.surbl.org', 4, 0, 'multi.surbl.org', 'rhsbl.ahbl.org', 4, 0, 'rhsbl.ahbl.org', 'dsn.rfc-ignorant.org', 4, 0, 'dsn.rfc-ignorant.org', # 'postmaster.rfc-ignorant.org', 0.1, 0, 'postmaster.rfc-ignorant.org', # 'abuse.rfc-ignorant.org', 0.1, 0, 'abuse.rfc-ignorant.org' ); # skip a RBL if this RBL had this many continuous errors $BL_ERROR_SKIP = 2; # skip a RBL for that many times $BL_SKIP_RELEASE = 10; ## cache stuff # must be a directory (add trailing slash) $LOCKPATH = '/var/run/policyd-weight/'; # socket path for the cache daemon. $SPATH = $LOCKPATH.'/polw.sock'; # how many seconds the cache may be idle before starting maintenance routines #NOTE: standard maintenance jobs happen regardless of this setting. $MAXIDLECACHE = 60; # after this number of requests do following maintenance jobs: checking for config changes $MAINTENANCE_LEVEL = 5; # negative (i.e. SPAM) result cache settings ################################## # set to 0 to disable caching for spam results. To this level the cache will be cleaned. $CACHESIZE = 2000; # at this number of entries cleanup takes place $CACHEMAXSIZE = 4000; $CACHEREJECTMSG = '550 temporarily blocked because of previous errors'; # after NTTL retries the cache entry is deleted $NTTL = 1; # client MUST NOT retry within this seconds in order to decrease TTL counter $NTIME = 30; # positve (i.,e. HAM) result cache settings ################################### # set to 0 to disable caching of HAM. To this number of entries the cache will be cleaned $POSCACHESIZE = 1000; # at this number of entries cleanup takes place $POSCACHEMAXSIZE = 2000; $POSCACHEMSG = 'using cached result'; #after PTTL requests the HAM entry must succeed one time the RBL checks again $PTTL = 60; # after $PTIME in HAM Cache the client must pass one time the RBL checks again. #Values must be nonfractal. Accepted time-units: s, m, h, d $PTIME = '3h'; # The client must pass this time the RBL checks in order to be listed as hard-HAM # After this time the client will pass immediately for PTTL within PTIME $TEMP_PTIME = '1d'; ## DNS settings # Retries for ONE DNS-Lookup $DNS_RETRIES = 1; # Retry-interval for ONE DNS-Lookup $DNS_RETRY_IVAL = 5; # max error count for unresponded queries in a complete policy query $MAXDNSERR = 3; $MAXDNSERRMSG = 'passed - too many local DNS-errors'; # persistent udp connection for DNS queries. #broken in Net::DNS version 0.51. 
Works with Net::DNS 0.53; DEFAULT: off $PUDP= 0; # Force the usage of Net::DNS for RBL lookups. # Normally policyd-weight tries to use a faster RBL lookup routine instead of Net::DNS $USE_NET_DNS = 0; # A list of space separated NS IPs # This overrides resolv.conf settings # Example: $NS = '1.2.3.4 1.2.3.5'; # DEFAULT: empty $NS = ''; # timeout for receiving from cache instance $IPC_TIMEOUT = 2; # If set to 1 policyd-weight closes connections to smtpd clients in order to avoid too many #established connections to one policyd-weight child $TRY_BALANCE = 0; # scores for checks, WARNING: they may manipulate eachother # or be factors for other scores. # HIT score, MISS Score @client_ip_eq_helo_score = (1.5, -1.25 ); @helo_score = (1.5, -2 ); @helo_score = (0, -2 ); @helo_from_mx_eq_ip_score= (1.5, -3.1 ); @helo_numeric_score= (2.5, 0 ); @from_match_regex_verified_helo= (1,-2 ); @from_match_regex_unverified_helo = (1.6, -1.5 ); @from_match_regex_failed_helo = (2.5, 0 ); @helo_seems_dialup = (1.5, 0 ); @failed_helo_seems_dialup= (2, 0 ); @helo_ip_in_client_subnet= (0,-1.2 ); @helo_ip_in_cl16_subnet = (0,-0.41 ); #@client_seems_dialup_score = (3.75, 0 ); @client_seems_dialup_score = (0, 0 ); @from_multiparted = (1.09, 0 ); @from_anon= (1.17, 0 ); @bogus_mx_score = (2.1, 0 ); @random_sender_score = (0.25, 0 ); @rhsbl_penalty_score = (3.1, 0 ); @enforce_dyndns_score = (3, 0 ); spamassassin/init.pre (I've put the .pre files together) loadplugin Mail::SpamAssassin::Plugin::Hashcash loadplugin Mail::SpamAssassin::Plugin::SPF loadplugin Mail::SpamAssassin::Plugin::Pyzor loadplugin Mail::SpamAssassin::Plugin::Razor2 loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold loadplugin Mail::SpamAssassin::Plugin::MIMEHeader loadplugin Mail::SpamAssassin::Plugin::ReplaceTags loadplugin Mail::SpamAssassin::Plugin::Check loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch loadplugin Mail::SpamAssassin::Plugin::URIDetail loadplugin Mail::SpamAssassin::Plugin::Bayes loadplugin Mail::SpamAssassin::Plugin::BodyEval loadplugin Mail::SpamAssassin::Plugin::DNSEval loadplugin Mail::SpamAssassin::Plugin::HTMLEval loadplugin Mail::SpamAssassin::Plugin::HeaderEval loadplugin Mail::SpamAssassin::Plugin::MIMEEval loadplugin Mail::SpamAssassin::Plugin::RelayEval loadplugin Mail::SpamAssassin::Plugin::URIEval loadplugin Mail::SpamAssassin::Plugin::WLBLEval loadplugin Mail::SpamAssassin::Plugin::VBounce loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody spamassassin/local.cf (parts) use_bayes 1 bayes_auto_learn 1 bayes_store_module Mail::SpamAssassin::BayesStore::MySQL bayes_sql_dsn DBI:mysql:db:127.0.0.1:3306 bayes_sql_username user bayes_sql_password pass bayes_ignore_header X-Bogosity bayes_ignore_header X-Spam-Flag bayes_ignore_header X-Spam-Status ### User settings user_scores_dsn DBI:mysql:db:127.0.0.1:3306 user_scores_sql_password user user_scores_sql_username pass user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC # for better speed score DNS_FROM_AHBL_RHSBL 0 score __RFC_IGNORANT_ENVFROM 0 score DNS_FROM_RFC_DSN 0 score DNS_FROM_RFC_BOGUSMX 0 score __DNS_FROM_RFC_POST 0 score __DNS_FROM_RFC_ABUSE 0 score __DNS_FROM_RFC_WHOIS 0 UPDATE 01 As adaptr advised I remove policyd-weight and configured postfix postscreen, this resulted approximately -15-20 MB from RAM usage and a lot faster work. I'm not sure it's working at full capacity but it seems promising.
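
    For anyone repeating the postscreen swap from UPDATE 01, the minimal equivalent of policyd-weight's weighted DNSBL scoring looks roughly like this - a sketch assuming Postfix 2.8 or later, with weights carried over from the policyd-weight config above for illustration rather than as a recommendation:

        # main.cf -- postscreen takes over the pre-queue DNSBL scoring
        postscreen_access_list = permit_mynetworks
        postscreen_dnsbl_sites = zen.spamhaus.org*3, bl.spamcop.net*2, dnsbl.sorbs.net*2
        postscreen_dnsbl_threshold = 3
        postscreen_dnsbl_action = enforce
        postscreen_greet_action = enforce

        # master.cf -- postscreen fronts the smtp service
        smtp      inet  n       -       -       -       1       postscreen
        smtpd     pass  -       -       -       -       -       smtpd
        dnsblog   unix  -       -       -       -       0       dnsblog
        tlsproxy  unix  -       -       -       -       0       tlsproxy

    Unlike policyd-weight, postscreen is a single C daemon with no per-client Perl state, which is presumably where most of the observed memory saving comes from.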


  • Inbound SIP calls through Cisco 881 NAT hang up after a few seconds

    - by MasterRoot24
    I've recently moved to a Cisco 881 router for my WAN link. I was previously using a Cisco Linksys WAG320N as my modem/router/WiFi AP/NAT firewall. The WAG320N is now running in bridged mode, so it's simply acting as a modem with one of its LAN ports connected to FE4 WAN on my Cisco 881. The Cisco 881 gets a DHCP-provided IP from my ISP. My LAN is part of default Vlan 1 (192.168.1.0/24). General internet connectivity is working great, and I've set up static NAT rules for my HTTP/HTTPS/SMTP/etc. services running on the LAN. It's probably worth mentioning that I've opted for NVI NAT (ip nat enable, as opposed to the traditional ip nat outside/ip nat inside setup). My reason is that NVI allows NAT loopback from my LAN to the WAN IP and back in to the necessary server on the LAN.

    I run an Asterisk 1.8 PBX on my LAN, which connects to a SIP provider on the internet. Both inbound and outbound calls through the old setup (WAG320N providing routing/NAT) worked fine. However, since moving to the Cisco 881, inbound calls drop after around 10 seconds, whereas outbound calls work fine. The following message is logged on my Asterisk PBX:

        [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3641 retrans_pkt: Retransmission timeout reached on transmission [email protected] for seqno 1 (Critical Response) -- See https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions
        Packet timed out after 6528ms with no response
        [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3670 retrans_pkt: Hanging up call [email protected] - no reply to our critical packet (see https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions).

    (I know this is quite a common issue - I've spent the best part of 2 days solid on this, trawling Google.) I've done as instructed and checked https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions. Referring to the section "Other SIP requests" on the page linked above, I believe the hangup is caused by the ACK from my SIP provider not being passed back through NAT to Asterisk on my PBX. I tried to ascertain this by dumping the packets on the 881's WAN interface and managed to obtain a PCAP capture of packets in/out of that interface. Here's an example of an ACK being received by the router from my provider:

        689 21.219999 193.x.x.x 188.x.x.x SIP 502 Request: ACK sip:[email protected] |

    However, a SIP trace on the Asterisk server shows that no ACKs are received in response to the 200 OK from my PBX: http://pastebin.com/wwHpLPPz

    In the past, I have been strongly advised to disable any sort of SIP ALG on routers and/or firewalls, and the many posts about this issue on the internet seem to support that. On Cisco IOS, I believe the config command to disable the SIP ALG is no ip nat service sip udp port 5060; however, this doesn't appear to help. To confirm that the setting is applied:

        Router1#show running-config | include sip
        no ip nat service sip udp port 5060

    Another interesting twist: for a short period of time, I tried another provider. Luckily, my trial account with them is still available, so I reverted my Asterisk config to the revision from before I integrated with my current provider. I then dialled in to the DDI associated with the trial trunk, the call didn't get hung up, and I didn't get the error above! To me, this points at the provider; however, I know that, like all providers, they will say "There's no issue with our SIP proxies - it's your firewall."
I'm tempted to agree with this, as this issue was not apparent with the old WAG320N router when it was doing the NAT'ing. I'm sure you'll want to see my running-config too: ! ! Last configuration change at 15:55:07 UTC Sun Dec 9 2012 by xxx version 15.2 no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug datetime msec localtime show-timezone service timestamps log datetime msec localtime show-timezone no service password-encryption service sequence-numbers ! hostname Router1 ! boot-start-marker boot-end-marker ! ! security authentication failure rate 10 log security passwords min-length 6 logging buffered 4096 logging console critical enable secret 4 xxx ! aaa new-model ! ! aaa authentication login local_auth local ! ! ! ! ! aaa session-id common ! memory-size iomem 10 ! crypto pki trustpoint TP-self-signed-xxx enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-xxx revocation-check none rsakeypair TP-self-signed-xxx ! ! crypto pki certificate chain TP-self-signed-xxx certificate self-signed 01 quit no ip source-route no ip gratuitous-arps ip auth-proxy max-login-attempts 5 ip admission max-login-attempts 5 ! ! ! ! ! no ip bootp server ip domain name dmz.merlin.local ip domain list dmz.merlin.local ip domain list merlin.local ip name-server x.x.x.x ip inspect audit-trail ip inspect udp idle-time 1800 ip inspect dns-timeout 7 ip inspect tcp idle-time 14400 ip inspect name autosec_inspect ftp timeout 3600 ip inspect name autosec_inspect http timeout 3600 ip inspect name autosec_inspect rcmd timeout 3600 ip inspect name autosec_inspect realaudio timeout 3600 ip inspect name autosec_inspect smtp timeout 3600 ip inspect name autosec_inspect tftp timeout 30 ip inspect name autosec_inspect udp timeout 15 ip inspect name autosec_inspect tcp timeout 3600 ip cef login block-for 3 attempts 3 within 3 no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO881-SEC-K9 sn ! ! username xxx privilege 15 secret 4 xxx username xxx secret 4 xxx ! ! ! ! ! ip ssh time-out 60 ! ! ! ! ! ! ! ! ! interface FastEthernet0 no ip address ! interface FastEthernet1 no ip address ! interface FastEthernet2 no ip address ! interface FastEthernet3 switchport access vlan 2 no ip address ! interface FastEthernet4 ip address dhcp no ip redirects no ip unreachables no ip proxy-arp ip nat enable duplex auto speed auto ! interface Vlan1 ip address 192.168.1.1 255.255.255.0 no ip redirects no ip unreachables no ip proxy-arp ip nat enable ! interface Vlan2 ip address 192.168.0.2 255.255.255.0 ! ip forward-protocol nd ip http server ip http access-class 1 ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ! ! no ip nat service sip udp port 5060 ip nat source list 1 interface FastEthernet4 overload ip nat source static tcp x.x.x.x 80 interface FastEthernet4 80 ip nat source static tcp x.x.x.x 443 interface FastEthernet4 443 ip nat source static tcp x.x.x.x 25 interface FastEthernet4 25 ip nat source static tcp x.x.x.x 587 interface FastEthernet4 587 ip nat source static tcp x.x.x.x 143 interface FastEthernet4 143 ip nat source static tcp x.x.x.x 993 interface FastEthernet4 993 ip nat source static tcp x.x.x.x 1723 interface FastEthernet4 1723 ! ! logging trap debugging logging facility local2 access-list 1 permit 192.168.1.0 0.0.0.255 access-list 1 permit 192.168.0.0 0.0.0.255 no cdp run ! ! ! ! control-plane ! ! banner motd Authorized Access only ! 
line con 0 login authentication local_auth length 0 transport output all line aux 0 exec-timeout 15 0 login authentication local_auth transport output all line vty 0 1 access-class 1 in logging synchronous login authentication local_auth length 0 transport preferred none transport input telnet transport output all line vty 2 4 access-class 1 in login authentication local_auth length 0 transport input ssh transport output all ! ! end ...and, if it's of any use, here's my Asterisk SIP config: [general] context=default ; Default context for calls allowoverlap=no ; Disable overlap dialing support. (Default is yes) udpbindaddr=0.0.0.0 ; IP address to bind UDP listen socket to (0.0.0.0 binds to all) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) tcpenable=no ; Enable server for incoming TCP connections (default is no) tcpbindaddr=0.0.0.0 ; IP address for TCP server to bind to (0.0.0.0 binds to all interfaces) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) srvlookup=yes ; Enable DNS SRV lookups on outbound calls ; Note: Asterisk only uses the first host ; in SRV records ; Disabling DNS SRV lookups disables the ; ability to place SIP calls based on domain ; names to some other SIP users on the Internet ; Specifying a port in a SIP peer definition or ; when dialing outbound calls will supress SRV ; lookups for that peer or call. directmedia=no ; Don't allow direct RTP media between extensions (doesn't work through NAT) externhost=<MY DYNDNS HOSTNAME> ; Our external hostname to resolve to IP and be used in NAT'ed packets localnet=192.168.1.0/24 ; Define our local network so we know which packets need NAT'ing qualify=yes ; Qualify peers by default dtmfmode=rfc2833 ; Set the default DTMF mode disallow=all ; Disallow all codecs by default allow=ulaw ; Allow G.711 u-law allow=alaw ; Allow G.711 a-law ; ---------------------- ; SIP Trunk Registration ; ---------------------- ; Orbtalk register => <MY SIP PROVIDER USER NAME>:[email protected]/<MY DDI> ; Main Orbtalk number ; ---------- ; Trunks ; ---------- [orbtalk] ; Main Orbtalk trunk type=peer insecure=invite host=sipgw3.orbtalk.co.uk nat=yes username=<MY SIP PROVIDER USER NAME> defaultuser=<MY SIP PROVIDER USER NAME> fromuser=<MY SIP PROVIDER USER NAME> secret=xxx context=inbound I really don't know where to go with this. If anyone can help me find out why these calls are being dropped off, I'd be grateful if you could chime in! Please let me know if any further info is required.
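
    Two cheap NAT-side experiments before settling on the provider theory, sketched against the NVI config above (192.168.1.250 stands in for the PBX's LAN address - substitute your own): pin UDP 5060 to the PBX with a static entry, so the provider's mid-dialog requests still reach Asterisk even when the dynamic translation has been torn down or re-mapped, and stretch the UDP translation timeout past the SIP registration interval:

        ! hypothetical address; mirrors the existing static TCP entries in the config above
        ip nat source static udp 192.168.1.250 5060 interface FastEthernet4 5060
        ! the default UDP translation timeout is 300 seconds
        ip nat translation udp-timeout 3600

    If the static entry fixes inbound calls, the dropped ACKs were arriving for a translation that no longer matched - which could also explain why the trial provider, with different timing, happened to work.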


  • How does one get rid of fishy behavior in Windows?

    - by Tom Wijsman
    After I booted my computer this morning, water suddenly flooded from the top of the screen, after which some fishes dropped into it. Now I can barely see what I am doing because the water distorts the view. Sometimes the fish follow the cursor, so I need to move it away or wait for the fish to mind their own business. This makes it very annoying to use my system. What have I tried?

    - Rebooting the system. This caused the water to deplete from the desktop. Upon reboot, the screen was refilled with water and fishes.
    - Attaching another monitor. Same problem; it fills that monitor as well and gives me extra fish.
    - Clicking the fish. Makes them turn direction.
    - Right-clicking the fish. Changes the color of the fish; not really useful.
    - I'm locked out of changing the background or screen saver settings. Hence, I had to post the lady below...
    - Safe mode doesn't save me from the fishes. It does give me another background there, but I can't screenshot easily.
    - Other user accounts experience this as well. The Guest account seems to experience more fish than the other accounts.
    - Using HijackThis, OTL Timekeeper List, Sysinternals Autoruns, RootkitRevealer, ShellExView and similar tools, I can't seem to find any entries that could be it; the Sysinternals tools show everything as verified.
    - I suspect this to be a driver problem, but randomly removing drivers doesn't seem to alleviate it. Removing the graphics drivers makes my screen black. While that could be considered a solution, it's not what I want.
    - Changing the time/date settings does not seem to affect the fishes. Setting the time a few years into the future, I would have expected the fishes to be dead. But the same fishes are still there... They simply won't die!
    - Tried to get used to them. They are really bothering me; it looks like they require food. I don't know how to give them food, but apparently they get it elsewhere during reboot...
    - Tried disabling my mouse pointer and using the keyboard. This works; they now swim around more randomly. They do put their attention to huge changes on the screen, so I need to type slowly, or otherwise I can't see what I'm typing exactly.
    - Held my laptop upside down. This seems to affect the water and fishes, but the water stays in the screen. They seem super resistant against water sickness and confusion, though...

    What does the problem look like? What do I need? A way to get rid of these fishes on my screen forever; they are really annoying me a lot, and I'm about to crack the screen to see if that makes them escape. Do you have any idea why this problem is occurring?

    What are my considerations? Buying a USB fish tank could make the fish leave the screen; I am uncertain, though, whether the fish could leave the screen through the USB cable. Using the FISh (programming language), which seems to provide EXPRESSIVE POWER and EFFICIENT EXECUTION; I can, however, not find any examples of how to remove fish.

    What are my specifications? I'm using a Sony Vaio Fishy laptop. Sony VAIO VGN-Fishy, VAIO. Processor: 1337 MHz, Intel Core 2 Duo, T5432, 1 MB, Intel PM965 Express, 667 MHz. Memory: 1024 MB, DDR2-SDRAM, 667 MHz, 2 x 1024 MB, 4 GB. Disk Drive: 50 GB, Serial ATA, 5400 RPM. Storage Media: Memory Stick™, Memory Stick PRO™. Display: 15.4", 1280 x 800 pixels, LCD. Video: GeForce 8400M GT, 128 MB. Optical Drive: DVD±R/RW DL, 24 x, 24 x, 24 x, 6 x, 4 x, 6 x, 4 x, 5 x, 5 x, 8 x, 8 x, 8 x, 8 x, 6 x, 6 x, 24 x, 24 x, 24 x, 16 x. Camera: 1.3 MP, 30 fps. Networking: 2.0+EDR. Keyboard: Touchpad, AZERTY. Operating System/Software: Windows Vista Home Premium. Security: Kensington. Weight & Dimensions: 98.8 oz (2800 g), 14" (355.8 mm), 10" (254.4 mm), 0.98" (24.9 mm). Other features: 100 BASE-TX/10 BASE-T, 802.11a/b/g/n/Draft n, V92/V.90, fishes. Plz! Help me...


  • How to configure FastCGI to work with lighttpd on Ubuntu

    - by michael
    I am able to run lighttpd on Ubuntu 9.10, but when I tried to set up FastCGI with lighttpd by putting this in the lighttpd.conf file:

        #### fastcgi module
        fastcgi.server = ( "/fastcgi_scripts/" =>
          (( "host" => "127.0.0.1",
             "port" => "9098",
             "check-local" => "disable",
             "bin-path" => "/usr/local/bin/cgi-fcgi",
             "docroot" => "/"   # remote server may use
                                # it's own docroot
          ))
        )

    this is what I get in lighttpd's error.log:

        2010-03-07 21:00:11: (log.c.166) server started
        2010-03-07 21:00:11: (mod_fastcgi.c.1104) the fastcgi-backend /usr/local/bin/cgi-fcgi failed to start:
        2010-03-07 21:00:11: (mod_fastcgi.c.1108) child exited with status 1 /usr/local/bin/cgi-fcgi
        2010-03-07 21:00:11: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2010-03-07 21:00:11: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
        2010-03-07 21:00:11: (server.c.931) Configuration of plugins failed. Going down.

    I do have cgi-fcgi in /usr/local/bin:

        $ which cgi-fcgi
        /usr/local/bin/cgi-fcgi

    /usr/local/bin/cgi-fcgi is the executable produced after I downloaded and compiled FastCGI. Here is my lighttpd conf file:

        $ more lighttpd.conf
        # lighttpd configuration file
        #
        # use it as a base for lighttpd 1.0.0 and above
        #
        # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $

        ############ Options you really have to take care of ####################

        ## modules to load
        # at least mod_access and mod_accesslog should be loaded
        # all other module should only be loaded if really neccesary
        # - saves some time
        # - saves memory
        server.modules = (
        # "mod_rewrite",
        # "mod_redirect",
        # "mod_alias",
          "mod_access",
        # "mod_trigger_b4_dl",
        # "mod_auth",
        # "mod_status",
        # "mod_setenv",
          "mod_fastcgi",
        # "mod_proxy",
        # "mod_simple_vhost",
        # "mod_evhost",
        # "mod_userdir",
        # "mod_cgi",
        # "mod_compress",
        # "mod_ssi",
        # "mod_usertrack",
        # "mod_expire",
        # "mod_secdownload",
        # "mod_rrdtool",
          "mod_accesslog" )

        ## A static document-root. For virtual hosting take a look at the
        ## mod_simple_vhost module.
server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you for your help.

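    A likely culprit: cgi-fcgi from the FastCGI developer's kit is a command-line client for talking to FastCGI servers, not a FastCGI application itself, so when lighttpd spawns it, it exits immediately, which is the "child exited with status 1" above. bin-path has to name a program that itself speaks the FastCGI protocol. A minimal sketch of the stanza under that assumption; /usr/local/bin/my-app.fcgi is a hypothetical FastCGI-enabled binary, not something from the question:

        #### fastcgi module
        fastcgi.server = ( "/fastcgi_scripts/" =>
            (( "host"        => "127.0.0.1",
               "port"        => 9098,    # an integer, as in the stanza further down
               "check-local" => "disable",
               # must be a FastCGI-enabled program; the cgi-fcgi client tool is not
               "bin-path"    => "/usr/local/bin/my-app.fcgi"
            ))
        )
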
    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */} error: function (data) { /* handle errors */} }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since the requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, freqently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?
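
    One pattern that produces exactly this serialization is a single-process PHP FastCGI backend: every PHP request in both server blocks is passed to the one listener on 127.0.0.1:9100, so while the long-running request holds that process, the polling requests simply queue behind it, no matter what the browser sends. (PHP's file-based session locking has the same effect per session; calling session_write_close() early in the long request releases the lock. Incidentally, the first snippet is also missing a comma between the success and error callbacks.) A sketch of one workaround, assuming several php-cgi processes are started by hand, e.g. "php-cgi.exe -b 127.0.0.1:9101" and ":9102"; the upstream block goes at the http level:

        # hypothetical pool of PHP FastCGI workers
        upstream php_backend {
            server 127.0.0.1:9101;
            server 127.0.0.1:9102;
        }

        # then, in each "location ~ \.php$" block:
        fastcgi_pass php_backend;   # instead of 127.0.0.1:9100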

    Read the article

  • High-load MySQL on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory running apache2, memcached and nginx. Memory usage is always at the maximum, with only 500 MB free, and MySQL accounts for most of the consumption. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why I use InnoDB. Here is my MySQL config: [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please help me make it stable. Memory usage: /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem is not memory, but MySQL still stops every day. As you can see, about 24 GB sits in the cache. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe an HDD or another problem? Maybe my config is not optimal for 30 GB of InnoDB data? I have already tried mysqltuner and tuning-primer.sh, but they marked everything green. Mysqltuner output: mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!]
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'
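
    A rough memory budget explains the daily stop: total consumption is roughly global buffers plus max_connections times the per-thread buffers, and the tuners above report 10.5G global plus 81.1M per thread across 200 connections; with tmp_table_size and max_heap_table_size near 3G apiece, a handful of concurrent in-memory temporary tables can blow far past the 24 GB plan. A hedged my.cnf sketch; the values are illustrative starting points, not prescriptions:

        [mysqld]
        # per-connection / per-operation buffers multiply under load
        sort_buffer_size        = 2M      # was 64M
        join_buffer_size        = 1M      # was 5M
        tmp_table_size          = 256M    # was 3000M
        max_heap_table_size     = 256M    # was 3000M
        max_connections         = 250     # usage hit 100% of the 200 allowed
        # shift memory toward InnoDB, where the 39 GB of hot data lives
        innodb_buffer_pool_size = 12G     # was 5000M; leave room for the MyISAM key buffer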

    Read the article

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: Portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com, asking here at another user's suggestion. I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require Admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options; all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual license, but I'm more interested in finding the right tool than anything else. Not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET. Requirements: Manageable footprint (1-100 files, under 30 MB installed) Run on Windows XP and Server (2003+) No installer (exe, msi) Works with external pipes, processes, and files Support for MS SQL Server or ODBC connections Bonus Points: Open Source FFI for calling functions in native DLLs GUI support (native or gtk, wx, fltk, etc) Linux, AIX, and/or OS X support Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development Able to package or compile scripts into executables So far I've tried: Ruby: 148 MB on disk, 23000 files Portable Python: 54 MB on disk, 2800 files Strawberry Perl: 123 MB on disk, 3600 files REBOL: Great, except closed source and no MSSQL or ODBC in free version Squeak Smalltalk: Great, except poor support for scripting ---- cut: points of clarification ---- Why all the limitations? I realize some of my criteria seem arbitrarily confining. It's primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine; I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, suddenly it becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and get the problem solved. That's why I want something that can be copied and run in the manner of a portable app. What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links.
Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine I've never logged in on. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • C# SOCKS proxy service for HTTP requests

    - by Ed
    I'm trying to build a service that will forward HTTP requests from agents like a browser to the Tor service. Problem is, the Tor service only accepts SOCKS4a connections. So my solution is to listen for HTTP requests, get the URL they're requesting, and make a request via Tor with the help of the Starksoft.Net.Proxy library. Then return the response. The library kind of works, but I'm not happy. It returns HTTP headers with the response and it can't handle images. So the responses are messed up. How could I improve my code? I'm very new to network programming. Sorry for the long example. public AnonymiserService(ILogger logger) { try { _logger = logger; _logger.Log("Listening on port {0}...", Properties.Settings.Default.ListeningPort); StartListener(new string[] { string.Format("http://*:{0}/", Properties.Settings.Default.ListeningPort) }); } catch (Exception ex) { _logger.LogError("Exception!", ex); } } private void StartListener(string[] prefixes) { if (!HttpListener.IsSupported) { _logger.LogError("HttpListener isn't supported on this machine!"); return; } HttpListener listener = new HttpListener(); foreach (string s in prefixes) listener.Prefixes.Add(s); while (true) { listener.Start(); IAsyncResult result = listener.BeginGetContext(new AsyncCallback(ListenerCallback), listener); result.AsyncWaitHandle.WaitOne(); } } private void ListenerCallback(IAsyncResult result) { try { // Get HTTP request HttpListener listener = (HttpListener)result.AsyncState; HttpListenerContext context = listener.EndGetContext(result); _logger.Log("Retrieving [{0}]", context.Request.RawUrl); // Create connection // Use Tor as proxy IProxyClient proxyClient = new Socks4aProxyClient("localhost", 9050); TcpClient tcpClient = proxyClient.CreateConnection(context.Request.UserHostName, 80); // Create message // Need to set Connection: close to close the connection as soon as it's done byte[] data = Encoding.UTF8.GetBytes(String.Format("GET {0} HTTP/1.1\r\nHost: {1}\r\nConnection: close\r\n\r\n", context.Request.Url.PathAndQuery, context.Request.UserHostName)); // Send message NetworkStream ns = tcpClient.GetStream(); ns.Write(data, 0, data.Length); // Pass on HTTP response HttpListenerResponse responseOut = context.Response; if (ns.CanRead) { byte[] buffer = new byte[32768]; int read = 0; string responseString = string.Empty; // Read response while ((read = ns.Read(buffer, 0, buffer.Length)) > 0) { responseString += Encoding.UTF8.GetString(buffer, 0, read); } // Remove headers if (responseString.IndexOf("HTTP/1.1 200 OK") > -1) responseString = responseString.Substring(responseString.IndexOf("\r\n\r\n")); // Forward response byte[] byteArray = Encoding.UTF8.GetBytes(responseString); responseOut.OutputStream.Write(byteArray, 0, byteArray.Length); } // Close streams responseOut.OutputStream.Close(); ns.Close(); // Close connection tcpClient.Close(); _logger.Log("Retrieved [{0}]", context.Request.RawUrl); } catch (Exception ex) { _logger.LogError("Exception in ListenerCallback!", ex); } }
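
    Most of the image mangling comes from round-tripping the response through Encoding.UTF8: JPEG/PNG bytes are not valid UTF-8, so decoding and re-encoding them is lossy, and concatenating per-read strings also breaks multi-byte sequences that straddle two reads. A byte-safe sketch of the read/forward part under the same Starksoft setup; FindHeaderEnd is a hypothetical helper, not part of any library:

        // Buffer the raw response without ever converting it to a string.
        using (MemoryStream ms = new MemoryStream())
        {
            byte[] buf = new byte[32768];
            int read;
            while ((read = ns.Read(buf, 0, buf.Length)) > 0)
                ms.Write(buf, 0, read);

            byte[] raw = ms.ToArray();
            int bodyStart = FindHeaderEnd(raw);   // index just past the blank line
            responseOut.OutputStream.Write(raw, bodyStart, raw.Length - bodyStart);
        }

        // Hypothetical helper: locate the CRLFCRLF header/body boundary byte-wise.
        private static int FindHeaderEnd(byte[] data)
        {
            for (int i = 0; i + 3 < data.Length; i++)
                if (data[i] == '\r' && data[i + 1] == '\n' &&
                    data[i + 2] == '\r' && data[i + 3] == '\n')
                    return i + 4;
            return 0;   // no header boundary found; forward everything
        }

    A fuller proxy would also copy selected headers (Content-Type in particular) onto responseOut instead of discarding them all, since browsers need them to render images correctly.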

    Read the article

  • Create fixed-length flat file with Java

    - by Leslie
    I have a process that currently runs in a Delphi application that I wrote and I need to convert it to a Java process that will run on our web application. Basically our State Financial (legacy) system requires this file in a specific output. In Delphi it is like this: procedure CreateSHAREJournalFile(AppDate : string; ClassCode : string; BudgetRef : String; AccountNumber : string; FYEStep : integer); var GLFileInfo : TStrings; MPayFormat, HPayFormat, TPayFormat : string; const //this is the fixed length format for each item in the file HeaderFormat = '%-1s%-5s%-10s%-8s%-12s%-10s%-21s%-3s%-71s%-3s%-20s%-1s'; DetailFormat = '%-1s%-5s%-9s%-10s%-10s%-10s%-10s%-8s%-6s%-5s%-5s%-5s%-8s%-25s%-10s%-60s%-28s%-66s%-28s'; begin try //get the data from the query with dmJMS.qryShare do begin SQL.Clear; SQL.Add('SELECT SUM(TOTHRPAY) As HourPay, SUM(TOTMLPAY) As MilePay, SUM(TOTALPAY) AS TotalPay FROM JMPCHECK INNER JOIN JMPMAIN ON JMPCHECK.JURNUM = JMPMAIN.JURNUM WHERE PANELID LIKE ''' + Copy(AppDate, 3, 6) + '%'' '); if FYEStep > -1 then SQL.Add('AND WARRANTNO = ' + QUotedStr(IntToStr(FYEStep))); Active := True; //assign totals to variables so they can be padded with leading zeros MPayFormat := FieldByName('MilePay').AsString; while length(MPayFormat) < 28 do MPayFormat := '0' + MPayFormat; HPayFormat := FieldByName('HourPay').AsString; while length(HPayFormat) < 28 do HPayFormat := '0' + HPayFormat; TPayFormat := Format('%f' ,[(FieldByName('TotalPay').AsCurrency)]); while length(TPayFormat) < 27 do TPayFormat := '0' + TPayFormat; TPayFormat := '-' + TPayFormat; //create a TStringlist to put each line item into GLFileInfo := TStringList.Create; //add header info using HeaderFormat defined above GLFileInfo.Add(Format(HeaderFormat, ['H', '21801', 'NEXT', FormatDateTime('MMDDYYYY', Today), '', 'ACTUALS', '', 'EXT', '', 'EXT', '', 'N'])); //add detail info using DetailFormat defined above GLFileInfo.Add(Format(DetailFormat, ['L', '21801', '1', 'ACTUALS', AccountNumber, '', '1414000000', '111500', '', '01200', ClassCode, '', BudgetRef, '', AccountNumber + '0300', '', MPayFormat, '', MPayFormat])); GLFileInfo.Add(Format(DetailFormat, ['L', '21801', '2', 'ACTUALS', AccountNumber, '', '1414000000', '111500', '', '01200', ClassCode, '', BudgetRef, '', AccountNumber + '0100', '', HPayFormat, '', HPayFormat])); GLFileInfo.Add(Format(DetailFormat, ['L', '21801', '3', 'ACTUALS', '101900', '', '1414000000', '111500', '', '01200', ClassCode, '', BudgetRef, '', '', '', TPayFormat, '', TPayFormat])); //save TStringList to text file GLFileINfo.SaveToFile(ExtractFilePath(Application.ExeName) + 'FileTransfer\GL_' + formatdateTime('mmddyy', Today) + SequenceID + '24400' + '.txt'); end; finally GLFileINfo.Free; end; end; Is there an equivalent in Java for the Format option? Or the TStringList that saves to a text file? Thanks for any information... haven't done a lot of Java programming! Leslie
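
    On the two specific questions: String.format understands the same left-justified, fixed-width %-Ns conversions as Delphi's Format, and a PrintWriter over a FileWriter takes the place of TStringList.SaveToFile. A minimal sketch under those assumptions (the file name and values here are illustrative):

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class JournalFileWriter {
            // same fixed-width layout as the Delphi HeaderFormat constant
            static final String HEADER_FORMAT =
                "%-1s%-5s%-10s%-8s%-12s%-10s%-21s%-3s%-71s%-3s%-20s%-1s";

            public static void main(String[] args) throws IOException {
                String today = new SimpleDateFormat("MMddyyyy").format(new Date());
                try (PrintWriter out = new PrintWriter(new FileWriter("GL_example.txt"))) {
                    out.println(String.format(HEADER_FORMAT,
                        "H", "21801", "NEXT", today, "", "ACTUALS", "", "EXT", "", "EXT", "", "N"));
                }
            }
        }

    Two details worth noting: "%-10.10s" both pads and truncates to exactly ten characters (plain "%-10s" never truncates), and the leading-zero while-loops collapse to "%028d" if the amounts are carried as integers.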

    Read the article

  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog. There is a whole blog devoted to the idea of the "10x productive programmer". In years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000). Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article: The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude. The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11. The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of what the entry-level salary is. When the starting salary for a just-graduated entry-level programmer, we'll call him Asok (from Dilbert), is $40K, even if the top programmer, we'll call him Linus, makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well? Here is a quote from Stroustrup: "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless because the rules/culture will constrain the new developer from doing anything significantly new and better." This leads to two questions. I'm excluding self-employed programmers and contractors. If you disagree, that's fine, but please include your rationale. It might be that the self-employed or contract programmers are where you find the top-10 earners, but please provide an explanation/story/rationale along with any anecdotes. [EDIT] I thought up some other areas in which talent/ability affects pay. Financial traders (commodities, stock, derivatives, etc.) designers (fashion, interior decorators, architects, etc.) professionals (doctor, lawyer, accountant, etc.) sales Questions: Why aren't the top 1% of programmers paid like A-list movie stars? What would the industry be like if we did pay the "Smart and gets things done" programmers 6, 8, or 10 times what an intern makes? [Footnote: I posted this question after submitting it to the Stackoverflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post 'Of Rockstars and Bricklayers'] Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free agents. The competition for their skills is what drives up their earning power. This means they cannot be interchangeable or otherwise treated as a plug-and-play resource.
I liked the example in one answer of a major league baseball team trying to field two first-basemen, and Joel mentioned something along these lines in the Stackoverflow podcast (#77). There are natural dynamics that shrink any extreme performance/pay ranges between the highs and lows: one is the peer pressure on organizations to pay within a given range, another is the likelihood that the high performer will realize their undercompensation and seek greener pastures.

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    A long time ago in a galaxy far, far away... Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x PostgreSQL version: 8.4.x I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell). A New Hope Create the database location (/var has insufficient space; also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!): sudo mkdir -p /home/postgres/main sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres sudo chown -R postgres.postgres /home/postgres sudo chmod -R 700 /home/postgres sudo usermod -d /home/postgres/ postgres All good to here. Next, restart the server and configure the database using these installation instructions: sudo apt-get install postgresql pgadmin3 sudo /etc/init.d/postgresql-8.4 stop sudo vi /etc/postgresql/8.4/main/postgresql.conf Change data_directory to /home/postgres/main sudo /etc/init.d/postgresql-8.4 start sudo -u postgres psql postgres \password postgres sudo -u postgres createdb climate pgadmin3 Use pgadmin3 to configure the database and create a schema. The episode continues in a remote shell known as bash, with both databases running, and the installation of a set of tools with a rather unusual logo: SQL Fairy. perl Makefile.PL sudo make install sudo apt-get install perl-doc (strangely, it is not called perldoc) perldoc SQL::Translator::Manual Extract a PostgreSQL-friendly DDL and all the MySQL data: sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p The Database Strikes Back Recreate the structure in PostgreSQL as follows: pgadmin3 (switch to it) Click the Execute arbitrary SQL queries icon Open climate-pg-ddl.sql Search for TABLE " replace with TABLE climate." (insert the schema name climate) Search for on " replace with on climate." (insert the schema name climate) Press F5 to execute This results in: Query returned successfully with no result in 122 ms. Replies of the Jedi At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that they can be executed against PostgreSQL? How to I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)? How do you ensure the schema name comes through when transforming the data from MySQL to PostgreSQL inserts? Resources A fair bit of information was needed to get this far: https://help.ubuntu.com/community/PostgreSQL http://articles.sitepoint.com/article/site-mysql-postgresql-1 http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL http://pgfoundry.org/frs/shownotes.php?release_id=810 http://sqlfairy.sourceforge.net/ Thank you!
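
    On the sequence and schema questions, two PostgreSQL-side steps usually suffice once the dump has been massaged into PostgreSQL-compatible INSERTs. A sketch, with a hypothetical table climate.station whose serial primary key is id; repeat the setval per table, adjusting names:

        -- make unqualified INSERTs from the converted dump land in the climate schema
        SET search_path TO climate;

        -- after the import, realign each serial column's sequence so new rows
        -- continue numbering past the imported ids
        SELECT setval(pg_get_serial_sequence('climate.station', 'id'),
                      (SELECT COALESCE(MAX(id), 1) FROM climate.station));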

    Read the article

  • Backtracking in Haskell

    - by dmindreader
    I have to traverse a matrix and say how many "characteristic areas" of each type it has. A characteristic area is defined as a zone where elements of value n or n are adjacent. For example, given the matrix: 0 1 2 2 0 1 1 2 0 3 0 0 There's a single characteristic area of type 1 which is equal to the original matrix: 0 1 2 2 0 1 1 2 0 3 0 0 There are two characteristic areas of type 2: 0 0 2 2 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 3 0 0 And one characteristic area of type 3: 0 0 0 0 0 0 0 0 0 3 0 0 So, for the function call: countAreas [[0,1,2,2],[0,1,1,2],[0,3,0,0]] The result should be [1,2,1] I haven't defined countAreas yet, I'm stuck with my visit function when it has no more possible squares in which to move it gets stuck and doesn't make the proper recursive call. I'm new to functional programming and I'm still scratching my head about how to implement a backtracking algorithm here. Take a look at my code, what can I do to change it? move_right :: (Int,Int) -> [[Int]] -> Int -> Bool move_right (i,j) mat cond | (j + 1) < number_of_columns mat && consult (i,j+1) mat /= cond = True | otherwise = False move_left :: (Int,Int) -> [[Int]] -> Int -> Bool move_left (i,j) mat cond | (j - 1) >= 0 && consult (i,j-1) mat /= cond = True | otherwise = False move_up :: (Int,Int) -> [[Int]] -> Int -> Bool move_up (i,j) mat cond | (i - 1) >= 0 && consult (i-1,j) mat /= cond = True | otherwise = False move_down :: (Int,Int) -> [[Int]] -> Int -> Bool move_down (i,j) mat cond | (i + 1) < number_of_rows mat && consult (i+1,j) mat /= cond = True | otherwise = False imp :: (Int,Int) -> Int imp (i,j) = i number_of_rows :: [[Int]] -> Int number_of_rows i = length i number_of_columns :: [[Int]] -> Int number_of_columns (x:xs) = length x consult :: (Int,Int) -> [[Int]] -> Int consult (i,j) l = (l !! i) !! j visited :: (Int,Int) -> [(Int,Int)] -> Bool visited x y = elem x y add :: (Int,Int) -> [(Int,Int)] -> [(Int,Int)] add x y = x:y visit :: (Int,Int) -> [(Int,Int)] -> [[Int]] -> Int -> [(Int,Int)] visit (i,j) vis mat cond | move_right (i,j) mat cond && not (visited (i,j+1) vis) = visit (i,j+1) (add (i,j+1) vis) mat cond | move_down (i,j) mat cond && not (visited (i+1,j) vis) = visit (i+1,j) (add (i+1,j) vis) mat cond | move_left (i,j) mat cond && not (visited (i,j-1) vis) = visit (i,j-1) (add (i,j-1) vis) mat cond | move_up (i,j) mat cond && not (visited (i-1,j) vis) = visit (i-1,j) (add (i-1,j) vis) mat cond | otherwise = vis
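
    One way out of the dead end: rather than a single path that only moves while a guard holds, collect every unvisited in-bounds neighbour belonging to the area and recurse on each, threading the visited list through a fold; the fold is what makes the search back up and try the remaining branches. A minimal sketch; the membership test is left as a parameter (e.g. \q -> consult q mat == n, reusing the question's consult) because the area definition in the question is ambiguous:

        type Pos = (Int, Int)

        -- all in-bounds orthogonal neighbours of a cell
        neighbours :: [[Int]] -> Pos -> [Pos]
        neighbours mat (i, j) =
          [ (i', j')
          | (i', j') <- [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
          , i' >= 0, i' < length mat
          , j' >= 0, j' < length (head mat)
          ]

        -- depth-first flood fill: returns every cell reached; backtracking is
        -- implicit in the fold, which threads the visited list between branches
        visit :: (Pos -> Bool) -> [[Int]] -> Pos -> [Pos] -> [Pos]
        visit inArea mat p vis = foldl step (p : vis) (neighbours mat p)
          where
            step acc q
              | q `elem` acc || not (inArea q) = acc
              | otherwise                      = visit inArea mat q acc

    countAreas would then scan the matrix once per type, starting a fresh fill at each matching cell that no earlier fill has visited, and counting the fills.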

    Read the article

  • WCF errors in VS 2010/.Net 4 using sample publish/subscribe app from IDesign website

    - by Bill
    I am attempting to compile/run a sample WCF application from Juval Lowy's website (author of Programming WCF Services & founder of IDesign). The application is an example of a publish/subscribe 'traffic-light' application that requires using VS 2010/.Net 4. This is my first attempt at using anything other than VS 2008/Net 3.5. Initially I recieved the following binding error: "Configuration binding extension 'system.serviceModel/bindings/ netOnewayRelayBinding' could not be found." This error appeared to be resolved by amending the .Net 4 machine.config file, to incorporate the following references from the .Net 2 machine.config file. <xml> <bindingElementExtensions> <add name="tcpRelayTransport" type="Microsoft.ServiceBus.Configuration.TcpRelayTransportElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="httpRelayTransport" type="Microsoft.ServiceBus.Configuration.HttpRelayTransportElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="httpsRelayTransport" type="Microsoft.ServiceBus.Configuration.HttpsRelayTransportElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="onewayRelayTransport" type="Microsoft.ServiceBus.Configuration.RelayedOnewayTransportElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="webMessageEncoding" type="System.ServiceModel.Configuration.WebMessageEncodingElement, System.ServiceModel.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add name="context" type="System.ServiceModel.Configuration.ContextBindingElementExtensionElement, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> <add name="byteStreamMessageEncoding" type="System.ServiceModel.Configuration.ByteStreamMessageEncodingElement, System.ServiceModel.Channels, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add name="discoveryClient" type="System.ServiceModel.Discovery.Configuration.DiscoveryClientElement, System.ServiceModel.Discovery, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> </bindingElementExtensions> <bindingExtensions> <add name="webHttpBinding" type="System.ServiceModel.Configuration.WebHttpBindingCollectionElement, System.ServiceModel.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add name="basicHttpContextBinding" type="System.ServiceModel.Configuration.BasicHttpContextBindingCollectionElement, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> <add name="basicHttpRelayBinding" type="Microsoft.ServiceBus.Configuration.BasicHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="webHttpRelayBinding" type="Microsoft.ServiceBus.Configuration.WebHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="ws2007HttpRelayBinding" type="Microsoft.ServiceBus.Configuration.WS2007HttpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="netTcpRelayBinding" type="Microsoft.ServiceBus.Configuration.NetTcpRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="netOnewayRelayBinding" 
type="Microsoft.ServiceBus.Configuration.NetOnewayRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add name="netEventRelayBinding" type="Microsoft.ServiceBus.Configuration.NetEventRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> <add name="wsHttpContextBinding" type="System.ServiceModel.Configuration.WSHttpContextBindingCollectionElement, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> <add name="netTcpContextBinding" type="System.ServiceModel.Configuration.NetTcpContextBindingCollectionElement, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> </bindingExtensions> Unfortunately running the application results in the following security error: An error occurred creating the configuration section handler for system.serviceModel/client: That assembly does not allow partially trusted callers. (\TrafficLights\TrafficController\bin\Debug\TrafficController.vshost.exe.Config line 4) The sample source code is available for download at the following link: http://www.idesign.net/idesign/DesktopDefault.aspx?tabindex=-1&tabid=19&download=226 I know that Juval's code is not at fault here and that it must be something I'm doing wrong with my VS 2010 configuration. I have not been able to find a solution online. Could someone please steer me in the right direction as to how best to deal with this issue?
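
    One avenue worth trying before touching machine.config at all: WCF also reads binding extensions from the application's own .config, and registering them there keeps the change local instead of altering a global file that every .NET 4 process parses. A sketch for the one binding the original error names, reusing the exact type string from the machine.config snippet above (whether this also clears the partially-trusted-callers error depends on the trust level the host grants the config handler):

        <system.serviceModel>
          <extensions>
            <bindingExtensions>
              <add name="netOnewayRelayBinding" type="Microsoft.ServiceBus.Configuration.NetOnewayRelayBindingCollectionElement, Microsoft.ServiceBus, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            </bindingExtensions>
          </extensions>
        </system.serviceModel>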

    Read the article

  • C# OpenNETCF background worker - e.Result gives an ObjectDisposedException

    - by ikky
    Hi! I'm new working with background worker in C#. Here is a class, and under it, you will find the instansiation of it, and under there i will define my problem for you: I have the class Drawing: class Drawing { BackgroundWorker bgWorker; ProgressBar progressBar; Panel panelHolder; public Drawing(ref ProgressBar pgbar, ref Panel panelBig) // Progressbar and panelBig as reference { this.panelHolder = panelBig; this.progressBar = pgbar; bgWorker = new BackgroundWorker(); bgWorker.WorkerReportsProgress = true; bgWorker.WorkerSupportsCancellation = true; bgWorker.DoWork += new OpenNETCF.ComponentModel.DoWorkEventHandler(this.bgWorker_DoWork); bgWorker.RunWorkerCompleted += new OpenNETCF.ComponentModel.RunWorkerCompletedEventHandler(this.bgWorker_RunWorkerCompleted); bgWorker.ProgressChanged += new OpenNETCF.ComponentModel.ProgressChangedEventHandler(this.bgWorker_ProgressChanged); } public void createDrawing() { bgWorker.RunWorkerAsync(); } private void bgWorker_DoWork(object sender, DoWorkEventArgs e) { Panel panelContainer = new Panel(); // Adding panels to the panelContainer for(i=0; i<100; i++) { Panel panelSubpanel = new Panel(); // Setting size, color, name etc.... panelContainer.Controls.Add(panelSubpanel); // Adding the subpanel to the panelContainer //Report the progress bgWorker.ReportProgress(0, i); // Reporting number of panels loaded } e.Result = imagePanel; // Send the result(a panel with lots of subpanels) as an argument } private void bgWorker_ProgressChanged(object sender, ProgressChangedEventArgs e) { this.progressBar.Value = (int)e.UserState; this.progressBar.Update(); } private void bgWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { if (e.Error == null) { this.panelHolder = (Panel)e.Result; } else { MessageBox.Show("An error occured, please try again"); } } } Instansiating an object of this class: public partial class Draw: Form { public Draw() { ProgressBar progressBarLoading = new ProgressBar(); // Set lots of properties on progressBarLoading Panel panelBigPanelContainer = new Panel(); Drawing drawer = new Drawing(ref progressBarLoading, ref panelBigPanelContainer); drawer.createDrawing(); // this makes the object start a new thread, loading all the panels into a panel container, while also sending the progress to this progressbar. } } Here is my problem: In the private void bgWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) i don't get the e.Result as it should be. When i debug and look at the e.Result, the panel's properties have this exception message: '((System.Windows.Forms.Control)(e.Result)).ClientSize' threw an exception of type 'System.ObjectDisposedException' So the object gets disposed, but "why" is my question, and how can i fix this? I hope someone will answer me, this is making me crazy. Another question i have: Is it allowed to use "ref" with arguments? is it bad programming? Thanks in advance. I have also written how i understand the Background worker below here: This is what i think is the "rules" for background workers: bgWorker.RunWorkerAsync(); => starts a new thread. 
bgWorker_DoWork cannot reach the main thread without delegates - private void bgWorker_DoWork(object sender, DoWorkEventArgs e) { // The work happens here, this is a thread that is not reachable by the main thread e.Result => This is an argument which can be reached by bgWorker_RunWorkerCompleted() bgWorker.ReportProgress(progressVar); => Reports the progress to the bgWorker_ProgressChanged() } - private void bgWorker_ProgressChanged(object sender, ProgressChangedEventArgs e) { // I get the progress here, and can do stuff to the main thread from here (e.g update a control) this.ProgressBar.Value = e.ProgressPercentage; } - private void bgWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { // This is where the thread is completed. // Here i can get e.Result from the bgWorker thread // From here i can reach controls in my main thread, and use e.Result in my main thread if (e.Error == null) { this.panelTileHolder = (Panel)e.Result; } else { MessageBox.Show("There was an error"); } }
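
    The usual cause of exactly this ObjectDisposedException is building the Panel and its children inside DoWork: WinForms/Compact Framework controls must be created and touched only on the UI thread, and a control whose creating thread has gone away is no longer usable by the time RunWorkerCompleted inspects e.Result. Keep DoWork to pure data and build the controls in RunWorkerCompleted, which the BackgroundWorker already marshals back to the UI thread. A minimal sketch of that shape; PanelSpec, ComputeSpec and MakePanel are hypothetical stand-ins:

        // DoWork: data only -- never construct a Control here
        private void bgWorker_DoWork(object sender, DoWorkEventArgs e)
        {
            List<PanelSpec> specs = new List<PanelSpec>();   // plain data: size, color, name
            for (int i = 0; i < 100; i++)
            {
                specs.Add(ComputeSpec(i));                   // pure computation, no UI
                bgWorker.ReportProgress(0, i);
            }
            e.Result = specs;
        }

        // RunWorkerCompleted runs on the UI thread: safe to create controls
        private void bgWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            if (e.Error != null) { MessageBox.Show("An error occured, please try again"); return; }
            foreach (PanelSpec spec in (List<PanelSpec>)e.Result)
                this.panelHolder.Controls.Add(MakePanel(spec));
        }

    As for ref: Panel and ProgressBar are reference types, so ref buys nothing unless the method must reassign the caller's variable; that is also why the original this.panelHolder = (Panel)e.Result only swaps the Drawing class's field and never the panel the form actually displays.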

    Read the article

  • If a record exists in the database, UPDATE a single column

    - by Doug
    I have a bulk uploading object in place that is being used to bulk upload roughly 25-40 image files at a time. Each image is about 100-150 kb in size. During the upload, I've created a for each loop that takes the file name of the image (minus the file extension) to write it into a column named "sku". Also, for each file being uploaded, the date is recorded to a column named DateUpdated, as well as some image path data. Here is my c# code: protected void graphicMultiFileButton_Click(object sender, EventArgs e) { //graphicMultiFile is the ID of the bulk uploading object ( provided by Dean Brettle: http://www.brettle.com/neatupload ) if (graphicMultiFile.Files.Length > 0) { foreach (UploadedFile file in graphicMultiFile.Files) { //strip ".jpg" from file name (will be assigned as SKU) string sku = file.FileName.Substring(0, file.FileName.Length - 4); //assign the directory where the images will be stored on the server string directoryPath = Server.MapPath("~/images/graphicsLib/" + file.FileName); //ensure that if image existes on server that it will get overwritten next time it's uploaded: file.MoveTo(directoryPath, MoveToOptions.Overwrite); //current sql that inserts a record to the db SqlCommand comm; SqlConnection conn; string connectionString = ConfigurationManager.ConnectionStrings["DataConnect"].ConnectionString; conn = new SqlConnection(connectionString); comm = new SqlCommand("INSERT INTO GraphicsLibrary (sku, imagePath, DateUpdated) VALUES (@sku, @imagePath, @DateUpdated)", conn); comm.Parameters.Add("@sku", System.Data.SqlDbType.VarChar, 50); comm.Parameters["@sku"].Value = sku; comm.Parameters.Add("@imagePath", System.Data.SqlDbType.VarChar, 300); comm.Parameters["@imagePath"].Value = "images/graphicsLib/" + file.FileName; comm.Parameters.Add("@DateUpdated", System.Data.SqlDbType.DateTime); comm.Parameters["@DateUpdated"].Value = DateTime.Now; conn.Open(); comm.ExecuteNonQuery(); conn.Close(); } } } After images are uploaded, managers will go back and re-upload images that have previously been uploaded. This is because these product images are always being revised and improved. For each new/improved image, the file name and extension will remain the same - so that when image 321-54321.jpg was first uploaded to the server, the new/improved version of that image will still have the image file name as 321-54321.jpg. I can't say for sure if the file sizes will remain in the 100-150KB range. I'll assume that the image file size will grow eventually. When images get uploaded (again), there of course will be an existing record in the database for that image. What is the best way to: Check the database for the existing record (stored procedure or SqlDataReader or create a DataSet ...?) Then if record exists, simply UPDATE that record so that the DateUpdated column gets today's date. If no record exists, do the INSERT of the record as normal. Things to consider: If the record exists, we'll let the actual image be uploaded. It will simply overwrite the existing image so that the new version gets displayed on the web. We're using SQL Server 2000 on hosted environment (DiscountAsp). I'm programming in C#. The uploading process will be used by about 2 managers a few times a month (each) - which to me is not a allot of usage. Although I'm a jr. developer, I'm guessing that a stored procedure would be the way to go. Just seems more efficient - to do this record check away from the for each loop... but not sure. 
    I'd need extra help writing a sproc, since I don't have much experience with them. Thanks, everyone...
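    A minimal sketch of the UPDATE-then-INSERT pattern this calls for on SQL Server 2000 (which predates MERGE); the procedure name UpsertGraphic is hypothetical, while the table and column names come from the question:

        CREATE PROCEDURE UpsertGraphic
            @sku VARCHAR(50),
            @imagePath VARCHAR(300),
            @DateUpdated DATETIME
        AS
        BEGIN
            -- Try the update first; @@ROWCOUNT reports whether a row matched.
            UPDATE GraphicsLibrary
            SET imagePath = @imagePath, DateUpdated = @DateUpdated
            WHERE sku = @sku

            -- No existing record for this sku: insert as normal.
            IF @@ROWCOUNT = 0
                INSERT INTO GraphicsLibrary (sku, imagePath, DateUpdated)
                VALUES (@sku, @imagePath, @DateUpdated)
        END

    On the C# side, the loop body would then build the command as comm = new SqlCommand("UpsertGraphic", conn) with comm.CommandType = CommandType.StoredProcedure and the same three parameters, so the exists-check happens in a single round trip inside the database rather than in the foreach loop.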

    Read the article

  • Understanding the memory consumption on iPhone

    - by zoul
    Hello! I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with the error code 101. I tried real hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect. I ran the application with the Memory Monitor, Object Alloc, Leaks and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, the Object Alloc settles around 7 MB, Leaks show two or three leaks a few bytes in size, the Gart Object Size is about 5 MB and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went – when I dig into the Object Allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB. I am not aware of allocating anything else that would be worth mentioning, and the Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.) Update: I really tried to find where I could allocate so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forget about, they should still show up in the Object Allocations, shouldn’t they? I’ve already tried the usual suspects, i.e. the UIImage with its caching, but that did not help. Is there a way to track memory usage “debugger-style”, line by line, watching each statement’s impact on memory usage? What I have found so far: I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption is really that high. My fault. I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but the Memory Monitor can’t tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into the memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from “my” memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure. I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further, but I could not find any hard numbers. The consensus is that 25–30 MB is the ceiling. When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching the websites). When the free memory as shown in the Memory Monitor goes to zero, the system starts killing applications. I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it.

    Read the article

  • Emulating old-school sprite flickering (theory and concept)

    - by Jeffrey Kern
    I'm trying to develop an old-school NES-style video game, with sprite flickering and graphical slowdown. I've been thinking of what type of logic I should use to enable such effects. I have to consider the following restrictions if I want to go old-school NES style: No more than 64 sprites on the screen at a time No more than 8 sprites per scanline, or for each line on the Y axis If there is too much action going on the screen, the system freezes the image for a frame to let the processor catch up with the action From what I've read up, if there were more than 64 sprites on the screen, the developer would only draw high-priority sprites while ignoring low-priority ones. They could also alternate, drawing each even numbered sprite on opposite frames from odd numbered ones. The scanline issue is interesting. From my testing, it is impossible to get good speed on the XBOX 360 XNA framework by drawing sprites pixel-by-pixel, like the NES did. This is why in old-school games, if there were too many sprites on a single line, some would appear as if they were cut in half. For the purposes of this project, I'm making scanlines 8 pixels tall, and grouping the sprites together per scanline by their Y positioning. So, dumbed down, I need to come up with a solution that supports: 64 sprites on screen at once 8 sprites per 'scanline' Can draw sprites based on priority Can alternate between sprites per frame Emulate slowdown Here is my current theory. First and foremost, a fundamental idea I came up with is addressing sprite priority. Assuming values between 0-255 (0 being low), I can assign sprites priority levels, for instance: 0 to 63 being low, 64 to 127 being medium, 128 to 191 being high, and 192 to 255 being maximum. Within my data files, I can assign each sprite to be a certain priority. When the parent object is created, the sprite would randomly get assigned a number within its designated range. I would then draw sprites in order from high to low, with the end goal of drawing every sprite. Now, when a sprite gets drawn in a frame, I would then randomly generate a new priority value for it within its initial priority level. However, if a sprite doesn't get drawn in a frame, I could add 32 to its current priority. For example, if the system can only draw sprites down to a priority level of 135, a sprite with an initial priority of 45 could then be drawn after 3 frames of not being drawn (45+32+32+32=141). This would, in theory, allow sprites to alternate frames, allow priority levels, and limit sprites to 64 per screen. Now, the interesting question is how do I limit sprites to only 8 per scanline? I'm thinking that I can sort the sprites from high priority to low priority and iterate through the list until I've drawn 64 sprites. However, I shouldn't just take the first 64 sprites in the list. Before drawing each sprite, I could check to see how many sprites were drawn in its respective scanline via counter variables. For example: Y-values between 0 to 7 belong to Scanline 0, scanlineCount[0] = 0 Y-values between 8 to 15 belong to Scanline 1, scanlineCount[1] = 0 etc. I could reset the values per scanline for every frame drawn. While going down the sprite list, add 1 to the scanline's respective counter if a sprite gets drawn in that scanline. If it equals 8, don't draw that sprite and go to the sprite with the next lowest priority. SLOWDOWN The last thing I need to do is emulate slowdown. 
    My initial idea was that if I'm drawing 64 sprites per frame and there are still more sprites that need to be drawn, I could pause the rendering by 16ms or so. However, in the NES games I've played, sometimes there's slowdown when there isn't any sprite flickering going on, whereas the game moves beautifully even when there is some sprite flickering. Perhaps give a value to each object that uses sprites on the screen (like the priority values above), and if the combined values of all objects w/ sprites surpass a threshold, introduce the slowdown? IN CONCLUSION... Does everything I wrote actually sound legitimate and workable, or is it a pipe dream? What improvements can you think of for this game programming theory of mine?
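    For illustration, a minimal C# sketch of the draw pass described above; the Sprite class, the 240-pixel screen height and the Draw call site are assumptions, while the 64/8 limits, the 8-pixel scanline buckets, the re-roll within a band and the +32 aging rule come straight from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Sprite
        {
            public int Y;              // top pixel row of the sprite
            public int LevelFloor;     // 0, 64, 128 or 192: bottom of its priority band
            public int Priority;       // current rolled priority, 0-255
            public bool DrawnThisFrame;
        }

        class SpriteRenderer
        {
            const int ScreenHeight = 240;    // NES-style height, an assumption
            const int MaxSprites = 64;
            const int MaxPerScanline = 8;
            readonly Random rng = new Random();

            public void DrawSprites(List<Sprite> sprites)
            {
                var scanlineCount = new int[ScreenHeight / 8];
                int drawn = 0;
                foreach (var s in sprites) s.DrawnThisFrame = false;

                // Highest rolled priority first.
                foreach (var s in sprites.OrderByDescending(s => s.Priority))
                {
                    if (drawn == MaxSprites) break;
                    if (s.Y < 0 || s.Y >= ScreenHeight) continue;        // off-screen
                    int line = s.Y / 8;
                    if (scanlineCount[line] == MaxPerScanline) continue; // line full: flicker
                    // spriteBatch.Draw(...) would go here.
                    s.DrawnThisFrame = true;
                    scanlineCount[line]++;
                    drawn++;
                    // Drawn this frame: re-roll within the sprite's own band
                    // so it can yield to its neighbours next frame.
                    s.Priority = s.LevelFloor + rng.Next(64);
                }

                // Skipped sprites age upward by 32 so they reappear within a few frames.
                foreach (var s in sprites)
                    if (!s.DrawnThisFrame) s.Priority = Math.Min(255, s.Priority + 32);
            }
        }

    Hooking this into XNA would just mean calling DrawSprites once per Draw cycle with the live sprite list.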

    Read the article

  • Can't obtain reference to EKReminder array retrieved from fetchRemindersMatchingPredicate

    - by Scionwest
    When I create an NSPredicate via EKEventStore's predicateForRemindersInCalendars: and pass it to EKEventStore fetchRemindersMatchingPredicate:completion: I can loop through the reminders array provided by the completion code block, but when I try to store a reference to the reminders array, or create a copy of the array into a local variable or instance variable, both arrays remain empty. The reminders array is never copied to them. This is the method I am using; in it, I create a predicate, pass it to the event store and then loop through all of the reminders logging their title via NSLog. I can see the reminder titles during runtime thanks to NSLog, but the local arrayOfReminders object is empty. I also try to add each reminder into an instance variable of NSMutableArray, but once I leave the completion code block, the instance variable remains empty. Am I missing something here? Can someone please tell me why I can't grab a reference to all of the reminders for use throughout the app? I am not having any issues at all accessing and storing EKEvents, but for some reason I can't do it with EKReminders. - (void)findAllReminders { NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil]; __block NSArray *arrayOfReminders = [[NSArray alloc] init]; [self.eventStore fetchRemindersMatchingPredicate:predicate completion:^(NSArray *reminders) { arrayOfReminders = [reminders copy]; //Does not work. for (EKReminder *reminder in reminders) { [self.remindersForTheDay addObject:reminder]; NSLog(@"%@", reminder.title); } }]; //Always = 0; if ([self.remindersForTheDay count]) { NSLog(@"Instance Variable has reminders!"); } //Always = 0; if ([arrayOfReminders count]) { NSLog(@"Local Variable has reminders!"); } } The eventStore getter is where I perform my instantiation and get access to the event store. - (EKEventStore *)eventStore { if (!_eventStore) { _eventStore = [[EKEventStore alloc] init]; //respondsToSelector indicates iOS 6 support. if ([_eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)]) { //Request access to user calendar [_eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) { if (granted) { NSLog(@"iOS 6+ Access to EventStore calendar granted."); } else { NSLog(@"Access to EventStore calendar denied."); } }]; //Request access to user Reminders [_eventStore requestAccessToEntityType:EKEntityTypeReminder completion:^(BOOL granted, NSError *error) { if (granted) { NSLog(@"iOS 6+ Access to EventStore Reminders granted."); } else { NSLog(@"Access to EventStore Reminders denied."); } }]; } else { //iOS 5.x and lower support if Selector is not supported NSLog(@"iOS 5.x < Access to EventStore calendar granted."); } for (EKCalendar *cal in self.calendars) { NSLog(@"Calendar found: %@", cal.title); } [_eventStore reset]; } return _eventStore; } Lastly, just to show that I am initializing my remindersForTheDay instance variable using lazy instantiation. - (NSMutableArray *)remindersForTheDay { if (!_remindersForTheDay) _remindersForTheDay = [[NSMutableArray alloc] init]; return _remindersForTheDay; } I've read through the Apple documentation and it doesn't provide any explanation that I can find to answer this. I read through the Blocks Programming docs and it states that you can access local and instance variables without issues from within a block, but for some reason, the above code does not work. 
    Any help would be greatly appreciated; I've scoured Google for answers but have yet to get this figured out. Thanks everyone! Johnathon.
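    For what it's worth, fetchRemindersMatchingPredicate:completion: runs its completion block asynchronously, so any check placed right after the call executes before the block has delivered anything; a minimal sketch that moves the work inside the block (remindersDidLoad is a hypothetical refresh hook, not an EventKit method):

        - (void)findAllReminders {
            NSPredicate *predicate = [self.eventStore predicateForRemindersInCalendars:nil];
            [self.eventStore fetchRemindersMatchingPredicate:predicate
                                                   completion:^(NSArray *reminders) {
                // This block fires later, after findAllReminders has already
                // returned, so the results must be consumed (or handed off) here.
                [self.remindersForTheDay addObjectsFromArray:reminders];
                dispatch_async(dispatch_get_main_queue(), ^{
                    NSLog(@"Fetched %lu reminders",
                          (unsigned long)[self.remindersForTheDay count]);
                    [self remindersDidLoad];   // hypothetical UI refresh
                });
            }];
            // Anything placed here runs before the completion block; counts are still 0.
        }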

    Read the article

  • Netbeans platform projects - problems with wrapped jar files that have dependencies

    - by I82Much
    For starters, this question is not so much about programming in the NetBeans IDE as developing a NetBeans project (e.g. using the NetBeans Platform framework). I am attempting to use the BeanUtils library to introspect my domain models and provide the properties to display in a property sheet. Sample code: public class MyNode extends AbstractNode implements PropertyChangeListener { private static final PropertyUtilsBean bean = new PropertyUtilsBean(); // snip protected Sheet createSheet() { Sheet sheet = Sheet.createDefault(); Sheet.Set set = Sheet.createPropertiesSet(); APIObject obj = getLookup().lookup(APIObject.class); PropertyDescriptor[] descriptors = bean.getPropertyDescriptors(obj); for (PropertyDescriptor d : descriptors) { Method readMethod = d.getReadMethod(); Method writeMethod = d.getWriteMethod(); Class valueType = d.getPropertyType(); Property p = new PropertySupport.Reflection(obj, valueType, readMethod, writeMethod); set.put(p); } sheet.put(set); return sheet; } I have created a wrapper module around commons-beanutils-1.8.3.jar, and added a dependency on the module in my module containing the above code. Everything compiles fine. When I attempt to run the program and open the property sheet view (i.e. the above code actually gets run), I get the following error: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:319) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:330) at java.lang.ClassLoader.loadClass(ClassLoader.java:254) at org.netbeans.ProxyClassLoader.loadClass(ProxyClassLoader.java:259) Caused: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory starting from ModuleCL@64e48e45[org.apache.commons.beanutils] with possible defining loaders [ModuleCL@75da931b[org.netbeans.libs.commons_logging]] and declared parents [] at org.netbeans.ProxyClassLoader.loadClass(ProxyClassLoader.java:261) at java.lang.ClassLoader.loadClass(ClassLoader.java:254) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:399) Caused: java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory at org.apache.commons.beanutils.PropertyUtilsBean.<init>(PropertyUtilsBean.java:132) at org.myorg.myeditor.MyNode.<clinit>(MyNode.java:35) at org.myorg.myeditor.MyEditor.<init>(MyEditor.java:33) at org.myorg.myeditor.OpenEditorAction.actionPerformed(OpenEditorAction.java:13) at org.openide.awt.AlwaysEnabledAction$1.run(AlwaysEnabledAction.java:139) at org.netbeans.modules.openide.util.ActionsBridge.implPerformAction(ActionsBridge.java:83) at org.netbeans.modules.openide.util.ActionsBridge.doPerformAction(ActionsBridge.java:67) at org.openide.awt.AlwaysEnabledAction.actionPerformed(AlwaysEnabledAction.java:142) at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2028) at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2351) at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387) at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242) at javax.swing.AbstractButton.doClick(AbstractButton.java:389) at com.apple.laf.ScreenMenuItem.actionPerformed(ScreenMenuItem.java:95) at java.awt.MenuItem.processActionEvent(MenuItem.java:627) at java.awt.MenuItem.processEvent(MenuItem.java:586) at 
java.awt.MenuComponent.dispatchEventImpl(MenuComponent.java:317) at java.awt.MenuComponent.dispatchEvent(MenuComponent.java:305) [catch] at java.awt.EventQueue.dispatchEvent(EventQueue.java:638) at org.netbeans.core.TimableEventQueue.dispatchEvent(TimableEventQueue.java:125) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:296) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:211) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:201) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:196) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:188) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) I understand that beanutils uses the commons-logging component. I have tried adding the commons-logging component in two different ways (creating a wrapper library around the commons-logging library, and putting a dependency on the Commons Logging Integration library). Neither solves the problem. I noticed that the same problem occurs with other wrapped libraries: if they themselves have external dependencies, the ClassNotFoundExceptions propagate like mad, even if I've wrapped the jars of the libraries they require and added them as dependencies to the original wrapped library module. I'm at my wit's end here. I noticed similar problems while googling ("Is there a known bug on NB Module dependency", "Same issue I'm facing but when wrapping a different jar", "NetBeans stance on this") - none of the 3 apply to me. None conclusively help me. Thank you, Nick
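    For what it's worth, the trace above says LogFactory could have been defined by the module org.netbeans.libs.commons_logging while the beanutils module's declared parents are empty, which suggests the beanutils wrapper itself never declared a module-level dependency on any commons-logging module. A hedged sketch of the manifest entry involved (attribute names from the NetBeans module format; the version tokens the IDE's dependency editor would normally generate are omitted):

        OpenIDE-Module: org.apache.commons.beanutils
        OpenIDE-Module-Module-Dependencies: org.netbeans.libs.commons_logging

    If a hand-rolled commons-logging wrapper module is used instead, that wrapper's manifest would also need to export the packages, e.g. OpenIDE-Module-Public-Packages: org.apache.commons.logging.*, or its classes stay invisible to other modules' classloaders.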

    Read the article

  • jQuery sequence and function call problem

    - by Jonas
    Hi everyone, I'm very new to jQuery and programming in general but I'm still trying to achieve something here. I use Fullcalendar to allow the users of my web application to insert an event in the database. They click on a day, the view changes to agendaDay, then they click on a time of the day, and a dialog popup opens with a form. I am trying to combine validate (pre-jQuery 1.4) and jquery.form to post the form without a page refresh. The script calendar.php, included in several pages, defines the fullcalendar object and displays it in a div: $(document).ready(function() { function EventLoad() { $("#addEvent").validate({ rules: { calendar_title: "required", calendar_url: { required: false, maxlength: 100, url: true } }, messages: { calendar_title: "Title required", calendar_url: "Invalid URL format" }, success: function() { $('#addEvent').submit(function() { var options = { success: function() { $('#eventDialog').dialog('close'); $('#calendar').fullCalendar( 'refetchEvents' ); } }; // submit the form $(this).ajaxSubmit(options); // return false to prevent normal browser submit and page navigation return false; }); } }); } $('#calendar').fullCalendar({ header: { left: 'prev,next today', center: 'title', right: 'month,agendaWeek,agendaDay' }, theme: true, firstDay: 1, editable: false, events: "json-events.php?list=1&<?php echo $events_list; ?>", <?php if($_GET['page'] == 'home') echo "defaultView: 'agendaWeek',"; ?> eventClick: function(event) { if (event.url) { window.open(event.url); return false; } }, dayClick: function(date, allDay, jsEvent, view) { if (view.name == 'month') { $('#calendar').fullCalendar( 'changeView', 'agendaDay').fullCalendar( 'gotoDate', date ); }else{ if(allDay) { var timeStamp = $.fullCalendar.formatDate( date, 'dddd+dd+MMMM_u'); var $eventDialog = $('<div/>').load("json-events.php?<?php echo $events_list; ?>&new=1&all_day=1&timestamp=" + timeStamp, null, EventLoad).dialog({autoOpen:false,draggable: false, width: 675, modal:true, position:['center',202], resizable: false, title:'Add an Event'}); $eventDialog.dialog('open').attr('id','eventDialog'); } else { var timeStamp = $.fullCalendar.formatDate( date, 'dddd+dd+MMMM_u'); var $eventDialog = $('<div/>').load("json-events.php?<?php echo $events_list; ?>&new=1&all_day=0&timestamp=" + timeStamp, null, EventLoad).dialog({autoOpen:false,draggable: false, width: 675, modal:true, position:['center',202], resizable: false, title:'Add an Event'}); $eventDialog.dialog('open').attr('id','eventDialog');; } } } }); }); The script json-events.php contains the form and also the code to process the data from the submitted form. What happens when I test the whole thing: first, the user clicks on a day, then a time of day. The popup opens with the time and date indicated on the form. When the user submits the form, the dialog closes and the calendar refreshes its events... and the event added by the user appears several times (from 4 up to 11 times!). The form has been processed several times by the receiving PHP script?! On a second click, the popup opens and the user submits an empty form. The form is submitted (the validate function is not triggered) and the user is redirected to the empty page json-events.php (ajaxForm is not triggered either). Obviously, my code is wrong (and dirty as well, sorry). Why is the form submitted several times to the receiving script, and why is the JavaScript function EventLoad triggered only once? Thank you very much for your help. This problem is killing me!
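    For reference, a minimal sketch of one common way to wire these two plugins together: doing the ajaxSubmit inside validate's submitHandler means the submit logic is attached exactly once per dialog load, whereas binding $('#addEvent').submit(...) inside the success callback adds one more handler every time validation passes, which multiplies the POSTs:

        $("#addEvent").validate({
            rules: {
                calendar_title: "required",
                calendar_url: { required: false, maxlength: 100, url: true }
            },
            messages: {
                calendar_title: "Title required",
                calendar_url: "Invalid URL format"
            },
            // Called once per valid submission; no extra .submit() bindings pile up.
            submitHandler: function(form) {
                $(form).ajaxSubmit({
                    success: function() {
                        $('#eventDialog').dialog('close');
                        $('#calendar').fullCalendar('refetchEvents');
                    }
                });
            }
        });

    Note that validate's success option is meant for styling valid field labels, not as a submit hook, which is why the original binding fires repeatedly.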

    Read the article

  • How to use pthread_atfork() and pthread_once() to reinitialize mutexes in child processes

    - by Blair Zajac
    We have a C++ shared library that uses ZeroC's Ice library for RPC and unless we shut down Ice's runtime, we've observed child processes hanging on random mutexes. The Ice runtime starts threads, has many internal mutexes and keeps open file descriptors to servers. Additionally, we have a few mutexes of our own to protect our internal state. Our shared library is used by hundreds of internal applications, so we don't have control over when the process calls fork(), and we need a way to safely shut down Ice and lock our mutexes while the process forks. Reading the POSIX standard on pthread_atfork() on handling mutexes and internal state: Alternatively, some libraries might have been able to supply just a child routine that reinitializes the mutexes in the library and all associated states to some known value (for example, what it was when the image was originally executed). This approach is not possible, though, because implementations are allowed to fail *_init() and *_destroy() calls for mutexes and locks if the mutex or lock is still locked. In this case, the child routine is not able to reinitialize the mutexes and locks. On Linux, this test C program returns EPERM from pthread_mutex_unlock() in the child pthread_atfork() handler. Linux requires adding _NP to the PTHREAD_MUTEX_ERRORCHECK macro for it to compile. This program is linked from this good thread. Given that it's technically not safe or legal to unlock or destroy a mutex in the child, I'm thinking it's better to have pointers to mutexes and then have the child make a new pthread_mutex_t on the heap and leave the parent's mutexes alone, thereby having a small memory leak. The only issue is how to reinitialize the state of the library, and I'm thinking of resetting a pthread_once_t. Maybe because POSIX has an initializer for pthread_once_t, it can be reset to its initial state. #include <pthread.h> #include <stdlib.h> #include <string.h> static pthread_once_t once_control = PTHREAD_ONCE_INIT; static pthread_mutex_t *mutex_ptr = 0; static void setup_new_mutex() { mutex_ptr = malloc(sizeof(*mutex_ptr)); pthread_mutex_init(mutex_ptr, 0); } static void prepare() { pthread_mutex_lock(mutex_ptr); } static void parent() { pthread_mutex_unlock(mutex_ptr); } static void child() { // Reset the once control. pthread_once_t once = PTHREAD_ONCE_INIT; memcpy(&once_control, &once, sizeof(once_control)); } static void init() { setup_new_mutex(); pthread_atfork(&prepare, &parent, &child); } int my_library_call(int arg) { pthread_once(&once_control, &init); pthread_mutex_lock(mutex_ptr); // Do something here that requires the lock. int result = 2*arg; pthread_mutex_unlock(mutex_ptr); return result; } In the above sample, in child() I only reset the pthread_once_t by making a copy of a fresh pthread_once_t initialized with PTHREAD_ONCE_INIT. A new pthread_mutex_t is only created when the library function is invoked in the child process. This is hacky, but maybe the best way of dealing with this while skirting the standards. If the pthread_once_t contains a mutex then the system must have a way of initializing it from its PTHREAD_ONCE_INIT state. If it contains a pointer to a mutex allocated on the heap then it'll be forced to allocate a new one and set the address in the pthread_once_t. I'm hoping it doesn't use the address of the pthread_once_t for anything special which would defeat this. 
    Searching the comp.programming.threads group for pthread_atfork() shows a lot of good discussion and how little the POSIX standard really provides to solve this problem. There's also the issue that one should only call async-signal-safe functions from pthread_atfork() handlers, and it appears the most important one is the child handler, where only a memcpy() is done. Does this work? Is there a better way of dealing with the requirements of our shared library?
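    A minimal sketch, under the assumptions above, of how the fork path would exercise the reinitialization (error handling omitted; my_library_call() is the function from the sample):

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int my_library_call(int arg);   /* from the library sketch above */

        int main(void)
        {
            /* First call in the parent: pthread_once() runs init(), which
               allocates the mutex and registers the atfork handlers. */
            printf("parent: %d\n", my_library_call(21));

            pid_t pid = fork();         /* prepare() holds the lock across fork() */
            if (pid == 0) {
                /* child() reset once_control, so this call re-runs init() and
                   allocates a fresh mutex instead of touching the one the
                   parent had locked at fork time. */
                printf("child: %d\n", my_library_call(4));
                _exit(0);
            }
            waitpid(pid, NULL, 0);
            return 0;
        }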

    Read the article
