Search Results

Search found 7473 results on 299 pages for 'usage statistics'.


  • finishing some functions in this code

    - by osabri
    I'm having trouble finishing some of the functions in this program:

        program lab4;

        var inFile   : text;
        var pArray   : array[1..10] of real;    // array of 10 values containing patterns to search for in a given set of numbers
        var rArray   : array[1..10] of integer; // array containing the result of the pattern search; each index in rArray
                                                // corresponds to the number of occurrences of the pattern from pArray in sArray
        var sArray   : array[1..100] of real;   // array containing data read from the file
        var accuracy : real;

        (****************************************************************************)
        function errMsg: integer;
        begin
          if ParamCount < 3 then
          begin
            writeln('Too few arguments');
            writeln('Usage: ./lab4 lab4.txt <accuracy> <pattern_1> <pattern_2> <pattern_n>');
            errMsg := -1;
          end
          else if ParamCount > 12 then
          begin
            writeln('Too many arguments');
            writeln('Maximum number of patterns is 10');
            errMsg := -1;
          end
          else
          begin
            assign(inFile, ParamStr(1));
            {$I-}
            reset(inFile);
            if ioresult <> 0 then
            begin
              writeln('Cannot open ', ParamStr(1));
              errMsg := -1;
            end
            else
              errMsg := 0;
          end;
        end;

        (****************************************************************************)
        procedure readPattern;
        var errPos, idx: integer;
        begin
          if errMsg = 0 then
          begin
            for idx := 1 to ParamCount - 2 do
            begin
              Val(ParamStr(idx + 2), pArray[idx], errPos);
              writeln('pArray:', pArray[idx]:2:2);
            end;
          end;
        end;

        (****************************************************************************)
        procedure getAccuracy;
        // Function should get the accuracy as the first param (after the program name)
        begin
          // (here is where I stopped :(( )
        end;

        (****************************************************************************)
        function readSet: integer;
        // Function returns the number of elements read from the file
        var idx, errPos: integer;
        var sChar: string;
        begin
          if errMsg = 0 then
          begin
            idx := 1;
            repeat
              readln(inFile, sChar);
              Val(sChar, sArray[idx], errPos);
              writeln('sArray:', sArray[idx]:2:2);
              idx := idx + 1;
            until eof(inFile) = TRUE;
          end;
          readSet := idx - 1;
        end;

        (****************************************************************************)
        procedure searchPattern(sNumber: integer);
        // Function should search for and count the pattern(s) occurrences in the given set
        // -- what is the best solution for this part?

        (****************************************************************************)
        procedure dispResult;
        // Function should display the result of the pattern(s) search
        begin

        (****************************************************************************)
        begin
          readPattern;
          getAccuracy;
          searchPattern(readSet);
          dispResult;
        end.
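
    For searchPattern, a minimal sketch of the counting logic in Python - reading "accuracy" as a numeric tolerance when comparing reals is my assumption, not something the question states, and all names below are illustrative:

        def search_patterns(patterns, data, accuracy):
            # counts[i] = how many values in data match patterns[i] within +/- accuracy,
            # i.e. what rArray should hold after the search
            counts = [0] * len(patterns)
            for i, p in enumerate(patterns):
                for value in data:
                    if abs(value - p) <= accuracy:
                        counts[i] += 1
            return counts

        # usage: search_patterns([1.5, 2.0], [1.45, 2.2, 1.55], 0.1) -> [2, 0]

    Translating that double loop back into Pascal over pArray/sArray/rArray is the part the skeleton leaves open.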


  • [C#] Not enough memory or not enough handles?

    - by Nayan
    I am working on a large-scale project where a custom (pretty good and robust) framework has been provided, and we have to use it for showing forms and views. There is an abstract class StrategyEditor (derived from some class in the framework) which is instantiated whenever a new StrategyForm is opened. StrategyForm (a customized window frame) contains StrategyEditor. StrategyEditor contains StrategyTab. StrategyTab contains StrategyCanvas. This is a small portion of the class hierarchy, to clarify that many objects are created whenever one StrategyForm object is allocated at run-time. My component owns all these classes mentioned above except StrategyForm, whose code is not in my control.

    Now, at run-time, the user opens up many strategy objects (each of which triggers creation of a new StrategyForm object). After creating approx. 44 strategy objects, we see that the USER OBJECT HANDLES (I'll use UOH from here onwards) created by the application reach 20k+, while in the registry the default limit for handles is 10k. Read more about User Objects here. Testing on different machines made it clear that the number of strategy objects needed before the message pops up varies - on one machine it is 44, on another it can be 40. When we see the message pop up, the application starts responding slowly. It gets worse with a few more objects, and then creation of window frames and subsequent objects fails.

    We first thought that it was a not-enough-memory issue. But reading more about new in C# helped us understand that an exception would be thrown if the app ran out of memory. So I feel this is not a memory issue (Task Manager also showed 1.5GB+ of available memory).

    M/C specs: Core 2 Duo 2GHz+, 4GB RAM, 80GB+ free disk space for the page file, virtual memory set to 4000-6000.

    My questions:

    Q1. Does this look like a memory issue, and am I wrong that it is not?
    Q2. Does this point to exhaustion of free UOHs (as I'm thinking), which results in failure to create window handles?
    Q3. How can we avoid loading a StrategyEditor object beyond a threshold, keeping an eye on the current usage of UOHs? (We already know how to fetch the number of UOHs in use, so don't go there.) Keep in mind that the call to new StrategyForm() is outside the control of my component.
    Q4. I am a bit confused - what are handles to user objects, exactly? Is MSDN talking about any object that we create, or only specific objects like window handles, cursor handles and icon handles?
    Q5. What exactly causes a UOH to be used up? (Almost the same as Q4.)

    I would be really thankful to anyone who can give me some knowledgeable answers. Thanks much! :)
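
    For the threshold idea in Q3, a minimal sketch of a guard around editor creation - shown in Python via ctypes purely for illustration; the count-fetching call (Win32 GetGuiResources) is the part the question says it already has, and the 9,000 budget is a hypothetical margin under the 10k default quota, not a recommendation:

        import ctypes

        GR_USEROBJECTS = 1  # GetGuiResources() flag: count USER object handles

        def user_object_count():
            # GetCurrentProcess() returns a pseudo-handle for the calling process
            process = ctypes.windll.kernel32.GetCurrentProcess()
            return ctypes.windll.user32.GetGuiResources(process, GR_USEROBJECTS)

        USER_HANDLE_BUDGET = 9000  # hypothetical safety margin below the 10,000 default

        def can_open_new_editor():
            # gate creation of another StrategyEditor on remaining handle headroom
            return user_object_count() < USER_HANDLE_BUDGET

    Since new StrategyForm() is outside the component, the gate would have to live wherever the framework lets you veto or defer opening a new view.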


  • Large concurrent user performance issues for Apache + mod_jk + GlassFish v3.1 clusters

    - by user10035
    I am running a Java EE 6 EAR application on GlassFish v3.1 (2 clusters with 2 instances each), load balanced by Apache v2.2 with mod_jk - all on the same server (Windows Server 2003 R2, Intel Xeon CPU X5670 @ 2.93GHz, 6GB RAM, 2 CPUs). The web application is accessed by around ~100 users. When they all try to access it at the same time every morning around 8am, the response is very slow on the main JSF home page. Apart from that, I have frequently seen the CPU usage of the httpd process spike up to 99% during the day, and I am seeing errors in the mod_jk.log file:

        [Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
        [Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_service::jk_ajp_common.c (2543): (myAppLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)

    Any suggestions on how I can go about improving this? The Apache configuration is mostly the default, as shown below:

        ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
        Listen 80

        LoadModule actions_module modules/mod_actions.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule asis_module modules/mod_asis.so
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule include_module modules/mod_include.so
        LoadModule isapi_module modules/mod_isapi.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule setenvif_module modules/mod_setenvif.so

        <IfModule !mpm_netware_module>
          <IfModule !mpm_winnt_module>
            User daemon
            Group daemon
          </IfModule>
        </IfModule>

        DocumentRoot "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs"

        <Directory />
          Options FollowSymLinks
          AllowOverride None
          Order deny,allow
          Deny from all
        </Directory>

        <Directory "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs">
          Options Indexes FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
        </Directory>

        <IfModule dir_module>
          DirectoryIndex index.html
        </IfModule>

        <FilesMatch "^\.ht">
          Order allow,deny
          Deny from all
          Satisfy All
        </FilesMatch>

        ErrorLog "logs/error.log"
        LogLevel warn

        <IfModule log_config_module>
          LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
          LogFormat "%h %l %u %t \"%r\" %>s %b" common
          <IfModule logio_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
          </IfModule>
          CustomLog "logs/access.log" common
        </IfModule>

        <IfModule alias_module>
          ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
        </IfModule>

        <Directory "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin">
          AllowOverride None
          Options None
          Order allow,deny
          Allow from all
        </Directory>

        DefaultType text/plain

        <IfModule mime_module>
          TypesConfig conf/mime.types
          AddType application/x-compress .Z
          AddType application/x-gzip .gz .tgz
        </IfModule>

        Include conf/extra/httpd-mpm.conf

        <IfModule ssl_module>
          SSLRandomSeed startup builtin
          SSLRandomSeed connect builtin
        </IfModule>

        LoadModule jk_module modules/mod_jk.so
        JkWorkersFile conf/workers.properties
        JkLogFile logs/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
        JkRequestLogFormat "%w %V %T"
        JkMount /myApp/* loadbalancerLocal
        JkMount /myAppRemote/* loadbalancerRemote
        JkMount /myApp loadbalancerLocal
        JkMount /myAppRemote loadbalancerRemote

    The workers.properties config file is:

        worker.list=loadbalancerLocal,loadbalancerRemote

        worker.myAppLocalInstance1.type=ajp13
        worker.myAppLocalInstance1.host=localhost
        worker.myAppLocalInstance1.port=8109
        worker.myAppLocalInstance1.lbfactor=1
        worker.myAppLocalInstance1.socket_keepalive=1
        worker.myAppLocalInstance1.socket_timeout=1000

        worker.myAppLocalInstance2.type=ajp13
        worker.myAppLocalInstance2.host=localhost
        worker.myAppLocalInstance2.port=8209
        worker.myAppLocalInstance2.lbfactor=1
        worker.myAppLocalInstance2.socket_keepalive=1
        worker.myAppLocalInstance2.socket_timeout=1000

        worker.myAppLocalInstance3.type=ajp13
        worker.myAppLocalInstance3.host=localhost
        worker.myAppLocalInstance3.port=8309
        worker.myAppLocalInstance3.lbfactor=1
        worker.myAppLocalInstance3.socket_keepalive=1
        worker.myAppLocalInstance3.socket_timeout=1000

        worker.myAppLocalInstance4.type=ajp13
        worker.myAppLocalInstance4.host=localhost
        worker.myAppLocalInstance4.port=8409
        worker.myAppLocalInstance4.lbfactor=1
        worker.myAppLocalInstance4.socket_keepalive=1
        worker.myAppLocalInstance4.socket_timeout=1000

        worker.myAppRemoteInstance1.type=ajp13
        worker.myAppRemoteInstance1.host=localhost
        worker.myAppRemoteInstance1.port=8509
        worker.myAppRemoteInstance1.lbfactor=1
        worker.myAppRemoteInstance1.socket_keepalive=1
        worker.myAppRemoteInstance1.socket_timeout=1000

        worker.myAppRemoteInstance2.type=ajp13
        worker.myAppRemoteInstance2.host=localhost
        worker.myAppRemoteInstance2.port=8609
        worker.myAppRemoteInstance2.lbfactor=1
        worker.myAppRemoteInstance2.socket_keepalive=1
        worker.myAppRemoteInstance2.socket_timeout=1000

        worker.myAppRemoteInstance3.type=ajp13
        worker.myAppRemoteInstance3.host=localhost
        worker.myAppRemoteInstance3.port=8709
        worker.myAppRemoteInstance3.lbfactor=1
        worker.myAppRemoteInstance3.socket_keepalive=1
        worker.myAppRemoteInstance3.socket_timeout=1000

        worker.myAppRemoteInstance4.type=ajp13
        worker.myAppRemoteInstance4.host=localhost
        worker.myAppRemoteInstance4.port=8809
        worker.myAppRemoteInstance4.lbfactor=1
        worker.myAppRemoteInstance4.socket_keepalive=1
        worker.myAppRemoteInstance4.socket_timeout=1000

        worker.loadbalancerLocal.type=lb
        worker.loadbalancerLocal.sticky_session=True
        worker.loadbalancerLocal.balance_workers=myAppLocalInstance1,myAppLocalInstance2,myAppLocalInstance3,myAppLocalInstance4

        worker.loadbalancerRemote.type=lb
        worker.loadbalancerRemote.balance_workers=myAppRemoteInstance1,myAppRemoteInstance2,myAppRemoteInstance3,myAppRemoteInstance4
        worker.loadbalancerRemote.sticky_session=True


  • Is this a right way to use NHibernate?

    - by Venemo
    I spent the rest of the evening reading StackOverflow questions and also some blog entries and links about the subject. All of them turned out to be very helpful, but I still feel that they don't really answer my question.

    So, I'm developing a simple web application. I'd like to create a reusable data access layer which I can later reuse in other solutions; 99% of these will be web applications. This seems to be a good excuse for me to learn NHibernate and some of the patterns around it. My goals are the following:

    - I don't want the business logic layer to know ANYTHING about the inner workings of the database, nor NHibernate itself.
    - I want the business logic layer to have the least possible number of assumptions about the data access layer.
    - I want the data access layer to be as simplistic and easy to use as possible. This is going to be a simple project, so I don't want to overcomplicate anything.
    - I want the data access layer to be as non-intrusive as possible.

    With all this in mind, I decided to use the popular repository pattern. I read about this subject on this site and on various dev blogs, and I heard some stuff about the unit of work pattern. I also looked around and checked out various implementations (including FubuMVC contrib, SharpArchitecture, and stuff on some blogs). I found out that most of these operate on the same principle: they create a "unit of work" which is instantiated when a repository is instantiated; they start a transaction, do stuff, commit, and then start all over again. So, only one ISession per repository, and that's it. The client code then needs to instantiate a repository, do stuff with it, and then dispose of it.

    This usage pattern doesn't meet my need of being as simplistic as possible, so I began thinking about something else. I found out that NHibernate already has something which makes custom "unit of work" implementations unnecessary: the CurrentSessionContext class. If I configure the session context correctly and do the clean-up when necessary, I'm good to go.

    So, I came up with this: I have a static class called NHibernateHelper. Firstly, it has a static property called CurrentSessionFactory which, upon first call, instantiates a session factory and stores it in a static field (one ISessionFactory per AppDomain is good enough). Then, more importantly, it has a CurrentSession static property, which checks if there is an ISession bound to the current session context; if not, it creates one and binds it, and then returns the ISession bound to the current session context. Because it will be used mostly with WebSessionContext (so, one ISession per HttpRequest - although for the unit tests I configured ThreadStaticSessionContext), it should work seamlessly. And after creating and binding an ISession, it hooks an event handler to the HttpContext.Current.ApplicationInstance.EndRequest event, which takes care of cleaning up the ISession after the request ends. (Of course, it only does this if it is really running in a web environment.)

    So, with all this set up, NHibernateHelper will always be able to return a valid ISession, so there is no need to instantiate a Repository instance for the "unit of work" to operate properly. Instead, the Repository is a static class which operates on the ISession from the NHibernateHelper.CurrentSession property and exposes some functionality through it.

    I'm curious what you think about this. Is it a valid way of thinking, or am I completely off track here?
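
    For comparison, the shape described here - one contextual session per scope, looked up through a static accessor, torn down at end of request - is the same "contextual session" idea SQLAlchemy ships as scoped_session. A minimal Python sketch (the engine URL and function names are illustrative, not part of the question):

        from sqlalchemy import create_engine
        from sqlalchemy.orm import scoped_session, sessionmaker

        engine = create_engine("sqlite:///app.db")  # illustrative connection string

        # scoped_session plays the role of CurrentSessionContext: the first access
        # within a scope (thread by default, request with a custom scopefunc)
        # creates and binds a session; later accesses return the same one.
        Session = scoped_session(sessionmaker(bind=engine))

        def end_request_cleanup():
            # analogous to the EndRequest hook: unbind and dispose of the session
            Session.remove()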


  • Random Complete System Unresponsiveness Running Mathematical Functions

    - by Computer Guru
    I have a program that loads a file (anywhere from 10MB to 5GB) a chunk at a time (ReadFile), and for each chunk performs a set of mathematical operations (basically calculates the hash). After calculating the hash, it stores info about the chunk in an STL map (basically <chunkID, hash>) and then writes the chunk itself to another file (WriteFile). That's all it does.

    This program will cause certain PCs to choke and die. The mouse begins to stutter, the Task Manager takes 2 minutes to show, Ctrl+Alt+Del is unresponsive, running programs are slow... the works. I've done literally everything I can think of to optimize the program, and have triple-checked all objects. What I've done:

    - Tried different (less intensive) hashing algorithms.
    - Switched all allocations to nedmalloc instead of the default new operator.
    - Switched from stl::map to unordered_set; found the performance to still be abysmal, so I switched again to Google's dense_hash_map.
    - Converted all objects to store pointers to objects instead of the objects themselves.
    - Cached all read and write operations. Instead of reading a 16k chunk of the file and performing the math on it, I read 4MB into a buffer and read 16k chunks from there instead. Same for all write operations - they are coalesced into 4MB blocks before being written to disk.
    - Ran extensive profiling with Visual Studio 2010, AMD CodeAnalyst, and perfmon.
    - Set the thread priority to THREAD_MODE_BACKGROUND_BEGIN.
    - Set the thread priority to THREAD_PRIORITY_IDLE.
    - Added a Sleep(100) call after every loop.

    Even after all this, the application still results in a system-wide hang on certain machines under certain circumstances. Perfmon and Process Explorer show minimal CPU usage (with the sleep), no constant reads/writes from disk, few hard page faults (and only ~30k page faults in the lifetime of the application on a 5GB input file), little virtual memory (never more than 150MB), no leaked handles, no memory leaks.

    The machines I've tested it on run Windows XP through Windows 7, x86 and x64 versions included. None have less than 2GB RAM, though the problem is always exacerbated under lower memory conditions.

    I'm at a loss as to what to do next. I don't know what's causing it - I'm torn between CPU and memory as the culprit. CPU, because without the sleep and under different thread priorities the system performance changes noticeably. Memory, because there's a huge difference in how often the issue occurs when using unordered_set vs. Google's dense_hash_map.

    What's really weird? Obviously, the NT kernel design is supposed to prevent this sort of behavior from ever occurring (a user-mode application driving the system to this sort of extreme poor performance!?)... but when I compile the code and run it on OS X or Linux (it's fairly standard C++ throughout) it performs excellently, even on poor machines with little RAM and weaker CPUs.

    What am I supposed to do next? How do I know what the hell it is that Windows is doing behind the scenes that's killing system performance, when all the indicators are that the application itself isn't doing anything extreme? Any advice would be most welcome.
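
    For reference, a minimal Python sketch of the loop described above - 4MB buffered reads, per-chunk hashing into a map, a copy-out write, and the Sleep-after-every-loop throttle. The hash algorithm and chunk size are stand-ins, not the question's actual choices:

        import hashlib
        import time

        CHUNK = 4 * 1024 * 1024  # 4 MB buffer, as in the caching step described above

        def hash_and_copy(src_path, dst_path, throttle=0.1):
            hashes = {}  # chunk_id -> hash, i.e. the <chunkID, hash> map
            with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                chunk_id = 0
                while True:
                    buf = src.read(CHUNK)
                    if not buf:
                        break
                    hashes[chunk_id] = hashlib.sha1(buf).hexdigest()
                    dst.write(buf)
                    chunk_id += 1
                    time.sleep(throttle)  # analogous to the Sleep(100) workaround
            return hashes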


  • (static initialization order?!) problems with factory pattern

    - by smerlin
    Why does the following code raise an exception (in createObject's call to map::at)? Alternatively, the code (and its output) can be viewed here. Interestingly, the code works as expected if the commented lines are uncommented, with both the Microsoft and GCC compilers (see here); it even works with initMap as an ordinary static variable instead of the static getter. The only reason for this I can think of is that the order of initialization of the static registerHelper_ object (factory_helper_) and the std::map object (initMap) is wrong. However, I can't see how that could happen, because the map object is constructed on first use, and that is in the factory_helper_ constructor, so everything should be all right, shouldn't it? I am even more surprised that those doNothing() lines fix the issue, because that call to doNothing() would happen after the critical section (which currently fails) is passed anyway.

    EDIT: Debugging showed that without the call to factory_helper_.doNothing(), the constructor of factory_helper_ is never called.

        #include <iostream>
        #include <string>
        #include <map>

        #define FACTORY_CLASS(classtype) \
        extern const char classtype##_name_[] = #classtype; \
        class classtype : FactoryBase<classtype, classtype##_name_>

        namespace detail_ {
            class registerHelperBase {
            public:
                registerHelperBase() {}
            protected:
                static std::map<std::string, void* (*)(void)>& getInitMap() {
                    static std::map<std::string, void* (*)(void)>* initMap = 0;
                    if (!initMap)
                        initMap = new std::map<std::string, void* (*)(void)>();
                    return *initMap;
                }
            };

            template<class TParent, const char* ClassName>
            class registerHelper_ : registerHelperBase {
                static registerHelper_ help_;
            public:
                //void doNothing(){}
                registerHelper_() {
                    getInitMap()[std::string(ClassName)] = &TParent::factory_init_;
                }
            };

            template<class TParent, const char* ClassName>
            registerHelper_<TParent, ClassName> registerHelper_<TParent, ClassName>::help_;
        }

        class Factory : detail_::registerHelperBase {
        private:
            Factory();
        public:
            static void* createObject(const std::string& objclassname) {
                return getInitMap().at(objclassname)();
            }
        };

        template <class TClass, const char* ClassName>
        class FactoryBase {
        private:
            static detail_::registerHelper_<FactoryBase<TClass, ClassName>, ClassName> factory_helper_;
            static void* factory_init_() { return new TClass(); }
        public:
            friend class detail_::registerHelper_<FactoryBase<TClass, ClassName>, ClassName>;
            FactoryBase() {
                //factory_helper_.doNothing();
            }
            virtual ~FactoryBase() {};
        };

        template <class TClass, const char* ClassName>
        detail_::registerHelper_<FactoryBase<TClass, ClassName>, ClassName> FactoryBase<TClass, ClassName>::factory_helper_;

        FACTORY_CLASS(Test) {
        public:
            Test() {}
        };

        int main(int argc, char** argv) {
            try {
                Test* test = (Test*) Factory::createObject("Test");
            } catch (const std::exception& ex) {
                std::cerr << "caught std::exception: " << ex.what() << std::endl;
            }
        #ifdef _MSC_VER
            system("pause");
        #endif
            return 0;
        }


  • In Exim, is RBL spam rejected prior to being scanned by SpamAssassin?

    - by user955664
    I've recently been battling spam issues on our mail server. One account in particular was getting hammered with incoming spam. SpamAssassin's memory use is one of our concerns. What I've done is enable RBLs in Exim. I now see many rejection notices in the Exim log based on the various RBLs, which is good. However, when I run eximstats, the numbers seem to be the same as they were prior to enabling the RBLs. I am assuming this is because the email is still logged in some way prior to the rejection. Is that what's happening, or am I missing something else? Does anyone know if these emails are rejected prior to being processed by SpamAssassin? Or does anyone know how I'd be able to find out? Is there a standard way to generate SpamAssassin stats, similar to eximstats, so that I could compare the numbers? Thank you for your time and any advice.

    Edit: Here is the ACL section of my Exim configuration file:

        ######################################################################
        #                              ACLs                                  #
        ######################################################################

        begin acl

        # ACL that is used after the RCPT command
        check_recipient:

          # to block certain wellknown exploits, Deny for local domains if
          # local parts begin with a dot or contain @ % ! / |
          deny    domains = +local_domains
                  local_parts = ^[.] : ^.*[@%!/|]

          # to restrict port 587 to authenticated users only
          # see also daemon_smtp_ports above
          accept  hosts = +auth_relay_hosts
                  condition = ${if eq {$interface_port}{587} {yes}{no}}
                  endpass
                  message = relay not permitted, authentication required
                  authenticated = *

          # allow local users to send outgoing messages using slashes
          # and vertical bars in their local parts.
          # Block outgoing local parts that begin with a dot, slash, or vertical
          # bar but allows them within the local part.
          # The sequence \..\ is barred. The usage of @ % and ! is barred as
          # before. The motivation is to prevent your users (or their virii)
          # from mounting certain kinds of attacks on remote sites.
          deny    domains = !+local_domains
                  local_parts = ^[./|] : ^.*[@%!] : ^.*/\\.\\./

          # local source whitelist
          # accept if the source is local SMTP (i.e. not over TCP/IP).
          # Test for this by testing for an empty sending host field.
          accept  hosts = :

          # sender domains whitelist
          # accept if sender domain is in whitelist
          accept  sender_domains = +whitelist_domains

          # sender hosts whitelist
          # accept if sender host is in whitelist
          accept  hosts = +whitelist_hosts
          accept  hosts = +whitelist_hosts_ip

          # envelope senders whitelist
          # accept if envelope sender is in whitelist
          accept  senders = +whitelist_senders

          # accept mail to postmaster in any local domain, regardless of source
          accept  local_parts = postmaster
                  domains = +local_domains

          # accept mail to abuse in any local domain, regardless of source
          accept  local_parts = abuse
                  domains = +local_domains

          # accept mail to hostmaster in any local domain, regardless of source
          accept  local_parts = hostmaster
                  domains = +local_domains

          # OPTIONAL MODIFICATIONS:
          # If the page you're using to notify senders of blocked email of how
          # to get their address unblocked will use a web form to send you email so
          # you'll know to unblock those senders, then you may leave these lines
          # commented out. However, if you'll be telling your senders of blocked
          # email to send an email to [email protected], then you should
          # replace "errors" with the left side of the email address you'll be
          # using, and "example.com" with the right side of the email address and
          # then uncomment the second two lines, leaving the first one commented.
          # Doing this will mean anyone can send email to this specific address,
          # even if they're at a blocked domain, and even if your domain is using
          # blocklists.

          # accept mail to [email protected], regardless of source
          # accept  local_parts = errors
          #         domains = example.com

          # deny so-called "legal" spammers
          deny    message = Email blocked by LBL - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  sender_domains = +blacklist_domains

          # deny using hostname in bad_sender_hosts blacklist
          deny    message = Email blocked by BSHL - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  hosts = +bad_sender_hosts

          # deny using IP in bad_sender_hosts blacklist
          deny    message = Email blocked by BSHL - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  hosts = +bad_sender_hosts_ip

          # deny using email address in blacklist_senders
          deny    message = Email blocked by BSAL - to unblock see http://www.example.com/
                  domains = +use_rbl_domains
                  senders = +blacklist_senders

          # By default we do NOT require sender verification.
          # Sender verification denies unless sender address can be verified:
          # If you want to require sender verification, i.e., that the sending
          # address is routable and mail can be delivered to it, then
          # uncomment the next line. If you do not want to require sender
          # verification, leave the line commented out
          #require verify = sender

          # deny using .spamhaus
          deny    message = Email blocked by SPAMHAUS - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  dnslists = sbl.spamhaus.org

          # deny using ordb
          # deny    message = Email blocked by ORDB - to unblock see http://www.example.com/
          #         # only for domains that do want to be tested against RBLs
          #         domains = +use_rbl_domains
          #         dnslists = relays.ordb.org

          # deny using sorbs smtp list
          deny    message = Email blocked by SORBS - to unblock see http://www.example.com/
                  # only for domains that do want to be tested against RBLs
                  domains = +use_rbl_domains
                  dnslists = dnsbl.sorbs.net=127.0.0.5

          # Next deny stuff from more "fuzzy" blacklists
          # but do bypass all checking for whitelisted host names
          # and for authenticated users

          # deny using spamcop
          deny    message = Email blocked by SPAMCOP - to unblock see http://www.example.com/
                  hosts = !+relay_hosts
                  domains = +use_rbl_domains
                  !authenticated = *
                  dnslists = bl.spamcop.net

          # deny using njabl
          deny    message = Email blocked by NJABL - to unblock see http://www.example.com/
                  hosts = !+relay_hosts
                  domains = +use_rbl_domains
                  !authenticated = *
                  dnslists = dnsbl.njabl.org

          # deny using cbl
          deny    message = Email blocked by CBL - to unblock see http://www.example.com/
                  hosts = !+relay_hosts
                  domains = +use_rbl_domains
                  !authenticated = *
                  dnslists = cbl.abuseat.org

          # deny using all other sorbs ip-based blocklist besides smtp list
          deny    message = Email blocked by SORBS - to unblock see http://www.example.com/
                  hosts = !+relay_hosts
                  domains = +use_rbl_domains
                  !authenticated = *
                  dnslists = dnsbl.sorbs.net!=127.0.0.6

          # deny using sorbs name based list
          deny    message = Email blocked by SORBS - to unblock see http://www.example.com/
                  domains = +use_rbl_domains
                  # rhsbl list is name based
                  dnslists = rhsbl.sorbs.net/$sender_address_domain

          # accept if address is in a local domain as long as recipient can be verified
          accept  domains = +local_domains
                  endpass
                  message = "Unknown User"
                  verify = recipient

          # accept if address is in a domain for which we relay as long as recipient
          # can be verified
          accept  domains = +relay_domains
                  endpass
                  verify = recipient

          # accept if message comes from a host for which we are an outgoing relay
          # recipient verification is omitted because many MUA clients don't cope
          # well with SMTP error responses. If you are actually relaying from MTAs
          # then you should probably add recipient verify here
          accept  hosts = +relay_hosts

          accept  hosts = +auth_relay_hosts
                  endpass
                  message = authentication required
                  authenticated = *

          deny    message = relay not permitted

          # default at end of acl causes a "deny", but line below will give
          # an explicit error message:
          deny    message = relay not permitted

        # ACL that is used after the DATA command
        check_message:
          accept
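
    One way to compare the numbers directly, rather than relying on eximstats: count the log lines themselves. A rough Python sketch, assuming the default Exim main log format (where "<=" marks message arrival and "rejected RCPT" marks an ACL-time rejection such as an RBL hit):

        import sys
        from collections import Counter

        # tally accepted vs. RCPT-time rejected messages in an Exim main log
        counts = Counter()
        with open(sys.argv[1]) as log:
            for line in log:
                if 'rejected RCPT' in line:   # ACL (e.g. DNS blacklist) rejections
                    counts['rejected at RCPT'] += 1
                elif ' <= ' in line:          # '<=' marks a message arrival
                    counts['accepted'] += 1

        for label, n in sorted(counts.items()):
            print(label, n)

    Messages rejected at RCPT time never reach DATA, so they would never be handed to SpamAssassin; counting them separately like this makes that visible.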


  • One letter game problem?

    - by Alex K
    Recently at a job interview I was given the following problem. Write a script capable of running on the command line as python:

    - It should take in two words on the command line (or, optionally, it can query the user to supply the two words via the console).
    - Given those two words: (a) ensure they are of equal length, and (b) ensure they are both present in the dictionary of valid English words that you downloaded.
    - If so, compute whether you can reach the second word from the first by a series of steps, as follows: (a) you can change one letter at a time, (b) each time you change a letter the resulting word must also exist in the dictionary, and (c) you cannot add or remove letters.
    - If the two words are reachable, the script should print out the path which leads, as a single shortest path, from one word to the other.
    - You can use /usr/share/dict/words for your dictionary of words.

    My solution consisted of using breadth-first search to find a shortest path between the two words. But apparently that wasn't good enough to get the job :( Would you guys know what I could have done wrong? Thank you so much.

        import collections
        import functools
        import re

        def time_func(func):
            import time
            def wrapper(*args, **kwargs):
                start = time.time()
                res = func(*args, **kwargs)
                timed = time.time() - start
                setattr(wrapper, 'time_taken', timed)
                return res
            functools.update_wrapper(wrapper, func)
            return wrapper

        class OneLetterGame:
            def __init__(self, dict_path):
                self.dict_path = dict_path
                self.words = set()

            def run(self, start_word, end_word):
                '''Runs the one letter game with the given start and end words.
                '''
                assert len(start_word) == len(end_word), \
                    'Start word and end word must be of the same length.'
                self.read_dict(len(start_word))
                path = self.shortest_path(start_word, end_word)
                if not path:
                    print 'There is no path between %s and %s (took %.2f sec.)' % (
                        start_word, end_word, self.shortest_path.time_taken)
                else:
                    print 'The shortest path (found in %.2f sec.) is:\n=> %s' % (
                        self.shortest_path.time_taken, ' -- '.join(path))

            def _bfs(self, start):
                '''Implementation of breadth first search as a generator.

                The portion of the graph to explore is given on demand using
                get_neighbours. Care was taken so that a vertex / node is
                explored only once.
                '''
                queue = collections.deque([(None, start)])
                inqueue = set([start])
                while queue:
                    parent, node = queue.popleft()
                    yield parent, node
                    new = set(self.get_neighbours(node)) - inqueue
                    inqueue = inqueue | new
                    queue.extend([(node, child) for child in new])

            @time_func
            def shortest_path(self, start, end):
                '''Returns the shortest path from start to end using bfs.
                '''
                assert start in self.words, 'Start word not in dictionary.'
                assert end in self.words, 'End word not in dictionary.'
                paths = {None: []}
                for parent, child in self._bfs(start):
                    paths[child] = paths[parent] + [child]
                    if child == end:
                        return paths[child]
                return None

            def get_neighbours(self, word):
                '''Gets every word one letter away from a given word.

                We do not keep these words in memory because bfs accesses a
                given vertex only once.
                '''
                neighbours = []
                p_word = ['^' + word[0:i] + '\w' + word[i+1:] + '$'
                          for i, w in enumerate(word)]
                p_word = '|'.join(p_word)
                for w in self.words:
                    if w != word and re.match(p_word, w, re.I|re.U):
                        neighbours += [w]
                return neighbours

            def read_dict(self, size):
                '''Loads every word of a specific size from the dictionary into
                memory.
                '''
                for l in open(self.dict_path):
                    l = l.decode('latin-1').strip().lower()
                    if len(l) == size:
                        self.words.add(l)

        if __name__ == '__main__':
            import sys
            if len(sys.argv) not in [3, 4]:
                print 'Usage: python one_letter_game.py start_word end_word'
            else:
                g = OneLetterGame(dict_path='/usr/share/dict/words')
                try:
                    g.run(*sys.argv[1:])
                except AssertionError, e:
                    print e
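
    One plausible gap, if the interviewer cared about complexity: get_neighbours regex-scans the entire word set once per dequeued word, making the BFS roughly O(V x N). A common alternative is to bucket words by wildcard pattern once, when loading the dictionary, so neighbours are found in O(word length) lookups. A sketch (helper names are mine, not the original code's):

        from collections import defaultdict

        def build_buckets(words):
            # "cat" lands in buckets "_at", "c_t", "ca_"; two words are one
            # letter apart exactly when they share a bucket
            buckets = defaultdict(set)
            for w in words:
                for i in range(len(w)):
                    buckets[w[:i] + '_' + w[i+1:]].add(w)
            return buckets

        def neighbours(word, buckets):
            for i in range(len(word)):
                for w in buckets[word[:i] + '_' + word[i+1:]]:
                    if w != word:
                        yield w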


  • Server hung with "blocked for more than 120 seconds", diskless

    - by alterpub
    I have a server which hangs every 2-5 days. dmesg shows the following:

        kernel: [490894.231753] INFO: task munin-html:10187 blocked for more than 120 seconds.
        kernel: [490894.231799] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        kernel: [490894.231843] munin-html    D 0000000000000000     0 10187   9796 0x00000000
        kernel: [490894.231878]  ffff88063174b968 0000000000000082 ffff88063174b8d8 0000000000015b80
        kernel: [490894.231930]  ffff88063174bfd8 0000000000015b80 ffff88063174bfd8 ffff88062fe644d0
        kernel: [490894.231982]  0000000000015b80 0000000000015b80 ffff88063174bfd8 0000000000015b80
        kernel: [490894.232033] Call Trace:
        kernel: [490894.232059]  [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50
        kernel: [490894.232089]  [<ffffffff815a1f13>] io_schedule+0x73/0xc0
        kernel: [490894.232115]  [<ffffffff8117fd25>] sync_buffer+0x45/0x50
        kernel: [490894.232143]  [<ffffffff815a258f>] __wait_on_bit+0x5f/0x90
        kernel: [490894.232170]  [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50
        kernel: [490894.232197]  [<ffffffff815a2638>] out_of_line_wait_on_bit+0x78/0x90
        kernel: [490894.232227]  [<ffffffff81080250>] ? wake_bit_function+0x0/0x40
        kernel: [490894.232255]  [<ffffffff8117fcd6>] __wait_on_buffer+0x26/0x30
        kernel: [490894.232288]  [<ffffffffa00131be>] squashfs_read_data+0x1be/0x520 [squashfs]
        kernel: [490894.232320]  [<ffffffff8114f0f1>] ? __mem_cgroup_try_charge+0x71/0x450
        kernel: [490894.232350]  [<ffffffffa0013963>] squashfs_cache_get+0x1c3/0x320 [squashfs]
        kernel: [490894.232381]  [<ffffffffa00136eb>] ? squashfs_copy_data+0x10b/0x130 [squashfs]
        kernel: [490894.232426]  [<ffffffff815a3dbe>] ? _raw_spin_lock+0xe/0x20
        kernel: [490894.232454]  [<ffffffffa0013b68>] ? squashfs_read_metadata+0x48/0xf0 [squashfs]
        kernel: [490894.232499]  [<ffffffffa0013ae1>] squashfs_get_datablock+0x21/0x30 [squashfs]
        kernel: [490894.232544]  [<ffffffffa0015026>] squashfs_readpage+0x436/0x4a0 [squashfs]
        kernel: [490894.232575]  [<ffffffff8111a375>] ? __inc_zone_page_state+0x35/0x40
        kernel: [490894.232606]  [<ffffffff8110d072>] __do_page_cache_readahead+0x172/0x210
        kernel: [490894.232636]  [<ffffffff8110d131>] ra_submit+0x21/0x30
        kernel: [490894.232662]  [<ffffffff811045f3>] filemap_fault+0x3f3/0x450
        kernel: [490894.232691]  [<ffffffff812bd156>] ? prio_tree_insert+0x256/0x2b0
        kernel: [490894.232726]  [<ffffffffa009225d>] aufs_fault+0x11d/0x170 [aufs]
        kernel: [490894.232755]  [<ffffffff8111f6d4>] __do_fault+0x54/0x560
        kernel: [490894.232782]  [<ffffffff81122f39>] handle_mm_fault+0x1b9/0x440
        kernel: [490894.232811]  [<ffffffff811286f5>] ? do_mmap_pgoff+0x335/0x380
        kernel: [490894.232840]  [<ffffffff815a7af5>] do_page_fault+0x125/0x350
        kernel: [490894.232867]  [<ffffffff815a4675>] page_fault+0x25/0x30

    OS info:

        cat /etc/issue.net
        Ubuntu 10.04.2 LTS

        uname -a
        Linux Shard1Host3 2.6.35-32-server #68~lucid1-Ubuntu SMP Wed Mar 28 18:33:00 UTC 2012 x86_64 GNU/Linux

    This system boots via LTSP and has no hard drives; it also has a lot of memory, 24GB:

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         24152      17090       7061          0         50        494
        -/+ buffers/cache:      16545       7607
        Swap:            0          0          0

    I put the vmstat info here, but I don't think it will give results after a reboot:

        vmstat 1 30
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0      0 7231156  52196 506400    0    0     0     0  172  143  7  0 92  0
         1  0      0 7231024  52196 506400    0    0     0     0 7859 16233  5  0 94  0
         0  0      0 7231024  52196 506400    0    0     0     0 7870 16446  2  0 98  0
         0  0      0 7230900  52196 506400    0    0     0     0 7308 15661  5  0 95  0
         0  0      0 7231100  52196 506400    0    0     0     0 7960 16543  6  0 94  0
         0  0      0 7231100  52196 506400    0    0     0     0 7542 16047  5  1 94  0
         3  0      0 7231100  52196 506400    0    0     0     0 7709 16621  3  0 96  0
         0  0      0 7231220  52196 506400    0    0     0     0 7857 16552  4  0 96  0
         0  0      0 7231220  52196 506400    0    0     0     0 7192 15491  6  0 94  0
         0  0      0 7231220  52196 506400    0    0     0     0 7423 15792  5  1 94  0
         1  0      0 7231260  52196 506404    0    0     0     0 7686 16296  2  0 98  0
         0  0      0 7231260  52196 506404    0    0     0     0 6976 15183  5  0 95  0
         0  0      0 7231260  52196 506404    0    0     0     0 7303 15600  4  0 95  0
         0  0      0 7231320  52196 506404    0    0     0     0 7967 16241  1  0 98  0
         0  0      0 7231444  52196 506404    0    0     0     0 6948 15113  6  0 94  0
         0  0      0 7231444  52196 506404    0    0     0     0 7931 16181  6  0 94  0
         1  0      0 7231516  52196 506404    0    0     0     0 7715 15829  6  0 94  0
         0  0      0 7231516  52196 506404    0    0     0     0 7771 16036  2  0 97  0
         0  0      0 7231268  52196 506404    0    0     0     0 7782 16202  6  0 94  0
         1  0      0 7231212  52196 506404    0    0     0     0 7457 15622  4  0 96  0
         0  0      0 7231212  52196 506404    0    0     0     0 7573 16045  2  0 98  0
         2  0      0 7231216  52196 506404    0    0     0     0 7689 16076  6  0 94  0
         0  0      0 7231424  52196 506404    0    0     0     0 7429 15650  4  0 95  0
         3  0      0 7231424  52196 506404    0    0     0     0 7534 16168  3  0 97  0
         1  0      0 7230548  52196 506404    0    0     0     0 8559 15926  7  1 92  0
         0  0      0 7230672  52196 506404    0    0     0     0 7720 15905  2  0 98  0
         0  0      0 7230548  52196 506404    0    0     0     0 7677 16313  5  0 95  0
         1  0      0 7230676  52196 506404    0    0     0     0 7209 15432  5  0 95  0
         0  0      0 7230800  52196 506404    0    0     0     0 7522 15861  2  0 98  0
         0  0      0 7230552  52196 506404    0    0     0     0 7760 16661  5  0 95  0

    In the Munin (monitoring system) graphs I see, shortly before the server hangs:

    - Disk (nbd0) IOs per device: read 289m, but the weekly average is 2.09m
    - Disk (nbd0) throughput per device: read 4.73k, but the weekly average is 108.76
    - Disk (nbd0) utilization per device: 100%, but the weekly average is 1.2%
    - eth0 traffic was low: in/out only 2Mbps
    - Number of threads increased to 566, usually 392
    - Fork rate 1.08, but usually 2.82
    - VMStat (process states) increased to 17.77 (from 0, as far as I could see in the graph)
    - CPU usage: iowait 880.46%, when usually 6.67%

    It would be great if somebody could help me understand what's going on.


  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem

    Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e.

    These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context.

    XML Analogues

    I recently needed to trawl through gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

        <foo user="me">
            <baz key="zoidberg" value="squid" />
            <baz key="leela" value="cyclops" />
            <baz key="fry" value="rube" />
        </foo>

    And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one-liner that accomplishes this task is likely to be inscrutable.

    Fortunately, in XML-land we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet, which is installing as I type this and which I look forward to using in the future.

    JSON Solutions?

    So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:

        {
            "firstName": "Bender",
            "lastName": "Robot",
            "age": 200,
            "address": {
                "streetAddress": "123",
                "city": "New York",
                "state": "NY",
                "postalCode": "1729"
            },
            "phoneNumber": [
                { "type": "home", "number": "666 555-1234" },
                { "type": "fax", "number": "666 555-4567" }
            ]
        }

    and wanted to find the average number of phone numbers each person had, I could do something like this with XPath:

        fn:avg(/fn:count(phoneNumber))

    Questions

    - Are there any command-line tools that can "query" JSON files in this way?
    - If you have to process a bunch of JSON files on a Unix command line, what tools do you use?
    - Heck, is there even work being done to make a query language like this for JSON?
    - If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas?

    I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong, and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data, shell tools are needed.

    Related Questions

    - Grep and Sed Equivalent for XML Command Line Processing
    - Is there a query language for JSON?
    - JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
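
    Absent such a tool, the fallback the question alludes to - a short script over the language's JSON library - would look something like this in Python for the phone-number average (assuming a file that holds a JSON array of objects shaped like the sample above):

        import json
        import sys

        # average number of phone numbers per person across a JSON array of people
        with open(sys.argv[1]) as f:
            people = json.load(f)

        avg = sum(len(p.get("phoneNumber", [])) for p in people) / len(people)
        print(avg)

    It works, but it is exactly the kind of one-off script the question hopes a JSON-native query tool would replace.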


  • "Attach or Add an entity that is not new...loaded from another DataContext. This is not supported."

    - by sah302
    Similar error as in other questions, but not quite the same: I am not trying to attach anything. What I am trying to do is insert a new row into a linking table, specifically UserAccomplishment. Relations are set in LINQ to the User and Accomplishment tables. I have a generic insert function:

        Public Function insertRow(ByVal entity As ImplementationType) As Boolean
            If entity IsNot Nothing Then
                Dim lcfdatacontext As New LCFDataContext()
                Try
                    lcfdatacontext.GetTable(Of ImplementationType)().InsertOnSubmit(entity)
                    lcfdatacontext.SubmitChanges()
                    lcfdatacontext.Dispose()
                    Return True
                Catch ex As Exception
                    Return False
                End Try
            Else
                Return False
            End If
        End Function

    If you give UserAccomplishment the two appropriate objects, this will naturally crap out if either the User or the Accomplishment already exists. It only works when both the user and the accomplishment don't exist. I expected this behavior.

    What does work is simply giving the userAccomplishment object a user.Id and accomplishment.Id and populating the rest of the fields. This works, but is kind of awkward to use in my app; it would be much easier to simply pass in both objects and have it work out what already exists and what doesn't. Okay, so I made the following (please ignore the fact that this is horribly inefficient, because I know it is):

        Public Class UserAccomplishmentDao
            Inherits EntityDao(Of UserAccomplishment)

            Public Function insertLinkerObjectRow(ByVal userAccomplishment As UserAccomplishment)
                Dim insertSuccess As Boolean = False
                If Not userAccomplishment Is Nothing Then
                    Dim userDao As New UserDao()
                    Dim accomplishmentDao As New AccomplishmentDao()
                    Dim user As New User()
                    Dim accomplishment As New Accomplishment()

                    ' see if either object already exists in the db
                    user = userDao.getOneByValueOfProperty("Id", userAccomplishment.User.Id)
                    accomplishment = accomplishmentDao.getOneByValueOfProperty("Id", userAccomplishment.Accomplishment.Id)

                    If user Is Nothing And accomplishment Is Nothing Then
                        ' neither the user nor the accomplishment exists; both are new, so insert them both, typical insert
                        insertSuccess = Me.insertRow(userAccomplishment)
                    ElseIf user Is Nothing And Not accomplishment Is Nothing Then
                        ' user is new, accomplishment is not new, so just insert the user, and the relation in userAccomplishment
                        Dim userWithExistingAccomplishment As New UserAccomplishment(userAccomplishment.User, userAccomplishment.Accomplishment.Id, userAccomplishment.LastUpdatedBy)
                        insertSuccess = Me.insertRow(userWithExistingAccomplishment)
                    ElseIf Not user Is Nothing And accomplishment Is Nothing Then
                        ' user is not new, accomplishment is new, so just insert the accomplishment, and the relation in userAccomplishment
                        Dim existingUserWithAccomplishment As New UserAccomplishment(userAccomplishment.UserId, userAccomplishment.Accomplishment, userAccomplishment.LastUpdatedBy)
                        insertSuccess = Me.insertRow(existingUserWithAccomplishment)
                    Else
                        ' both are not new, just add the relation
                        Dim userAccomplishmentBothExist As New UserAccomplishment(userAccomplishment.User.Id, userAccomplishment.Accomplishment.Id, userAccomplishment.LastUpdatedBy)
                        insertSuccess = Me.insertRow(userAccomplishmentBothExist)
                    End If
                End If
                Return insertSuccess
            End Function
        End Class

    Here I basically check whether the supplied user and accomplishment already exist in the db, and if so, call an appropriate constructor that leaves whatever already exists empty but supplies the rest of the information, so the insert can succeed.

    However, upon trying an insert:

        Dim result As Boolean = Me.userAccomplishmentDao.insertLinkerObjectRow(userAccomplishment)

    in which the user already exists but the accomplishment does not (the 99% typical scenario), I get the error:

    "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported."

    I have debugged this multiple times now and am not sure why this is occurring. If either User or Accomplishment exists, I am not including it in the final object I try to insert, so nothing appears to be added twice. Even in debug, upon insert, the object was set to empty: the accomplishment is new and the user is empty.

    1) Why is it still saying that, and how can I fix it... using my current structure?

    2) Pre-emptive 'use the repository pattern' answers: I know this way kind of sucks in general and I should be using the repository pattern. However, I can't use that in the current project because I don't have time to refactor, due to my non-existent knowledge of it and time constraints. The usage of the app is going to be so small that the inefficient use of DataContexts and whatnot won't matter much. I can refactor it once it's up and running, but for now I just need to push through with my current structure.

    Edit: I also just tested this when both already exist, inserting only each object's ID into the table - that works. So I guess I could manually insert whichever object doesn't exist as a single insert, then put the IDs into the linking table, but I still don't know why, when one object exists and I make it empty, it doesn't work.


  • Uneditable file and unreadable (for further processing) file (WHY?) after processing it through C++

    - by mgj
    Hi :) This might look like a very long question, I understand, but trust me on this - it's not that long. I am not able to identify why, after processing, this text can no longer be read or edited. I tried using the ord() function in Python to check whether the text contains any Unicode (non-ASCII) characters apart from the ASCII ones, and I found quite a number of them. I have a strong feeling that this could be due to the original text itself (the INPUT).

    Input file: just copy-paste it into a file "acle5v1.txt".

    The objective of the code below is to check for upper-case characters and convert them to lower case, and also to remove all punctuation, so that these words can be taken for further processing (word alignment):

        #include <iostream>
        #include <fstream>
        #include <ctype.h>
        #include <cstring>
        using namespace std;

        ifstream fin2("acle5v1.txt");
        ofstream fin3("acle5v1_op.txt");
        ofstream fin4("chkcharadded.txt");
        ofstream fin5("chkcharntadded.txt");
        ofstream fin6("chkprintchar.txt");
        ofstream fin7("chknonasci.txt");
        ofstream fin8("nonprinchar.txt");

        int main()
        {
            char ch, ch1;
            fin2.seekg(0);
            fin3.seekp(0);
            int flag = 0;
            while (!fin2.eof())
            {
                ch1 = ch;
                fin2.get(ch);
                if (isprint(ch)) // if the character is printable
                    flag = 1;
                if (flag)
                {
                    fin6 << "Printable character:\t" << ch << "\t" << (int)ch << endl;
                    flag = 0;
                }
                else
                {
                    fin8 << "Non printable character caught:\t" << ch << "\t" << int(ch) << endl;
                }
                if (isalnum(ch) || ch == '@' || ch == ' ') // checks for alphanumeric characters
                {
                    fin4 << "char added: " << ch << "\tits ascii value: " << int(ch) << endl;
                    if (isupper(ch))
                    {
                        //tolower(ch);
                        fin3 << (char)tolower(ch);
                    }
                    else
                    {
                        fin3 << ch;
                    }
                }
                else if ((ch == '\t' || ch == '.' || ch == ',' || ch == '#' || ch == '?' || ch == '!' || ch == '"' || ch != ';' || ch != ':') && ch1 != ' ')
                {
                    fin3 << ' ';
                }
                else if ((ch == '\t' || ch == '.' || ch == ',' || ch == '#' || ch == '?' || ch == '!' || ch == '"' || ch != ';' || ch != ':') && ch1 == ' ')
                {
                    //fin3<<" ';
                }
                else if (!(int(ch) >= 0 && int(ch) <= 127))
                {
                    fin5 << "Char of ascii within range not added: " << ch << "\tits ascii value: " << int(ch) << endl;
                }
                else
                {
                    fin7 << "Non ascii character caught(could be a -ve value also)\t" << ch << int(ch) << endl;
                }
            }
            return 0;
        }

    I have similar code to the above written in Python, which again gives me output that is not readable and not editable. The Python code looks like this:

        #!/usr/bin/python
        # -*- coding: UTF-8 -*-
        import sys

        input_file = sys.argv[1]
        output_file = sys.argv[2]
        list1 = []

        f = open(input_file)
        for line in f:
            line = line.strip()
            #line = line.rstrip('.')
            line = line.replace('.', '')
            line = line.replace(',', '')
            line = line.replace('#', '')
            line = line.replace('?', '')
            line = line.replace('!', '')
            line = line.replace('"', '')
            line = line.replace('?', '')
            line = line.replace('|', '')
            line = line.lower()
            list1.append(line)
        f.close()

        f1 = open(output_file, 'w')
        f1.write(' '.join(list1))
        f1.close()

    The file takes its input and output names at run time, as in: python punc_remover.py acle5v1.txt acle5v1_op.txt

    The output of this script is in "acle5v1_op.txt". This particular output file is needed for further processing, and it is the UNREADABLE and UNEDITABLE file that I cannot use. I need it for word alignment in NLP.

    I tried reading this output with the following program:

        #include <iostream>
        #include <fstream>
        using namespace std;

        ifstream fin1("acle5v1_op.txt");
        ofstream fout1("chckread_acle5v1_op.txt");
        ofstream fout2("chcknotread_acle5v1_op.txt");

        int main()
        {
            char ch;
            int flag = 0;
            long int r = 0;
            long int nr = 0;
            while (!(fin1))
            {
                fin1.get(ch);
                if (ch)
                {
                    flag = 1;
                }
                if (flag)
                {
                    fout1 << ch;
                    flag = 0;
                    r++;
                }
                else
                {
                    fout2 << "Char not been able to be read from source file\n";
                    nr++;
                }
            }
            cout << "Number of characters able to be read: " << r;
            cout << endl << "Number of characters not been able to be read: " << nr;
            return 0;
        }

    which prints a character if it is readable and doesn't print it if it isn't. I observed that the output of both files is blank, and thus I drew the conclusion that "acle5v1_op.txt" is UNREADABLE AND UNEDITABLE.

    Could you please help me with how to deal with this problem? To give you some statistics about the original input file "acle5v1.txt": it has around 3441 lines and around 3 million characters. Keeping in mind the number of characters, your editor might or might not manage to open the file. I was able to open it in gedit on Fedora 10, which I am currently using - this is just to note that opening it in a particular editor was not actually an issue, at least in my case.

    Can I use scripting languages like Python and Perl to deal with this problem, and if yes, how? Please be specific in that regard, as I am a novice at Perl and Python. Or could you please tell me how to solve this problem using C++ itself? Thank you :) I am really looking forward to some help or guidance on how to approach this problem.
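
    Since the question asks whether a scripting language can deal with this: a minimal Python sketch that re-reads the produced file, decodes it as Latin-1 (the encoding the question's own Python script assumes), and keeps only printable ASCII plus whitespace, so editors and downstream tools can open it cleanly. File names follow the question; the output name is made up:

        # decode as Latin-1 (as the question's own script does) and drop
        # everything outside printable ASCII and basic whitespace
        with open('acle5v1_op.txt', 'rb') as src:
            text = src.read().decode('latin-1')

        cleaned = ''.join(ch for ch in text if 32 <= ord(ch) < 127 or ch in '\n\t')

        with open('acle5v1_clean.txt', 'w') as dst:
            dst.write(cleaned)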


  • Break a class in twain, or impose an interface for restricted access?

    - by bedwyr
    What's the best way of partitioning a class when its functionality needs to be externally accessed in different ways by different classes? Hopefully the following example will make the question clear :) I have a Java class which accesses a single location in a directory allowing external classes to perform read/write operations to it. Read operations return usage stats on the directory (e.g. available disk space, number of writes, etc.); write operations, obviously, allow external classes to write data to the disk. These methods always work on the same location, and receive their configuration (e.g. which directory to use, min disk space, etc.) from an external source (passed to the constructor). This class looks something like this: public class DiskHandler { public DiskHandler(String dir, int minSpace) { ... } public void writeToDisk(String contents, String filename) { int space = getAvailableSpace(); ... } public void getAvailableSpace() { ... } } There's quite a bit more going on, but this will do to suffice. This class needs to be accessed differently by two external classes. One class needs access to the read operations; the other needs access to both read and write operations. public class DiskWriter { DiskHandler diskHandler; public DiskWriter() { diskHandler = new DiskHandler(...); } public void doSomething() { diskHandler.writeToDisk(...); } } public class DiskReader { DiskHandler diskHandler; public DiskReader() { diskHandler = new DiskHandler(...); } public void doSomething() { int space = diskHandler.getAvailableSpace(...); } } At this point, both classes share the same class, but the class which should only read has access to the write methods. Solution 1 I could break this class into two. One class would handle read operations, and the other would handle writes: // NEW "UTILITY" CLASSES public class WriterUtil { private ReaderUtil diskReader; public WriterUtil(String dir, int minSpace) { ... diskReader = new ReaderUtil(dir, minSpace); } public void writeToDisk(String contents, String filename) { int = diskReader.getAvailableSpace(); ... } } public class ReaderUtil { public ReaderUtil(String dir, int minSpace) { ... } public void getAvailableSpace() { ... } } // MODIFIED EXTERNALLY-ACCESSING CLASSES public class DiskWriter { WriterUtil diskWriter; public DiskWriter() { diskWriter = new WriterUtil(...); } public void doSomething() { diskWriter.writeToDisk(...); } } public class DiskReader { ReaderUtil diskReader; public DiskReader() { diskReader = new ReaderUtil(...); } public void doSomething() { int space = diskReader.getAvailableSpace(...); } } This solution prevents classes from having access to methods they should not, but it also breaks encapsulation. The original DiskHandler class was completely self-contained and only needed config parameters via a single constructor. By breaking apart the functionality into read/write classes, they both are concerned with the directory and both need to be instantiated with their respective values. In essence, I don't really care to duplicate the concerns. Solution 2 I could implement an interface which only provisions read operations, and use this when a class only needs access to those methods. The interface might look something like this: public interface Readable { int getAvailableSpace(); } The Reader class would instantiate the object like this: Readable diskReader; public DiskReader() { diskReader = new DiskHandler(...); } This solution seems brittle, and prone to confusion in the future. 
    It doesn't guarantee that developers will use the correct interface in the future. Any change to the implementation of DiskHandler might also require updating the interface as well as the accessing classes. I like it better than the previous solution, but not by much. Frankly, neither of these solutions seems perfect, but I'm not sure whether one should be preferred over the other. I really don't want to break the original class up, but I also don't know if the interface buys me much in the long run. Are there other solutions I'm missing?
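    A third option, which keeps DiskHandler as the single implementation, is to split only the views of it into two small role interfaces, with the write interface extending the read interface. This is just a sketch; the interface names are invented, not from the original code:

    // Hypothetical role interfaces layered over the single DiskHandler implementation.
    public interface DiskReads {
        int getAvailableSpace();
    }

    public interface DiskWrites extends DiskReads {
        void writeToDisk(String contents, String filename);
    }

    // DiskHandler stays self-contained and implements both roles.
    public class DiskHandler implements DiskWrites {
        public DiskHandler(String dir, int minSpace) { /* ... */ }
        public int getAvailableSpace() { /* ... */ return 0; }
        public void writeToDisk(String contents, String filename) { /* ... */ }
    }

    // Callers declare only the capability they need:
    // DiskReads reader  = new DiskHandler(dir, minSpace);   // read-only view
    // DiskWrites writer = new DiskHandler(dir, minSpace);   // read/write view

    Encapsulation stays in one class, and the compiler, rather than convention, keeps a reader from calling write methods; the brittleness concern from Solution 2 is reduced because there is still exactly one implementation to keep in sync with the interfaces.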

    Read the article

  • BIND returns SERVFAIL when querying for its authoritative domain

    - by estol
    Hi there Serverfault folks! First of all: sorry about the title, I had trouble coming up with a proper title. I have a little home server set up, for internet sharing, samba, basic http, dlna mediaserver and what not, and I happened to have a domain at hand, so I thought why not direct it to this computer? I have BIND 9.8.0 installed, and - afaik - configured it properly. For a few days, the public view did not work, and I really did not care, since the local view worked. But now suddenly, even the local view fails. If I try to query the nameserver for anything in my domain, it returns the following error: $ nslookup andromeda.dafaces.com ;; Got SERVFAIL reply from ::1, trying next server ;; Got SERVFAIL reply from ::1, trying next server Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find andromeda.dafaces.com.dafaces.com: SERVFAIL Also, the public view points to the old IP address of the domain, probably because of the same error. Some information about the system: $ uname -a Linux tressis 2.6.37-ARCH #1 SMP PREEMPT Tue Mar 15 09:21:17 CET 2011 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 5000+ AuthenticAMD GNU/Linux $ named -v BIND 9.8.0 And the named.conf file: # cat /etc/named.conf // // /etc/named.conf // include "/etc/rndc.key"; #controls { # inet 127.0.0.1 allow {localhost; } keys { "dnskulcs"; }; #}; options { directory "/var/named"; pid-file "/var/run/named/named.pid"; auth-nxdomain yes; datasize default; // Uncomment these to enable IPv6 connections support // IPv4 will still work: listen-on-v6 { any; }; listen-on { any; }; // Add this for no IPv4: // listen-on { none; }; // Default security settings. // allow-recursion { 127.0.0.1; ::1; 192.168.1.0/24; }; // allow-recursion { any; }; allow-query { any; }; allow-transfer { 127.0.0.1; ::1; 92.243.14.172; 87.98.164.164; 88.191.64.64; }; allow-update { key "dnskulcs"; }; version none; hostname none; server-id none; zone-statistics yes; forwarders { 213.46.246.53; 213.26.246.54; 8.8.8.8; 8.8.4.4; 192.188.242.65; 193.227.196.3; 2001:470:20::2; }; }; view "local" { match-clients { 192.168.1.0/24; 127.0.0.1; ::1; fec0:0:0:ffff::/64; }; recursion yes; zone "localhost" IN { type master; file "localhost.zone"; allow-transfer { any; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "127.0.0.zone"; allow-transfer { any; }; }; zone "." IN { type hint; file "root.hint"; }; zone "dafaces.com" IN { type master; file "internal/dafaces.com.fw"; allow-update { key "dnskulcs"; }; }; zone "1.168.192.in-addr.arpa" IN { type master; file "internal/dafaces.com.rev"; allow-update { key "dnskulcs"; }; }; }; view "public" { match-clients { any;}; recursion no; zone "dafaces.com" IN { type master; file "external/dafaces.com.fw"; allow-transfer { 87.98.164.164; 195.234.42.1; 88.191.64.64; }; }; }; //zone "example.org" IN { // type slave; // file "example.zone"; // masters { // 192.168.1.100; // }; // allow-query { any; }; // allow-transfer { any; }; //}; logging { channel xfer-log { file "/var/log/named.log"; print-category yes; print-severity yes; print-time yes; severity info; }; category xfer-in { xfer-log; }; category xfer-out { xfer-log; }; category notify { xfer-log; }; }; All help would be highly appreciated! EDIT: Zone files: # cat /var/named/internal/dafaces.com.fw $ORIGIN . $TTL 3600 ; 1 hour dafaces.com IN SOA tressis.dafaces.com. postmaster.dafaces.com. ( 2011032201 ; serial 28800 ; refresh (8 hours) 7200 ; retry (2 hours) 2419200 ; expire (4 weeks) 3600 ; minimum (1 hour) ) NS tressis.dafaces.com.
    A 192.168.1.1 MX 10 mail.dafaces.com. $ORIGIN _tcp.dafaces.com. _http SRV 0 5 80 www.dafaces.com. _ssh SRV 0 5 22 tressis.dafaces.com. $ORIGIN dafaces.com. acrisius A 192.168.1.230 andromeda A 192.168.1.7 andromeda-win7 CNAME andromeda aspasia A 192.168.1.233 athena A 192.168.1.232 callisto A 192.168.1.102 db A 192.168.1.1 management A 192.168.1.1 ; web management for the router functions haley A 192.168.1.5 hoth A 192.168.1.101 mail A 192.168.1.1 satelite A 192.168.1.20 sony-player A 192.168.1.103 TXT "310f16de2d2712dfc4ae6e5c54f60f828e" torrent A 192.168.1.1 tracker A 192.168.1.1 tressis A 192.168.1.1 www A 192.168.1.1 zeus A 192.168.1.231 and # cat /var/named/external/dafaces.com.fw $ORIGIN . $TTL 3600 dafaces.com IN SOA ns.dafaces.com. postmaster.dafaces.com. ( 2011032405; serial 28800; refresh 7200; retry 2419200; expire 3600; minimum ) NS ns.dafaces.com. NS ns0.xname.org. NS ns1.xname.org. NS ns2.xname.org. A 89.135.129.37 MX 10 mail.dafaces.com. $ORIGIN dafaces.com. ;Szolgaltatasok _ssh._tcp SRV 0 5 22 tressis _http._tcp SRV 0 5 80 www ns A 89.135.129.37 hoth A 89.135.129.37 www A 89.135.129.37 mail A 89.135.129.37 db A 89.135.129.37 torrent A 89.135.129.37 tracker A 89.135.129.37 Edit: Ohh, hell I almost forgot. Since the node is connected to the internet via a residential connection, there is a possibility that the public IPv4 address will change (but thank god, it is a very rare case), so I update the external IP address in the zone file daily with a shellscript: # cat /etc/cron.daily/dnsupdate #!/bin/sh FILE="/var/named/external/dafaces.com.fw" SERIAL=$(date +%Y%m%d05) PUBLIC_IP=$(ifconfig internet |sed -n "/inet addr:.*255.255.255.255/{s/.*inet addr://; s/ .*//; p}") cat $FILE | sed --posix 's/^.* serial$/\t\t\t\t\t'$SERIAL'; serial/' | sed --posix 's/[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/'$PUBLIC_IP'/' > /tmp/ujzona mv /tmp/ujzona $FILE /etc/rc.d/named reload
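    A SERVFAIL on a zone the server is authoritative for very often means BIND refused to load the zone file itself (serial or syntax errors), so before anything else it is worth validating the configuration and zones directly. A possible debugging sequence, assuming the paths from the named.conf above:

    # Check named.conf syntax
    named-checkconf /etc/named.conf

    # Check each zone file against its zone name
    named-checkzone dafaces.com /var/named/internal/dafaces.com.fw
    named-checkzone dafaces.com /var/named/external/dafaces.com.fw
    named-checkzone 1.168.192.in-addr.arpa /var/named/internal/dafaces.com.rev

    # Query the local server directly, without recursion, and watch the logs
    dig @127.0.0.1 andromeda.dafaces.com +norecurse
    tail -f /var/log/named.log

    If named-checkzone rejects the external zone, a likely suspect is the daily sed rewrite in the cron script: a malformed zone produced by that script would leave BIND answering SERVFAIL for the whole domain, which matches the sudden failure described.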

    Read the article

  • evaluating costs/benefits of using extension methods in C# >= 3.0

    - by BillW
    Hi, In what circumstances (usage scenarios) would you choose to write an extension rather than sub-classing an object? < full disclosure: I am not an MS employee; I do not know Mitsu Furota personally; I do know the author of the open-source Componax library mentioned here, but I have no business dealings with him whatsoever; I am not creating, or planning to create, any commercial product using extensions: in sum, this post is from pure intellectual curiosity related to my trying to (continually) become aware of "best practices" > I find the idea of extension methods "cool," and obviously you can do "far-out" things with them, as in the many examples you can see in Mitsu Furota's (MS) blog posts. A personal friend wrote the open-source Componax library, and there are some remarkable facilities in there; but he is in complete command of his small company with total control over code guidelines, and every line of code "passes through his hands." While this is speculation on my part: I think/guess other issues might come into play in a medium-to-large software team situation regarding use of Extensions. Looking at MS's guidelines, you find: In general, you will probably be calling extension methods far more often than implementing your own. ... In general, we recommend that you implement extension methods sparingly and only when you have to. Whenever possible, client code that must extend an existing type should do so by creating a new type derived from the existing type. For more information, see Inheritance (C# Programming Guide). ... When the compiler encounters a method invocation, it first looks for a match in the type's instance methods. If no match is found, it will search for any extension methods that are defined for the type, and bind to the first extension method that it finds. And from another MS page: Extension methods present no specific security vulnerabilities. They can never be used to impersonate existing methods on a type, because all name collisions are resolved in favor of the instance or static method defined by the type itself. Extension methods cannot access any private data in the extended class. Factors that seem obvious to me would include: I assume you would not write an extension unless you expected it to be used very generally and very frequently. On the other hand: couldn't you say the same thing about sub-classing? Knowing we can compile them into a separate dll, and add the compiled dll, and reference it, and then use the extensions, is "cool," but does that "balance out" the cost inherent in the compiler first having to check to see if instance methods are defined as described above? Or the cost, in case of a "name clash," of using the Static invocation methods to make sure your extension is invoked rather than the instance definition? How frequent use of Extensions would affect run-time performance or memory use: I have no idea. So, I'd appreciate your thoughts, or knowing about how/when you do, or don't, use Extensions, compared to sub-classing. thanks, Bill
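    For concreteness, here is a minimal sketch of the trade-off being discussed (the types and names are invented for illustration): the extension method adds behavior to instances you did not create and cannot usefully subclass, such as strings returned by other APIs, while subclassing only helps when your own code controls instantiation.

    using System;
    using System.Collections.ObjectModel;

    // Extension method: callable on any string, including ones handed to you.
    public static class StringExtensions
    {
        public static bool IsBlank(this string s)
        {
            return string.IsNullOrEmpty(s) || s.Trim().Length == 0;
        }
    }

    // Subclass: the extra behavior exists only on instances you construct yourself.
    public class LoggingCollection<T> : Collection<T>
    {
        protected override void InsertItem(int index, T item)
        {
            Console.WriteLine("Adding " + item);  // behavior new instances get
            base.InsertItem(index, item);
        }
    }

    This mirrors the MS guidance quoted above: prefer the derived type when you control creation, and reach for the extension when the instances come from elsewhere, accepting the extra method-lookup rules as the cost.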

    Read the article

  • Need help identifying a nasty rootkit in Windows

    - by goofrider
    I have a nasty rootkit that no tool seems to be able to identify. I know for sure it's a rootkit, but I can't figure out which rootkit it is. Here's what I gathered so far: It creates multiple copies of itself in %HOME%\Local Settings\Temp with names like Q.EXE, IAJARZ.exe, etc., and installs them as hidden services. These EXE have SysInternals identifiers in them so they're definitely rootkits. It hooks very deep in the system, including file read/write, security policies, registry read/write, and possibly WinSock/TCP/IP. When going to Sophos.com to download their software, the rootkit injects something called Microsoft Ajax Toolkit into the page, which injects code into the email submission form in order to redirect it. (EDIT: I might have panicked. Looks like Sophos does use an AJAX email form, their form is just broken on Chrome so it looked like a mail form injection attack, the link is http://www.sophos.com/en-us/products/free-tools/virus-removal-tool/download.aspx ) Super-Antispyware found a lot of spyware cookies, in the name of .kaspersky.2o7.net, etc. (just check 2o7.net, looks like it's a legit ad company) I tried comparing DNS lookups from the infected systems and from systems in other physical locations, no DNS redirections it seems. I used dd to copy the MBR and compared it with the MBR provided by the ms-sys package, no differences so it's not infecting the MBR. No antivirus or rootkit scanner is able to identify it. Most of them can't even find it. I tried scanning in-situ (normal mode), in safe mode, and booting to a linux live CD. Scanners used: Avast, Sophos anti rootkit, Kaspersky TDSSKiller, GMER, RootkitRevealer, and many others. Kaspersky reported some unsigned system files that ought to be signed (e.g. tcpip.sys), and reported a number of MD5 mismatches. But otherwise couldn't identify anything based on signature. When running Sysinternal RootkitRevealer and Sophos AntiRootkit, CPU usage goes up to 100% and gets stuck. The rootkit is blocking them. When trying to run/install HiJackThis, RootkitRevealer and some other scanners, it tells me system security policy prevents running/installing it. The list of malicious activities goes on and on. Here's a sample of logs from all my scans. In particular, aswSnx.SYS, apnenfno.sys and PROCMON20.SYS have a huge number of hooks. It's hard to tell if the rootkit replaced legit program files like aswSnx.SYS (from Avast) and PROCMON20.SYS (from Sysinternal Process Monitor). I can't find whether apnenfno.sys is from a legit program. Help to identify it is appreciated. Trend Micro RootkitBuster ------ [HIDDEN_REGISTRY][Hidden Reg Value]: KeyPath : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\sptd\Cfg Root : 586bfc0 SubKey : Cfg ValueName : g0 Data : 38 23 E8 D0 BF F2 2D 6F ...
ValueType : 3 AccessType: 0 FullLength: 61 DataSize : 32 [HOOKED_SERVICE_API]: Service API : ZwCreateMutant Image Path : C:\WINDOWS\System32\Drivers\aswSnx.SYS OriginalHandler : 0x8061758e CurrentHandler : 0xaa66cce8 ServiceNumber : 0x2b ModuleName : aswSnx.SYS SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwCreateThread Image Path : c:\windows\system32\drivers\apnenfno.sys OriginalHandler : 0x805d1038 CurrentHandler : 0xaa5f118c ServiceNumber : 0x35 ModuleName : apnenfno.sys SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwDeleteKey Image Path : C:\WINDOWS\system32\Drivers\PROCMON20.SYS OriginalHandler : 0x80624472 CurrentHandler : 0xa709b0f8 ServiceNumber : 0x3f ModuleName : PROCMON20.SYS SDTType : 0x0 HiJackThis ------ O23 - Service: JWAHQAGZ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\JWAHQAGZ.exe O23 - Service: LHIJ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\LHIJ.exe Kaspersky TDSSKiller ------ 21:05:58.0375 3936 C:\WINDOWS\system32\ati2sgag.exe - copied to quarantine 21:05:59.0217 3936 ATI Smart ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0342 3936 C:\WINDOWS\system32\BUFADPT.SYS - copied to quarantine 21:05:59.0856 3936 BUFADPT ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0965 3936 C:\Program Files\CrashPlan\CrashPlanService.exe - copied to quarantine 21:06:00.0152 3936 CrashPlanService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0246 3936 C:\WINDOWS\system32\epmntdrv.sys - copied to quarantine 21:06:00.0433 3936 epmntdrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0464 3936 C:\WINDOWS\system32\EuGdiDrv.sys - copied to quarantine 21:06:00.0526 3936 EuGdiDrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0604 3936 C:\Program Files\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService.exe - copied to quarantine 21:06:01.0181 3936 FLEXnet Licensing Service ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0321 3936 C:\Program Files\AddinForUNCFAT\UNCFATDMS.exe - copied to quarantine 21:06:01.0430 3936 OTFSDMS ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0492 3936 C:\WINDOWS\system32\DRIVERS\tcpip.sys - copied to quarantine 21:06:01.0539 3936 Tcpip ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0601 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - copied to quarantine 21:06:01.0664 3936 HKLM\SYSTEM\ControlSet003\services\TULPUWOX - will be deleted on reboot 21:06:01.0664 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - will be deleted on reboot 21:06:01.0664 3936 TULPUWOX ( UnsignedFile.Multi.Generic ) - User select action: Delete 21:06:01.0757 3936 C:\WINDOWS\system32\Drivers\usbaapl.sys - copied to quarantine 21:06:01.0866 3936 USBAAPL ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0913 3936 C:\Program Files\VMware\VMware Player\vmware-authd.exe - copied to quarantine 21:06:02.0443 3936 VMAuthdService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - User select action: Skip 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - User select action: Skip

    Read the article

  • Modern alternatives to Java

    - by Ralph
    I have been a Java developer for 14 years and have written an enterprise-level (~500 kloc) Swing application that uses most of the standard library APIs. Recently, I have become disappointed with the progress that the language has made to "modernize" itself, and am looking for an alternative for ongoing development. I have considered moving to the .NET platform, but I have issues with using something that only runs well in Windows (I know about Mono, but that is still far behind Microsoft). I also plan on buying a new Macbook Pro as soon as Apple releases their new rumored Arrandale-based machines and want to develop in an environment that will feel "at home" in Unix/Linux. I have considered using Python or Ruby, but the standard Java library is arguably the largest of any modern language. In JVM-based languages, I looked at Groovy, but am disappointed with its performance. Rumor has it that with the soon-to-be released JDK7, with its InvokeDynamic instruction, this will improve, but I don't know how much. Groovy is also not truly a functional language, although it provides closures and some of the "functional" features on collections. It does not embrace immutability. I have narrowed my search down to two JVM-based alternatives: Scala and Clojure. Each has its strengths and weaknesses. I am looking for opinions. I am not an expert at either of these languages; I have read 2 1/2 books on Scala and am currently reading Stu Halloway's book on Clojure. Scala is strongly statically typed. I know the dynamic language folks claim that static typing is a crutch for not doing unit testing, but it does provide a mechanism for compile-time location of a whole class of errors. Scala is more concise than Java, but not as much as Clojure. Scala's inter-operation with Java seems to be better than Clojure's, in that most Java operations are easier to do in Scala than in Clojure. For example, I can find no way in Clojure to create a non-static initialization block in a class derived from a Java superclass. As a concrete case, I like the Apache commons CLI library for command line argument parsing. In Java and Scala, I can create a new Options object and add Option items to it in an initialization block as follows (Java code): final Options options = new Options() { { addOption(new Option("?", "help", false, "Show this usage information")); // other options } }; I can't figure out how to do the same thing in Clojure (except by using (doto ...)), although that may reflect my lack of knowledge of the language. Clojure's collections are optimized for immutability. They rarely require copy-on-write semantics. I don't know if Scala's immutable collections are implemented using similar algorithms, but Rich Hickey (Clojure's inventor) goes out of his way to explain how that language's data structures are efficient. Clojure was designed from the beginning for concurrency (as was Scala) and with modern multi-core processors, concurrency takes on more importance, but I occasionally need to write simple non-concurrent utilities, and Scala code probably runs a little faster for these applications since it discourages, but does not prohibit, "simple" mutability. One could argue that one-off utilities do not have to be super-fast, but sometimes they do tasks that take hours or days to complete. I know that there is no right answer to this "question", but I thought I would open it up for discussion. Are there other JVM-based languages that can be used for enterprise-level development?
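    On the specific Commons CLI example: Clojure has no instance-initializer blocks, but the idiomatic substitute for that Java pattern is doto, which threads a freshly constructed object through a series of method calls and returns it. A rough sketch, assuming the commons-cli classes are on the classpath:

    (import '(org.apache.commons.cli Options Option))

    (def options
      (doto (Options.)
        (.addOption (Option. "?" "help" false "Show this usage information"))
        ;; other options here
        ))

    This is functionally the same as the anonymous-subclass initializer block in the Java snippet above, without creating a subclass at all.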

    Read the article

  • Different setter behavior between DbContext and ObjectContext

    - by Paul
    (This is using EntityFramework 4.2 CTP) I haven't found any references to this on the web yet, although it's likely I'm using the wrong terminology while searching. There's also a very likely scenario where this is 100% expected behavior, just looking for confirmation and would rather not dig through the tt template (still new to this). Assuming I have a class with a boolean field called Active and I have one row that already has this value set to true. I have code that executes to set said field to True regardless of it's existing value. If I use DbContext to update the value to True no update is made. If I use ObjectContext to update the value an update is made regardless of the existing value. This is happening in the exact same EDMX, all I did was change the code generation template from DbContext to EntityObject. Update: Ok, found the confirmation I was looking for...consider this a dupe...next time I'll do MOAR SEARCHING! Entity Framework: Cancel a property change if no change in value ** Update 2: ** Problem: the default tt template wraps the "if (this != value)" in the setter with "if (iskey), so only primarykey fields receive this logic. Solution: it's not the most graceful thing, but I removed this check...we'll see how it pans out in real usage. I included the entire tt template, my changes are denoted with "**"... //////// //////// Write SimpleType Properties. //////// private void WriteSimpleTypeProperty(EdmProperty simpleProperty, CodeGenerationTools code) { MetadataTools ef = new MetadataTools(this); #> /// <summary> /// <#=SummaryComment(simpleProperty)#> /// </summary><#=LongDescriptionCommentElement(simpleProperty, 1)#> [EdmScalarPropertyAttribute(EntityKeyProperty= <#=code.CreateLiteral(ef.IsKey(simpleProperty))#>, IsNullable=<#=code.CreateLiteral(ef.IsNullable(simpleProperty))#>)] [DataMemberAttribute()] <#=code.SpaceAfter(NewModifier(simpleProperty))#><#=Accessibility.ForProperty(simpleProperty)#> <#=MultiSchemaEscape(simpleProperty.TypeUsage, code)#> <#=code.Escape(simpleProperty)#> { <#=code.SpaceAfter(Accessibility.ForGetter(simpleProperty))#>get { <#+ if (ef.ClrType(simpleProperty.TypeUsage) == typeof(byte[])) { #> return StructuralObject.GetValidValue(<#=code.FieldName(simpleProperty)#>); <#+ } else { #> return <#=code.FieldName(simpleProperty)#>; <#+ } #> } <#=code.SpaceAfter(Accessibility.ForSetter((simpleProperty)))#>set { <#+ **//if (ef.IsKey(simpleProperty)) **//{ if (ef.ClrType(simpleProperty.TypeUsage) == typeof(byte[])) { #> if (!StructuralObject.BinaryEquals(<#=code.FieldName(simpleProperty)#>, value)) <#+ } else { #> if (<#=code.FieldName(simpleProperty)#> != value) <#+ } #> { <#+ PushIndent(CodeRegion.GetIndent(1)); **//} #> <#=ChangingMethodName(simpleProperty)#>(value); ReportPropertyChanging("<#=simpleProperty.Name#>"); <#=code.FieldName(simpleProperty)#> = <#=CastToEnumType(simpleProperty.TypeUsage, code)#>StructuralObject.SetValidValue(<#=CastToUnderlyingType(simpleProperty.TypeUsage, code)#>value<#=OptionalNullableParameterForSetValidValue(simpleProperty, code)#>, "<#=simpleProperty.Name#>"); ReportPropertyChanged("<#=simpleProperty.Name#>"); <#=ChangedMethodName(simpleProperty)#>(); <#+ //if (ef.IsKey(simpleProperty)) //{ PopIndent(); #> } <#+ //} #> } }
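    For anyone following along, the effect of deleting the IsKey guard is that every scalar property's setter now gets the early-out, roughly the shape below for the boolean Active property discussed above (a hand-written sketch of what the modified template should emit, not the literal generated output):

    set
    {
        // The guard now applies to all scalar properties, not just primary keys:
        // assigning an unchanged value no longer marks the entity as modified.
        if (_active != value)
        {
            OnActiveChanging(value);
            ReportPropertyChanging("Active");
            _active = StructuralObject.SetValidValue(value, "Active");
            ReportPropertyChanged("Active");
            OnActiveChanged();
        }
    }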

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the DB record, and then serves it up. I've noticed a couple instances where files are requested multiple times in short time spans (20ms). Times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader’s usage, I did see some interesting behavior. For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58! I’m suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different than above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before): Request time Complete Time 04:32:43 04:33:36 04:32:50 04:33:36 04:32:51 04:33:38 04:33:05 04:33:38 04:33:34 04:33:42 04:33:05 04:33:42 So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page’s code, or if we’re just seeing the evidence of some server-level overload as it plays out in real time, I’m not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php. It bears further investigation whether there’s something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if this press+125.php is a rabbit hole, but there is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that. ///DOWNLOAD.php $file = new files(); $file->comparison_filter("id", "=", $id); //sql to load if ($file->load()) { $file->serve(); } //FILES function serve() { if ($this->is_loaded) { if (file_exists($this->get_value("filename"))) { if ($this->get_value("content_type") != "") { header("Content-Type: " . $this->get_value("content_type")); } header("Content-Length: " . filesize($this->get_value("filename"))); if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) { header("Cache-Control: private"); header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename"))); } set_time_limit(0); @readfile($this->get_value("filename")); exit; } } }
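    One thing worth ruling out is workers staying busy after the user has given up: readfile() pushes the whole file in one call, so a slow client can pin an Apache child for minutes, and each retry pins another one. A hedged variant of serve() that streams in chunks and bails out when the client disconnects (the function name is illustrative, not from the original code):

    // Sketch: stream the file in chunks so an abandoned download releases its worker.
    function serve_streaming($filename, $originalName)
    {
        header('Content-Length: ' . filesize($filename));
        header('Content-Disposition: attachment; filename=' . urlencode($originalName));
        $fh = fopen($filename, 'rb');
        while (!feof($fh)) {
            echo fread($fh, 8192);
            flush();
            if (connection_aborted()) {  // client went away; stop serving
                break;
            }
        }
        fclose($fh);
        exit;
    }

    If the long completion times persist even with chunked output, the bottleneck is more likely the DB work in $file->load() or an Apache MaxClients ceiling than this one PDF.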

    Read the article

  • Unicorn_init.sh cannot find app root on capistrano cold deploy

    - by oFca
    I am deploying a Rails app, and upon running cap deploy:cold I get the error saying * 2012-11-02 23:53:26 executing `deploy:migrate' * executing "cd /home/mr_deployer/apps/prjct_mngr/releases/20121102225224 && bundle exec rake RAILS_ENV=production db:migrate" servers: ["xxxxxxxxxx"] [xxxxxxxxxx] executing command command finished in 7464ms * 2012-11-02 23:53:34 executing `deploy:start' * executing "/etc/init.d/unicorn_prjct_mngr start" servers: ["xxxxxxxxxx"] [xxxxxxxxxx] executing command ** [out :: xxxxxxxxxx] /etc/init.d/unicorn_prjct_mngr: 33: cd: can't cd to /home/mr_deployer/apps/prjct_mngr/current; command finished in 694ms failed: "rvm_path=$HOME/.rvm/ $HOME/.rvm/bin/rvm-shell '1.9.3-p125@prjct_mngr' -c '/etc/init.d/unicorn_prjct_mngr start'" on xxxxxxxxxx but my app root is there! Why can't it find it? Here's part of my unicorn_init.sh file: 1 #!/bin/sh 2 set -e 3 # Example init script, this can be used with nginx, too, 4 # since nginx and unicorn accept the same signals 5 6 # Feel free to change any of the following variables for your app: 7 TIMEOUT=${TIMEOUT-60} 8 APP_ROOT=/home/mr_deployer/apps/prjct_mngr/current 9 PID=$APP_ROOT/tmp/pids/unicorn.pid 10 CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production" 11 # INIT_CONF=$APP_ROOT/config/init.conf 12 AS_USER=mr_deployer 13 action="$1" 14 set -u 15 16 # test -f "$INIT_CONF" && . $INIT_CONF 17 18 old_pid="$PID.oldbin" 19 20 cd $APP_ROOT || exit 1 21 22 sig () { 23 test -s "$PID" && kill -$1 `cat $PID` 24 } 25 26 oldsig () { 27 test -s $old_pid && kill -$1 `cat $old_pid` 28 } 29 case $action in 30 31 start) 32 sig 0 && echo >&2 "Already running" && exit 0 33 $CMD 34 ;; 35 36 stop) 37 sig QUIT && exit 0 38 echo >&2 "Not running" 39 ;; 40 41 force-stop) 42 sig TERM && exit 0 43 echo >&2 "Not running" 44 ;; 45 46 restart|reload) 47 sig HUP && echo reloaded OK && exit 0 48 echo >&2 "Couldn't reload, starting '$CMD' instead" 49 $CMD 50 ;; 51 52 upgrade) 53 if sig USR2 && sleep 2 && sig 0 && oldsig QUIT 54 then 55 n=$TIMEOUT 56 while test -s $old_pid && test $n -ge 0 57 do 58 printf '.' && sleep 1 && n=$(( $n - 1 )) 59 done 60 echo 61 62 if test $n -lt 0 && test -s $old_pid 63 then 64 echo >&2 "$old_pid still exists after $TIMEOUT seconds" 65 exit 1 66 fi 67 exit 0 68 fi 69 echo >&2 "Couldn't upgrade, starting '$CMD' instead" 70 $CMD 71 ;; 72 73 reopen-logs) 74 sig USR1 75 ;; 76 77 *) 78 echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>" 79 exit 1 80 ;; 81 esac
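    On a cold deploy this failure usually means $APP_ROOT simply doesn't exist (or isn't traversable) at the moment the init script runs, i.e. the Capistrano current symlink hasn't been created or points at a cleaned-up release. A small defensive check near the top of the script makes the real cause visible instead of the bare cd error (a sketch, to be placed after APP_ROOT is set):

    if [ ! -d "$APP_ROOT" ]; then
      echo >&2 "APP_ROOT $APP_ROOT missing -- has capistrano created the 'current' symlink?"
      ls -l "$(dirname "$APP_ROOT")" >&2
      exit 1
    fi

    With Capistrano 2, deploy:cold runs deploy:update before deploy:start, so if the symlink is absent at start time it is worth checking that deploy:create_symlink actually succeeded and that the deploy user can traverse /home/mr_deployer/apps/prjct_mngr.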

    Read the article

  • No improvement in speed when using Ehcache with Hibernate

    - by paddydub
    I'm getting no improvement in speed when using Ehcache with Hibernate. Here are the results I get when I run the test below. The test is reading 80 Stop objects and then the same 80 Stop objects again using the cache. On the second read it is hitting the cache, but there is no improvement in speed. Any ideas on what I'm doing wrong? Speed Test: First Read: Reading stops 1-80 : 288ms Second Read: Reading stops 1-80 : 275ms Cache Info: elementsInMemory: 79 elementsInMemoryStore: 79 elementsInDiskStore: 0 JunitCacheTest public class JunitCacheTest extends TestCase { static Cache stopCache; public void testCache() { ApplicationContext context = new ClassPathXmlApplicationContext("beans-hibernate.xml"); StopDao stopDao = (StopDao) context.getBean("stopDao"); CacheManager manager = new CacheManager(); stopCache = (Cache) manager.getCache("ie.dataStructure.Stop.Stop"); //First Read for (int i=1; i<80;i++) { Stop toStop = stopDao.findById(i); } //Second Read for (int i=1; i<80;i++) { Stop toStop = stopDao.findById(i); } System.out.println("elementsInMemory " + stopCache.getSize()); System.out.println("elementsInMemoryStore " + stopCache.getMemoryStoreSize()); System.out.println("elementsInDiskStore " + stopCache.getDiskStoreSize()); } public static Cache getStopCache() { return stopCache; } } HibernateStopDao @Repository("stopDao") public class HibernateStopDao implements StopDao { private SessionFactory sessionFactory; @Transactional(readOnly = true) public Stop findById(int stopId) { Cache stopCache = JunitCacheTest.getStopCache(); Element cacheResult = stopCache.get(stopId); if (cacheResult != null){ return (Stop) cacheResult.getValue(); } else{ Stop result =(Stop) sessionFactory.getCurrentSession().get(Stop.class, stopId); Element element = new Element(result.getStopID(),result); stopCache.put(element); return result; } } } ehcache.xml <cache name="ie.dataStructure.Stop.Stop" maxElementsInMemory="1000" eternal="false" timeToIdleSeconds="5200" timeToLiveSeconds="5200" overflowToDisk="true"> </cache> stop.hbm.xml <class name="ie.dataStructure.Stop.Stop" table="stops" catalog="hibernate3" mutable="false" > <cache usage="read-only"/> <comment></comment> <id name="stopID" type="int"> <column name="STOPID" /> <generator class="assigned" /> </id> <property name="coordinateID" type="int"> <column name="COORDINATEID" not-null="true"> <comment></comment> </column> </property> <property name="routeID" type="int"> <column name="ROUTEID" not-null="true"> <comment></comment> </column> </property> </class> Stop public class Stop implements Comparable<Stop>, Serializable { private static final long serialVersionUID = 7823769092342311103L; private Integer stopID; private int routeID; private int coordinateID; }
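    Part of the problem may be that the test caches by hand in the DAO instead of letting Hibernate consult Ehcache itself; the <cache usage="read-only"/> mapping only takes effect once the second-level cache is switched on in the Hibernate configuration. Something along these lines for Hibernate 3.x (property names may vary slightly by version):

    <!-- hibernate.cfg.xml (sketch): enable the second-level cache with Ehcache -->
    <property name="hibernate.cache.use_second_level_cache">true</property>
    <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</property>
    <property name="hibernate.generate_statistics">true</property>

    It is also worth timing more than 80 tiny reads: at roughly 3.5ms per Stop, session and transaction overhead can easily swamp the difference between a cache hit and a warm database page, which would explain 288ms vs 275ms.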

    Read the article

  • How to insert and retrieve a key from the registry

    - by deepa
    Hi.. I am new to cryptography.. I have to develop a project based on cryptography.. In part of my project I have to insert a key into the registry, and afterwards I have to retrieve the same key for decryption.. I have gotten as far as reading the registry path.. Here is my code.. import java.io.IOException; import java.io.InputStream; import java.io.StringWriter; public static final String readRegistry(String location, String key) { try { // Run reg query, then read output with StreamReader (internal class) Process process = Runtime.getRuntime().exec("reg query " + '"' + location + "\" /v " + key); StreamReader reader = new StreamReader(process.getInputStream()); reader.start(); process.waitFor(); reader.join(); String output = reader.getResult(); // Output has the following format: // \n<Version information>\n\n<key>\t<registry type>\t<value> if (!output.contains("\t")) { return null; } // Parse out the value String[] parsed = output.split("\t"); return parsed[parsed.length - 1]; } catch (Exception e) { return null; } } static class StreamReader extends Thread { private InputStream is; private StringWriter sw = new StringWriter(); public StreamReader(InputStream is) { this.is = is; } public void run() { try { int c; while ((c = is.read()) != -1) { System.out.println("Reading" + c); sw.write(c); } } catch (IOException e) { System.out.println("Exception in run() " + e); } } public String getResult() { System.out.println("Content " + sw.toString()); return sw.toString(); } } public static boolean addValue(String key, String valName, String val) { try { // Run reg add, then read output with StreamReader (internal class) Process process = Runtime.getRuntime().exec("reg add \"" + key + "\" /v \"" + valName + "\" /d \"\\\"" + val + "\\\"\" /f"); StreamReader reader = new StreamReader(process.getInputStream()); reader.start(); process.waitFor(); reader.join(); String output = reader.getResult(); System.out.println("Processing........ggggggggggggggggggggg." + output); // Output has the following format: // \n<Version information>\n\n<key>\t<registry type>\t<value> return output.contains("The operation completed successfully"); } catch (Exception e) { System.out.println("Exception in addValue() " + e); } return false; } public static void main(String[] args) { // Sample usage JAXRDeleteConcept hc = new JAXRDeleteConcept(); System.out.println("Before Insertion"); if (JAXRDeleteConcept.addValue("HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\ComDlg32\\OpenSaveMRU", "REG_SZ", "Muthus")) { System.out.println("Inserted Successfully"); } String value = JAXRDeleteConcept.readRegistry("HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\ComDlg32\\OpenSaveMRU" , "Project_Key"); System.out.println(value); } } But I don't know how to insert a key into the registry and read back the particular key which I inserted.. Please help me.. Thanks in advance..
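    One concrete problem in the posted main(): addValue is called with "REG_SZ" as the value name, but readRegistry then asks for "Project_Key", so the read can never find what the write created. A hedged sketch of matching usage with the same helper methods (the key path and key material are made up for illustration):

    // Write a named value, then read the same name back.
    String key = "HKEY_CURRENT_USER\\Software\\MyCryptoApp";
    String valueName = "Project_Key";
    String secret = "bXkgc2VjcmV0IGtleQ==";  // e.g. a Base64-encoded key

    if (JAXRDeleteConcept.addValue(key, valueName, secret)) {
        System.out.println("Inserted successfully");
        String stored = JAXRDeleteConcept.readRegistry(key, valueName);
        System.out.println(valueName + " = " + stored);
    }

    Note that reg add writes a REG_SZ by default, so the type never needs to be passed where the name belongs; for anything beyond a toy, java.util.prefs.Preferences (which stores under HKCU\Software\JavaSoft\Prefs on Windows) avoids shelling out to reg entirely.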

    Read the article

  • Perl MiniWebserver

    - by snikolov
    Hey guys, I have configured this miniwebserver; however, I require the server to download a file in the local directory. I am getting a problem; can you please fix my issue? Thanks! #!/usr/bin/perl use strict; use Socket; use IO::Socket; my $buffer; my $file; my $length; my $output; # Simple web server in Perl. Serves out .html files, echoes form data sub parse_form { my $data = $_[0]; my %data; foreach (split /&/, $data) { my ($key, $val) = split /=/; $val =~ s/\+/ /g; $val =~ s/%(..)/chr(hex($1))/eg; $data{$key} = $val;} return %data; } # Set up and create socket my $port = shift; defined($port) or die "Usage: $0 portno\n"; my $DOCUMENT_ROOT = $ENV{'HOME'} . "public"; my $server = new IO::Socket::INET(Proto => 'tcp', LocalPort => $port, Listen => SOMAXCONN, Reuse => 1); $server or die "Unable to create server socket: $!" ; # Await requests and handle them as they arrive while (my $client = $server->accept()) { $client->autoflush(1); my %request = (); my %data; { # -------- Read Request --------------- local $/ = Socket::CRLF; while (<$client>) { chomp; # Main http request if (/\s*(\w+)\s*([^\s]+)\s*HTTP\/(\d.\d)/) { $request{METHOD} = uc $1; $request{URL} = $2; $request{HTTP_VERSION} = $3; } # Standard headers elsif (/:/) { (my $type, my $val) = split /:/, $_, 2; $type =~ s/^\s+//; foreach ($type, $val) { s/^\s+//; s/\s+$//; } $request{lc $type} = $val; } # POST data elsif (/^$/) { read($client, $request{CONTENT}, $request{'content-length'}) if defined $request{'content-length'}; last; } } } # -------- SORT OUT METHOD --------------- if ($request{METHOD} eq 'GET') { if ($request{URL} =~ /(.*)\?(.*)/) { $request{URL} = $1; $request{CONTENT} = $2; %data = parse_form($request{CONTENT}); } else { %data = (); } $data{"_method"} = "GET"; } elsif ($request{METHOD} eq 'POST') { %data = parse_form($request{CONTENT}); $data{"_method"} = "POST"; } else { $data{"_method"} = "ERROR"; } # ------- Serve file ---------------------- my $localfile = $DOCUMENT_ROOT.$request{URL}; # Send Response if (open(FILE, "<$localfile")) { print $client "HTTP/1.0 200 OK", Socket::CRLF; print $client "Content-type: text/html", Socket::CRLF; print $client Socket::CRLF; my $buffer; while (read(FILE, $buffer, 4096)) { print $client $buffer; } $data{"_status"} = "200"; } else { $file = 'a.pl'; open(INFILE, $file); while (<INFILE>) { $output .= $_; ## output of the file } $length = length($output); print $client "HTTP/1.0 200 OK", Socket::CRLF; print $client "Content-type: application/octet-stream", Socket::CRLF; print $client "Content-Length:".$length, Socket::CRLF; print $client "Accept-Ranges: bytes", Socket::CRLF; print $client 'Content-Disposition: attachment; filename="test.zip"', Socket::CRLF; print $client $output, Socket::CRLF; print $client "Content-Transfer-Encoding: binary", Socket::CRLF; print $client "Connection: Keep-Alive", Socket::CRLF; # $data{"_status"} = "404"; } close(FILE); # Log Request print ($DOCUMENT_ROOT.$request{URL},"\n"); foreach (keys(%data)) { print (" $_ = $data{$_}\n"); } # ----------- Close Connection and loop ------------------ close $client; } # END
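    The most likely culprit in the download branch is the response framing: the body ($output) is printed before the header block is finished, with more headers following it, and there is no blank line terminating the headers. A hedged sketch of that branch with headers first, one blank line, then the raw bytes:

    # Sketch: send the full header block, one blank line, then the file bytes.
    my $size = length($output);
    print $client "HTTP/1.0 200 OK", Socket::CRLF;
    print $client "Content-Type: application/octet-stream", Socket::CRLF;
    print $client "Content-Length: $size", Socket::CRLF;
    print $client "Content-Disposition: attachment; filename=\"test.zip\"", Socket::CRLF;
    print $client Socket::CRLF;   # blank line ends the headers
    binmode $client;              # don't let an I/O layer mangle binary data
    print $client $output;        # body only: a trailing CRLF would break the byte count

    It is also worth opening the source file with binmode when reading it, otherwise the count from length() may not match what clients actually receive.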

    Read the article

  • Where should I create my DbCommand instances?

    - by Domenic
    I seemingly have two choices: Make my class implement IDisposable. Create my DbCommand instances as private readonly fields, and in the constructor, add the parameters that they use. Whenever I want to write to the database, bind to these parameters (reusing the same command instances), set the Connection and Transaction properties, then call ExecuteNonQuery. In the Dispose method, call Dispose on each of these fields. Each time I want to write to the database, write using(var cmd = new DbCommand("...", connection, transaction)) around the usage of the command, and add parameters and bind to them every time as well, before calling ExecuteNonQuery. I assume I don't need a new command for each query, just a new command for each time I open the database (right?). Both of these seem somewhat inelegant and possibly incorrect. For #1, it is annoying for my users that this class is now IDisposable just because I have used a few DbCommands (which should be an implementation detail that they don't care about). I also am somewhat suspicious that keeping a DbCommand instance around might inadvertently lock the database or something? For #2, it feels like I'm doing a lot of work (in terms of .NET objects) each time I want to write to the database, especially with the parameter-adding. It seems like I create the same object every time, which just feels like bad practice. For reference, here is my current code, using #1: using System; using System.Net; using System.Data.SQLite; public class Class1 : IDisposable { private readonly SQLiteCommand updateCookie = new SQLiteCommand("UPDATE moz_cookies SET value = @value, expiry = @expiry, isSecure = @isSecure, isHttpOnly = @isHttpOnly WHERE name = @name AND host = @host AND path = @path"); public Class1() { this.updateCookie.Parameters.AddRange(new[] { new SQLiteParameter("@name"), new SQLiteParameter("@value"), new SQLiteParameter("@host"), new SQLiteParameter("@path"), new SQLiteParameter("@expiry"), new SQLiteParameter("@isSecure"), new SQLiteParameter("@isHttpOnly") }); } private static void BindDbCommandToMozillaCookie(DbCommand command, Cookie cookie) { long expiresSeconds = (long)cookie.Expires.TotalSeconds; command.Parameters["@name"].Value = cookie.Name; command.Parameters["@value"].Value = cookie.Value; command.Parameters["@host"].Value = cookie.Domain; command.Parameters["@path"].Value = cookie.Path; command.Parameters["@expiry"].Value = expiresSeconds; command.Parameters["@isSecure"].Value = cookie.Secure; command.Parameters["@isHttpOnly"].Value = cookie.HttpOnly; } public void WriteCurrentCookiesToMozillaBasedBrowserSqlite(string databaseFilename) { using (SQLiteConnection connection = new SQLiteConnection("Data Source=" + databaseFilename)) { connection.Open(); using (SQLiteTransaction transaction = connection.BeginTransaction()) { this.updateCookie.Connection = connection; this.updateCookie.Transaction = transaction; foreach (Cookie cookie in SomeOtherClass.GetCookieArray()) { Class1.BindDbCommandToMozillaCookie(this.updateCookie, cookie); this.updateCookie.ExecuteNonQuery(); } transaction.Commit(); } } } #region IDisposable implementation protected virtual void Dispose(bool disposing) { if (!this.disposed && disposing) { this.updateCookie.Dispose(); } this.disposed = true; } public void Dispose() { this.Dispose(true); GC.SuppressFinalize(this); } ~Class1() { this.Dispose(false); } private bool disposed; #endregion }
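    For what it's worth, option #2 does not have to mean rebuilding parameters inline at every call site; a private factory method keeps the construction in one place while letting using blocks own the lifetime, so the class no longer needs IDisposable at all. A sketch, reusing the names from the code above where possible:

    private static SQLiteCommand CreateUpdateCookieCommand(
        SQLiteConnection connection, SQLiteTransaction transaction, Cookie cookie)
    {
        var command = new SQLiteCommand(
            "UPDATE moz_cookies SET value = @value, expiry = @expiry, isSecure = @isSecure, " +
            "isHttpOnly = @isHttpOnly WHERE name = @name AND host = @host AND path = @path",
            connection, transaction);
        command.Parameters.AddWithValue("@name", cookie.Name);
        command.Parameters.AddWithValue("@value", cookie.Value);
        command.Parameters.AddWithValue("@host", cookie.Domain);
        command.Parameters.AddWithValue("@path", cookie.Path);
        command.Parameters.AddWithValue("@expiry", (long)cookie.Expires.TotalSeconds);
        command.Parameters.AddWithValue("@isSecure", cookie.Secure);
        command.Parameters.AddWithValue("@isHttpOnly", cookie.HttpOnly);
        return command;
    }

    // usage inside the transaction loop:
    // using (var cmd = CreateUpdateCookieCommand(connection, transaction, cookie))
    // {
    //     cmd.ExecuteNonQuery();
    // }

    The per-call object churn is negligible next to the SQLite write itself, and nothing holds a command (and therefore a connection reference) alive between calls.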

    Read the article

  • How to make a jQuery plugin (the right way)?

    - by macek
    I know there are jQuery cookie plugins out there, but I wanted to write one for the sake of better learning the jQuery plugin pattern. I like the separation of "work" in small, manageable functions, but I feel like I'm passing name, value, and options arguments around too much. Is there a way this can be refactored? I'm looking for snippets of code to help illustrate examples provided in answers. Any help is appreciated. Thanks :) example usage $.cookie('foo', 'bar', {expires:7}); $.cookie('foo'); //=> bar $.cookie('foo', null); $.cookie('foo'); //=> undefined Edit: I did a little bit of work on this. You can view the revision history to see where this has come from. It still feels like more refactoring can be done to optimize the flow a bit. Any ideas? the plugin (function($){ $.cookie = function(name, value, options) { if (typeof value == 'undefined') { return get(name); } else { options = $.extend({}, $.cookie.defaults, options || {}); return (value != null) ? set(name, value, options) : unset(name, options); } }; $.cookie.defaults = { expires: null, path: '/', domain: null, secure: false }; var set = function(name, value, options){ console.log(options); return document.cookie = options_string(name, value, options); }; var get = function(name){ var cookies = {}; $.map(document.cookie.split(';'), function(pair){ var c = $.trim(pair).split('='); cookies[c[0]] = c[1]; }); return decodeURIComponent(cookies[name]); }; var unset = function(name, options){ value = ''; options.expires = -1; set(name, value, options); }; var options_string = function(name, value, options){ var pairs = [param.name(name, value)]; $.each(options, function(k,v){ pairs.push(param[k](v)); }); return $.map(pairs, function(p){ return p === null ? null : p; }).join(';'); }; var param = { name: function(name, value){ return name + "=" + encodeURIComponent(value); }, expires: function(value){ // no expiry if(value === null){ return null; } // number of days else if(typeof value == "number"){ d = new Date(); d.setTime(d.getTime() + (value * 24 * 60 * 60 * 1000)); } // date object else if(typeof value == "object" && value instanceof "Date") { d = value; } return "expires=" + d.toUTCString(); }, path: function(value){ return "path="+value; }, domain: function(value){ return value === null ? null : "domain=" + value; }, secure: function(bool){ return bool ? "secure" : null; } }; })(jQuery);
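    One way to cut down the argument-threading the question mentions is to bundle name/value/options into a single object as soon as $.cookie is entered, so each helper takes one parameter. A rough sketch against the code above (options_string would then read c.name, c.value and c.options instead of three positional arguments):

    // Sketch: normalize the arguments once, then hand one object around.
    $.cookie = function(name, value, options) {
        if (typeof value == 'undefined') {
            return get(name);
        }
        var c = {
            name: name,
            value: value,
            options: $.extend({}, $.cookie.defaults, options || {})
        };
        return (c.value !== null) ? set(c) : unset(c);
    };

    var set = function(c) {
        return document.cookie = options_string(c);
    };

    var unset = function(c) {
        c.value = '';
        c.options.expires = -1;
        return set(c);
    };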

    Read the article

< Previous Page | 280 281 282 283 284 285 286 287 288 289 290 291  | Next Page >