Search Results

Search found 5158 results on 207 pages for 'max mines'.


  • String comparison in Java

    - by Kliver Max
    I want to compare two strings in Java. The first string, name, I get from a .mif file using GDAL in cp1251 encoding. The second, kadname, I get from a JSP. To compare them I do this:

        if (attrValue instanceof String) {
            String string3 = new String((attrValue.toString()).getBytes("ISO-8859-1"), "cp1251");
            dbFeature.setAttribute(name, string3);
            System.out.println("Name=" + name);
            System.out.println("kadname=" + kadname);
            if (name.equalsIgnoreCase(kadname)) {
                kadnum = string3;
                System.out.println("string3" + string3);
            }
        }

    And in the console I get this:

        Name = kadnumm
        kadname = kadnumm

    What's wrong with this?
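
    A minimal diagnostic sketch for cases like this (plain Java, with name and kadname as in the question): when two strings print identically but a comparison misbehaves, dumping lengths and code points usually exposes trailing whitespace or mis-decoded characters that the console hides:

        static void dump(String label, String s) {
            System.out.println(label + " = \"" + s + "\" (length " + s.length() + ")");
            for (int i = 0; i < s.length(); i++) {
                // show each character as a code unit, e.g. U+0020 for a space
                System.out.printf("  [%d] U+%04X%n", i, (int) s.charAt(i));
            }
        }

        // usage:
        dump("name", name);
        dump("kadname", kadname);
        System.out.println(name.trim().equalsIgnoreCase(kadname.trim()));

    If the trimmed comparison succeeds where the original failed, the culprit is invisible padding; if the code points differ, it is the charset conversion.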

    Read the article

  • including .h files.

    - by Max
    Suppose I have two .h files, A.h and B.h, and A.h includes B.h itself. B.h declares class B:

        class B { ... };

    A.h declares class A, which uses class B:

        #include "B.h"
        class A {
            void SomeFunction(const B& b);
        };

    Now I have some .cpp file that uses both the A and B classes (B may be used not only through A::SomeFunction). What are the advantages of including both A.h and B.h (instead of only A.h), from the perspective of design and coding style?

    Read the article

  • try finally block around every Object.Create?

    - by max
    Hi, I have a general question about best practice in OO Delphi. Currently I put a try..finally block around every place where I create an object, to free that object after use (and avoid memory leaks). E.g.:

        aObject := TObject.Create;
        try
          aObject.AProcedure();
          ...
        finally
          aObject.Free;
        end;

    instead of:

        aObject := TObject.Create;
        aObject.AProcedure();
        ...
        aObject.Free;

    Do you think it is good practice, or too much overhead? And what about the performance?
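
    For what it's worth, the same guard is idiomatic in other OO languages too; a minimal Java sketch of the pattern (Resource and its methods are hypothetical):

        Resource r = new Resource(); // construct *outside* the try: if this throws,
                                     // there is nothing to free yet
        try {
            r.doWork();
        } finally {
            r.close();               // runs whether doWork() returns or throws
        }

    The entry cost of a try..finally is negligible next to a heap allocation, which is why wrapping every create/free pair this way is generally considered good practice rather than overhead.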

    Read the article

  • a mathematical problem

    - by Max
    Hi, I need a function that gives me the enclosing tens/hundreds/thousands range for a number. For example:

        if I pass 5, it should return 1 to 10
        if I pass 67, it should return 1 to 100
        if I pass 126, it should return 101 to 200
        if I pass 2524, it should return 2001 to 3000

    Any guidance?
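
    The four examples don't follow a single bucket size (67 maps to 1 to 100, yet 126 maps to 101 to 200), so any implementation has to pick a rule. A minimal Java sketch that reproduces all four examples exactly, with the assumed rule spelled out in the comments:

        // Returns {lower, upper} for n >= 1. Assumed rule, matching the examples:
        //   n <= 10        -> bucket size 10              (5    -> 1..10)
        //   11 <= n <= 999 -> bucket size 100             (67   -> 1..100, 126 -> 101..200)
        //   n >= 1000      -> bucket size 10^(digits - 1) (2524 -> 2001..3000)
        static int[] enclosingRange(int n) {
            int size;
            if (n <= 10) {
                size = 10;
            } else if (n <= 999) {
                size = 100;
            } else {
                size = (int) Math.pow(10, String.valueOf(n).length() - 1);
            }
            int lower = ((n - 1) / size) * size + 1;    // snap down to bucket start
            return new int[] { lower, lower + size - 1 };
        }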

    Read the article

  • should a student be diversifying or mastering programming languages?

    - by Max Link
    As the question states, is it better for a student to diversify and explore when learning programming languages, or to focus on just 2-3 languages and really get to know them well? Example of what I mean by diversifying:

        Functional -> Scheme
        Procedural -> C
        Object Oriented -> Java
        Dynamic or scripting -> Python
        Other -> C++

    I sometimes have breaks between semesters (up to 3 months), and I'm thinking of either learning a new language or "mastering" those I know right now. Which would benefit me more in the future? I already know some Java, C, and C++ (about 3 months of self-study each). If I'm not mistaken, where I live the industry is heavy on Java, C++, and C#.

    Read the article

  • reading files provided via $_GET

    - by Max
    I have a PHP script which takes a relative pathname via $_GET, reads that file, and creates a thumbnail of it. I don't want the user to be able to read just any file from the server; only files from a certain directory should be allowed, otherwise the script should exit(). Here is my folder structure:

        files/     <-- all files from this folder are public
        my_stuff/  <-- this is the folder of my script that reads the files

    My script is accessed via mydomain.com/my_stuff/script.php?pathname=files/some.jpg. What should not be allowed, e.g.: mydomain.com/my_stuff/script.php?pathname=files/../db_login.php. So here is the relevant part of the script in the my_stuff folder:

        ...
        $pathname = $_GET['pathname'];
        $pathname = realpath('../' . $_GET['pathname']);
        if (strpos($pathname, '/files/') === false)
            exit('Error');
        ...

    I am not really sure about that approach; it doesn't seem too safe to me. Anyone with a better idea?
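
    The usual fix, in any language, is to canonicalize first and then require the result to still sit under the allowed base directory; a substring test like strpos($pathname, '/files/') can match a /files/ segment anywhere in the resolved path, not just at the front. A minimal sketch of the canonicalize-then-prefix-check technique in Java (the base path is illustrative):

        import java.nio.file.Path;
        import java.nio.file.Paths;

        static Path resolveSafely(String userInput) {
            Path base = Paths.get("/var/www/files").toAbsolutePath().normalize();
            // normalize() collapses "../" segments *before* the containment check
            Path requested = base.resolve(userInput).normalize();
            if (!requested.startsWith(base)) {
                throw new IllegalArgumentException("Error"); // outside the public folder
            }
            return requested;
        }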

    Read the article

  • Faster way of initializing arrays in Delphi

    - by Max
    I'm trying to squeeze every bit of performance out of my Delphi application, and now I have come to a procedure which works with dynamic arrays. The slowest line in it is

        SetLength(Result, Len);

    which is used to initialize the dynamic array. When I look at the code for the SetLength procedure, I see that it is far from optimal. The call sequence is _DynArraySetLength -> DynArraySetLength. DynArraySetLength gets the array length (which is zero for initialization) and then uses ReallocMem, which is also unnecessary for initialization. I have been using SetLength to initialize dynamic arrays all along. Maybe I'm missing something? Is there a faster way to do this?

    Read the article

  • Wrong class type in objective C

    - by Max Hui
    I have a parent class and a child class: GameObjectBase (parent) and GameObjectPlayer (child). When I override a method in the child class and call it via [myPlayerClass showNextFrame], the parent class implementation is called. It turns out that in the debugger, myPlayerClass is indeed of class type GameObjectBase (the parent class). How come?

    GameObjectBase.h:

        #import <Foundation/Foundation.h>
        #import "cocos2d.h"

        @class GameLayer;

        @interface GameObjectBase : NSObject {
            /*
            CCSprite *gameObjectSprite;   // Sprite representing this game object
            GameLayer *parentGameLayer;   // Reference of the game layer this object
                                          // belongs to
            */
        }

        @property (nonatomic, assign) CCSprite *gameObjectSprite;
        @property (nonatomic, assign) GameLayer *parentGameLayer;

        // Class method. Autorelease
        + (id) initWithGameLayer:(GameLayer *) gamelayer imageFileName:(NSString *) fileName;

        // "Virtual methods" that the derived class should implement.
        // If not implemented, this method will be called and assert
        - (void) update: (ccTime) dt;
        - (void) showNextFrame;

        @end

    GameObjectPlayer.h:

        #import <Foundation/Foundation.h>
        #import "GameObjectBase.h"

        @interface GameObjectPlayer : GameObjectBase {
            int direction;
        }

        @property (nonatomic) int direction;

        @end

    GameLayer.h:

        #import "cocos2d.h"
        #import "GameObjectPlayer.h"

        @interface GameLayer : CCLayer {
        }

        // returns a CCScene that contains the GameLayer as the only child
        +(CCScene *) scene;

        @property (nonatomic, strong) GameObjectPlayer *player;

        @end

    When I examine in the debugger what type temp is in this function inside the GameLayer class, it gives the parent class GameObjectBase instead of the subclass GameObjectPlayer:

        - (void) update:(ccTime) dt {
            GameObjectPlayer *temp = _player;
            [temp showNextFrame];
        }
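
    Not an Objective-C answer as such, but the dispatch rule at issue is identical in Java and may be easier to see there: the override that runs is chosen by the object's runtime class, never by the variable's declared type. So if the debugger says temp really is a GameObjectBase, the object was created as one. A sketch (Java analogs of the question's classes, methods assumed):

        GameObjectBase obj = new GameObjectPlayer();
        obj.showNextFrame();   // child's override runs: the runtime class decides

        GameObjectBase base = new GameObjectBase();
        base.showNextFrame();  // base version runs: this object *is* a base object

    The place to look is therefore wherever _player gets assigned. One common cause with factory class methods like initWithGameLayer: is hard-coding the base class inside it ([[GameObjectBase alloc] init]) instead of instantiating [self alloc], so that even [GameObjectPlayer initWithGameLayer:...] returns a base-class instance.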

    Read the article

  • How does Ruby's Array.| compare elements for equality?

    - by Max Howell
    Here's some example code:

        class Obj
          attr :c, true

          def ==(that)
            p '=='
            that.c == self.c
          end

          def <=>(that)
            p '<=>'
            that.c <=> self.c
          end

          def equal?(that)
            p 'equal?'
            that.c.equal? self.c
          end

          def eql?(that)
            p 'eql?'
            that.c.eql? self.c
          end
        end

        a = Obj.new
        b = Obj.new
        a.c = 1
        b.c = 1

        p [a] | [b]

    It prints 2 objects but it should print 1 object. None of the comparison methods get called. How is Array.| comparing for equality?
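
    Array#| deduplicates through a hash lookup, so it consults hash and eql? on the elements; none of ==, <=>, or equal? are involved, which is why nothing above fires. Defining hash and eql? consistently on Obj fixes it. The contract is the same as in Java's HashSet, sketched here with a one-field analog of Obj:

        import java.util.HashSet;
        import java.util.Set;

        class Obj {
            final int c;
            Obj(int c) { this.c = c; }

            // Hash-based dedup needs BOTH methods, and they must agree:
            @Override public boolean equals(Object o) {
                return o instanceof Obj && ((Obj) o).c == this.c;
            }
            @Override public int hashCode() { return Integer.hashCode(c); }
        }

        // union of two equal-valued elements collapses to one:
        Set<Obj> union = new HashSet<>();
        union.add(new Obj(1));
        union.add(new Obj(1));
        System.out.println(union.size()); // prints 1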

    Read the article

  • Anyone help me! java basic question

    - by Max
    How can I get a specific value from an object? I'm trying to get the value of an instance field. For example:

        ListOfPpl newListOfPpl = new ListOfPpl(id, name, age);
        Object item = newListOfPpl;

    How can I get the value of name from the Object item? Even if it seems easy or doesn't interest you, can anyone help me?
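
    A minimal sketch, assuming ListOfPpl keeps its fields and exposes a getter (getName() here is an assumed accessor, not from the question):

        Object item = new ListOfPpl(id, name, age);

        if (item instanceof ListOfPpl) {
            // cast back to the concrete type, then use its accessors
            ListOfPpl ppl = (ListOfPpl) item;
            String n = ppl.getName();
        }

    The declared type of the variable decides what the compiler lets you call, so a plain Object reference exposes only Object's methods until you cast it back (or, better, keep the variable typed as ListOfPpl in the first place).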

    Read the article

  • blu-ray archiving in vmware ESXi 4

    - by spacecadet77
    Hi, I need some advice about using a blu-ray writer for archiving data on VMware ESXi 4. At the office we have an IBM System x3400 Tower server with the ESXi 4 hypervisor and openSUSE and CentOS GNU/Linux systems as guests. Will a blu-ray writer work in this setup, and if it will, is there any particular model you can suggest? Best regards.

    IBM System x3400 Tower server specification:

        1x Intel Quad-Core Xeon E5410 2.33GHz / 12MB / 1333MHz (2x CPU max)
        Intel 5000P chipset, 2x 1GB PC2-5300 DDR2 667MHz SDRAM ECC Chipkill (32GB max)
        2x4GB (2x2GB) PC2-5300 CL5 ECC DDR2 FBDIMM (x3400, x3550, x3650)
        SAS/SATA Hot-Swap Open Bay (0xHDD std, 4xHDD max, 8xHDD optional)
        ServeRAID 8K dual channel SAS/SATA controller (RAID 0,1,1E,10,5,6, 256MB, Battery Backup)
        Graphics ATI® RN50 (ES1000) 16MB DDR, CD-RW/DVD Combo, no FDD
        GigaEthernet, Tower with Power Supply 835W (opt. redundant)
        Slot 1: half-length, PCI-Express x8 (x4 electrical)
        Slot 2: full, PCI-Express x8
        Slot 3: full, PCI-Express x8
        Slot 4: full, 64-bit 133MHz 3.3v PCI-X
        Slot 5: full, 64-bit 133MHz 3.3v PCI-X
        Slot 6: half-length, 32-bit 33MHz 5.0v PCI
        Ports: 4x USB (v2.0), 2x PS/2, parallel, 2x serial (9-pin), VGA, RJ-45 (ethernet), RJ-45 (sys mgmt)
        HDD 4 x TB 7200rpm / Serial ATA II 3.0Gb/s / 16MB, RoHS

    Read the article

  • Error In centos6 while compiling java classes,in tomcat6

    - by AJIT RANA
    I am a newbie to Linux and CentOS 6. I just bought a server and want to deploy my web app on it. I am getting an error while compiling my servlet classes: it shows me

        bash: javac: command not found

    when I try to compile. But when I checked in /usr/lib/jvm/java-1.6.0/bin, I found javac there. Then I ran javac with the command ./javac and got this error:

        [root:ip_address.com]# ./javac
        There is insufficient memory for the Java Runtime Environment to continue.
        pthread_getattr_np
        Error occurred during initialization of VM
        java.lang.OutOfMemoryError: unable to create new native thread

    I followed the steps shown in "Java outofmemoryerror when creating <100 threads", which gives the command to check the limits:

        [root:ipaddress.com]# ulimit -a
        core file size          (blocks, -c) unlimited
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 278528
        max locked memory       (kbytes, -l) 32
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        [root:ipaddress.com]# top
        bash: top: command not found

    Link: http://stackoverflow.com/q/12913857/1746764

    Read the article

  • Laptop Asus P50IJ with Intel 4500M GMA output going to a Dell 1907FP external monitor will not allow

    - by ProfessionalAmateur
    Hello - I just purchased an Asus P50IJ-X2 laptop with an Intel GMA 4500M video card, running Windows 7. At work I output this laptop to a Dell 1907FP LCD, which has a maximum resolution of 1280x1024. No matter what I do, Windows will not allow the laptop to set a resolution higher than 1024x768 on this LCD monitor. I've even gone to the extent of downloading PowerStrip (I'd post a link but I'm new and can only enter one URL; if you google for PowerStrip it's the first result) to create a custom driver for my monitor, thinking Windows was having a hard time seeing the resolutions it would accept. However, PowerStrip read the registry and properly sees the monitor and what it is capable of, so I'm now at a complete loss as to why Windows 7 will not allow me to set/use a 1280x1024 resolution on this external monitor (as my last laptop did, running Vista). The Intel documentation (http://software.intel.com/en-us/articles/quick-reference-guide-to-intel-integrated-graphics/) indicates that the GMA 4500M should be able to run up to a 2560x1600 max resolution. The Dell 1907FP specification states it can run up to 1280x1024. But no matter what, the computer will not allow me to set anything higher than 1024x768. I'm completely baffled, but I would really like to output this laptop at a reasonable resolution; 1024x768 makes me feel like I'm using my mom's computer. Any help would be greatly appreciated! Here are some attached images (I apologize for the links; being new I cannot post images) that should help explain this better:

        Image 1 - From PowerStrip, showing the monitor's max accepted resolution and, at the top right, the max resolution my PC currently allows. (http://imgur.com/agrno.png)
        Image 2 - My Windows 7 resolution picker. (http://imgur.com/3nv6q.png)
        Image 3 - The 'List all modes' option, taken from Screen Resolution > Advanced Settings > List All Modes. (http://imgur.com/AMREh.png)
        Image 4 - Monitor information from the registry read by PowerStrip, showing the laptop is able to read the necessary info from the LCD monitor. (http://imgur.com/hUX4D.png)

    Read the article

  • Setting up SQL Server 2005 to use all available memory in 32bit Windows Server 2003 - and verifying

    - by Rizwan Kassim
    There are a number of questions along this line, but they either sometimes contradict each other or don't show how to properly verify that everything is actually working; hopefully this can be comprehensive. I'm running SQL Server 2005 SP3 Standard on Windows Server 2003 R2 Standard. My server has 8GB of memory installed, and the system is almost entirely used as a database server; there are some services running on it, but the OS + services can run within 1GB of RAM. What I've done (please tell me if I'm doing something wrong):

        /3GB in the boot.ini (to increase the amount of user-space memory available - info)
        /PAE in the boot.ini (Windows claimed to be doing PAE even without this switch, somehow)
        Enabled AWE in SQL Server
        Enabled the Lock Pages in Memory option for the users SYSTEM and Local Service (info).
            SQL Server Standard doesn't seem to use this until Cumulative Update 4, which isn't installed on my server (info)
        Set Min/Max Memory to 1024MB/5112MB

    After doing all the above, we definitely saw a level of improvement, but I'd now like to verify my settings and make sure I'm making full use of the memory available. (There appeared to be a slowdown when max = 7GB, so I edged off from that value, but it might have been just perceptual.) To verify, I checked the following levels in PerfMon:

        Process(sqlserv): Working Set : 76386304
        SQL Server (Memory Manager): Total Server Memory : 3538944
            (I saw a doc noting that this isn't the full memory used by SQL Server, so I'm not sure whether to trust it)

    So, my questions... Should my max be around 7GB? If not, what should it be? Why is total server memory at 3.5GB when it's been allocated 5GB? What is the proper metric for the amount of memory allocated to SQL Server? The Working Set seems a bit large... Am I possibly missing any steps in the setup? Any recommended resources on starting to tune the caching system now? Thanks

    Read the article

  • Swap space maxing out - JVM dying

    - by travega
    I have a server running 3 WordPress instances, MySQL, Apache, and the Play Framework 2.0 on a 64m initial & max heap. If I increase the max heap of the JVM that Play is running in by even 16m, I see the 128m of swap space steadily fill up until the JVM dies. I notice that it is only when I am plugging away at the WordPress sites that the JVM dies; I assume this is because the JVM is not asking for memory at the time, so it gets collected. I also notice that when I restart Apache I reclaim about half of my swap and RAM. So is there some way I can configure Apache to consume less memory? And what could be causing the swap space to get so heavily thrashed with just 16m added to the max heap size of the JVM?

        Server running: Ubuntu 12.04
        RAM:  408m
        Swap: 128m

    Apache mods: alias.conf alias.load auth_basic.load authn_file.load authz_default.load authz_groupfile.load authz_host.load authz_user.load autoindex.conf autoindex.load cgi.load deflate.conf deflate.load dir.conf dir.load env.load mime.conf mime.load negotiation.conf negotiation.load php5.conf php5.load proxy_ajp.load proxy_balancer.conf proxy_balancer.load proxy.conf proxy_connect.load proxy_ftp.conf proxy_ftp.load proxy_http.load proxy.load reqtimeout.conf reqtimeout.load rewrite.load setenvif.conf setenvif.load status.conf status.load

    Read the article

  • Cannot access certain URL on my wireless

    - by dehmann
    Problem: On my wireless network at home, there is one URL that I just cannot access with my browser: http://research.microsoft.com/. I have no problems with the Internet connection otherwise, but at that address I just get

        The connection was reset
        The connection to the server was reset while the page was loading.

    from Firefox. I am using a DSL modem (Westell) and a Linksys wireless router (using DHCP). When I use my neighbor's wireless connection, I can access the Microsoft site without a problem. Additional technical details: with my connection, here is what I get from nslookup. It is weird: it first cannot find the address, but after I look up another address it can find it:

        $ nslookup research.microsoft.com
        ;; connection timed out; no servers could be reached

        $ nslookup google.com
        Non-authoritative answer:
        Name:    google.com
        Address: 72.14.204.104
        Name:    google.com
        Address: 72.14.204.147
        Name:    google.com
        Address: 72.14.204.99
        Name:    google.com
        Address: 72.14.204.103

        $ nslookup research.microsoft.com
        Non-authoritative answer:
        Name:    research.microsoft.com
        Address: 131.107.65.14

    But even after nslookup finds it, Firefox still cannot access it. Here is what traceroute says:

        $ traceroute http://research.microsoft.com/
        traceroute: Warning: http://research.microsoft.com/ has multiple addresses; using 8.15.7.117
        traceroute to http://research.microsoft.com/ (8.15.7.117), 64 hops max, 40 byte packets
         1  dslrouter.westell.com (1XX.XXX.X.X)  4.515 ms  2.760 ms  3.072 ms
         2  * * *

    Traceroute just to the IP:

        $ traceroute 131.107.65.14
        traceroute to 131.107.65.14 (131.107.65.14), 64 hops max, 40 byte packets
         1  dslrouter.westell.com (1XX.XXX.X.X)  11.912 ms  2.684 ms  2.808 ms
         2  * * *

    Comparison: traceroute to a google.com IP:

        $ traceroute 72.14.204.99
        traceroute to 72.14.204.99 (72.14.204.99), 64 hops max, 40 byte packets
         1  dslrouter.westell.com (1XX.XXX.X.X)  6.428 ms  6.981 ms  117.099 ms
         2  * * *

    Any comments / help?

    Read the article

  • Apache2 - 500 internal server error

    - by Lucio Coire Galibone
    I'm running a VPS with CentOS 6 Linux, 4 GB of RAM, 10 GB of HD, and 2 virtual Intel(R) Xeon(R) CPU L5640 @ 2.27GHz CPUs. As my host says, each virtual CPU is at least 0.5 physical CPU. At certain times of the day, those with more traffic, accessing my PHP script intermittently returns "500 internal server error". I enabled debug-level logging in Apache, and also PHP logging with E_ALL, but I can't find any reference to the 500 error in any logs (I checked the right logs!). I haven't got any .htaccess file in the script path. The strange thing is that the error starts at the first PHP line of the script (the preceding HTML displays correctly, but at the first PHP line the script sends the 500 error). The CPU load is always fine (max 0.15 0.08 0.01) and RAM is close to 95%, but it has hit swap only twice in a month, with 2-5 MB. Apache runs with prefork using these values:

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         280
            MaxClients          280
            MaxRequestsPerChild 4000
        </IfModule>

    Everything works correctly and I get no errors in quiet times, but I start receiving errors when traffic rises (6-9000 visits per hour). Can I solve the problem by increasing resources? (I can upgrade RAM up to 16 GB.) Could it depend on reaching MaxClients (but Apache would write that to the log, right?)? If I upgrade RAM to 6 or 8 GB, should I calculate the MaxClients value with this?

        MaxClients = Total RAM dedicated to the web server / Max child process size

    Max child process size is around 20M. What else could the problem be? Thanks in advance
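
    As a worked example of that formula (the ~1 GB reserved for the OS and MySQL is an assumption to adjust), with the 8 GB upgrade:

        MaxClients ≈ (8192 MB total - ~1024 MB reserved) / 20 MB per child ≈ 358

    And reaching MaxClients does get logged: Apache writes a "server reached MaxClients setting, consider raising the MaxClients setting" warning to the error log, so if that line is absent, the 500s are likely coming from PHP itself rather than from connection limits.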

    Read the article

  • Chrome caching 302 redirects

    - by Thermionix
    I have a PHP script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes will make another request and a different image will be returned. Under Chrome, the request seems to be cached, and only opening the page in a new tab will cause it to actually query the script. I believe this used to work in older versions of Chrome; I've tried a few different redirect codes, all with the same result. Any tips?

        <img class="banner" src="/inc/banner.php" alt="">

        ~$ cat /var/www/inc/banner.php
        <?php
        header("HTTP/1.1 302 Redirect");
        header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
        //header('HTTP/1.1 307 Temporary Redirect');
        //header("expires: none");
        //header("expires: max");
        //header("Cache-Control: public");

        $folder = '../img/banner/';
        $exts = 'jpg jpeg png gif';
        $files = array();
        $i = -1;
        if ('' == $folder) $folder = './';

        $handle = opendir($folder);
        $exts = explode(' ', $exts);
        while (false !== ($file = readdir($handle))) {
            foreach ($exts as $ext) { // for each extension check the extension
                if (preg_match('/\.'.$ext.'$/i', $file, $test)) { // faster than ereg, case insensitive
                    $files[] = $file; // it's good
                    ++$i;
                }
            }
        }
        closedir($handle); // We're not using it anymore

        mt_srand((double)microtime()*1000000); // seed for PHP < 4.2
        $rand = mt_rand(0, $i); // $i was incremented as we went along

        header('Location: '.$folder.$files[$rand]);
        flush();
        ?>

    curl output:

        ~$ curl -I -k https://example.net/inc/banner.php
        HTTP/1.1 302 Redirect
        Server: nginx/1.1.14
        Date: Fri, 24 Feb 2012 03:23:46 GMT
        Content-Type: text/html
        Connection: keep-alive
        X-Powered-By: PHP/5.3.10-1ubuntu1
        Cache-Control: max-age=0, no-cache, no-store, must-revalidate
        Location: ../img/banner/2.jpg

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (it happens to be a backup from an Android phone), I get:

        me@corellia:~/Configs/$ git push origin master
        Counting objects: 18, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (14/14), done.
        fatal: Out of memory, malloc failed MiB | 685 KiB/s
        error: pack-objects died of signal 13
        error: failed to push some refs to 'git@dagobah:Configs'

    I've been searching the web, and notably found:

        http://www.mail-archive.com/[email protected]/msg01747.html
        http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html

    but these don't seem to help me for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get:

        24262 git       18   0 16204 6084 1096 S    2  1.2   0:00.12 git-unpack-obje

    Also, during the push, if I run head /proc/meminfo, I get:

        MemTotal:     524288 kB
        MemFree:      289408 kB
        Buffers:           0 kB
        Cached:            0 kB
        SwapCached:        0 kB
        Active:            0 kB
        Inactive:          0 kB
        HighTotal:         0 kB
        HighFree:          0 kB
        LowTotal:     524288 kB

    So it seems that I have enough memory free, but it's actually still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of running the ulimit -a command:

        scottj@dagobah:~$ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 204800
        max locked memory       (kbytes, -l) 32
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 204800
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which is cached via Memcached):

        100 concurrent calls to API call (http):  115ms (median)  260ms (max)
        100 concurrent calls to API call (https): 6.1s  (median)  11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers, and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168  64.1    162     268
        Processing:   385 6096 3438.6   6199   11928
        Waiting:      379 6091 3438.5   6194   11923
        Total:        449 6264 3476.4   6323   12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)

    Read the article

  • Maximum limit of filepointer in php reached and not changeable

    - by mlaug
    I have a server with the current PHP 5.3.x version installed. We are running a really simple and small server in PHP using sockets, which connects to a lot of clients, so we needed to raise the open-file limit; that has already been done on the server for the user that runs it:

        # ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 29879
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 8192
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 29879
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    and we compiled PHP with --enable-fd-setsize=8192. Still, we get this in our logs once in a while:

        [19-Nov-2012 09:24:23 Europe/Berlin] PHP Warning: socket_select(): You MUST recompile PHP with a larger value of FD_SETSIZE. It is set to 1024, but you have descriptors numbered at least as high as 1024. --enable-fd-setsize=2048 is recommended, but you may want to set it to equal the maximum number of open files supported by your system, in order to avoid seeing this error again at a later date.

    Does anyone know how to configure the Unix server and PHP correctly to get this working? I found a bug report, but it is from 2006 and marked as "not a bug": https://bugs.php.net/bug.php?id=37025&edit=1

    Read the article

  • Centos/Postfix able to send mail but not receive it

    - by Dan Hastings
    I have set up Postfix and used the mail command to test, and an email was successfully sent and delivered. The email arrived in my Yahoo inbox, BUT the sender also received an email in the Maildir directory saying "I'm sorry to have to inform you that your message could not be delivered to one or more recipients", even though the message was delivered. I tried replying from Yahoo to the email, but the reply never arrived. I have 1 MX record added at GoDaddy, which I did last week:

        Priority: 0, Host: @, Points to: mail.domain.com, TTL: 1 Hour

    Postfix main.cf has the following added to it:

        myhostname = mail.domain.com
        mydomain = domain.com
        myorigin = $mydomain
        inet_interfaces = all
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mynetworks = 192.168.0.0/24, 127.0.0.0/8
        relay_domains =
        home_mailbox = Maildir/

    I checked /var/log/maillog and found the following errors occurring:

        postfix/anvil[18714]: statistics: max connection rate 1/60s for (smtp:unknown) at Jun 3 09:30:15
        postfix/anvil[18714]: statistics: max connection count 1 for (smtp:unknown) at Jun 3 09:30:15
        postfix/anvil[18714]: statistics: max cache size 1 at Jun 3 09:30:15
        postfix/smtpd[18772]: connect from unknown[unknown]
        postfix/smtpd[18772]: lost connection after CONNECT from unknown[unknown]
        postfix/smtpd[18772]: disconnect from unknown[unknown]

    Output of postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 168.100.189.0/28, 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550

    Read the article

  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only thing that happens is that each connection consumes some server memory. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, say we have 2GB of server memory. Now there are two extreme cases:

    Case 1: If we allocate a 64KB buffer for each connection (only to receive the incoming request), then 32768 connections can consume all 2GB of memory. This leaves no memory to queue/process incoming requests from those connections.

    Case 2: On the other hand, say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time. Those buffers pile up in the server and eventually occupy most of the server's memory, leaving no memory for new connections thereafter.

    This is the real dilemma in server design that has been bugging me badly for many days now. If I can decide on a max request-buffer size per connection and a max number of queued requests per connection, then, based on the available server memory, that automatically sets a limit on the max number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines or empirical data someone could share?
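
    There is no universal constant to plug in here, but making the budget arithmetic explicit is the usual first step; a sketch with the question's own numbers (every limit is an assumption to tune under load):

        // Derive a connection cap from a fixed memory budget (Java, illustrative).
        long memoryBudget   = 2L * 1024 * 1024 * 1024; // 2 GB reserved for buffers
        int  recvBufferSize = 64 * 1024;               // per-connection receive buffer
        int  maxQueueDepth  = 4;                       // max queued requests per connection

        long perConnection  = (long) recvBufferSize * (1 + maxQueueDepth);
        long maxConnections = memoryBudget / perConnection; // = 6553 with these numbers

    Case 2 is then handled by backpressure rather than arithmetic: once a connection's queue is full, stop posting receives on its socket, and TCP's own flow control pushes the buffering back onto the sender instead of onto your heap.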

    Read the article
