Search Results

Search found 6826 results on 274 pages for 'dedicated hosting'.

  • Use of the 'this' keyword in JavaScript event handlers in IE?

    - by Ron
    Is there a workaround for Internet Explorer to get the DOM element that triggered an event, the way the JavaScript 'this' keyword does? My problem scenario is: I have a variable number of text fields in an HTML form, like

        <input type="text" id="11">
        <input type="text" id="12">

    I need to handle the "onchange" event for each text field, and the handling depends on the 'id' of the field that triggered the event. So far I understand that my options are:

    1) Attach a dedicated event handler to each text field, so if I have n fields, I have n different functions, something like:

        <input type="text" id="11" onchange="function11();">
        <input type="text" id="12" onchange="function12();">

    But the text fields are added and removed dynamically, so it would be better to have one generic function instead.

    2) Use the 'this' keyword:

        <input type="text" id="11" onchange="functionGeneric(this);">
        <input type="text" id="12" onchange="functionGeneric(this);">

    But this option does not work in Internet Explorer. Can anyone suggest a workaround to get it working in IE, or some other solution that can be applied here? Thanks.
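    A minimal cross-browser sketch of the generic-handler idea, assuming the handlers are attached from script rather than inline (the handler name and the alert body are illustrative, not from the question): older IE exposes the event as window.event and the originating element as srcElement, while W3C browsers pass an event object whose target property holds the element.

        function onFieldChange(e) {
            // IE's legacy model: event in window.event, element in srcElement;
            // W3C model: event passed as an argument, element in e.target.
            e = e || window.event;
            var field = e.target || e.srcElement;
            alert('Field ' + field.id + ' changed to ' + field.value);
        }

        // Attaching from script (instead of inline attributes) also covers
        // dynamically added fields:
        var field = document.getElementById('11');
        if (field.addEventListener) {
            field.addEventListener('change', onFieldChange, false);
        } else {
            field.attachEvent('onchange', onFieldChange); // IE 8 and earlier
        }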

  • How do I track down sporadic ASP.NET performance problems in a production environment?

    - by Steve Wortham
    I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast. But occasionally it is just really, really slow. I mean like 5-10 seconds load time kind of slow. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company. But the problems continue. I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. If I can then look back at the logs, I can see exactly what my website was doing at the time of extreme slowness. Is there a .NET logging system that'll allow me to make calls into it with code while simultaneously tracking performance? What would you recommend?
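    One way to sketch the "log entry plus resource snapshot" idea with nothing but the BCL, for reference while answers come in (the counter choice and output format are assumptions; the question doesn't name a logging library): System.Diagnostics.PerformanceCounter can read server load at the moment each message is written, so slow requests can later be correlated with resource usage.

        using System;
        using System.Diagnostics;

        public static class PerfLogger
        {
            private static readonly PerformanceCounter Cpu =
                new PerformanceCounter("Processor", "% Processor Time", "_Total");

            // Log a message together with a snapshot of CPU load and the
            // process working set, so slowness can be correlated later.
            public static void Log(string message)
            {
                float cpu = Cpu.NextValue(); // note: the very first call returns 0
                long workingSet = Process.GetCurrentProcess().WorkingSet64;
                Trace.WriteLine(string.Format(
                    "{0:u} cpu={1:F1}% ws={2}MB {3}",
                    DateTime.UtcNow, cpu, workingSet / (1024 * 1024), message));
            }
        }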

  • Using an SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository. I have set up an SVN post-commit hook so that whenever a commit is made to the repository the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to only update those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this please? Thanks!

    Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a hi-spec dedicated server). The development site is approx. 7.5GB in size and contains approx. 600,000 items; this is mainly due to having multiple branches/tags.
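    A sketch of one way this could look inside the post-commit hook itself, untested against the setup above: svnlook dirs-changed lists each directory the commit touched, relative to the repository root, so updating just those paths skips the rest of the working copy.

        #!/bin/sh
        REPOS="$1"   # post-commit hooks receive the repository path...
        REV="$2"     # ...and the revision number as arguments

        cd /home/www/dev_ssl || exit 1

        # Update only the directories this commit actually touched.
        /usr/bin/svnlook dirs-changed -r "$REV" "$REPOS" | while read dir; do
            /usr/bin/svn up --non-interactive "$dir"
        done

    Deriving the single lowest common directory from that list is a small extra step, but updating each changed directory individually usually amounts to the same work.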

  • How to structure an index for type-ahead for an extremely large dataset using Lucene or similar?

    - by Pete
    I have a dataset of 200 million+ records and am looking to build a dedicated backend to power a type-ahead solution. Lucene is of interest given its popularity and license type, but I'm open to other open source suggestions as well. I am looking for advice, tales from the trenches, or even better, direct instruction on what I will need as far as amount of hardware and structure of software.

    Requirements:

    Must have:
    - The ability to do starts-with matching (I type in 'st' and it should match 'Stephen')
    - The ability to return results very quickly; I'd say 500ms is an upper bound.

    Nice to have:
    - The ability to feed relevance information into the indexing process, so that, for example, more popular terms would be returned ahead of others and not just alphabetically, aka Google style.
    - In-word substring matching (so 'st' would match 'bestseller')

    Note: This index will purely be used for type-ahead and does not need to serve standard search queries. I am not worried about getting advice on how to set up the front end or AJAX, as long as the index can be queried as a service or directly via Java code. Up votes for any useful information that allows me to get closer to an enterprise-level type-ahead solution.
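    As a baseline illustration of the Lucene route (Lucene 3.x-era API; the index path, the untokenized "name" field, and the result count of 10 are assumptions for the sketch): the must-have prefix matching is a one-liner with PrefixQuery, and the nice-to-have popularity ranking is typically layered on by indexing a boost or sorting on a popularity field.

        import java.io.File;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.PrefixQuery;
        import org.apache.lucene.search.ScoreDoc;
        import org.apache.lucene.search.TopDocs;
        import org.apache.lucene.store.FSDirectory;

        public class TypeAhead {
            public static void main(String[] args) throws Exception {
                // Searcher over the prebuilt index (path is a placeholder).
                IndexSearcher searcher =
                    new IndexSearcher(FSDirectory.open(new File("/data/typeahead-index")));
                // Prefix match on an untokenized "name" field: 'st' -> 'Stephen'.
                TopDocs hits = searcher.search(new PrefixQuery(new Term("name", "st")), 10);
                for (ScoreDoc sd : hits.scoreDocs) {
                    System.out.println(searcher.doc(sd.doc).get("name"));
                }
            }
        }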

  • Access/Download server files, not in site root, with PHP

    - by user271619
    Usually I save documents (images, mpegs, excel, word docs, etc...) for my friends or family on my website's root, inside a directory called /files/ or something similar. Nothing too uncommon. But I have been playing with user session control, and allowing users to upload files to the dedicated /files/ directory (the file names are saved in a db, with that user's ID). But that means other people could try to guess and locate other people's files. I do randomize the file names upon upload, and I stop Apache from displaying the /files/ directory content. However, I'd like to start saving the files outside of the website's root. This way they can't be accessed via the browser. I don't have any code to show, but I didn't want to even start on this endeavor if it can't be accomplished. I did find this snippet that shows how to display an image from outside your website root:

        $file = $_GET['file'];
        $fileDir = '/path/to/files/';

        if (file_exists($fileDir . $file)) {
            // Note: You should probably do some more checks
            // on the filetype, size, etc.
            $contents = file_get_contents($fileDir . $file);

            // Note: You should probably implement some kind
            // of check on filetype
            header('Content-type: image/jpeg');
            echo $contents;
        }
        ?>

    Maybe I can use this for any file type, but has anyone heard of a better way to allow users (logged in) to access their files online, without letting other users have similar access?
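    The key point the question is reaching for can be sketched like this: look the file up by owner in the database, never by a client-supplied path. The table layout, directory, and session key below are assumptions for illustration.

        <?php
        session_start();

        // Illustrative schema: files(id, user_id, stored_name, mime_type).
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

        $stmt = $pdo->prepare(
            'SELECT stored_name, mime_type FROM files WHERE id = ? AND user_id = ?');
        $stmt->execute(array($_GET['id'], $_SESSION['user_id']));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);

        if ($row === false) {
            header('HTTP/1.1 404 Not Found'); // not this user's file, or no such file
            exit;
        }

        // basename() strips any path components, so even a tampered
        // stored_name can't escape the directory outside the web root.
        $path = '/srv/private_files/' . basename($row['stored_name']);
        header('Content-Type: ' . $row['mime_type']);
        header('Content-Length: ' . filesize($path));
        readfile($path);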

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem. I need to store huge amounts of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combinations of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all!

    I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm thinking of is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
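    One standard technique for arrays that are bigger than comfortable but smaller than disk, for reference while answers come in, is a memory-mapped file: the OS pages the 8-byte slots in and out on demand, so the full 4D array is addressed as if it were in RAM. A minimal sketch with NumPy (the file path and N are placeholders; any language with mmap support works the same way):

        import numpy as np

        N = 250  # 250^4 * 8 bytes is roughly 31 GB

        # Create (or reopen with mode='r+') a disk-backed 4D float64 array.
        data = np.memmap('/scratch/grid.dat', dtype=np.float64,
                         mode='w+', shape=(N, N, N, N))

        # Slices are read and written lazily, page by page:
        data[0, 1, :, :] = 3.14
        print(data[0, 1, 2, 3])

        data.flush()  # push dirty pages back to disk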

  • Question regarding ideal database implementation for iPhone app

    - by Jeff
    So I have a question about the ideal setup for an app I am getting ready to build. The app is basically going to be a memorization tool, and I already have an SQLite database full of content that I will be using for the app. The user will navigate through the contents of the database (using a UIPickerView) and select something for memorization. If that row or cell of data is selected, it is put into a pool, a UITableView dedicated to showing which items you have in your "need to memorize" pool. When you go to that table view, you can select a row and the actual data is shown. All information in the table view would be deletable, in the event that they don't want it there anymore... That's it. I know that with database interfacing there are a few different options out there; in this particular setup, is Core Data the easiest approach? Is there any other way that would be better? I am just kind of looking for a point in the right direction; any help is greatly appreciated!!

  • Two radically different queries against 4 million records execute in the same time - one uses brute force

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop.

    The first query uses a brute-force method of evaluating possibly likely matches and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches.

    Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance.

    Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

    Brute-force query:

    Index-based exception query:

    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

  • Checking if a console application is still running using the Process class

    - by Ced
    I'm making an application that will monitor the state of another process and restart it when it stops responding, exits, or throws an error. However, I'm having trouble making it reliably check whether the process (being a C++ console window) has stopped responding. My code looks like this:

        public void monitorserver()
        {
            while (true)
            {
                server.StartInfo = new ProcessStartInfo(textbox_srcdsexe.Text, startstring);
                server.Start();
                log("server started");
                log("Monitor started.");

                while (server.Responding)
                {
                    if (server.HasExited)
                    {
                        log("server exited, restarting.");
                        break;
                    }
                    log("server is running: " + server.Responding.ToString());
                    Thread.Sleep(1000);
                }

                log("Server stopped responding, terminating..");
                try
                {
                    server.Kill();
                }
                catch (Exception) { }
            }
        }

    The application I'm monitoring is Valve's Source Dedicated Server, running Garry's Mod, and I'm over-stressing the physics engine to simulate it stopping responding. However, this never triggers the Process class recognizing it as 'stopped responding'. I know there are ways to directly query the Source server using their own protocol, but I'd like to keep it simple and universal (so that I can maybe use it for different applications in the future). Any help appreciated.

  • Why is "origin/HEAD" shown when running "git branch -r"?

    - by Ben Hamill
    When you run git branch -r, why the blazes does it list origin/HEAD? For example, there's a remote repo on GitHub, say, with two branches: master and awesome-feature. If I do git clone to grab it and then go into my new directory and list the branches, I see this:

        $ git branch -r
          origin/HEAD
          origin/master
          origin/awesome-feature

    Or whatever order it would be in (alpha? I'm faking this example to keep the identity of an innocent repo secret). So what's the HEAD business? Is it what the last person to push had their HEAD pointed at when they pushed? Won't that always be whatever it was they pushed? HEADs move around... why do I care what someone's HEAD pointed at on another machine? I'm just getting a handle on remote tracking and such, so this is one lingering confusion. Thanks!

    EDIT: I was under the impression that dedicated remote repos (like GitHub, where no one will ssh in and work on that code, but only pull or push, etc.) didn't and shouldn't have a HEAD because there was, basically, no working copy. Not so?
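    Some background that can be checked from any clone: a bare repository does have a HEAD (it marks the default branch, working copy or not), and origin/HEAD in a clone is just a local symbolic ref recording that default as of clone time; it does not move with other people's pushes. The sample output below assumes master is the remote's default branch.

        # Show what origin/HEAD resolves to locally:
        $ git symbolic-ref refs/remotes/origin/HEAD
        refs/remotes/origin/master

        # Re-query the remote and update it, or remove it entirely:
        $ git remote set-head origin --auto
        $ git remote set-head origin --delete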

  • Are bad data issues that common?

    - by Water Cooler v2
    I've worked for clients that had a large number of distinct, small to mid-sized projects, each interacting with the others via properly defined interfaces to share data, but not reading and writing to the same database. Each had its own separate database, its own cache, and its own file servers/system that it had dedicated access to, and so they never caused any problems. One of these clients is a mobile content vendor, so they're lucky in a way that they do not have to face the same problems that everyday business applications do. They can create all those separate compartments where their components happily live in isolation from the others.

    However, for many business applications this is not possible. I've worked with a few clients, one of whose applications I am doing the production support for, where there are "bad data issues" on an hourly basis. Yeah, it's that crazy. Some data records from one of the instances (lower than production, of course) will have been run a couple of weeks ago and caused some other user's data to get corrupted, and then a data script has to be written to fix the issue. I've seen this happening so much with this client that I have to ask. I've seen it happening at a moderate rate with other clients, but this one just seems to be beyond the ordinary.

    If you're working with business applications that share a large amount of data by reading and writing to/from the same database, are "bad data issues" that common in your environment?

  • Advanced search engine or server for relational database [closed]

    - by Pawel
    In my current project we are storing a big volume of data in a relational database. One of the recent key requirements is to enrich the application by adding some advanced search capabilities. In the project, performance is one of the important factors due to very large tables (10+ million records) with parent-child relations (for example: multi-level parent-child relationships, where I am looking for all parents with specific children). The search engine should also be able to check these references for hits. I have found some potential engines on Stack Overflow; however, it looks like all of them are dedicated to text search rather than relational DBs, and hosted on Linux:

    - Lucene
    - Solr
    - Sphinx

    As I understand it, some of them use documents as the source for searching, but is it possible or efficient to create documents programmatically based on my relational data? As I am not familiar with all of their features/capabilities, can anyone please make some recommendations or propose a different solution? To summarize my requirements:

    - a framework/engine to search a relational database, including descendants
    - support for Microsoft SQL Server
    - can be used in .NET applications
    - preferably hosted on Windows systems

    Are any of those mentioned above able to solve my problem? Do you know any better solution?

  • Refactoring or Rewriting Monolithic PHP Spaghetti Codebase

    - by nategood
    I've inherited a really poorly designed PHP spaghetti-code project. It's been gaining a good bit of traffic recently and is starting to have performance issues on top of the poor monolithic code base. It's maxing out a chunky 16GB dedicated machine when it really shouldn't be. I'm planning on doing some performance tweaks right off the bat to help the performance issue, but this still won't really help the horrible code base. The team is small but expecting to grow very soon.

    I've read Joel's article on the troubles of doing a complete rewrite and see the concerns. But how bad does the code base have to be before you consider a rewrite? There is PHP handling logic interjected into what one would usually consider a "view". Even worse, in some places SQL statements are in these same files! The only real separation of presentation and logic is a few PHP scripts that serve as function libraries. These scripts do most of the ORM stuff... if you can even call it that. Trying to slowly refactor this seems like a nightmare.

    Open to your thoughts and opinions... however, not interested in hearing "Run away, run away!".

  • SQLDeveloper using over 100MB of PGA+UGA

    - by Leigh Riffel
    Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32-bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:

        SELECT e.SID, e.username, e.status, b.PGA_MEMORY
        FROM v$session e
        LEFT JOIN (
            SELECT y.SID, y.value pga,
                   TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY
            FROM v$sesstat y, v$statname z
            WHERE y.STATISTIC# = z.STATISTIC#
              AND NAME = 'session pga memory'
        ) b ON e.sid = b.sid
        WHERE (PGA)/1024/1024 > 20
        ORDER BY 4 DESC;

    It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when it is closed the memory does not go away. The problem is worse if the table was sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but to use memory after it is closed seems wrong to me. Can anyone confirm this?

    Update: I discovered that my numbers were off due to not understanding that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.

  • PHP mail() function not delivering mail

    - by Ryan Jones
    Hi there, I have a slight problem. I am using a working script (it works on my testing account, a shared server) to send mail through PHP using the mail() function. I just got a dedicated server, and I haven't been able to get the function to work. I've spent the last 10 or so hours reading various documentation on BIND (for the SPF record), dovecot, sendmail and postfix, trying various things to get this to work. There is clearly something that I am missing.

    So we know the PHP code works fine; the headers and everything are fine. We know this as it's a direct copy from my testing account. So the problem must arise somewhere in the server config. The path to sendmail is correct, and sendmail is (apparently) working fine. I've set up the script to report "Sent" or "Error" based on the boolean result of the PHP mail() function. That is:

        if (mail($blah, $blah, $blah, $blah, $blah)) {
            echo "Sent";
        } else {
            echo "Error";
        }

    And the result ALWAYS comes up as "Sent" - however, the email never arrives. Can someone suggest things to check, as I'm completely new to this (24 hours or so!). Thanks in advance. Ryan
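    One thing worth knowing while debugging this: mail() returning true only means the message was handed off to the local sendmail binary, not that it was delivered. The trail picks up again in the MTA's own log (the path varies by distro; both common locations shown as examples):

        tail -f /var/log/maillog      # Red Hat/CentOS style
        tail -f /var/log/mail.log     # Debian/Ubuntu style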

  • Efficient Android rendering

    - by llll
    I've read quite a few tutorials on game programming on Android, and all of them provide basically the same solution for drawing the game, that is, having a dedicated thread spinning like this:

        public void run() {
            while (true) {
                if (!surfaceHolder.getSurface().isValid())
                    continue;
                Canvas canvas = surfaceHolder.lockCanvas();
                drawGame(canvas); /* do actual drawing here */
                surfaceHolder.unlockCanvasAndPost(canvas);
            }
        }

    Now I'm wondering, isn't this wasteful? Suppose I've a game with very simple graphics, so that the actual time in drawGame is little; then I'm going to draw the same things on and on, stealing CPU from the other threads. A possibility could be skipping the drawing and sleeping a bit if the game state hasn't changed, which I could check by having the state-update thread maintain a suitable status flag. But maybe there are other options. For example, couldn't it be possible to synchronize with rendering, so that I don't post updates too often? Or am I missing something, and that is precisely what lockCanvas does, that is, it blocks and burns no CPU until the proper time? Thanks in advance. L.
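    The dirty-flag idea mentioned above can be sketched like this (the running and dirty fields are illustrative additions to the question's own loop): the update thread raises the flag whenever the state changes, and the render loop sleeps briefly when there is nothing new to draw instead of re-rendering identical frames.

        private volatile boolean dirty = true;    // set by the state-update thread
        private volatile boolean running = true;  // cleared to stop the loop

        public void run() {
            while (running) {
                if (!dirty) {
                    // Nothing changed; yield the CPU for ~one 60fps frame.
                    try { Thread.sleep(16); } catch (InterruptedException e) { return; }
                    continue;
                }
                dirty = false;
                if (!surfaceHolder.getSurface().isValid()) continue;
                Canvas canvas = surfaceHolder.lockCanvas();
                drawGame(canvas);
                surfaceHolder.unlockCanvasAndPost(canvas);
            }
        }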

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using the Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question:

    I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the db on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow?

    My questions are: 1. Is my understanding of AWS correct? and 2. Is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. Would love to have the scalability of AWS, though...

  • How can I rewrite the history of a published git branch in multiple steps?

    - by Frerich Raabe
    I've got a git repository with two branches, master and amazing_new_feature. The latter branch contains the work on, well, an amazing new feature. A colleague and I are both working on the same repository, and the two of us commit to both branches. Now the work on the amazing new feature is finished, and a bit more than 100 commits have accumulated in the amazing_new_feature branch. I'd like to clean those commits up a bit (using git rebase -i) before merging the work into master. The issue we're facing is that it's quite a pain to rewrite/reorder all 100 commits in one go. Instead, what I'd like to do is:

    1. Rewrite/merge/reorder the first few commits in the amazing_new_feature branch and put the result into a dedicated branch which contains the 'cleaned up' history (say, an amazing_new_feature_ready_for_merge branch).
    2. Rebase the remaining amazing_new_feature branch on the amazing_new_feature_ready_for_merge branch.
    3. Repeat at 1.

    My idea is that at some point all the work from amazing_new_feature should be in amazing_new_feature_ready_for_merge, and then I can merge the latter into master. Is this a sensible approach, or are there better/easier/more fool-proof solutions to this problem? I'm especially scared about the second step of the above algorithm, since it means rebasing a published branch. IIRC that's a dangerous thing to do.
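    One round of steps 1-2 can be sketched with plain git commands (SPLIT is a placeholder for the last commit cleaned up so far; and the caveat from the question stands - both branches are shared, so any rewriting has to be coordinated with the colleague):

        # Step 1: clean up the first chunk of history on a dedicated branch.
        git checkout -b amazing_new_feature_ready_for_merge SPLIT
        git rebase -i master      # squash/reorder/reword this chunk

        # Step 2: replay everything after SPLIT onto the cleaned-up base.
        git rebase --onto amazing_new_feature_ready_for_merge SPLIT amazing_new_feature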

  • How many users are sufficient to make a heavy load for a web application?

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server which has an 8-core Intel CPU and 4GB of RAM. Software: Drupal 5 (Apache 2, PHP5, MySQL5) running on Debian. After reaching 500 authenticated and 200 anonymous users (simultaneous), the application drastically decreases its performance, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users?

    EDIT: I forgot to mention that I did some kind of profiling. I ran the commands top and htop, and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce the load. Administrators said that there were about 200 active MySQL connections at that moment. The worst point is that we need to solve this ASAP, and I can't do deep profiling analysis/code refactoring, so I'm considering two ways:

    1. My tables are MyISAM. I heard they use table-level locking, which is very slow; is that right? Could I change them to InnoDB without worry?
    2. What if I take MySQL and move it to a dedicated machine with a lot of RAM?
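    For reference, the storage-engine change in option 1 is a single statement per table; a sketch against one table (the table name is just an example, and the statement rewrites the whole table, so it can take a while and is worth rehearsing on a copy first):

        ALTER TABLE node ENGINE=InnoDB;
        SHOW TABLE STATUS LIKE 'node';   -- confirm the Engine column changed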

  • Specify more than one "case" value in a switch statement

    - by Sophie Mackeral
    I have the following code:

        $ErrorType = null;
        switch ($ErrNo) {
            case 256,1:
                $ErrorType = "Error";
                break;
            case 512,2:
                $ErrorType = "Warning";
                break;
            case 1024,8:
                $ErrorType = "Notice";
                break;
            case 2048:
                $ErrorType = "Strict Warning";
                break;
            case 8192:
                $ErrorType = "Depreciated";
                break;
        }

    But the problem is, I'm working from the pre-defined error constants for an error-handling software solution, and I cannot specify more than one value per "case" for the dedicated error categories. For example:

        switch ($ErrNo) {
            case 1:
                $ErrorType = "Error";
                break;
            case 256:
                $ErrorType = "Error";
        }

    That's a repeat of code, whereas with a solution like my first example it would be beneficial, as two integers fall under the same category. Instead, I'm returned the following:

        Parse error: syntax error, unexpected ',' in Action_Error.php on line 37
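    For the record, PHP's switch handles this with stacked case labels (fall-through) rather than comma-separated values: labels with no break between them share one body. A sketch of the first two categories (the constant names in the comments are PHP's standard error levels):

        switch ($ErrNo) {
            case 1:        // E_ERROR
            case 256:      // E_USER_ERROR
                $ErrorType = "Error";
                break;
            case 2:        // E_WARNING
            case 512:      // E_USER_WARNING
                $ErrorType = "Warning";
                break;
        }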

  • Nginx, Apache, MySQL, Memcache on a server with 4GB RAM. How to optimize to have enough memory?

    - by TomSawyer
    I have one dedicated server with Nginx as a proxy for Apache, plus Memcache and MySQL, and 4GB of RAM. These days the number of visitors on my site hasn't increased, but my server always gets overloaded during certain hours (9AM - 3PM). The RAM in use increases second by second until it is full; at that moment the server becomes overloaded, and I have to kill all the Apache and MySQL services and reboot to get free memory. Then it fills up again. It's a terrible cycle.

    Here is the RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). And here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql).

    And here's my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

  • "Detecting" and loading of "plugins" in GAE

    - by Patrick Cornelissen
    Hi! I have a "plugin-like" architecture, and I want to create one instance of each class that implements a dedicated interface and put these in a cache (to have a singleton-ish effect). The plugins will be provided as jars and put into the app engine war file before the app is uploaded.

    I have tried to use the ClassPathScanningCandidateComponentProvider, as I'm using Spring anyway, but this didn't work: the provider complained that it was not able to find the HttpServletResponse class file while scanning the classpath. I can't get around this - when I add the servlet jar, I of course get problems, because the same jar is also provided by GAE. If I don't, I get the error above...

    So I tried to add static initialization code, but of course this doesn't work, because a class is initialized when it's instantiated for the first time. (Well, I knew that, but it was worth a try.)

    The last chance I currently see is to create a properties file with all plugin classes when the package is created, but this requires writing a Maven plugin etc., and I'd like to avoid that. Is there something that I am missing?
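    For scale, the runtime half of the properties-file fallback described in the last paragraph is quite small and needs no classpath scanning; a sketch, where the file name, the key layout, and the Plugin interface name are all assumptions:

        import java.io.InputStream;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Properties;

        public final class PluginLoader {
            // Loads one instance of every class listed in /plugins.properties,
            // e.g.  plugin.1=com.example.FooPlugin
            public static List<Plugin> loadAll() throws Exception {
                Properties props = new Properties();
                InputStream in = PluginLoader.class.getResourceAsStream("/plugins.properties");
                props.load(in);
                in.close();
                List<Plugin> plugins = new ArrayList<Plugin>();
                for (String key : props.stringPropertyNames()) {
                    Class<?> cls = Class.forName(props.getProperty(key));
                    plugins.add((Plugin) cls.newInstance());
                }
                return plugins;
            }
        }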

  • Writing a synchronized thread-safety wrapper for NavigableMap

    - by polygenelubricants
    java.util.Collections currently provides the following utility methods for creating synchronized wrappers for various collection interfaces:

    - synchronizedCollection(Collection<T> c)
    - synchronizedList(List<T> list)
    - synchronizedMap(Map<K,V> m)
    - synchronizedSet(Set<T> s)
    - synchronizedSortedMap(SortedMap<K,V> m)
    - synchronizedSortedSet(SortedSet<T> s)

    Analogously, it also has 6 unmodifiableXXX overloads. The glaring omission here is the set of utility methods for NavigableMap<K,V>. It's true that NavigableMap extends SortedMap, but so does SortedSet extend Set, and Set extend Collection, and Collections has dedicated utility methods for SortedSet and Set. Presumably NavigableMap is a useful abstraction, or else it wouldn't have been there in the first place, and yet there are no utility methods for it. So the questions are:

    1. Is there a specific reason why Collections doesn't provide utility methods for NavigableMap?
    2. How would you write your own synchronized wrapper for NavigableMap? Glancing at the source code for the OpenJDK version of Collections.java seems to suggest that this is just a "mechanical" process.
    3. Is it true that in general you can add synchronized thread-safety like this?
    4. If it's such a mechanical process, can it be automated? (Eclipse plug-in, etc.)
    5. Is this code repetition necessary, or could it have been avoided by a different OOP design pattern?
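    On question 2, the mechanical pattern from the JDK's own SynchronizedSortedMap carries over directly: hold a mutex, synchronize every method, delegate. A partial sketch follows - only a few representative methods are shown, and a real wrapper would implement the full NavigableMap interface and also wrap the sub-map and key-set views it hands out, which is the fiddly part. (For what it's worth, Collections.synchronizedNavigableMap was eventually added in Java 8.)

        import java.util.Map;
        import java.util.NavigableMap;

        final class SynchronizedNavigableMap<K, V> {
            private final NavigableMap<K, V> m; // backing map
            private final Object mutex = new Object();

            SynchronizedNavigableMap(NavigableMap<K, V> m) { this.m = m; }

            public V put(K key, V value)          { synchronized (mutex) { return m.put(key, value); } }
            public Map.Entry<K, V> firstEntry()   { synchronized (mutex) { return m.firstEntry(); } }
            public Map.Entry<K, V> ceilingEntry(K key) { synchronized (mutex) { return m.ceilingEntry(key); } }
            public K floorKey(K key)              { synchronized (mutex) { return m.floorKey(key); } }
            // ...and so on for every other NavigableMap method; sub-maps and
            // key sets returned to callers must themselves be wrapped.
        }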

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS, unpacking and running gwan, then pointing wrk at it (using a null file null.html):

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html

        Running 20s test @ http://127.0.0.1:8080/null.html
          2 threads and 100 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency    11.65s     5.10s   13.89s   83.91%
            Req/Sec     3.33k     3.65k   12.33k   75.19%
          125067 requests in 20.01s, 32.08MB read
          Socket errors: connect 0, read 37, write 0, timeout 49
        Requests/sec:   6251.46
        Transfer/sec:      1.60MB

    ...very poor performance; in fact there seems to be some kind of huge latency issue. During the test gwan is 200% busy and wrk is 67% busy.

    Pointing at nginx, wrk is 200% busy and nginx is 45% busy:

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html

          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   371.81us  134.05us  24.04ms   91.26%
            Req/Sec    72.75k     7.38k  109.22k   68.21%
          2740883 requests in 20.00s, 540.95MB read
        Requests/sec: 137046.70
        Transfer/sec:     27.05MB

    Pointing weighttp at nginx gives even faster results:

        /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html

        weighttp - a lightweight and simple webserver benchmarking tool

        starting benchmark...
        spawning thread #1: 167 concurrent requests, 666667 total requests
        spawning thread #2: 167 concurrent requests, 666667 total requests
        spawning thread #3: 166 concurrent requests, 666666 total requests
        progress:  9% done
        progress: 19% done
        progress: 29% done
        progress: 39% done
        progress: 49% done
        progress: 59% done
        progress: 69% done
        progress: 79% done
        progress: 89% done
        progress: 99% done
        finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s
        requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored
        status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx
        traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data

    The server is a virtual 8-core dedicated server (bare metal) under KVM.

    Where do I start looking to identify the problem gwan is having on this platform? I have tested lighttpd, nginx and node.js on this same OS, and the results are all as one would expect. The server has been tuned in the usual way: expanded ephemeral ports, increased ulimits, adjusted time-wait recycling, etc.

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines, and the solution failed to build on my machine. The build on my machine encountered two problems:

    1. During a simple build, creation of the biggest static library (about 522MB in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".
    2. A full solution rebuild creates this library, but when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe, which consumes about 80MB of physical memory and 250MB of virtual memory and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where a successful build could be performed, there is only one link.exe process, which uses all the memory required for linking (about 500MB physical).

    There is plenty of hard drive space on my machine, and the file system is NTFS. All three of our systems are similar: Core2Quad processors, 4GB of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except for maybe "out of heap space", however I am not sure how I should fight this. The only idea I have left is reinstalling the OS, however I am not sure that it would help, and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks.
