Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.


  • Integer division in PHP

    - by oezi
    Hi guys, I'm looking for the fastest way to do an integer division in PHP. For example, 5 / 2 should be 2 | 6 / 2 should be 3, and so on. If I simply do this, PHP will return 2.5 in the first case. The only solution I could find was using intval($my_number/2), which isn't as fast as I want it to be (but does give the expected results). Can anyone help me out with this?

    Read the article

  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys, and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys. I'm wondering if anyone has any suggestions on an approach. Right now this is the approach I'm pondering:

    1. Drop all indexes and fkeys on all the tables involved.
    2. Add a 'NewPrimaryKey' column to each table.
    3. Make the key an identity on the two base tables.
    4. Script the data change: update table x set NewPrimaryKey = y where OldPrimaryKey = z.
    5. Rename the original primary key to 'OldPrimaryKey'.
    6. Rename the 'NewPrimaryKey' column to 'PrimaryKey'.
    7. Script back all the indexes and fkeys.

    Does this seem like a good approach? Does anyone know of a tool or script that would help with this? TD: Edited per additional information. See this blog post that addresses an approach when the GUID is the primary key: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx

    Read the article

  • Spring web application: internal visual monitoring of heap space and PermGen

    - by Veniamin
    I have created a web application using the Spring framework (based on Maven). Can I include some module/dependency/plugin in my app to monitor heap space, PermGen, etc.? It would be great to get some chart output like in VisualVM, for example at http://localhost:8080/monitoring. I have found the following dependency, without a link to a repo:

        <dependency>
            <groupId>com.sun.tools.visualvm</groupId>
            <artifactId>core</artifactId>
            <version>1.3.3</version>
        </dependency>

    How do I use it?
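
    A minimal sketch of what such an endpoint could look like without pulling in VisualVM at all, using only the standard java.lang.management beans (the /monitoring mapping and the controller name are illustrative, not from the question):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.MemoryPoolMXBean;
        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.ResponseBody;

        @Controller
        public class MonitoringController {

            @RequestMapping("/monitoring")  // hypothetical mapping
            @ResponseBody
            public String monitoring() {
                MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
                StringBuilder out = new StringBuilder();
                out.append("Heap: ").append(mem.getHeapMemoryUsage()).append('\n');
                out.append("Non-heap: ").append(mem.getNonHeapMemoryUsage()).append('\n');
                // On pre-Java-8 VMs, PermGen shows up as one of these pools.
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    out.append(pool.getName()).append(": ").append(pool.getUsage()).append('\n');
                }
                return out.toString();
            }
        }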

    Read the article

  • Measuring debug vs release of ASP.NET applications

    - by Alex Angas
    A question at work came up about building ASP.NET applications in release vs debug mode. When researching further (particularly on SO), general advice is that setting <compilation debug="true"> in web.config has a much bigger impact. Has anyone done any testing to get some actual numbers about this? Here's the sort of information I'm looking for (which may give away my experience with testing such things):

        Execution time     | Debug build   | Release build
        -------------------+---------------+---------------
        Debug web.config   | average 1     | average 2
        Retail web.config  | average 3     | average 4

        Max memory usage   | Debug build   | Release build
        -------------------+---------------+---------------
        Debug web.config   | average 1     | average 2
        Retail web.config  | average 3     | average 4

        Output file size   | Debug build   | Release build
        -------------------+---------------+---------------
                           | size 1        | size 2

    Read the article

  • oprofile unable to produce call graph

    - by aaa
    Hello, I am trying to use oprofile to generate a call graph. The compiler is g++, the platform is Linux x86-64, the linker is gfortran. The C++ code is compiled with -fno-omit-frame-pointer, oprofile is started with --callgraph=25, and I run the report with --callgraph. The call graph is produced, but it only includes self time, which is not much use. What am I missing?

    Read the article

  • Are closures in JavaScript recompiled?

    - by Discodancer
    Let's say we have this code (forget about prototypes for a moment):

        function A() {
            var foo = 1;
            this.method = function () {
                return foo;
            };
        }
        var a = new A();

    Is the inner function recompiled each time the function A is run? Or is it better (and why) to do it like this:

        var method = function () {
            return this.foo;
        };
        function A() {
            this.foo = 1;
            this.method = method;
        }
        var a = new A();

    Or are the JavaScript engines smart enough not to create a new 'method' function every time? Specifically Google's V8 and node.js. Also, any general recommendations on when to use which technique are welcome. In my specific example, it really suits me to use the first version, but I know that the outer function will be instantiated many times.

    Read the article

  • How to increase query speed without using full-text search?

    - by andre matos
    This is my simple query; by searching for 'selectnothing' I'm sure I'll have no hits:

        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE:

        Seq Scan on myTable (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
          Output: nome_t
          Filter: (nome_t ~~* '%selectnothing%'::text)
        Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

        CREATE TABLE myTable (
            nome_t text NOT NULL
        );

    I have an index on nome_t as stated below:

        CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application, and currently it's taking around 2 seconds, which is obviously too much. Is there anything I can do, like using other index methods, to improve the speed of this query?

    Read the article

  • C#/WPF FileSystemWatcher on every extension on every path

    - by BlueMan
    I need a FileSystemWatcher that can observe some specific paths and some specific extensions. But there could be dozens, hundreds, or maybe thousands of paths (hope not :P), and the same goes for extensions. The paths and extensions are added by the user. Creating hundreds of FileSystemWatchers is not a good idea, is it? So - how to do it? Is it possible to watch/observe every device (HDDs, SD flash, pendrives, etc.)? Would it be efficient? I don't think so... Every change to a Windows log file, every file scan by an antivirus program - it could really slow down my program with a SystemWatcher :(

    Read the article

  • One bidirectional TCP socket or two unidirectional? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Would it be faster to use 1 TCP socket (for send and recv) or to use 2 unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong, maybe). The benefit of 2 unidirectional connections comes from Linux:

        http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        3847 /*
        3848  * TCP receive function for the ESTABLISHED state.
        3849  *
        3850  * It is split into a fast path and a slow path. The fast path is
        3851  * disabled when:
             ...
        3859  * - Data is sent in both directions. Fast path only supports pure senders
        3860  *   or pure receivers (this means either the sequence number or the ack
        3861  *   value must stay constant)
             ...
        3863  *
        3864  * When these conditions are not satisfied it drops into a standard
        3865  * receive procedure patterned after RFC793 to handle all cases.
        3866  * The first three cases are guaranteed by proper pred_flags setting,
        3867  * the rest is checked inline. Fast processing is turned on in
        3868  * tcp_data_queue when everything is OK.

    All the other conditions for disabling the fast path are false; only a socket that is not unidirectional stops the kernel from taking the fast path on receive.
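
    A minimal Java sketch of the ping-pong measurement described above (the peer address and port are placeholders; the other machine would run the mirror-image echo loop). Running it once over a single bidirectional socket and once over a send-socket/receive-socket pair would give the comparison the question asks about:

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.Socket;

        public class PingPong {
            public static void main(String[] args) throws Exception {
                Socket socket = new Socket("192.168.0.2", 5000); // hypothetical peer
                socket.setTcpNoDelay(true); // disable Nagle batching for latency tests
                OutputStream out = socket.getOutputStream();
                InputStream in = socket.getInputStream();
                for (int size = 2; size <= 8192; size *= 2) { // 2^1 .. 2^13 bytes
                    byte[] buf = new byte[size];
                    int reps = 3;
                    long start = System.nanoTime();
                    for (int r = 0; r < reps; r++) {
                        out.write(buf);
                        out.flush();
                        int read = 0; // TCP is a stream: loop until the echo is complete
                        while (read < size) {
                            read += in.read(buf, read, size - read);
                        }
                    }
                    System.out.printf("%5d bytes: %8.1f us round-trip%n",
                            size, (System.nanoTime() - start) / (reps * 1000.0));
                }
                socket.close();
            }
        }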

    Read the article

  • Partitioning requests in code among several servers

    - by Jacques René Mesrine
    I have several forum servers (what they are is irrelevant) which store posts from users, and I want to be able to partition requests among these servers. I'm currently leaning towards partitioning them by geographic location. To improve the locality of data, users will be separated into regions, e.g. North America, South America, and so on. Is there any design pattern for implementing the function that maps the partitioning property to the server, so that this piece of code has high availability and does not become a single point of failure?

        f( Region ) -> Server IP
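
    One way to keep f(Region) -> IP from becoming a single point of failure is to make it a pure lookup over replicated configuration, with more than one candidate server per region. A minimal Java sketch under those assumptions (the Region values and addresses are made up; in practice the table would come from DNS or a shared config service, not a hard-coded map):

        import java.util.Arrays;
        import java.util.EnumMap;
        import java.util.List;
        import java.util.Map;

        enum Region { NORTH_AMERICA, SOUTH_AMERICA, EUROPE, ASIA }

        public class RegionRouter {
            // Replicated, static data: every frontend can hold a copy,
            // so the mapping itself has no single point of failure.
            private static final Map<Region, List<String>> SERVERS =
                    new EnumMap<Region, List<String>>(Region.class);
            static {
                SERVERS.put(Region.NORTH_AMERICA, Arrays.asList("10.0.1.1", "10.0.1.2"));
                SERVERS.put(Region.SOUTH_AMERICA, Arrays.asList("10.0.2.1", "10.0.2.2"));
                SERVERS.put(Region.EUROPE,        Arrays.asList("10.0.3.1", "10.0.3.2"));
                SERVERS.put(Region.ASIA,          Arrays.asList("10.0.4.1", "10.0.4.2"));
            }

            /** f(Region) -> server IP; 'attempt' lets the caller fail over
             *  to the next replica when a connection fails. */
            public static String serverFor(Region region, int attempt) {
                List<String> candidates = SERVERS.get(region);
                return candidates.get(attempt % candidates.size());
            }
        }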

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool - I just don't know what that tool is, or what exactly it should be doing. Here is the setup: I have a 3rd party DLL that has to be registered in the GAC. This all works fine and well on pretty much every machine our software was deployed on before. But now we have 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, over which I went over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these 2 machines it takes over 1 minute to load. Knowns:

    - I have no access to the source, so I can't debug through the DLL.
    - Our app is the only one that uses it (so there shouldn't be simultaneous-access issues).
    - There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict.
    - The GAC reference is being used (if I uninstall the DLL from the GAC, an exception is thrown about the missing GAC reference).

    Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?

    Read the article

  • Reusing a PreparedStatement between methods?

    - by MRalwasser
    We all know that we should reuse a JDBC PreparedStatement rather than create a new instance within a loop. But how should one deal with PreparedStatement reuse between different method invocations? Does the reuse "rule" still apply? Should I really consider using a field for the PreparedStatement, or should I close and re-create the prepared statement in every invocation? (Of course an instance of such a class would be bound to a Connection, which might be a disadvantage.) I am aware that the ideal answer might be "it depends", but I am looking for a best practice for less experienced developers, so that they make the right choice in most cases.
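
    For the field-based option, a minimal sketch of what such a class could look like (the table, column, and class names are invented for illustration). The statement is prepared once in the constructor and reused by every call; the trade-off mentioned above is visible in the signature - the object is tied to one Connection for its whole lifetime:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        class UserDao implements AutoCloseable {
            private final PreparedStatement findByName; // prepared once, reused per call

            UserDao(Connection connection) throws SQLException {
                this.findByName = connection.prepareStatement(
                        "SELECT id FROM users WHERE name = ?");
            }

            long findId(String name) throws SQLException {
                findByName.setString(1, name); // only the parameters change per call
                ResultSet rs = findByName.executeQuery();
                try {
                    return rs.next() ? rs.getLong(1) : -1L;
                } finally {
                    rs.close();
                }
            }

            @Override
            public void close() throws SQLException {
                findByName.close(); // release the statement when the DAO is done
            }
        }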

    Read the article

  • Reasonably faster way to traverse a directory tree in Python?

    - by Sridhar Ratnakumar
    Assuming that the given directory tree is of reasonable size - say an open source project like Twisted or Python - what is the fastest way to traverse and iterate over the absolute paths of all files/directories inside that directory? I want to do this from within Python (using subprocess is allowed). os.path.walk is slow, so I tried ls -lR and tree -fi. For a project with about 8337 files (including tmp, pyc, test, and .svn files):

        $ time tree -fi > /dev/null
        real    0m0.170s
        user    0m0.044s
        sys     0m0.123s

        $ time ls -lR > /dev/null
        real    0m0.292s
        user    0m0.138s
        sys     0m0.152s

        $ time find . > /dev/null
        real    0m0.074s
        user    0m0.017s
        sys     0m0.056s

    tree appears to be faster than ls -lR (though ls -R is faster than tree, it does not give full paths); find is the fastest. Can anyone think of a faster and/or better approach? On Windows, I may simply ship a 32-bit binary tree.exe or ls.exe if necessary. Update 1: Added find.

    Read the article

  • SQL Server uncorrelated subquery very slow

    - by brianberns
    I have a simple, uncorrelated subquery that performs very poorly on SQL Server. I'm not very experienced at reading execution plans, but it looks like the inner query is being executed once for every row in the outer query, even though the results are the same each time. What can I do to tell SQL Server to execute the inner query only once? The query looks like this:

        select *
        from Record record0_
        where record0_.RecordTypeFK = 'c2a0ffa5-d23b-11db-9ea3-000e7f30d6a2'
          and record0_.EntityFK in (
              select record1_.EntityFK
              from Record record1_
              join RecordTextValue textvalues2_
                on record1_.PK = textvalues2_.RecordFK
               and textvalues2_.FieldFK = '0d323c22-0ec2-11e0-a148-0018f3dde540'
               and (textvalues2_.Value like 'O%' escape '~')
          )

    Read the article

  • Time complexity O() of isPalindrome()

    - by Aran
    I have this method, isPalindrome(), and I am trying to find its time complexity, and also to rewrite the code more efficiently:

        boolean isPalindrome(String s) {
            boolean bP = true;
            for (int i = 0; i < s.length(); i++) {
                if (s.charAt(i) != s.charAt(s.length() - i - 1)) {
                    bP = false;
                }
            }
            return bP;
        }

    Now I know this code checks each character of the string against the character at the mirrored position from the end, and leaves bP unchanged if they match. And I think the operations are s.length(), s.charAt(i) and s.charAt(s.length()-i-1), making the time complexity O(N + 3), I think? Is this correct? If not, what is it, and how is that figured out? Also, to make this more efficient, would it be good to store the characters in temporary strings?
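
    For the "more efficient" half of the question, a sketch of the usual rewrite: compare only up to the middle of the string and return at the first mismatch, which halves the comparisons and allows an early exit. The growth rate is still linear, i.e. O(N) in the worst case - constant terms like the "+ 3" are dropped in big-O notation:

        static boolean isPalindrome(String s) {
            int n = s.length();                // read the length once
            for (int i = 0; i < n / 2; i++) {  // mirrored pairs meet in the middle
                if (s.charAt(i) != s.charAt(n - i - 1)) {
                    return false;              // bail out at the first mismatch
                }
            }
            return true;
        }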

    Read the article

  • How to find the worst performing queries in MS SQL Server 2008?

    - by Thomas Bratt
    How do I find the worst performing queries in MS SQL Server 2008? I found the following example, but it does not seem to work:

        SELECT TOP 5 obj.name, max_logical_reads, max_elapsed_time
        FROM sys.dm_exec_query_stats a
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) hnd
        INNER JOIN sys.sysobjects obj ON hnd.objectid = obj.id
        ORDER BY max_logical_reads DESC

    Taken from: http://www.sqlservercurry.com/2010/03/top-5-costly-stored-procedures-in-sql.html

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am using dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion for a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints? [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be significant. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read. [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node. [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point. (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java )
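
    Since [Edit 1] says parser start-up dominates, one approach is to pay that cost once and reuse the parser across documents. A minimal sketch with the standard JAXP DocumentBuilder (it produces a W3C DOM rather than a dom4j Document - whether that suits the downstream decision tools is an assumption; note a DocumentBuilder is not thread-safe, so it needs one instance per worker thread):

        import java.io.InputStream;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        class SmallDocParser {
            private final DocumentBuilder builder; // created once, reused per document

            SmallDocParser() throws Exception {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                factory.setNamespaceAware(false); // the documents have no namespaces
                this.builder = factory.newDocumentBuilder();
            }

            Document parse(InputStream in) throws Exception {
                builder.reset(); // much cheaper than creating a new parser each time
                return builder.parse(in);
            }
        }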

    Read the article

  • Python: speeding up retrieving data from an extremely large string

    - by Burninghelix123
    I have a list I converted to a very, very long string, as I am trying to edit it; as you can gather, it's called tempString. It works; as of now it just takes way too long to operate, probably because it is several different regex subs. They are as follows:

        tempString = ','.join(str(n) for n in coords)
        tempString = re.sub(',{2,6}', '_', tempString)
        tempString = re.sub("[^0-9\-\.\_]", ",", tempString)
        tempString = re.sub(',+', ',', tempString)
        clean1 = re.findall(('[-+]?[0-9]*\.?[0-9]+,[-+]?[0-9]*\.?[0-9]+,'
                             '[-+]?[0-9]*\.?[0-9]+'), tempString)
        tempString = '_'.join(str(n) for n in clean1)
        tempString = re.sub(',', ' ', tempString)

    Basically it's a long string containing commas and about 1-5 million sets of 4 floats/ints (a mixture of both is possible):

        -5.65500020981,6.88999986649,-0.454999923706,1,,,-5.65500020981,6.95499992371,-0.454999923706,1,,,

    The 4th number in each set I don't need/want; I'm essentially just trying to split the string into a list with 3 floats in each entry, separated by spaces. The above code works flawlessly, but as you can imagine it is quite time-consuming on large strings. I have done a lot of research on here for a solution, but the answers all seem geared towards words, i.e. swapping out one word for another. EDIT: OK, so this is the solution I'm currently using:

        def getValues(s):
            output = []
            while s:
                # take the three values you want, discard the unwanted fourth
                # value and the two empty fields, and keep the remainder of
                # the string in s
                v1, v2, v3, _, _, _, s = s.split(',', 6)
                output.append("%s %s %s" % (v1.strip(), v2.strip(), v3.strip()))
            return output

        coords = getValues(tempString)

    Anyone have any advice to speed this up even further? After running some tests, it still takes much longer than I'm hoping for. I've been glancing at NumPy, but I honestly have absolutely no idea how to do the above with it. I understand that after the above has been done and the values are cleaned up, I could use them more efficiently with NumPy, but I'm not sure how NumPy could apply to the above. The above takes around 20 minutes to clean through 50k sets; I can't imagine how long it would be on my full string of 1 million sets. It's just surprising that the program that originally exported the data took only around 30 secs for the 1 million sets.

    Read the article

  • Best practice for handling memory leaks in large Java projects?

    - by knorv
    In almost all of the larger Java projects I've been involved with, I've noticed that the quality of service of the application degrades with the uptime of the container. This is most probably due to memory leaks in the code. The correct way to solve this problem is obviously to trace back to the root cause of the problem and fix the leaks in the code. The quick and dirty way of solving the problem is simply restarting Tomcat (or whichever servlet container you're using). These are my three questions:

    1. Assume that you choose to solve the problem by tracing the root cause of the problem (the memory leaks). How would you collect data to zoom in on the problem?
    2. Assume that you choose the quick and dirty way of speeding things up by simply restarting the container. How would you collect data to choose the optimal restart cycle?
    3. Have you been able to deploy and run projects over an extended period of time without ever restarting the servlet container to regain snappiness? Or is an occasional servlet container restart something that one simply has to accept?
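
    On question 1, one common way to collect data is to capture heap dumps at intervals and diff them in a memory analyzer (Eclipse MAT, jhat, etc.) to see which object graphs keep growing. A sketch of triggering a dump programmatically on a HotSpot JVM - for example from a maintenance page - using the com.sun.management diagnostic bean (the HeapDumper class name and file path are illustrative):

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import com.sun.management.HotSpotDiagnosticMXBean;

        class HeapDumper {
            /** Writes an .hprof snapshot; compare successive snapshots to spot leaks. */
            static void dump(String path) throws Exception {
                MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                        server, "com.sun.management:type=HotSpotDiagnostic",
                        HotSpotDiagnosticMXBean.class);
                bean.dumpHeap(path, true); // true = dump only live (reachable) objects
            }
        }

        // usage: HeapDumper.dump("/tmp/app-" + System.currentTimeMillis() + ".hprof");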

    Read the article

  • How to delete duplicate rows (and aggregate) faster in a file using Java (no DB)

    - by S. Singh
    I have a 2GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of its 5 columns match. Right now, I am deduping by first loading each column in a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such deduping? This deduping is throw-away code, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudo code (roughly):

        Iterate over the rows, i = current_row_no.
            Iterate over rows i+1 to last_row
                if (col1 matches      // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches) {
                    col5List.set(i, get col5);   // aggregate
                }

    Duplicate example: A and B are duplicates, given A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1), and the output would be:

        A=(1,1,1,1,1+2)
        C=(2,1,1,1,1)

    [notice that B has been kicked out]
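
    The nested loop above is O(n^2) over the rows, which is where the 20 hours go. A quick/dirty alternative in that spirit is a single O(n) pass with a hash map keyed on the first four columns, assuming the distinct keys fit in the heap and that column 5 aggregates by summing (both assumptions come from the example; if the keys don't fit in memory, an external sort on the first four columns achieves the same effect):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.PrintWriter;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class Dedupe {
            public static void main(String[] args) throws Exception {
                // LinkedHashMap keeps the first-seen order of the rows.
                Map<String, Long> rows = new LinkedHashMap<String, Long>();
                BufferedReader in = new BufferedReader(new FileReader(args[0]));
                String line;
                while ((line = in.readLine()) != null) {
                    int cut = line.lastIndexOf('\t');      // split off column 5
                    String key = line.substring(0, cut);   // columns 1-4 = dedupe key
                    long col5 = Long.parseLong(line.substring(cut + 1).trim());
                    Long seen = rows.get(key);
                    rows.put(key, seen == null ? col5 : seen + col5); // aggregate
                }
                in.close();
                PrintWriter out = new PrintWriter(args[1]);
                for (Map.Entry<String, Long> e : rows.entrySet()) {
                    out.println(e.getKey() + '\t' + e.getValue());
                }
                out.close();
            }
        }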

    Read the article
