Search Results

Search found 1226 results on 50 pages for 'improvement'.

Page 30 of 50

  • Storing script files outside web root

    - by memilanuk
    I've seen recommendations to store some or all PHP include files somewhere other than the web document root (username/public_html in my case), for the specific reason of protecting PHP files with sensitive information (like database connection and login info) in the event that the web server hiccups, stops protecting PHP files, and they become 'visible' to outsiders who know where to look. It seems somewhat paranoid to me, but I'm guessing people have been burned badly by this before, so I'm willing to go along. The suggestion usually takes the form of putting the include files in something like '../include_files/', so they're not in the document root and not directly accessible to outsiders through the web server. My question is this: is there a significant difference in security between that and just putting your 'include_files' directory under the document root and sticking a .htaccess file in there (with the appropriate entries)? Would putting a .htaccess file in '../include_files/' make any significant improvement there? TIA, Monte

    Read the article

  • Perl, efficient parsing of csv file

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl, and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). And I don't want to get into parallel processing, although that might work effectively.
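
    For illustration, the single-pass idea is to stream the file record by record instead of splitting the whole file into lines and then splitting each line again. Here is a minimal sketch of that shape in Python (the same structure carries over to a Perl while(<$fh>)/split loop); handle() is a hypothetical stand-in for the real per-record work:

        import csv

        def process_rows(path):
            # Single pass: each record is read and tokenized as the file
            # streams by, so the data is never traversed a second time.
            with open(path, newline="") as fh:
                for row in csv.reader(fh):   # row is a list of field strings
                    handle(row)

        def handle(row):
            pass   # hypothetical stand-in for the real conversion work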

    Read the article

  • How do I get out of the habit of procedural programming and into object-oriented programming?

    - by Shadi Almosri
    Hiya all, I'm hoping to get some tips to help me break out of what I consider, after all these years, a bad habit of procedural programming. Every time I attempt to do a project in OOP, I eventually revert to procedural. I guess I'm not completely convinced by OOP (even though I think I've heard everything good about it!). So any good practical examples of common programming tasks would help: user authentication/management, data parsing, and CMS/blogging/e-commerce are the kinds of things I do often, yet I haven't been able to get my head around how to do them in OOP instead of procedurally, especially as the systems I build tend to work, and work well. One thing I can see as a downfall in my development is that I do reuse my code often, and it often needs rewrites and improvement, but I sometimes consider this a natural evolution of my software development. Yet I want to change! To my fellow programmers: help :) Any tips on how I can break out of this nasty habit?
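
    As one small illustration of the shift being asked about: in a procedural style, user authentication tends to be free functions passing a connection and raw rows around; in an object-oriented style, the data and the policy live behind one interface. A toy sketch in Python, where an in-memory dict stands in for the user table and the hashing is deliberately simplistic:

        import hashlib

        class UserStore:
            """State (the user table) and behaviour (hashing, checking) in one place."""

            def __init__(self):
                self._users = {}  # username -> password hash

            def add(self, username, password):
                self._users[username] = self._hash(password)

            def authenticate(self, username, password):
                stored = self._users.get(username)
                return stored is not None and stored == self._hash(password)

            @staticmethod
            def _hash(password):
                # Illustrative only: a real system would use a salted, slow hash.
                return hashlib.sha256(password.encode()).hexdigest()

        store = UserStore()
        store.add("alice", "s3cret")
        print(store.authenticate("alice", "s3cret"))  # True
        print(store.authenticate("alice", "wrong"))   # False

    The point is not the hashing; it is that callers now depend on authenticate() rather than on the shape of the user table, so the storage can change without rewriting every call site.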

    Read the article

  • MySQL - are FKs useful / viable in a web app?

    - by yoda
    Hi all, I've encountered this discussion related to FKs and web applications. Basically, some people say that FKs in web applications don't represent a real improvement and can even make the application slower in some cases. What do you guys think? What's your experience? -- A quote from Heikki Tuuri, creator of the InnoDB engine, founder and CEO of Innobase:

        InnoDB checks foreign keys as soon as a row is updated; no batching is performed, nor are checks delayed till transaction commit.
        Foreign keys are often a serious performance overhead, but help maintain data consistency.
        Foreign keys increase the amount of row-level locking done and can make it spread to many tables besides the ones directly updated.

    Read the article

  • Colors in Instruments when hunting down memory leaks

    - by Structurer
    Hi, I'm currently hunting down a memory leak in my iPhone app. I'm using Instruments to track down the code that is causing the leak (becoming more and more a friend of Instruments!). Now Instruments shows two lines: one in dark blue (row 146) and one in a lighter blue (150). From some trial and error I gather that they are connected somehow, but I'm not good enough at Objective-C and memory management yet to really understand how. Does anyone know why different colors are used, and what my problem could be? I have tried to release numberForArray, but then the app crashes when showing the last line in a picker view. All ideas appreciated! (Posting this I also realize that line 139 is redundant! See there, already an improvement ;-)

    Read the article

  • Avoiding Nested Queries

    - by Midhat
    How important is it to avoid nested queries? I have always been taught to avoid them like the plague, but they are the most natural thing to me. When I am designing a query, the first thing I write is a nested query; then I convert it to joins, which sometimes takes a lot of time to get right and rarely gives a big performance improvement (though sometimes it does). So are they really so bad? Is there a way to use nested queries without temp tables and filesort?
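
    For the common uncorrelated case, the rewrite in question is mechanical: an IN (SELECT ...) becomes a JOIN with the filter moved onto the joined table. A minimal runnable sketch using Python's sqlite3 with hypothetical tables, showing that both forms return the same rows:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (id INTEGER PRIMARY KEY, country TEXT);
            CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
            INSERT INTO customers VALUES (1, 'NZ'), (2, 'US');
            INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1);
        """)

        nested = """SELECT id FROM orders
                    WHERE customer_id IN (SELECT id FROM customers WHERE country = 'NZ')"""

        joined = """SELECT o.id FROM orders o
                    JOIN customers c ON c.id = o.customer_id
                    WHERE c.country = 'NZ'"""

        print(conn.execute(nested).fetchall())  # expected: [(10,), (12,)]
        print(conn.execute(joined).fetchall())  # expected: the same rows

    Whether the join form is actually faster depends on the engine and the indexes; many optimizers already rewrite uncorrelated subqueries into joins internally.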

    Read the article

  • Python threading and performance?

    - by kumar
    I had to do a heavy I/O-bound operation, i.e. parsing large files and converting them from one format to another. Initially I did it serially, i.e. parsing one after another! Performance was very poor (it used to take 90+ seconds). So I decided to use threading to improve the performance. I created one thread for each file (4 threads):

        for file in file_list:
            t = threading.Thread(target=self.convertfile, args=(file,))
            t.start()
            ts.append(t)
        for t in ts:
            t.join()

    But to my astonishment, there is no performance improvement whatsoever. It still takes around 90+ seconds to complete the task. As this is an I/O-bound operation, I had expected threading to improve the performance. What am I doing wrong?
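
    Two things stand out here. First, threading.Thread expects args to be a tuple (args=(file,)); with a bare string, the target is called with one positional argument per character. Second, and more fundamentally: in CPython the global interpreter lock (GIL) lets only one thread execute Python bytecode at a time, and parsing/format conversion is CPU-bound work despite the I/O label, so threads cannot speed it up, but separate processes can. A minimal sketch with multiprocessing; convertfile and the file names are hypothetical placeholders:

        from multiprocessing import Pool

        def convertfile(path):
            pass  # stand-in for the real parse-and-convert work

        if __name__ == "__main__":
            file_list = ["a.dat", "b.dat", "c.dat", "d.dat"]
            with Pool(processes=4) as pool:
                # One task per file, run in parallel across worker processes.
                pool.map(convertfile, file_list)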

    Read the article

  • Visual Studio add-in for performance benchmarking

    - by chiccodoro
    I'd like to measure the performance of some code blocks in my C# WinForms application. In particular, I want to measure performance regression/improvement after some restructuring of the code. So far I've seen System.Diagnostics.Stopwatch. However, I want to avoid writing measuring code into my classes; I would rather keep measurement separate from the actual code. For debugging, you can set breakpoints on several lines and jump from one to the next with "Continue Execution"; I imagine something similar for measuring: mark two lines of code and have Visual Studio display the time elapsing from one to the next. Is there any feature or add-in in that direction?
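
    Not a Visual Studio answer, but as an illustration of keeping the measurement out of the measured class: the timing can live entirely in a wrapper applied from outside the code under test. A minimal sketch of the idea in Python (restructured_block is a hypothetical stand-in); on the .NET side, Visual Studio's built-in profiler offers a similar outside-the-code view:

        import functools
        import time

        def timed(fn):
            """Timing lives in the wrapper; the wrapped code is untouched."""
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    print(f"{fn.__name__}: {time.perf_counter() - start:.6f}s")
            return wrapper

        @timed
        def restructured_block():
            sum(i * i for i in range(100_000))  # stand-in for the real work

        restructured_block()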

    Read the article

  • Code First / Database First / Model First: are they just personal preferences?

    - by Antoine M
    Merely knowing the internal workings of each approach, and after reading a lot of posts, I still can't figure out whether each of them is just a matter of personal preference for the developer, or whether they serve different axes of productivity. Should one of them be applied for specific productivity needs, or is MS just being kind, offering three different flavours? Should we consider Code First a sort of improvement over Database First or Model First, and think of it as a future standard worth a particular intellectual investment ...? Is there a link showing a sort of synthetic table with dispassionate pros and cons for each approach, a little bit like for Web Forms and MVC? Sorry for those who will find this question redundant. I know it is.

    Read the article

  • Is there a faster TList implementation?

    - by dmauric.mp
    My application makes heavy use of TList, so I was wondering if there are any alternative implementations that are faster or optimized for particular use cases. I know of RtlVCLOptimize.pas 2.77, which has optimized implementations of several TList methods, but I'd like to know if there is anything else out there. I also don't require it to be a TList descendant; I just need the TList functionality, regardless of how it's implemented. It's entirely possible, given the rather basic functionality TList provides, that there is not much room for improvement, but I would still like to verify that, hence this question.

    Read the article

  • Better way to write this Java code?

    - by Macha
        public void handleParsedCommand(String[] commandArr) {
            if(commandArr[0].equalsIgnoreCase("message")) {
                int target = Integer.parseInt(commandArr[1]);
                String message = commandArr[2];
                MachatServer.sendMessage(target, this.conId, message);
            } else if(commandArr[0].equalsIgnoreCase("quit")) {
                // Tell the server to disconnect us.
                MachatServer.disconnect(conId);
            } else if(commandArr[0].equalsIgnoreCase("confirmconnect")) {
                // Blah blah and so on for another 10 types of command
            } else {
                try {
                    out.write("Unknown: " + commandArr[0] + "\n");
                } catch (IOException e) {
                    System.out.println("Failed output warning of unknown command.");
                }
            }
        }

    This is the part of my server code that handles the types of messages. Each message contains the type in commandArr[0] and the parameters in the rest of commandArr[]. This code works, but it seems very inelegant. Is there a better way to handle it? (To the best of my knowledge, String values can't be used in switch statements, and even then a switch statement would only be a small improvement.)
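
    The usual answer to this shape of code is a dispatch table: map each command name to a handler, so adding a command means one table entry plus one method instead of another else-if branch. In Java this would typically be a Map<String, Handler> of small handler objects; here is a minimal sketch of the same idea in Python, with print calls standing in for the MachatServer calls:

        class CommandHandler:
            def __init__(self):
                # The table replaces the if/else-if chain.
                self.commands = {
                    "message": self.do_message,
                    "quit": self.do_quit,
                    # ...one entry per command type
                }

            def handle(self, command_arr):
                handler = self.commands.get(command_arr[0].lower())
                if handler is None:
                    print("Unknown:", command_arr[0])
                else:
                    handler(command_arr[1:])

            def do_message(self, args):
                target, message = int(args[0]), args[1]
                print(f"send {message!r} to {target}")  # stand-in for sendMessage

            def do_quit(self, args):
                print("disconnect")  # stand-in for disconnect

        CommandHandler().handle(["message", "7", "hello"])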

    Read the article

  • Document Management System - Where to Store Files?

    - by Diego AC
    Hey, stack! I'm in charge of building an ASP.NET MVC document management system. It has to be able to do basic document-management tasks like adding, editing and searching entries, and also perform versioning. Anyway, I'm targeting PDF, Office and many image formats as the file attached to each document entry in the database. My question is: what design guidelines do the pros follow when building the storage mechanism? Do they store the document files in the file system? In the database? How is file uploading handled? I used to upload the files to a temporary location while the user was editing the data, and move them to permanent storage when the user confirmed the entry creation. Is this good? Any suggestions for improvement?
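
    A common pattern matching this flow: keep the bytes on the file system under opaque names, keep the metadata and the storage key in the database, and stage uploads in a temp area until the user confirms. A minimal sketch with hypothetical paths and names:

        import os
        import shutil
        import tempfile
        import uuid

        STORE = "/var/docstore"  # hypothetical permanent store, outside the web root

        def stage_upload(stream):
            """Write an incoming upload to a temp file; return its path."""
            fd, path = tempfile.mkstemp(prefix="upload-")
            with os.fdopen(fd, "wb") as tmp:
                shutil.copyfileobj(stream, tmp)
            return path

        def commit_upload(tmp_path, original_name):
            """On confirmation, move the file into the store under an opaque
            name and return the key to record in the document's database row."""
            os.makedirs(STORE, exist_ok=True)
            key = f"{uuid.uuid4()}-{os.path.basename(original_name)}"
            shutil.move(tmp_path, os.path.join(STORE, key))
            return key

    Storing the blobs in the database instead (or via FILESTREAM on SQL Server 2008+) trades simpler backup and transactional consistency for a heavier database; both are defensible choices.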

    Read the article

  • Is PHP itself transforming into a framework?

    - by Elzo Valugi
    At the beginning, PHP was a scripting language. But after the introduction and improvement of OOP, I see more and more objects added to the core. It started with SPL, which grew a lot; now we have the DOMDocument family and the DateTime family, which should be part of PECL, PEAR or Zend Framework, or implemented by each one of us. Shouldn't PHP be only built-in functions, with all these objects handed off to something else? Example: the DateTime class is part of the core, and I see it as very similar to Zend_Date.

    Read the article

  • Can the order of cases in a switch statement affect performance?

    - by Bipul
    Let's say I have a switch statement as below:

        switch(alphabet) {
            case "f":
                // do something
                break;
            case "c":
                // do something
                break;
            case "a":
                // do something
                break;
            case "e":
                // do something
                break;
        }

    Now suppose I know that the frequency of seeing the alphabet e is highest, followed by a, c and f respectively. So I restructured the case order and made it as follows:

        switch(alphabet) {
            case "e":
                // do something
                break;
            case "a":
                // do something
                break;
            case "c":
                // do something
                break;
            case "f":
                // do something
                break;
        }

    Will the second switch statement perform better (i.e., be faster) than the first? If yes, and if my program needs to call this switch statement many times, will that be a substantial improvement? And if not, how else can I use my frequency knowledge to improve performance? Thanks

    Read the article

  • Rejigging a floating-point equation...

    - by Jamie
    I'd like to know if there is a way to improve the accuracy of calculating a slope. (This came up a few months back here.) It seems that changing:

        float get_slope(float dXa, float dXb, float dYa, float dYb) {
            return (dXa - dXb)/(dYa - dYb);
        }

    to:

        float get_slope(float dXa, float dXb, float dYa, float dYb) {
            return dXa/(dYa - dYb) - dXb/(dYa - dYb);
        }

    might be an improvement. Suggestions? Edit: It's precision I'm after, not efficiency.
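
    Rather than argue in the abstract about which form rounds better, the two can be compared empirically against exact rational arithmetic. A sketch in Python (its floats are doubles rather than C floats, but the relative comparison still illustrates the point; Fraction represents every float exactly, so the reference slope has no rounding error of its own). Note that when dYa and dYb are close, the subtraction cancels in both forms, so neither rearrangement escapes that problem:

        from fractions import Fraction
        import random

        def slope_a(xa, xb, ya, yb):
            return (xa - xb) / (ya - yb)            # original form

        def slope_b(xa, xb, ya, yb):
            return xa / (ya - yb) - xb / (ya - yb)  # rearranged form

        def err(f, xa, xb, ya, yb):
            # Exact slope in rational arithmetic, compared to the float result.
            exact = (Fraction(xa) - Fraction(xb)) / (Fraction(ya) - Fraction(yb))
            return abs(Fraction(f(xa, xb, ya, yb)) - exact)

        random.seed(0)
        worse = 0
        for _ in range(10_000):
            p = [random.uniform(-1e6, 1e6) for _ in range(4)]
            if err(slope_b, *p) > err(slope_a, *p):
                worse += 1
        print(f"rearranged form was less accurate in {worse} of 10000 trials")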

    Read the article

  • Help with this reg. exp. in PHP

    - by Jonathan
    Hi, I don't know much about regular expressions. I asked here for one that gets either anything up to the first parenthesis/colon, or the first word inside the first parenthesis. This was the answer:

        preg_match('/(?:^[^(:]+|(?<=^\\()[^\\s)]+)/', $var, $match);

    I need an improvement: I need to get either anything up to the first parenthesis/colon/quotation mark, or the first word inside the first parenthesis. So for something like:

        $var = 'story "The Town in Hell"s Backyard'; // I get this: $match = 'story';
        $var = "screenplay (based on)";              // I get this: $match = 'screenplay';
        $var = "(play)";                             // I get this: $match = 'play';
        $var = "original screen";                    // I get this: $match = 'original screen';

    Thanks!
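
    The change needed looks like a single character: add the double quote to the first character class so it also stops at '"', i.e. something like preg_match('/(?:^[^(:"]+|(?<=^\()[^\s)]+)/', $var, $match), with whitespace trimmed from the result. A quick way to check that behaviour against the four examples, sketched here in Python with an equivalent pattern (plain alternation instead of the lookbehind):

        import re

        def first_token(s):
            # branch 1: first word inside a leading parenthesis
            # branch 2: everything up to the first '(' , ':' or '"'
            m = re.match(r'\(([^\s)]+)|([^(:"]+)', s)
            return (m.group(1) or m.group(2)).strip() if m else None

        tests = [
            'story "The Town in Hell"s Backyard',  # -> 'story'
            'screenplay (based on)',               # -> 'screenplay'
            '(play)',                              # -> 'play'
            'original screen',                     # -> 'original screen'
        ]
        for t in tests:
            print(repr(first_token(t)))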

    Read the article

  • django url tag performance

    - by zxygentoo
    I was trying to integrate django-voting into my project following the RedditStyleVoting instructions. In my urls.py, I did something like this:

        url(r'^sections/(?P<object_id>\d+)/(?P<direction>up|down|clear)vote/?$',
            vote_on_object, dict(
                model=Section,
                template_object_name='section',
                template_name='script/section_confirm_vote.html',
                allow_xmlhttprequest=True
            ), name="section_vote"),

    Then, in my template:

        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {{ score.score|default:0 }}

    It takes over 1.3s to load the page, but by hard-coding it like this:

        {% vote_by_user user on section as vote %}
        {% score_for_object section as score %}
        {{ score.score|default:0 }}

    I got 50ms. Just by avoiding the url tag resolution I got a 20+ times performance improvement. Is there something I did wrong? If not, what's the best practice here: should we do things the right way or the fast way?

    Read the article

  • Parallel Haskell in order to find the divisors of a huge number

    - by Dragno
    I have written the following program using Parallel Haskell to find the divisors of 1 billion:

        import Control.Parallel

        parfindDivisors :: Integer -> [Integer]
        parfindDivisors n = f1 `par` (f2 `par` (f1 ++ f2))
            where f1 = filter g [1..(quot n 4)]
                  f2 = filter g [(quot n 4)+1..(quot n 2)]
                  g z = n `rem` z == 0

        main = print (parfindDivisors 1000000000)

    I've compiled the program with ghc -rtsopts -threaded findDivisors.hs and I run it with findDivisors.exe +RTS -s -N2 -RTS. I found a 50% speedup compared to the simple version, which is this:

        findDivisors :: Integer -> [Integer]
        findDivisors n = filter g [1..(quot n 2)]
            where g z = n `rem` z == 0

    My processor is a dual-core Intel Core 2 Duo. I was wondering if there can be any improvement in the above code, because the statistics the program prints say:

        Parallel GC work balance: 1.01 (16940708 / 16772868, ideal 2)
        SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

    What are converted, overflowed, dud, GC'd and fizzled, and how can they help to improve the time?

    Read the article

  • Methods for making R plots look like Excel plots?

    - by brianjd
    I've been poking around with R graphical parameters, trying to make my plots look a little more professional (e.g., las=1 and bty="n" usually help), but I'm not quite there. I started playing with tikzDevice: a huge improvement! It's amazing how much better things look when the font sizes and styles in the figure match those of the surrounding document. Still not quite there, though. What I'm ultimately looking for are the professional gradient shading, rounded corners, and shadow effects found in MS Excel plots. I know they're probably considered chart junk, but I like them; they're just nice-looking. Q: How can I get these effects into my R plots? Do people usually just export to Inkscape and doodle over there? It would be nice if there were a literate-programming approach. Is there an R package that handles these effects outright?

    Read the article

  • NP-complete problem in Prolog

    - by Ashley
    I saw this ECLiPSe solution to the problem mentioned in this XKCD comic. I tried to convert this to pure Prolog:

        go :-
            Total = 1505,
            Prices = [215, 275, 335, 355, 420, 580],
            length(Prices, N),
            length(Amounts, N),
            totalCost(Prices, Amounts, 0, Total),
            writeln(Total).

        totalCost([], [], TotalSoFar, TotalSoFar).
        totalCost([P|Prices], [A|Amounts], TotalSoFar, EndTotal) :-
            between(0, 10, A),
            Cost is P*A,
            TotalSoFar1 is TotalSoFar + Cost,
            totalCost(Prices, Amounts, TotalSoFar1, EndTotal).

    I don't think that this is the best / most declarative solution that one can come up with. Does anyone have any suggestions for improvement? Thanks in advance!

    Read the article

  • Decryption with the public key on iPhone

    - by vignesh
    Hi all, I have a public key and an encrypted string. I can encrypt with the public key successfully, but when I try to decrypt using the public key it fails; that is, when I pass the public key to SecKeyDecrypt, it fails. I have Googled and found out that by default kSecAttrCanDecrypt is false for public keys, so when I import the public key I have added this particular line:

        [publicKeyAttr setObject:(id)kCFBooleanTrue forKey:(id)kSecAttrCanDecrypt];

    But there is no improvement; it still fails. Please, somebody help.

    Read the article

  • In what circumstances can large pages produce a speedup?

    - by timday
    Modern x86 CPUs can support larger page sizes than the legacy 4K (i.e. 2MB or 4MB), and there are OS facilities (Linux, Windows) for accessing this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in some performance improvement. Anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of TLB records needing copying is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.

    Read the article

  • Is putting N in front of strings in scripts considered a "best practice"?

    - by jcollum
    Let's say I have a table that has a varchar field. If I do an insert like this:

        INSERT MyTable SELECT N'the string goes here'

    is there any fundamental difference between that and:

        INSERT MyTable SELECT 'the string goes here'

    My understanding was that you'd only have a problem if the string contained a Unicode character and the target column wasn't Unicode. Other than that, SQL deals with it just fine and converts the string with the N'' prefix into a varchar field (basically ignoring the N). I was under the impression that N in front of strings was a good practice, but I'm unable to find any discussion of it that I'd consider definitive. Title may need improvement; feel free.

    Read the article

  • Tips for measuring the parallelism speed-up in multi-core development.

    - by fnCzar
    I have read many of the good questions and answers around multi-core programming how-tos etc. I am familiar with concurrency, IPC, MPI and so on, but what I need is advice on how to measure speed-up, which will help in making a business case for spending the time to write such code. Please don't answer with "well, run it with single-core code, then multi-core code, and figure out the difference": this is neither a scientific nor a reliable way to measure performance improvement. If you know of tools that will do some of the heavy lifting, please mention them. Answers pertaining to methodology will be more fitting, but listing tools is fine as well.
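
    Whatever tool does the measuring, the quantities usually reported for the business case are speedup (serial time divided by parallel time) and efficiency (speedup divided by the number of cores), taken from repeated, warmed-up wall-clock runs rather than a single pair of timings. A minimal sketch of that methodology in Python; serial_fn and parallel_fn are hypothetical stand-ins for the two implementations:

        import statistics
        import time

        def measure(fn, repeats=5):
            """Median wall-clock time over several runs; the first run is
            discarded as warm-up (caches, JIT, lazy initialisation)."""
            times = []
            for i in range(repeats + 1):
                start = time.perf_counter()
                fn()
                elapsed = time.perf_counter() - start
                if i > 0:
                    times.append(elapsed)
            return statistics.median(times)

        def report(serial_fn, parallel_fn, workers):
            t1, tn = measure(serial_fn), measure(parallel_fn)
            speedup = t1 / tn
            print(f"serial {t1:.3f}s, parallel {tn:.3f}s, "
                  f"speedup {speedup:.2f}x, efficiency {speedup / workers:.0%} "
                  f"on {workers} workers")

    Reporting efficiency alongside speedup is what usually carries the business case: it shows how much of the extra hardware the parallel version actually converts into throughput.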

    Read the article
