Search Results

Search found 2376 results on 96 pages for 'critical'.

Page 55 of 96

  • Method may fail to close stream on exception

    - by 01
    I get this critical error from FindBugs: "The method creates an IO stream object, does not assign it to any fields, pass it to other methods, or return it, and does not appear to close it on all possible exception paths out of the method. This may result in a file descriptor leak. It is generally a good idea to use a finally block to ensure that streams are closed."

        try {
            ...
            stdError = new BufferedReader(new InputStreamReader(p.getErrorStream()));
            ...
        } catch (IOException e) {
            throw new RuntimeException(e);
        } finally {
            try {
                if (stdInput != null) {
                    stdInput.close();
                }
                if (stdError != null) {
                    stdError.close();
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }

    Do I also need to close the InputStreamReader, or p.getErrorStream() (it returns an InputStream)?
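
    A minimal sketch of the usual remedy, assuming Java 7 or later: a try-with-resources block closes the reader on every exit path, and closing the outermost BufferedReader also closes the InputStreamReader and InputStream it wraps. The method name and surrounding class are illustrative, not from the question.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class ProcessOutput {
            // Reads a process's stderr. try-with-resources guarantees close()
            // on all paths, including exceptions, so no finally block is needed.
            static String readStdErr(Process p) throws IOException {
                StringBuilder sb = new StringBuilder();
                try (BufferedReader stdError =
                         new BufferedReader(new InputStreamReader(p.getErrorStream()))) {
                    String line;
                    while ((line = stdError.readLine()) != null) {
                        sb.append(line).append('\n');
                    }
                }
                return sb.toString();
            }
        }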

    Read the article

  • .NET or Windows Synchronization Primitives Performance Specifications

    - by ovanes
    Hello *, I am currently writing a scientific article, where I need to be very exact with citations. Can someone point me to MSDN, an MSDN article, a published article, or a book where I can find a performance comparison of Windows or .NET synchronization primitives? I know that these are in descending order of performance: Interlocked API, critical section, .NET lock statement, Monitor, Mutex, EventWaitHandle, Semaphore. Many thanks, Ovanes. P.S. I found a great book: Concurrent Programming on Windows by Joe Duffy. It is written by one of the head concurrency developers for the .NET Framework and is simply brilliant, with lots of explanations of how things work or were implemented.

    Read the article

  • Best practices for Subversion and Visual Studio projects

    - by Alex Marshall
    I've recently started working on various C# projects in Visual Studio as part of a plan for a large-scale system that will replace our current system, which is cobbled together from various programs and scripts written in C and Perl. The projects I'm now working on have reached critical mass for being committed to Subversion. I was wondering what should and should not be committed to the repository for Visual Studio projects. I know Visual Studio generates various files that are just build artifacts and don't really need to be committed, and I was wondering if anybody had advice for properly using SVN with Visual Studio. At the moment, I'm using an SVN 1.6 server with Visual Studio 2010 beta. Any advice or opinions are welcome.
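
    As one hedged starting point (the exact artifact names vary by Visual Studio version and tooling, so treat this list as an assumption to adapt): keep build output and per-user state out of the repository with svn:ignore.

        # Ignore common Visual Studio build artifacts and per-user files.
        # Run from the solution directory; -R applies the property recursively.
        svn propset -R svn:ignore "bin
        obj
        *.user
        *.suo" .
        svn commit -m "Ignore build artifacts and per-user settings"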

    Read the article

  • What is so bad about using SQL INNER JOIN

    - by Stephen B. Burris Jr.
    Every time a database diagram gets looked at, one area people are critical of is inner joins. They look at them hard and ask questions to see if an inner join really needs to be there. Simple library example: a many-to-many relationship is normally defined in SQL with three tables: Book, Category, BookCategory. Here, Category is a table that contains two columns: ID and CategoryName. I have gotten questions about the Category table: is it needed? Can it be treated as just a lookup table, with the BookCategory table storing the CategoryName instead of the CategoryID, to avoid having to do an additional INNER JOIN? (For this question, we are going to ignore changing or deleting any CategoryNames.) The question is, what is so bad about inner joins? At what point does doing them become a negative thing (general guidelines like number of transactions, number of records, number of joins in a statement, etc.)?
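
    For concreteness, a sketch of the schema under discussion and the join being questioned (the table and column names come from the question; the types are assumptions):

        -- Many-to-many between books and categories via a junction table.
        CREATE TABLE Book (
            ID    INT PRIMARY KEY,
            Title VARCHAR(200)
        );
        CREATE TABLE Category (
            ID           INT PRIMARY KEY,
            CategoryName VARCHAR(100)
        );
        CREATE TABLE BookCategory (
            BookID     INT NOT NULL REFERENCES Book(ID),
            CategoryID INT NOT NULL REFERENCES Category(ID),
            PRIMARY KEY (BookID, CategoryID)
        );

        -- The query the reviewers push back on: two INNER JOINs to list
        -- each book with its category names.
        SELECT b.Title, c.CategoryName
        FROM Book b
        INNER JOIN BookCategory bc ON bc.BookID = b.ID
        INNER JOIN Category c ON c.ID = bc.CategoryID;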

    Read the article

  • Student's t distribution in JavaScript

    - by Sai Emrys
    Google Spreadsheets currently does not support the standard function TDIST, i.e. the Student's t-distribution. This function is critical for calculating p-values. This seems related to the fact that no integral-using functions (AFAICT) are implemented either. However, Google Docs allows people to add and publish their own scripts in JavaScript. So ideally we should have something like: function tdist(t_value, degrees_of_freedom, two_tailed /* defaults true */) {...} Does anyone know of an existing implementation of this (my google-fu has not turned up one, but may be weaker than yours), or have a good idea for how to do it? I'd like to publish this together with some other useful functions that are currently calculable but a bit of a pain (like Student's t-test itself). Thanks!
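
    In case it helps as a starting point, here is a rough sketch (my own assumption about an acceptable approach, not a published implementation): evaluate the t density directly using a Lanczos log-gamma and integrate the tail numerically with Simpson's rule. Accuracy is governed by the step count; a production library would use the regularized incomplete beta function instead.

        // Log-gamma via the Lanczos approximation (g = 7), valid for x > 0.
        function logGamma(x) {
          var c = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
                   771.32342877765313, -176.61502916214059, 12.507343278686905,
                   -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7];
          x -= 1;
          var a = c[0], t = x + 7.5;
          for (var i = 1; i < 9; i++) a += c[i] / (x + i);
          return 0.5 * Math.log(2 * Math.PI) + (x + 0.5) * Math.log(t) - t + Math.log(a);
        }

        // Density of Student's t with df degrees of freedom.
        function tPdf(t, df) {
          var logNorm = logGamma((df + 1) / 2) - logGamma(df / 2)
                      - 0.5 * Math.log(df * Math.PI);
          return Math.exp(logNorm) * Math.pow(1 + t * t / df, -(df + 1) / 2);
        }

        // p-value: integrate the density from 0 to |t| with Simpson's rule,
        // then use symmetry for the tail. two_tailed defaults to true.
        function tdist(t_value, degrees_of_freedom, two_tailed) {
          if (two_tailed === undefined) two_tailed = true;
          var t = Math.abs(t_value), n = 1000, h = t / n;
          var sum = tPdf(0, degrees_of_freedom) + tPdf(t, degrees_of_freedom);
          for (var i = 1; i < n; i++) {
            sum += tPdf(i * h, degrees_of_freedom) * (i % 2 ? 4 : 2);
          }
          var oneTail = 0.5 - sum * h / 3;
          return two_tailed ? 2 * oneTail : oneTail;
        }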

    Read the article

  • Hadoop: Iterative MapReduce Performance

    - by S.N
    Is it correct to say that parallel computation with iterative MapReduce is justified only when the training data is too large for non-parallel computation of the same logic? I am aware that there is overhead in starting MapReduce jobs, which can be critical for overall execution time when a large number of iterations is required. I can imagine that in many cases sequential computation is faster than parallel computation with iterative MapReduce, as long as the data set fits in memory. Is scale the only benefit of iterative MapReduce? If not, what other benefits could there be?

    Read the article

  • openmp in mex : stackoverflow error

    - by Edwin
    I have the following fragment of code, which is giving me a stack overflow error:

        #pragma omp parallel shared(Mo1, Mo2, sum_normalized_p_gn, Data, Mean_Out, Covar_Out, Prior_Out, det) private(i) num_threads(number_threads)
        {
            // every thread has a new copy
            double* normalized_p_gn = (double*)malloc(NMIX * sizeof(double));

            #pragma omp critical
            {
                int id = omp_get_thread_num();
                int threads = omp_get_num_threads();
                mexEvalString("drawnow");
            }

            #pragma omp for
            // some parallel processing...
        }

    The variables declared shared are created by malloc, and they consume a large amount of memory. There are two questions regarding the above code. 1) Why would this generate a stack overflow error (i.e. a segmentation fault) before it even enters the parallel for loop? It works fine when run in sequential mode. 2) Am I right to dynamically allocate memory for each thread, as with "normalized_p_gn" above? Regards, Edwin

    Read the article

  • Mapping Absolute / Relative (Local) Paths to Absolute URLs

    - by Alix Axel
    I need a fast and reliable way to map an absolute or relative local path (say ./images/Mafalda.jpg) to its corresponding absolute URL. So far I've managed to come up with this:

        function Path($path)
        {
            if (file_exists($path) === true) {
                return rtrim(str_replace('\\', '/', realpath($path)), '/') . (is_dir($path) ? '/' : '');
            }
            return false;
        }

        function URL($path)
        {
            $path = Path($path);
            if ($path !== false) {
                return str_replace($_SERVER['DOCUMENT_ROOT'],
                    getservbyport($_SERVER['SERVER_PORT'], 'tcp') . '://' . $_SERVER['HTTP_HOST'], $path);
            }
            return false;
        }

        URL('./images/Mafalda.jpg'); // http://domain.com/images/Mafalda.jpg

    It seems to be working as expected, but since this is a critical feature of my app, I want to ask whether anyone can spot a problem I might have missed. Optimizations are also welcome, since I'm going to call this function several times per request.

    Read the article

  • Why is FxCop warning about an overflow (CA2233) in this C# code?

    - by matt
    I have the following function to build an int from a high byte and a low byte:

        public static int FromBytes(byte high, byte low)
        {
            return high * (byte.MaxValue + 1) + low;
        }

    When I analyze the assembly with FxCop, I get the following critical warning: "CA2233: OperationsShouldNotOverflow. Arithmetic operations should not be done without first validating the operands to prevent overflow." I can't see how this could possibly overflow, so I am just assuming FxCop is being overzealous. Am I missing something? And what steps could be taken to correct what I have (or at least make the FxCop warning go away)?
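
    For what it's worth, a commonly suggested rewrite (my sketch, not from the question): both bytes are promoted to int before the arithmetic, so the result peaks at 255 * 256 + 255 = 65535 and cannot overflow an int; expressing the combination as a shift and an OR states that intent and avoids the arithmetic that the CA2233 heuristic flags.

        public static class ByteMath
        {
            // Combines a high and a low byte into a 16-bit value held in an int.
            // high and low widen to int, so no overflow is possible; the
            // shift/or form sidesteps FxCop's overflow heuristic.
            public static int FromBytes(byte high, byte low)
            {
                return (high << 8) | low;
            }
        }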

    Read the article

  • Backup Google Calendar programmatically: http://www.google.com/reader/subscriptions/export

    - by Michael
    I'm struggling with writing a Python script that automatically grabs the zip file containing all my Google calendars and stores it (as a backup) on my hard disk. I'm using ClientLogin to get an authentication token (and can successfully obtain the token). Unfortunately, I'm unable to retrieve the file at https://www.google.com/calendar/exporticalzip - it always asks me for the login credentials again, returning a login page as HTML instead of the zip. Here's the critical code:

        post_data = urllib.urlencode({'auth': token, 'continue': zip_url})
        request = urllib2.Request('https://www.google.com/calendar', post_data, header)
        try:
            f = urllib2.urlopen(request)
            result = f.read()
        except:
            print "Error"

    Does anyone have any ideas, or has anyone done this before? Or an alternative idea for how to back up all my calendars (automatically!)?

    Read the article

  • OpenGL Shading Language backwards compatibility

    - by Luca
    I've noticed that my GLSL shaders fail to compile when the GLSL version is lower than 130. What are the most critical elements for having a backward-compatible shader source? I don't want full backward compatibility, but I'd like to understand the main guidelines for having simple (forward-compatible) shaders running on GPUs with a GLSL version lower than 130. Of course, part of the problem can be solved with the preprocessor:

        #if __VERSION__ < 130
        #define VERTEX_IN attribute
        #else
        #define VERTEX_IN in
        #endif

    But there are probably many issues that I am not aware of. Thank you.

    Read the article

  • DotNetNuke error

    - by donfigga
    Hi there, I'm trying to debug a DotNetNuke server error and I'm not sure where to start. I don't have the code locally, or else I could debug it (not that I'm familiar with the DNN setup). This bug affects making CMS updates to the site, with the message "A critical error has occurred". I have been unsuccessfully trying to find the cause, and I'm finally throwing up my hands. I don't even need a fix; I just want to find out what is causing the error so I can provide an estimate for a fix, and I can't even seem to do that. I have tried looking at the logs, but nothing seems to be logged about this error. Is there a way to turn off custom error handling so as to get some clues as to the cause of this bug? Any suggestions would be welcome, as I am getting desperate here :)
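
    One hedged pointer: since DNN is an ASP.NET application, the standard way to unmask the underlying exception is the customErrors setting in web.config (this is general ASP.NET behavior, not DNN-specific advice; revert it after debugging):

        <!-- web.config: show full error details instead of the friendly
             error page. Set back to RemoteOnly when done debugging. -->
        <configuration>
          <system.web>
            <customErrors mode="Off" />
          </system.web>
        </configuration>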

    Read the article

  • mod_rewrite to find missing /img/foo.jpg in /img/f/

    - by Ambrose
    I've got a folder of images that is reaching critical mass after a few years. I want to move the images into alphabetical folders, so that /img/foo.jpg goes into /img/f/foo.jpg, /img/bar.jpg goes into /img/b/bar.jpg, and so on. To make the transition smooth, and to allow the manual uploaders to keep putting stuff into the top level, I'd like to use mod_rewrite to do this: if /img/foo.jpg exists, serve it up; if not, look for it in /img/f/foo.jpg. Thanks for any suggestions. For the record, no, I don't think we need to go to /img/f/fo/foo.jpg just yet.
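
    A minimal sketch of that fallback (assuming an .htaccess file in the document root and lowercase file names; test before relying on it):

        RewriteEngine On
        # If /img/<file> does not exist as a file on disk...
        RewriteCond %{DOCUMENT_ROOT}/img/$1 !-f
        # ...rewrite /img/foo.jpg to /img/f/foo.jpg (first letter as subfolder).
        RewriteRule ^img/(([a-z0-9])[^/]*\.jpg)$ /img/$2/$1 [L]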

    Read the article

  • The reasons to upgrade from Delphi 2009

    - by Serg
    I have made the question "community wiki" - it is subjective. I upgraded to Delphi 2009 for the Unicode support. I have found anonymous methods a very interesting and useful language feature; I can't say the same about generics. Generics seemed important to me before the upgrade to Delphi 2009, but I have never used them and probably never will. As for Delphi 2010, I don't need the attributes, and I don't like the whole idea of extended RTTI - that is why Delphi 2009 is better for me. Sometimes I hit one or another annoying bug in the Delphi 2009 IDE, but they are not critical and I can live with them. I have no plans to develop software for Mac or Linux. Surely at some point I will need 64-bit support, so I am thinking about upgrading to Delphi 2012. Are there any more reasons that could push me to upgrade from Delphi 2009?

    Read the article

  • Managing execution priorities and request expiry time in your web application

    - by Dan
    Some installations that run our applications can be under hefty stress on a busy day. Our clients ask us if there is a way to manage priorities in our application. For example, in a typical internet banking application, banks are interested in having the "Transfer money" form responsive, while the "Statement" page is a lot less critical. Not being able to transfer money is a direct loss for the bank, while not being able to produce a statement or something similar can be fixed with an apology. AFAIK, you cannot manage different request or session timeouts in a typical web application either; there is one value for the whole web app. Managing message priority and expiry time is a typical feature of many middleware platforms. Something like this could be useful for a web front end as well. Do any web servers (either Java or .NET) or web frameworks provide these features? How would you go about implementing it if you had to roll your own?

    Read the article

  • How can I limit access to a particular class to one caller at a time in a web service?

    - by MusiGenesis
    I have a web service method in which I create a particular type of object, use it for a few seconds, and then dispose of it. Because of problems arising from multiple threads creating and using instances of this class at the same time, I need to restrict the method so that only one caller at a time ever has one of these objects. To do this, I am creating a private static object:

        private static object _lock = new object();

    ... and then inside the web service method I do this around the critical code:

        lock (_lock)
        {
            using (DangerousObject d = new DangerousObject())
            {
                d.MakeABigMess();
                d.CleanItUp();
            }
        }

    I'm not sure this is working, though. Do I have this right? Will this code ensure that only one instance of DangerousObject is instantiated and in use at a time?

    Read the article

  • Make compiler copy characters using movsd

    - by Suma
    I would like to copy a relatively short sequence of memory (less than 1 KB, typically 2-200 bytes) in a time-critical function. The best code for this on the CPU side seems to be rep movsd. However, I somehow cannot make my compiler generate this code. I hoped (and I vaguely remember seeing so) that using memcpy would do this via a compiler built-in intrinsic, but based on disassembly and debugging it seems the compiler calls the memcpy/memmove library implementation instead. I also hoped the compiler might be smart enough to recognize the following loop and use rep movsd on its own, but it seems it does not:

        char *dst;
        const char *src;
        // ...
        for (int r = size; --r >= 0; )
            *dst++ = *src++;

    Is there some way to make the Visual Studio compiler generate a rep movsd sequence other than using inline assembly?
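
    One possible avenue (an assumption on my part, not something from the question): MSVC exposes the string-move instructions directly as compiler intrinsics in <intrin.h>, so rep movsd can be requested explicitly rather than coaxed out of the optimizer:

        #include <intrin.h>
        #include <stddef.h>

        // Sketch: bulk-copy with rep movsd, then a byte tail for the
        // remainder. __movsd/__movsb are MSVC intrinsics that emit
        // rep movsd / rep movsb; overlap handling is the caller's problem.
        void copy_small(void *dst, const void *src, size_t bytes)
        {
            __movsd((unsigned long *)dst, (const unsigned long *)src, bytes / 4);
            size_t done = bytes & ~(size_t)3;
            __movsb((unsigned char *)dst + done,
                    (const unsigned char *)src + done, bytes & 3);
        }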

    Read the article

  • Easiest way to retrofit retry logic on LINQ to SQL migration to SQL Azure

    - by Pat James
    I have a couple of existing ASP .NET web forms and MVC applications that currently use LINQ to SQL with a SQL Server 2008 Express database on a Windows VPS: one VPS for both IIS and SQL. I am starting to outgrow the VPS's ability to effectively host both SQL and IIS and am getting ready to split them up. I am considering migrating the database to SQL Azure and keeping IIS on the VPS. After doing initial research it sounds like implementing retry logic in the data access layer is a must-do when adopting SQL Azure. I suspect this is even more critical to implement in my situation where IIS will be on a VPS outside of the Azure infrastructure. I am looking for pointers on how to do this with the least effort and impact on my existing code base. Is there a good retry pattern that can be applied once at the LINQ to SQL data access layer, as opposed to having to wrap all of my LINQ to SQL operations in try/catch/wait/retry logic?
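
    A hedged sketch of one way to do that with a single choke point (names like MyDataContext are illustrative assumptions, and which SqlException numbers count as transient is deliberately left open): funnel every LINQ to SQL operation through one retry helper so the policy lives in exactly one place.

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public static class SqlRetry
        {
            // Runs a data-access delegate, retrying on SqlException up to
            // maxAttempts times with a simple linear backoff. A production
            // version would inspect the error number for transient faults.
            public static T Execute<T>(Func<T> work, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return work();
                    }
                    catch (SqlException)
                    {
                        if (attempt >= maxAttempts) throw;
                        Thread.Sleep(TimeSpan.FromSeconds(attempt));
                    }
                }
            }
        }

        // Usage (MyDataContext is hypothetical):
        // var active = SqlRetry.Execute(() => {
        //     using (var db = new MyDataContext())
        //         return db.Customers.Where(c => c.IsActive).ToList();
        // });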

    Read the article

  • How is dynamic memory allocation handled when extreme reliability is required?

    - by sharptooth
    It looks like dynamic memory allocation without garbage collection is a recipe for disaster: dangling pointers here, memory leaks there. It is very easy to plant an error that is hard to find and has severe consequences. How are these problems addressed when mission-critical programs are written? I mean, if I write a program that controls a spacecraft like Voyager 1, one that has to run for years, and I leave even the smallest leak, that leak can accumulate and halt the program sooner or later, and when that happens it translates into epic failure. How is dynamic memory allocation handled when a program needs to be extremely reliable?
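
    For illustration, one pattern that comes up in high-reliability code (general practice, not something from the question): allocate everything from a fixed static arena during initialization and never allocate afterward, so exhaustion surfaces at startup rather than years into the mission. A minimal sketch:

        #include <stddef.h>

        #define POOL_BYTES (64 * 1024)

        static unsigned char pool[POOL_BYTES];
        static size_t pool_used = 0;

        /* Bump allocator over a static arena. All allocation happens during
         * init; there is no free(), so leaks and fragmentation cannot occur
         * at runtime. Returns NULL when the arena is exhausted. */
        void *pool_alloc(size_t bytes)
        {
            size_t aligned = (bytes + 7) & ~(size_t)7; /* 8-byte alignment */
            if (pool_used + aligned > POOL_BYTES)
                return NULL;
            void *p = &pool[pool_used];
            pool_used += aligned;
            return p;
        }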

    Read the article

  • Job conditions conflicting with personal principles on software-development - how much is too much?

    - by Baelnorn
    Sorry for the incoming wall of text (and for my probably bad English), but I just need to get this off my chest somehow. I also accept that this question will probably be closed as subjective and argumentative, but I need to know one thing: how much BS are programmers supposed to put up with before breaking?

    My background: I'm 27 years old and have a B.Sc. in computer engineering, with a graduation grade of 1.8, from a university of applied science. I went looking for a job right after graduation and got three offers right away. Two of them paid vastly more than the third, but that third one seemed more interesting, so I went for it.

    My situation: I've been working for the company for 17 months now, but it feels like more of a drag every day, primarily because the company (which has only 5 developers besides me, and of those I work with 4) turned out to be pretty much the antithesis of what I expected (and was taught in university) a modern software company to be.

    I agreed to accept less than half of the payment appropriate for my qualification for the first year because I was promised a trainee program. However, the trainee program turned out to be "here's a computer, there are some links on the stuff we use, now do what your colleagues tell you". Furthermore, during my whole time there (trainee or not) I haven't been given the grace of even a single code review; apparently nobody is interested in my work as long as it "just works". I was told in the job interview that "Microsoft technology plays a central role in the company", yet I've been slowly eroding my cognitive functions with Flex/ActionScript/Cairngorm ever since I started (despite having applied as a C#/.NET developer). Actually, the company's primary projects are based on Java/XSLT and Flex/ActionScript (with some SAP/ABAP here and there, but I'm not involved in that), and they had been working on these before I even applied. Having had no experience with that particular technology, the framework, the field (RIA), or developing business-scale applications, I obviously made several mistakes. However, my boss told me that he let me make those mistakes (which ate at least 2 months of development time on their own) on purpose, to provide some "learning experience".

    Even as a trainee I was already tasked with working on a business-critical application - on my own, without supervision, without code reviews. My boss thinks agile methods are a waste of time and money and deems putting more than one developer on any project inefficient. Documentation is not necessary, and each developer should document only what he himself needs for his work. Recently he wanted us to do bug tracking with Excel and email instead of an already existing Bugzilla, overriding a unanimous decision made by all the developers and testers involved; only after another hour-long private discussion with a senior developer did he agree to let us use the bug tracker. Project management is basically absent; there are only a few Excel sheets floating around in which the senior developer lists some things (not all, mind you) with time estimates ranging from days to months, trying to at least somehow organize the whole mess. A development process is likewise absent; each developer just works on his own, however he wants. There are not even coding conventions in the company.

    Testing is done manually, with a single tester (sometimes two) per project, because automated testing wasn't given the least thought when the whole project was started. I guess it's no big surprise that each developer also has his own share of hundreds of overtime hours (which are, of course, unpaid). Each developer is tasked with his own project(s), which in turn leads to very extensive knowledge monopolization: if one developer were to have an accident or become ill, there would be absolutely no one who could even hope to do his work. Considering that each developer has his own business-critical application to work on, I guess that's a pretty bad situation.

    I've been trying to change things for the better. I tried to introduce a development process, but my first attempt was pretty much shot down by my boss with "I don't want to discuss agile methods". After that I put together a process that at least resembled how most of the developers were already working, and added things like automated (or at least organized) testing, coding conventions, etc. However, this was also shot down because it wasn't "simple" enough to be shown on a business slide (actually, I wasn't even given the 15 minutes I would have needed to present the process at the meeting).

    My problem: I can't stand working there any longer. Seriously, I'm considering resigning on Monday, which would still leave me with 3 months to work there due to the notice period. My primary goal since I started studying computer science has been to be a good computer scientist, working with modern technologies and adhering to modern, proven principles and methods. However, the company I'm working for seems to make that impossible. Some days I feel as if I were living in a perverted real-life version of the Dilbert comics.

    My question: Am I overreacting? Is this the reality every graduate has to face? Should I betray my sound principles and just accept these working conditions, or should I gtfo of there? What is the opinion of other developers on this? Would you put up with all that stuff?

    Read the article

  • Distributed transactions

    - by javi
    Hello! I have a question regarding distributed transactions. Let's assume I have 3 transaction programs:

        Transaction A:
            begin
            a = read(A)
            b = read(B)
            c = a + b
            write(C, c)
            commit

        Transaction B:
            begin
            a = read(A)
            a = a + 1
            write(A, a)
            commit

        Transaction C:
            begin
            c = read(C)
            c = c * 2
            write(A, c)
            commit

    So there are 5 pairs of critical operations: C2-A5, A2-B4, B4-C4, B2-C4, A2-C4. I should ensure integrity and confidentiality; do you have any idea how to achieve this? Thank you in advance!

    Read the article

  • Setting up DomainKeys/DKIM in a PHP-based SMTP client

    - by Alex
    It looks like there are some great libraries out there to do DomainKeys signing of emails on C#/.NET, but I'm having a really hard time finding the same kind of support for PHP. Maybe I'm not looking in the right place? The only one I found is http://php-dkim.sourceforge.net/; it looks incredibly hacky and supports PHP4 only. Considering how popular PHP is, and how critical DomainKeys are for email classification as non-spam, I'd expect better tools; do you know of any? Any other tricks you'd recommend?

    Read the article

  • How can I limit access to a particular class to one caller at a time in an ASMX web service?

    - by MusiGenesis
    I have a web service method in which I create a particular type of object, use it for a few seconds, and then dispose of it. Because of problems arising from multiple threads creating and using instances of this class at the same time, I need to restrict the method so that only one caller at a time ever has one of these objects. To do this, I am creating a private static object:

        private static object _lock = new object();

    ... and then inside the web service method I do this around the critical code:

        lock (_lock)
        {
            using (DangerousObject d = new DangerousObject())
            {
                d.MakeABigMess();
                d.CleanItUp();
            }
        }

    I'm not sure this is working, though. Do I have this right? Will this code ensure that only one instance of DangerousObject is instantiated and in use at a time? Or does each caller get their own copy of _lock, rendering my code here laughable?

    Read the article

  • Advice needed from a PHP/CakePHP expert

    - by Hiro
    I'm very new to programming (besides SQL and databases), but I ultimately want to master CakePHP for web development. Given how new I am, I'm rather lost as to how to get started. Down the road, I want to use an MVC framework to help keep myself disciplined in the way I build. However, I know basic knowledge of PHP and of object-oriented PHP is required. So my question is this: what are the right steps to mastering CakePHP? I don't want to skip critical phases of learning before getting to CakePHP, but at the same time I don't want to spend more time than necessary on plain PHP if I can pick it up directly while learning CakePHP. Any advice would be appreciated.

    Read the article

  • Global find object references in NHibernate

    - by Miral
    Is it possible to perform a global reverse find on NHibernate-managed objects? Specifically, I have a persistent class called "Io". There are a huge number of fields across multiple tables that can potentially contain an object of that type. Given a specific instance of an Io object, is there a way to retrieve a list of objects (of any type) that actually do reference that specific instance? (Bonus points if it can identify which specific fields contain the reference, but that's not critical.) Since the NHibernate mappings define all the links (and the underlying database has corresponding foreign key constraints), there ought to be some way to do it.

    Read the article
