Search Results

Search found 110 results on 5 pages for 'negligible'.

Page 1/5 | 1 2 3 4 5  | Next Page >

  • Analyzing Memory Usage: Java vs C++ Negligible?

    - by Anthony
    How does the memory usage of an integer object written in Java compare/contrast with the memory usage of an integer object written in C++? Is the difference negligible? No difference? A big difference? I'm guessing it's the same, because an int is an int regardless of the language(?). The reason I ask is that I was reading about the importance of knowing when a program's memory requirements will prevent the programmer from solving a given problem. What fascinated me is the amount of memory required for creating a single Java object. Take, for example, an integer object. Correct me if I'm wrong, but a Java integer object requires 24 bytes of memory: 4 bytes for its int instance variable, 16 bytes of overhead (reference to the object's class, garbage collection info and synchronization info), and 4 bytes of padding. As another example, a Java array (which is implemented as an object) requires 48+ bytes: 24 bytes of header info, 16 bytes of object overhead, 4 bytes for length, and 4 bytes for padding, plus the memory needed to store the values. How do these memory usages compare with the same code written in C++? I used to be oblivious to the memory usage of the C++ and Java programs I wrote, but now that I'm beginning to learn about algorithms, I'm gaining a greater appreciation for the computer's resources.

    Read the article

  • Debugging ASP.NET in VS

    - by negligible
    A lot of what I'm doing at the moment is figuring out other people's code and adding or adapting functions, so currently I am debugging more than I am writing code of my own. I'm still new to this (Junior Developer), and I am always finding new ways to improve what I am doing. For example, I recently found This Guide, which had some excellent tips, such as overriding the ToString() method in your classes so children are readable from their parents. So I am looking for any other tips or tricks to make my debugging more efficient, as I recognise it as a big part of programming, that you more experienced programmers may have picked up or found. Anything is appreciated; I can read websites just fine, so no need to explain it yourself if you have a good link!

    Read the article

  • Intel P6100 CPU and Mobile Intel® HM55 Express Chipset

    - by Christopher Painter
    I have an Asus K52F-BBR5 notebook that uses an Intel P6100 (2 GHz, 15x multiplier) and the HM55 Express chipset. I'm looking to replace its 3GB of RAM with 8GB. The Crucial database seems to indicate that both PC3-8500 CAS 7 and PC3-10666 CAS 9 will work. I'm not up to date on the latest DDR3 nomenclature, and I'm wondering which would provide better performance. The price difference is negligible. Drawing on past experiences from many years ago, I could make an argument for either based on sync/async bus speed and CAS latency differences, but the truth is I don't know enough about the HM55 chipset to know which would be the correct choice. Does anyone know the answer, or can you point me to information that would help me make the choice? I'm pretty sure the performance difference will be somewhat negligible as well, but I'd still like to make the optimal choice.

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose, and it has to stay that way, but sometimes they prove very useful to other assemblies. Making them public in a reliable way is more complicated than most would think; for example, all methods that accept nullable types must now contain argument checking, while not charging internal callers with the price of doing so. The price might be negligible, but it is far from right. While refactoring, I have revisited this case multiple times and have come up with the following solutions so far. 1) Have an internal and a public class for each helper: the internal class contains the actual code, while the public class serves as an access point which does argument checking. Cons: the internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types), and it isn't possible to discriminate methods that don't need argument checking. 2) Have one class that contains both internal and public members (as conventionally implemented in the .NET Framework). At first this might sound like the best possible solution, but it has the same first unpleasant con as solution 1. Cons: internal methods require a prefix to avoid ambiguity. 3) Have an internal class which is implemented by a public class that overrides any members requiring argument checking. Cons: it is non-static, so at least one instantiation is required. This doesn't really fit the helper-class idea; since a helper generally consists of independent fragments of code, it should not require instantiation. Non-static methods are also slower by a negligible degree, which doesn't really justify this option either. There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart. A note on solution 1: the first con can be avoided by putting both classes in different namespaces; for example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".
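
    A minimal C# sketch of option 1 might look like the following (the class and method names are hypothetical, purely for illustration): the public facade validates arguments and delegates to the internal class, which trusted internal callers use directly without paying for the checks.

        using System;

        // Internal helper: assumes its arguments are already valid, so trusted
        // internal callers pay no checking cost. (All names here are hypothetical.)
        internal static class StringHelpersCore
        {
            internal static string Reverse(string value)
            {
                char[] chars = value.ToCharArray();
                Array.Reverse(chars);
                return new string(chars);
            }
        }

        // Public facade: validates arguments, then delegates to the internal class.
        public static class StringHelpers
        {
            public static string Reverse(string value)
            {
                if (value == null) throw new ArgumentNullException("value");
                return StringHelpersCore.Reverse(value);
            }
        }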

    Read the article

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a small summary of my application architecture. I have a Windows service which is listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received XML snippet, and saves the message to the DB (uses NH). There is a WPF UI with a read-only connection to the DB, and on refresh the UI displays the objects on the screen. While the UI does a refresh, it retrieves the XML and deserializes it, from which the object's properties are derived and bound to the screen. For example, assume an XML snippet XXX is received by the service: it deserializes the XML, creates the book object and saves it to the DB, and a property/column SCHEMA contains the XML snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the XML which was saved (yes, the XML is the constructor param). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible; however, the time spent creating the entities is proportionally huge (10 ms vs. 1990 ms). I guess it's due to the fairly large size of the XML snippet and its deserialization. My question is, how can I improve the performance? I dispose of sessions after every refresh and am not using lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe only one of them has been updated. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
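
    Since the DB time is negligible and the cost is in rebuilding objects from unchanged XML, one option worth sketching is an application-level cache keyed by id plus a change marker. A minimal C# sketch, assuming the entity exposes an id, a version (or last-updated timestamp) and the raw SCHEMA XML; all names are hypothetical, and this is independent of NHibernate's own second-level cache:

        using System.Collections.Generic;

        // Stand-in for the real entity; the constructor does the expensive XML deserialization.
        public class Book
        {
            public Book(string schemaXml) { /* deserialize schemaXml here */ }
        }

        // Caches deserialized books by id and only rebuilds one when its version
        // (or timestamp) changes, so unchanged rows skip deserialization on refresh.
        public class BookViewCache
        {
            private readonly Dictionary<int, KeyValuePair<int, Book>> _cache =
                new Dictionary<int, KeyValuePair<int, Book>>();

            public Book GetOrBuild(int id, int version, string schemaXml)
            {
                KeyValuePair<int, Book> entry;
                if (_cache.TryGetValue(id, out entry) && entry.Key == version)
                    return entry.Value;                        // unchanged: reuse cached object

                var view = new Book(schemaXml);                // expensive path
                _cache[id] = new KeyValuePair<int, Book>(version, view);
                return view;
            }
        }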

    Read the article

  • The effect of a large number of files on disk space in Unix filesystems

    - by user46976
    If I have a text file in Unix that contains N-many independent entries (e.g. records about employees, where each employee has a separate record), is it expected that this file will take up less space than if I split it into N files, each containing the entry for one employee? In other words, can one save significant space on Unix file systems by concatenating many files together, or is the difference negligible? Thanks.
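
    The space difference mostly comes from per-file block rounding and metadata, both of which vary by filesystem. A back-of-the-envelope C# sketch of just the block-rounding part (the 4096-byte block size and the record sizes are assumptions for illustration only):

        using System;
        using System.Linq;

        static class DiskUsageEstimate
        {
            // Counts only block rounding: each file occupies a whole number of
            // filesystem blocks. Inode/metadata overhead is ignored, and the
            // 4096-byte block size is an assumption; real values vary.
            static long Estimate(long[] fileSizes, long blockSize)
            {
                long total = 0;
                foreach (long size in fileSizes)
                    total += ((size + blockSize - 1) / blockSize) * blockSize; // round up
                return total;
            }

            static void Main()
            {
                // Example: 1000 employee records of ~200 bytes each.
                Console.WriteLine(Estimate(new[] { 200000L }, 4096));                        // one big file:     200704
                Console.WriteLine(Estimate(Enumerable.Repeat(200L, 1000).ToArray(), 4096));  // 1000 small files: 4096000
            }
        }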

    Read the article

  • Resource consumption of FreeBSD's jails

    - by Juan Francisco Cantero Hurtado
    Just out of curiosity. An example machine: a dedicated amd64 server with the latest stable version of FreeBSD and UFS for the partitions. How many resources does FreeBSD consume for each empty jail? I mean, I don't want to know the resource consumption of a jailed server or whatever, just the overhead of each jail. I'm especially interested in CPU, memory and IO. For a few jails the overhead is negligible, but imagine a server with 100 jails.

    Read the article

  • Does having your page file on decrease the life expectancy of your hard drive?

    - by user695874
    If I have my page file turned on in Windows, as opposed to having it turned off as shown below: would having the page file turned on decrease the life expectancy of my hard drive? If so, how much would the life decrease, say, with regular use (4 hours a day)? I'm thinking it would decrease some, just because there would be more writing to the hard drive, but I wasn't sure if it would be too negligible to even matter.

    Read the article

  • What is the benefit of writing to a temp location, And then copying it to the intended destination?

    - by Devdatta Tengshe
    I am writing an application that works with satellite images, and my boss asked me to look at some of the commercial applications and see how they behave. I found a strange behavior, and as I kept looking, I found it in other standard applications as well. These programs first write to the temp folder and then copy the result to the intended destination. Example: 7zip first extracts to the temp folder and then copies the extracted data to the location you asked it to extract to. I see several problems with this approach: 1. The temp folder might not have enough space, while the intended location does. 2. If it is a large file, the copy operation can take a non-negligible amount of time. I thought about it a lot, but I couldn't see a single positive point to doing this. Am I missing something, or is there a real benefit to doing this?
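
    Purely to illustrate the behavior being described, here is a minimal C# sketch of the write-to-a-temp-location-then-copy pattern (all file names are hypothetical). Whether the indirection buys anything, such as never leaving a half-written file at the destination, is exactly what the question is asking.

        using System;
        using System.IO;

        class TempThenMove
        {
            static void Main()
            {
                string destination = "output.dat";                     // hypothetical target path
                string temp = Path.Combine(Path.GetTempPath(),
                                           Path.GetRandomFileName());  // scratch file in the temp folder

                // 1. Do all the slow, failure-prone work against the temp file.
                File.WriteAllText(temp, "large result produced here");

                // 2. Only when the temp file is complete, place it at the destination.
                File.Copy(temp, destination, overwrite: true);
                File.Delete(temp);

                Console.WriteLine("Wrote " + destination);
            }
        }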

    Read the article

  • Having a generic data type for a database table column, is it "good" practice?

    - by Yanick Rochon
    I'm working on a PHP project where some object (class member) may contain different data types. For example:

        class Property {
            private $_id;      // (PK)
            private $_ref_id;  // the object reference id (FK)
            private $_name;    // the name of the property
            private $_type;    // 'string', 'int', 'float(n,m)', 'datetime', etc.
            private $_data;    // ...
            // ..snip.. public getters/setters
        }

    Now, I need to perform some persistence on these objects. Some properties may be a text data type, but nothing bigger than what a varchar can hold. Also, later on, I need to be able to perform searches and sorting. Is it good practice to use a single database table for this (i.e., is there a non-negligible performance impact)? If it's "acceptable", then what could be the data type for the data column?

    Read the article

  • Is C++ indispensable for AAA game engines, as long as we have console-platform games? [closed]

    - by user1174924
    C++ has remained the industry standard for game engines largely because of its features. The primary reasons are (afaik): Technical reasons - high performance, native runtime, portability, negligible latency, and more recently concurrency. Socio-technical reasons - availability of libraries, legacy stuff, most scripting languages used in games have a good C API (e.g. Lua), good IDEs, and most recently improved development time (C++11). Social reasons - people know C++, licensed technologies, and it is battle proven. Does this make C++ indispensable for game engines, so long as we have game consoles? Wouldn't the above features make me implement new graphics technology in C++ only? Edit: Will learning C++ guarantee me a job as a game engine dev in the future? I want to master every aspect of the language, but I already know C# and Python. Should I allocate my time to learning C++? I want to be a game engine developer.

    Read the article

  • Inserting Multiple Records in SQL2000

    - by Chris M
    I have a web app that is currently inserting x (between 1 and 40) records into a table that contains about 5 fields, via a LINQ-to-SQL stored procedure call in a loop. Would it be better to manually write the SQL inserts to, say, a StringBuilder and run them against the database once the loop has completed, rather than performing 30 or so separate transactions? Or should I just accept that this is negligible for such a small number of inserts?
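
    For reference, a minimal ADO.NET sketch of the batched approach, assuming a hypothetical Widget(Name, Price) table and classic System.Data.SqlClient. Whether the saving over 30-odd stored-procedure calls is measurable for this row count is exactly the "negligible?" question.

        using System.Data.SqlClient;
        using System.Text;

        class BatchInsertSketch
        {
            // Builds every INSERT into one parameterized batch and sends it in a
            // single round trip. The Widget(Name, Price) table and the connection
            // string are hypothetical placeholders.
            static void InsertWidgets(string connectionString, string[] names, decimal[] prices)
            {
                var sql = new StringBuilder();
                using (var cmd = new SqlCommand())
                {
                    for (int i = 0; i < names.Length; i++)
                    {
                        sql.AppendLine("INSERT INTO Widget (Name, Price) VALUES (@n" + i + ", @p" + i + ");");
                        cmd.Parameters.AddWithValue("@n" + i, names[i]);
                        cmd.Parameters.AddWithValue("@p" + i, prices[i]);
                    }
                    using (var conn = new SqlConnection(connectionString))
                    {
                        conn.Open();
                        cmd.Connection = conn;
                        cmd.CommandText = sql.ToString();
                        cmd.ExecuteNonQuery();   // one round trip instead of one per row
                    }
                }
            }
        }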

    Read the article

  • Which is faster?

    - by kaki
    Is opening one large file and reading it completely into a list faster, or is opening several smaller files (whose total size equals that of the large file), loading each into a list, and manipulating them one by one faster? Which is faster? Is the difference in time large enough to impact my program? A total time difference of less than 30 seconds is negligible for me.
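
    One way to settle it for your own data is simply to time both approaches. A rough sketch (shown here in C#, but the same timing idea applies in any language; the file and directory names are hypothetical stand-ins for the real inputs):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO;

        class ReadBenchmark
        {
            static void Main()
            {
                // Hypothetical inputs: one big file vs. the same data split into parts.
                string bigFile = "all_records.txt";
                string[] smallFiles = Directory.GetFiles("parts", "*.txt");

                var sw = Stopwatch.StartNew();
                var fromBig = new List<string>(File.ReadAllLines(bigFile));
                sw.Stop();
                Console.WriteLine($"One large file:   {sw.ElapsedMilliseconds} ms, {fromBig.Count} lines");

                sw.Restart();
                var fromSmall = new List<string>();
                foreach (string path in smallFiles)
                    fromSmall.AddRange(File.ReadAllLines(path));    // one open/read per file
                sw.Stop();
                Console.WriteLine($"Many small files: {sw.ElapsedMilliseconds} ms, {fromSmall.Count} lines");
            }
        }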

    Read the article

  • Is it possible to keep mysql migration running without keeping connection open?

    - by taw
    ALTER TABLE can easily take a few days, and during this time there's a non-negligible chance of the connection getting terminated due to network problems. Is it possible to start ALTER TABLE (or CREATE TABLE ... SELECT ...; or some other very long-running query) and leave it running without keeping the connection open the whole time? (The obvious solution of screen plus the console mysql client won't easily work, as there's no ssh running on that server, only mysqld.)

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about. After analyzing the timing differences between a Dictionary with locking and the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations. Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks. In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer codes very early for a perceived (whether rightly or wrongly) performance gain, sacrificing good design and maintainability in the process. At best, the performance gains are usually negligible; at worst, they can either negatively impact performance or degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above. I'm not talking about valid performance decisions. There are decisions one should make when designing and writing an application that are valid performance decisions. Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performant algorithms (linear search vs. binary search). But these, in my mind, are macro optimizations. The error is not in deciding to use a better data structure or algorithm; the anti-pattern, as stated above, is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking. This would have been a mistake, as I would be trading away maintainability (ConcurrentDictionary requires no locking, which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified, and you don't risk the developer locking incorrectly) -- and I fell for it even when I knew to watch out for it. I think in my case, and it may be true for others as well, a large part of it was due to the time when I was trained as a developer. I began college in the 90s, when C and C++ were king and hardware speed and memory were still relatively priceless commodities not to be squandered. In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance. This is why in many cases such early code-bases were very hard to maintain. I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead. Now, back then that may have been a valid concern, but with today's modern hardware, even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway), the results of removing method calls to speed up performance are negligible for the great majority of applications. Now, obviously, there are applications where speed is absolutely king (for example, drivers, computer games, operating systems) where such sacrifices may be made.
    But I would strongly advise against such optimization because of its cost. Many folks performing an optimization think it's always a win-win: they're simply adding speed to the application, so what could possibly be wrong with that? What they don't realize is the cost of their choice. For every piece of straightforward code that you obfuscate with performance enhancements, you risk introducing bugs and adding to the long-term technical debt of the application. It will become so fragile over time that maintenance will become a nightmare. I've seen such applications in places I have worked. There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance. Unfortunately, the application's stability often suffers as a result, and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently, when I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation). To me this was almost a joke. Twice as slow sounds bad, but it is almost never as bad as you think -- especially if you're gaining safety. The only time twice is really that much slower is when once was too slow to begin with. Think about it: 2 minutes is slow as a response time because 1 minute is slow. But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial! The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations). To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can. I know it can be a hard habit to break, especially if you started your career early on or in a language such as C, where developers are very performance conscious. But in reality, these types of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization. I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. This is so true. If you're a beginner, resist the urge to optimize at all costs. And if you are an expert, delay that decision. As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient. Chances are it will be network, database, or disk hits that will be your slow-down, not your code. As they say, 98% of your code's bottleneck is in 2% of your code, so premature optimization may add maintenance and safety debt that won't have any measurable impact. Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, go back and optimize further.
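
    For readers who haven't seen the two styles side by side, here is a minimal sketch of the trade-off being discussed: a plain Dictionary guarded by a lock versus ConcurrentDictionary. The word-count example is hypothetical; the APIs shown are the standard .NET 4 ones.

        using System.Collections.Concurrent;
        using System.Collections.Generic;

        class WordCounts
        {
            // Option 1: plain Dictionary, every access must be wrapped in a lock.
            private readonly object _sync = new object();
            private readonly Dictionary<string, int> _locked = new Dictionary<string, int>();

            public void AddLocked(string word)
            {
                lock (_sync)
                {
                    int count;
                    _locked.TryGetValue(word, out count);
                    _locked[word] = count + 1;
                }
            }

            // Option 2: ConcurrentDictionary, no explicit locking in the caller's code.
            private readonly ConcurrentDictionary<string, int> _concurrent =
                new ConcurrentDictionary<string, int>();

            public void AddConcurrent(string word)
            {
                _concurrent.AddOrUpdate(word, 1, (key, current) => current + 1);
            }
        }

    The post's point stands in the sketch: the second form is shorter, harder to get wrong, and safe to enumerate while it is being modified, so trading that away for a micro-benchmark win is exactly the premature optimization being warned against.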

    Read the article

  • Idle VMware Guests Resource Consumption

    - by Ravenor
    I want to set up a number of guests with multiple CPUs (4) and at least 4 GB of RAM, running Ubuntu Linux. These machines will mostly be idle, but from time to time their workload will require all their resources, especially the CPU. The hosts are ESXi 5.x. The question is: am I right in thinking that the resource consumption of the machines when idle will be negligible? We know this is true of disk and CPU. The only concern left, therefore, is memory. Since ESX over-commits memory, it makes sense that unused memory of any guest is paged out. Is my thinking correct?

    Read the article

  • Windows hangs on Rename, Delete and Move for specific MKV files

    - by Creativehavoc
    I have been googling this issue for a long time, and no full solutions seem to be out there. For certain MKV files, Windows hangs when moving, copying, or deleting them. These files play fine in a player such as GOM Player. System: a fast Windows 7 64-bit box. Results: CPU change is negligible; memory usage rockets up to ~100%; in the case of a delete or move, the "discovering files" dialogue box stays up for a long time; a rename simply shows a spinning icon until it finishes; the action is completed after an EXTREMELY long amount of time; memory usage does not return to normal. "Fixes": disable thumbnail creation (helps in some cases); after a move/rename/delete, kill explorer with Task Manager and relaunch it to reclaim your memory. Even with thumbnails turned off, the issue persists. I have also tried re-muxing a file, which worked fine, but still resulted in a file with the same issues as above.

    Read the article

  • High Load - Low IO - Low CPU usage

    - by devup
    I have a system whose load is rather high. As you can see from the top output below, CPU usage and I/O are negligible:

        top - 17:31:59 up 4 days, 2:34, 2 users, load average: 1.00, 0.99, 1.00
        Tasks: 71 total, 1 running, 70 sleeping, 0 stopped, 0 zombie
        Cpu(s): 2.0%us, 2.0%sy, 0.0%ni, 95.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   960720k total,  707288k used,  253432k free,  67328k buffers
        Swap: 2811896k total,    2644k used, 2809252k free, 528928k cached

          PID USER  PR NI VIRT RES  SHR S %CPU %MEM TIME+   COMMAND
        15310 root  20  0  2512 1128 888 R  2.1  0.1 0:00.05 top

    I would appreciate any assistance with isolating the cause(s) of the high load when I/O and CPU are not factors.

    Read the article

  • Need advice on PC components for high-end games but in limited budget

    - by Md Atif
    I need advice: I want to buy PC components which will be good for gaming and still be under my budget. The first thing I chose is the graphics card: ATI HD 7850 2GB DDR5. Matching things up with this, I selected the following: Processor: AMD 3.5 GHz AM3 FX 8320 8-core Piledriver; Mobo: MSI 990FXA-GD65; RAM: 8GB DDR3 (4x2). Does this setup look compatible? If downgrading any of the above components would have a negligible effect on performance/gaming (like buying a 970-chipset mobo instead of a 990), please let me know so I can save some money :) Any other advice?

    Read the article

  • Vmstat indicates memory is disappearing

    - by jimbotron
    I wanted to profile the memory usage of a script. Here's the vmstat output before it was running:

        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  15624 186660  39460 439052    0    0     0     2    1    1  0  0 100  0

    Here's the output while the script is running, at the point where free memory was at its lowest value:

        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  15624  11464  40312 473524    0    0     0     2    1    1  0  0 100  0

    So free memory dropped by about 175 MB, and I expected that buff would increase by that amount. But it seems the other columns changed by relatively negligible amounts; how is this possible? Am I interpreting this wrong, or is some memory just not being accounted for in this output?

    Read the article

  • Very High Network out in ec2 instance

    - by Jatin
    I launched an ubuntu-14.04-64bit instance in Amazon EC2 two days ago, started Tomcat 7.0.54 on that instance, and deployed my application war files. It has no other software installed other than Tomcat and the defaults. In the past 2 days, it shows 858 GB of data transfer (Network Out) from that instance. I have attached a graph of the Amazon CloudWatch metric "Network Out". My application does not do any data download/upload; it's a Java Spring application and the front end is in HTML and JavaScript. My application's traffic was very low (fewer than 20 hits) in those 2 days. Is there a way to find out why these data transfers happened, and also to find out what data has been transferred? As you can see in the graph, network out was 20 GB per minute. Some more info: network in was negligible, CPU utilization was very high, everything else was low.

    Read the article

  • Do most front and rear USB connections deliver the same power and performance?

    - by Bratch
    I was reading Three Monitors For Every User, and there were some comments about rear USB ports being able to deliver more power than front USB ports because they are directly connected to the motherboard and closer to the power supply (by circuit-board runs). Even though the front USB ports may have connectors farther from the power supply, and there are cables from the motherboard to the front ports, I think the difference in power would be negligible (unless the case is over 5 meters long). Does anyone know for sure whether they are the same or different? Note that I'm not talking about an older case where the front might have been USB 1.1 and the rear USB 2.0. A modern case would have USB 2.0 on all ports. And of course, using a powered hub would deliver plenty of power.

    Read the article

  • What's the typical latency for key strokes using an ssh connection on a local wifi network?

    - by dan
    I develop software on a 1.6 GHz MacBook Air, but find running Rails test suites and generators on this computer very slow. I'm thinking about buying a Linux tower to put on my local wireless network to do my Rails development on. I would want to use my MacBook Air, ssh into the Linux box, and do my development with GNU Screen, vim, etc. Can I expect the keystroke and echo latency for an ssh session between two machines on a local wireless network to be negligible? Does anyone develop using this kind of local setup?

    Read the article
