Search Results

Search found 5377 results on 216 pages for 'robert low'.

  • Does low latency code sometimes have to be "ugly"?

    - by user997112
    (This is mainly aimed at those with specific knowledge of low latency systems, to avoid people just answering with unsubstantiated opinions.) Do you feel there is a trade-off between writing "nice" object-oriented code and writing very fast, low latency code? For instance, avoiding virtual functions in C++ and the overhead of polymorphism, or rewriting code so that it looks nasty but is very fast? It stands to reason: who cares if it looks ugly (so long as it's maintainable) if you need speed? I would be interested to hear from people who have worked in such areas.
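    For concreteness, here is a minimal sketch (not from the original question) of the kind of trade-off being asked about: the same loop once through a virtual interface and once through a template, where static dispatch lets the compiler inline the handler. The Handler/PrintHandler/run_feed names are hypothetical.

        #include <cstdio>

        // The "nice" OO version: every on_tick goes through a vtable,
        // which can block inlining on a hot path.
        struct Handler {
            virtual void on_tick(int price) = 0;
            virtual ~Handler() = default;
        };
        struct PrintHandler : Handler {
            void on_tick(int price) override { std::printf("tick %d\n", price); }
        };
        void run_feed_virtual(Handler& h) {
            for (int p = 100; p < 103; ++p) h.on_tick(p);  // dynamic dispatch
        }

        // The "uglier" but faster version: static dispatch through a template,
        // so the compiler can inline on_tick entirely.
        template <typename H>
        void run_feed_static(H& h) {
            for (int p = 100; p < 103; ++p) h.on_tick(p);  // resolved at compile time
        }

        int main() {
            PrintHandler h;
            run_feed_virtual(h);
            run_feed_static(h);  // same observable behavior, no vtable lookups
            return 0;
        }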

    Read the article

  • How have you saved green by going green?

    - by Bob
    For the purpose of this question, I am interested in server/datacenter related hardware. Have you had any measurable ROI from swapping existing hardware for more "green" or energy-efficient hardware? For example, VMware says you can reduce energy consumption by up to 80% by using virtualization. I have also heard of a cooling solution from HP which is supposed to reduce energy usage by a small amount (<25%, I think). Google has also done something by integrating a UPS into their power supplies to reduce energy consumption. Any real-world experiences would be great, and if you have any details on initial cost, savings, and payoff time for whatever changes were made, that would be fantastic. I am not only interested in virtualization; I am interested in anything.

    Read the article

  • Windows XP seemingly out of resources but plenty of free RAM and swap available

    - by Artem Russakovskii
    This one has been bothering me for years and so far I couldn't find an adequate solution. The problem occurs on pretty much every XP install I've done. After opening a variety of programs, or after the system runs existing programs for a while, Windows seemingly runs out of resources without telling me. There's ALWAYS free RAM; for example, it just happened to me and I had over a gig of free RAM. There are no viruses, spyware, or other nonsense - it is a Windows resource problem, but the question is which resource is it running out of, how does one pinpoint it, and how does one prevent it? Sometimes this happens after running specific programs - today, for example, it happened when I started Photoshop CS4 and Flash CS4 at the same time. I also noticed that restarting The Bat (email client by Ritlabs) seems to get rid of this problem for a while, but it also happens on machines that don't even have The Bat installed. So what exactly happens? The symptoms are:
      - Pressing Alt-Tab doesn't bring up the task list anymore; it just jumps to the next window instantly, very similar to the way Alt-Esc works, presumably because there aren't enough resources to draw the Alt-Tab menu.
      - Random programs crash, citing random errors: out-of-memory errors, system resources, inability to make system calls, etc.
      - Random programs start missing random parts; for example, Firefox top menus might disappear, pull up partial selections, or stop appearing altogether; IE might lose a few of its toolbars; some programs fail to redraw or just go gray where the UI used to be.
    Windows itself never complains about running out of RAM, virtual memory, or anything at all, yet it's running out of something. The only clue I was able to find, and the fix I applied today, was the Desktop Heap Limitation. I haven't confirmed that the fix works yet, as not enough time has passed. In the meantime, what are everyone's thoughts?

    Read the article

  • Avoid linux out-of-memory application teardown

    - by Eddie Parker
    I'm finding that on occasion my Linux box runs out of memory and starts tearing down random processes to deal with it. I'm curious what administrators do to avoid this. Is the only real solution to increase the amount of memory (and will increasing swap alone help?), or are there better ways to set up the box with software to avoid this (e.g., quotas or some such)? I'd appreciate some feedback. Cheers, -e-
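    As one illustration (not from the original question) of a software-side knob: on Linux kernels that expose /proc/<pid>/oom_score_adj, a critical process can make itself a much less attractive target for the OOM killer. A minimal sketch; lowering the value below 0 requires root or CAP_SYS_RESOURCE.

        #include <fstream>
        #include <iostream>

        int main() {
            // -1000 tells the OOM killer to never pick this process;
            // 0 is the default and 1000 means "kill me first".
            std::ofstream f("/proc/self/oom_score_adj");
            if (!f) {
                std::cerr << "cannot open oom_score_adj (insufficient privileges?)\n";
                return 1;
            }
            f << -1000;
            return f ? 0 : 1;
        }

    Note this only shifts which process dies; quotas, ulimits, or more memory are still needed to address the underlying exhaustion.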

    Read the article

  • Allow image upload - most efficient way?

    - by K-P
    Hey everyone, on my site I currently only allow users to import images from other sites rather than uploading them themselves. The main reason for this is that I don't have much storage space on my host (relatively speaking), and the host charges quite a bit for additional space. What are the alternatives for hosting images users upload (max 1 MB each)? Would it be a good idea to purchase separate cheap hosting with "unlimited space" (I know that's not true, but I'm guessing it's more than 1 GB)? Or are there caveats with this approach (e.g. security, since the site should not be browsable directly but accessed via another server)? Are there alternative ideas I could employ? Thanks for any suggestions.

    Read the article

  • Custom built machine has much higher power consumption than expected

    - by foraidt
    I built a machine according to the specs of a computer magazine (c't, Germany). According to the magazine, the power consumption should be at around 10W. I don't want to go into the specifics of the hardware but rather ask for general advice on where to look. I updated the BIOS/UEFI to the latest version, installed all the recommended drivers and unplugged all hardware that's not necessary to boot into Windows. All that was left was the power supply, mainboard, CPU, CPU cooler and one SSD drive. But I still measured a power consumption of 50W, which is 40W more than it should be. I tried booting Linux Mint from a USB stick, so I don't think it's a Windows-related problem. Where else could I look?
    Update 1: I didn't want the question to get closed for being too localized, but if more details are necessary, here they are. The system is a desktop PC. The power consumption is measured using a Brennenstuhl PM 231 device, which was also tested by c't and found to be quite accurate. The PSU is an Enermax ETL300AWT, the mainboard an Intel DH87RL (Socket 1150) and the CPU an Intel G3220 (Haswell).
    Update 2: There is no online version of the article (you can pay for downloadable PDFs, however). The most details I found can be read on its project page (in German, though...). English translation of that project page.
    Update 3: Regarding the sceptics: it may sound ridiculous, but apparently 10W idle consumption is possible with Intel's Haswell architecture; as a kind of proof, there's an additional blog article explicitly listing the steps needed to reduce idle consumption to 10W. Additional hardware: I measured the consumption without the HDD, and as expected the usage dropped by around 10W. I have no chassis fans, and the CPU fan is a "Scythe Mugen 4" model running at around 600 rpm, so I think it won't draw much. With all my extra components stripped off I should be at 10W, but I'm not getting anywhere near that. I would be happy to see "just" 15W in the stripped-down configuration, but currently I'm not getting below 50W no matter which component I remove. As I see it, this cannot be explained by the PSU being less efficient at low loads. I also waited half an hour or so (and checked that no Windows updates were running in the background), and the consumption didn't drop by more than a few watts.

    Read the article

  • Does scheduled app pool recycling in IIS7 help the server conserve memory?

    - by user29266
    Hello, I have a VPS (IIS7 on Windows Server 2008). It's got 40 websites and a SQL Server 2008 instance powering them, with only 2 GB of RAM. None of the sites are mission critical; they are all just demos. I often have RAM issues on the server because each site does caching and generally uses a lot of memory. Would it make sense to set the application pools to recycle every 3 hours? I'm sure this would free up any memory leaks or processes left "hanging". Are there any other tips on this? Thank you very much!, Aron

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQLS2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query:
        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC
    As I expected, some values are very high. Top Reads, in pages: 2504188, 1965910, 1445636, 1252433, 1239108, 1210153, 1088580, 1072725. Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly ~20,000 MB, or 20 GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because the cache keeps fattening, and timeouts occur once SQL can no longer bring pages into the buffer pool as freely; costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM, or maybe it would just slow down the process of chewing up the whole RAM. If far fewer pages are read, maybe it'll all run much better even when usage is high (less time swapping, etc.). Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I already know (not much yet...) is much appreciated. Leo.
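    For readers checking the arithmetic (this note is not part of the original post): SQL Server pages are 8 KB, so 2,504,188 pages x 8 KB/page = 20,033,504 KB, which is about 19,564 MB or roughly 19.1 GB, so the poster's ~20 GB estimate is essentially right.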

    Read the article

  • Plesk Uninstall Memory issue

    - by user115079
    I am trying to uninstall Plesk from my VPS by running the following command: yum remove sw-* psa-* plesk-*. When I run this command I get the following error:
        Running rpm_check_debug
        Running Transaction Test
        memory alloc (4 bytes) returned NULL.
    The first time I ran the command, the number in the memory alloc message was very big, like (67864987). Then I googled it, found some clear/ulimit commands, executed them, rebooted my system, stopped all processes and executed the command again, but I still get the 4-byte issue and don't know how to get rid of it. I also tried ulimit after the reboot, but no success. And yes, no swap is attached. These are the stats of my system:
        [root@vps ~]# free -m
                     total       used       free     shared    buffers     cached
        Mem:           384         67        316          0          0          0
        -/+ buffers/cache:          67        316
        Swap:            0          0          0

        top - 21:01:07 up 3:12, 1 user, load average: 0.24, 0.08, 0.03
        Tasks: 31 total, 2 running, 29 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 393216k total, 69832k used, 323384k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached
    Is there any other alternative to achieve my goal of uninstalling Plesk? Thanks.

    Read the article

  • Single SignOn - Best practice

    - by halfdan
    Hi Guys, I need to build a scalable single sign-on mechanism for multiple sites. Scenario:
      - A central web application to register/manage accounts (server in Europe)
      - Several web applications that need to authenticate against my user database (servers in the US/Europe/Pacific region)
    I am using MySQL as the database backend. The options I came up with are either replicating the user database across all servers (data security?) or allowing the servers to connect directly to my MySQL instance by explicitly allowing connections from their IPs in my.cnf (high load? single point of failure?). What would be the best way to provide scalable, low-latency single sign-on for all web applications? In terms of data security, would it be a good idea to replicate the user database across all web applications? Note: all web applications provide an API which users can use to embed widgets into their own websites. These widgets work through a token auth mechanism which will again need to authenticate against my user database.
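    One common shape for the token mechanism mentioned at the end (a sketch under assumptions, not the poster's design): have the central server sign each token with a secret shared only among your own services, so the regional applications can verify tokens without a round trip to the user database. A minimal HMAC-SHA256 check using OpenSSL; the token layout and all names here are hypothetical.

        #include <openssl/crypto.h>
        #include <openssl/evp.h>
        #include <openssl/hmac.h>
        #include <cstdio>
        #include <string>

        // Verify a "payload + signature" token against a shared secret.
        // Real deployments should also embed and check an expiry time.
        bool verify_token(const std::string& payload,
                          const unsigned char* sig, unsigned sig_len,
                          const std::string& secret) {
            unsigned char mac[EVP_MAX_MD_SIZE];
            unsigned mac_len = 0;
            HMAC(EVP_sha256(), secret.data(), (int)secret.size(),
                 (const unsigned char*)payload.data(), payload.size(),
                 mac, &mac_len);
            // Constant-time comparison to avoid timing side channels.
            return mac_len == sig_len && CRYPTO_memcmp(mac, sig, mac_len) == 0;
        }

        int main() {
            std::string secret = "shared-secret";           // hypothetical
            std::string payload = "user=42;exp=1700000000"; // hypothetical layout
            unsigned char sig[EVP_MAX_MD_SIZE];
            unsigned sig_len = 0;
            HMAC(EVP_sha256(), secret.data(), (int)secret.size(),
                 (const unsigned char*)payload.data(), payload.size(),
                 sig, &sig_len);
            std::printf("token valid: %d\n",
                        verify_token(payload, sig, sig_len, secret));
            return 0;
        }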

    Read the article

  • Count of memory copies in *nix systems between packet at NIC and user application?

    - by Michael_73
    Hi there, this is just a general question relating to some high-performance computing I've been wondering about. A certain low-latency messaging vendor speaks in its supporting documentation about using raw sockets to transfer data directly from the network device to the user application, and in doing so about reducing messaging latency even further than it otherwise would (among other admittedly carefully thought-out design decisions). My question is therefore to those who grok the networking stacks on Unix or Unix-like systems: how much difference are they likely to be able to realise using this method? Feel free to answer in terms of memory copies, numbers of whales rescued, or areas the size of Wales ;) Their messaging is UDP-based, as I understand it, so there's no problem with establishing TCP connections etc. Any other points of interest on this topic would be gratefully thought about! Best wishes, Mike
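    For concreteness (this is not the vendor's code), a minimal Linux raw-socket receive is sketched below. Note that a plain recv() on an AF_PACKET socket still pays one kernel-to-user copy per packet; schemes that avoid even that copy map a shared ring buffer (PACKET_MMAP) or bypass the kernel stack entirely.

        #include <arpa/inet.h>
        #include <linux/if_ether.h>
        #include <linux/if_packet.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            // Frames arrive straight from the device layer (requires CAP_NET_RAW).
            int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
            if (fd < 0) { perror("socket"); return 1; }
            char buf[2048];
            ssize_t n = recv(fd, buf, sizeof buf, 0);  // one copy: kernel -> buf
            std::printf("received %zd bytes\n", n);
            close(fd);
            return 0;
        }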

    Read the article

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related; it has much more to do with colors and their representation. I am working on a very low level app. I have an array of bytes in memory; those are characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel:
        // intensity is the weight I talked about: 0 to 255
        intensity = glyphs[text[i]][x + GLYPH_WIDTH*y];
        if (intensity == 255)
            continue; // Don't draw it, fully transparent
        else if (intensity == 0)
            setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color
        else {
            // Here's the tricky part:
            // get the pixel in the destination for averaging purposes
            pixel = getPixel(x + xi, y + yi, base);
            // transfer is an int for calculations
            // This is my attempt at averaging:
            transfer = (int) ((float)((float)(255.0 - (float)intensity/255.0) * (float)color.red + (float)pixel.red) / 2);
            newPixel.red = (Byte) transfer;
            transfer = (int) ((float)((float)(255.0 - (float)intensity/255.0) * (float)color.green + (float)pixel.green) / 2);
            newPixel.green = (Byte) transfer;
            // transfer = (int) ((float)((float)255.0 - (float)intensity)/255.0 * (((float)color.blue) + (float)pixel.blue) / 2);
            transfer = (int) ((float)((float)(255.0 - (float)intensity/255.0) * (float)color.blue + (float)pixel.blue) / 2);
            newPixel.blue = (Byte) transfer;
            // Set the new pixel in the desired mem. position
            setPixel(x + xi, y + yi, newPixel, base);
        }
    The results, as you can see, are less than desirable. That is a very zoomed-in image; at 1:1 scale it looks like the text has a green "aura". Any idea of how to properly compute this would be greatly appreciated. Thanks for your time!
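    For comparison (not the poster's code), conventional "source over" alpha blending interpolates between the glyph color and the destination pixel instead of averaging them, which avoids exactly this kind of color fringing. A minimal sketch, assuming intensity means coverage with 255 fully opaque, as the prose (not the posted code) states; the Pixel struct is hypothetical:

        struct Pixel { unsigned char red, green, blue; };

        // out = a*fg + (1-a)*bg per channel, in integer math with rounding.
        Pixel blend(Pixel fg, Pixel bg, unsigned char intensity) {
            Pixel out;
            out.red   = (intensity * fg.red   + (255 - intensity) * bg.red   + 127) / 255;
            out.green = (intensity * fg.green + (255 - intensity) * bg.green + 127) / 255;
            out.blue  = (intensity * fg.blue  + (255 - intensity) * bg.blue  + 127) / 255;
            return out;
        }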

    Read the article

  • How can I eliminate latency in QuickTime streamed video

    - by JJFeiler
    I'm prototyping a client that displays streaming video from a HaiVision Barracuda through a QuickTime client. I've been unable to reduce the buffer size below 3.0 seconds. For this application we need as low a latency as the network allows, and we prefer video dropouts to delay. I'm doing the following:
        - (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"haivision" ofType:@"sdp"];
            NSError *error = nil;
            QTMovie *qtmovie = [QTMovie movieWithFile:path error:&error];
            if (error != nil) {
                NSLog(@"error: %@", [error localizedDescription]);
            }
            Movie movie = [qtmovie quickTimeMovie];
            long trackCount = GetMovieTrackCount(movie);
            Track theTrack = GetMovieTrack(movie, 1);
            Media theMedia = GetTrackMedia(theTrack);
            MediaHandler theMediaHandler = GetMediaHandler(theMedia);
            QTSMediaPresentationParams myPres;
            ComponentResult c = QTSMediaGetIndStreamInfo(theMediaHandler, 1,
                kQTSMediaPresentationInfo, &myPres);
            Fixed shortdelay = 1 << 15;
            OSErr theErr = QTSPresSetInfo(myPres.presentationID, kQTSAllStreams,
                kQTSTargetBufferDurationInfo, &shortdelay);
            NSLog(@"OSErr %d", theErr);
            [movieView setMovie:qtmovie];
            [movieView play:self];
        }
    I seem to be getting valid objects/structures all the way down to the QTS presentation, though the ComponentResult and OSErr both return -50. The streaming video plays fine, but the buffer is still 3.0 seconds. Any help/insight appreciated. J

    Read the article

  • Why does my Ubuntu always run in low-graphics mode?

    - by sam
    My graphics card is an NVIDIA GTX 460, and my Ubuntu version is 10.04. I have reinstalled the graphics driver twice, but Ubuntu reports that it is running in low-graphics mode after some time has passed since the driver installation. The following is the information the computer shows: http://imgur.com/GkZUz Also, when I try to change the NVIDIA graphics settings on Ubuntu, the following information shows: http://imgur.com/SVhCc How can I fix it? Thank you!!

    Read the article

  • Will low-level programs become obsolete once the "post-performance" world arrives? [closed]

    - by nbv4
    With the new iPhone 5 being as powerful as the supercomputers of the 1980s, it's only a matter of time until the latest phones are powerful enough to run a Twitter-scale web application from my pocket. When that time comes, performance will no longer be something programmers need to care about. Will low-level languages still have a place, or will everyone move to dynamic languages like Python?

    Read the article

  • UPS - Two computers - How to get them both to shut down when the battery is low?

    - by hamlin11
    Short version: how do I get two computers to shut down when a UPS battery gets low? Long version: I have an APC UPS, the RS 1500. It has a USB cord that goes into my main dev computer, and my dev computer will shut down when the battery gets low. However, I have now also hooked up a database server to the same UPS. How can I have that database server also know that it needs to shut down when the battery gets low?

    Read the article

  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle OpenWorld last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle's commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business. As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on-premise and the various On Demand options as business needs arise.
    In today's business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on the high-priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best-laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes.
    The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility, such as future upgrades and product updates, to Oracle technology experts, at an affordable and predictable price. Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by the number of employees, which helps align the customer's overall cost of ownership with its size and anticipated growth and business needs.
    In addition to providing PeopleSoft HCM's world-class business functionality and Oracle On Demand's embassy-grade security, Oracle's hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers' business and economic climates change, they are free to transition their applications back to on-premise at any time. HCM On Demand Standard Edition is based on configuration options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and the time to production under a month on average.
    Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and a secured URL, all within a private cloud. Customers will not share databases, environments, platforms, or access portals with other customers, because we value how mission-critical your data are to your business. Oracle On Demand also provides a full breadth of disaster recovery services to give customers the peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in case of an emergency. Currently, over 50 PeopleSoft customers entrust us with the management of their applications through Oracle On Demand.
    If you are a customer interested in learning more about PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize variable IT costs and free up resources to work on other business initiatives, contact Oracle or your Account Services Representative today.

    Read the article

  • If my team has low skill, should I lower the skill of my code?

    - by Florian Margaine
    For example, there is a common snippet in JS to get a default value:
        function f(x) {
            x = x || 10;
        }
    This kind of snippet is not easily understood by all the members of my team, their JS level being low. Should I avoid the trick, then? It makes the code less readable to those peers, but more readable than the following to any experienced JS dev:
        function f(x) {
            if (!x) {
                x = 10;
            }
        }
    Sure, if I use the trick and a colleague sees it, they can learn something. But more often they see it as "trying to be clever". So, should I lower the level of my code if my teammates have a lower level than me?

    Read the article

  • Why should we use low-level languages if a high-level one like Python can do almost everything? [closed]

    - by killown
    I know Python is not suitable for things like microcontrollers, writing drivers, etc., but besides that you can do almost everything in Python. Companies get stuck on speed optimizations for hard real-time systems but forget other factors; for anything else you can just upgrade your hardware so that your Python program fits. Think about how much it costs a company to maintain a system written in C. The comparison goes like this: for example, 10 programmers to maintain a system written in C versus just one programmer to maintain a system written in Python, and with Python you can buy better hardware to run the program on. I think low-level languages tend to cost more, since programmers aren't as cheap as a hardware upgrade. So this is my point: why should a system be written in C instead of Python?

    Read the article

  • Photoshop: why does a low quality JPG saved at high quality increase in file size?

    - by Alex Angelico
    I have a low quality background image of about 85 KB. If I open the file in Photoshop and save it at 100% quality, the file size increases to 640 KB. This doesn't make sense to me: the image is already compressed, and the quality cannot be better than the source, so saving at 100% quality should produce essentially the same file I opened. The problem is, I have this background and want to add a logo image over it and then save the result. But if I do so, I have to save at 10% quality to keep the size down, and then the logo looks horrible. If I save at 100% quality, the file size is huge. How can I achieve this?

    Read the article

  • Why are cryptic short identifiers still so common in low-level programming?

    - by romkyns
    There used to be very good reasons for keeping instruction and register names short. Those reasons no longer apply, but short cryptic names are still very common in low-level programming. Why is this? Is it just because old habits are hard to break, or are there better reasons? For example:
      - Atmel ATMEGA32U2 (2010?): TIFR1 (instead of TimerCounter1InterruptFlag), ICR1H (instead of InputCapture1High), DDRB (instead of DataDirectionPortB), etc.
      - .NET CLR instruction set (2002): bge.s (instead of branch-if-greater.signed), etc.
    Aren't the longer, non-cryptic names easier to work with?

    Read the article
