Search Results

Search found 16560 results on 663 pages for 'high tech resources'.

Page 51 of 663

  • loaded resources look ugly

    - by Xaver
    I am using the TreeView class in my project, with icons. At first I loaded the icons like this:

        ImageList^ il = gcnew ImageList();
        il->Images->Add(Image::FromFile("DISK.ico"));
        il->Images->Add(Image::FromFile("FILE.ico"));
        il->Images->Add(Image::FromFile("FOLDER.ico"));
        treeView1->ImageList = il;

    That worked, but I did not like that deleting the icons from the project directory broke the project, so I decided to add the icons to a .resx file. Now the icon loading looks like this:

        ImageList^ il = gcnew ImageList();
        Resources::ResourceManager^ resourceManager =
            gcnew Resources::ResourceManager("FilesSaver.Form1", GetType()->Assembly);
        Object^ disk = resourceManager->GetObject("DISK");
        il->Images->Add(reinterpret_cast<Image^>(disk));
        Object^ file = resourceManager->GetObject("FILE");
        il->Images->Add(reinterpret_cast<Image^>(file));
        Object^ folder = resourceManager->GetObject("FOLDER");
        il->Images->Add(reinterpret_cast<Image^>(folder));
        treeView1->ImageList = il;

    Now the icons in the TreeView look ugly: they appear lighter and have a big black border. Why did this happen?

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that will potentially involve an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and found that a typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed, but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C, so I am looking for some solid articles and advice on the following:
    1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM
    2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
    3) Anything dealing with memory paging, particularly as it pertains to images
    I will epilogue by saying that the numbers above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded in from an external server.

    Read the article

  • High-performance text file parsing in .NET

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file containing several thousand requests (between 10,000 and 20,000, I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first):
    string.Split - splits the line values into an array of values
    string.Contains - checks if the user agent contains a specific agent string (to determine the browser ID)
    string.ToLower - various purposes
    StreamReader.ReadLine - reads the log file line by line
    string.StartsWith - determines whether a line is a column-definition line or a line with values
    There were some others that I was able to replace. For example, the dictionary getter was also taking a lot of resources, which I had not expected since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time. Now I am running on a fast dual core, and the total time it takes to load the file I mentioned is about 1 second. This is really bad: imagine a site that has tens of thousands of visits a day; it's going to take minutes to load the log file. So what are my alternatives, if any? I think this is just a .NET limitation and there isn't much I can do about it.
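
    A minimal C# sketch (not from the original post) of two of the cheaper alternatives implied above: skip ToLower in favour of case-insensitive comparisons, and avoid allocating a full Split array when only a substring test is needed. The "#" directive prefix and the "firefox" token are illustrative assumptions about the log format.

        using System;
        using System.IO;

        class LogScan
        {
            static void Main(string[] args)
            {
                int firefoxHits = 0;
                using (StreamReader reader = new StreamReader(args[0]))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        // Column-definition lines are skipped without any splitting.
                        if (line.StartsWith("#", StringComparison.Ordinal))
                            continue;

                        // Case-insensitive search instead of line.ToLower().Contains(...),
                        // which avoids allocating a lowered copy of every line.
                        if (line.IndexOf("firefox", StringComparison.OrdinalIgnoreCase) >= 0)
                            firefoxHits++;
                    }
                }
                Console.WriteLine("Firefox requests: {0}", firefoxHits);
            }
        }

    Splitting only the handful of columns that are actually queried (scanning with IndexOf for the nth delimiter) usually saves far more than micro-optimising the rest.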

    Read the article

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show which sections of videos are being watched (at the moment, we just register a view when a video starts playing). For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section. My first thought was to collect the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse). So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable, so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to the database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats. So, to the question: what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
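
    An illustrative sketch of the in-memory aggregation idea described above (C# chosen only for concreteness; all names are assumptions, since the question does not specify the server stack): heartbeats carry the playhead position, contiguous positions extend the current span, a jump starts a new one, and an idle session is flushed to storage as spans.

        using System;
        using System.Collections.Generic;

        public class WatchedSpan
        {
            public double Start;
            public double End;
        }

        public class ViewingSession
        {
            private readonly List<WatchedSpan> spans = new List<WatchedSpan>();
            private DateTime lastSeen = DateTime.UtcNow;

            public void Heartbeat(double position, double heartbeatSeconds)
            {
                lastSeen = DateTime.UtcNow;
                WatchedSpan current = spans.Count > 0 ? spans[spans.Count - 1] : null;

                // Allow a little slack so normal playback keeps extending the span;
                // a backwards or far-forward jump (a seek) starts a new span.
                if (current != null && position >= current.End &&
                    position - current.End <= heartbeatSeconds * 1.5)
                {
                    current.End = position;
                }
                else
                {
                    spans.Add(new WatchedSpan { Start = position, End = position });
                }
            }

            public bool IsIdle(TimeSpan timeout)
            {
                return DateTime.UtcNow - lastSeen > timeout;
            }

            // Idle sessions would be written out as (videoId, start, end) rows.
            public IList<WatchedSpan> Spans { get { return spans; } }
        }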

    Read the article

  • What is the form_for syntax for nested resources?

    - by Kris
    I am trying to create a form for a nested resource. Here is my route:

        map.resources :websites do |website|
          website.resources :domains
        end

    Here are my attempts and the errors:

        <% form_for(@domain, :url => website_domains_path(@website)) do |form| %>
          <%= form.text_field :name %>
        # ArgumentError: wrong number of arguments (1 for 0)
        # form_helper.rb:290:in 'respond_to?'
        # form_helper.rb:290:in 'apply_form_for_options!'
        # form_helper.rb:277:in 'form_for'

        <% form_for([@website, @domain]) do |form| %>
          <%= form.text_field :name %>
        # ArgumentError: wrong number of arguments (1 for 0)
        # form_helper.rb:290:in 'respond_to?'
        # form_helper.rb:290:in 'apply_form_for_options!'
        # form_helper.rb:277:in 'form_for'

        <% form_for(:domain, @domain, :url => website_domains_path(@website)) do |form| %>
          <%= form.text_field :name %>
        # ArgumentError: wrong number of arguments (1 for 0)
        # wrapper.rb:14:in 'respond_to?'
        # wrapper.rb:14:in 'wrap'
        # active_record_helper.rb:174:in 'error_messages_for'

        <% form_for(:domain, [@website, @domain]) do |form| %>
          <%= form.text_field :name %>
        # UndefinedMethodError 'name' for #<Array:0x40fa498>

    I have confirmed that both @website and @domain contain instances of the correct class. The routes also generate correctly if used like this, for example, so I don't think there is an issue with the route or URL helpers:

        <%= website_domains_path(1) %>
        <%= website_data_source_path(1, 1) %>

    Rails 2.3.5

    Read the article

  • High memory usage for dummies

    - by zaf
    I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day due to (as I understand it) excessive memory usage. I've noticed it takes 40 MB when it starts and then, by the time I notice the slowdown, it has grown to 1 GB and my machine has nothing more to offer unless I close other applications. I'm trying to understand the technical reasons why this is such a difficult problem to solve. Mozilla has a page about high memory usage: http://support.mozilla.com/en-US/kb/High+memory+usage But I'm looking for a slightly more in-depth and satisfying explanation. Not super technical, but enough to give the issue more respect and please the crowd here. Some questions I'm already pondering (they could be silly, so take it easy):
    When I close all tabs, why doesn't the memory usage go all the way down?
    Why are there no limits on extension/theme/plugin memory usage?
    Why does the memory usage increase if it's left open for long periods of time?
    Why are memory leaks so difficult to find and fix?
    App- and language-agnostic answers are also much appreciated.

    Read the article

  • Rails - Seeking a DRY authorization method compatible with various nested resources

    - by adam
    Consensus is that you shouldn't nest resources deeper than one level. So say I have 3 models like this (a hypothetical situation):

        User has_many Houses has_many Tenants

    and, to abide by the above, I do:

        map.resources :users, :has_many => :houses
        map.resources :houses, :has_many => :tenants

    Now I want the user to be able to edit both their houses and their tenants' details, but I want to prevent them from editing another user's houses and tenants by forging the user_id part of the URLs. So I create a before_filter like this:

        def prevent_user_acting_as_other_user
          if User.find_by_id(params[:user_id]) != current_user()
            @current_user_session.destroy
            flash[:error] = "Stop screwing around wiseguy"
            redirect_to login_url()
            return
          end
        end

    For houses that's easy, because the user_id is passed via edit_user_house_path(@user, @house), but in the tenants' case, house_tenent_path(@house), no user id is passed. I can get the user id by doing @house.user.id, but then I'd have to change the code above to this:

        def prevent_user_acting_as_other_user
          if params[:user_id]
            @user = User.find(params[:user_id])
          elsif params[:house_id]
            @user = House.find(params[:house_id]).user
          end
          if @user != current_user()
            # kick em out
          end
        end

    It does the job, but I'm wondering if there is a more elegant way. Every time I add a new resource that needs protecting from user forgery I'll have to keep adding conditionals. I don't think there will be many cases, but I would like to know a better approach if one exists.

    Read the article

  • Best way to display "High Scores" results

    - by George
    First, I would like to thank everyone for all the help they provide via this website. It has gotten me to the point of almost being able to release my first iPhone app! Okay, so the last part I have is this: I have a game that allows users to save their high scores. I update a plist file which contains the user's name, level, and score. Now I want to create a screen that will display the top 20 high scores. What would be the best way to do this? At first I thought of possibly creating an HTML file with this info, but am not even sure if that is possible. I would need to read the plist file and then write it out as HTML. Is this possible? To write a file out as HTML? Or, an even better question, is there a better way? Thanks in advance for any and all help! Geo...

    Read the article

  • Using Dispose on a Singleton to Cleanup Resources

    - by ImperialLion
    The question I have might be more to do with semantics than with the actual use of IDisposable. I am working on implementing a singleton class that is in charge of managing a database instance that is created during the execution of the application. When the application closes, this database should be deleted. Right now this delete is handled by a Cleanup() method of the singleton that the application calls when it is closing. As I was writing the documentation for Cleanup(), it struck me that I was describing what a Dispose() method should be used for, i.e. cleaning up resources. I had originally not implemented IDisposable because it seemed out of place in my singleton: I didn't want anything to dispose the singleton itself. There isn't currently, but there might in the future be, a reason for Cleanup() to be called while the singleton still needs to exist. I think I can include GC.SuppressFinalize(this); in the Dispose method to make this feasible. My question is therefore multi-parted:
    1) Is implementing IDisposable on a singleton fundamentally a bad idea?
    2) Am I just mixing semantics here by having a Cleanup() instead of a Dispose(), and since I'm disposing resources should I really use a Dispose()?
    3) Will implementing Dispose() with GC.SuppressFinalize(this); ensure my singleton is not actually destroyed in the case where I want it to live on after a call to clean up the database?
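
    A minimal C# sketch (assumed names, not the original implementation) of one way to separate the two concerns: Cleanup() deletes the working database and can be called at any time while the singleton stays alive, and Dispose() simply delegates to it for callers that treat the singleton as an IDisposable at shutdown.

        using System;

        public sealed class DatabaseManager : IDisposable
        {
            private static readonly DatabaseManager instance = new DatabaseManager();
            public static DatabaseManager Instance { get { return instance; } }

            private DatabaseManager() { /* create the working database here */ }

            // Deletes the temporary database; the singleton itself remains usable
            // and could recreate the database on the next request.
            public void Cleanup()
            {
                // delete / detach the database file here
            }

            public void Dispose()
            {
                Cleanup();
                GC.SuppressFinalize(this);
            }
        }

    GC.SuppressFinalize only matters if a finalizer exists; it does not "destroy" the singleton, which lives as long as its static reference does.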

    Read the article

  • Resources and techniques/methods for SCJP preparation?

    - by BenoitParis
    I am taking the SCJP 6 exam in a month. I have the "SCJP Sun Certified Programmer for Java 6 Exam 310-065" book, which seems great for the exam, but I want your advice on this. Getting as close as possible to 100% would be great. I found a site that answered some of the questions you ask yourself when you go through the book: http://www.janeg.ca/java2.html As you can see, it was written for Java 2. I have written another specific question here on Stack Overflow about the usefulness of the JVM specification and the Java compiler code for the SCJP; I will update the results here. Please share the resources you used to prepare for the exam, and also specify any resources that you think might help. Any type of resource is welcome: books, code, specs, sites, wikis, papers, online tests, grandmas... Please also share any method/technique that helped you prepare for the exam, and comment on the return you got from the resource and the method (for the learning process and for points in the exam). I'll begin:
    Book: "SCJP Sun Certified Programmer for Java 6 Exam 310-065". Seems like the official book for the preparation.
    Technique: Writing code in a text editor and compiling it with javac to test a question. NO IDEs! It gets you a straight answer to the question you have, and it makes you pay attention to every word in the code (which is very important in the SCJP).
    EDIT: Added dimension: Are there good, up-to-date online tests?

    Read the article

  • Object Design catalog and resources

    - by Tauren
    I'm looking for web sites, books, or other resources that provide a catalog of object designs used in common scenarios. I'm not looking for generic design patterns, but for samples of actual object designs that were used to solve real problems. For instance, I'm about to build an internal messaging system for a web application, similar to Facebook's messaging system. This system will allow administrators to send messages to all members, to selected groups of members, or to individuals. Members can send messages to other members or groups of members. Fairly common stuff and a feature that I'm sure thousands of web applications require. I know each situation is different and there are a million ways to design this solution. Although this scenario isn't really all that complex, I'm sure the basic design of the necessary objects and relationships for a system like this has already been done many times. It would be nice to review other similar designs before building my own. Is there a place where people can share their designs and others can browse/search through the catalog to review and provide feedback on them? StackOverflow could be used to a degree for this, but doesn't really provide a catalog of designs. Any other resources that would relate?

    Read the article

  • Ideal HTTP cache control headers for different types of resources

    - by chris_l
    I want to find a minimal set of headers that works with "all" caches and browsers (also when using HTTPS!). On my (GWT-based) web site I'll have three kinds of resources:
    1. Forever cacheable (public / equal for all users): These files never change, and they get a filename based on the MD5 of their contents (this is GWT's approach). They should be cached as much as possible, even when using HTTPS (so I assume I should set Cache-Control: public, especially for Firefox?).
    2. Changing for every new version of the site (public / equal for all users): These files can be cached, but probably need to be revalidated every time.
    3. Individual for each request (private / user-specific): These resources (e.g. JSON responses) should never be cached unencrypted to disk under any circumstances. (Maybe I'll have a few specific requests that could be cached.)
    I have a general idea of which headers I would probably use for each type, but there's always something I could be missing.

    Read the article

  • Resources for setting up a Visual Studio/C++ development environment

    - by Tom H.
    I haven't done much "front-end" development in about 15 years since moving to database development. I'm planning to start work on a personal project using C++ and since I already have MSDN I'll probably end up doing it in Visual Studio 2010. I'm thinking about using Subversion as a version control system eventually. Of course, I'd like to get up and running as quickly as I can, but I'd also like to avoid any pitfalls from a poorly organized project environment. So, my question is, are there any good resources with common best practices for setting up a development environment? I'm thinking along the lines of where to break down a solution into multiple projects if necessary, how to set up a unit testing process, organizing resources, directories, etc. Are there any great add-ons that I should make sure I have set up from the start? Most tutorials just have one simple project, type in your code and click on build to see that your new application says, "Hello World!". This will be a Windows application with several DLLs as well (no web development), so there doesn't need to be a deploy to a web server kind of process. Mostly I just want to make sure that I don't miss anything big and then have to extensively refactor because of it. Thanks!

    Read the article

  • Moving a high-quality line on a panel in C#

    - by user1787601
    I want to draw a line on a panel and then move it as the mouse moves. To do so, I draw the line, and when the mouse moves I redraw the line at the new location and remove the previous line by drawing over it with the background color. It works fine if I do not use the high-quality smoothing mode, but if I use the high-quality smoothing mode it leaves traces on the panel. Does anybody know how to fix this? Thank you. Here is the code:

        int x_previous = 0;
        int y_previous = 0;

        private void panel1_MouseMove(object sender, MouseEventArgs e)
        {
            Pen pen1 = new System.Drawing.Pen(Color.Black, 3);
            Pen pen2 = new System.Drawing.Pen(panel1.BackColor, 3);
            Graphics g = panel1.CreateGraphics();
            g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
            g.DrawLine(pen2, new Point(0, 0), new Point(x_previous, y_previous));
            g.DrawLine(pen1, new Point(0, 0), new Point(e.Location.X, e.Location.Y));
            x_previous = e.Location.X;
            y_previous = e.Location.Y;
        }

    (Snapshots with and without SmoothingMode were attached to the original post.)
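
    A common alternative, offered as a sketch rather than the poster's own fix (it assumes panel1's Paint event is wired to panel1_Paint): keep only the current end point, call Invalidate() from MouseMove, and draw in the Paint handler on a freshly cleared surface. Anti-aliased pixels from the previous frame are then never left behind, because over-painting with the background colour cannot exactly cover a smoothed line.

        private Point lineEnd = Point.Empty;

        private void panel1_MouseMove(object sender, MouseEventArgs e)
        {
            lineEnd = e.Location;
            panel1.Invalidate();   // request a repaint; no CreateGraphics needed
        }

        private void panel1_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
            using (Pen pen = new Pen(Color.Black, 3))
            {
                e.Graphics.DrawLine(pen, new Point(0, 0), lineEnd);
            }
        }

    Enabling double buffering on the panel (for example, a Panel subclass that sets DoubleBuffered = true) usually removes the flicker this repaint-on-move approach would otherwise introduce.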

    Read the article

  • WPF: Trying to add a class to Window.Resources again

    - by user3952846
    I did exactly the same thing, but the same error still occurs: "The tag 'CenterToolTipConverter' does not exist in XML namespace 'clr-namespace:WpfApplication1;assembly=WpfApplication1'. Line 12 Position 10."

    CenterToolTipConverter.cs:

        namespace WpfApplication1
        {
            public class CenterToolTipConverter : IMultiValueConverter
            {
                public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
                {
                    if (values.FirstOrDefault(v => v == DependencyProperty.UnsetValue) != null)
                    {
                        return double.NaN;
                    }
                    double placementTargetWidth = (double)values[0];
                    double toolTipWidth = (double)values[1];
                    return (placementTargetWidth / 2.0) - (toolTipWidth / 2.0);
                }

                public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
                {
                    throw new NotSupportedException();
                }
            }
        }

    MainWindow.xaml:

        <Window x:Class="WpfApplication1.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:WpfApplication1;assembly=WpfApplication1"
                Title="MainWindow" Height="350" Width="525">
            <Window.Resources>
                <local:CenterToolTipConverter x:Key="myCenterToolTipConverter"/>
            </Window.Resources>
        </Window>

    What am I doing wrong? Thanks in advance!!!
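
    One thing commonly worth checking in this situation (a hedged suggestion, not a confirmed fix for this post): when the converter lives in the same assembly as the XAML, the assembly part of the mapping can simply be dropped, which avoids any mismatch between the project name and the actual output assembly name:

        xmlns:local="clr-namespace:WpfApplication1"

    The ;assembly=WpfApplication1 form only resolves if the project's output assembly is really named WpfApplication1, and in either form the project has to be rebuilt after adding the class before the XAML parser and designer can see it.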

    Read the article

  • Is it worth moving from Microsoft tech to Linux, NodeJS & other open source frameworks to save money for a start-up?

    - by dormisher
    I am currently getting involved in a startup. I am the only developer involved at the moment, and the other guys are leaving all the tech decisions up to me. For my day job I work at a software house that uses Microsoft tech on a day-to-day basis; we utilise .NET, SQL Server, Windows Server, etc. However, I realise that as a startup we need to keep costs down, and after having a brief look at the cost of hosting for Windows I was shocked to see some of the prices for a dedicated server. The cheapest I found was £100 a month. Also, if the business needs to scale in the future and we end up needing multiple servers, we could end up shelling out tens of thousands of pounds a year in SQL Server / Windows Server licences. I then had a quick look at the price of Linux hosting for a dedicated server and saw the price was way lower than Windows hosting: one place was offering a machine with 2 cores for less than £20 a month. This got me thinking that maybe the way to go is open source on Linux. As I write a lot of JavaScript at work (I'm working on a single-page Backbone app at the moment), I thought maybe NodeJS and a web framework like Express would be cool to use. I then thought that instead of using SQL, why not use an open source NoSQL database like MongoDB, which has great support in NodeJS? My only concern is that some of the work the application is going to do involves dynamically building images and various other image-related stuff, i.e. stuff that is quite CPU-heavy, so I'm thinking of writing anything CPU-heavy in C++ and consuming it as a module in Node. That's the background; basically, is Linux a good match for:
    Hosting a NodeJS/Express site?
    Compiling C++ Node modules?
    Using a NoSQL DB like MongoDB?
    And is it a good idea to move to these unfamiliar technologies to save money?

    Read the article

  • Career advice on whether to stick with coding or move on to tech. lead\management [closed]

    - by flk
    I'm at a point in my career where I need to decide what to do next. I've mainly done C# desktop development (with a little Python and Silverlight) for 5 or 6 years, and I'm trying to decide whether to start learning JavaScript/HTML or to move into a role where I do less coding and more tech lead/management work. With all the talk around HTML5/JavaScript, the rise of mobile and the changes with Windows 8 (Metro at least), I wonder if I should stick with coding to get some experience in these areas before moving on. But at the same time, if I decide to stick with coding for a 'couple more years', I will probably be faced with the same situation with some other new/interesting technology that I feel I should learn before moving on. I feel that if I stick just with coding I'm limiting my career options, but if I move to tech lead/management I will lose my coding skills. Is going in one direction or the other going to limit my career options in the future? I know that there is no real answer to this question, so I'm really just looking for some thoughts from others and perhaps experiences from people who have faced similar situations. Thanks

    Read the article

  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and not have to extract it to use it. Details: We would like to use GeckoFX (Gecko as a C# component) in one of our applications. GeckoFX needs the XULRunner and needs to find its folder structure. We also have some other data which I would prefer not to extract to the customer's PC, at least not onto something persistent like a hard disk. Getting the complete directory into the resources is not that big a deal: compress it to one file and done. But using it without writing it to disk is something else; I have a strong dislike of temp folders and such things. Would anything like a RAM drive be possible, with some part of the RAM being mounted? Does something like this even exist as a library, or would it only be possible with a device driver? Any thoughts on this? Thanks in advance! Corelgott
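
    For data that only the application's own code consumes, here is a minimal C# sketch of reading an embedded resource entirely in memory, with nothing written to disk (the resource name "MyApp.Data.payload.zip" is illustrative):

        using System.IO;
        using System.Reflection;

        static class EmbeddedData
        {
            // Returns the raw bytes of an embedded resource, or null if it is not found.
            public static byte[] Load(string resourceName)
            {
                Assembly asm = Assembly.GetExecutingAssembly();
                using (Stream s = asm.GetManifestResourceStream(resourceName))
                {
                    if (s == null)
                        return null;
                    using (MemoryStream ms = new MemoryStream())
                    {
                        byte[] buffer = new byte[81920];
                        int read;
                        while ((read = s.Read(buffer, 0, buffer.Length)) > 0)
                            ms.Write(buffer, 0, read);
                        return ms.ToArray();
                    }
                }
            }
        }

        // Usage (illustrative): byte[] payload = EmbeddedData.Load("MyApp.Data.payload.zip");

    A native component such as XULRunner, however, expects real files and folders on disk, so for that part some form of extraction or install-time copy is probably unavoidable.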

    Read the article

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements:
    1. High availability
    2. Load balancing
    First configuration:
    1. Two Linux servers have been configured with one static IP each: 10.17.243.11 and 10.17.243.12.
    2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as VIP, 10.17.243.11 as master and 10.17.243.12 as backup).
    3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
    4. The problem here is that all communication goes to the master server.
    Second configuration:
    1. I found an active-active configuration for keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1, and real 10.17.243.12 and virtual 10.17.243.20 for server #2).
    2. Everything works fine: we have two VIPs which are accessible (HA). But all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they're not acceptable in our case.
    Question: Is there any way to have one virtual IP which is assigned to both servers, meaning both servers handle some part of the workload (like we do in web server load balancing), using either keepalived or some other tool? Thanks in advance.
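
    A rough keepalived.conf sketch of the two-instance ("active-active") layout described in the second configuration; the interface name, router IDs and priorities are illustrative assumptions, and the states/priorities are mirrored on the second server:

        vrrp_instance VI_1 {
            interface eth0
            state MASTER            # BACKUP on server #2
            virtual_router_id 51
            priority 150            # 100 on server #2
            virtual_ipaddress {
                10.17.243.10
            }
        }

        vrrp_instance VI_2 {
            interface eth0
            state BACKUP            # MASTER on server #2
            virtual_router_id 52
            priority 100            # 150 on server #2
            virtual_ipaddress {
                10.17.243.20
            }
        }

    On its own, VRRP still ties each VIP to exactly one live machine at a time; sharing a single VIP's traffic across both servers generally needs an extra layer such as LVS/IPVS (which keepalived can also manage through virtual_server blocks) or DNS round-robin.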

    Read the article

  • Keepalived for more than 20 virtual addresses

    - by cvaldemar
    I have set up keepalived on two Debian machines for high availability, but I've run into the maximum number of virtual IPs I can assign to my vrrp_instance. How would I go about configuring and failing over 20+ virtual IPs? This is the (very simple) setup:

        LB01: 10.200.85.1
        LB02: 10.200.85.2
        Virtual IPs: 10.200.85.100 - 10.200.85.200

    Each machine is also running Apache (later Nginx) binding on the virtual IPs for SSL client certificate termination and proxying to backend webservers. The reason I need so many VIPs is the inability to use VirtualHost on HTTPS. This is my keepalived.conf:

        vrrp_script chk_apache2 {
            script "killall -0 apache2"
            interval 2
            weight 2
        }

        vrrp_instance VI_1 {
            interface eth0
            state MASTER
            virtual_router_id 51
            priority 101
            virtual_ipaddress {
                10.200.85.100
                .
                . all the way to
                .
                10.200.85.200
            }
        }

    An identical configuration is on the BACKUP machine, and it's working fine, but only up to the 20th IP. I have found a HOWTO discussing this problem. Basically, they suggest having just one VIP and routing all traffic "via" this one IP, and "all will be well". Is this a good approach? I'm running pfSense firewalls in front of the machines. Quote from the above link:

        ip route add $VNET/N via $VIP
        or
        route add $VNET netmask w.x.y.z gw $VIP

    Thanks in advance.
    EDIT: @David Schwartz said it would make sense to add a route, so I tried adding a static route to the pfSense firewall, but that didn't work as I expected it would. pfSense route:

        Interface: LAN
        Destination network: 10.200.85.200/32 (virtual IP)
        Gateway: 10.200.85.100 (floating virtual IP)
        Description: Route to VIP .100

    I also made sure I had packet forwarding enabled on my hosts:

        $ cat /etc/sysctl.conf
        net.ipv4.ip_forward=1
        net.ipv4.ip_nonlocal_bind=1

    Am I doing this wrong? I also removed all VIPs from keepalived.conf so it only fails over 10.200.85.100.

    Read the article

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant, and required. I need to be able to perform periodic maintenance without a maintenance window. Say an index rebuild, a job to purge old data, Windows Update, or hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET). I've been thinking along the lines of using the warm spare (or a 3rd server) to do the maintenance, and then "swap" it into production:
    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections are finished within a minute or so.
    4. The spare server is now the production server.
    The problem remaining is that the new production server is now out of date by however long it took to perform maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?

    Read the article

  • Two Firefox windows vs two browsers? RAM consumption

    - by Kayle
    I don't know enough about RAM and sharing to know what the difference is here. Normally, I run Chrome in one desktop for personal use and Firefox on a second desktop for business. I like the separation of saved passwords and whatnot. However, I recently learned that I can open two different profiles in Firefox at the same time, so I was wondering whether that would be cheaper for my system resources or not. Out of the gate, I don't think it would save more than 40-60 MB of RAM... but I'm wondering, 3 hours later, whether RAM handling will be better using just one browser for all my heavy lifting. I only have 2 GB of RAM and I run iTunes and Photoshop as well, almost all day, so I like to save RAM where I can. Any thoughts? UPDATE: I've been centering on Chrome more recently and using Firefox for testing. Dev isn't bad in Chrome and it's great at releasing memory when I close tabs. In retrospect, I think the best answer to this question is simply for me to buy another 2 GB of RAM.

    Read the article

  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have spent two days trying to fix this issue, with no success. The server is a MySQL database server.
    Hardware: Dell PowerEdge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16 GB RAM, 2x 146 GB SAS (software RAID 1)
    Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41
    Issue: while MySQL is not used and runs with no database, everything seems all right. As soon as I install a database, something drives all 8 cores to 100% while memory consumption stays low, so as you can imagine the load average goes very high (I saw a load average of 212 for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the installed project.
    Additional info: the database used is no more than 24 MB and it was moved from a server with fewer resources and far larger databases, so it's not the database/project. my.cnf is not the cause either, as I tried both the default one and the one I use on another server running the same distribution. What is interesting is that MySQL doesn't close any of its processes and runs up to the max_connections limit. The logs are quiet; nothing there. I switched to this Ubuntu version after I suspected some problems on the newer Ubuntu 11.10 server; that one also worked all right for an hour after I upgraded the kernel to 3.0.1 (and it was using the memory as well). I tested disk speed and it seems all right. Some more output from the running server was captured with dstat -cndymlp -N total -D total 3 and with htop (output not included here). Any ideas? Has anyone met the same problem? Any fix you can think of?

    Read the article

  • How can I get the correct DisplayMetrics from an AppWidget in Android?

    - by Gary
    I need to determine the screen density at runtime in an Android AppWidget. I've set up an HDPI emulator device (AVD). If I set up a regular executable project and insert this code into the onCreate method:

        DisplayMetrics dm = getResources().getDisplayMetrics();
        Log.d("MyTag", "screen density " + dm.densityDpi);

    it outputs "screen density 240" as expected. However, if I set up an AppWidget project and insert this code into the onUpdate method:

        DisplayMetrics dm = context.getResources().getDisplayMetrics();
        Log.d("MyTag", "screen density " + dm.densityDpi);

    it outputs "screen density 160". I noticed, hooking up the debugger, that the mDefaultDisplay member of the Resources object here is null in the AppWidget case. Similarly, if I get a resource at runtime using the Resources object obtained from context.getResources() in the AppWidget, it returns the wrong resource based on screen density. For instance, I have a 60x60px drawable for mdpi and an 80x80 drawable for hdpi. If I get this Drawable object using context.getResources().getDrawable(...), it returns the 60x60 version. Is there any way to correctly deal with resources at runtime from the context of an AppWidget? Thanks!

    Read the article

  • iPhone dev: Recurse all project resources and collect paths

    - by thomax
    I have a bunch of HTML resources in my app organized in a hierarchy like so:

        dirA
        |---> index.html
        |---> a1.html
        |---> a2.html
        |---> dirB
        |     |---> b1.html
        |     |---> b2.html
        |---> dirC
              |---> c5.html
              |---> c19.html

    I'm trying to collect the absolute paths of all HTML resources, however deep down, but can't seem to figure out how. So far I've tried:

        NSArray *myPaths = [[NSBundle mainBundle] pathsForResourcesOfType:@"html" inDirectory:nil];

    This returns an empty array, because I have no HTML files at the root of my project. Then there is:

        NSArray *myPaths = [[NSBundle mainBundle] pathsForResourcesOfType:@"html" inDirectory:@"dirA"];

    This returns an array containing only the paths for the three HTML resources directly below dirA. If I read the docs correctly, I guess this is to be expected; pathsForResourcesOfType does not traverse subdirectories. But is there a (preferably nice) way to do it?

    Read the article
