Search Results

Search found 664 results on 27 pages for 'reducing'.

Page 1/27

  • Reducing the opacity on a div without reducing the opacity of the contents

    - by user352744
    I want to reduce the opacity of the page-contents container's background without reducing the opacity of the contents. <div id="container"> <div id="page contents"> page contents go here, like amazing articles and all that. </div> </div> It needs to be able to expand with the content, so it can't have a fixed height. Absolutely positioning it underneath the content would mean there is no relationship between the two divs and it won't expand with the contents, so I think that's a dead end, but feel free to say otherwise. Can't use jQuery as it could be too laggy and not instant. Other options preferred, please. We may have to use PNG background images, but we're hoping not to, as this is a template and needs to be able to change colour based on colour schemes. We could generate images on demand, but that's not ideal. Oh, and to top it off, we can't use CSS3 as it won't work in IE, of course! Any suggestions?
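
    One commonly suggested workaround, offered here only as a sketch (it is not from the thread, and the id/class names are illustrative): give the container position: relative and put the translucent background in its own absolutely positioned child stretched to 100% width and height. That layer grows with the container, so it tracks the content height, while the content itself sits above it at full opacity; the filter line is the legacy IE fallback for opacity.

    ```html
    <div id="container" style="position: relative;">
      <!-- this layer alone carries the transparency; stretched to the container's
           full height, it grows and shrinks with the content -->
      <div style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;
                  background: #000; opacity: 0.5; filter: alpha(opacity=50);"></div>
      <!-- position: relative lifts the content above the background layer -->
      <div id="page-contents" style="position: relative;">
        page contents go here, at full opacity
      </div>
    </div>
    ```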

    Read the article

  • How to stop reducing life? [closed]

    - by SystemNetworks
    CODE Input input = gc.getInput(); int xpos = Mouse.getX(); int ypos = Mouse.getY(); emu = "Enemy Life : " + enemyLife; Life = "Your Life Is" + life; Mousepos = "X:" + xpos + "Y:" + ypos; //test test1 = "Test INT" + test1int; if(!repeatStop) { //if this button is pressed, the damage will add up. When //fight is pressed, it will start reducing the enemy health. if(input.isKeyPressed(Input.KEY_1)) { test1int += 1; } } if((xpos>1007 &xpos<1297)&&(ypos>881 && ypos<971)) { //Fight button if(Mouse.isButtonDown(0)){ finishTurn=true; } } //fight has started if(finishTurn==true) { //this would reduce the enemy life if(floodControl1==false) { enemyLife-=test1int; } //PROBLEM: Does not stop reducing! //the below code was not successful. It did not stop it // from reducing further. if(test1int>10) { floodControl1=true; } } QUESTION: OK, this is what it does. When I press the 1 key, it adds up the damage to the enemy. When I press fight, it then starts to reduce the enemy's health. My problem is that it keeps on reducing and deducting until the enemy's health goes negative! How do I deduct only my desired damage (the damage accumulated by pressing the 1 key)?
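
    For reference, one way to make the deduction fire only once per Fight click, sketched against the variable names in the question (damageApplied is a new boolean field, not part of the original code): apply the accumulated damage once when the turn starts, then immediately set a guard flag so the per-frame update loop cannot subtract again.

    ```java
    // Inside update(), replacing the block that subtracts every frame.
    // damageApplied is a new boolean field (an assumption - it is not in the original code).

    // Start a turn when the Fight button is clicked
    if ((xpos > 1007 && xpos < 1297) && (ypos > 881 && ypos < 971)) {
        if (Mouse.isButtonDown(0)) {
            finishTurn = true;
            damageApplied = false;   // allow exactly one deduction for this turn
        }
    }

    // Apply the accumulated damage once, then lock it out until the next Fight click
    if (finishTurn && !damageApplied) {
        enemyLife -= test1int;   // deduct the damage built up by pressing key 1
        damageApplied = true;    // prevents the per-frame loop from subtracting again
        test1int = 0;            // reset the accumulated damage for the next turn
        finishTurn = false;
    }
    ```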

    Read the article

  • SQL SERVER Improve Performance by Reducing IO Creating Covered Index

    This blog post is in response to T-SQL Tuesday #004: IO, hosted by Mike Walsh. The subject of this month is IO. Here is my quick blog post on how a covered index can improve performance by reducing IO. Let us kick off this post with a disclaimer about indexes: indexing is a very complex subject and [...]
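
    The snippet is cut off before the example, but the core idea is easy to sketch with made-up table and column names: a covering index carries every column the query touches, so the engine can answer the query from the index pages alone and skip the extra IO of key lookups against the clustered index or heap.

    ```sql
    -- Hypothetical example: the query filters on OrderDate and returns only OrderID and TotalDue.
    -- Because the index below carries all three columns, the query is answered from the index
    -- alone - no key lookups, and therefore less IO.
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_Covering
        ON dbo.Orders (OrderDate)
        INCLUDE (OrderID, TotalDue);
    GO

    SELECT OrderID, TotalDue
    FROM dbo.Orders
    WHERE OrderDate >= '20100101' AND OrderDate < '20100201';
    ```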

    Read the article

  • Reducing Deadlocks - not a DBA issue ?

    - by steveh99999
    As a DBA, I'm involved on an almost daily basis in troubleshooting 'SQL Server' performance issues. Often, this troubleshooting soon veers away from 'it's a SQL Server issue' to instead become a wider application/database design/coding issue.

    One common perception with SQL Server is that deadlocking is an application design issue - and is fixed by recoding. I see this reinforced by MCP-type questions/scenarios where the answer to prevent deadlocking is simply to change the order in which the code accesses tables. Whilst this is correct, I do think it has led to a situation where many 'operational' or 'production support' DBAs, when faced with a deadlock, are happy to throw the issue over to developers without analysing it further. A couple of 'war stories' on deadlocks which I think are interesting:

    Case One: I had an issue recently on a third-party application that I support on SQL 2008. This particular third-party application has an unusual support agreement where the customer is allowed to change the index design on the third-party provided database. However, we are not allowed to alter application code or modify table structure. This third-party application is also known to encounter occasional deadlocks - indeed, I have documentation from the vendor that up to 50 deadlocks per day is not unusual! So, as a DBA I have to support an application which in my opinion has too many deadlocks - but I cannot influence the design of the tables or stored procedures for the application. This should be the classic blame-the-third-party-developers scenario, where we hope the issue gets addressed in a future application release - i.e. we could wait years for this to be resolved and implemented in our production environment. But, as DBAs can change the index layout, is there anything I could still do to reduce the deadlocks in the application? I initially used SQL trace flag 1222 to write deadlock detection output to the SQL errorlog - using this, I was able to identify one table heavily involved in the deadlocks. When I examined the table definition, I was surprised to see it was a heap - i.e. no clustered index existed on the table. Using SQL Profiler to see the locking behaviour and the plan for the query involved in the deadlock, I was able to confirm a table scan was being performed. By creating an appropriate clustered index, it was possible to produce a more efficient plan and better locking behaviour. So, fewer locks, held for less time = less possibility of deadlocks. I'm still unhappy about the overall number of deadlocks on this system - but that's something to be discussed further with the vendor.

    Case Two: a system which hadn't changed for months suddenly started seeing deadlocks on a regular basis. I love the 'nothing's changed' scenario, as it gives me the opportunity to appear wise and say 'nothing's changed on this system, except the data'. This particular deadlock occurred on a table which had been growing rapidly. By using DBCC SHOW_STATISTICS, the DBA team were able to see that the deadlocks seemed to be occurring shortly after auto-update stats had regenerated the table statistics using its default sampling behaviour. As a quick fix, we were able to schedule a nightly UPDATE STATISTICS WITH FULLSCAN on the table involved in the deadlock - thus greatly reducing the potential for stats to be updated via auto_update_stats, and consequently reducing the potential for a bad plan to be generated based on an unrepresentative sample of the data. This reduced the possibility of a deadlock occurring. Not a perfect solution by any means, but quick, easy to implement, and it needed no application code changes. This fix gave us some 'breathing space' to properly fix the code during the next scheduled application release.

    The moral of this post - don't dismiss deadlocks as issues that can only be fixed by developers...
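
    The fixes described in the two cases boil down to a handful of commands; here is a sketch with placeholder object names (the post does not give the real ones):

    ```sql
    -- Case one: capture deadlock detection output in the SQL errorlog,
    -- then give the heap an appropriate clustered index.
    DBCC TRACEON (1222, -1);
    GO
    CREATE CLUSTERED INDEX IX_ProblemTable_Key
        ON dbo.ProblemTable (KeyColumn);   -- placeholder table/column names
    GO

    -- Case two: refresh statistics nightly with a full scan, so auto-update stats
    -- (and its default sampling) is far less likely to produce a bad plan.
    UPDATE STATISTICS dbo.GrowingTable WITH FULLSCAN;
    GO
    ```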

    Read the article

  • Video White Paper: Mega-Project Management: Reducing Risk & Complexity across the Value Chain

    - by Melissa Centurio Lopes
    Watch this short video white paper to learn how Oracle Primavera can help you keep projects on track and protect your investments. You can also download the full white paper, “Mega-Project Management: Reducing Risk & Complexity Across the Value Chain”, to gain more in-depth information about strategies for collaborating and sharing information and data in a systematic way across the value chain. Download the white paper to learn how your company can get the expected payoff from your next mega project. Register now to download the full complimentary white paper and discover how to: improve decision-making and accountability through enterprise-wide visibility, workflows, and collaboration; reduce financial and performance risk

    Read the article

  • SQL SERVER – Reducing Page Contention on TempDB

    - by pinaldave
    I recently received the following email: “We are using trace flag 1118 to reduce the tempdb contention on our servers (2000 and 2005). What is your opinion? We have read lots of material; would you please answer me in a single line?” Wow, this was a very interesting question. What intrigued me was the last request, where I am asked to answer in a single line. There is something about this email - I feel like blogging about it here. I think I could talk about this subject forever – well, there is no clear answer; there are so many caveats about everything. Still, I must stay honest to the request about answering in a single line, and I also do not like to give a plain YES/NO answer. What should I do? Let me ask this question to the community today: what would you answer to this email? Let me start by answering it myself in one line and taking one side: “I enable this trace flag in SQL Server 2000 without a hotfix or service pack, and not in later versions (2005+) onwards, as the code is improved.” What do you do in this case? The best answer will feature in this blog with due credit. For further reading and a hint, here is a Microsoft KB article which I think is very helpful. In quick summary (read the KB for accuracy): when pages are allocated, the first 8 pages are allocated from mixed extents; this trace flag allocates uniform extents from the start, reducing contention. You can enable this trace flag at startup. Reference: Pinal Dave (http://blog.SQLAuthority.com)
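
    For readers who want to experiment, the mechanics are straightforward; as a sketch (test outside production first), the flag can be enabled on a running instance with DBCC, or added as the -T1118 startup parameter so it survives restarts:

    ```sql
    -- Enable trace flag 1118 globally on the running instance
    DBCC TRACEON (1118, -1);
    GO
    -- Confirm which trace flags are currently active
    DBCC TRACESTATUS (-1);
    GO
    ```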

    Read the article

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merit or benefit, or not. Before we continue to the resolution, let us understand what CXPACKET wait stats are. The official definition suggests that a CXPACKET wait occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if a conflict concerning this wait type develops into a problem. (from BOL) In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For some reason, one or more of the threads lag behind, creating the CXPACKET wait stat. The threads which finished first have to wait for the slower thread to finish. The wait recorded by a specific completed thread is called a CXPACKET wait; note that the CXPACKET wait is recorded by the completed threads and not the ones which are unfinished. “Note that not all CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query.” Now let us see what the best practices to reduce CXPACKET wait stats are. The suggestions you will find if you search online might tell you that you should set ‘maximum degree of parallelism’ to 1. I do agree with these suggestions, too; however, I think this is not the final resolution. As soon as you set your entire server to run every query on a single CPU, you will get very bad performance from the queries which actually perform well when using parallelism. The best suggestion is to set ‘maximum degree of parallelism’ to a lower number or 1 (be very careful with this – it can create more problems), but tune the queries which can benefit from multiple CPUs. You can use the query hint OPTION (MAXDOP 0) to let a query use parallelism. Here are two quick scripts which help to resolve these issues: Change MAXDOP at server level EXEC sys.sp_configure N'max degree of parallelism', N'1' GO RECONFIGURE WITH OVERRIDE GO Run a query with all the CPUs (using parallelism) USE AdventureWorks GO SELECT * FROM Sales.SalesOrderDetail ORDER BY ProductID OPTION (MAXDOP 0) GO Below is the blog post which will help you to find all the parallel queries on your server: SQL SERVER – Find Queries using Parallelism from Cached Plan. Please note that running queries on a single CPU may worsen your performance and is not recommended at all. In fact, this can be very bad advice. I strongly suggest that you identify the queries which are offending and tune them instead of following any other suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com)
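
    Before changing any parallelism settings, it is worth measuring how much of the server's total wait time CXPACKET actually accounts for; a quick sketch against the standard wait-stats DMV:

    ```sql
    -- How large is CXPACKET relative to everything else the server has waited on?
    SELECT wait_type,
           wait_time_ms,
           waiting_tasks_count,
           100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total_waits
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;
    ```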

    Read the article

  • Reducing Oracle LOB Memory Use in PHP, or Paul's Lesson Applied to Oracle

    - by christopher.jones
    Paul Reinheimer's PHP memory pro tip shows how re-assigning a value to a variable doesn't release the original value until the new data is ready. With large data lengths, this unnecessarily increases the peak memory usage of the application. In Oracle you might come across this situation when dealing with LOBS. Here's an example that selects an entire LOB into PHP's memory. I see this being done all the time, not that that is an excuse to code in this style. The alternative is to remove OCI_RETURN_LOBS to return a LOB locator which can be accessed chunkwise with LOB->read(). In this memory usage example, I threw some CLOB rows into a table. Each CLOB was about 1.5M. The fetching code looked like: $s = oci_parse ($c, 'SELECT CLOBDATA FROM CTAB'); oci_execute($s); echo "Start Current :" . memory_get_usage() . "\n"; echo "Start Peak : " .memory_get_peak_usage() . "\n"; while(($r = oci_fetch_array($s, OCI_RETURN_LOBS)) !== false) { echo "Current :" . memory_get_usage() . "\n"; echo "Peak : " . memory_get_peak_usage() . "\n"; // var_dump(substr($r['CLOBDATA'],0,10)); // do something with the LOB // unset($r); } echo "End Current :" . memory_get_usage() . "\n"; echo "End Peak : " . memory_get_peak_usage() . "\n"; Without "unset" in loop, $r retains the current data value while new data is fetched: Start Current : 345300 Start Peak : 353676 Current : 1908092 Peak : 2958720 Current : 1908092 Peak : 4520972 End Current : 345668 End Peak : 4520972 When I uncommented the "unset" line in the loop, PHP's peak memory usage is much lower: Start Current : 345376 Start Peak : 353676 Current : 1908168 Peak : 2958796 Current : 1908168 Peak : 2959108 End Current : 345744 End Peak : 2959108 Even if you are using LOB->read(), unsetting variables in this manner will reduce the PHP program's peak memory usage. With LOBS in Oracle DB there is also DB memory use to consider. Using LOB->free() is worthwhile for locators. Importantly, the OCI8 1.4.1 extension (from PECL or included in PHP 5.3.2) has a LOB fix to free up Oracle's locators earlier. For long running scripts using lots of LOBS, upgrading to OCI8 1.4.1 is recommended.
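
    For completeness, here is roughly what the locator-based alternative mentioned above might look like - a sketch only, with an arbitrary 64K chunk size: drop OCI_RETURN_LOBS so the fetch returns OCI-Lob locators, stream each one with read(), and release it with free().

    ```php
    <?php
    // Sketch: fetch LOB locators instead of whole values, then stream each one.
    $s = oci_parse($c, 'SELECT CLOBDATA FROM CTAB');
    oci_execute($s);

    while (($r = oci_fetch_array($s, OCI_ASSOC)) !== false) {  // no OCI_RETURN_LOBS
        $lob = $r['CLOBDATA'];                 // an OCI-Lob locator, not the data itself
        while (!$lob->eof()) {
            $chunk = $lob->read(64 * 1024);    // read 64K at a time (arbitrary chunk size)
            // ... do something with $chunk ...
        }
        $lob->free();                          // release the locator promptly
    }
    ?>
    ```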

    Read the article

  • Reminder: ATG Live Webcast June 29th: Reducing TCO Using Oracle E-Business Suite Management Packs

    - by Steven Chan (Oracle Development)
    Reminder: Our next ATG Live Webcast is happening tomorrow, Thursday, June 29th: How to Reduce TCO Using Oracle E-Business Suite Management Packs This one-hour webcast provides an overview of how EBS sysadmins can make their lives easier with the Management Packs for Oracle E-Business Suite Release 12.x. This session will highlight key features in Applications Management Pack (AMP) and Applications Change Management Pack (ACMP) that can automate or streamline some of the tasks needed to: Manage your EBS system configurations Monitor your EBS environment's performance and uptime Keep multiple EBS environments in sync with their patches and configurations Create patches for your EBS customizations and apply them with Oracle's own patching tools There will also be a special mention of Oracle E-Business Suite Adapter. How to Reduce TCO Using Oracle E-Business Suite Management Packs Date: Thursday, June 30, 2011 Time: 8:00 AM - 9:00 AM Pacific Standard Time (4.00 PM - 5.00 PM GMT) Presenters: Angelo Rosado, Product Manager, ATG Development Registration Link to Webcast Event Dial-in Numbers: U.S. Participants: 877-697-8128 International Participants: 706-634-9568 Passcode: You will receive this with your registration confirmation. Related Articles Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1) ATG Live Webcast Replay Available: EBS 12 OAF Rich UI Enhancements WebCast Replay Available: Deploying Oracle VM Templates for E-Business Suite and PeopleSoft

    Read the article

  • Reducing brightness of large areas containing bright colours

    - by intuited
    I do most of my work in either a terminal or a web browser. I prefer my terminals to use bright colours on dark. I would really prefer that web pages tended to look this way as well, but that's not under my control. The problem is that when I switch from a light-on-dark terminal to a dark-on-light web page (like this one), my eyes have to adjust to the overall rise in screen brightness. Apparently this is bad for your eyes, in addition to being painful and annoying. It would seem to be possible for some layer of the interface to adjust the displayed colours for parts of the screen, or perhaps for particular windows, to reduce the brightness of the brighter areas of the screen. Can this be done, possibly with a Compiz extension?

    Read the article

  • A Method for Reducing Contention and Overhead in Worker Queues for Multithreaded Java Applications

    - by Janice J. Heiss
    A java.net article, rich in practical resources, by IBM India Labs’ Sathiskumar Palaniappan, Kavitha Varadarajan, and Jayashree Viswanathan, explores the challenge of writing code in a way that effectively makes use of the resources of modern multicore processors and multiprocessor servers. As the article states: “Many server applications, such as Web servers, application servers, database servers, file servers, and mail servers, maintain worker queues and thread pools to handle large numbers of short tasks that arrive from remote sources. In general, a ‘worker queue’ holds all the short tasks that need to be executed, and the threads in the thread pool retrieve the tasks from the worker queue and complete the tasks. Since multiple threads act on the worker queue, adding tasks to and deleting tasks from the worker queue needs to be synchronized, which introduces contention in the worker queue.” The article goes on to explain ways that developers can reduce contention by maintaining one queue per thread. It also demonstrates a work-stealing technique that helps in effectively utilizing the CPU in multicore systems. Read the rest of the article here.
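
    The work-stealing idea the article describes is also what the JDK's fork/join framework provides out of the box; the following is a minimal illustration (not code from the article - the class name and task are invented) of submitting many short tasks to a work-stealing pool instead of a single contended queue. It requires Java 8 or later for the lambda.

    ```java
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.TimeUnit;

    public class WorkStealingSketch {
        public static void main(String[] args) throws InterruptedException {
            // Each worker thread in a ForkJoinPool owns its own task deque and steals
            // from other workers when idle, avoiding one globally contended queue.
            ForkJoinPool pool = new ForkJoinPool();   // parallelism defaults to the core count

            for (int i = 0; i < 10_000; i++) {
                final int taskId = i;
                pool.submit(() -> taskId * taskId);   // stand-in for a short task from a remote source
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }
    ```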

    Read the article

  • Ideas for reducing storage needs and/or costs (lots of images)

    - by James P.
    Hi, I'm the webmaster for a small social network and have noticed that images uploaded by users are taking a big portion of the capacity available. These are mostly JPEGs. What solutions could I apply to reduce storage needs? Is there a way to reduce the size of images without affecting quality too much? Is there a service out there that could be used to store static files at a cheaper price (< 1GB/0.04 eurocents)? Edit: Updated the question.
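
    One low-effort direction, offered as a sketch rather than an answer from the thread (the paths are hypothetical, and you should test on copies first): re-encode the existing JPEGs at a slightly lower quality and strip their embedded metadata, which often reclaims a noticeable amount of space when the originals were saved at high quality.

    ```bash
    # Re-encode uploaded JPEGs in place: cap quality at 85 and strip metadata (jpegoptim).
    # The upload path is hypothetical.
    jpegoptim --max=85 --strip-all /var/www/uploads/*.jpg

    # Alternative with ImageMagick, writing results to a separate directory:
    mkdir -p /var/www/uploads-recompressed
    mogrify -path /var/www/uploads-recompressed -strip -quality 85 /var/www/uploads/*.jpg
    ```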

    Read the article

  • HTG Explains: How Do Noise Reducing Headphones Work?

    - by YatriTrivedi
    Passive noise reduction, active noise cancellation, sound isolation… The world of headphones has become quite advanced in giving you your own private sound bubble. Here’s how these different technologies work.

    Read the article

  • Analysing Indexes - reducing scans.

    - by GrumpyOldDBA
    The whole subject of database/application tuning is sometimes akin to a black art; it's pretty easy to find your worst 20 whatever, but actually seeking to reduce operational overhead can be slightly more tricky. If you ever read through my analysing indexes post you'll know I have a number of ways of seeking out ways to tune the database. This is a slightly different slant on one of those, which produced an interesting side effect. We all know that except for very small tables avoiding...(read more)

    Read the article

  • Any ideas on reducing lag in terrain generation?

    - by l5p4ngl312
    Ok so here's the deal. I've written an isometric engine that generates terrain based on camera values using 2D Perlin noise. I planned on doing 3D but first I need to work out the lag issues I'm having. I will try to explain how I am doing this so that maybe someone can spot where I am going wrong. I know it should not be this laggy. There is the abstract class Block which right now just contains render(). BlockGrass, etc. extend this class and each has code in the render function to create a textured quad at the given position. Then there is the class Chunk which has the function Generate() and setBlocksInArea(). Generate uses 2D Perlin noise to make a height map and stores the heights in a 2D array. It stores the positions of each block it generates in blockarray[x][y][z]. The chunks are 8x8x128. In the main game class there is a 3D array called blocksInArea. The blocks in this array are what get rendered. When a chunk generates, it adds its blocks to this array at the correct index. It is like this so chunks can be saved to the hard drive (even though they aren't yet) but there can still be optimization with the rendering that you wouldn't have if you rendered each chunk separately. Here's where the laggy part comes in: When the camera moves to a new chunk, a row of chunks generates on the end of the axis that the camera moved on. But it still has to move the other chunks up/down in the blocksInArea (render) array. It does this by calculating the new position in the array and doing the Chunk.setBlocksInArea(): for(int x = 0; x < 8; x++){ for(int y = 0; y < 8; y++){ nx = x+(coordX - camCoordX)*8; ny = y+(coordY - camCoordY)*8; for(int z = 0; z < height[x][y]; z++){ blockarray[x][y][z] = Game.blocksInArea[nx][ny][z]; } } } My reasoning was that this would be much faster than doing the Perlin noise all over again, but there are still little spikes of lag when you move in between chunks. Edit: Would it be possible to create a 3-dimensional array list so that shifting of chunks within the array would not be necessary?

    Read the article

  • phpBB - Reducing Spam

    - by user44175
    I installed phpBB Forums last week, and for the past 2 days I've been getting users signing up and posting Chinese spam emails on each topic. I have: added a captcha on registration, and made sure users have to verify their subscription by email before being allowed to post. What else can I do to stop this from happening? I've banned their IP addresses, but this doesn't stop them from using a proxy to keep spamming the forums. I've read I can block all Chinese IP addresses through the ACP, but is this the best way to block all this? It seems to be all Chinese spam at the minute; any help would be much appreciated.

    Read the article

  • Reducing template bloat with inheritance

    - by benoitj
    Does anyone have experience reducing template code bloat using inheritance? I hesitate to rewrite our containers this way: class vectorBase { public: int size(); void clear(); int m_size; void *m_rawData; //.... }; template< typename T > class vector : public vectorBase { void push_back( const T& ); //... }; I should keep maximum performance while reducing compile time. I'm also wondering why STL implementations do not use this approach. Thanks for your feedback

    Read the article

  • Reducing IO caused by nginx

    - by glumbo
    I have a lot of free RAM but my IO is always 100 %util or very close. What ways can I reduce IO by using more RAM? My iotop shows nginx worker processes with the highest io rate. This is a file server serving files ranging from 1mb to 2gb. Here is my nginx.conf #user nobody; worker_processes 32; worker_rlimit_nofile 10240; worker_rlimit_sigpending 32768; error_log logs/error.log crit; #pid logs/nginx.pid; events { worker_connections 51200; } http { include mime.types; default_type application/octet-stream; access_log off; limit_conn_log_level info; log_format xfs '$arg_id|$arg_usr|$remote_addr|$body_bytes_sent|$status'; sendfile off; tcp_nopush off; tcp_nodelay on; directio 4m; output_buffers 3 512k; reset_timedout_connection on; open_file_cache max=5000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; client_body_buffer_size 32k; server_tokens off; autoindex off; keepalive_timeout 0; #keepalive_timeout 65;

    Read the article

  • Lotus Domino - DAOS not reducing file size?

    - by SydxPages
    I have implemented DAOS on a Lotus Domino Server (8.5.3 FP2) as follows: Lotus Domino Server Document: Store file attachments in DAOS: Enabled Minimum size of object before Domino will store in DAOS: 64000 bytes DAOS base path: E:\DAOS Defer object deletion for: 30 days Transaction logging is running, and the specific test database has the following advanced properties set: Domino Attachment and Object Service (ticked) Use LZ1 compression for atachments Compress Database Design Compress Data I have restarted the server. When I run a compact -c, it compacts the database, but does not reduce the size. I have checked the DB in Windows Explorer (60Gb) and the size is the same pre and post. I have checked the directory (E:\DAOS) and it is 35Gb in size. When I run the command 'Tell DAOSMgr Status tmp\test.nsf', I get the following response. From looking up on the net, I believe ticket count = 0 means that the db is not really DAOS'ed? Admin Process: Searching Administration Requests database DAOSMGR: Status tmptest.nsf started DAOS database status: Database: E:\Lotus\Domino\Data\tmp\test.nsf Database state = Synchronized Last resynchronized: 03/09/2012 02:49:13 PM Ticket count: 0 DAOSMGR: Status tmp\test.nsf completed I have run fixup on the database. When I have tried to run the DAOS estimator it has always crashed. This was a problem with larger databases on earlier versions of domino, but not anymore. Can anyone tell me why the size has not reduced? Am I missing anything?

    Read the article

  • Reducing latency for different geographic regions on Amazon Cloud

    - by Shoaibi
    I have got an application which has three components: application code - an Amazon EC2 US-EAST-1 instance; application images and other static data - Amazon S3 with CloudFront; application database - Amazon RDS. In short, I need something like CloudFront for EC2. In long, people using this application from a different region, say the Middle East, will have faster static content downloads due to CloudFront, but there would be a lot of latency in communicating with the EC2 instance. I want a budget-friendly way of improving this. Launching Amazon instances in every region on offer is surely a choice, but it isn't really cheap, so I would try to avoid it unless it's a last resort. Also, if my clients need to communicate with the RDS database directly, is there some kind of solution which gives the kind of functionality mentioned above, but for RDS?

    Read the article

  • Reducing video mode switching during Linux boot

    - by Zack
    When I boot up my desktop computer, which only has Linux on it, the video mode and/or console font gets switched four times: When GRUB starts, it switches from 80x25 text to a graphical mode so it can draw a pretty background behind its menu; GRUB then goes back to 80x25 text after I pick something from the menu; When the KMS driver for my video card loads, it switches to a much higher-resolution text mode (I don't know if this is a hardware text mode or not); Finally X starts and it goes graphics and stays that way. I think this last switch does not change the resolution of the video mode, only the graphicalness. I'd like to get rid of as many of these mode switches as possible. Ideally, when GRUB takes over from the BIOS it would go directly to the same high-resolution text mode that the KMS driver selects, and the display would stay in that mode till X starts and brings up graphics. I am under the impression that this is possible by mucking with the kernel command line and/or the GRUB console module load parameters, but I don't know the details. GRUB 1.98+20100706, kernel 2.6.32.15 using Nouveau video drivers. Distro is Debian unstable. Please no answers that involve recompiling anything or cobbling together bleeding-edge kernel/driver combinations, I don't care enough about this to go to that much trouble. EDIT: Tobu suggests setting GRUB_GFXMODE to the full pixel resolution of the monitor, and GRUB_GFXPAYLOAD_LINUX=keep to avoid the mode switch after the menu goes away. This does part of what I want, but winds up being worse overall. There's no mode switch after the menu, but there's still a painfully-slow screen repaint (I should probably just give up on GRUB's gfxmode, it's waaaay too slow at 1920x1200). More seriously, there's now a double mode switch when nouveaufb loads, along with fun-looking error messages in dmesg [ 5.923798] [drm] nouveau 0000:02:00.0: allocated 1920x1200 fb: 0x40250000, bo ffff8801ba5f4600 [ 5.923802] fb: conflicting fb hw usage nouveaufb vs EFI VGA - removing generic driver [ 5.923821] [drm] nouveau 0000:02:00.0: PFIFO_INTR 0x00000010 - Ch 1 ("PFIFO_INTR" message repeats 400+ times) [ 5.925609] Console: switching to colour dummy device 80x25 [ 5.925802] Console: switching to colour frame buffer device 240x75
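
    For anyone wanting to try Tobu's suggestion, both settings live in /etc/default/grub on Debian; a sketch, assuming a 1920x1200 panel, with the config regenerated afterwards:

    ```sh
    # /etc/default/grub  (sketch; adjust the mode to match your panel)
    GRUB_GFXMODE=1920x1200x32
    GRUB_GFXPAYLOAD_LINUX=keep   # keep GRUB's video mode when handing off to the kernel

    # Rebuild grub.cfg so the changes take effect:
    #   sudo update-grub
    ```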

    Read the article

  • Reducing apache VIRT and RES memory usage

    - by lisa
    On a quad-core server with 8GB of RAM I have Apache processes that use up to 2.3GB RES memory and 2.6GB VIRT memory. Here is a screenshot of the top -c output: http://imgur.com/x8Lq9.png Is there a way to reduce the memory usage for these Apache processes? These are my httpd.conf settings: Timeout 160 TraceEnable Off ServerSignature Off ServerTokens ProductOnly FileETag None StartServers 6 <IfModule prefork.c> MinSpareServers 4 MaxSpareServers 16 </IfModule> ServerLimit 400 MaxClients 320 MaxRequestsPerChild 10000 KeepAlive On KeepAliveTimeout 4 MaxKeepAliveRequests 80

    Read the article
