Search Results

Search found 1004 results on 41 pages for 'minimize'.

Page 22/41 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Class Design - Space Simulator

    - by Peteyslatts
    I have pretty much taught myself everything I know about programming, so while I know how to teach myself (books, internet and reading APIs), I'm finding that there hasn't been a whole lot in the way of guidance on good programming practices. So I have two questions. First the broad one: does anyone have suggestions as to sources for learning about good programming habits and techniques? I'd prefer it if the resource wasn't a 5000-page tome. The more I can read it in installments the better. More specifically: I am finishing up learning the basics of XNA and I want to create a space simulator to test my knowledge. This isn't a full-scale simulator, but just something that covers everything I learned. It's also going to be modular so I can build on it after I get the basics down. One of the early features I want to implement is AI, and I want to take this into account as I'm designing my classes so I can minimize rewriting code. So my question: how should I design ship classes so that both the player and AI can use them? The only idea I have so far is: create a ship class that contains stats, models, textures, collision data, etc. The player and AI would then have the data for position, rotation, health, etc., and would base their status off of the ship stats.
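
    That last idea maps naturally onto a shared, read-only "spec" plus a per-ship mutable "state", with a controller interface so the player and the AI drive ships through the same code path. Below is a minimal sketch of that split; the names (ShipSpec, ShipState, ShipController) are illustrative only, and it is shown in C++ for concreteness even though the question is about XNA/C# - the structure translates directly.

        // Shared, read-only data for a ship type: stats and asset handles.
        struct ShipSpec {
            float maxSpeed;
            float maxHull;
            int   modelId;        // handle to model/texture/collision assets
        };

        // Mutable data for one spawned ship, whether player- or AI-controlled.
        struct ShipState {
            const ShipSpec* spec; // points at the shared stats above
            float x, y, z;
            float heading;
            float hull;
        };

        // The player input handler and the AI both implement this interface,
        // so the simulation loop never cares who is steering.
        struct ShipController {
            virtual ~ShipController() = default;
            virtual void update(ShipState& ship, float dt) = 0;
        };

    The game loop then iterates over (state, controller) pairs, so adding a new AI later only means adding another controller implementation rather than another ship class.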

    Read the article

  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday July 18th, 12 noon EDT (GMT -4), for a presentation called Troubleshooting SQL Server With PowerShell. It will be in English, so please make allowances for this. I'm sure that you're aware that my English is not perfect, but it is not so bad. I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides. Just code; pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation: "Troubleshooting SQL Server with PowerShell – The Next Level". It is normal for us to have to face poorly performing queries or even complete failure in our SQL Server environments. This can happen for a variety of reasons including poor database designs, hardware failure, improperly-configured systems and OS updates applied without testing. As Database Administrators, we need to take precautions to minimize the impact of these problems when they occur, and so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as a DBA. This will include a variety of activities such as gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL Error Log even if the instance is offline. The approach will use some advanced PowerShell techniques that allow us to scale the code for multiple servers and run the data collection in asynchronous mode.

    Read the article

  • Be Prepared: Technology Trends Converge and Disrupt

    - by Richard Lefebvre
    Cloud. Big data. Mobile. Social media: these mega trends in technology have had a profound impact on our lives. And now, according to SVP Ravi Puri from North America Oracle Consulting Services, these trends are starting to converge and will affect us even more. His article, "Cloud, Analytics, Mobile, And Social: Convergence Will Bring Even More Disruption", appeared in Forbes on June 6. For example, mobile and social are causing huge changes in the business world. Big data and cloud are coming together to help us with deep analytical insights. And much more. These convergences are causing another wave of disruption, which can drive all kinds of improvements in such things as customer satisfaction, competitive advantage, and growth. But, according to Puri, companies need to be prepared. In this article, Puri urges companies to get out in front of the new innovations. He gives good directions on how to do so in order to accelerate time to value and minimize risk. The post is a good thought leadership piece to pass on to your customers.

    Read the article

  • Changing domain name - what are the practical steps involved

    - by Homunculus Reticulli
    I launched a website a couple of years ago, bright eyed and bushy tailed, with dreams of conquering the world. Unfortunately it wasn't to be. Now that I am a bit older and wiser, and have spent some money on branding and creating more quality content etc., I am rebranding and relaunching the site with a new domain name. Although the traffic on the old site is laughable (i.e. non-existent), there are a few pages of good information on there and I don't want to lose any "juice" those pages may have gained because web crawlers have been seeing them for a few years now. OK, the upshot of all that is this: I want to change my domain name from xyz.com to abc.com. I am maintaining the same friendly URLs I had before; only the domain name part of the URL will change, so that any traffic coming to an old page will be forwarded/redirected to the new page seamlessly. How do I go about achieving this (i.e. what are the steps I need to carry out), and how do I minimize any "disruption" to whatever credibility the existing site has with Googlebot etc.? I am running Apache 2.x on a headless Linux (Ubuntu) server.
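
    For an Apache 2.x setup like this, the usual approach is a permanent (301) redirect served from a virtual host that still answers for the old domain. A minimal sketch, using the domain names from the question (the port, scheme and any SSL details are assumptions to adjust to the real site):

        <VirtualHost *:80>
            ServerName xyz.com
            ServerAlias www.xyz.com
            # mod_alias: 301-redirect every path to the same path on abc.com
            Redirect permanent / http://abc.com/
        </VirtualHost>

    Because Redirect matches by path prefix, /some/friendly-url on xyz.com lands on the identical path at abc.com, and the 301 status is what tells Googlebot the move is permanent rather than duplicate content.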

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, and so I need to know how to store bitmaps in memory. (24bpp/32bpp, compressed/raw, etc.) I'm not working with 3D graphics or DirectX/OpenGL rendering and so I don't need to use graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?) How to store bitmaps for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?) How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?) Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, using more memory and requiring smaller contiguous memory segments. Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 secs.
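
    The first option is the common baseline. A minimal sketch of it, assuming a packed 32-bpp ARGB layout (the Bitmap32 name and layout are illustrative, not a standard):

        // One contiguous, packed 32-bpp buffer with direct pixel addressing.
        // 32-bpp keeps each pixel word-aligned, which is why it is usually
        // preferred over 24-bpp for processing despite the extra byte per pixel.
        #include <cstdint>
        #include <vector>

        struct Bitmap32 {
            int width = 0;
            int height = 0;
            std::vector<uint32_t> pixels;   // width * height packed ARGB values

            Bitmap32(int w, int h) : width(w), height(h), pixels(size_t(w) * h) {}

            uint32_t& at(int x, int y)       { return pixels[size_t(y) * width + x]; }
            uint32_t  at(int x, int y) const { return pixels[size_t(y) * width + x]; }
        };

    With this layout a row is exactly width * 4 bytes, so there is no stride padding to account for, and whole-image filters can walk pixels.data() linearly.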

    Read the article

  • Massive Xubuntu desktop malfunction

    - by viktiglemma
    I'm using Xubuntu 11.04. Everything worked fine before, but when booting today the following happened: 1) Window focus does not leave the first-opened program. This means that if I keep Firefox open and open a terminal, window focus will never be transferred to the terminal. (EDIT: if I open a terminal first and then open Firefox, Firefox steals focus.) 2) Window decorations have disappeared. The maximize, minimize, etc. buttons and the window menu are gone. 3) In the Xfce Settings Manager, the "Window Manager" settings window is empty. There is just a gray screen there, so I cannot modify any window settings. 4) The keyboard shortcuts I had previously defined using the Settings Manager no longer work. Further, ALT-TAB no longer works for cycling between windows. 5) The mouse pointer does not show when I first log in. I have to log out and log in again (with an invisible pointer) before the mouse shows itself. EDIT: 6) I cannot resize or move the Thunderbird window, but I can move the Firefox window. What can I do to troubleshoot this?

    Read the article

  • How to plan/manage multi-platform (mobile) products?

    - by PhD
    Say I have to develop an app that runs on iOS, Android and Windows 8 Mobile. Now all three platforms are technically in different programming languages. The only 'reuse' that I can see is that of the boxes-and-lines (UML :) charts and nothing else. So how do companies/programmers manage the variation of the same product across different platforms, especially since the implementation languages differ? It's 'easier' in the desktop world IMO, given the plethora of languages and cross-platform libraries to make your life easier. Not so in the mobile world. More so, product line management principles don't seem to be all that applicable - what is the same and what is variant doesn't really matter here - the application is the same (conceptually) and the implementation is what varies. Some difficulties that come to mind: Bug Fixing: Applications may be designed in a similar manner, but bug identification and fixing would be radically different. A bug on iOS may or may not exist on Android. Or a bug-fix approach on one platform may not be the same on another (unless it's a semantic bug like a!=b instead of a==b, which would require the same 'approach' to fixing in essence). Enhancements: Making a change on one platform would be radically different than on another. Code-Design Divergence: The way the code is written/organized, the class structures etc. could be very different given the different implementation environments - reducing further reuse of the (above) UML models. There are of course many others - just keeping the development in sync and making sure all applications are up to the same version with the same set of features, etc. It seems the effort is 3x that of a single application. So how exactly does one manage this nightmarish situation? Some thoughts: Split the application into client/server to minimize the effect to the client side only (not always doable). Use frameworks like Unity-3D that could take care of the cross-platform problem (mostly applicable to games and probably not to other applications). Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?

    Read the article

  • User Switching in XFCE 12.04 with LightDM and dumping unneeccesary Gnome libs

    - by user111120
    I'm an elder non-techie Mac-to-Linux convert trying to play the Linux tech game by ear, so please be gentle! :) I am running XFCE Ubuntu 12.04 totally on an 8-gig flash drive and it's fantastic. I am starting to run into potential space issues (down to 1.0 gig free from 1.9 gigs since being installed last summer), most likely because of growing Thunderbird mail files, and this prompted my question. I just installed LightDM on my system because I want the ability to switch users in XFCE if I follow instructions on another blog. They advised using LightDM instead of GDM because LightDM doesn't download Gnome libraries. That's great since I need the space, but my question is how can I tell whether I already have Gnome libraries installed from other updates and such? And can I minimize having any Gnome libraries? The method for me to switch users entails creating a "fast-user-switch" file in /usr/local/bin; is there any easier way? One last thing so I don't have to open another needless thread: while experimenting I somehow lost the share folder in one of my accounts. Is there any way to get a share folder back? Thanks for any tips! Jim in NYC

    Read the article

  • Missing launcher after 12.04 upgrade

    - by Preston Zacharias
    I recently upgraded to Ubuntu 12.04, and after doing some updates and such my application launcher and title bars (for window dialogues) are missing. Basically the entire Unity GUI is missing! Not sure what happened, so I installed GNOME 3 and it was missing a launcher too, but it did have title bars. In addition, the bar found at the top that lets you know what's open and allows GNOME extensions to be displayed is not interactive. I can't click, right-click, alt + click (right or left), or alt + super click (right or left) anywhere! I even installed an application menu from the GNOME site and it is not interactive either. However, since there is no way to launch applications I have to use the terminal, and if I minimize an app it will disappear completely. Then I decided to try Unity 2D and it is incredibly messed up. Black background, the launcher is there, but the icons and top bar while on the desktop are completely distorted. They're not just pixelated; they're all sorts of funky colors, and when I open something from the Unity 2D launcher it will show as opened in the launcher but nothing appears on my screen. When trying to view videos on YouTube the video is distorted and looks just as Unity 2D does. Strangely enough: the audio works fine, just not the videos. Pictures load, but not ads that stream video. Any suggestions to get my launcher and the Unity GUI back? I tried reinstalling GNOME, Unity 3D, and Unity 2D from the terminal. No change. I also reinstalled the Unity desktop and tried resetting it: nothing happened.

    Read the article

  • Can it be a good idea to lease a house rather than a standard office-space for a software development shop? [closed]

    - by hamlin11
    Our lease is up on our US-based office-space in July, so it's back on my radar to evaluate our office-space situation. Two of our partners rather like the idea of leasing a house rather than standard office-space. We have 4 partners and one employee. I'm against the idea at this moment in time. Pros, as I see them Easier to get a good location (minimize commutes) All partners/employees have dogs. Easier to work longer hours without dog-duties pulling people back home More comfortable bathroom situation Residential Internet Rate Control of the thermostat Clients don't come to our office, so this would not change our image The additional comfort-level should facilitate a significantly higher-percentage of time "in the zone" for programmers and artists. Cons, as I see them Additional bills to pay (house-cleaning, yard, util, gas, electric) Additional time-overhead in dealing with bills (house-cleaning, yard, util, gas, electric) Additional overhead required to deal with issues that maintenance would have dealt with in a standard office-space Residential neighbors to contend with The equation starts to look a little nasty when factoring in potential time-overhead, especially on issues that a maintenance crew would deal with at a standard office complex. Can this be a good thing for a software development shop?

    Read the article

  • Restore Default Appearance in Ubuntu 12.04

    - by katya sehgal
    I'd like the original colour scheme and icon style of 12.04 back. I somehow lost the Ambiance theme (a possible error or upgrade error). I re-installed 'light-themes' from the terminal and got it back. But the panel at the top that shows the options for sound, battery and wi-fi has changed, and I cannot get the original setting back. In windows, the close and minimize buttons have shifted to the right instead of the original left side. I had installed MyUnity and Ubuntu Tweak but deleted them. As such, I want the original settings back. Kindly help me with the commands. I have searched for solutions; there are multiple and I need to be sure which I should follow. Kindly bear with me before marking this as a duplicate. Discoveries: the appearance is gray and boxy as outlined here (not sure it is the same problem); a similar 'gray and boxy' article here; desktop forgets theme; I have also tried the unity --reset command, but it never completes. I gave it 20 minutes.

    Read the article

  • How to package static content outside of web application?

    - by chinto
    Our web application has static content packaged as part of the WAR. We have been planning to move it out of the project and host it directly on Apache to achieve the following objectives: It's getting too big and is bloating the EAR size, resulting in slower deployment across nodes. Faster deployment times. Take the load off the application server. Host the static content on a subdomain, allowing some browsers (IE) to load resources simultaneously. Give us an option to use further caching, such as Apache mod_cache, apart from the cache headers we send out to browsers. We use yuicompressor-maven-plugin to aggregate and minimize JS files. My question is: how do I package and manage this static content outside of the web application? My current options are: A new Maven WAR project, still using the same plugin for aggregation and compression. Just a plain directory in SVN, using the YUI/Google compressor directly. Or is there a better technology out there to manage static content as a project?

    Read the article

  • End of Public Updates for Java SE 6

    - by Tori Wieldt
    It's important for developers and systems administrators to either make the transition over to Java SE 7 or to work with Oracle to get updates via the Java SE Support program. Have you updated to Java SE 7? Along with great features (Fork/Join, NIO, Project Coin), Java SE 7 is being updated and patched regularly. Java SE 7 has been out for over a year and is ready to download. The last publicly available release of Oracle JDK 6 is to be released in February 2013. This means that after 19 February 2013, all new security updates, patches and fixes for Java SE 6 and Java SE 5 will only be available through My Oracle Support and will thus require a commercial license with Oracle. In the event you are not ready to migrate to Java SE 7, Oracle offers: Java SE Support for continued access to critical bug fixes and security fixes as well as general maintenance for JDK 6. Additionally, Java SE Advanced and Suite offer superior diagnostics and manageability tools that minimize the costs of deployment, monitoring and maintenance of Java-based IT environments. The Java SE Support Roadmap reflects an updated timeline for the End of Public Updates for JDK 6. The End of Public Updates date has been extended from November 2012 to February 2013, to allow some more time for the transition to JDK 7. Older releases of Java SE 6 will still be available in the Java SE archive, but will require a commercial license with Oracle for any new security updates, patches and fixes. The End of Public Updates for Java SE 6 will not impact the usage, availability, or patching of Java SE 6 used for Fusion Middleware 11g and 12c. The support schedule for Java SE used for and in Fusion Middleware is not impacted by this announcement. For More Information Visit the Java SE page on Oracle.com.

    Read the article

  • How does a segment-based rendering engine (as in Descent) work?

    - by Calmarius
    As far as I know, Descent was one of the first games that featured a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (these cubes may be deformed as long as they remain convex and their sides remain roughly flat). These cubes are connected by their sides. The connected sides are traversable (doors or grids may be placed on these sides), while the unconnected sides are non-traversable walls. So the game is played inside this complex. Descent was software-rendered and had to be very fast to be playable on the 10-100 MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, but these levels are still rendered reasonably fast. So I think they tried to minimize the number of cubes rendered somehow. How do you choose which cubes to render for a given location? As far as I know they used a kind of portal rendering, but I couldn't find out what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
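
    The connected sides effectively turn a level into a graph of convex cells, and the classic portal approach is a depth-first walk over that graph, narrowing the view at every traversable side. A minimal sketch of the idea (Polygon, Frustum and DrawSegment are assumed placeholder types for illustration, not Descent's actual code):

        #include <vector>

        struct Polygon { /* the 4 shared corner vertices of a connected side */ };

        struct Frustum {
            bool CanSee(const Polygon& portal) const;        // assumed helper
            Frustum ClippedTo(const Polygon& portal) const;  // assumed helper
        };

        struct Side    { int neighbor = -1; Polygon portal; };  // -1 means solid wall
        struct Segment { Side sides[6]; /* vertices, textures, objects, ... */ };

        void DrawSegment(const Segment& seg);                    // assumed renderer call

        // Depth-first walk starting from the segment containing the camera:
        // a segment is rendered only if it is reachable through a chain of
        // visible portals, and the frustum is clipped at each portal so
        // hidden branches of the graph are never visited.
        void RenderVisible(const std::vector<Segment>& segs, int current,
                           const Frustum& frustum, std::vector<bool>& visited)
        {
            if (visited[current]) return;       // the segment graph can contain cycles
            visited[current] = true;
            DrawSegment(segs[current]);

            for (const Side& side : segs[current].sides) {
                if (side.neighbor < 0) continue;              // wall, nothing behind it
                if (!frustum.CanSee(side.portal)) continue;   // portal outside the view
                RenderVisible(segs, side.neighbor,
                              frustum.ClippedTo(side.portal), visited);
            }
        }

    Because every segment is convex, drawing one never requires sorting its own faces, which is part of why this style of engine was fast enough for software rendering.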

    Read the article

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being transferred to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish. We logically split our terrain mesh into smaller grid cells which are 32x32 world units. Each cell has metadata that dictates the four 256x256 textures that are used for splatting, along with the alpha blend data, shadow, and light mappings. Each cell contains 81 vertices in a 9x9 grid. Presently, we examine each cell and determine the four textures that are being used to splat the cell. We combine that geometry with any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order matches other like cells and blending still happens in the right order. But even with this batching approach, it isn't uncommon when looking out across an area of open terrain to have a batch count between 1200 and 1700, depending upon how frequently textures or texture blends differ between cells. We are only doing frustum culling presently. So, using texture splatting, are there other alternatives that can reduce the batch count and keep rendering performance-friendly even under DirectX 9c? We considered using texture atlases since we're targeting DirectX 9c & older OpenGL platforms, but trying to repeat textures using atlases and shaders results in seam artifacts which we haven't been able to eliminate except by disabling mipmapping, and disabling mipmapping results in poor-quality textures from a distance. How have others batched together terrain geometry such that one could splat terrain using various textures, minimizing batch count and texture state switches so that rendering performance isn't negatively impacted?
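
    For reference, the grouping step described above amounts to keying visible cells by their set of splat textures, independent of order, and issuing one draw call per key. A minimal sketch of that bookkeeping (Cell and TextureID are illustrative stand-ins, not the engine's own types):

        #include <algorithm>
        #include <array>
        #include <map>
        #include <vector>

        using TextureID  = int;
        using TextureKey = std::array<TextureID, 4>;   // the four splat textures

        struct Cell { TextureKey textures; /* 9x9 vertex grid, blend map, ... */ };

        // Group frustum-visible cells so that all cells sharing the same four
        // textures (in any order) end up in one batch.
        std::map<TextureKey, std::vector<const Cell*>>
        GroupCellsByTextureSet(const std::vector<Cell>& visibleCells)
        {
            std::map<TextureKey, std::vector<const Cell*>> batches;
            for (const Cell& cell : visibleCells) {
                TextureKey key = cell.textures;
                std::sort(key.begin(), key.end());   // order-independent key; the
                                                     // blend map is remapped to match
                batches[key].push_back(&cell);
            }
            return batches;                          // ideally one draw call per entry
        }

    Seen this way, the batch count scales with the number of distinct texture sets in view rather than with the number of cells.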

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested to me to use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see: to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++); C++ type safety (when using modern C++) makes everything easier to get "correct"; it's also the language I know best, so I'm more at ease with it even if I'm able to write some good Python code; potential gain in execution speed (but I don't think it will really be perceptible). However, I think there might be some drawbacks and I'm not sure of the real impact as I didn't try yet: it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't be so long to write; compilation time shouldn't be a problem for this case); I must assume that all the text files I'll read as input are in UTF-8, and I'm not sure this can be easily checked at runtime in C++, and the language will not check it for you; libraries in C++ are harder to manage than in scripting languages; I lack experience and foresight, so maybe I'm missing advantages and drawbacks. So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?
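
    For a concrete sense of what such a "script" looks like, here is a minimal sketch of a standalone C++ tool that a CMake add_custom_command step could invoke, in this case to generate a version header; the file names, arguments and output are made up for illustration:

        // gen_version.cpp - tiny build-step tool: writes a header containing
        // the version string passed on the command line.
        #include <fstream>
        #include <iostream>

        int main(int argc, char* argv[])
        {
            if (argc != 3) {
                std::cerr << "usage: gen_version <version> <output-header>\n";
                return 1;
            }
            std::ofstream out(argv[2]);
            if (!out) {
                std::cerr << "cannot open " << argv[2] << "\n";
                return 1;
            }
            out << "#pragma once\n"
                << "constexpr const char* kProjectVersion = \"" << argv[1] << "\";\n";
            return out.good() ? 0 : 1;
        }

    Such tools are compiled as ordinary targets and then run as custom build steps, so the only toolchain requirement really is the C++ compiler plus CMake; the trade-off is that anything involving heavy text processing quickly pulls in the third-party libraries the question worries about.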

    Read the article

  • Run Grunt task in Visual Studio Release Build with a bat file

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/19/run-grunt-task-in-visual-studio-release-build-with-a.aspx 1. Add a BeforeBuild target in your csproj file. Edit the XML with a text editor. <Target Name="BeforeBuild"> <Exec Condition="'$(Configuration)' == 'Release'" Command="script-optimize.bat" /> </Target> 2. Create the script-optimize.bat: REM "%~dp0" maps to the directory where this file exists cd %~dp0\..\YourProjectFolder call npm uninstall grunt call npm uninstall grunt call npm install --cache-min 604800 -g grunt-cli call npm install --cache-min 604800 grunt typescript requirejs copy less:compile less:min. This grunt command will compile TypeScript, run the RequireJS optimizer, and compile and minimize LESS. 3. Make it use the minified code when the Web.config compilation debug is set to false: <!-- These CustomCollectFiles actions are used so that the Scripts-Release folder/files are included when publishing even though they are not project references --> <Target Name="CustomCollectFiles"> <ItemGroup> <_CustomFiles Include="Scripts-Release\**\*" /> </ItemGroup> </Target> That should be all you need to get a Grunt task to minify and combine JS (plus other tasks) in a Visual Studio Release build with debug = false. This is a great video of Steve Sanderson talking about SPAs, npm, Knockout, Grunt, Gulp, etc. I highly recommend it.

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, and so I need to know how to store bitmaps in memory. (24bpp/32bpp, compressed/raw, etc.) I'm not working with 3D graphics or DirectX/OpenGL rendering and so I don't need to use graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?) How to store bitmaps for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?) How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?) Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, using more memory and requiring smaller contiguous memory segments. Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 secs.
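
    The third option trades CPU for memory. A minimal sketch of that pattern, keeping only the encoded bytes resident and decoding on demand (DecodeImage is a placeholder for whatever codec library is actually used; the 10-second idle window comes from the question itself):

        #include <chrono>
        #include <cstdint>
        #include <vector>

        struct DecodedPixels {
            int width = 0, height = 0;
            std::vector<uint32_t> argb;                 // packed 32-bpp pixels
        };

        class CachedBitmap {
        public:
            explicit CachedBitmap(std::vector<uint8_t> encoded)
                : encoded_(std::move(encoded)) {}

            const DecodedPixels& Pixels() {             // decode lazily on first use
                lastUse_ = Clock::now();
                if (decoded_.argb.empty())
                    decoded_ = DecodeImage(encoded_);   // placeholder codec call
                return decoded_;
            }

            void EvictIfIdle() {                        // call periodically
                using namespace std::chrono_literals;
                if (!decoded_.argb.empty() && Clock::now() - lastUse_ > 10s)
                    decoded_ = DecodedPixels{};         // drop the pixel buffer
            }

        private:
            using Clock = std::chrono::steady_clock;
            static DecodedPixels DecodeImage(const std::vector<uint8_t>& bytes);
            std::vector<uint8_t> encoded_;              // PNG/JPEG bytes stay resident
            DecodedPixels decoded_;
            Clock::time_point lastUse_ = Clock::now();
        };

    Whether this wins depends on how expensive the codec is relative to how often the pixels are touched; for frequently processed images the packed raw buffer from the first option is usually the better default.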

    Read the article

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that can not tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period: Reduce the size of each partition (by increasing the partition count) Increase the number of JVMs across the cluster (increasing the total number of primary service threads) Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed) Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves) Make sure that the backing map is as fast as possible (in most cases this means running on-heap) Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded) As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

    Read the article

  • How to revert to Eclipse's old behavior when double-clicking on frames or windows, in Juno?

    - by mattquiros
    I find Eclipse Juno very counter-intuitive. In my workspace I used to have just the package view on the left and my code on the right, and the console frame only showed up at the bottom right when I was printing something to it. When you double-clicked on a particular frame, it either went fullscreen within Eclipse or back to its original size alongside the other frames. In Juno, however, it seems that frames are put on layers on top of each other. When I print something to the console, the output frame only shows up as a small, useless square to my right. When I double-click, it goes full screen. When I double-click again, it occupies the full right frame, hiding my code. I double-click again and it goes full screen, and on the next double-click it shares the right frame, sitting on top of my code. Then when I minimize it, it goes to the side. Any way to go back to the good old days of Eclipse? Thanks.

    Read the article

  • Domain Specific Software Engineering (DSSE)

    Domain Specific Software Engineering (DSSE) believes that creating every application from nothing is not advantageous when existing systems can be leveraged to create the same application in less time and with less cost. This belief is founded in the idea that forcing applications to recreate existing functionality is unnecessary. Why would we build a better wheel when we already have four really good and proven wheels? DSSE suggests that we take an existing wheel and just modify it to fit an existing need of a system. This allows developers to leverage existing codebases so that more time and expense are focused on creating more usable functionality rather than just creating more functionality. As an example, how many functions do we need to create to send an email when one can be created and used by all other applications within the existing domain? Key Factors of DSSE: Domain, Technology, Business. A Domain in DSSE is used to control the problem space for a project. This control allows applications to be developed within specific constraints that focus development in a specific direction. Technology in DSSE offers a variety of technological solutions to be applied within a domain. Technology examples: Tools, Patterns, Architectures & Styles, Legacy Systems. Business is the motivator for any organization to use DSSE in their software development process. Business reasons to use DSSE: Minimize costs, Maximize market and profits. When these factors are used in combination, additional factors and benefits can be found. Results of combining the Key Factors of DSSE: Domain + Business = Corporate Core Competencies: domain expertise improved by market and business expertise. Domain + Technology = Application Family Architectures: all possible technological solutions to problems in a domain without any business constraints. Business + Technology = Domain-independent infrastructure: tools and techniques for building systems independent of all domains. Domain + Business + Technology = Domain-specific software engineering: applies technology to domain-related goals in the context of business and market expertise.

    Read the article

  • Ubuntu 12.10 Quantal Quetzal and AMD 12.11 Beta Driver

    - by White
    I'm using a Quantal AMDx64 install and a XFX Radeon HD5850 video card. I first enabled restricted drivers through additional drivers, but it resulted in breaking Unity and Compiz (I can only see my wallpaper and shortcuts. But the terminal still works and Nautilus too, however, without Close/Maximize/Minimize and slower). Then I uninstalled it and everything went back to normal. Then I installed it via terminal (12.10 version), and the result was the same. Then I downloaded it via ATI's web site (12.11 beta) and installed the .run file using the terminal, but the result was yet again the same. Then I went to the terminal and entered these commands: sudo apt-get remove --purge fglrx fglrx_* fglrx-amdcccle* fglrx-dev* - It said it had nothing to uninstall sudo rm /ect/x11/xorg.conf - No such directory sudo dpkg-reconfigure xserver-xorg sudo startx sudo cp/ect/x11/xorg.conf.orig /ect/x11/xorg.conf - Also, no such directory sudo aticonfig --initial sudo reboot Then, I was presented with the log in screen, but when I tried to login (with my account), it flashed a black screen and then threw me back. Guest account still works (without unity and compiz, tough) and I can still use TTY. And I also got the "AMD Testing Only" watermark. Then I figured that I should stop messing with the terminal and get help before I unleashed Apocalypse XD. Side notes: My Ubuntu is installed on a ext4 partition with 60GB, and I dual boot with Windows 7 (at least for now). My internet is a 50kbps 3G-ish, so downloading even small files is a pain, let alone a video driver. I would rather not reinstall the O.S., it was a herculean task to download everything I had in there, and I have very little free disk space for backups. I'm still new to Ubuntu (I know some basic commands), and I don't know how to debug, so please, be patient XD And using Windows, my internet is even slower (is that possible?), so it kind of leaves a torture aftertaste xD. So, if you guys could answer quickly, it would be greatly appreciated. Thanks in advance. If you need any info, just ask (and explain how to get it XD).

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically making it much easier than previous versions to take advantage of compression that’s built-in to the Web server. IIS 7 also supports dynamic compression which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing and so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let’s take a look at each of the two approaches available: Static Compression Compresses static content from the hard disk. IIS can cache this content by compressing the file once and storing the compressed file on disk and serving the compressed alias whenever static content is requested and it hasn’t changed. The overhead for this is minimal and should be aggressively enabled. Dynamic Compression Works against application generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such dynamic compression has a much bigger impact than static caching. How Compression is configured Compression in IIS 7.x  is configured with two .config file elements in the <system.WebServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline all the way from ApplicationHost.config down to the local web.config file. The following is from the the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config forlder) on IIS 7.5 with a couple of small adjustments (added json output and enabled dynamic compression): <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/json" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> <urlCompression doStaticCompression="true" doDynamicCompression="true" /> </system.webServer> </configuration> You can find documentation on the httpCompression and urlCompression keys here respectively: http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx The httpCompression Element – What and How to compress Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime-types which looks at returned Content-Type headers in HTTP responses. 
For example, I added the application/json to mime type to my dynamic compression types above to allow that content to be compressed as well since I have quite a bit of AJAX content that gets sent to the client. The UrlCompression Element – Enables and Disables Compression The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor whether compression is enabled, so it’s a good idea to be explcit! The default for doDynamicCompression='false”, but doStaticCompression="true"! Static Compression is enabled by Default, Dynamic Compression is not Because static compression is very efficient in IIS 7 it’s enabled by default server wide and there probably is no reason to ever change that setting. Dynamic compression however, since it’s more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn’t work. Setting: <urlCompression doDynamicCompression="true" /> in applicationhost.config appears to have no effect and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you’re not likely to want dynamic compression in every application on a server. Rather dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in applicationhost.config doesn’t work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression=”true” in web.config!!! How Static Compression works Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it’s fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server’s hard disk and then reads the content from this location if already compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at: %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\ If you look into the subfolders you’ll find compressed files: These files are pre-compressed and IIS serves them directly to the client until the underlying files are changed. As I mentioned before – static compression is on by default and there’s very little reason to turn that functionality off as it is efficient and just works out of the box. The one tweak you might want to do is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression. 
You can do this with the staticCompressionLevel setting on the scheme element: <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> Other than that the default settings are probably just fine. Dynamic Compression – not so fast! By default dynamic compression is disabled and that’s actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn’t make sense to compress *all* generated content as it would generate a significant amount of overhead. Scott Fortsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server’s performance. There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU related keys that can help minimize the impact of Dynamic Compression on a busy server: dynamicCompressionDisableCpuUsage dynamicCompressionEnableCpuUsage By default these are set to 90 and 50 which means that when the CPU hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again this is actually quite sensible as it utilizes CPU power from compression when available and falling off when the threshold has been hit. It’s a good way some of that extra CPU power on your big servers to use when utilization is low. Again these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90% maybe around 70% to make this a feature that kicks in only if there’s lots of power to spare. I’m not really sure how accurate these CPU readings that IIS uses are as Cpu usage on Web Servers can spike drastically even during low loads. Don’t trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment. Finally for dynamic compression I tend to add one Mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type: <add mimeType="application/json" enabled="true" /> What about Deflate Compression? The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to: <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> to get deflate style compression. The deflate algorithm produces slightly more compact output so I tend to prefer it over GZip but more HTTP clients (other than browsers) support GZip than Deflate so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in applicationhost.config didn’t show up on the site  right away. It required me to do a full IISReset to get that change to show up before I saw the change over to deflate compressed content. Content was slightly more compressed with deflate – not sure if it’s worth the slightly less common compression type, but the option at least is available. IIS 7 finally makes GZip Easy In summary IIS 7 makes GZip easy finally, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. 
    But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic Compression is a little more tricky as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use GZip compression.

    Read the article

  • Slow draw on some apps and dynamic clocks not working properly with ATI/AMD proprietary drivers

    - by Rakeka
    I've recently purchased a new computer (around July 2010) and I've been having some problems with proprietary video drivers on Linux. The hardware is: Video: ATI/AMD Radeon HD 5870 (XFX HD-587X-ZNFC); Motherboard: Asus P7P55D-E Deluxe; Processor: Intel i5 750; Memory: Kingston Hyperx KHX1600C8D3K2/4GX (2x - 8GB Total); Power Supply: XFX P1-750B-CAG9; There are no overclocks, not even the memories (they are at 1333mhz due processor memory controller limitation). The operational system is a homebrew Linux distribution with the following software: Architecture: x86_64 (multilib) Kernel: 2.6.35.10 Xorg: 7.5 Window Manager: wmii-3.9.2 Video Driver: ATI/AMD Catalyst 10.12 There are no desktop effects programs like compiz fusion or beryl. The problems: With ATI/AMD proprietary driver, some applications are with slow draw/redraw, and, the same applications make the driver to increase the card clocks to maximum (0% gpu activity, only the clocks are increased). I dunno exactly how to describe the slow draw but I'll list some applications and symptoms. xterm Flickers a lot when drawing continuous output; When I'm in a workspace with fullscreen xterm, The gpu load stays at 12% in idle, and, with smaller xterm, smaller GPU load. "aticonfig --odgc" output: Default Adapter - ATI Radeon HD 5800 Series Core (MHz) Memory (MHz) Current Clocks : 157 300 Current Peak : 850 1200 Configurable Peak Range : [600-900] [900-1300] GPU load : 12% "aticonfig --pplib-cmd 'get activity'" output: Current Activity is Core Clock: 157MHZ Memory Clock: 300MHZ VDDC: 950 Activity: 12 percent Performance Level: 0 Bus Speed: 5000 Bus Lanes: 16 Maximum Bus Lanes: 16 More examples: mplayer time info flickers on terminal; "find /" flickers a lot (It takes some time to stop with control-c. But, If I change the workspace or put some window upon it, just after the control-c, it stops instantly); "cat somefile" if the file is big (Xorg.0.log for example) it takes some time to display; vim and less (ex: find / | less) don't have much problems, just a little flicker when scrolling; mplayer (no gui) Slow reproduction and seek with -vo x11; Tearing with -vo xv; Time info flickers on terminal (xterm consequence); gvim A little slow draw when scrolling with page up/page down; Firefox Slow draw/redraw on some pages like www.boadica.com.br and sometimes on www.youtube.com with flash enable (never noticed on many pages); Corruptions when informative yellow boxes are showing and scroll the page (an gray box appears at the same place of the informative box); "Wallpaper" After minimizing a fullscreen window or changing to an empty workspace it takes some time to redraw wallpaper. "Video Card" The core and memory clocks are increased with the events described above and on other situations like change workspace (even without wallpaper), minimize, maximize or move a window; Idle clocks: Core: 157mhz, Memory: 300mhz Full clocks: Core: 850mhz, Memory: 1200mhz xpdf Painful slow scrolling; display (from ImageMagick) Slow menus and sometimes slow image redraw; Programs that I use and are apparently without problems: gimp; pidgin; mplayer (-vo gl, gl2); blender; unigine heaven (better fps than on Windows); doom3; tibia; penumbra overture; amnesia the dark descent (wine); diablo 2 (wine); No problems on Windows (Windows 7 Ultimate 64bit). 
And special note to this: Full desktop effects from Debian and Ubuntu gnome appearance cpanel don't cause ANY problems, even the core and memory clocks don't increase when change workspace, minimize, maximize or move a window. What I've tested: Unsuccessful tests: Tested all drivers versions since 10.6 (released approximately when I've installed the first slackware in this PC); Tested other video card - ATI/AMD Radeon HD 5570 (XFX HD-557X-ZHF2); Tested some options on xorg.conf and that I've found googling (some of these options are commented on my xorg.conf. I'll send the links at the end of post); Tested some patches like 107_fedora_dont_fill_bg_none.patch and xserver-xorg-backclear.patch from Arch Linux Catalyst page (https://wiki.archlinux.org/index.php/ATI_Catalyst); Tested other distros and software versions: Tested XORG-7.6 on my own distribution; Tested Debian Squeeze (testing - from 2010-12-20); Tested Ubuntu Marverick (10.10); Tested Slackware 13.1; Distros info: Architecture: i386 Debian and Ubuntu with all default software (kernel, gnome, xorg, drivers); Slackware with Catalyst from AMD page and default window managers like: fvwm, xfce, and my own build of wmii; Successful tests: Tested other video card (only on my homebrew distro) - NVIDIA Geforce 7300GS with driver 260.19.29; That didn't shown the slow draw problems, but that card is a bit obsolete, so, dunno if that lacks features like the dynamic clocks. I don't dispose of other video cards like nvidia g/gt/gts/gtx 200~400~500 or Radeon HD 3000/4000/6000 to make more tests. Tested other hardware: Video: ATI/AMD Radeon HD 5570 (XFX HD-557X-ZHF2); Motherboard: Intel DG31PR; Processor: Core 2 Duo E6750; Software for that hardware: Fresh install of same distros (except for the mine) with same program versions; That video card (HD 5570) were full time at the maximum clocks (something like 500/750, don't remember) in all the operational systems (Windows XP and Windows 7 too), but it didn't shown the same problems that I have here. I've googled a lot about common problems with ATI/AMD proprietary drivers for Linux and didn't find similar problems, except by the Firefox corruptions, that the solutions were to disable ATI Direct2DAccel and use XAA. With XAA the problems persists and the other applications like pidgin and rest of Firefox showed the same problems of slow draw/redraw. Open source Drivers: With open source drivers (xf86-video-ati-6.13.2) I hadn't the same slow draw problems, but, had other problems, that, for now, make it no viable solution. I'll not discuss it here because this is another line of problems and will confuse everything. If it happens to be the only solution, I'll make another thread to discuss it. Logs and Configs: kernel .config dmesg xorg package list xorg.conf Xorg.0.log

    Read the article

  • HttpAddUrl permissions

    - by Ghostrider
    I'm trying to run a custom WinHTTP-based web server on a Windows Server 2008 machine. I pass "http://*:22222/" to HttpAddUrl. When I start my executable as Administrator or LocalSystem everything works fine. However, if I try to run it as NetworkService to minimize security risks (since there are no legitimate reasons for the app to use admin rights), the function fails with an "Access Denied" error code. I wasn't aware of NetworkService having any restrictions on which ports and interfaces it can listen on. Is there a way to configure permissions in such a way that I can actually run the app under the NetworkService account and connect to it from other internet hosts?
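
    Assuming the failure is the usual HTTP.sys URL namespace permission check (non-admin accounts need an explicit reservation for the URL prefix they register), the standard fix is a one-time URL ACL reservation for the service account, run once as an administrator:

        netsh http add urlacl url=http://*:22222/ user="NT AUTHORITY\NETWORK SERVICE"

    Once that reservation exists, HttpAddUrl with the same prefix string should succeed for a process running as NetworkService. This is a sketch of the common approach, not a guarantee that it is the only possible cause of the error.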

    Read the article
