Search Results

Search found 10215 results on 409 pages for 'ram usage'.


  • What should you bring to the table as a Software Architect?

    - by Ahmad Mageed
    There have been many questions with good answers about the role of a Software Architect (SA) on StackOverflow and Programmers SE. I am trying to ask a slightly more focused question than those. The very definition of an SA is broad, so for the sake of this question let's define an SA as follows: A Software Architect guides the overall design of a project, gets involved with coding efforts, conducts code reviews, and selects the technologies to be used. In other words, I am not talking about managerial rest and vest at the crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position, I wouldn't want to be away from coding. I might sacrifice some time to interface with clients and Business Analysts etc., but I would still be technically involved and not just aware of what's going on through meetings. With these points in mind, what should an SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way," i.e., coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy, then be laid back and jump in as needed to correct the ship's direction? Depending on the organization this might not work. An SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project. If it's a fresh project they have more choices, whereas they might have fewer for existing projects. Here are some SA stories I have experienced, as a way of sharing some background in hopes that answers to my questions might also shed some light on these issues: I've worked with an SA who code reviewed literally every single line of code of the team. The SA would do this not just for our project but for other projects in the organization (imagine the time spent on this). At first it was useful for enforcing certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong, it was a good way to teach junior developers and force them to think of the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian. One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently, while the other library would've saved us a lot of time. Fast forward to the last month of the project, and the clients were complaining about performance. The only solution was to change certain functionality to use the originally ignored approach, despite early warnings from the devs. By that point a lot of code was thrown out and not reusable, leading to overtime and stress. Sadly, the estimates used for the project were based on the old approach, which my project was forbidden from using, so they weren't an appropriate basis for estimation. I would hear the PM say "we've done this before," when in reality they had not, since we were using a new library and the devs working on it were not the same devs used on the old project. Another SA would enforce the usage of DTOs, DOs, BOs, Service layers and so on for all projects. New devs had to learn this architecture, and the SA adamantly enforced the usage guidelines. Exceptions to the guidelines were made only when it was truly difficult to follow them. The SA was set in their approach.
    Classes for DTOs and all CRUD operations were generated via CodeSmith, and database schemas were another similar ball of wax. However, having used this setup everywhere, the SA was not open to newer technologies such as LINQ to SQL or Entity Framework. I am not using this post as a platform for venting. There were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to: What should an SA bring to the table? How can they strike a balance in their decision making? Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules? Anything else to consider? Thanks! I'm sure these job tasks extend easily to people who are senior devs or technical leads, so feel free to answer in that capacity as well.

    Read the article

  • New Computer

    - by Matt Christian
    Last night I received the computer I ordered with my tax return money. Here are the specs of my old computer:
    - Pentium 4 processor
    - 3-4 GB RAM
    - ~256 GB HDD space (2 drives)
    - nVidia card (AGP 8x)
    Sorry I can't be more specific, my memory is gone :p  Here are the new computer specs (mostly):
    - 2.8 GHz Intel Core i7 quad-core
    - 6 GB RAM
    - 1 TB HDD space (1 drive)
    - 1 GB Radeon card (PCI Express)
    I also got a new monitor (22" Asus with HDMI), so I will be using my 19" widescreen as a secondary monitor. If I remember, I'll hop on here and post the specifics later on...

    Read the article

  • Is my computer slow due to lack of swap

    - by Kristian Jensen
    A few months ago, I installed Ubuntu 12.04 alongside Windows 7 on my Asus EEE-PC 1015bx. It has a tendency to freeze, and when I tried to investigate I found that a swap partition of only 256 MB had been created. The Asus EEE-PC 1015bx ships with only 1 GB of RAM, and it is not possible to add more or exchange the existing 1 GB module for a larger one. When looking at the system monitor, it looks like all of the swap is being used along with 70-75% of the RAM, even with very few applications running. Can the lack of swap space be the reason for my computer running slowly and at times freezing? How can I add a swap partition? Or should I add a swap file instead? At the moment, I see two partitions in the system monitor: one 28.6 GB ext4 partition, which must be the one containing Ubuntu, and one 100 GB fuseblk partition, which I assume is the one holding Windows. It shows that I have 18.6 GB of free space on the ext4 partition. Can I "take a bite" from the ext4 partition and convert it into a swap partition? I was thinking of something like 3 GB for swap, considering my limited RAM. I hope that someone can guide me through this. Thank you. 20th Oct 2012 - Further details: Thank you for the answer below, which I find very useful. I am certainly considering switching to one of your suggested shells, as I can see from the Internet that many people have posted that these require far fewer resources than Ubuntu. It seems to me that Lubuntu is the perfect match for my very limited computer. I will have to wait a few days, though, as I am presently limited to a very slow and restricted Internet connection via satellite. But will Lubuntu install as simply another shell replacing Unity, or will it replace Ubuntu altogether? Will the software that I have installed under Ubuntu still be accessible in Lubuntu? And can I return to Ubuntu if required? Regarding the actual question of swap: when I run GParted, it shows me one NTFS partition of 100 GB from which it boots, and the previously mentioned ext4 partition of 28.6 GB is not listed. Could it be that my Ubuntu installation resides inside this 100 GB NTFS partition? And if so, can I take a bite of it for my swap partition? Since GParted is shown in Danish, I hope that you can make out what I mean. System monitoring shows the details below. Once again I sincerely hope that you can help. Thank you.
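
    For reference, a swap file is usually the simpler route on an already-installed system, since it avoids repartitioning. A minimal sketch of the usual commands, assuming a 2 GB file at /swapfile (both the size and the path are arbitrary choices here):

        # create and activate a 2 GB swap file (run with sudo)
        sudo dd if=/dev/zero of=/swapfile bs=1M count=2048   # allocate the file
        sudo chmod 600 /swapfile                             # restrict permissions
        sudo mkswap /swapfile                                # format it as swap
        sudo swapon /swapfile                                # enable it immediately
        # to make it permanent, add this line to /etc/fstab:
        # /swapfile  none  swap  sw  0  0

    Checking with free -m (or the system monitor) afterwards should show the swap total increased by the size of the new file.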

    Read the article

  • Ubuntu Lagging even LXDE freezes

    - by Anas Ismail Khan
    Laptop, i3, RAM: 2 GB, running 14.04 LTS... and it lags like hell. Even if I open more than 4 tabs in Chrome, it freezes, and often I have no choice but to restart; multi-tasking is kind of difficult and at times impossible. Now there's this whole thing about Lubuntu and LXDE being supposed to be super-fast, so I installed LXDE (mind, not lubuntu-desktop, just LXDE). And it too freezes every now and then, and trust me, when it freezes, it does so worse than Unity... ESPECIALLY when I start PCManFM... and mount a disk or two... Any ideas as to why this is happening? The minimum requirement for Unity is supposed to be 1 GB of RAM, and people are running it fine even on 512 MB...

    Read the article

  • Screen Corruption in half the screen only

    - by Guy DAmico
    About 50% of my Natty desktop screen is corrupted. Once that happens I can reboot as many times as I want, but the problem continues. If I log out and then into Windows for a day, I may be successful and boot Ubuntu with a good screen. The desktop is formatted correctly and there's no pixelation; rather, there is a fine-grained white crosshatch pattern covering the entire screen. If I open any application, the screen corruption worsens, eventually to the point where I can no longer make out anything. I ran a RAM memory test without any errors. I have no display issues when running Windows 7. Any ideas? My computer is a dual-boot stock Dell 5150 with 3 GB of RAM and onboard video.

    Read the article

  • Routing Manager for WCF4

    This article describes the design, implementation, and usage of a custom Routing Manager for managing messages via the Routing Service technology built into .NET 4.
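
    For orientation, the built-in Routing Service is hosted like any other WCF service and driven by a RoutingConfiguration filter table. The sketch below is illustrative only; the addresses, the BasicHttpBinding, and the MatchAll filter are assumptions, not the article's actual code:

        using System;
        using System.Collections.Generic;
        using System.ServiceModel;
        using System.ServiceModel.Description;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Routing;

        class RouterHost
        {
            static void Main()
            {
                // Endpoint of the destination service (illustrative address only).
                var destination = new ServiceEndpoint(
                    ContractDescription.GetContract(typeof(IRequestReplyRouter)),
                    new BasicHttpBinding(),
                    new EndpointAddress("http://localhost:8000/SomeService"));

                // Route every incoming message to that destination.
                var config = new RoutingConfiguration();
                config.FilterTable.Add(new MatchAllMessageFilter(),
                                       new List<ServiceEndpoint> { destination });

                // Host the built-in RoutingService and attach the routing behavior.
                using (var host = new ServiceHost(typeof(RoutingService),
                                                  new Uri("http://localhost:8001/Router")))
                {
                    host.Description.Behaviors.Add(new RoutingBehavior(config));
                    host.AddServiceEndpoint(typeof(IRequestReplyRouter),
                                            new BasicHttpBinding(), string.Empty);
                    host.Open();
                    Console.WriteLine("Router running. Press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }

    A custom routing manager along the lines the article describes would typically sit on top of this, rebuilding the filter table at runtime instead of hard-coding it as above.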

    Read the article

  • Problem with Ogmo Editor (is Tiled Editor a solution?)

    - by Mentoliptus
    I made a level editor for a puzzle game with Ogmo Editor and gave it to our designer/level designer. When he downloaded and started Ogmo, his CPU usage went to 100%. I looked at my CPU usage while Ogmo is running, and it goes from 20% to 30% (which is also high for an application like Ogmo). He has a Windows 7 VM running on his Mac and I have a normal Windows PC; can this be the problem? I found a thread on the FlashFunk forum that confirms that Ogmo has CPU usage issues. Has anybody solved this issue? The solution seems to be to use Tiled Editor, but I have never used it before. Is it difficult to switch a level editor from Ogmo to Tiled? Can they export in the same format (XML with CSV elements for my puzzle game)?
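
    On the export question: Tiled's native TMX format is XML with tile layers optionally encoded as CSV, which is at least close to the XML-with-CSV layout described above. A tiny illustrative sample (the tile size, tileset name, and layer data are made up):

        <?xml version="1.0" encoding="UTF-8"?>
        <map version="1.0" orientation="orthogonal" width="4" height="2" tilewidth="40" tileheight="40">
          <tileset firstgid="1" name="puzzle-tiles" tilewidth="40" tileheight="40">
            <image source="puzzle-tiles.png"/>
          </tileset>
          <layer name="Ground" width="4" height="2">
            <data encoding="csv">
              1,1,2,2,
              3,0,0,3
            </data>
          </layer>
        </map>

    Whether that maps cleanly onto the existing Ogmo-based loader depends on how the current XML is structured, so some loader changes should probably be expected.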

    Read the article

  • Does software rot refer primarily to performance, or to messy code?

    - by Kazark
    Wikipedia's definition of software rot focuses on the performance of the software. This is a different usage than I am used to; I had thought of it much more in terms of the cleanliness and design of the code, in terms of the code having all the standard quality characteristics: readability, maintainability, etc. Now, performance is likely to go down when the code becomes unreadable, because no one knows what is going on. But does the term software rot have special reference to performance? Or am I right in thinking it refers to the cleanliness of the code? Or is this perhaps a case of multiple senses of the term being in common usage: from the user's perspective, it has to do with performance; but for the software craftsman, it has to do more specifically with how the code reads?

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns. Has that amount of transparency happened? Is there a TechNet article somewhere? Suppose my score was limited by my Memory subscore of 5.9. A naive person would suggest: buy faster RAM. Which is wrong, of course. From the Windows help: If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9. You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from 3 Windows blog entries and an article:

    Memory subscore
    Score     Conditions
    =======   ================================
    1.0       < 256 MB
    2.0       < 500 MB
    2.9       <= 512 MB
    3.5       < 704 MB
    3.9       < 944 MB
    4.5       <= 1.5 GB
    5.9       < 4.0GB-64MB on a 64-bit OS
              Windows Vista highest score
    7.9       Windows 7 highest score

    Graphics subscore
    Score     Conditions
    =======   ======================
    1.0       doesn't support DX9
    1.9       doesn't support WDDM
    4.9       does not support Pixel Shader 3.0
    5.9       doesn't support DX10 or WDDM 1.1
              Windows Vista highest score
    7.9       Windows 7 highest score

    Gaming graphics subscore
    Score     Result
    =======   =============================
    1.0       doesn't support D3D
    2.0       supports D3D9, DX9 and WDDM
    5.9       doesn't support DX10 or WDDM 1.1
              Windows Vista highest score
    6.0-6.9   good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
    7.0-7.9   even higher framerates at even higher resolutions
    7.9       Windows 7 highest score

    Processor subscore
    Score     Conditions
    =======   ==========================================================================
    5.9       Windows Vista highest score
    6.0-6.9   many quad core processors will be able to score in the high 6 low 7 ranges
    7.0+      many quad core processors will be able to score in the high 6 low 7 ranges
    7.9       8-core systems will be able to approach 8.9
              Windows 7 highest score

    Primary hard disk subscore (note)
    Score     Conditions
    =======   ========================================
    1.9       Limit for pathological drives that stop responding when pending writes
    2.0       Limit for pathological drives that stop responding when pending writes
    2.9       Limit for pathological drives that stop responding when pending writes
    3.0       Limit for pathological drives that stop responding when pending writes
    5.9       highest you're likely to see without SSD
              Windows Vista highest score
    7.9       Windows 7 highest score

    Bonus Chatter: You can find your detailed WEI test results in C:\Windows\Performance\WinSAT\DataStore, e.g.
    2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml

    <WinSAT>
      <WinSPR>
        <DiskScore>5.9</DiskScore>
      </WinSPR>
      <Metrics>
        <DiskMetrics>
          <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
          <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
          <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
        </DiskMetrics>
      </Metrics>
    </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine; how do I increase my hard drive's random I/O throughput?

    Update - Amount of memory limits rating: Some people don't believe Microsoft's statement that having less than 4 GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. From xxx.Formal.Assessment (Recent).WinSAT.xml:

    <WinSPR>
      <LimitsApplied>
        <MemoryScore>
          <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
        </MemoryScore>
      </LimitsApplied>
    </WinSPR>

    References:
    Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
    Understand and improve your computer's performance in Windows Vista
    Engineering Windows 7 Blog: Engineering the Windows 7 “Windows Experience Index”

    Read the article

  • Video stutter when using external drive

    - by psion
    When using Boxee to play video files off an external Western Digital 1 TB drive formatted NTFS, I notice a slight stutter in the video every 5-10 seconds. When using mplayer, it doesn't stutter as often, but it still stutters occasionally. If I play the video off the local SATA drive, it plays fine, even in Boxee. I use this computer as my HTPC and I just switched it from Windows to Linux. In Windows, I never had any sort of stutter playing movies from the drive. I am using the latest Intel graphics drivers (for the Intel GMA 950).

    root@eee-htpc:/home/htpc# grep wd /etc/mtab
    /dev/sdb1 /mnt/wd2 fuseblk rw,nosuid,nodev,allow_other,blksize=512 0 0

    I notice that despite trying to use ntfs or ntfs-3g, Ubuntu uses ntfs-fuse, which I've heard is slower.

    /dev/sdb1:
     Timing buffered disk reads: 80 MB in 3.07 seconds = 26.08 MB/sec

    root@eee-htpc:/mnt/wd2# dd if=/dev/zero of=./120mb bs=1024 count=120000
    root@eee-htpc:/mnt/wd2# time mv ./120mb /home/htpc
    real 0m2.095s
    user 0m0.016s
    sys 0m0.736s

    Even though FUSE has a reputation for being slow, it should easily be fast enough for playing standard-definition video files. So why the video stutter? Edit: The issue seems to be CPU overhead from either playing off a USB device or from ntfs/fuse. Watching CPU usage with top, local files use 10-40% CPU. Watching the same video on the external NTFS-formatted drive, it spikes to 170% (over 100% because of hyperthreading). To me it seems like it must be overhead from the fuse driver, though I don't know if it has more or less overhead than ntfs-3g. It's an EeeBox B202 with an Atom 270, so not exactly the most powerful machine out there. Edit 2: I believe the solution would be to use non-fuse drivers or different fuse drivers; so far I have not been able to. Edit 3: I've probably edited this more times than I should, but as an update I have upgraded the NTFS drivers to ntfs-3g 2010.8.8 external FUSE 28 - Third Generation NTFS Driver, using the following PPA: ppa:x3lectric/team-iquik-releases. When first opening a video file in Boxee that's on NTFS there's still the same amount of lag. After a few minutes of video, the lag seems to go away and the CPU usage comes down to 10-40%. Every so often, though, it begins to stutter again. Also, if I skip ahead/back in the file, it begins to stutter a lot.
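
    One thing worth trying (a sketch, not a verified fix for this machine): mount the volume explicitly with the ntfs-3g driver and cut down write traffic during playback. The device and mount point below come from the mtab output above; the options are standard ntfs-3g ones:

        sudo umount /mnt/wd2
        # mount explicitly with ntfs-3g; noatime avoids access-time writes on every
        # read, and big_writes (an ntfs-3g option) lowers per-request FUSE overhead
        # for writes
        sudo mount -t ntfs-3g -o noatime,big_writes /dev/sdb1 /mnt/wd2

    If CPU use stays high, comparing top while reading a large file with dd against top while playing the same file helps separate filesystem overhead from decoding overhead.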

    Read the article

  • ATI Radeon HD 5750 and lagging in games and youtube videos

    - by Morten Fjord Christensen
    I have an X-ONE W-601 desktop PC:
    3.1 GHz AMD quad-core Athlon II X4 645
    8 GB DDR3 RAM
    1000 GB hard disk, 7200 RPM
    ATI Radeon HD 5750 with 1 GB GDDR5 RAM
    I'm running Ubuntu 11.10 64-bit and have installed the proprietary driver, but games still lag, and videos a little bit too. I've been googling around and have seen that it has something to do with the older drivers from AMD and KMS, but no guide has walked me through making my graphics card work smoothly. I don't know if this helps, but "fglrxinfo" in a terminal shows:
    display: :0  screen: 0
    OpenGL vendor string: ATI Technologies Inc.
    OpenGL renderer string: ATI Radeon HD 5700 Series
    OpenGL version string: 4.1.11005 Compatibility Profile Context
    And the driver check command shows:
    [ 51.184] (II) ATI Proprietary Linux Driver Version Identifier:8.88.7
    Any help appreciated :D

    Read the article

  • detecting when you are going to reach your hit limit for Google Analytics free account

    - by crmpicco
    I am a user of a free Google Analytics account and I'm slightly concerned that I may be approaching the 10,000,000 hits (pageviews, events, etc.) per month limit. Google state in their documentation: These limits apply to the Web Property / Property / Tracking ID. 10 million hits per month per property. If you go over this limit, the Google Analytics team might contact you and ask you to upgrade to Premium or implement client sampling to reduce the amount of data being sent to Google Analytics. However, I note that there is nothing to say that you can review or check up on your current usage for the month. I have administrator access to the Google Analytics account, but I see no feature that lets me check up on my monthly usage. I don't know if Google offer this, either by means of the admin interface or via their support channels, but it would certainly be a useful feature. Is there any way for a free GA user to obtain this information?

    Read the article

  • "Well, Swing took a bit of a beating this week..."

    - by Geertjan
    One unique aspect of the NetBeans community presence at JavaOne 2012 was its usage of large panels to highlight and discuss various aspects (e.g., Java EE, JavaFX, etc) of NetBeans IDE usage and tools. For example, here's a pic of one of the panels, taken by Markus Eisele: Above you see me, Sean Comerford from ESPN.com, Gerrick Bivins from Halliburton, Angelo D'Agnano and Ioannis Kostaras from the NATO Programming Center, and Çagatay Çivici from PrimeFaces. (And Tinu Awopetu was also on the panel but not in the picture!) On one of those panels a remark was made which has kind of stuck with me. Henry Arousell, a member of the "NetBeans Platform Discussion Panel", who works on accounting software in Sweden, together with Thomas Boqvist, who was also at JavaOne, said, a bit despondently, I thought, the following words at the start of the demo of his very professional looking accounting software: "Well, Swing took a bit of a beating this week..." That remark comes in the light of several JavaFX sessions held at JavaOne, together with many sessions from the web and mobile worlds making the argument that the browser, tablet, and mobile platforms are the future of all applications everywhere. However, then I had another look at the list of Duke's Choice Award winners: http://www.oracle.com/us/corporate/press/1854931 OK, there are 10 winners of the Duke's Choice Award this year. Three of them (JDuchess, London Java Community, Student Nokia Developer Group) are not awards for software, but for people or groups. So, that leaves seven awards. Three of them (Hadoop, Jelastic, and Parleys) are, in one way or another, some kind of web-oriented solution, though both Hadoop and Jelastic are broader than that, but are service-oriented solutions, relating to cloud technologies. That leaves four others: NATO air defense software, Liquid Robotics software, AgroSense software, and UNHCR Refugee Registration software. All these are, on the software level, Java desktop solutions that, on the UI layer, make use of Java Swing, together with LuciadMaps (NATO), GeoToolkit (AgroSense), and WorldWind (Liquid Robotics). (And, it went even further than that, i.e., this is not passive usage of Swing but active and motivated: Timon Veenstra, during his AgroSense demo, said "There are far more Swing applications out there than we seem to think. Web developers just make more noise." And, during his Liquid Robotics demo, James Gosling said: "Not everything can be done in HTML.") Seems to me that Java Swing was the enabler of more Duke's Choice Award winners this year than any other UI-oriented Java technology. Now, I'm not going to interpret that one way or another, since I've noticed that interpretations of facts tend to validate some underlying agenda. Take any fact anywhere and you can interpret it to prove whatever opinion you're already holding to be true. Therefore, no interpretation from me. Simply stating the fact that Swing, far from taking a beating during JavaOne 2012, was a more significant user interface enabler of Duke's Choice Award winners than any other Java user interface technology. That's not an interpretation, but a fact.

    Read the article

  • How are minimum system requirements determined?

    - by Michael McGowan
    We've all seen countless examples of software that ships with "minimum system requirements" like the following: Windows XP/Vista/7 1GB RAM 200 MB Storage How are these generally determined? Obviously sometimes there are specific constraints (if the program takes 200 MB on disk then that is a hard requirement). Aside from those situations, many times for things like RAM or processor it turns out that more/faster is better with no hard constraint. How are these determined? Do developers just make up numbers that seem reasonable? Does QA go through some rigorous process testing various requirements until they find the lowest settings with acceptable performance? My instinct says it should be the latter but is often the former in practice.

    Read the article

  • Not getting GUI mode on ubuntu 12.04 desktop edition after installation on vmware workstation 7

    - by Salil Naik
    I installed Ubuntu 12.04 desktop edition on my PC using VMware Workstation 7. I assigned 1 GB RAM and a 20 GB hard disk to Ubuntu. When starting the Ubuntu virtual machine, it does not start up in GUI mode; it always prompts me for my login ID in text mode, and even after waiting a long time the GUI never appears. I tried running: sudo apt-get install updates, sudo apt-get install xinit, sudo apt-get install ubuntu-desktop. Honestly, I don't know the meaning of all of these; I am very new to Ubuntu. Please help me figure out what to do. Below is my laptop configuration: OS: Genuine Windows 7 Home Basic (64-bit), RAM: 3 GB, Processor: Intel Core i3. Regards, Salil
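
    For what it's worth, the usual first steps for this symptom on 12.04 are to make sure the desktop packages are really installed and then start the display manager by hand. A sketch of commonly suggested commands to run from the text login (not a guaranteed fix):

        sudo apt-get update                                 # refresh package lists first
        sudo apt-get install --reinstall ubuntu-desktop     # make sure the full desktop is present
        sudo service lightdm start                          # start the display manager (the graphical login)
        # if it still fails, the X server log usually says why:
        grep EE /var/log/Xorg.0.log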

    Read the article

  • Empirical Evidence of Popularity of Git and Mercurial

    - by ana
    It's 2012! Mercurial and Git are both still strong. I understand the trade-offs of both. I also understand everyone has some sort of preference for one or the other. That's fine. I'm looking for some information on level of usage of both. For example, on stackoverflow.com, searching for Git gets you 12000 hits, Mercurial gets you 3000. Google Trends says it's 1.9:1.0 for Git. What other empirical information is available to estimate the relative usage of both tools?

    Read the article

  • Object Oriented programming on 8-bit MCU Case Study

    - by Calvin Grier
    I see that there are a lot of questions related to OO programming here. I'm actually trying to find a specific resource related to embedded OO approaches for an 8-bit MCU. Several years back (maybe 6) I was looking for material related to object-oriented programming for resource-constrained 8051 microprocessors. I found an article/website with a case history of a design group that used a part with very little RAM and implemented many object-based constructs during their C design and development. I believe it was an 8051. The project was a success and managed to stay inside the very small ROM/RAM they had available. I'm attempting to find it again, but Google can't locate it. The article was well written and recommended a "mixed" approach, using C methods for inheritance and encapsulation - if I recall correctly. Can anyone help me locate this article?
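
    As a reminder of what that "mixed" approach typically looks like in plain C (a generic sketch, not the article being searched for): encapsulation via a struct plus functions that take a pointer to it, and single inheritance by making the base struct the first member of the derived one.

        #include <stdint.h>

        /* "Base class": encapsulated state plus operations that take the object pointer. */
        typedef struct {
            uint8_t id;
            uint8_t state;
        } Device;

        void Device_init(Device *d, uint8_t id) { d->id = id; d->state = 0; }
        void Device_tick(Device *d)             { d->state++; }

        /* "Derived class": base struct placed first, so a Sensor* can be treated as a Device*. */
        typedef struct {
            Device base;          /* inheritance by composition */
            uint8_t last_reading;
        } Sensor;

        void Sensor_sample(Sensor *s, uint8_t raw)
        {
            s->last_reading = raw;
            Device_tick(&s->base);   /* reuse the "base class" behaviour */
        }

    On very small parts this is usually done without function pointers (no virtual dispatch), which keeps both RAM and call overhead predictable.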

    Read the article

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I've been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question. Chargeback background, information and industry analysis: Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity and more. These resources are shared by departments within a company, or by a number of companies when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring and charging back the departments in the company that are using these shared cloud resources. In both cases, the inability to determine shared resource costs and charge them back to the company, department or user consuming those resources limits any clear measure of business benefit and impacts the company's ability to measure Return on Investment (ROI). An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise; most companies implement a showback mechanism before a full chargeback system is put in place. Oracle chargeback product: Oracle Enterprise Manager provides tools for defining detailed chargeback plans spanning different metrics collected for each type of resource, as well as defining Cost Centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. a flat-rate management fee). Chargeback has rich out-of-the-box reports. Trending reports show how charges and resource consumption vary over time, while Summary reports show the breakdown of charges or usage by different dimensions such as Cost Center or Target Type. These reports help consumers understand how their charges relate to their consumption and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel or PowerPoint.

    Read the article

  • XNA: Huge Tile Map, long load times

    - by Zach
    Recently I built a tile map generator for a game project. What I am very proud of is that I finally got it to the point where I can have a GIANT 2D map build perfectly on my PC - about 120,000 pixels by 40,000 pixels. I can go larger, actually, but I have only one drawback. #1: RAM - the map currently uses about 320 MB of RAM, and I know the Xbox allows 512 MB, I think? #2: It takes 20 minutes for the map to build and then display on the Xbox; on my PC it takes less than a few seconds. I need to bring that 20 minutes of generating down to however little I can, and I need to lower the amount of RAM usage while still being able to generate my map. Right now everything is stored in jagged arrays, each piece generated at a size of 1280x720 (the "mother piece"), up to the amount that I need. Every block is exactly 40x40 pixels; however, the blocks get removed from a List or regenerated into a List depending on how close the mother piece is to the player. That saves A LOT of CPU, so at all times it's looping through no more than some 5,184 blocks. Well, at least I'm sure of this. But how can I lower my RAM usage without hurting the size of the map, and how can I lower these INSANE loading times? EDIT: Let me explain myself better. Also, I'd like to let everyone know that I'm inexperienced with many of these things. So here is an example of the arrays I'm using, in shortened form:

    int[][] array = new int[30][];
    array[0] = new int[] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 };
    array[1] = new int[] { 1, 3, 3, 3, 3, 1, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 };

    That goes on for around 30 arrays downward. Now, every time it hits a 1, it generates a 1280x720 tile map piece, and it does that exactly the way shown above. This is how I loop through those arrays:

    for (int i = 0; i < array.Length; i += 1)
    {
        for (int h = 0; h < array[i].Length; h += 1)
        {
        }
    }

    Now, the tiles are drawn and removed with something like this:

    public void Draw(SpriteBatch spriteBatch, Vector2 cam)
    {
        if (cam.X >= this.Position.X - 1280)
        {
            if (cam.X <= this.Position.X + 2560)
            {
                if (cam.Y >= this.Position.Y - 720)
                {
                    if (cam.Y <= this.Position.Y + 1440)
                    {
                        if (visible)
                        {
                            if (once == 0)
                            {
                                once = 1;
                                visible = false;
                                regen();
                            }
                        }
                        for (int i = Tiles.Count - 1; i >= 0; i--)
                        {
                            Tiles[i].Draw(spriteBatch, cam);
                        }
                        for (int i = unWalkTiles.Count - 1; i >= 0; i--)
                        {
                            unWalkTiles[i].Draw(spriteBatch, cam);
                        }
                    }
                    else
                    {
                        once = 0;
                        for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                        for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
                    }
                }
                else
                {
                    once = 0;
                    for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                    for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
                }
            }
            else
            {
                once = 0;
                for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
            }
        }
        else
        {
            once = 0;
            for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
            for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
        }
    }

    If you guys still need more information, just ask in the comments.
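
    One common way to attack both problems (a sketch only - ChunkCache is a made-up name and Tile refers to the existing tile class above): keep nothing but the small int layouts in memory and build Tile objects lazily for the few 1280x720 pieces near the camera, discarding them when the camera moves away. RAM then stays roughly constant regardless of map size, and the up-front 20-minute build disappears because pieces are generated only as they first come into range.

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        // Hypothetical sketch: materialize only the pieces near the camera.
        public class ChunkCache
        {
            private readonly int[][] map;                         // the small per-piece layout codes
            private readonly Dictionary<Point, List<Tile>> live =
                new Dictionary<Point, List<Tile>>();              // pieces currently built

            public ChunkCache(int[][] map) { this.map = map; }

            public void Update(Vector2 cam)
            {
                Point center = new Point((int)(cam.X / 1280), (int)(cam.Y / 720));

                // Build the 3x3 block of pieces around the camera if not built yet.
                for (int y = center.Y - 1; y <= center.Y + 1; y++)
                    for (int x = center.X - 1; x <= center.X + 1; x++)
                        if (InBounds(x, y) && !live.ContainsKey(new Point(x, y)))
                            live.Add(new Point(x, y), BuildPiece(x, y));

                // Drop pieces that are now far away so RAM stays bounded.
                var stale = new List<Point>();
                foreach (Point p in live.Keys)
                    if (Math.Abs(p.X - center.X) > 2 || Math.Abs(p.Y - center.Y) > 2)
                        stale.Add(p);
                foreach (Point p in stale)
                    live.Remove(p);
            }

            private bool InBounds(int x, int y)
            {
                return y >= 0 && y < map.Length && x >= 0 && x < map[y].Length;
            }

            private List<Tile> BuildPiece(int x, int y)
            {
                // Same work regen() does today, but only for this single 1280x720 piece.
                var tiles = new List<Tile>();
                // ... create the 40x40-pixel Tile objects for map[y][x] here ...
                return tiles;
            }
        }

    The same idea helps the Xbox load time: because nothing is generated until it is needed, there is no long build step at startup, only small bursts of work as new pieces come into range.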

    Read the article

  • Video capture Performance

    - by volting
    I have noticed high CPU utilization in a number of applications (except mplayer) which read from the embedded webcam on my laptop. Bizarrely, CPU utilization varies proportionately to the level of illumination present. I know that the high CPU usage has nothing to do with rendering the video, as I have written a simple app using the OpenCV library to simply grab frames from the webcam, and CPU usage is still high. I think that mplayer might be using my GPU (and the other apps aren't), but since it's not an issue with rendering, I don't think this explains anything.
    Cheese: low light ~12% CPU, bright light ~63% CPU
    Camorama: low light ~7% CPU, bright light ~30% CPU
    OpenCV C++ library (display in a single highgui window): low light ~13% CPU, bright light ~40% CPU (same test on Windows 7: 4-9%)
    Mplayer: no problem, 1-2% regardless of light levels
    Note: if all I wanted to do was capture a feed from my webcam, I would use mplayer and forget about it, but I'm developing an application which uses OpenCV to capture a video feed among other things, so performance is important.
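
    For anyone reproducing the measurement, the grab-only test described above boils down to something like this (a sketch against the OpenCV 2.x C++ API; there is no display at all, so whatever top reports is capture and conversion cost only):

        #include <opencv2/opencv.hpp>

        int main()
        {
            cv::VideoCapture cap(0);        // open the default webcam
            if (!cap.isOpened())
                return 1;

            cv::Mat frame;
            for (;;)
            {
                cap >> frame;               // grab and decode one frame, nothing else
                if (frame.empty())
                    break;                  // camera went away
            }
            return 0;
        }

    Adding a simple frame counter to this loop would also show whether bright light raises the delivered frame rate, which is one possible (unverified) explanation for the illumination-dependent CPU usage.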

    Read the article

  • Installed Ubuntu In a low Specs PC and it is too slow, even with LXDE

    - by Herudae
    I'm new to Linux and started with Ubuntu 11.10. I installed it on my PC (Core 2 Duo 2 GHz, 512 MB DDR2 RAM, integrated video on the motherboard). I know the requirement for Unity is 1 GB of RAM, so I decided to download a more lightweight desktop environment and installed LXDE. It loads very fast compared to the 3.5 minutes from the login screen to an open desktop in Unity, but it freezes every time I open a single program. I can't even browse the Internet; it freezes, sometimes for a couple of minutes, and the graph at the bottom right is all green as if it were using 100% CPU. It happens with every program. As additional data, it takes 3+ minutes to get from the boot selection screen to the login screen and 3.5 minutes more to get into Ubuntu with Unity; with LXDE that drops to about 30 seconds. Is Ubuntu + the LXDE desktop environment package = Lubuntu, or should I download Lubuntu directly instead? I installed some other desktop environments, such as GNOME, but it doesn't log in; the screen just turns grey. Should I get an older Ubuntu version? I'm thinking about uninstalling Ubuntu, but I'll try to exhaust my options first. Thanks for your support.

    Read the article
