Search Results

Search found 3618 results on 145 pages for 'huge'.

Page 23/145

  • How to make a big switch control structure with variable check values?

    - by mystify
    For example, I have a huge switch control structure with a few hundred cases. They form an animation sequence, numbered from 0 to n. Someone said I can't use variables as switch case values. What I need is something like:

        NSInteger step = 0;
        NSInteger i = 0;
        switch (step) {
            case i++: // do stuff
                break;
            case i++: // do stuff
                break;
            case i++: // do stuff
                break;
            case i++: // do stuff
                break;
        }

    The point of this is that the animation system calls a method containing this big switch structure, passing it a step number. I want to be able to simply cut, copy, and paste large blocks and move them to a different position inside the switch, for example moving the first 50 blocks to the end. I could do that easily with a huge if-else structure, but it would look ugly, and something tells me switch is much faster. How can I do this?
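
    A dispatch table sidesteps the constant-case-label restriction entirely. Below is a minimal sketch in Python (the question is Objective-C, where an NSArray of blocks would play the same role); the step functions are invented for illustration. Because the step order is just the list order, moving the first 50 blocks to the end becomes a plain cut-and-paste of list entries.

        # Each animation step is a function; position in the list defines the
        # step number, so reordering steps is reordering list entries.
        def step_fade_in():
            print("fade in")       # do stuff for step 0

        def step_slide_left():
            print("slide left")    # do stuff for step 1

        def step_fade_out():
            print("fade out")      # do stuff for step 2

        STEPS = [
            step_fade_in,
            step_slide_left,
            step_fade_out,
            # ...a few hundred more entries...
        ]

        def run_step(step):
            STEPS[step]()          # O(1) dispatch, like a jump-table switch

        run_step(0)                # prints "fade in"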

    Read the article

  • Generating PDF document using XSLT

    - by Nair
    I have one huge XML document and a set of XSLs, one for each node type in the XML. These XSLs also contain JavaScript to generate dynamic content; they use images from a separate images folder and custom fonts as well. At present, I have a program that displays all the nodes that can be transformed; the user clicks on one of the nodes, and the program performs the XSLT and displays the content as HTML in IE. I want to write a program (.NET, C#, or any .NET language) that will allow the user to run the XSLT transform on all the available nodes and create one PDF document. My initial requirement was to display the whole document in IE itself, so I reused the existing code: for each node, perform the XSLT and append the result to the current HTML with a page break. That worked fine until we hit huge files, so the requirement changed to creating one PDF file with all the nodes. I have a couple of questions:
    1. What is the best way to create a PDF file using an XSLT transformation?
    2. Since the images use relative paths, if we generate the HTML and write it to an output stream, will it lose the images?
    3. Will the fonts be preserved in the PDF document?
    I would really appreciate it if someone could point me to a good example that I can take and run with. Thanks a lot.
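
    The question targets .NET, but the shape of the pipeline is easy to show in a few lines of Python, with lxml and WeasyPrint as stand-ins (file names here are placeholders): transform each node to HTML, concatenate, and hand the HTML to an HTML-to-PDF renderer. The base_url argument is what keeps relative image paths working once the HTML leaves its original folder.

        # Sketch only: lxml performs the XSLT, WeasyPrint renders HTML to PDF.
        from lxml import etree
        from weasyprint import HTML

        doc = etree.parse("document.xml")
        transform = etree.XSLT(etree.parse("node.xsl"))
        html = str(transform(doc))

        # base_url resolves relative references such as images/logo.png;
        # fonts are preserved as long as the renderer can find and embed them.
        HTML(string=html, base_url=".").write_pdf("output.pdf")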

    Read the article

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware for an old-school-looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks could hold, it could write portions of the new information to these banks (overwriting what was there) at the end of a frame, and use it from the next frame onward. So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level: which sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2. BG tiles get overwritten during pause, and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them? If I were lazy, I could just make one huge static bitmap and grab values that way. But I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still just boggles my mind how these programmers accomplished what they did with what they had. EDIT: The only solution I can think of would be to hold separate tables that state what tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.
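
    The "separate tables" idea from the EDIT is actually cheap: a per-section table only needs a few bytes per screen (source offset, tile count, PPU destination), not a copy of the tile data itself. A hypothetical sketch in Python, with all names and numbers invented for illustration:

        # Per-section tile loads: copy `count` tiles from CHR ROM into the
        # pattern table at `ppu_dest` when the player enters that section.
        SECTION_TILE_LOADS = {
            0: [(0x0000, 64, 0x0000)],              # stage start: base BG set
            1: [(0x0400, 32, 0x0800)],              # mid-stage: swap BG tiles
            2: [(0x0400, 32, 0x0800),
                (0x0900, 16, 0x1000)],              # boss room: BG + boss sprites
        }

        TILE_BYTES = 16                             # one 2bpp NES tile

        def enter_section(section, chr_rom, ppu):
            for rom_offset, count, ppu_dest in SECTION_TILE_LOADS[section]:
                data = chr_rom[rom_offset : rom_offset + count * TILE_BYTES]
                ppu[ppu_dest : ppu_dest + len(data)] = data

        chr_rom = bytes(0x2000)                     # placeholder CHR ROM
        ppu = bytearray(0x2000)                     # pattern-table memory
        enter_section(2, chr_rom, ppu)

    Pause screens can reuse the same mechanism in reverse: restoring after unpause is just re-running the loads for the current section, so nothing needs to be remembered beyond the current section number.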

    Read the article

  • HTML + CSS: fixed background image and body width/min-width (including fiddle)

    - by insertusernamehere
    So, here is my problem; I'm kinda stuck at the moment. I have a huge background image and content in the middle with these attributes: the content is centered with margin auto and has a fixed width; the content is related to the image (like the image is continued within the content); this relation is only horizontal (vertical scrolling moves everything around). This actually works fine (I'm only talking desktop, not mobile here) with position: fixed on the huge background image. The problem that occurs is the following: when I resize the window to smaller than the content, the background image gets its width from the body instead of the viewport, so the relation between content and image gets lost. Now I have this little piece of JavaScript which does the trick, but this is of course some overhead I want to avoid:

        $(window).resize(function(){
            img.css('left', (body.width() - img.width()) / 2);
        });

    This works with a fixed-positioned image, but can get a little jumpy while calculating. I also tried things like this:

        <div id="test" style="
            position: absolute;
            z-index: 0;
            top: 0;
            left: 0;
            width: 100%;
            height: 100%;
            background: transparent url(content/dummy/brand_backgroud_1600_1.jpg) no-repeat fixed center top;
        "></div>

    But this gets me back to the problem described above. Is there any script-less, elegant solution for this problem? UPDATE: now with fiddles. The one I'm trying to solve: http://jsfiddle.net/insertusernamehere/wPmrm/ The one with JavaScript that works: http://jsfiddle.net/insertusernamehere/j5E8z/ NOTE: The image size is always fixed; the image never gets scaled by the browser (in the JavaScript example it gets blown up, so don't worry about the size).

    Read the article

  • Image processing on bifurcation diagram to get small eps size

    - by yCalleecharan
    Hello, I'm producing bifurcation diagrams (which are normally used in nonlinear dynamics). These diagrams identify abrupt changes in topology due to stability changes; these abrupt changes occur as one or more parameters pass through some critical value(s). An example is here: http://en.wikipedia.org/wiki/File:LogisticMap_BifurcationDiagram.png In the above figure, some image processing has been done so as to make the plot more visually pleasant. A bifurcation diagram usually contains hundreds of thousands of points, and the resulting eps file can become very big. Journal submission in LaTeX format requires that figures be submitted in eps format. In my case one such figure can result in about 6 MB in Matlab and even much more in Gnuplot. For the example in the above figure, 100,000 x values are calculated for each r, and one can imagine that the resulting eps file would be huge. The site, however, explains some image processing that makes the plot more visually pleasing. Can anyone explain to me stepwise how to go about this? I can't understand the explanation provided in the "summary" section. Will the image processing also reduce the figure size? Furthermore, any tips on reducing the file size of such a huge eps figure? Thanks a lot...
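
    One common way to get a small eps from a huge point cloud is mixed rendering: rasterize only the points while keeping axes and labels as vectors. A sketch in Python/matplotlib (the question mentions Matlab and Gnuplot; this just illustrates the technique, with toy logistic-map data standing in for the real computation):

        import numpy as np
        import matplotlib.pyplot as plt

        # toy bifurcation data; replace with the real (r, x) point cloud
        rs, xs = [], []
        for r in np.linspace(2.5, 4.0, 500):
            x = 0.5
            for _ in range(200):            # discard the transient
                x = r * x * (1 - x)
            for _ in range(100):            # keep samples on the attractor
                x = r * x * (1 - x)
                rs.append(r)
                xs.append(x)

        fig, ax = plt.subplots()
        # rasterized=True embeds the point cloud as an image inside the eps,
        # so file size depends on dpi instead of on the number of points
        ax.plot(rs, xs, ',', rasterized=True)
        ax.set_xlabel('r')
        ax.set_ylabel('x')
        fig.savefig('bifurcation.eps', dpi=300)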

    Read the article

  • Parsing multiple files at a time in Perl

    - by sfactor
    I have a large data set (around 90 GB) to work with. There are data files (tab-delimited) for each hour of each day, and I need to perform operations on the entire data set, for example getting the share of OSes given in one of the columns. I tried merging all the files into one huge file and performing a simple count operation, but it was simply too huge for the server memory. So I guess I need to process one file at a time and add up the results at the end. I am new to Perl and am especially naive about performance issues. How do I do such operations in a case like this? As an example, two columns of the file are:

        ID  OS
        1   Windows
        2   Linux
        3   Windows
        4   Windows

    Let's do something simple: counting the share of the OSes in the data set. Each .txt file has millions of these lines, and there are many such files. What would be the most efficient way to operate on all the files?
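
    The key is to stream: never hold a whole file in memory, just update a running count per line and aggregate across files. A sketch of that shape (Python shown for illustration; in Perl a hash accumulates the counts the same way, and the glob pattern and column index here are assumptions):

        import csv
        import glob
        from collections import Counter

        os_counts = Counter()
        for path in glob.glob("data/*.txt"):        # one file per hour
            with open(path, newline="") as f:
                reader = csv.reader(f, delimiter="\t")
                next(reader, None)                  # skip the header row
                for row in reader:
                    os_counts[row[1]] += 1          # column 1 holds the OS

        total = sum(os_counts.values())
        for name, n in os_counts.most_common():
            print(f"{name}\t{n}\t{n / total:.2%}")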

    Read the article

  • Interchange structured data between Haskell and C

    - by Eonil
    First, I'm a Haskell beginner. I'm planning to integrate Haskell into C for a realtime game: Haskell does the logic, C does the rendering. To do this, I have to pass a huge, complexly structured piece of data (the game state) between the two for each tick (at least 30 times per second), so the passing of data should be lightweight. This state data may be laid out in sequential memory, and both the Haskell and C parts should be able to access every area of the state freely. In the best case, the cost of passing data is copying a pointer; in the worst case, it's copying the whole data with conversion. I'm reading about Haskell's FFI (http://www.haskell.org/haskellwiki/FFICookBook#Working_with_structs), where the Haskell code looks like it specifies the memory layout explicitly. I have a few questions:
    1. Can Haskell specify memory layout explicitly (to match exactly with a C struct)?
    2. Is this the real memory layout, or is some kind of conversion required (with a performance penalty)?
    3. If Q#2 is true, is there any performance penalty when the memory layout is specified explicitly?
    4. What's the syntax #{alignment foo}? Where can I find documentation about this?
    5. If I want to pass huge data with the best performance, how should I do that?
    PS: The explicit memory layout feature I mean is like C#'s [StructLayout] attribute, which specifies in-memory position and size explicitly (http://www.developerfusion.com/article/84519/mastering-structs-in-c/). I'm not sure Haskell has a language construct matching the fields of a C struct.
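
    On the #{...} syntax: that is hsc2hs, the preprocessor used throughout the FFI cookbook; macros like #{size foo}, #{alignment foo}, and #{peek foo, bar} are replaced with the offsets and sizes the C compiler actually uses, so a Storable instance reads and writes the genuine C layout with no conversion. For comparison only, here is what the same "explicit, C-compatible layout" idea looks like in Python's ctypes (the struct is a made-up example, not from the question):

        import ctypes

        class GameState(ctypes.Structure):
            # matches: struct game_state { int32_t tick; float x, y; };
            _fields_ = [
                ("tick", ctypes.c_int32),
                ("x",    ctypes.c_float),
                ("y",    ctypes.c_float),
            ]

        state = GameState(tick=30, x=1.5, y=-2.0)
        ptr = ctypes.byref(state)        # a raw pointer to the exact C layout
        print(ctypes.sizeof(GameState))  # the size the C compiler would report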

    Read the article

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a small summary of my application architecture. I have a Windows service which listens to a messaging bus. On receiving a message, the service creates an object, one property of which is the received XML snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB; on refresh, the UI displays the objects on the screen. While the UI does a refresh, it retrieves the XML and deserializes it, from which the object's properties are derived and bound to the screen. For example, assume an XML snippet XXX is received by the service: it deserializes the XML, creates the book object, and saves it to the DB, where a property/column SCHEMA contains the XML snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the saved XML (yes, the XML is the constructor param). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible, but the time spent creating the entities is proportionally huge (10 ms in the DB vs. 1990 ms creating entities). I guess it's due to the fairly large size of the XML snippet and its deserialization. My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe only one of them has been updated. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
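
    Since the DB time is negligible, the win is in skipping repeated deserialization. A sketch of the caching idea (Python for illustration; in .NET a Dictionary keyed the same way, or NHibernate's second-level cache, would play this role): key the parsed object on the row's ID plus a version or timestamp column, so unchanged rows are parsed once and updated rows are parsed again. The column names here are hypothetical.

        cache = {}

        def get_book(row, deserialize):
            key = (row["id"], row["version"])    # "version" bumps on every update
            if key not in cache:
                cache[key] = deserialize(row["schema_xml"])
            return cache[key]

    A refresh that re-reads 50 unchanged rows then costs 50 dictionary lookups instead of 50 XML parses.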

    Read the article

  • Dealing with large directories in a checkout

    - by Eric
    I am trying to come up with a version control process for a web app that I work on. Currently, my major stumbling blocks are two directories that are huge (both over 4GB). Only a few people need to work on things within the huge directories; most people don't even need to see what's in them. Our directory structure looks something like:

        /
        --file.aspx
        --anotherFile.aspx
        --/coolThings
        ----coolThing.aspx
        --/bigFolder
        ----someHugeMovie.mov
        ----someHugeSound.mp3
        --/anotherBigFolder
        ----...

    I'm sure you get the picture. It's hard to justify a checkout that has to pull down 8GB of data that's likely useless to a developer. I know, it's only once, but even once could be really frustrating for someone (and will make it harder for me to convince everyone to use source control). (Plus, clean checkouts will be painfully slow.) These folders do have to be available in the web application. What can I do? I've thought about separate repositories for the big folders. That way, you only download if you need it; but then how do I manage checking these out onto our development server? I've also thought about not trying to version control those folders: just update them directly on the web server... but I am not enamored of this idea. Is there some magic way to simply exclude directories from a checkout that I haven't found? (Pretty sure there is not.) Of course, there's always the option to just give up, bite the bullet, and accept downloading 8 useless GB. What say you? Have you encountered this problem before? How did you solve it?

    Read the article

  • LSI MegaRAID Expected Chip Temperature?

    - by Myles Gray
    We recently built a replicating SAN array from 2x Dell R720XDs; we are using LSI 9270-8i MegaRAID cards with CacheCade 2.0, a BBU, and write-back cache enabled. Our cards are showing HUGE chip temperatures (97°C+ with NO disk activity!). Our R720s are in auto temperature management mode, so the max exhaust temp is 50°C. The MegaRAID cards are passively cooled and depend on good airflow to cool them; however, is 97°C normal? I have seen references to a 60°C max ambient but nothing for chip temperature.

    Read the article

  • How to filter http traffic in Wireshark?

    - by par
    I suspect my server is under a huge load of HTTP requests from its clients, and I want to measure the volume of HTTP traffic. How can I do that with Wireshark? Or is there an alternative solution using another tool? This is how a single HTTP request/response looks in Wireshark. The ping is generated by the WinAPI function ::InternetCheckConnection(). Thanks!
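
    Inside Wireshark itself, the display filter http (or tcp.port == 80) combined with Statistics > Conversations gives per-host byte totals interactively. For an offline measurement from a saved capture, here is a short scapy sketch (the capture file name is a placeholder):

        from scapy.all import rdpcap, TCP

        packets = rdpcap("capture.pcap")
        http_bytes = sum(
            len(p) for p in packets
            if TCP in p and (p[TCP].sport == 80 or p[TCP].dport == 80)
        )
        print(f"{http_bytes} bytes of port-80 traffic in {len(packets)} packets")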

    Read the article

  • What's with the unicorns?

    - by Ward
    And don't tell me this is meta and should only be discussed over there. It's happened here; there should be an answer here. I guess that in UTC it's April 1, but it's a huge distraction.

    Read the article

  • Outlook 2010 Reminder Notification/OneNote Draw Bugs

    - by Bobby Varghese
    Can Outlook 2010 send me reminder alarms when the program is closed? If not, is there an add-on for this? Also, I use OneNote to take class notes, and sometimes I use the draw feature. When I close the program and come back to the notebook later, the drawings shift and get scrambled, making my notes impossible to read and a huge hassle to fix. Has MS released an update other than KB2288640 (32-bit edition) to fix this?

    Read the article

  • Can I compile Passenger (mod_rails/mod_rack) to make a statically linked Apache httpd?

    - by Oleg
    I prefer to disable dynamic module loading in httpd on my production server. I've been using mod_jk linked statically into httpd for quite a long time, and it has proved to be stable. Now I would like to add Ruby Passenger (mod_rails/mod_rack) to my httpd. Is it possible to link it statically into Apache httpd the same way (without producing a huge httpd)? If it is, are there any potential pitfalls or safety or performance concerns in having both mod_jk and mod_rails within the same executable? Thanks

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing, and I am searching for opinions and starting points on a good network/storage layout that can serve my needs for a few years into the future. I think I have a decent starting point in equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd with a moderate tolerance for setting up the solution. I would prefer that maintenance of the system be somewhat low once it is set up, but I am willing to accept some tradeoffs. Existing equipment:
    - Router: Netgear WNDR3700 (gigabit)
    - Router: DLink Gamerlounge DGL-4300 (gigabit)
    - Switch: 16-port Trendnet green switch (gigabit)
    - Switch: 5-port Trendnet green (gigabit)
    - Computer: i7-950 office computer (gigabit ethernet)
    - Computer: Q6600 quad-core media center, hooked up to TV, records shows (gigabit ethernet)
    - Computer: Acer 1810T ultraportable laptop (gigabit ethernet and wireless N)
    - NAS: Intel SS4200-E (gigabit)
    - External hard drive: 2TB WD Green drive (eSATA)
    - All kinds of miscellaneous network-connected devices: TV, Blu-ray, Verizon network extender, HDHomeRun TV tuners, etc.
    Requirements:
    - Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
    - Central location for all users' files, while also keeping them secure from each other
    - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - Possibility to host files to friends and family easily
    Nice to have:
    - Backup of terabytes of movie backups
    Intriguing possibilities:
    - Capability to have users' Windows desktops and files look the same from all network computers
    I am not sure whether the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility I am considering now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. A pseudo-backup of all files would be present because of the RAID, but important files would also be backed up offsite by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goals are high functionality, media capability, and robust backup of irreplaceable files.

    Read the article

  • Windows Movie Maker 2012 No Sound issue with Windows 8.1

    - by zzlalani
    I have Windows 8.1 Pro (Build 9600, x64) installed, and I recently installed Windows Movie Maker 2012 (the latest) via Windows Live Essentials. Now when I run Movie Maker, it disables the Movie Maker sound as well as all Windows sound, and keeps everything muted until I close Movie Maker. Following the suggestions in "Huge Problems With Movie Maker Sound", I have also updated my audio drivers. I'm using a Dell Inspiron 15R 5520 with a Conexant HD CX20672-21Z audio device, driver version 8.54.37.0,A03, last updated 12/20/2013. I need to edit and create a video by this weekend, and this is the only tool I know how to use.

    Read the article

  • List full timestamps of files in a tarball

    - by Mechanical snail
    I have a large tar archive and want to see the exact (nanosecond) timestamps that are stored for each file in the archive. In case it's relevant, the tarball is in POSIX-2001 format (tar --format=posix). tar --list --verbose displays the timestamps rounded off to the minute. For comparison, ls --full-time does what I want, but I'd rather not have to extract everything first because it's huge. For my purposes, command-line and GUI tools are both fine.
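
    One way to read the stored timestamps without extracting anything: Python's tarfile module understands pax (POSIX-2001) archives, where mtime is recorded as a decimal string with sub-second precision. A short sketch (the archive name is a placeholder):

        import tarfile

        with tarfile.open("archive.tar") as tf:
            for member in tf:    # iterates over member headers; no extraction
                # pax_headers keeps the raw high-precision string when present;
                # member.mtime is the parsed fallback
                raw = member.pax_headers.get("mtime", member.mtime)
                print(raw, member.name)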

    Read the article

  • Dealing with Word spell check in technical documents?

    - by Robert MacLean
    I have wasted millions of hours clicking the Ignore Once button in Word while trying to spell-check documents related to development, be they something light on terms, like a proposal, or something worse, like technical specs. I'm beginning to think that this is a huge waste and that someone may have developed a dictionary for Word with common development terms that I could add so I no longer have this problem. Does such a dictionary exist, or are there other tricks I can use to improve this process?

    Read the article

  • How to resize Yahoo Messenger window?

    - by Ian Boyd
    Yahoo Messenger is huge (see the first screenshot), and that's as short as you can make it; it won't shrink vertically any more. How can I make it a more reasonable size, as in the second screenshot? In fact, if we're making it look better, how about Yahoo stops pissing on their users and makes it like the third? Okay, yes, the last one was me complaining. But the first two are a valid question! Note: The cut-off title bar is because Yahoo! developers don't pay their taxes, and assume that everyone runs at 96dpi.

    Read the article

  • Make the middle mouse button behave as a double-click in Windows 7?

    - by Geoff Olynyk
    What is the best, lightest-weight way to make the middle mouse button (i.e. clicking the scroll wheel) behave as a double left-click in Windows 7? I want this to be universal, so that other programs never see the middle-click; they just see a double left-click. I used to do it under Windows XP with Logitech SetPoint drivers, but it was always an ugly solution: installing a huge (50 MB!) binary just to enable one simple little bit of functionality.

    Read the article

  • Using Export-Mailbox without including subfolders

    - by AspNyc
    I want to delete a certain group of messages from somebody's mailbox. I already have the basic PowerShell command ready to go:

        Get-Mailbox -Identity jshmoe | Export-Mailbox -SubjectKeywords "VirusWarning" -IncludeFolders "\Inbox" -StartDate "02/24/2010" -DeleteContent

    The problem is that Joe Shmoe's Inbox is huge, and I know the messages I want to delete are only in the top-level Inbox folder. However, the above command appears to crawl all the subfolders beneath \Inbox. Is there a way to tell it not to?

    Read the article

  • resume file copy linux

    - by Andrew Johnson
    How do I resume a copy of a large file in Linux? I have a huge file (several gigabytes) partially copied to a network drive; the copy took a long time and was mostly done before the operation stopped due to a network problem that is now fixed. How do I resume the file copy? I don't want an inefficient script, and ecp didn't work (it doesn't seem to work for large files).
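
    A bare-bones resume is just "append starting at the destination's current size", which is safe as long as the partial file is an intact prefix of the source (true for a plain interrupted copy). A minimal sketch in Python, with placeholder paths:

        import os
        import shutil

        def resume_copy(src, dst, chunk_size=4 * 1024 * 1024):
            done = os.path.getsize(dst) if os.path.exists(dst) else 0
            with open(src, "rb") as fsrc, open(dst, "ab") as fdst:
                fsrc.seek(done)                 # skip what already arrived
                shutil.copyfileobj(fsrc, fdst, chunk_size)

        resume_copy("/local/huge.iso", "/mnt/network/huge.iso")

    A subsequent rsync run can do the same job more robustly, since it verifies content rather than trusting the prefix.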

    Read the article
