Search Results

Search found 80 results on 4 pages for 'grok'.

Page 3 of 4

  • What will be the setup process for website development?

    - by Vijay Shanker
    Hi, I want to create a simple site for my personal use, built only with Python-based technologies, so I would like an expert opinion on the topic. What should I use as a platform? I searched the available options and found Django, grok, web2py and many more. Which one should a novice user pick? And if I choose to use only basic Python scripts, what options do I have? http://wiki.python.org/moin/WebBrowserProgramming. This link on the Python site confused me more instead of satisfying my curiosity about the topic. Please give me some pointers to accurate and easy-to-understand reading material. I already have an idea of how to develop Java-based web applications using either spring-webmvc or struts. Can I relate the Java process to the Python process for web development? (A minimal sketch of the bare-bones scripting option follows below.)

    Read the article
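    For the "basic Python scripts" option asked about above, the usual baseline is a WSGI application, the interface that Django, grok, web2py and friends all build on (much as Spring MVC and Struts build on the Servlet API in Java). Below is a minimal sketch using only the standard library; the handler and port are placeholders, not a recommendation of any particular framework.

      from wsgiref.simple_server import make_server

      def app(environ, start_response):
          # Every Python web framework ultimately funnels a request
          # through a WSGI callable shaped like this one.
          start_response('200 OK', [('Content-Type', 'text/plain')])
          return [b'Hello from plain Python\n']

      if __name__ == '__main__':
          # Serve on localhost:8000 with nothing but the standard library.
          make_server('', 8000, app).serve_forever()

    Frameworks such as Django or web2py add routing, templating and database access on top of this same interface, which is roughly the role Spring MVC or Struts play over raw servlets.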

  • How are files (especially audio files) organized internally?

    - by mystify
    I'm trying to grok this: Apple talks about "packets" in audio files, and there is a fancy function called AudioFileReadPackets which takes a lot of arguments. One of them specifies the "start packet", and another one the number of packets you want to read. So I imagine an audio file to look like this internally: it's made up of a lot of packets. If it's an audio file in a variable bit rate format, then every packet may have a different size. If the file is in a constant bit rate format, then every packet is the same size. So an audio file is like a truck full of boxes, and every box contains some interesting stuff. Is that correct? Does it apply to any kind of file? Is this what files actually look like internally? (A small arithmetic sketch of the CBR/VBR difference follows below.)

    Read the article
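    The "truck full of boxes" picture is roughly how Core Audio describes it: a packet is the smallest indivisible chunk of encoded audio. The toy sketch below (plain Python, invented field names, not the Core Audio API) shows why constant-bit-rate files can seek by arithmetic while variable-bit-rate files need a per-packet size table, which is what AudioFileReadPackets surfaces through its packet descriptions.

      def packet_offset_cbr(data_start, bytes_per_packet, packet_index):
          # Constant bit rate: every box is the same size, so the byte
          # offset of packet N is pure arithmetic.
          return data_start + bytes_per_packet * packet_index

      def packet_offset_vbr(data_start, packet_sizes, packet_index):
          # Variable bit rate: packets differ in size, so the reader needs
          # a table of packet sizes and must sum the ones before N.
          return data_start + sum(packet_sizes[:packet_index])

    This layout is specific to audio containers; files in general are just byte streams, and each format decides its own internal structure.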

  • C++ code visualization

    - by bobobobo
    A sort of follow-up/related question to this. I'm trying to get a grip on a large code base that has hundreds and hundreds of classes and a large inheritance hierarchy. I want to be able to see the "main veins" of the inheritance hierarchy at a glance - not all the "peripheral" classes that only do some very specific/specialized thing. Visual Studio's "View Class Diagram" makes something that looks like a train, sprawled horizontally across the screen, and isn't very organized. You can't grok it easily. I've just tried doxygen and graphviz, but the results are somewhat similar to Visual Studio's. I'm getting sweet-looking call graphs, but again with too much detail for what I'm trying to get. I need a quick way to generate the inheritance hierarchy in some kind of collapsible view.

    Read the article

  • nested list comprehension using intermediate result

    - by KentH
    I am trying to grok the output of a function which doesn't have the courtesy of setting a result code. I can tell it failed by the "error:" string which is mixed into the stderr stream, often in the middle of a different conversion status message. I have the following list comprehension, which works but scans for the "error:" string twice. Since it is only rescanning the actual error lines it works fine, but it annoys me that I can't figure out how to use a single scan. Here's the working code: errors = [e[e.find('error:'):] for e in err.splitlines() if 'error:' in e] The obvious (and wrong) way to simplify is to save the "find" result: errors = [e[i:] for i in e.find('error:') if i != -1 for e in err.splitlines()] However, I get "UnboundLocalError: local variable 'e' referenced before assignment". Blindly reversing the 'for's in the comprehension also fails. How is this done? (One working single-scan variant is sketched below.) Thanks, Kent

    Read the article
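    One way to scan each line only once, in the spirit of the question above, is to bind the find() result inside a nested generator so the index exists before it is filtered on. A minimal sketch (err stands in for the question's stderr text):

      err = "converting foo\nstatus error: missing header\nconverting bar"

      # Bind i = e.find('error:') once per line, then filter and slice on it.
      errors = [e[i:]
                for e, i in ((e, e.find('error:')) for e in err.splitlines())
                if i != -1]

      print(errors)   # ['error: missing header']

    The inner generator pairs each line with its find() result, so the outer comprehension never has to call find() a second time.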

  • Where would my different development rhythm be suitable for the work?

    - by DarenW
    Over the years I have worked on many projects - some successful and of great benefit to the company, and some total failures that ended with me getting fired or otherwise leaving. What is the difference? Naturally I prefer the former and wish to avoid the latter, so I'm pondering this issue. The key seems to be that my personal approach differs from the norm. I write code first, letting it be all spaghetti and chaos, using whatever tools "fit my hand" that I'm fluent in. I try to organize it, then give up and start over with a better design. I go through cycles, from thinking-design to coding-testing. This may seem to be the same as any other development process, Agile or whatever, cycling between design and coding, but there does seem to be a subtle difference: the methods (ideally) followed by most teams go design, code; design, code; ... while I'm going code, design; code, design; (if that makes any sense). Music analogy: some types of music have a strong downbeat while others have prominent syncopation. In practice, I just can't think in terms of UML, specifications and so on, but grok things only by attempting to code, debug and refactor ad hoc. I need the grounding provided by coding in order to think constructively, then to offer any opinions, advice or solutions to the team and get real work done. In positions where I can initially hack up cowboy code without constraints of tool or language choices, I easily gain a "feel" for the data, requirements etc. and eventually do good work. In formalized positions where paperwork and pure "design" come first and coding only later (even for small proof-of-concept projects), I am lost at sea and drown. Therefore, I'd like to know how to either 1) change my rhythm to match the more formalized, methodology-oriented way teams do things, or 2) find positions at organizations where my sense of development rhythm is perfect for the work. It's probably unrealistic for a person to change their fundamental approach to things, so option 2) is preferred. So where can I find such positions? How common is my approach, and where is it seen as viable but different rather than dismissed as undisciplined cowboy coding?

    Read the article

  • Challenge: Learn One New Thing Today

    - by BuckWoody
    Most of us know that there's a lot to learn. I'm teaching a class this morning, and even on the subject where I'm the "expert" (that word always makes me nervous!) I still have a lot to learn. To learn, sometimes I take a class, read a book, or carve out a large chunk of time so that I can fully grasp the subject. But since I've been working, I really don't have a lot of opportunities to do that. Like you, I'm really busy. So what I've learned to do is take just a few moments each day and learn something new about SQL Server. I thought I would share that process here. First, I started with an outline of the product. You can use Books Online, a college class syllabus, a training class outline, or a comprehensive book's table of contents. Then I checked off the things I felt I knew a little about. Sure, I'll come back around to those, but I want to be as efficient as I can. I then trolled various checklists to see what I needed to know about the subjects I didn't have checked off. From there (I did all this in a notepad at first, and later in OneNote when that came out) I developed a block of text for each subject. Every time I ran across a book, article, web site or recording on that topic, I wrote the reference down. Later I went back, quickly looked over those resources, and tried to figure out how I could parcel them out - 10 minutes for this one, a free seminar (like the one I'm teaching today - ironic) takes 4 hours, a web site takes an hour to grok, that sort of thing. Then all I did was figure out how much time each day I'd give to training. Sure, it may literally be ten minutes, but it adds up. One final thing - as I used something I learned, I came back and made notes in that topic. You learn to play the piano not just from a book, but by playing the piano, after all. If you don't use what you learn, you'll lose it. So if you're interested in getting better at SQL Server, and you're willing to do a little work, try out this method. Leave a note here for others to encourage them.

    Read the article

  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup, it goes and fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis, so I am trying to change the settings on the application pool to make sure that IIS does not recycle the application. So far, I've changed the following: Limit Interval under CPU from 5 to 0; Idle Time-out under Process Model from 20 to 0; Regular Time Interval under Recycling from 1740 to 0. Will this be enough? I also have specific questions about the items I changed: What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled? What exactly does "recycled" mean? Is the application completely torn down and started up again? What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for Idle Time-out under Process Model talks about shutting down the worker process, while the docs for Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two. I thought w3wp.exe is the worker process which runs the application pool. Can someone explain what the difference means to the application? The reason for having both IIS7 and IIS7.5 tags is that the app will run in both, and I hope the answers are the same for both versions. Image for reference: (not included in this excerpt)

    Read the article

  • Would like to change audio codec, but keep video settings with ffmpeg

    - by Craig Tataryn
    I have a video for which I'd like to convert the audio codec to AAC at 320 kbps / 44.100 kHz. What ffmpeg switches would I use so that all the video settings and codec remain the same, but only the audio codec and its settings change? (One hedged example command is sketched below.) Here's my video:

      $ ffmpeg -i Winnipeg.rb\ Scala-Talk.mov
      FFmpeg version SVN-r25375, Copyright (c) 2000-2010 the FFmpeg developers
        built on Oct 6 2010 13:02:41 with gcc 4.2.1 (Apple Inc. build 5664)
        configuration: --enable-libmp3lame --enable-shared --disable-mmx --arch=x86_64
        libavutil     50.32. 2 / 50.32. 2
        libavcore      0. 9. 1 /  0. 9. 1
        libavcodec    52.92. 0 / 52.92. 0
        libavformat   52.80. 0 / 52.80. 0
        libavdevice   52. 2. 2 / 52. 2. 2
        libavfilter    1.48. 0 /  1.48. 0
        libswscale     0.12. 0 /  0.12. 0
      Seems stream 0 codec frame rate differs from container frame rate: 2000.00 (2000/1) -> 10.00 (10/1)
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Winnipeg.rb Scala-Talk.mov':
        Metadata:
          major_brand     : qt
          minor_version   : 537199360
          compatible_brands: qt
        Duration: 01:10:53.00, start: 0.000000, bitrate: 283 kb/s
          Stream #0.0(eng): Video: h264, yuv420p, 800x598, 94 kb/s, 10 fps, 10 tbr, 1k tbn, 2k tbc
          Stream #0.1(eng): Audio: adpcm_ima_qt, 22050 Hz, 1 channels, s16
          Stream #0.2(eng): Audio: adpcm_ima_qt, 22050 Hz, 1 channels, s16
      At least one output file must be specified

    Many thanks in advance! One thing with ffmpeg I've never been able to grok is how to just "tweak" files without having to restate every little setting for things you don't want changed.

    Read the article
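    For the question above, the usual pattern is to copy the video stream untouched and re-encode only the audio. A hedged sketch using flags from roughly that era of ffmpeg (the AAC encoder name depends on how your build was configured - the configuration shown above lists only libmp3lame, so an AAC-capable encoder such as libfaac or the built-in aac would have to be available):

      ffmpeg -i "Winnipeg.rb Scala-Talk.mov" -vcodec copy -acodec libfaac -ab 320k -ar 44100 output.mov

    -vcodec copy passes the H.264 stream through without touching it, while -acodec/-ab/-ar replace only the audio codec, bitrate and sample rate.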

  • How do I configure Jetty (via jettyrunner) so that it names a character set in the Content-Type response header?

    - by Pointy
    I use Jetty (via the oh-so-handy Jetty Runner) for day-to-day web application testing. One thing I've recently stumbled on is the fact that I don't get a character set called out in the "Content-Type" response header all the time. I do get it in response to my application's XMLHttpRequest transactions, but not for plain old pages loaded by <a> links or whatever. I've read a little bit about how to set up a Jetty config file, but I've never been able to completely understand that; all servlet containers are complicated, and while Jetty is pretty simple it's just weird enough that I don't grok the overall idea. Thus, all I do to launch my app is to run the Jetty Runner .jar file with a couple of simple arguments to set up the port number and logfile path, and then I just give it the .war file to run. It works great — except for the missing character set :-) Anybody have a quick sample config file that might fix this? edit — oh if it matters, I'm running Jetty 7.0.0 RC3; I've also tried with a slightly newer version (still 7.something) with exactly the same issue. All my testing is on Ubuntu.

    Read the article

  • Why do I have redundant routing in Windows 7?

    - by Mark
    I'm trying to better understand my routing tables. My routing table is:

      IPv4 Route Table
      ===========================================================================
      Active Routes:
            Network Destination          Netmask          Gateway        Interface   Metric
        1.              0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.151       25
        2.            127.0.0.0        255.0.0.0          On-link        127.0.0.1      306
        3.            127.0.0.1  255.255.255.255          On-link        127.0.0.1      306
        4.      127.255.255.255  255.255.255.255          On-link        127.0.0.1      306
        5.          192.168.1.0    255.255.255.0          On-link    192.168.1.151      281
        6.        192.168.1.151  255.255.255.255          On-link    192.168.1.151      281
        7.        192.168.1.255  255.255.255.255          On-link    192.168.1.151      281
        8.            224.0.0.0        240.0.0.0          On-link        127.0.0.1      306
        9.            224.0.0.0        240.0.0.0          On-link    192.168.1.151      281
       10.      255.255.255.255  255.255.255.255          On-link        127.0.0.1      306
       11.      255.255.255.255  255.255.255.255          On-link    192.168.1.151      281
      ===========================================================================

    Most of those entries make sense to me, but a few confuse me: 2 and 3 seem redundant, as do 2 and 4. Why not just 2? 5 and 6 seem redundant, as do 5 and 7. Why not just 5? I'm trying to grok routing tables and this bit is still confusing me.

    Read the article

  • What are the reasons to use dos batch programs in Windows?

    - by DVK
    Question: What would be a good (ideally, technical) reason to ever program some non-trivial task in the DOS batch language on a modern Windows system, as opposed to downloading either PowerShell or ActiveState Perl? To be more specific, I make the following two assumptions for the duration of this question: anyone technical enough to write a medium-complexity batch script is technical enough to install either of the scripting interpreters; and neither of those two presents enough of a learning curve for basic batch-replacement tasks that said curve would outweigh the pain of doing any remotely non-trivial task in batch.

    Notes: "You need a batch program for autoexec.bat" is not a valid reason; your autoexec.bat may consist of simply calling a non-batch script. If you disagree with either of my 2 assumptions above, that's fine, and I may be wrong. But my question is specifically "assuming those 2 assumptions are correct, what would be the reason to still stick with batch?" If it makes it easier to suspend disbelief (in case you disagree with me), add a 3rd assumption that the question is limited to people who already possess at least some modicum of PowerShell or Perl experience. To reiterate - this is not meant to be a subjective question about how easy it is to learn PSh or ASPerl compared to doing advanced batch coding. That is a separate question that is too subjective to be bothered with in this post.

    Background: I used to do some fairly complicated batch programming back in the elder days, and remember batch as one of the worst possible programming languages I had encountered. The idea for this question came after seeing a bunch of batch questions on SO, trying to grok the answer to one of them out of sheer curiosity, and giving up in pain after a minute, exclaiming mentally "why would anyone go through this pain instead of doing that in 1 line of Perl?" :)

    My own plausible answer: I assume there may be an unlikely DOS-compatible system which has a DOS interpreter but no compatible PowerShell or Perl... I'm not aware of one, but it's not completely impossible.

    Read the article

  • I can never tell these two apart: ascending and descending! Are there good examples?

    - by mystify
    As a non-English dude, I have trouble differentiating these. When I try to translate "ascending" into my language, I get something weird like "go up". So let's say I want to sort the names of all my pets alphabetically. I want A to come first, then B, then C... and so on. Since a letter of the alphabet is not a number to me, my brain refuses to grok what's "going up". A = 0? B = 1? C = 2? If yes, then ascending would be what I'm looking for most of the time: the table would start by showing A, then B, then C... Or is it the other way around - must I look from the bottom of the table up? And with numbers: if it's an ascending order, the smallest comes first? (That would seem logical...) Can someone post a short but good example of what an ascending sort order is and what a descending sort order is? (One is sketched below.) And does that apply across platforms, programming languages, APIs, etc.?

    Read the article
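    A tiny illustration for the question above (Python here, but the convention is the same everywhere, e.g. ORDER BY ... ASC/DESC in SQL): ascending means the smallest, earliest, or alphabetically first value comes first; descending is the reverse.

      pets = ["Rex", "Bella", "Apollo", "Charlie"]

      print(sorted(pets))                  # ascending:  ['Apollo', 'Bella', 'Charlie', 'Rex']
      print(sorted(pets, reverse=True))    # descending: ['Rex', 'Charlie', 'Bella', 'Apollo']

      nums = [3, 1, 2]
      print(sorted(nums))                  # ascending:  [1, 2, 3]   (smallest first)
      print(sorted(nums, reverse=True))    # descending: [3, 2, 1]   (largest first)

    So for the alphabetical pet list, ascending is the A-first ordering read from the top of the table down.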

  • Threads, Sockets, and Designing Low-Latency, High Concurrency Servers

    - by lazyconfabulator
    I've been thinking a lot lately about low-latency, high-concurrency servers - specifically, HTTP servers. HTTP servers (fast ones, anyway) can serve thousands of users simultaneously with very little latency. So how do they do it? As near as I can tell, they all use events. Cherokee and Lighttpd use libevent. Nginx uses its own event library, which performs much the same function as libevent: picking a platform-optimal strategy for polling events (like kqueue on *BSD, epoll on Linux, /dev/poll on Solaris, etc.). They all also seem to employ a multiprocess or multithreaded strategy once the connection is made - using worker threads to handle the more CPU-intensive tasks while another thread continues to listen for and handle connections (via events). This is the extent of my understanding and of my ability to grok the thousand-line sources of these applications. What I really want are finer details about how this all works. In the examples of using events I've seen (and written), the events handle both input and output. To this end, do the workers employ some sort of input/output queue to the event-handling thread? Or are the worker threads handling their own input and output? I imagine a fixed number of worker threads are spawned, and connections are lined up and served on demand, but how does the event thread feed these connections to the workers? I've read about FIFO queues and circular buffers, but I've yet to see any implementations to work from. Are there any? Do any use compare-and-swap instructions to avoid locking, or is locking less detrimental to event polling than I think? Or have I misread the design entirely? Ultimately, I'd like to take enough away to improve some of my own event-driven network services. Bonus points to anyone providing solid implementation details (especially for stuff like low-latency queues) in C, as that's the language my network services are written in. (A toy event-loop-plus-worker-queue sketch follows below.)

    Read the article
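    To make the "event thread feeds the workers" part of the question above concrete, here is a toy sketch (Python's standard selectors module standing in for libevent/epoll/kqueue, and a plain locked queue standing in for a fancier lock-free one). One thread polls readiness events and pushes received requests onto a queue that a fixed pool of worker threads drains; error handling and non-blocking writes are omitted, so treat it as an illustration of the hand-off, not a production design.

      import queue
      import selectors
      import socket
      import threading

      sel = selectors.DefaultSelector()
      work_q = queue.Queue()            # event thread -> worker pool hand-off

      def worker():
          # Workers never touch the selector; they only consume queued requests.
          while True:
              conn, data = work_q.get()
              conn.sendall(b"processed: " + data)   # CPU-heavy work would go here

      def accept(server_sock):
          conn, _ = server_sock.accept()
          sel.register(conn, selectors.EVENT_READ, handle_read)

      def handle_read(conn):
          data = conn.recv(4096)
          if data:
              work_q.put((conn, data))  # feed the connection's request to a worker
          else:
              sel.unregister(conn)
              conn.close()

      server = socket.socket()
      server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      server.bind(("127.0.0.1", 8080))
      server.listen(128)
      server.setblocking(False)
      sel.register(server, selectors.EVENT_READ, accept)

      for _ in range(4):                # fixed worker pool
          threading.Thread(target=worker, daemon=True).start()

      while True:                       # the event loop
          for key, _ in sel.select():
              key.data(key.fileobj)     # dispatch to accept() or handle_read()

    The real servers named in the question differ mainly in scale and in avoiding the queue's lock, but the division of labor - one poller, many workers - has the same shape.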

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but the applied practice of storing a salted hash of a password still eludes me. Let's start with some ground rules: a password, "foobar12" (we are not discussing the strength of the password); a language, Java 1.6 for this discussion; a database: PostgreSQL, MySQL, SQL Server, or Oracle. Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, plain MD5/SHA1, and dual columns. Each has pros and cons. MD5/SHA1 is simple: MessageDigest in Java provides MD5 and SHA1 (through SHA-512 in modern implementations, certainly 1.6), and most of the RDBMSs listed provide MD5 functions usable on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury you have if you've got a legacy system with one (1) column for the password, where the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password, then re-hashing to match using the hash algorithm...??? So... any takers on helping? :) Am I close? (A small sketch of the usual single-column scheme follows below.)

    Read the article
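    For what it's worth, the usual single-column answer to "how can the system ever match the password?" is not to carve the salt back out of the digest - you can't - but to store the salt in the clear next to the digest inside the same column, then repeat the same salt-plus-password hash at login and compare. The question targets Java 1.6's MessageDigest, but the scheme is language-agnostic, so here is a minimal sketch in Python (the salt$digest format and single SHA-256 round are illustrative only; a real system would prefer a slow, iterated scheme such as PBKDF2 or bcrypt):

      import hashlib
      import os

      def hash_password(password):
          # Random salt, stored in the clear alongside the digest in ONE column.
          salt = os.urandom(16)
          digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
          return salt.hex() + "$" + digest          # e.g. "3f9a...$7c41..."

      def check_password(password, stored):
          # Split the stored value back into salt and digest, re-hash, compare.
          salt_hex, digest = stored.split("$", 1)
          salt = bytes.fromhex(salt_hex)
          return hashlib.sha256(salt + password.encode("utf-8")).hexdigest() == digest

      stored = hash_password("foobar12")
      print(check_password("foobar12", stored))     # True
      print(check_password("foobar13", stored))     # False

    Because the salt travels with the digest, no second column is needed, and a rainbow table built without that exact salt is useless.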

  • Tools\addin's for formating or cleaning up xaml?

    - by cody
    I'm guessing these don't exist, since I've searched around for them, but I'm looking for a few tools: 1) A tool that cleans up my XAML so that the properties of elements are consistent throughout a file. I think enforcing that consistency would make the XAML easier to read. Maybe there could be a hierarchy of what comes first, but if not, alphabetical might work. Example before: TextBox Name="myTextBox1" Grid.Row="3" Grid.Column="1" Margin="4" TextBox Grid.Column="1" Margin="4" Name="t2" Grid.Row="3" Example after: TextBox Name="myTextBox1" Grid.Row="3" Grid.Column="1" Margin="4" TextBox Name="t2" Grid.Row="3" Grid.Column="1" Margin="4" (note: the < and / have been removed from the above since the control seems to have issues parsing them when the "after" section was added) 2) Along the same lines as above, to increase readability, a tool to align properties, so that in the code example above similar props would start in the same place. <TextBox Name="myTextBox1" Grid.Row="3" Grid.Column="1" Margin="4"/> <TextBox Name="t2" Grid.Row="3" Grid.Column="1" Margin="4"/> I know VS has default settings for XAML documents so props can be on one line or on separate lines, so maybe if there were a tool as described in (1) this would not be needed... but it would still be nice if you like your props all on one line. 3) A tool that adds X to any of the Grid.Row values and Y to any of the Grid.Column values in the selected text. Every time I add a new row/column I have to go fix these manually. From my understanding Expression Blend can help with this, but it seems excessive to open Blend just to increment some numbers (and I just don't grok Blend). Maybe VS2010 with the new designer will help, but right now I'm on VS08 and Silverlight. Anyone know of any tools to help with this? Anyone planning to write something like this... I'm looking at you, JetBrains and/or DevExpress. Thanks.

    Read the article

  • Design patterns for Agent / Actor based concurrent design.

    - by nso1
    Recently I have been getting into alternative languages that support an actor/agent/shared-nothing architecture - i.e. Scala, Clojure, etc. (Clojure also supports shared state). So far, most of the documentation I have read focuses on the intro level. What I am looking for is more advanced documentation along the lines of the Gang of Four, but for shared-nothing designs instead. Why? It helps to grok the change in design thinking. Simple examples are easy, but in a real-world (single-threaded) Java application you can have object graphs with thousands of members and complex relationships. Agent-based concurrent development introduces a whole new set of ideas to comprehend when designing large systems, i.e. agent granularity - how much state should one agent manage, and the implications for performance - or whether there are good patterns for mapping shared-state object graphs to agent-based systems, and tips on mapping domain models to a design. I'm after discussions not of the technology itself but of how to BEST use the technology in a design (real-world "complex" examples would be great).

    Read the article

  • Pairs from single list

    - by Apalala
    Often enough, I've found the need to process a list by pairs. I was wondering which would be the pythonic and efficient way to do it, and found this on Google: pairs = zip(t[::2], t[1::2]) I thought that was pythonic enough, but after a recent discussion involving idioms versus efficiency, I decided to do some tests: import time from itertools import islice, izip def pairs_1(t): return zip(t[::2], t[1::2]) def pairs_2(t): return izip(t[::2], t[1::2]) def pairs_3(t): return izip(islice(t,None,None,2), islice(t,1,None,2)) A = range(10000) B = xrange(len(A)) def pairs_4(t): # ignore value of t! t = B return izip(islice(t,None,None,2), islice(t,1,None,2)) for f in pairs_1, pairs_2, pairs_3, pairs_4: # time the pairing s = time.time() for i in range(1000): p = f(A) t1 = time.time() - s # time using the pairs s = time.time() for i in range(1000): p = f(A) for a, b in p: pass t2 = time.time() - s print t1, t2, t2-t1 These were the results on my computer: 1.48668909073 2.63187503815 1.14518594742 0.105381965637 1.35109519958 1.24571323395 0.00257992744446 1.46182489395 1.45924496651 0.00251388549805 1.70076990128 1.69825601578 If I'm interpreting them correctly, that should mean that the implementation of lists, list indexing, and list slicing in Python is very efficient. It's a result both comforting and unexpected. Is there another, "better" way of traversing a list in pairs? Note that if the list has an odd number of elements then the last one will not be in any of the pairs. Which would be the right way to ensure that all elements are included? I added these two suggestions from the answers to the tests: def pairwise(t): it = iter(t) return izip(it, it) def chunkwise(t, size=2): it = iter(t) return izip(*[it]*size) These are the results: 0.00159502029419 1.25745987892 1.25586485863 0.00222492218018 1.23795199394 1.23572707176 Results so far Most pythonic and very efficient: pairs = izip(t[::2], t[1::2]) Most efficient and very pythonic: pairs = izip(*[iter(t)]*2) It took me a moment to grok that the first answer uses two iterators while the second uses a single one. To deal with sequences with an odd number of elements, the suggestion has been to augment the original sequence adding one element (None) that gets paired with the previous last element, something that can be achieved with itertools.izip_longest().

    Read the article

  • How to extract data from F# list

    - by David White
    Following up my previous question, I'm slowly getting the hang of FParsec (though I do find it particularly hard to grok). My next newbie F# question is, how do I extract data from the list the parser creates? For example, I loaded the sample code from the previous question into a module called Parser.fs, and added a very simple unit test in a separate module (with the appropriate references). I'm using XUnit: open Xunit [<Fact>] let Parse_1_ShouldReturnListContaining1 () = let interim = Parser.parse("1") Assert.False(List.isEmpty(interim)) let head = interim.Head // I realise that I have only one item in the list this time Assert.Equal("1", ???) Interactively, when I execute parse "1" the response is: val it : Element list = [Number "1"] and by tweaking the list of valid operators, I can run parse "1+1" to get: val it : Element list = [Number "1"; Operator "+"; Number "1"] What do I need to put in place of my ??? in the snippet above? And how do I check that it is a Number, rather than an Operator, etc.?

    Read the article

  • Best Practices For Secure APIs?

    - by Ferrett Steinmetz
    Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products through various methods, including: 1) pulling data from AJAX calls that return data in cool, JavaScripty ways; 2) creating iPhone applications that use that data; 3) having other web applications use that data for their own ends. Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to look at who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off. My next logical step would be to create a developer key to restrict access - which would work fine for web apps, but not so much for any AJAX calls. (As I see it, they'd need to provide the key in the JavaScript, which is in plaintext and easily seen, and hence there's actually no security at all. Particularly if we'd be using our own developer keys on our site to make these AJAX calls.) So my question: after looking around at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practice" for developer keys, or can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely? (A small request-signing sketch follows below.)

    Read the article
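    One common building block for the "developer key" half of the question above is to issue each consumer a key ID plus a shared secret and have them sign each request with an HMAC, which is essentially what OAuth 1.0's signed requests formalize. A minimal sketch in Python (the parameter names and signing recipe are illustrative; this helps server-to-server and iPhone clients, but it does not solve the problem of a secret embedded in browser JavaScript, which generally has to fall back to same-origin session auth plus rate limiting):

      import hashlib
      import hmac
      import time

      SECRETS = {"partner-123": b"s3cr3t-shared-out-of-band"}   # hypothetical key store

      def sign_request(key_id, secret, method, path, timestamp):
          # Sign the parts of the request the server will recompute and verify.
          message = "{}\n{}\n{}\n{}".format(key_id, method, path, timestamp).encode("utf-8")
          return hmac.new(secret, message, hashlib.sha256).hexdigest()

      def verify_request(key_id, method, path, timestamp, signature, max_skew=300):
          secret = SECRETS.get(key_id)
          if secret is None or abs(time.time() - timestamp) > max_skew:
              return False        # unknown key, or stale/replayed request
          expected = sign_request(key_id, secret, method, path, timestamp)
          return hmac.compare_digest(expected, signature)

      ts = int(time.time())
      sig = sign_request("partner-123", SECRETS["partner-123"], "GET", "/products", ts)
      print(verify_request("partner-123", "GET", "/products", ts, sig))   # True

    Rate limiting and abuse tracking then hang off the key ID, since every request identifies its caller.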

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/) they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram). Running the same job - high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4GB RAM macbook and an 8-core ubuntu box with 64GB RAM I saw dramatically WORSE read performance on the linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do. Linux box default settings were as follows: vm.swappiness =60 vm.dirty_background_ratio = 10 vm.dirty_ratio = 20 vm.dirty_expire_centisecs =3000 vm.dirty_writeback_centisecs=500 I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MYSQL, etc.), experimented, and adjusted as below. vm.swappiness=10 vm.dirty_background_ratio=5 vm.dirty_ratio=5 vm.dirty_writeback_centisecs=250 vm.dirty_expire_centisecs=500 I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then, I REBUILT the collection from an available data source - and suddenly I can read at 1ms or less per record WHILE doing the write job! So the question is really two-fold: 1) What are appropriate VM settings for MongoDB on Linux? 2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated...thanks! -j

    Read the article

  • NSObjectController confusion binding to a class property. Help!

    - by scottw
    Hi, I'm teaching myself cocoa and enjoying the experience most of the time. I have been struggling all day with a simple problem that google has let me down on. I have read the Cocoa Bindings Program Topics and think I grok it but still can't solve my issue. I have a very simple class called MTSong that has various properties. I have used @synthesize to create getter/setters and can use KVC to change properties. i.e in my app controller the following works: mySong = [[MTSong alloc]init]; [mySong setValue:@"2" forKey:@"version"]; In case I am doing something noddy in my class code MTSong.h is: #import <Foundation/Foundation.h> @interface MTSong : NSObject { NSNumber *version; NSString *name; } @property(readwrite, assign) NSNumber *version; @property(readwrite, assign) NSString *name; @end and MTSong.m is: #import "MTSong.h" @implementation MTSong - (id)init { [super init]; return self; } - (void)dealloc { [super dealloc]; } @synthesize version; @synthesize name; @end In Interface Builder I have a label (NSTextField) that I want to update whenever I use KVC to change the version of the song. I do the following: Drag NSObjectController object into the doc window and in the Inspector-Attributes I set: Mode: Class Class Name: MTSong Add a key called version and another called name Go to Inspector-Bindings-Controller Content Bind To: File's Owner (Not sure this is right...) Model Key Path: version Select the cell of the label and go to Inspector Bind to: Object Controller Controller Key: mySong Model Key Path: version I have attempted changing the Model Key Path in step 2 to "mySong" which makes more sense but the compiler complains. Any suggestions would be greatly appreciated. Scott Update Post Comments I wasn't exposing mySong property so have changed my AppController.h to be: #import <Cocoa/Cocoa.h> @class MTSong; @interface AppController : NSObject { IBOutlet NSButton *start; IBOutlet NSTextField *tf; MTSong *mySong; } -(IBAction)convertFile:(id)sender; @end I suspect File's owner was wrong as I am not using a document based application and I need to bind to the AppController, so step 2 is now: Go to Inspector-Bindings-Controller Content Bind To: App Controller Model Key Path: mySong I needed to change 3. to Select the cell of the label and go to Inspector Bind to: Object Controller Controller Key: selection Model Key Path: version All compiles and is playing nice!

    Read the article

  • ASP.NET- forcing child/container events to fire before parent onload?

    - by Hans Gruber
    I'm working on a questionnaire-type application in which questions are stored in a database. Therefore, I create my controls dynamically on every Page.OnLoad. This works like a charm, and ViewState is persisted between postbacks because I ensure that my dynamic controls always have the same generated Control.ID. In addition to the user control that dynamically populates the questions, my questionnaire page also contains a 'Status' section (also encapsulated by a user control) which represents the status of the questionnaire (the choices are 'Complete', 'Started' or 'In Progress'). If the user changes the status of the questionnaire (i.e. from 'In Progress' to 'Complete'), I need to post back to the server because the contents of the dynamic portion of the questionnaire depend on the selected status. Some questions are always present regardless of status, yet others may not be present at all for the selected status. The point is, when the status changes, I have to post back to the page and render the right set of questions. Additionally, I need to preserve any user-entered values for those questions which are 'always available'. However, due to the page life cycle in ASP.NET, the 'Status' user control's OnLoad, which contains the correct status needed to load the right questions from the DB, doesn't get executed until after the 'dynamic questions' user control has already been populated (with the wrong/stale values). To get around this, I raise an event from my 'Status' user control to the main page to indicate that the status has changed. The main page then raises an event on the 'dynamic questions' user control. Since by the time this event bubbles up the 'dynamic questions' user control has already loaded the 'wrong' questions from the DB, it first calls Controls.Clear. It then happily uses the new status to query the database for the 'correct' questions and does a Controls.Add() on each. FYI, Control.IDs are consistent across postbacks. This solution works... sorta. The correct set of questions for the selected status does get rendered; however, ViewState is getting lost for those 'always available' questions. I'm guessing this is because the 'dynamic questions' user control calls Controls.Clear when responding to the status-changed event. This must somehow kill the association between ViewState and my dynamic controls, even though the Control.IDs are consistent. This seems like such a common requirement that I'm virtually certain there is a better, cleaner and less error-prone approach. In case it's not plainly obvious, I haven't been able to grok the ASP.NET page life cycle despite working with it for the last year. Any help is much appreciated!

    Read the article

  • Using Sitecore RenderingContext Parameters as MVC controller action arguments

    - by Kyle Burns
    I have been working with the Technical Preview of Sitecore 6.6 on a project and have been for the most part happy with the way that Sitecore (which truly is an MVC implementation unto itself) has been expanded to support ASP.NET MVC. That said, getting up to speed with the combined platform has not been entirely without stumbles and today I want to share one area where Sitecore could have really made things shine from the "it just works" perspective. A couple days ago I was asked by a colleague about the usage of the "Parameters" field that is defined on Sitecore's Controller Rendering data template. Based on the standard way that Sitecore handles a field named Parameters, I was able to deduce that the field expected key/value pairs separated by the "&" character, but beyond that I wasn't sure and didn't see anything from a documentation perspective to guide me, so it was time to dig and find out where the data in the field was made available. My first thought was that it would be really nice if Sitecore handled the parameters in this field consistently with the way that ASP.NET MVC handles the various parameter collections on the HttpRequest object and automatically maps them to parameters of the action method executing. Being the hopeful sort, I configured a name/value pair on one of my renderings, added a parameter with matching name to the controller action and fired up the bugger to see... that the parameter was not populated. Having established that the field's value was not going to be presented to me the way that I had hoped it would, the next assumption that I would work on was that Sitecore would handle this field similar to how they handle other similar data and would plug it into some ambient object that I could reference from within the controller method. After a considerable amount of guessing, testing, and cracking code open with Redgate's Reflector (a must-have companion to Sitecore documentation), I found that the most direct way to access the parameter was through the ambient RenderingContext object using code similar to: string myArgument = string.Empty; var rc = Sitecore.Mvc.Presentation.RenderingContext.CurrentOrNull; if (rc != null) {     var parms = rc.Rendering.Parameters;     myArgument = parms["myArgument"]; } At this point, we know how this field is used out of the box from Sitecore and can provide information from Sitecore's Content Editor that will be available when the controller action is executing, but it feels a little dirty. In order to properly test the action method I would have to do a lot of setup work and possible use an isolation framework such as Pex and Moles to get at a value that my action method is dependent upon. Notice I said that my method is dependent upon the value but in order to meet that dependency I've accepted another dependency upon Sitecore's RenderingContext.  I'm a big believer in, when possible, ensuring that any piece of code explicitly advertises dependencies using the method signature, so I found myself still wanting this to work the same as if the parameters were in the request route, querystring, or form by being able to add a myArgument parameter to the action method and have this parameter populated by the framework. Lucky for us, the ASP.NET MVC framework is extremely flexible and provides some easy to grok and use extensibility points. 
ASP.NET MVC is able to provide information from the request as input parameters to controller actions because it uses objects which implement an interface called IValueProvider and have been registered to service the application. The most basic statement of responsibility for an IValueProvider implementation is "I know about some data which is indexed by key. If you hand me the key for a piece of data that I know about I give you that data". When preparing to invoke a controller action, the framework queries registered IValueProvider implementations with the name of each method argument to see if the ValueProvider can supply a value for the parameter. (the rest of this post will assume you're working along and make a lot more sense if you do) Let's pull Sitecore out of the equation for a second to simplify things and create an extremely simple IValueProvider implementation. For this example, I first create a new ASP.NET MVC3 project in Visual Studio, selecting "Internet Application" and otherwise taking defaults (I'm assuming that anyone reading this far in the post either already knows how to do this or will need to take a quick run through one of the many available basic MVC tutorials such as the MVC Music Store). Once the new project is created, go to the Index action of HomeController.  This action sets a Message property on the ViewBag to "Welcome to ASP.NET MVC!" and invokes the View, which has been coded to display the Message. For our example, we will remove the hard coded message from this controller (although we'll leave it just as hard coded somewhere else - this is sample code). For the first step in our exercise, add a string parameter to the Index action method called welcomeMessage and use the value of this argument to set the ViewBag.Message property. The updated Index action should look like: public ActionResult Index(string welcomeMessage) {     ViewBag.Message = welcomeMessage;     return View(); } This represents the entirety of the change that you will make to either the controller or view.  If you run the application now, the home page will display and no message will be presented to the user because no value was supplied to the Action method. Let's now write a ValueProvider to ensure this parameter gets populated. We'll start by creating a new class called StaticValueProvider. When the class is created, we'll update the using statements to ensure that they include the following: using System.Collections.Specialized; using System.Globalization; using System.Web.Mvc; With the appropriate using statements in place, we'll update the StaticValueProvider class to implement the IValueProvider interface. The System.Web.Mvc library already contains a pretty flexible dictionary-like implementation called NameValueCollectionValueProvider, so we'll just wrap that and let it do most of the real work for us. 
The completed class looks like: public class StaticValueProvider : IValueProvider {     private NameValueCollectionValueProvider _wrappedProvider;     public StaticValueProvider(ControllerContext controllerContext)     {         var parameters = new NameValueCollection();         parameters.Add("welcomeMessage", "Hello from the value provider!");         _wrappedProvider = new NameValueCollectionValueProvider(parameters, CultureInfo.InvariantCulture);     }     public bool ContainsPrefix(string prefix)     {         return _wrappedProvider.ContainsPrefix(prefix);     }     public ValueProviderResult GetValue(string key)     {         return _wrappedProvider.GetValue(key);     } } Notice that the only entry in the collection matches the name of the argument to our HomeController's Index action.  This is the important "secret sauce" that will make things work. We've got our new value provider now, but that's not quite enough to be finished. Mvc obtains IValueProvider instances using factories that are registered when the application starts up. These factories extend the abstract ValueProviderFactory class by initializing and returning the appropriate implementation of IValueProvider from the GetValueProvider method. While I wouldn't do so in production code, for the sake of this example, I'm going to add the following class definition within the StaticValueProvider.cs source file: public class StaticValueProviderFactory : ValueProviderFactory {     public override IValueProvider GetValueProvider(ControllerContext controllerContext)     {         return new StaticValueProvider(controllerContext);     } } Now that we have a factory, we can register it by adding the following line to the end of the Application_Start method in Global.asax.cs: ValueProviderFactories.Factories.Add(new StaticValueProviderFactory()); If you've done everything right to this point, you should be able to run the application and be presented with the home page reading "Hello from the value provider!". Now that you have the basics of the IValueProvider down, you have everything you need to enhance your Sitecore MVC implementation by adding an IValueProvider that exposes values from the ambient RenderingContext's Parameters property. I'll provide the code for the IValueProvider implementation (which should look VERY familiar) and you can use the work we've already done as a reference to create and register the factory: public class RenderingContextValueProvider : IValueProvider {     private NameValueCollectionValueProvider _wrappedProvider = null;     public RenderingContextValueProvider(ControllerContext controllerContext)     {         var collection = new NameValueCollection();         var rc = RenderingContext.CurrentOrNull;         if (rc != null && rc.Rendering != null)         {             foreach(var parameter in rc.Rendering.Parameters)             {                 collection.Add(parameter.Key, parameter.Value);             }         }         _wrappedProvider = new NameValueCollectionValueProvider(collection, CultureInfo.InvariantCulture);         }     public bool ContainsPrefix(string prefix)     {         return _wrappedProvider.ContainsPrefix(prefix);     }     public ValueProviderResult GetValue(string key)     {         return _wrappedProvider.GetValue(key);     } } In this post I've discussed the MVC IValueProvider used to map data to controller action method arguments and how this can be integrated into your Sitecore 6.6 MVC solution.

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model and in the process introduced tiled_index and tiled_grid. This is part 2. tile_static memory You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, that threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array // each tile of threads has its own copy of locA, // shared among the threads of the tile tile_static float locA[16][16]; Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors. tile_barrier In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it, from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait. 15: // Given a tiled_index object named t_idx 16: t_idx.barrier.wait(); 17: // more code …in the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code in such a way which meant that there is a chance that not all threads could reach line 16, then the code above would be illegal. Tiled Matrix Multiplication Example – part 2 So now that we added to our understanding the concepts of tile_static and tile_barrier, let me obfuscate rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code. 
01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W) 02: { 03: static const int TS = 16; 04: array_view<const float,2> a(M, W, vA); 05: array_view<const float,2> b(W, N, vB); 06: array_view<writeonly<float>,2> c(M,N,vC); 07: parallel_for_each(c.grid.tile< TS, TS >(), 08: [=] (tiled_index< TS, TS> t_idx) restrict(direct3d) 09: { 10: int row = t_idx.local[0]; int col = t_idx.local[1]; 11: float sum = 0.0f; 12: for (int i = 0; i < W; i += TS) { 13: tile_static float locA[TS][TS], locB[TS][TS]; 14: locA[row][col] = a(t_idx.global[0], col + i); 15: locB[row][col] = b(row + i, t_idx.global[1]); 16: t_idx.barrier.wait(); 17: for (int k = 0; k < TS; k++) 18: sum += locA[row][k] * locB[k][col]; 19: t_idx.barrier.wait(); 20: } 21: c[t_idx.global] = sum; 22: }); 23: } Notice that all the code up to line 9 is the same as per the changes we made in part 1 of tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference being that those lines use new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b), to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool. Why would I introduce this tiling complexity into my code? Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, ensure the number of threads we schedule are correctly divisible with the tile size, had to use a tiled_index instead of a normal index, and had to understand tile_barrier and to figure out where we need to use it, and double the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g. 
algorithms that require you to go to global memory only once will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Note as well that in future releases we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Farseer tutorial for the absolute beginners

    - by Bil Simser
    This post is inspired (and somewhat a direct copy) of a couple of posts Emanuele Feronato wrote back in 2009 about Box2D (his tutorial was ActionScript 3 based for Box2D, this is C# XNA for the Farseer Physics Engine). Here’s what we’re building: What is Farseer The Farseer Physics Engine is a collision detection system with realistic physics responses to help you easily create simple hobby games or complex simulation systems. Farseer was built as a .NET version of Box2D (based on the Box2D.XNA port of Box2D). While the constructs and syntax has changed over the years, the principles remain the same. This tutorial will walk you through exactly what Emanuele create for Flash but we’ll be doing it using C#, XNA and the Windows Phone platform. The first step is to download the library from its home on CodePlex. If you have NuGet installed, you can install the library itself using the NuGet package that but we’ll also be using some code from the Samples source that can only be obtained by downloading the library. Once you download and unpacked the zip file into a folder and open the solution, this is what you will get: The Samples XNA WP7 project (and content) have all the demos for Farseer. There’s a wealth of info here and great examples to look at to learn. The Farseer Physics XNA WP7 project contains the core libraries that do all the work. DebugView XNA contains an XNA-ready class to let you view debug data and information in the game draw loop (which you can copy into your project or build the source and reference the assembly). The downloaded version has to be compiled as it’s only available in source format so you can do that now if you want (open the solution file and rebuild everything). If you’re using the NuGet package you can just install that. We only need the core library and we’ll be copying in some code from the samples later. Your first Farseer experiment Start Visual Studio and create a new project using the Windows Phone template can call it whatever you want. It’s time to edit Game1.cs 1 public class Game1 : Game 2 { 3 private readonly GraphicsDeviceManager _graphics; 4 private DebugViewXNA _debugView; 5 private Body _floor; 6 private SpriteBatch _spriteBatch; 7 private float _timer; 8 private World _world; 9 10 public Game1() 11 { 12 _graphics = new GraphicsDeviceManager(this) 13 { 14 PreferredBackBufferHeight = 800, 15 PreferredBackBufferWidth = 480, 16 IsFullScreen = true 17 }; 18 19 Content.RootDirectory = "Content"; 20 21 // Frame rate is 30 fps by default for Windows Phone. 22 TargetElapsedTime = TimeSpan.FromTicks(333333); 23 24 // Extend battery life under lock. 25 InactiveSleepTime = TimeSpan.FromSeconds(1); 26 } 27 28 protected override void LoadContent() 29 { 30 // Create a new SpriteBatch, which can be used to draw textures. 
Line 4: Declare the debug view we'll use for rendering (more on that later).

Line 8: Declare the _world variable of type World. World is the main object you interact with in the Farseer engine. It stores all the joints and bodies, and is responsible for stepping through the simulation.

Lines 12-17: Create the graphics device we'll be rendering on. This is an XNA component, and we're just setting it to be the same size as the phone and toggling it to be full screen (no system tray).

Line 34: We load the SpriteFont we created by adding it to the Content project. It's called "font" because that's what the DebugView uses, but you can name it whatever you want (and if you're not using DebugView for your production app you might have several fonts).

Lines 37-44: We create the physics environment that Farseer uses to contain all the objects. We're using Vector2.UnitY*10 as the gravity for the environment; in other words, 10 units in a downward direction.

Lines 46-56: We create the DebugViewXNA here. This is copied from the […] in the code you downloaded and provides the ability to render all entities onto the screen. In a production release you'll be rendering each object yourself, but we cheat a bit for the demo and let the DebugView do it for us. The other thing it can provide is a panel of debugging information rendered while the simulation is running. This is useful for tracking down objects, figuring out how something works, or just keeping track of what's in the engine.

Lines 58-67: Here we create a rigid body (Farseer only supports rigid bodies) to represent the floor that we'll drop objects onto. We create it using one of the Farseer factories, specifying the width and height. The ConvertUnits class is copied from the samples code as-is and lets us toggle between display units (pixels) and simulation units (usually metres). We're creating a floor that's 480 pixels wide and 50 pixels high (converting both to sim units so the engine understands them), and we position it near the bottom of the screen. Simulation values are in metres, and positions refer to the centre of the body.
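To make the pixels-versus-metres point concrete, here is a minimal stand-in showing what a ConvertUnits-style helper boils down to. This is a sketch only (use the real ConvertUnits class copied from the samples), and the 64-pixels-per-metre ratio is an arbitrary value picked for illustration, not necessarily the samples' default.

// Conceptual sketch only; the real ConvertUnits class from the Farseer samples is more complete.
public static class ConvertUnitsSketch
{
    private const float DisplayUnitsPerSimUnit = 64f; // pixels per metre (assumed for illustration)

    // Pixels -> metres, e.g. ToSimUnits(480) for a 480-pixel-wide floor
    public static float ToSimUnits(float displayUnits)
    {
        return displayUnits / DisplayUnitsPerSimUnit;
    }

    // Pixel coordinates -> a position in metres, e.g. ToSimUnits(240, 775)
    public static Vector2 ToSimUnits(float x, float y)
    {
        return new Vector2(x / DisplayUnitsPerSimUnit, y / DisplayUnitsPerSimUnit);
    }

    // Metres -> pixels, e.g. for checking whether a body has fallen off the screen
    public static Vector2 ToDisplayUnits(Vector2 simUnits)
    {
        return simUnits * DisplayUnitsPerSimUnit;
    }
}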
Lines 77-78: The game's Update method fires 30 times a second, which is too fast to be creating objects every tick. So we use a variable to accumulate the elapsed seconds since the last box was created, and create a new box to drop once a full second has passed.

Lines 89-94: We create a box the same way we created our floor (coming up with a random width and height for the box) and store its pixel size in the body's user data.

Lines 96-101: We set the box to be Dynamic (rather than Static like the floor object) and position it somewhere along the top of the screen.

And now you've created the world. Gravity does the rest and the boxes fall to the ground. Here's the result: Farseer Physics Engine Demo using XNA

Line 105: We must update the world every frame. We do this with the Step method, which takes the time interval to advance the simulation by.

Lines 108-114: Body objects are added to the world but never automatically removed (because Farseer doesn't know about the display world, it has no idea whether an item is on the screen or not). Here we just loop through all the entities, and anything that has dropped off the screen (below the bottom) gets removed from the World. This keeps our entity count down (the simulation never has more than 30 or 40 objects in the world no matter how long you run it for). Too many entities and the app will grind to a halt.

Lines 125-130: Farseer knows nothing about the UI, so how things are drawn is entirely up to you. Farseer is just tracking the objects and moving them around according to the physics engine and its rules. You'll still use XNA to draw items (using the SpriteBatch.Draw method), so you can load up your usual textures and draw items and pirates and dancing zombies all over the screen. In this demo, though, we're going to cheat a little. In the sample code for Farseer that you can download, there's a project called DebugView XNA. This project contains the DebugViewXNA class, which just handles iterating through all the bodies in the world and drawing the shapes. So we call that class's RenderDebugData method here to draw everything correctly. Since this demo only needs to draw shapes, take a look at the source code for the DebugViewXNA class to see how it extracts the vertices for the shapes created (in this case simple boxes) and draws them. You'll learn a *lot* about how Farseer works just by looking at this class.
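If you would rather skip the DebugView and draw the bodies yourself, here is a rough sketch of what that could look like for this demo. It is an illustration under a couple of assumptions rather than code from the tutorial: it assumes you created a 1x1 white Texture2D field called _pixel in LoadContent, and it relies on each box's pixel size being stored in UserData as a Point (which the BodyFactory.CreateRectangle call above does).

// Sketch of hand-rolled rendering for this demo (not part of the original tutorial).
// Assumes _pixel is a 1x1 white Texture2D created in LoadContent.
private void DrawBoxes()
{
    _spriteBatch.Begin();

    foreach (var body in _world.BodyList)
    {
        // The floor was created without user data, so skip anything that isn't a sized box.
        if (!(body.UserData is Point))
            continue;

        var size = (Point)body.UserData;

        // Convert the simulation position (metres, centre of the body) back to pixels.
        var position = ConvertUnits.ToDisplayUnits(body.Position);

        _spriteBatch.Draw(
            _pixel,                       // 1x1 white texture, stretched to the box size
            position,                     // where the centre of the box should be drawn
            null,                         // no source rectangle
            Color.CornflowerBlue,         // any tint you like
            body.Rotation,                // Farseer rotations are in radians, as XNA expects
            new Vector2(0.5f, 0.5f),      // origin at the centre of the 1x1 texture
            new Vector2(size.X, size.Y),  // scale the single pixel up to the box's size
            SpriteEffects.None,
            0f);
    }

    _spriteBatch.End();
}

You would call something like this from Draw instead of _debugView.RenderDebugData, swapping the flat tint for real textures once you have art.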
That's it, that's all. Simple, huh? Hope you enjoy the code and library. Physics is hard and requires some math skills to really grok, but the Farseer Physics Engine makes it pretty easy to get up and running and start building games. In future posts we'll get more in-depth with things you can do with the engine, so this is just the beginning. Enjoy!

    Read the article

< Previous Page | 1 2 3 4  | Next Page >