Search Results

Search found 1288 results on 52 pages for 'prime factors'.

Page 35/52 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • Soft Shadows in Raytracing 3D to 2D

    - by Myx
    Hello: I wish to implement soft shadows produced by area lights in my raytracer, and I'm having trouble generating the random samples. I have a scene containing an area light (represented as a circle) for which the world (x, y, z) coordinates of the center, the radius, and the normal of the plane on which the circle lies are given, as well as the color and attenuation factors. The sampling scheme I wish to use is the following: generate samples on the quadrilateral that encompasses the circle and discard points outside the circle until the required number of samples within the circle has been found. I'm having trouble understanding how I can transform the 3D coordinates of the center of the circle to a 2D representation (I don't think I can assume that the circle projects onto the x-y plane and thus just drop the z-component). I think the plane normal information should be used, but I'm not sure how. Any and all suggestions are appreciated.
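
    A minimal sketch in Python of one way around the 3D-to-2D step (not from the post; all names are illustrative): instead of projecting the circle into 2D, build an orthonormal basis (u, v) spanning the light's plane from its normal, rejection-sample the enclosing square in local (s, t) coordinates, and map each accepted sample back to 3D as center + s·u + t·v.

        import math
        import random

        def cross(a, b):
            return (a[1] * b[2] - a[2] * b[1],
                    a[2] * b[0] - a[0] * b[2],
                    a[0] * b[1] - a[1] * b[0])

        def normalize(a):
            n = math.sqrt(sum(c * c for c in a))
            return tuple(c / n for c in a)

        def disk_samples(center, normal, radius, count):
            n = normalize(normal)              # make sure the plane normal is unit length
            # Any vector not parallel to n works as a seed for the basis.
            seed = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
            u = normalize(cross(n, seed))
            v = cross(n, u)                    # unit length, since n and u are orthonormal
            samples = []
            while len(samples) < count:
                s = random.uniform(-radius, radius)
                t = random.uniform(-radius, radius)
                if s * s + t * t <= radius * radius:   # discard points outside the circle
                    samples.append(tuple(center[i] + s * u[i] + t * v[i]
                                         for i in range(3)))
            return samples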

    Read the article

  • Life Cycle Tools Suite

    - by pearcewg
    I am looking to replace the life cycle tools currently used by my development teams. Tools that I'm looking for:
    - Version Control
    - Defect/Issue Tracking
    - Requirements Tracking
    - Test Case Management
    - (potentially) Project Management: project status, hours entry
    I have a new beefy server (Windows 2008 Server) to run all the tools on. I'm looking at COTS and open-source options, but haven't decided so far. Other factors:
    - Distributed team (different physical sites)
    - Some Windows development, some Linux development
    - Software, firmware, and technical-writing teams all need to be able to use it
    Any recommendations on a good suite that will work together? If open source, what is the best approach to run it on the Windows 2008 Server?

    Read the article

  • In LINQ-SQL, wrapping the DataContext in a using statement - pros and cons

    - by hIpPy
    Can someone pitch in their opinion about the pros and cons of wrapping the DataContext in a using statement (or not) in LINQ-SQL, in terms of factors such as performance, memory usage, ease of coding, the right thing to do, etc.? Update: In one particular application I experienced that, without wrapping the DataContext in a using block, memory usage kept increasing because the live objects were not released for GC. As in the example below: if I hold the reference to the List of q objects and access entities of q, I create an object graph that is not released for GC. DataContext with using:

        using (DBDataContext db = new DBDataContext())
        {
            var q = from x in db.Tables
                    where x.Id == someId
                    select x;
            return q.ToList();
        }

    DataContext without using, kept alive:

        DBDataContext db = new DBDataContext();
        var q = from x in db.Tables
                where x.Id == someId
                select x;
        return q.ToList();

    Thanks.

    Read the article

  • Programmer productivity by programming language?

    - by Jason Baker
    In Code Complete, there's a nice table listing how productive a programmer is depending on the language. Jeff Atwood has a nice blog post about it. That chart is at least 4 years old by now. I'm curious: have there been any more recent studies done on this? (insert standard anti-flamewar boilerplate here... we're all adults) Update: I appreciate everyone's opinions on the subject and on whether or not this is a relevant question. But that's not really what I'm asking for. I want studies on the subject. I'm inclined to agree with most of the opinions posted thus far, but I'd like to see if there's any research to back them up. I'm also aware that the choice of programming language is a complicated subject that depends on other factors, like developer familiarity. To me, this is all the more reason to have these kinds of discussions backed by research. Also, thanks for the link, Robert Gamble.

    Read the article

  • What is technically more advanced: Brainf*ck or Assembler?

    - by el ka es
    I wondered which of these languages is more powerful. By powerful I don't mean readability (assembler would naturally be the winner there), but something resulting from, for example, the following factors:
    - Which of them is more high-level? (Neither really is, but one has to be more so.)
    - Which would likely be faster once compiled? (There is no BF compiler out there as far as I know, but it wouldn't be hard to write one, I suppose.)
    - Which of the two has the better code-length-to-action ratio?
    What I mean is: if you get too distracted by assembler's improved readability compared to Brainf*ck, just think of writing the plain binary/machine code that assembler assembles to. Both languages are so basic that it should be possible to answer the question(s) in a rather objective way, I hope.

    Read the article

  • Statistical approach to chess?

    - by Chinmay Kanchi
    Reading about how Google solves the translation problem got me thinking. Would it be possible to build a strong chess engine by analysing several million games and determining the best possible move based largely (completely?) on statistics? There are several such chess databases (this is one that has 4.5 million games), and one could potentially weight moves in identical (or mirrored or reflected) positions using factors such as the ratings of the players involved, how old the game is (to factor in improvements in chess theory) etc. Any reasons why this wouldn't be a feasible approach to building a chess engine?
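
    For concreteness, a toy sketch in Python of the weighted move table such an engine would be built around (the position key, weighting constants, and record format are hypothetical, not from any particular database):

        from collections import defaultdict

        # move_stats[position_key][move] -> accumulated weight
        move_stats = defaultdict(lambda: defaultdict(float))

        def record_move(position_key, move, avg_rating, years_old):
            # Weight observed moves by player strength and recency (made-up constants).
            weight = (avg_rating / 2000.0) * (0.95 ** years_old)
            move_stats[position_key][move] += weight

        def best_move(position_key):
            moves = move_stats.get(position_key)
            if not moves:
                return None   # position never seen: pure statistics has nothing to suggest
            return max(moves, key=moves.get)

    The empty branch is where feasibility gets hard: a few million games cover opening positions densely, but most middlegame positions occur once or never.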

    Read the article

  • Which offer to pick?

    - by Shawn
    OK. This is kinda silly, but I'd really like to hear what you guys say. Here is the thing: I used to be a PHP developer and started looking for jobs 3 weeks ago, and now I have 2 offers at hand. If I took the first one, what I'd do would be 100% in PHP - simple dev work on an e-commerce project. If I go with the second one, the projects would be in Python, which I don't have much experience with, and I'm kinda excited about the new stuff I'm going to learn. But wait, the first offer gives more money (1k plus), and that's why I'm still hesitating and troubling you guys. If you were me, which one would you pick? I know there could be other factors to be considered, like 'which position is closer to your home so you could take good care of your dog' - well, let's forget about those things for now.

    Read the article

  • C++ project type: unicode vs multi-byte; pros and cons

    - by Stefan Valianu
    I'm wondering what the Stack Overflow community thinks when it comes to creating a project (thinking primarily C++ here) with a Unicode or a multi-byte character set. Are there pros to going Unicode straight from the start, implying all your strings will be in wide format? Are there performance issues / larger memory requirements because of the standard use of a larger character type? Is there an advantage to this method? Do some processor architectures handle wide characters better? Are there any reasons to make your project Unicode if you don't plan on supporting additional languages? What reasons would one have for creating a project with a multi-byte character set? How do all of the factors above collide in a high-performance environment (such as a modern video game)?

    Read the article

  • Linear complexity and quadratic complexity

    - by jasonline
    I'm just not sure... If you have code that can be executed with either of the following complexities:
    - a sequence of O(n) operations - for example, two O(n) passes in sequence, or
    - O(n²)
    the preferred version would be the one that runs in linear time. But would there be a point where that sequence of O(n) passes becomes too much, so that O(n²) would be preferred? In other words, is C × O(n) < O(n²) always true for any constant C? Why or why not? What factors would affect the conditions such that it would be better to choose the O(n²) complexity?
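
    As a worked check (not from the post): dropping the big-O, C·n beats n² exactly when n > C, so any fixed number of sequential linear passes wins for large enough inputs; the quadratic version can only win on small inputs when its constant factor is much smaller. A quick sketch with made-up constants:

        def linear_cost(n, passes=2, per_item=1.0):
            # Cost model for `passes` sequential O(n) loops.
            return passes * per_item * n

        def quadratic_cost(n, per_pair=0.1):
            # Cost model for a single O(n^2) loop with a small constant factor.
            return per_pair * n * n

        # Crossover of 2n vs 0.1n^2 is at n = 20; below that, quadratic is cheaper.
        for n in (5, 20, 100):
            print(n, linear_cost(n), quadratic_cost(n))
        # 5   -> 10.0 vs 2.5     (quadratic cheaper)
        # 20  -> 40.0 vs 40.0    (break-even)
        # 100 -> 200.0 vs 1000.0 (linear cheaper)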

    Read the article

  • What is technically more advanced: Python or Assembler? [closed]

    - by el ka es
    I wondered which of these languages is more powerful. By powerful I don't mean readability (assembler would naturally be the winner there), but something resulting from, for example, the following factors:
    - Which of them is more high-level? (Neither really is, but one has to be more so.)
    - Which would likely be faster once compiled? (There is no Python compiler out there as far as I know, but it wouldn't be hard to write one, I suppose.)
    - Which of the two has the better code-length-to-action ratio?
    What I mean is: if you get too distracted by assembler's improved readability compared to Python, just think of writing the plain binary/machine code that assembler assembles to. Both languages are so basic that it should be possible to answer the question(s) in a rather objective way, I hope.

    Read the article

  • Can I use XText for a DSL involving an XML file type?

    - by Chris
    I have defined a small DSL that is mostly written in the form of different types of XML files in conjunction with some property files. This works very well, but I wish to create an Eclipse editor to make editing these files easier for beginners (I already have a working parser). The main XML file can reference some items from the .properties files and vice versa. The main XML file can also reference other XML files. Certain options should only be available in the main XML file based on the contents of the .properties files and on some OSGi plugins that can be added to the DSL project (the syntax is dynamic depending on context). The structure of the language is fixed, but the options available in each attribute, or the choice of attributes themselves, change depending on metadata contained in plugin .jar files. Questions:
    - Does XText support dynamic syntax (validation that changes depending on external factors)?
    - Does XText support XML files / .properties files?
    Thank you very much for your help in advance.

    Read the article

  • Cannot combine commits using TortoiseGit

    - by JC
    I have two branches with several commits each. On one branch, I can go to the log, select two commits, and TortoiseGit shows "combine to one commit" in the context menu. On the other branch this option does not show up in the context menu. Both sequences of commits are very similar (add a file, then modify it), so there is no real difference between the branches. What factors would cause this "combine to one commit" option to not be available? I'm wondering if I should just switch to the command line.

    Read the article

  • Calculation of charged traffic in GPRS network

    - by TyBoer
    I am working with a distributed application communicating over GPRS. I use UDP packets to send business data and ICMP pings to verify connectivity. And now I have a problem with calculating the traffic for which I will be charged by the provider. I have to consider the following factors:
    - UDP payload: that is obvious.
    - UDP overhead: UDP header + IP header = 8 + 20 bytes.
    - ICMP echo request without data: IP header + ICMP header = 28 bytes.
    - ICMP echo reply: same as the request.
    The above means that for every data packet I am charged for the payload + 28 bytes, and for every ping 56 bytes. Am I right, or am I missing/misunderstanding something?
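
    A sketch of that billing arithmetic as a hypothetical Python helper (header sizes as stated above, for IPv4 without options; it ignores retransmissions and any per-packet rounding the provider might apply):

        UDP_IP_OVERHEAD = 8 + 20     # UDP header + IPv4 header, bytes
        ICMP_PACKET = 20 + 8         # IPv4 header + ICMP echo header (no data), bytes

        def charged_bytes(udp_payload_sizes, ping_count):
            # Each datagram costs payload + 28 bytes; each ping costs
            # 28 bytes for the request plus 28 for the reply = 56 bytes.
            data = sum(size + UDP_IP_OVERHEAD for size in udp_payload_sizes)
            pings = ping_count * 2 * ICMP_PACKET
            return data + pings

        # Three 100-byte datagrams and 10 pings:
        # 3 * (100 + 28) + 10 * 56 = 384 + 560 = 944 bytes
        print(charged_bytes([100, 100, 100], 10))   # 944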

    Read the article

  • Java NIO SocketChannel writing problem

    - by Nilesh
    I am using Java NIO's SocketChannel to write:

        int n = socketChannel.write(byteBuffer);

    Most of the time the data is sent in one or two parts; i.e., if the data could not be sent in one attempt, the remaining data is retried. The issue is that sometimes the data is not sent completely in one attempt, and when the rest of the data is retried, it can happen that even after several attempts not a single byte is written to the channel; only after some time does the remaining data finally get sent. What could be the cause of such behaviour? Could external factors such as RAM, etc., cause the hindrance? Please help me solve this issue. If any other information is required, please let me know. Thanks.
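
    A write that returns 0 on a non-blocking channel usually means the socket's kernel send buffer is full (e.g., the receiver is draining slowly); the standard NIO remedy is to register OP_WRITE with a Selector and retry when the channel becomes writable again. The same pattern sketched in Python, purely to illustrate the mechanism rather than the Java API:

        import select
        import socket

        def send_all(sock: socket.socket, data: bytes) -> None:
            # Send all of `data` on a non-blocking socket, waiting for
            # writability (the select() analogue of NIO's OP_WRITE)
            # whenever the kernel send buffer is full.
            view = memoryview(data)
            while len(view) > 0:
                try:
                    sent = sock.send(view)   # may transfer only part of the buffer
                    view = view[sent:]
                except BlockingIOError:
                    # No progress possible right now; block until writable.
                    select.select([], [sock], [])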

    Read the article

  • How to use custom drawing in QGraphicsViews in PyQt?

    - by DSblizzard
    I need to view a QGraphicsScene in 2 QGraphicsViews, with the condition that they have different scale factors for the items in the scene. The closest function I found is drawItems(), but as far as I can understand, it must be called manually. How do I get the views to repaint automatically? I have these two code fragments in the program:

        class TGraphicsView(QGraphicsView):
            def __init__(self, parent = None):
                print("__init__")
                QGraphicsView.__init__(self, parent)

            def drawItems(self, Painter, ItemCount, Items, StyleOptions):
                print("drawItems")
                Brush = QBrush(Qt.red, Qt.SolidPattern)
                Painter.setBrush(Brush)
                Painter.drawEllipse(0, 0, 100, 100)

        ...

        Mw.gvNavigation = TGraphicsView(Mw)  # Mw - main window
        Mw.gvNavigation.setGeometry(0, 0, Size1, Size1)
        Mw.gvNavigation.setScene(Mw.Scene)
        Mw.gvNavigation.setSceneRect(0, 0, Size2, Size2)
        Mw.gvNavigation.show()

    __init__ works, Mw.gvNavigation is displayed and there are Mw.Scene items in it, but drawItems() isn't called.
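
    Not from the post, but a sketch of what may be the simpler route: each QGraphicsView carries its own view transform, so two views of one scene can be scaled independently with QGraphicsView.scale(), and both repaint automatically whenever the scene changes - no drawItems() override needed. Assuming PyQt4-style imports to match the code above:

        from PyQt4.QtGui import QApplication, QGraphicsScene, QGraphicsView

        app = QApplication([])
        scene = QGraphicsScene(0, 0, 400, 400)
        scene.addEllipse(50, 50, 100, 100)

        # Two views of the same scene, each with its own scale factor.
        overview = QGraphicsView(scene)
        overview.scale(0.25, 0.25)   # small navigation view
        detail = QGraphicsView(scene)
        detail.scale(2.0, 2.0)       # zoomed-in view

        overview.show()
        detail.show()
        app.exec_()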

    Read the article

  • PInvoke or using /clr:pure to compile

    - by Yin Zhu
    I have a set of numerical libraries in C++ and I want to call them interactively in an interpretive language like F# or IronPython. So I have two choices now:
    1. Compile the library into a native DLL and use PInvoke to call the functions in it.
    2. Compile the C++ library to a .NET DLL using Visual C++ (the /clr:pure compile option).
    The advantage of 1 is that it is very fast; however, there is more work in it - e.g., I cannot PInvoke a double pointer (e.g. float **), so I must write another wrapper in the C++ library to make the interface friendly to .NET. The advantage of 2 is that I don't need to know about marshaling strings, arrays, etc. However, the .NET DLL is slower compared to the native one. What other factors should be considered when choosing between the two?

    Read the article

  • Can you tune C runtime heap segment reservation size on XP?

    - by Jason
    When the VC6 C runtime on XP can't serve an allocation request within an existing heap segment, it reserves a new segment. The size of these new segments increases by factors of 2 (until no free areas are large enough to allow that, at which point it falls back to smaller segments). In any case, is there any way to control this behavior on XP with the VC6 runtime? For example, doubling up to a point, but capping at 64MB segments. If there is no way on XP but there is on 7, that would be good to know too. Likewise, if there is no way on VC6 but there is on VC8 or later, that would be interesting.

    Read the article

  • NSScrollView jumping to bottom on scroll

    - by Nick Locking
    I have an NSScrollView containing an NSImageView, which resizes based on various factors. When it resizes I have generally changed the image, so I scroll the NSScrollView to the top. This works fine. However, when I start to scroll the NSScrollView again, it moves a few pixels and then (most of the time) jumps to the bottom of the scroll. After it jumps once, it works as normal until I move the scroller to the top again. This is driving me insane. All I'm really doing is this:

        [_imageView setImage: anNSImage];
        NSRect frame;
        NSSize imageSize = [anNSImage size];
        frame.size = imageSize;
        frame.origin = NSZeroPoint;
        [_imageView setFrame: frame];
        [[_mainScrollview contentView] scrollToPoint:
            NSMakePoint(0, [_imageView frame].size.height - [_mainScrollview frame].size.height)];

    Read the article

  • Choosing between ExtJS and YUI based on application parameters.

    - by Kabeer
    Hello. I need help in making the call to choose between the Ext JS and YUI libraries. Here are the key factors I have derived from my application requirements & development process:
    - Complex, Windows-Forms-like controls
    - Widgets, layouts, utilities
    - Inter-widget communication
    - Easy to extend
    - Easy to learn
    - Intuitive & concise coding
    - Strong exception handling
    - Active support / community
    - Keeps up with upcoming technologies (HTML5, etc.)
    - Skins & themes that are easy to change
    - Skins & themes that support variety (a text box appearing differently in different contexts)
    - Support & utilities for standard protocols (XmlHttp, JSON)
    - Good performance (responsive)
    Cost is not crucial, but I don't mind saving :)

    Read the article

  • When should you use intermediate variables for expressions?

    - by froadie
    There are times when one writes code such as this:

        callSomeMethod(someClass.someClassMethod());

    And other times when one would write the same code like this:

        int result = someClass.someClassMethod();
        callSomeMethod(result);

    This is just a basic example to illustrate the point. My question isn't whether you should use an intermediate variable or not, as that depends on the code and can sometimes be a good design decision and sometimes a terrible one. The question is: when would you choose one method over the other? What factors would you consider when deciding whether to use an intermediate step? (I'd assume the length and understandability of the fully inlined code would have something to do with it...)

    Read the article

  • break dataframe into subsets by factor values, send to function that returns glm class, how to recombine?

    - by Alex Holcombe
    Thanks to Hadley's plyr package, the ddply function lets us take a dataframe, break it down into sub-dataframes by factors, send each to a function, and then combine the function results for each sub-dataframe into a new dataframe. But what if the function returns an object of a class like glm - or, in my case, c("glm", "lm")? Then these can't be combined into a dataframe, can they? I get this error instead:

        Error in as.data.frame.default(x[[i]], optional = TRUE,
          stringsAsFactors = stringsAsFactors) :
          cannot coerce class 'c("glm", "lm")' into a data.frame

    Is there some more flexible data structure that will accommodate all the complex glm-class results of my function calls, preserving the information regarding the dataframe subsets? Or should this be done in an entirely different way?

    Read the article

  • New to AVL tree implementation.

    - by nn
    I am writing a sliding-window compression algorithm (LZ77) that searches for phrases in a "moving" dictionary. So far I have written a BST where each node is stored in an array, and its index in the array is also the value of the starting position in the window itself. I am now looking at transforming the BST into an AVL tree. I am a little confused by the sample implementations I have seen. Some only appear to store the balance factors, whereas others store the height of each node. Are there any performance advantages/disadvantages to storing the height and/or the balance factor for each node? Apologies if this is a very simple question, but I'm still not visualizing how I want to restructure my BST to implement height balancing. Thanks.
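
    For orientation, a minimal Python sketch (not from the question) of the store-the-height variant: with a height per node, the balance factor becomes a two-line derivation, at the cost of one extra int per node; storing only a balance factor in {-1, 0, +1} saves that space but makes the update bookkeeping after inserts and deletes fiddlier.

        class Node:
            # AVL node that stores its own subtree height.
            def __init__(self, key):
                self.key = key
                self.left = None
                self.right = None
                self.height = 1          # a leaf has height 1

        def height(node):
            return node.height if node else 0

        def update_height(node):
            # Called on the way back up after an insert or delete.
            node.height = 1 + max(height(node.left), height(node.right))

        def balance_factor(node):
            # Derived on demand; no separate field to keep in sync.
            return height(node.left) - height(node.right)

        def needs_rebalance(node):
            return abs(balance_factor(node)) > 1    # rotate here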

    Read the article

  • Download estimator control using JavaScript and Ajax

    - by Anil Namde
    I would like to implement a download estimator using JavaScript and Ajax. I have gone through Google to find existing implementations of download estimators, and most of the time the strategy is to ask the user for their bandwidth and then calculate the number from that. It's a workable approach, but there is hardly anything reliable about it for getting the estimated time right. What I would like to try is to use Ajax to request a file of 100 KB - 200 KB, do the math, get the number, and update the display. Now this is surrounded by so many questions - the network, the number of packets formed, proxies, etc. All these factors are enough to undermine the approach. But THIS IS HOW I HAVE TO DO THIS. Now I would like inputs from you all to make it better (as a good discussion): what can be added to this? Can we find out the user's bandwidth without asking?
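
    The probe arithmetic itself is small; here it is sketched in Python rather than JavaScript, purely to pin down the math (the probe URL is a placeholder, and a real Ajax version would time an XMLHttpRequest for the same 100-200 KB file the same way):

        import time
        import urllib.request

        def estimate_seconds_remaining(probe_url, remaining_bytes):
            # Download a small known-size probe, derive bytes/second, and
            # scale up to the remaining download. One probe is noisy;
            # averaging several would smooth out jitter from the network,
            # proxies, etc.
            start = time.monotonic()
            data = urllib.request.urlopen(probe_url).read()
            elapsed = time.monotonic() - start
            bandwidth = len(data) / elapsed        # bytes per second, this sample
            return remaining_bytes / bandwidth

        # e.g. a 150 KB probe taking 0.5 s -> 300 KB/s, so a 30 MB file ≈ 100 s.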

    Read the article

  • Sorting Table Cells based on data from NSArray

    - by Graeme
    Hi, I have an NSArray which contains information from an RSS feed on dogs, such as [dog types], [dog age] and [dog size]. At the moment my UITableView simply displays one cell per dog, and within the cell lists [dog types], [dog age] and [dog size]. I want to allow users of my app to "sort" this data based on the dog name, dog size or dog age when they press a UIButton in the top nav-bar. I'm struggling to work out how to re-sort the UITableView based on these factors, so any help is appreciated. Thanks.
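
    The general shape of the fix, sketched in Python for brevity (on iOS the equivalent is re-sorting the backing array - e.g. with an NSSortDescriptor - and then reloading the table): keep one canonical list of dog records, re-sort it by the tapped attribute, and redisplay.

        from dataclasses import dataclass
        from operator import attrgetter

        @dataclass
        class Dog:                     # stand-in for one parsed RSS entry
            name: str
            age: int
            size: str

        dogs = [Dog("Rex", 4, "large"), Dog("Pip", 2, "small"), Dog("Ada", 7, "medium")]

        def resort(dogs, key_name):
            # Re-sort the backing list by the tapped attribute; the table
            # view is then reloaded from this sorted list.
            return sorted(dogs, key=attrgetter(key_name))

        print([d.name for d in resort(dogs, "age")])   # ['Pip', 'Rex', 'Ada']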

    Read the article
