Search Results

Search found 10328 results on 414 pages for 'behavior tree'.

Page 37 of 414

  • Weird MySQL behavior, seems like a SQL bug

    - by Daniel Magliola
    I'm getting a very strange behavior in MySQL, which looks like some kind of weird bug. I know it's common to blame the tried and tested tool for one's mistakes, but I've been going around this for a while. I have 2 tables, I, with 2797 records, and C, with 1429. C references I. I want to delete all records in I that are not used by C, so i'm doing: select * from i where id not in (select id_i from c); That returns 0 records, which, given the record counts in each table, is physically impossible. I'm also pretty sure that the query is right, since it's the same type of query i've been using for the last 2 hours to clean up other tables with orphaned records. To make things even weirder... select * from i where id in (select id_i from c); DOES work, and brings me the 1297 records that I do NOT want to delete. So, IN works, but NOT IN doesn't. Even worse: select * from i where id not in ( select i.id from i inner join c ON i.id = c.id_i ); That DOES work, although it should be equivalent to the first query (i'm just trying mad stuff at this point). Alas, I can't use this query to delete, because I'm using the same table i'm deleting from in the subquery. I'm assuming something in my database is corrupt at this point. In case it matters, these are all MyISAM tables without any foreign keys, whatsoever, and I've run the same queries in my dev machine and in the production server with the same result, so whatever corruption there might be survived a mysqldump / source cycle, which sounds awfully strange. Any ideas on what could be going wrong, or, even more importantly, how I can fix/work around this? Thanks! Daniel

    Read the article

  • Populate a tree from Hierarchical data using 1 LINQ statement

    - by Midhat
    Hi. I have set up this programming exercise. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication2 { class DataObject { public int ID { get; set; } public int ParentID { get; set; } public string Data { get; set; } public DataObject(int id, int pid, string data) { this.ID = id; this.ParentID = pid; this.Data = data; } } class TreeNode { public DataObject Data {get;set;} public List<DataObject> Children { get; set; } } class Program { static void Main(string[] args) { List<DataObject> data = new List<DataObject>(); data.Add(new DataObject(1, 0, "Item 1")); data.Add(new DataObject(2, 0, "Item 2")); data.Add(new DataObject(21, 2, "Item 2.1")); data.Add(new DataObject(22, 2, "Item 2.2")); data.Add(new DataObject(221, 22, "Item 2.2.1")); data.Add(new DataObject(3, 0, "Item 3")); } } } The desired output is a List of 3 treenodes, having items 1, 2 and 3. Item 2 will have a list of 2 dataobjects as its children member and so on. I have been trying to populate this tree (or rather a forest) using just 1 SLOC using LINQ. A simple group by gives me the desired data but the challenge is to organize it in TreeNode objects. Can someone give a hint or an impossibility result for this?
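
    The single-LINQ-statement challenge is the poster's own; for reference, the usual two-pass shape of the solution is sketched below in Java rather than C# (the class names and the "ParentID 0 means root" convention are assumptions taken from the sample data): index every row by ID, then attach each node to its parent.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class ForestBuilder {
            static final class Row {
                final int id, parentId; final String data;
                Row(int id, int parentId, String data) { this.id = id; this.parentId = parentId; this.data = data; }
            }
            static final class Node {
                final Row row; final List<Node> children = new ArrayList<>();
                Node(Row row) { this.row = row; }
            }

            // Pass 1: wrap every row in a node, indexed by id.
            // Pass 2: attach each node to its parent; a missing parent (id 0 here) marks a root.
            static List<Node> buildForest(List<Row> rows) {
                Map<Integer, Node> byId = new HashMap<>();
                for (Row r : rows) byId.put(r.id, new Node(r));
                List<Node> roots = new ArrayList<>();
                for (Node n : byId.values()) {
                    Node parent = byId.get(n.row.parentId);
                    if (parent == null) roots.add(n); else parent.children.add(n);
                }
                return roots;
            }

            public static void main(String[] args) {
                List<Row> data = new ArrayList<>();
                data.add(new Row(1, 0, "Item 1"));
                data.add(new Row(2, 0, "Item 2"));
                data.add(new Row(21, 2, "Item 2.1"));
                data.add(new Row(22, 2, "Item 2.2"));
                data.add(new Row(221, 22, "Item 2.2.1"));
                data.add(new Row(3, 0, "Item 3"));
                System.out.println(buildForest(data).size()); // 3 root nodes
            }
        }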

    Read the article

  • Behavior of <- NULL on lists versus data.frames for removing data

    - by Ananda Mahto
    Many R users eventually figure out lots of ways to remove elements from their data. One way is to use NULL, particularly when you want to do something like drop a column from a data.frame or drop an element from a list. Eventually, a user comes across a situation where they want to drop several columns from a data.frame at once, and they hit upon <- list(NULL) as the solution (since using <- NULL will result in an error). A data.frame is a special type of list, so it wouldn't be too tough to imagine that the approaches for removing items from a list should be the same as removing columns from a data.frame. However, they produce different results, as can be seen in the example below. ## Make some small data--two data.frames and two lists cars1 <- cars2 <- head(mtcars)[1:4] cars3 <- cars4 <- as.list(cars2) ## Demonstration that the `list(NULL)` approach works cars1[c("mpg", "cyl")] <- list(NULL) cars1 # disp hp # Mazda RX4 160 110 # Mazda RX4 Wag 160 110 # Datsun 710 108 93 # Hornet 4 Drive 258 110 # Hornet Sportabout 360 175 # Valiant 225 105 ## Demonstration that simply using `NULL` does not work cars2[c("mpg", "cyl")] <- NULL # Error in `[<-.data.frame`(`*tmp*`, c("mpg", "cyl"), value = NULL) : # replacement has 0 items, need 12 Switch to applying the same concept to a list, and compare the difference in behavior. ## Does not fully drop the items, but sets them to `NULL` cars3[c("mpg", "cyl")] <- list(NULL) # $mpg # NULL # # $cyl # NULL # # $disp # [1] 160 160 108 258 360 225 # # $hp # [1] 110 110 93 110 175 105 ## *Does* drop the `list` items while this would ## have produced an error with a `data.frame` cars4[c("mpg", "cyl")] <- NULL # $disp # [1] 160 160 108 258 360 225 # # $hp # [1] 110 110 93 110 175 105 The main questions I have are, if a data.frame is a list, why does it behave so differently in this scenario? Is there a foolproof way of knowing when an element will be dropped, when it will produce an error, and when it will simply be given a NULL value? Or do we depend on trial-and-error for this?

    Read the article

  • Recursion through a directory tree in PHP

    - by phphelpplz
    I have a set of folders that has a depth of at least 4 or 5 levels. I'm looking to recurse through the directory tree as deep as it goes, and iterate over every file. I've gotten the code to go down into the first sets of subdirectories, but no deeper, and I'm not sure why. Any ideas? $count = 0; $dir = "/Applications/MAMP/htdocs/idahohotsprings.com"; function recurseDirs($main, $count){ $dir = "/Applications/MAMP/htdocs/idahohotsprings.com"; $dirHandle = opendir($main); echo "here"; while($file = readdir($dirHandle)){ if(is_dir($file) && $file != '.' && $file != '..'){ echo "isdir"; recurseDirs($file); } else{ $count++; echo "$count: filename: $file in $dir/$main \n<br />"; } } } recurseDirs($dir, $count); ?>

    Read the article

  • Is this the intended behavior in JComboBox? How can I avoid this behavior?

    - by Yan Cheng CHEOK
    I realize that if you are having a same selection in JComboBox, using up/down arrow key, will not help you to navigate the selection around. How I can avoid this behavior? See the below screen shoot /* * To change this template, choose Tools | Templates * and open the template in the editor. */ /* * NewJFrame.java * * Created on May 8, 2010, 7:46:28 PM */ package javaapplication26; /** * * @author yccheok */ public class NewJFrame extends javax.swing.JFrame { /** Creates new form NewJFrame */ public NewJFrame() { initComponents(); /* If you are having 3 same strings here. Using, up/down arrow key, * will not move the selection around. */ this.jComboBox1.addItem("Intel"); this.jComboBox1.addItem("Intel"); this.jComboBox1.addItem("Intel"); } /** This method is called from within the constructor to * initialize the form. * WARNING: Do NOT modify this code. The content of this method is * always regenerated by the Form Editor. */ @SuppressWarnings("unchecked") // <editor-fold defaultstate="collapsed" desc="Generated Code"> private void initComponents() { jComboBox1 = new javax.swing.JComboBox(); setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE); jComboBox1.setEditable(true); javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane()); getContentPane().setLayout(layout); layout.setHorizontalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addGap(105, 105, 105) .addComponent(jComboBox1, javax.swing.GroupLayout.PREFERRED_SIZE, 158, javax.swing.GroupLayout.PREFERRED_SIZE) .addContainerGap(137, Short.MAX_VALUE)) ); layout.setVerticalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addGap(63, 63, 63) .addComponent(jComboBox1, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE) .addContainerGap(217, Short.MAX_VALUE)) ); pack(); }// </editor-fold> /** * @param args the command line arguments */ public static void main(String args[]) { java.awt.EventQueue.invokeLater(new Runnable() { public void run() { new NewJFrame().setVisible(true); } }); } // Variables declaration - do not modify private javax.swing.JComboBox jComboBox1; // End of variables declaration }
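
    A sketch of one workaround commonly suggested for duplicate entries (this is not the poster's code, and the class name is invented): wrap each string in a distinct object whose toString() supplies the label, so the model can tell the three identical items apart when the arrow keys move the selection.

        import javax.swing.JComboBox;
        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;

        public class DistinctComboItems {
            // Each wrapper instance is unique even when the labels repeat, so index
            // lookups in the combo box model no longer collapse onto the first "Intel".
            static final class Item {
                private final String label;
                Item(String label) { this.label = label; }
                @Override public String toString() { return label; }
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JComboBox<Item> combo = new JComboBox<>();
                    combo.addItem(new Item("Intel"));
                    combo.addItem(new Item("Intel"));
                    combo.addItem(new Item("Intel"));
                    JFrame frame = new JFrame("Duplicate items");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.add(combo);
                    frame.pack();
                    frame.setVisible(true);
                });
            }
        }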

    Read the article

  • Weird behavior of matching array keys after json_decode()

    - by arnorhs
    I've got some very weird behavior in my PHP code. I don't know if this is actually a good SO question, since it almost looks like a bug in PHP. I had this problem in a project of mine and isolated the problem: // json object that will be converted into an array $json = '{"5":"88"}'; $jsonvar = (array) json_decode($json); // notice: Casting to an array // Displaying the array: var_dump($jsonvar); // Testing if the key is there var_dump(isset($jsonvar["5"])); var_dump(isset($jsonvar[5])); That code outputs the following: array(1) { ["5"]=> string(2) "88" } bool(false) bool(false) The big problem: Both of those tests should produce bool(true) - if you create the same array using regular php arrays, this is what you'll see: // Let's create a similar PHP array in a regular manner: $phparr = array("5" => "88"); // Displaying the array: var_dump($phparr); // Testing if the key is there var_dump(isset($phparr["5"])); var_dump(isset($phparr[5])); The output of that: array(1) { [5]=> string(2) "88" } bool(true) bool(true) So this doesn't really make sense. I've tested this on two different installations of PHP/apache. You can copy-paste the code to a php file yourself to test it. It must have something to do with the casting from an object to an array.

    Read the article

  • Weird compile-time behavior when trying to use primitive type in generics

    - by polygenelubricants
    import java.lang.reflect.Array; public class PrimitiveArrayGeneric { static <T> T[] genericArrayNewInstance(Class<T> componentType) { return (T[]) Array.newInstance(componentType, 0); } public static void main(String args[]) { int[] intArray; Integer[] integerArray; intArray = (int[]) Array.newInstance(int.class, 0); // Okay! integerArray = genericArrayNewInstance(Integer.class); // Okay! intArray = genericArrayNewInstance(int.class); // Compile time error: // cannot convert from Integer[] to int[] integerArray = genericArrayNewInstance(int.class); // Run time error: // ClassCastException: [I cannot be cast to [Ljava.lang.Object; } } I'm trying to fully understand how generics works in Java. Things get a bit weird for me in the 3rd assignment in the above snippet: the compiler is complaining that Integer[] cannot be converted to int[]. The statement is 100% true, of course, but I'm wondering WHY the compiler is making this complaint. If you comment that line, and follow the compiler's "suggestion" as in the 4th assignment, the compiler is actually satisfied!!! NOW the code compiles just fine! Which is crazy, of course, since like the run time behavior suggests, int[] cannot be converted to Object[] (which is what T[] is type-erased into at run time). So my question is: why is the compiler "suggesting" that I assign to Integer[] instead for the 3rd assignment? How does the compiler reason to arrive to that (erroneous!) conclusion?
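
    Part of what the question runs into is that the static type of int.class is Class<Integer>, so T is inferred as Integer even though Array.newInstance(int.class, 0) builds an int[] at run time. A small sketch (not from the original post) of a version that works for both primitive and reference component types by giving up the T[] return type:

        import java.lang.reflect.Array;

        public class PrimitiveSafeArrays {
            // Returning Object sidesteps the bad cast: the caller casts to the
            // concrete array type (int[], Integer[], ...) that it asked for.
            static Object arrayNewInstance(Class<?> componentType, int length) {
                return Array.newInstance(componentType, length);
            }

            public static void main(String[] args) {
                int[] ints = (int[]) arrayNewInstance(int.class, 3);
                Integer[] boxed = (Integer[]) arrayNewInstance(Integer.class, 3);
                System.out.println(ints.length + " " + boxed.length); // 3 3
            }
        }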

    Read the article

  • Strange behavior while returning csv file from spring controller

    - by Fanooos
    I am working on a Spring application which has an action method that returns a CSV file. This action works fine, but in some cases it throws a predefined exception (MyAppException). I have another method that is annotated @ExceptionHandler(MyAppException.class). In the exception handler method I return another CSV file, but with different contents. The code that returns the CSV file is almost the same in the two methods. List<String[]> list= new ArrayList<String[]>(); list.add(new String[]{ integrationRequestErrorLog.getErrorMessage(), Long.toString(integrationRequestErrorLog.getId()), Integer.toString(integrationRequestErrorLog.getErrorCode()) }); CSVWriter writer = new CSVWriter(response.getWriter(), ','); writer.writeAll(list); writer.close(); The difference between the two methods is the list of contents. In the first method the file is returned normally, while in the exception handler method I get strange behavior. The exception handler method works fine with the Opera browser but gives me a 404 with Firefox. Opera also reports a 404, yet it still downloads the file, while Firefox does not. I really do not understand what the difference is here.

    Read the article

  • Strange behavior with large Object Types

    - by Peter Lang
    I recognized that calling a method on an Oracle Object Type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the Object Type and calls the empty dummy-procedure in the loop. Calls are taking longer when more rows are in the collection. When I just remove the call to dummy, performance is much better (the collection still contains the same number of records). Timings for three runs: calling dummy 11, 81, 158; not calling dummy 0, 0, 0. Code to reproduce: Create Type t_tab Is Table Of VARCHAR2(10000); Create Type test_type As Object( tab t_tab, Member Procedure dummy ); Create Type Body test_type As Member Procedure dummy As Begin Null; --# Do nothing End dummy; End; Declare v_test_type test_type := New test_type( New t_tab() ); Procedure run_test As start_time NUMBER := dbms_utility.get_time; Begin For i In 1 .. 200 Loop v_test_Type.tab.Extend; v_test_Type.tab(v_test_Type.tab.Last) := Lpad(' ', 10000); v_test_Type.dummy(); --# Removed this line in second test End Loop; dbms_output.put_line( dbms_utility.get_time - start_time ); End run_test; Begin run_test; run_test; run_test; End; I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?

    Read the article

  • Tree-like queues

    - by Rehno Lindeque
    I'm implementing a interpreter-like project for which I need a strange little scheduling queue. Since I'd like to try and avoid wheel-reinvention I was hoping someone could give me references to a similar structure or existing work. I know I can simply instantiate multiple queues as I go along, I'm just looking for some perspective by other people who might have better ideas than me ;) I envision that it might work something like this: The structure is a tree with a single root. You get a kind of "insert_iterator" to the root and then push elements onto it (e.g. a and b in the example below). However, at any point you can also split the iterator into multiple iterators, effectively creating branches. The branches cannot merge into a single queue again, but you can start popping elements from the front of the queue (again, using a kind of "visitor_iterator") until empty branches can be discarded (at your discretion). x -> y -> z a -> b -> { g -> h -> i -> j } f -> b Any ideas? Seems like a relatively simple structure to implement myself using a pool of circular buffers but I'm following the "think first, code later" strategy :) Thanks
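
    A rough sketch of the shape being described, assuming each branch owns its own FIFO segment plus the branches split off from it (all names here are invented; this is not an existing library):

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.List;

        public class BranchQueue<T> {
            private final ArrayDeque<T> items = new ArrayDeque<>();
            private final List<BranchQueue<T>> children = new ArrayList<>();

            public void push(T value) { items.addLast(value); }  // "insert iterator" role
            public T pop() { return items.pollFirst(); }         // "visitor iterator" role (null when empty)
            public boolean isEmpty() { return items.isEmpty() && children.isEmpty(); }

            // Splitting the insert point: later pushes go into a new branch that
            // can never be merged back, matching the description above.
            public BranchQueue<T> split() {
                BranchQueue<T> child = new BranchQueue<>();
                children.add(child);
                return child;
            }

            public List<BranchQueue<T>> branches() { return children; }

            public static void main(String[] args) {
                BranchQueue<String> root = new BranchQueue<>();
                root.push("a");
                root.push("b");
                BranchQueue<String> upper = root.split();  // a -> b -> { g -> h }
                BranchQueue<String> lower = root.split();  //           { f }
                upper.push("g");
                upper.push("h");
                lower.push("f");
                System.out.println(root.pop() + " " + root.pop() + " " + upper.pop()); // a b g
            }
        }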

    Read the article

  • std::stringstream GCC Abnormal Behavior

    - by FlorianZ
    I have a very interesting problem with compiling a short little program on a Mac (GCC 4.2). The function below would only stream chars or strings into the stringstream, but not anything else (int, double, float, etc.) In fact, the fail flag is set if I attempt to convert for example an int into a string. However, removing the preprocessor flag: _GLIBCXX_DEBUG=1, which is set by default in XCode for the debug mode, will yield the desired results / correct behavior. Here is the simple function I am talking about. value is template variable of type T. Tested for int, double, float (not working), char and strings (working). template < typename T > const std::string Attribute<T>::getValueAsString() const { std::ostringstream stringValue; stringValue << value; return stringValue.str(); } Any ideas what I am doing wrong, why this doesn't work, or what the preprocessor flag does to make this not work anymore? Thanks!

    Read the article

  • breadth-first traversal of directory tree is not lazy

    - by user855443
    I am trying to traverse the directory tree. A naive depth-first traversal seems not to produce the data in a lazy fashion and runs out of memory. I next tried a breadth-first approach, which shows the same problem - it uses all the memory available and then crashes. The code I have is: getFilePathBreadtFirst :: FilePath -> IO [FilePath] getFilePathBreadtFirst fp = do fileinfo <- getInfo fp res :: [FilePath] <- if isReadableDirectory fileinfo then do children <- getChildren fp lower <- mapM getFilePathBreadtFirst children return (children ++ concat lower) else return [fp] -- should only return the files? return res getChildren :: FilePath -> IO [FilePath] getChildren path = do names <- getUsefulContents path let namesfull = map (path </>) names return namesfull testBF fn = do -- crashes for /home/frank, does not go to swap fps <- getFilePathBreadtFirst fn putStrLn $ unlines fps I think all the code is either linear or tail recursive, and I would expect that the listing of filenames starts immediately, but in fact it does not. Where is the error in my code and my thinking? Where have I lost lazy evaluation?

    Read the article

  • Strange thread behavior in Perl

    - by Zaid
    Tom Christiansen's example code (à la perlthrtut) is a recursive, threaded implementation of finding and printing all prime numbers between 3 and 1000. Below is a mildly adapted version of the script #!/usr/bin/perl # adapted from prime-pthread, courtesy of Tom Christiansen use strict; use warnings; use threads; use Thread::Queue; sub check_prime { my ($upstream,$cur_prime) = @_; my $child; my $downstream = Thread::Queue->new; while (my $num = $upstream->dequeue) { next unless ($num % $cur_prime); if ($child) { $downstream->enqueue($num); } else { $child = threads->create(\&check_prime, $downstream, $num); if ($child) { print "This is thread ",$child->tid,". Found prime: $num\n"; } else { warn "Sorry. Ran out of threads.\n"; last; } } } if ($child) { $downstream->enqueue(undef); $child->join; } } my $stream = Thread::Queue->new(3..shift,undef); check_prime($stream,2); When run on my machine (under ActiveState & Win32), the code was capable of spawning only 118 threads (last prime number found: 653) before terminating with a 'Sorry. Ran out of threads' warning. In trying to figure out why I was limited to the number of threads I could create, I replaced the use threads; line with use threads (stack_size => 1);. The resultant code happily dealt with churning out 2000+ threads. Can anyone explain this behavior?

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory mapped files to achieve reading of such a huge data. The files are mapped into the process memory space only when needed and with this the process memory is well under control. But what is observed is, with memory mapping, the system cache keeps on increasing until it occupies the available physical memory. This leads to the slowing down of the entire system. My question is how to prevent system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING ), but with this, the read operations take considerable amount of time and slows down the application performance. How to achieve the scalability without sacrificing much on performance. What are the common techniques used in such cases? I dont have a good understanding of the WinXP OS caching behavior. Any good links explaining the same would also be helpful.

    Read the article

  • selenium and firefox timeout behavior

    - by Md. Reazul Karim
    I am facing this strange timeout behavior. I tried to load some page using: WebDriver drv = new FirefoxDriver(); drv.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); drv.get(url); String email = ""; try { WebElement aElem = Util.safeFindElementByXpath(drv, "//p[1]/a[1]"); if (aElem != null) { email = aElem.getAttribute("href"); } } catch (Exception e) { } drv.quit(); return email; The safeFindElementByXpath is: static WebElement safeFindElementByXpath(WebDriver driver, String xpath) { try { return driver.findElement(By.xpath(xpath)); } catch (NoSuchElementException e) { return null; } } Now I set the firefox network.http.connection-timeout and network.http.connection-retry-timeout both to 10. Now if I restart the firefox I can see the new values. But if I try to run the above code - the firefox window opens and it waits for a particular website for a long time. Hence I open another tab in the same firefox window and check the values of timeout and find them to be 90 and 250. If I change the values on that same window, the selenium code immediately finishes requesting for the page (ie it goes after the exception block). So the thing is that the same code will work on many sites and I don't know beforehand which site is invalid/down and I was thinking of putting this code in a separate thread and kill the thread after some time from the calling thread. But that is not possible I guess because the child thread has no way to poll anything as it is stuck at page request and can't go to the polling code afterwards. Now the thing is that I want any kind of hack/workaround to accomplish this: try getting the emails for good sites and if there are bad sites just give up after a certain period (press the panic button sorta thing). Thanks
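
    If the Selenium version in use supports it, a page-load timeout set through the WebDriver API is usually a more direct lever than Firefox's network.http preferences. A sketch (the ten-second figure and the URL are placeholders):

        import java.util.concurrent.TimeUnit;

        import org.openqa.selenium.TimeoutException;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.firefox.FirefoxDriver;

        public class TimedFetch {
            public static void main(String[] args) {
                WebDriver drv = new FirefoxDriver();
                try {
                    // Give up on dead or very slow sites instead of blocking inside drv.get().
                    drv.manage().timeouts().pageLoadTimeout(10, TimeUnit.SECONDS);
                    drv.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
                    drv.get("http://example.com/");
                    // ... locate the element and read the href as in the code above ...
                } catch (TimeoutException e) {
                    // Treat the site as bad and move on to the next one.
                } finally {
                    drv.quit();
                }
            }
        }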

    Read the article

  • Optimizing C++ Tree Generation

    - by cam
    Hi, I'm generating a Tic-Tac-Toe game tree (9 seconds after the first move), and I'm told it should take only a few milliseconds. So I'm trying to optimize it. I ran it through CodeAnalyst and these are the top 5 calls being made (I used bitsets to represent the Tic-Tac-Toe board): std::_Iterator_base::_Orphan_me std::bitset<9>::test std::_Iterator_base::_Adopt std::bitset<9>::reference::operator bool std::_Iterator_base::~_Iterator_base void BuildTreeToDepth(Node &nNode, const int& nextPlayer, int depth) { if (depth > 0) { //Calculate gameboard states int evalBoard = nNode.m_board.CalculateBoardState(); bool isFinished = nNode.m_board.isFinished(); if (isFinished || (nNode.m_board.isWinner() > 0)) { nNode.m_winCount = evalBoard; } else { Ticboard tBoard = nNode.m_board; do { int validMove = tBoard.FirstValidMove(); if (validMove != -1) { Node f; Ticboard tempBoard = nNode.m_board; tempBoard.Move(validMove, nextPlayer); tBoard.Move(validMove, nextPlayer); f.m_board = tempBoard; f.m_winCount = 0; f.m_Move = validMove; int currPlay = (nextPlayer == 1 ? 2 : 1); BuildTreeToDepth(f,currPlay, depth - 1); nNode.m_winCount += f.m_board.CalculateBoardState(); nNode.m_branches.push_back(f); } else { break; } }while(true); } } } Where should I be looking to optimize it? How should I optimize these 5 calls (I don't recognize them)?

    Read the article

  • WPF: Sort of inconsistency in the visual appearance of WPF controls derived from the Selector class

    - by msfanboy
    Hello, focused items == selected items but selected items != focused items. Have you ever wondered about that? Do you specially style the backcolor of a focused/selected item ? I ask this question because a user has an enabled button because some customer items are selected. The user is wondering now "why the heck can I delete this customers (button is enabled...) when I have just clicked on the orders control selecting some orders ready to delete (button is enabled too). The thing is the selected customer items are nearly looking grayed out in the default style... Well its sort of inconsistence to the user NOT to the programmer because WE know the behavior. Do you cope with stuff like that?

    Read the article

  • Best practices to build a highly configurable software product.

    - by Kabeer
    Hello. I am working on a software product that can substantially change behavior based on the configuration & meta-data supplied. I would like to know best practices to architect / build a highly configurable software product. Considering that there are substantial number of configuration parameters, I'd like to look at something that will not affect the performance before I look at dependency injection. My platform is .Net ... I seek recommendations on architecture / design and implementations fronts.
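
    One pattern that keeps heavy configurability out of hot paths, sketched here generically in Java rather than .NET (the strategy names are invented): resolve each configured behavior to a concrete object once at startup, so per-call work never re-reads configuration.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Properties;
        import java.util.function.UnaryOperator;

        public class ConfiguredPipeline {
            public static void main(String[] args) {
                // Registry of available behaviors; a real system might discover these instead.
                Map<String, UnaryOperator<String>> strategies = new HashMap<>();
                strategies.put("uppercase", String::toUpperCase);
                strategies.put("trim", String::trim);

                Properties config = new Properties();
                config.setProperty("normalizer", "uppercase"); // would come from the supplied metadata

                // Bind once at startup; the hot path just calls the resolved strategy.
                UnaryOperator<String> normalizer =
                        strategies.getOrDefault(config.getProperty("normalizer"), UnaryOperator.identity());
                System.out.println(normalizer.apply("  behavior tree  "));
            }
        }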

    Read the article

  • How do languages handle side effects of compound operators?

    - by Kos
    Hello, Assume such situation: int a = (--t)*(t-2); int b = (t/=a)+t; In C and C++ this is undefined behaviour, as described here: Undefined Behavior and Sequence Points However, how does this situation look in: JavaScript, Java, PHP... C# well, any other language which has compound operators? I'm bugfixing a Javascript - C++ port right now in which this got unnoticed in many places. I'd like to know how other languages generally handle this... Leaving the order undefined is somehow specific to C and C++, isn't it?
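
    For at least Java the answer is pinned down by the language spec: operands are evaluated strictly left to right (JLS 15.7), so expressions like these have a single defined result rather than undefined behavior. A small worked sketch:

        public class EvaluationOrder {
            public static void main(String[] args) {
                int t = 5;
                // Left operand first: --t stores 4 and yields 4, then (t - 2) reads the new t and yields 2.
                int a = (--t) * (t - 2);   // a == 8, t == 4
                // (t /= a) stores 4 / 8 == 0 and yields 0, then + t reads the updated t.
                int b = (t /= a) + t;      // b == 0
                System.out.println(a + " " + b); // prints: 8 0
            }
        }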

    Read the article

  • Scroll returns to default after display:none in Chrome/IE

    - by Sam
    Here's the example: http://jsfiddle.net/sammy/RubNy/ Scroll down in the div container. Then click anywhere in the window to hide the element. Then click once more to show the element. You'll notice in Chrome/IE that the scroll is reset, but in Firefox, the scroll remains how you left it. Which is the standards behavior, Chrome/IE or Firefox? Should I report this to the Chrome issue tracker? Thanks in advance for any help on this, and happy new year, and thanks again, and cheers, and stuff. =D

    Read the article

  • Is it legal for a C++ reference to be NULL?

    - by BCS
    A while back I ran into a bug that looked something like this: void fn(int &i) { printf(&i == NULL ? "NULL\n" : "!NULL\n"); } int main() { int i; int *ip = NULL; fn(i); // prints !NULL fn(*ip); // prints NULL return 0; } More recently, I ran into this comment about C++ references: [Reference arguments make] it clear, unlike with pointers, that NULL is not a possible value. But, as shown above, NULL is a possible value. So where is the error? In the language spec? (Unlikely.) Is the compiler in error for allowing that? Is that coding guide in error (or a little ambiguous)? Or am I just wandering into the minefield known as undefined behavior?

    Read the article

  • Use of class definitions inside a method in Java

    - by Bragaadeesh
    Hi public class TestClass { public static void main(String[] args) { TestClass t = new TestClass(); } private static void testMethod(){ class TestMethodClass{ int a; int b; int c; } TestMethodClass testMethodClass = new TestMethodClass(); } } I found out that the above piece of code is perfectly legal in Java. I have the following questions. What is the use of ever having a class definition inside a method? Will a class file be generated for TestMethodClass Its hard for me to imagine this concept in an Object Oriented manner. Having a class definition inside a behavior. Probably can someone tell me with equivalent real world examples. Thanks
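
    On the second question, the compiler does emit a separate class file for a method-local class (typically named along the lines of TestClass$1TestMethodClass.class). A small sketch of the usual motivation for one, a throwaway helper that has no meaning outside a single computation:

        public class LocalClassDemo {
            static int sumOfSquares(int[] values) {
                // A helper that only this method needs; keeping it local keeps the
                // enclosing class's namespace clean.
                class Accumulator {
                    int total;
                    void add(int v) { total += v * v; }
                }
                Accumulator acc = new Accumulator();
                for (int v : values) {
                    acc.add(v);
                }
                return acc.total;
            }

            public static void main(String[] args) {
                System.out.println(sumOfSquares(new int[] {1, 2, 3})); // 14
            }
        }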

    Read the article

  • OpenGL FrameBuffer Objects weird behavior

    - by Ben Jones
    My algorithm is this: Render the scene to a FBO with shadow mapping from multiple locations Render the scene to the screen with shadow mapping ...black magic that I still have to imlement... Combine the samples from step 1 with the image from step 2 I'm trying to debug steps 1 and 2 and am coming across STRANGE behavior. My algorithm for each shadow mapped pass is: render the scene to a FBO connected to a depth array texture from the POV of each light render the scene from the viewpoint and use vertex/frag shaders to compare the depths When I run my algorithm this way: render from point to FBO render from point to screen glutSwapBuffers() The normal vectors in the screen pass appear to be incorrect (inverted possibly). I'm pretty sure that's the issue because my diffuse lighting calculation is incorrect, but the material colors are correct, and the shadows appear in the correct places. So, it seems like the only thing that could be the culprit is the normals. However if I do render from point to FBO render from point to Screen glutSwapBuffers() //wrong here render from point to Screen glutSwapBuffers() the second pass is correct. I assume there's a problem with my framebuffer calls. Can anyone see what the problem is from the log below? Its from a bugle trace grepped for 'buffer' with a few edits to make it a little more clear. Thanks! [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfeb90 - { 1 }) [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfebac - { 2 }) [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1) [INFO] trace.call: glDrawBuffer(GL_NONE) [INFO] trace.call: glReadBuffer(GL_NONE) [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0) //start render to FBO [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2) [INFO] trace.call: glReadBuffer(GL_NONE) [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 2, 0) [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 3, 0) [INFO] trace.call: glDrawBuffer(GL_COLOR_ATTACHMENT0) //bind to the FBO attached to a depth tex array for shadows [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1) [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0) [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT) //draw geometry //bind to the FBO I want the shadow mapped image rendered to [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2) [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) //draw geometry //draw to screen pass //again shadow mapping FBO [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1) [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0) [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT) //draw geometry //bind to the screen [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0) [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) //finished, swap buffers [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002) //INCORRECT OUTPUT //second try at render to screen: [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1) [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0) [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT) //draw geometry [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0) [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) draw geometry [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002) //correct output

    Read the article

  • UIScrollView strange zoom behavior when content is a UIView subclass

    - by sigsegv
    Hi, I'm experiencing the following: I created a UIView subclass with a CATiledLayer as backing layer by overriding the layerClass method. The layer properties (delegate, tileSize, etc) are set in the initWithFrame: method of the subclass. +(Class)layerClass { return [CATiledLayer class]; } -(id)initWithFrame:(CGRect)frame { if(self = [super initWithFrame:frame]) { renderer = [[MFPDFRenderer alloc]init]; tiledLayer = (CATiledLayer *)[self layer]; [tiledLayer setFrame:frame]; [tiledLayer setLevelsOfDetail:2]; [tiledLayer setLevelsOfDetailBias:3]; [tiledLayer setTileSize:CGSizeMake(512, 512)]; [tiledLayer setDelegate:renderer]; } return self; } Then I add an instance of said class as the content of an UIScrollView and set UIScrollView properties and implement the required delegate's methods. Everything works fine but when zooming the scroll view keep repositioning itself on its center. It's hardly noticeable when zooming in the center of the content, but unbearable otherwise. The same scroll view works fine when I use as (zoomable) content any other view such as an UIImageView or even a normal UIView with a CATiledLayer with the same properties and delegate of the subclass implementation as sublayer. When I check layer bounds and frame in the drawLayer:inContext: method of the delegate I get the following result as the zoom increase UIView with CATiledLayer as sublayer: 2010-04-03 21:05:33.499 Renderer[89293:4903] Layer: (0.000, 0.000) 320.000 x 460.000 2010-04-03 21:05:33.500 Renderer[89293:4903] Bounds: (0.000, 0.000) 320.000 x 460.000 2010-04-03 21:05:33.529 Renderer[89293:4903] Layer: (0.000, 0.000) 320.000 x 460.000 2010-04-03 21:05:33.534 Renderer[89293:4903] Bounds: (0.000, 0.000) 320.000 x 460.000 Custom subclass: 2010-04-03 21:04:15.969 Renderer[88957:4903] Layer: (0.000, 0.000) 657.910 x 945.746 2010-04-03 21:04:15.970 Renderer[88957:4903] Bounds: (0.000, 0.000) 320.000 x 460.000 2010-04-03 21:04:17.428 Renderer[88957:4903] Layer: (-0.000, 0.000) 766.964 x 1102.510 2010-04-03 21:04:17.429 Renderer[88957:4903] Bounds: (0.000, 0.000) 320.000 x 460.000 [...] 2010-04-03 21:19:10.388 Renderer[92573:4903] Layer: (-0.000, 0.000) 905.680 x 1301.916 2010-04-03 21:19:10.388 Renderer[92573:4903] Bounds: (0.000, 0.000) 320.000 x 460.000 I suppose that's the culprit or at least another symptom. I can add that I get the same erratic behavior if my subclass is built over a standard CALayer with the same renderer. Any suggestion will be appreciated!

    Read the article

  • B-trees that use redistribution on insertion

    - by Phenom
    If I insert the following keys into a B-tree of order 4 (meaning 4 pointers and 3 elements in each node), I get the following B-tree:

          G
         / \
        A   IY

    Would it look any different if redistribution on insertion were used? How does redistribution on insertion work?

    Read the article
