Search Results

Search found 2221 results on 89 pages for 'equal'.


  • ImageMagick: convert png fails via PHP but works via bash shell

    - by wedix
    I've got a very weird bug for which I've yet to find a solution. UPDATE: see solution below. What I am trying to do is convert a full-size picture into a 160x120 thumbnail. It works great with jpg and jpeg files of any size, but not with png.

    ImageMagick command:

        /opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png'

    PHP function (shortened):

        $cmd = "/opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png'";
        exec($cmd, $output, $retval);
        $errors += $retval;
        if ($errors > 0) {
            die(print_r($output));
        }

    When this function runs, $retval equals 1, which means the convert command failed (the thumbnail isn't created). This is where it gets interesting: if I run the exact same command in my shell, it works.

        wedbook:~ wedix$ /opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png'
        wedbook:~ wedix$

    I've tried using different PHP functions such as system and passthru, but it didn't work. I thought maybe someone here knew the solution. I'm using MAMP 1.7.2, Apache/2.0.59, PHP/5.2.6. Thanks!

    UPDATE: I updated the following dependencies: libpng from 1.2.35 to 1.2.37, libiconv from 1.12_2 to 1.13_0, ImageMagick from 6.5.2-4_1 to 6.5.2-9_0. However, it did not fix my problem.

    2nd UPDATE: I finally found something that might help. When the function runs, this is what gets printed in the Apache logs:

        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /opt/local/bin/convert
          Reason: Incompatible library version: convert requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0

    3rd UPDATE: libiconv.2.dylib is version 8.0.0...

        bash-3.2$ otool -L /opt/local/lib/libiconv.2.dylib
        /opt/local/lib/libiconv.2.dylib:
            /opt/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0)
            /usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.4)

    4th UPDATE: The problem was related to MAMP; see solution below.

  • how to find maximal frequent itemsets from a large transactional data file

    - by ANIL MANE
    Hi, I have an input file that contains a large number of transactions, like:

        Transaction ID    Items
        T1                Bread, milk, coffee, juice
        T2                Juice, milk, coffee
        T3                Bread, juice
        T4                Coffee, milk
        T5                Bread, milk
        T6                Coffee, bread
        T7                Coffee, bread, juice
        T8                Bread, milk, juice
        T9                Milk, bread, coffee
        T10               Bread
        T11               Milk
        T12               Milk, coffee, bread, juice

    I want the occurrence count of every unique item, like:

        Item Name    Count
        Bread        9
        Milk         8
        Coffee       7
        Juice        6

    and from that I want to build an FP-tree. By traversing this tree I then want the maximal frequent itemsets, as follows.

    The basic idea of the method is to dispose of the nodes in each "layer" from bottom to top. The concept of "layer" here is different from the common concept of a layer in a tree: the nodes in a "layer" are the nodes corresponding to the same item, linked in a list from the "Head Table". For the nodes in a "layer", the NBN method is used to dispose of the nodes from left to right along the linked list. To use the NBN method, two extra fields are added to each node of the ordered FP-tree: the field tag of node N stores whether N is a maximal frequent itemset, and the field count' stores the support count information of the nodes to its left.

    In the figure, the first node to be disposed of is "juice:2". If min_sup is less than or equal to 2, then "bread, milk, coffee, juice" is a maximal frequent itemset. First output juice:2 and set the tag field of "coffee:3" to false (the tag field of each node is true initially). Next check whether each of the four "juice:1" nodes to the right is a subset of juice:2; if the itemset a "juice:1" node corresponds to is a subset of juice:2, set that node's tag field to false. In the following process, when the tag field of the node being disposed of is false, we can omit the node after the same tagging. If min_sup is more than 2, then check whether each of the four "juice:1" nodes to the right is a subset of juice:2; if so, set the count' field of the node to the sum of the former count' and 2. After all the "juice" nodes are disposed of, begin to dispose of the node "coffee:3".

    Any suggestions or available source code are welcome. Thanks in advance.
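
    If it helps as a starting point, the preprocessing stage - counting the support of every unique item and reordering each transaction by descending support before inserting it into the FP-tree - can be sketched in Python. This covers only the counting/ordering step, not the NBN traversal described above; the data is the sample from the question:

        from collections import Counter

        transactions = [
            ["bread", "milk", "coffee", "juice"], ["juice", "milk", "coffee"],
            ["bread", "juice"], ["coffee", "milk"], ["bread", "milk"],
            ["coffee", "bread"], ["coffee", "bread", "juice"],
            ["bread", "milk", "juice"], ["milk", "bread", "coffee"],
            ["bread"], ["milk"], ["milk", "coffee", "bread", "juice"],
        ]

        # Count the support of every unique item across all transactions.
        support = Counter(item for t in transactions for item in t)
        for item, count in support.most_common():
            print(item, count)                # bread 9, milk 8, coffee 7, juice 6

        # Reorder each transaction by descending support - the canonical item
        # order used when inserting paths into an FP-tree - dropping items
        # below the minimum support threshold.
        min_sup = 2
        ordered = [sorted((i for i in t if support[i] >= min_sup),
                          key=lambda i: -support[i])
                   for t in transactions]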

  • Count number of queries executed by NHibernate in a unit test

    - by Bittercoder
    In some unit/integration tests of the code we wish to check that correct usage of the second-level cache is being employed by our code. Based on the code presented by Ayende here: http://ayende.com/Blog/archive/2006/09/07/MeasuringNHibernatesQueriesPerPage.aspx I wrote a simple class for doing just that:

        public class QueryCounter : IDisposable
        {
            CountToContextItemsAppender _appender;

            public int QueryCount
            {
                get { return _appender.Count; }
            }

            public void Dispose()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                logger.RemoveAppender(_appender);
            }

            public static QueryCounter Start()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                lock (logger)
                {
                    foreach (IAppender existingAppender in logger.Appenders)
                    {
                        if (existingAppender is CountToContextItemsAppender)
                        {
                            var countAppender = (CountToContextItemsAppender) existingAppender;
                            countAppender.Reset();
                            return new QueryCounter { _appender = (CountToContextItemsAppender) existingAppender };
                        }
                    }

                    var newAppender = new CountToContextItemsAppender();
                    logger.AddAppender(newAppender);
                    logger.Level = Level.Debug;
                    logger.Additivity = false;
                    return new QueryCounter { _appender = newAppender };
                }
            }

            public class CountToContextItemsAppender : IAppender
            {
                int _count;

                public int Count
                {
                    get { return _count; }
                }

                public void Close() { }

                public void DoAppend(LoggingEvent loggingEvent)
                {
                    if (string.Empty.Equals(loggingEvent.MessageObject)) return;
                    _count++;
                }

                public string Name { get; set; }

                public void Reset()
                {
                    _count = 0;
                }
            }
        }

    With intended usage:

        using (var counter = QueryCounter.Start())
        {
            // ... do something
            Assert.Equal(1, counter.QueryCount); // check the query count matches our expectations
        }

    But it always returns 0 for QueryCount. No SQL statements are being logged. However, if I make use of NHibernate Profiler and invoke this in my test case:

        NHibernateProfiler.Initialize();

    where NHProf uses a similar approach to capture logging output from NHibernate for analysis via log4net etc., then my QueryCounter starts working. It looks like I'm missing something in my code to get log4net configured correctly for logging NHibernate SQL... does anyone have any pointers on what else I need to do to get SQL logging output from NHibernate?

  • How do I bind a List<object> to a DataGrid in Silverlight?

    - by Ben McCormack
    I'm trying to create a simple Silverlight application that involves parsing a CSV file and displaying the results in a DataGrid. I've configured my application to parse the CSV file and return a List<CsvTransaction> that contains properties with the names Date, Payee, Category, Memo, Inflow, and Outflow.

    The user clicks a button to select a file to parse, at which point I want the DataGrid object to be populated. I'm thinking I want to use data binding, but I can't seem to figure out how to get the data to show up in the grid. My XAML for the DataGrid looks like this:

        <data:DataGrid IsEnabled="False" x:Name="TransactionsPreview">
            <data:DataGrid.Columns>
                <data:DataGridTextColumn Header="Date" Binding="{Binding Date}" />
                <data:DataGridTextColumn Header="Payee" Binding="{Binding Payee}"/>
                <data:DataGridTextColumn Header="Category" Binding="{Binding Category}"/>
                <data:DataGridTextColumn Header="Memo" Binding="{Binding Memo}"/>
                <data:DataGridTextColumn Header="Inflow" Binding="{Binding Inflow}"/>
                <data:DataGridTextColumn Header="Outflow" Binding="{Binding Outflow}"/>
            </data:DataGrid.Columns>
        </data:DataGrid>

    The code-behind for the xaml.cs file looks like this:

        private void OpenCsvFile_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                CsvTransObject csvTO = new CsvTransObject.ParseCSV();

                // This returns a List<CsvTransaction> and passes it
                // to a method which is supposed to set the DataContext
                // for the DataGrid to be equal to the list.
                BindCsvTransactions(csvTO.CsvTransactions);

                TransactionsPreview.IsEnabled = true;
                MessageBox.Show("The CSV file has a valid header and has been loaded successfully.");
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void BindCsvTransactions(List<CsvTransaction> listYct)
        {
            TransactionsPreview.DataContext = listYct;
        }

    My thinking is to bind the CsvTransaction properties to each DataGridTextColumn in the XAML and then set the DataContext for the DataGrid to the List<CsvTransaction> at run-time, but this isn't working. Any ideas about how I might approach this (or do it better)?

  • jQuery: How to find and change multiple tr classes between n & n

    - by Ravex
    Hi everyone. I have some table structure:

        <tr class="row-2">..</tr>
        <tr class="row-3">..</tr>
        <tr class="row-4">..</tr>
        <tr class="row-5">..</tr>
        <tr class="row-6">..</tr>
        <tr class="row-7">..</tr>
        <tr class="row-8">..</tr>
        <tr class="row-9">..</tr>
        <tr class="row-10">..</tr>
        <tr class="row-11">..</tr>
        ...etc

    In this example, the TRs with classes "row-2" and "row-7" are parent product links which expand the child rows.

        <script>
        $(function() {
            $('tr.parent')
                .css("cursor", "pointer")
                .css("color", "red")
                .attr("title", "Click to expand/collapse")
                .click(function() {
                    $(this).siblings('.child-' + this.id).toggle();
                });
            $('tr[@class^=child-]').hide().children('td');
        });
        </script>

    Rows -3...-6 are children of row-2, and rows -8...-11 are children of row-7. How can I find row-2, row-7, etc., then add a second class "parrent" and an ID similar to the class (id="row-2", id="row-7", etc.)? I also need to add to each TR between row-2 and row-7 a class equal to the previous parent row. In the bottom line I need something like this:

        <tr class="row-2 parrent" id="row-2">..</tr>
        <tr class="row-3 child-row2">..</tr>
        <tr class="row-4 child-row2">..</tr>
        <tr class="row-5 child-row2">..</tr>
        <tr class="row-6 child-row2">..</tr>
        <tr class="row-7 parrent" id="row-7">..</tr>
        <tr class="row-8 child-row7">..</tr>
        <tr class="row-9 child-row7">..</tr>
        <tr class="row-10 child-row7">..</tr>
        <tr class="row-11 child-row7">..</tr>
        ...etc

    Thanks for any help.

  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following:

        <class name="ExpertiseArea">
            <id name="id" type="string">
                <generator class="assigned" />
            </id>
            <version name="version" column="VERSION" unsaved-value="null" />
            <property name="name" type="string" unique="true" not-null="true" length="100" />
            ...
        </class>

    And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code:

        public T makePersistent(T entity) {
            transaction = getSession().beginTransaction();
            transaction.begin();
            try {
                getSession().saveOrUpdate(entity);
                transaction.commit();
            } catch (HibernateException e) {
                logger.debug(e.getMessage());
                transaction.rollback();
            }
            return entity;
        }

    Actually, the code above is from a GenericDAO which all my DAOs inherit from. Then I created the following test:

        public void testNameLengthMustBe100orLess() {
            ExpertiseArea ea = new ExpertiseArea(
                    "1234567890" + "1234567890" + "1234567890" + "1234567890" +
                    "1234567890" + "1234567890" + "1234567890" + "1234567890" +
                    "1234567890" + "1234567890");
            assertTrue("Name should be 100 characters long", ea.getName().length() == 100);
            ead.makePersistent(ea);
            List<ExpertiseArea> result = ead.findAll();
            assertEquals("Size must be 1", result.size(), 1);
            ea.setName(ea.getName() + "1234567890");
            ead.makePersistent(ea);
            ExpertiseArea retrieved = ead.findById(ea.getId(), false);
            assertTrue("Both objects should be equal", retrieved.equals(ea));
            assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100));
        }

    The object is persisted OK. Then I set a name longer than 100 characters and try to save the changes, which fails:

        14:12:14,608  INFO StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation
        14:12:14,611  WARN JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001
        14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation
        14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session
        org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF]
        //Stack trace
        14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF]
        14:12:14,616 DEBUG JDBCTransaction:186 - rollback
        14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection

    That's expected behavior. However, when I retrieve the persisted object to check if its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance was indeed updated. What am I doing wrong here?

  • Reliable and fast way to convert a zillion ODT files to PDF?

    - by Marco Mariani
    I need to pre-produce a million or two PDF files from a simple template (a few pages and tables) with embedded fonts. Usually, I would stay low level in a case like this, and compose everything with a library like ReportLab, but I joined the project late. Currently, I have a template.odt and use markers in the content.xml files to fill with data from a DB. I can smoothly create the ODT files; they always look right.

    For the ODT to PDF conversion, I'm using openoffice in server mode (and PyODConverter w/ named pipe), but it's not very reliable: in a batch of documents, there is eventually a point after which all the processed files are converted into garbage (wrong fonts and letters sprawled all over the page). The problem is not predictably reproducible (it does not depend on the data), and happens in OOo 2.3 and 3.2, in Ubuntu, XP, Server 2003 and Windows 7. My Heisenbug detector is ticking. I tried to reduce the size of the batches and restart OOo after each one; still, a small percentage of the documents are messed up. Of course I'll write about this on the OOo mailing lists, but in the meanwhile, I have a delivery and have lost too much time already. Where do I go?

    - Completely avoid the ODT format and go for another template system. Suggestions? Anything that takes a few seconds to run is way too slow. OOo takes around a second, and it sums to 15 days of processing time. I had to write a program for clustering the jobs over several clients.
    - Keep the format but go for another tool/program for the conversion. Which one? There are many apps in the shareware or commercial repositories for Windows, but trying each one is a daunting task. Some are too slow, some cannot be run in batch without buying them first, some cannot work from the command line, etc. Open source tools tend not to reinvent the wheel and often depend on openoffice.
    - Converting to an intermediate .DOC format could help to avoid the OOo bug, but it would double the processing time and complicate a task that is already too hairy.
    - Try to produce the PDFs twice and compare them, discarding the whole batch if there's something wrong. Although the documents look equal, I know of no way to compare the binary content.
    - Restart OOo after processing each document: it would take a lot more time to produce them; it would lower the percentage of the wrong files, but make it very hard to identify them.
    - Go for ReportLab and recreate the pages programmatically. This is the approach I'm going to try in a few minutes.
    - Learn to properly format bulleted lists.

    Thanks a lot.

  • Wondering about a way to conserve memory in C# using List<> with structs

    - by Michael Ryan
    I'm not even sure how I should phrase this question. I'm passing some CustomStruct objects as parameters to a class method, and storing them in a List. What I'm wondering is whether it's possible, and more efficient, to add multiple references to a particular instance of a CustomStruct if an equivalent instance is found. This is a dummy/example struct:

        public struct CustomStruct
        {
            readonly int _x;
            readonly int _y;
            readonly int _w;
            readonly int _h;
            readonly Enum _e;
        }

    Using the below methods, you can pass one, two, or three CustomStruct objects as parameters. In the final method (that takes three parameters), it may be the case that the third and possibly the second will have the same value as the first.

        List<CustomStruct> _list;

        public void AddBackground(CustomStruct normal)
        {
            AddBackground(normal, normal, normal);
        }

        public void AddBackground(CustomStruct normal, CustomStruct hover)
        {
            AddBackground(normal, hover, hover);
        }

        public void AddBackground(CustomStruct normal, CustomStruct hover, CustomStruct active)
        {
            _list = new List<CustomStruct>(3);
            _list.Add(normal);
            _list.Add(hover);
            _list.Add(active);
        }

    As the methods stand now, I believe they will create new instances of CustomStruct objects and then add a reference to each to the List. It is my understanding that if I instead check for equality between normal and hover and (if equal) insert normal again in place of hover, then when the method completes, hover will lose all references and eventually be garbage collected, whereas normal will have two references in the List. The same could be done for active. That would be better, right?

    The CustomStruct is a ValueType, and therefore one instance would remain on the stack, and the three List references would just point to it. The overall List size is determined not by the object type it contains, but by its Capacity. By eliminating the "duplicate" CustomStruct objects, you allow them to be cleaned up.

    When the CustomStruct objects are passed to these methods, new instances are created each time. When the structs are added to the List, is another copy made? For example, if I pass just one CustomStruct, AddBackground(normal) creates a copy of the original variable and then passes it three times to AddBackground(normal, hover, active). In this method, three copies are made of the original copy. When the three local variables are added to the List using Add(), are additional copies created inside Add(), and does that defeat the purpose of any equality checks as previously mentioned? Am I missing anything here?

  • xslt test if a variable value is contained in a node set

    - by Aamir
    I have the following two files:

        <?xml version="1.0" encoding="utf-8" ?>
        <!-- D E F A U L T   H O S P I T A L   P O L I C Y -->
        <xas DefaultPolicy="open" DefaultSubjectsFile="subjects.xss">
            <rule id="R1" access="deny" object="record" subject="roles/*[name()!='Staff']"/>
            <rule id="R2" access="deny" object="diagnosis" subject="roles//Nurse"/>
            <rule id="R3" access="grant" object="record[@id=$user]" subject="roles/member[@id=$user]"/>
        </xas>

    and the other xml file, called subjects.xss, is:

        <?xml version="1.0" encoding="utf-8" ?>
        <subjects>
            <users>
                <member id="dupont" password="4A-4E-E9-17-5D-CE-2C-DD-43-43-1D-F1-3F-5D-94-71">
                    <name>Pierre Dupont</name>
                </member>
                <member id="durand" password="3A-B6-1B-E8-C0-1F-CD-34-DF-C4-5E-BA-02-3C-04-61">
                    <name>Jacqueline Durand</name>
                </member>
            </users>
            <roles>
                <Staff>
                    <Doctor>
                        <member idref="dupont"/>
                    </Doctor>
                    <Nurse>
                        <member idref="durand"/>
                    </Nurse>
                </Staff>
            </roles>
        </subjects>

    I am writing an XSL sheet which will read the subject value for each rule in policy.xas, and if the currently logged in user (accessible as the variable "user" in the stylesheet) is contained in that subject value (say roles//Nurse), then do something. I am not able to test whether the currently logged in user ($user, which is equal to, say, "durand") is contained in roles//Nurse in the subjects file (which is a different xml file). Hope that clarifies my question. Any ideas? Thanks in advance.

  • How can I capture the keystroke that triggers "CellEndEdit" on a DataGridView in C#?

    - by Andy Stampor
    I have a DataGridView that is set to EditOnF2. I do some special processing of data in the CellEndEdit event handler that sets the value of the cell. I still want the functionality of EditOnKeystrokeOrF2 of reverting to the original value when the Esc key is pressed. Unfortunately, in the CellEndEdit event handler, I don't see a way to tell what caused the CellEndEdit event to be fired. I only want to change the value of the cell if the Esc key is not pressed. How can I tell if it was or not?

    Edit: It is worth noting that the KeyDown event does not get fired when the cell is being edited, nor for the final ending keystroke.

    Edit 2: I have tried the KeyPreview suggestion, but the form still does not capture the Escape key being pressed.

    Edit 3: I've been experimenting with trying to get this working. I originally posted some of the following as a separate post, but feel it might be more relevant to include it here. I have a cell in a DataGridView that is now set to EditProgrammatically. To capture the keystroke that starts an edit, I am setting the cell.Value equal to the keystroke. However, this ruins the "Escape" functionality of the cell: when you press Escape, instead of reverting to the original value, it reverts to the keystroke that I programmatically inserted into the cell. I believe that if I could set the "EditedFormattedValue" on a cell, this would be where I want to put my keystroke value; however, this appears to be read-only. How can I accomplish what I am attempting? An example to clarify: if the cell has a value of "54.3" in it, and I press the "9" key, it begins editing the cell and places a "9" there. If I hit Escape, instead of reverting to "54.3" it reverts to "9". What I want is for it to return to its original value of "54.3". So, I am trying to tackle this issue from both the beginning and the end. I think the real problem is that I am overwriting the original value and have no way to determine if I should revert it or not.

    Edit 4: It looks like CellValidating might be worth using, but I am seeing strange behavior when I experiment with it. In a new project, I create the DataGridView, register for the various events, and see that CellValidating is fired before CellEndEdit. However, in my project where I am trying to get this to work, CellEndEdit is firing BEFORE CellValidating. Any ideas on what the difference might be?

  • How do we, as a community, help encourage programming in public schools? (Or state schools for the UK)

    - by NoMoreZealots
    PRIMARY MOTIVATION

    My office gets involved with the "FIRST Robotics" competitions, and one thing that lingers year to year is that the students typically have no preparation for doing even simple programming as part of the public school system. While the science classes provide some basic grasp of mechanical and electrical concepts, by and large computer programming gets no coverage in the curriculum. (This may be different in other areas of the country/world.) What makes it worse is that there is only a short period of time in which to prepare the students and help them design the robot. Talking to some professors from local colleges, it's a problem because you can't assume even the most basic understanding for freshman CS majors. Languages like Python, Lua and BASIC are simple enough for at least high-school-level students, if not younger.

    SCOPE

    So how do you get public schools to support programming, at least to the level of the "Try it in BASIC" examples that used to be at the end of a chapter in my algebra book? At least enough to prepare them for events such as the FIRST Robotics competitions, whose primary objectives are to teach problem solving and teamwork, and possibly to foster an interest in math, science and engineering in general. (Not force-feed it to them, as some people here seem to be implying.)

    Edit: Why teach kids? (Since 2000, CS enrollment in US colleges has decreased by 70% while college enrollment has increased. This is a PROBLEM.)

    - Saying there is no value in teaching someone programming in Jr./High school because they might think "they know programming" is like saying there's no value in teaching high school science and physics, because they might decide they "know physics." That leads to abuse like: "I passed a high school physics class, I'm going to develop a Unified Quantum Gravitational Theory."
    - Better prepared students are better students. It would allow college programs to raise the bar on the entry-level courses, allowing students to be weeded out based on their understanding of more advanced material. Plus, people who did poorly in that topic in high school aren't as likely to say "I think there's money in computers, so I'll do computer science."
    - If people take it in high school and decide THEN that it's not for them, it's better than them wasting their money to PAY a college to figure that out. The result is that people who take the degree are more likely to succeed and be there for the RIGHT reasons (i.e. it's what they REALLY want to do, and that's REALLY the key to being good at anything).
    - Programming is like anything else: the more practice and genuine interest you have, the better you get. If you start them later, they get less practice. The earlier you give them the opportunity to start, the more practice they will get. All other things equal, the more practice, the better the programmer.

  • Hibernate can't load Custom SQL collection

    - by Geln Yang
    Hi, there is a table Item like:

        code, name
        01,   parent1
        02,   parent2
        0101, child11
        0102, child12
        0201, child21
        0202, child22

    I created a Java object and an hbm XML file to map the table. Item.parent is the Item whose code is equal to the first two characters of this Item's code:

        class Item {
            String code;
            String name;
            Item parent;
            List<Item> children;
            // ... setters/getters ...
        }

        <hibernate-mapping>
            <class name="Item" table="Item">
                <id name="code" length="4" type="string">
                    <generator class="assigned" />
                </id>
                <property name="name" column="name" length="50" not-null="true" />
                <many-to-one name="parent" class="Item" not-found="ignore">
                    <formula>
                        <![CDATA[
                        (select i.code, r.name from Item i where (case length(code) when 4 then i.code=SUBSTRING(code,1,2) else false end))
                        ]]>
                    </formula>
                </many-to-one>
                <bag name="children"></bag>
            </class>
        </hibernate-mapping>

    I am trying to use a formula to define the many-to-one relationship, but it doesn't work! Is there something wrong, or is there another method? Thanks! PS: I use a MySQL database.

    Added 2010/05/23: Pascal's answer is right, but the "false" value must be replaced with another expression, like "1=2", because the "false" value would be considered to be a column of the table:

        select i.code from Item i where (case length(code) when 4 then i.code=SUBSTRING(code,1,2) else 1=2 end)

    And I have another question about the children "bag" mapping. There isn't a formula configuration option for "bag", but we can use a "loader" to load a sql-query. I configured the "bag" as follows, but it gets a list whose size is 0. What's wrong with it?

        <class>
            ...
            <bag name="children">
                <key />
                <one-to-many class="Item"></one-to-many>
                <loader query-ref="getChildren"></loader>
            </bag>
        </class>

        <sql-query name="getChildren">
            <load-collection alias="r" role="Item.children" />
            <![CDATA[(select {r.*} from Item r join Item o where o.code=:code and (case length(o.code) when 2 then (length(r.code)=4 and SUBSTRING(r.code,1,2)=o.code) else 1=2 end))]]>
        </sql-query>

  • PowerShell $LastExitCode=0 but $?=False. Redirecting stderr to stdout gives NativeCommandError

    - by Colonel Panic
    Can anyone explain PowerShell's surprising behaviour in the second example below? First, an example of sane behaviour:

        PS C:\> & cmd /c "echo Hello from standard error 1>&2"; echo "`$LastExitCode=$LastExitCode and `$?=$?"
        Hello from standard error
        $LastExitCode=0 and $?=True

    No surprises. I print a message to standard error (using cmd's echo). I inspect the variables $? and $LastExitCode. They equal True and 0 respectively, as expected. However, if I ask PowerShell to redirect standard error to standard output over the first command, I get a NativeCommandError:

        PS C:\> & cmd /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
        cmd.exe : Hello from standard error
        At line:1 char:4
        + cmd <<<<  /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
            + CategoryInfo          : NotSpecified: (Hello from standard error :String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError
        $LastExitCode=0 and $?=False

    My first question: why the NativeCommandError? Secondly, why is $? False when cmd ran successfully and $LastExitCode is 0? PowerShell's docs about_Automatic_Variables don't explicitly define $?. I always supposed it is True if and only if $LastExitCode is 0, but my example contradicts that.

    Here's how I came across this behaviour in the real world (simplified). It really is FUBAR. I was calling one PowerShell script from another. The inner script:

        cmd /c "echo Hello from standard error 1>&2"
        if (! $?) {
            echo "Job failed. Sending email.."
            exit 1
        }
        # do something else

    Running this simply as .\job.ps1, it works fine and no email is sent. However, I was calling it from another PowerShell script, logging to a file: .\job.ps1 2>&1 > log.txt. In this case, an email is sent! Here, the act of observing a phenomenon changes its outcome. This feels like quantum physics rather than scripting!

    [Interestingly: .\job.ps1 2>&1 may or may not blow up, depending on where you run it]

  • Code-Golf: Friendly Number Abbreviator

    - by David Murdoch
    Based on this question: Is there a way to round numbers into a friendly format?

    THE CHALLENGE - UPDATED! (removed hundreds abbreviation from spec)

    The shortest code by character count that will abbreviate an integer (no decimals).

    - Code should include the full program.
    - Relevant range is from 0 - 9,223,372,036,854,775,807 (the upper limit for a signed 64-bit integer).
    - The number of decimal places for abbreviation will be positive. You will not need to calculate the following: 920535 abbreviated to -1 places (which would be something like 0.920535M).
    - Numbers in the tens and hundreds place (0-999) should never be abbreviated (the abbreviation for the number 57 to 1+ decimal places is 5.7dk - it is unnecessary and not friendly).
    - Remember to round half away from zero (23.5 gets rounded to 24). Banker's rounding is verboten.

    Here are the relevant number abbreviations:

        h = hundred (10^2)
        k = thousand (10^3)
        M = million (10^6)
        G = billion (10^9)
        T = trillion (10^12)
        P = quadrillion (10^15)
        E = quintillion (10^18)

    SAMPLE INPUTS/OUTPUTS (inputs can be passed as separate arguments; the first argument is the integer to abbreviate, the second is the number of decimal places):

        12 1                  => 12      // tens and hundreds places are never rounded
        1500 2                => 1.5k
        1500 0                => 2k      // look, ma! I round UP at .5
        0 2                   => 0
        1234 0                => 1k
        34567 2               => 34.57k
        918395 1              => 918.4k
        2134124 2             => 2.13M
        47475782130 2         => 47.48G
        9223372036854775807 3 => 9.223E
        // etc...

    Original answer from the related question (JavaScript, does not follow the spec):

        function abbrNum(number, decPlaces) {
            // 2 decimal places => 100, 3 => 1000, etc
            decPlaces = Math.pow(10, decPlaces);

            // Enumerate number abbreviations
            var abbrev = ["k", "m", "b", "t"];

            // Go through the array backwards, so we do the largest first
            for (var i = abbrev.length - 1; i >= 0; i--) {
                // Convert array index to "1000", "1000000", etc
                var size = Math.pow(10, (i + 1) * 3);

                // If the number is bigger or equal do the abbreviation
                if (size <= number) {
                    // Here, we multiply by decPlaces, round, and then divide by decPlaces.
                    // This gives us nice rounding to a particular decimal place.
                    number = Math.round(number * decPlaces / size) / decPlaces;

                    // Add the letter for the abbreviation
                    number += abbrev[i];

                    // We are done... stop
                    break;
                }
            }

            return number;
        }
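
    For comparison, a non-golfed reference implementation of the updated spec can be sketched in Python. The decimal module's ROUND_HALF_UP gives the required round-half-away-from-zero behavior (plain round() in Python 3 does banker's rounding, which the spec forbids). An illustration, not an entry:

        from decimal import Decimal, ROUND_HALF_UP

        SUFFIXES = [(10**18, "E"), (10**15, "P"), (10**12, "T"),
                    (10**9, "G"), (10**6, "M"), (10**3, "k")]

        def abbreviate(n, places):
            if n < 1000:
                return str(n)                  # 0-999 are never abbreviated
            for size, suffix in SUFFIXES:
                if n >= size:
                    exp = Decimal(1).scaleb(-places)   # places=2 -> Decimal('0.01')
                    q = (Decimal(n) / size).quantize(exp, rounding=ROUND_HALF_UP)
                    s = str(q)
                    if "." in s:               # drop trailing zeros: "1.50" -> "1.5"
                        s = s.rstrip("0").rstrip(".")
                    return s + suffix

        print(abbreviate(1500, 2))                  # 1.5k
        print(abbreviate(1500, 0))                  # 2k (half rounds away from zero)
        print(abbreviate(918395, 1))                # 918.4k
        print(abbreviate(9223372036854775807, 3))   # 9.223E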

  • problem while displaying the texture image on view that works fine on iphone simulator but not on device

    - by yunas
    Hello, I am trying to display an image on the iPhone by converting it into a texture and then displaying it on a UIView. Here is the code to load an image from a UIImage object:

        - (void)loadImage:(UIImage *)image mipmap:(BOOL)mipmap texture:(uint32_t)texture
        {
            int width, height;
            CGImageRef cgImage;
            GLubyte *data;
            CGContextRef cgContext;
            CGColorSpaceRef colorSpace;
            GLenum err;

            if (image == nil) {
                NSLog(@"Failed to load");
                return;
            }

            cgImage = [image CGImage];
            width = CGImageGetWidth(cgImage);
            height = CGImageGetHeight(cgImage);
            colorSpace = CGColorSpaceCreateDeviceRGB();

            // Malloc may be used instead of calloc if your cg image has dimensions
            // equal to the dimensions of the cg bitmap context
            data = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
            cgContext = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                              colorSpace, kCGImageAlphaPremultipliedLast);
            if (cgContext != NULL) {
                // Set the blend mode to copy. We don't care about the previous contents.
                CGContextSetBlendMode(cgContext, kCGBlendModeCopy);
                CGContextDrawImage(cgContext, CGRectMake(0.0f, 0.0f, width, height), cgImage);

                glGenTextures(1, &(_textures[texture]));
                glBindTexture(GL_TEXTURE_2D, _textures[texture]);
                if (mipmap)
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
                else
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
                             GL_UNSIGNED_BYTE, data);
                if (mipmap)
                    glGenerateMipmapOES(GL_TEXTURE_2D);

                err = glGetError();
                if (err != GL_NO_ERROR)
                    NSLog(@"Error uploading texture. glError: 0x%04X", err);

                CGContextRelease(cgContext);
            }

            free(data);
            CGColorSpaceRelease(colorSpace);
        }

    The problem I am currently facing is that this code works perfectly fine and displays the image on the simulator, whereas on the device, as seen in the debugger, an error is displayed:

        Error uploading texture. glError: 0x0501

    Any idea how to tackle this bug? Thanks in advance for your solutions.
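
    For what it's worth, glError 0x0501 is GL_INVALID_VALUE, and a classic simulator-versus-device difference with glTexImage2D is texture size: OpenGL ES 1.x hardware generally requires power-of-two texture dimensions, while the simulator can be more forgiving. A quick sanity check on the image dimensions, sketched in Python (the sizes are placeholders, not taken from the question):

        def is_power_of_two(n: int) -> bool:
            # Exactly one bit set: 1, 2, 4, ..., 512, 1024, ...
            return n > 0 and (n & (n - 1)) == 0

        # OpenGL ES 1.x devices generally reject non-power-of-two textures.
        for width, height in [(512, 512), (320, 480)]:
            ok = is_power_of_two(width) and is_power_of_two(height)
            print(width, height, "OK" if ok else "needs padding to a power of two")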

  • Need help iterating over an array, retrieving two possibilities, no repeats, for Poker AI

    - by elguapo-85
    I can't really think of a good way to word this question, nor a good title, and maybe the answer is so ridiculously simple that I am missing it. I am working on a poker AI, and I want to calculate the number of hands that exist which are better than mine. I understand how to do that, but what I can't figure out is the best way to iterate over a group of cards.

    Say I am at the flop: I know what my two cards are, and there are 3 cards on the board. So there are 47 unknown cards, and I want to iterate over all possible combinations of those 47 cards, assuming that two are passed out. You can't have two cards of the same rank and suit, and if you have previously calculated a pair you don't want to do it over again, because that would waste time, and this will be called many times. If you don't understand what I am asking, please tell me and I will clarify.

    So I can set something up like this: if an element equals one, it means it is not in my hand and not on the board; 4 for each suit, and 13 for each rank.

        setOfCards[4][13]

    If I do a simple set of for loops like this (pseudocode):

        // remove cards I know are in play from setOfCards by setting values to zero
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 13; j++)
                for (int k = 0; k < 4; k++)
                    for (int l = 0; l < 13; l++) {
                        // skip if values equal zero
                        card1 = setOfCards[i][j]
                        card2 = setOfCards[k][l]
                        // now compare card1, card2 and the set of board cards
                    }

    this will actually repeat many values. For example, card1 = AceOfHearts, card2 = KingOfHearts is the same as card1 = KingOfHearts, card2 = AceOfHearts. It will also alter my calculations. How should I go about avoiding this? Also, is there a name for this technique? Thank you.
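
    The technique being asked about is iterating over combinations (unordered selections without repetition) rather than permutations; for two cards out of 47 unseen there are C(47, 2) = 1081 pairs, each generated exactly once. A minimal Python sketch using itertools - the card encoding and the five known cards are placeholders, not from the question:

        from itertools import combinations

        ranks = "23456789TJQKA"
        suits = "shdc"
        deck = [r + s for r in ranks for s in suits]    # 52 cards

        # Two hole cards plus three board cards already known (placeholders).
        known = {"Ah", "Kh", "2s", "7d", "9c"}
        unseen = [c for c in deck if c not in known]    # 47 cards

        # Each unordered pair appears once, so (Ah, Kh) and (Kh, Ah)
        # are never both generated and no pair repeats a card.
        pairs = list(combinations(unseen, 2))
        print(len(pairs))                               # 1081

        for card1, card2 in pairs:
            pass  # compare (card1, card2) plus the board against your hand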

  • Designing a database file format

    - by RoliSoft
    I would like to design my own database engine for educational purposes, for the time being. Designing a binary file format is not hard, nor is it the question - I've done it in the past - but while designing a database file format, I have come across a very important question: how to handle the deletion of an item?

    So far, I've thought of the following options:

    1. Each item has a "deleted" bit which is set to 1 upon deletion. Pro: relatively fast. Con: potentially sensitive data will remain in the file.
    2. 0x00 out the whole item upon deletion. Pro: potentially sensitive data will be removed from the file. Con: relatively slow.
    3. Recreate the whole database. Pro: no empty blocks, which makes the follow-up question void. Con: it's a really good idea to overwrite the whole 4 GB database file because a user corrected a typo. I will sell this method to Twitter ASAP!

    Now let's say you already have a few empty blocks in your database (deleted items). The follow-up question is how to handle the insertion of a new item:

    1. Append the item to the end of the file. Pro: fastest possible. Con: the file will get huge because of all the empty blocks that remain, since deleted items aren't actually deleted.
    2. Search for an empty block exactly the size of the one you're inserting. Pro: may get rid of some blocks. Con: you may end up scanning the whole file at each insert, only to find out it's very unlikely you'll come across a perfectly fitting empty block.
    3. Find the first empty block which is equal to or larger than the item you're inserting. Pro: you probably won't end up scanning the whole file, as you will find an empty block somewhere midway; this will keep the file size relatively low. Con: there will still be lots of leftover 0x00 bytes at the end of items which were inserted into bigger empty blocks than they are.

    Right now, I think the first deletion method and the last insertion method are probably the "best" mix, but they would still have their own small issues. Alternatively, the first insertion method and scheduled full database recreation. (Probably not a good idea when working with really large databases. Also, each small update in that method will clone the whole item to the end of the file, thus accelerating file growth at a potentially insane rate.)

    Unless there is a way of deleting/inserting blocks from/to the middle of the file in a file-system-approved way, what's the best way to do this? More importantly, how do databases currently used in production usually handle this?
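
    As a toy illustration of the mix the question converges on (a "deleted" bit plus first-fit reuse), the holes left by deleted items can be tracked in a free list that inserts consult before appending. A Python sketch; all names are illustrative and no real engine is implied:

        class FreeList:
            """Toy first-fit allocator over (offset, size) holes in a data file."""

            def __init__(self):
                self.holes = []                 # list of (offset, size) pairs

            def free(self, offset, size):
                # Deleting a record just records the hole; on disk the record
                # would get its "deleted" bit set (or be zeroed for privacy).
                self.holes.append((offset, size))

            def allocate(self, size, end_of_file):
                # First-fit: reuse the first hole that is large enough.
                for i, (off, hole_size) in enumerate(self.holes):
                    if hole_size >= size:
                        del self.holes[i]
                        if hole_size > size:
                            # Keep the leftover tail as a smaller hole.
                            self.holes.append((off + size, hole_size - size))
                        return off
                return end_of_file              # nothing fits: append to the file

        fl = FreeList()
        fl.free(100, 64)
        print(fl.allocate(48, end_of_file=4096))    # 100  (reuses the hole)
        print(fl.allocate(64, end_of_file=4096))    # 4096 (appends)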

  • Points in CSS specificity

    - by Sam
    Researching specificity I stumbled upon this blog: http://www.htmldog.com/guides/cssadvanced/specificity/

    It states that specificity is a point-scoring system for CSS. It tells us that elements are worth 1 point, classes are worth 10 points and IDs are worth 100 points. It also goes on to say that these points are totaled and the overall amount is that selector's specificity. For example:

        body = 1 point
        body .wrapper = 11 points
        body .wrapper #container = 111 points

    So, using these points, surely the following CSS and HTML will result in the text being blue:

    CSS:

        #a { color: red; }
        .a .b .c .d .e .f .g .h .i .j .k .l .m .n .o { color: blue; }

    HTML:

        <div class="a">
         <div class="b">
          <div class="c">
           <div class="d">
            <div class="e">
             <div class="f">
              <div class="g">
               <div class="h">
                <div class="i">
                 <div class="j">
                  <div class="k">
                   <div class="l">
                    <div class="m">
                     <div class="n">
                      <div class="o" id="a">
                       This should be blue.
                      </div>
                     </div>
                    </div>
                   </div>
                  </div>
                 </div>
                </div>
               </div>
              </div>
             </div>
            </div>
           </div>
          </div>
         </div>
        </div>

    RESULT: http://jsfiddle.net/hkqCF/

    Why is the text red, when 15 classes would equal 150 points compared to 1 ID which equals 100 points?
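
    The point-scoring metaphor is where that blog misleads: specificity is not an additive base-10 score but a per-category tally - (ids, classes, elements) - compared left to right, so no number of classes can ever outrank a single ID. Python's tuple comparison happens to model the real rule exactly:

        # Specificity as an (ids, classes, elements) triple, compared
        # lexicographically - which is what browsers actually do.
        id_selector = (1, 0, 0)       # #a
        class_chain = (0, 15, 0)      # .a .b .c ... .o  (15 classes)

        # 1 > 0 in the ids slot decides the comparison before the
        # classes slot is even examined, so the ID rule wins: red.
        print(id_selector > class_chain)          # True
        print(max(id_selector, class_chain))      # (1, 0, 0)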

  • .NET unit test runner outputting FaultException.Detail

    - by Adam
    Hello, I am running some unit tests on a WCF service. The service is configured to include exception details in the fault response (with the following in my service configuration file):

        <serviceDebug includeExceptionDetailInFaults="true" />

    If a test causes an unhandled exception on the server, the fault is received by the client with a fully populated server stack trace. I can see this by calling the exception's ToString() method. The problem is that this doesn't seem to be output by any of the test runners that I have tried (xUnit, Gallio, MSTest). They appear to just output the Message and the StackTrace properties of the exception.

    To illustrate what I mean, the following unit test run by MSTest would output three sections: Error Message, Error Stack Trace, and Standard Console Output (which contains the information I would like, e.g. "Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is: ..."):

        try
        {
            service.CallMethodWhichCausesException();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex); // this outputs the information I would like
            throw;
        }

    Having this information will make the initial phase of testing and deployment a lot less painful. I know I can just wrap each unit test in a generic exception handler, write the exception to the console and rethrow (as above) within all my unit tests, but that seems a very long-winded way of achieving this (and would look pretty awful). Does anyone know if there's any way to get this information included for free whenever an unhandled exception occurs? Is there a setting that I am missing? Is my service configuration lacking in proper fault handling? Perhaps I could write some kind of plug-in / adapter for some unit testing framework? Perhaps there's a different unit testing framework I should be using instead!

    My actual set-up is xUnit unit tests executed via Gallio for the development environment, but I do have a separate suite of "smoke tests" written which I would like our engineers to be able to run via the xUnit GUI test runner (or Gallio or whatever) to simplify the final deployment. Thanks. Adam

  • CSS three column layout, liquid center, no left-margin!

    - by moontear
    Hi, I am all in favor of CSS-based layouts, but this one I just can't figure out. With a table it is oh-so-easy:

        <html>
        <head><title>Three Column</title></head>
        <body>
        <p>Test</p>
        <table style="width: 100%; border: 1px solid black; min-height: 300px;">
            <tr>
                <td style="border: 1px solid green;" colspan="3">Header</td>
            </tr>
            <tr>
                <td style="border: 1px solid green; width: 150px;" rowspan="2">Left</td>
                <td style="border: 1px solid yellow;">Content</td>
                <td style="border: 1px solid blue; width: 200px;" rowspan="2">Right</td>
            </tr>
            <tr>
                <td style="border: 1px solid fuchsia;">Additional stuff</td>
            </tr>
            <tr><td style="border: 1px solid green;" colspan="3">Footer</td></tr>
        </table>
        </body>
        </html>

    - Left is fixed width
    - Right is fixed width
    - Content is liquid
    - Additional stuff sits beneath Content

    Now here is the important part: "Left" may not exist. Again, this is easy with the table: delete the column and "Content" expands. Beautiful.

    I have looked through many examples (and "holy grails") of liquid and table-less three-column CSS-based layouts, but I have not found one which does not use some kind of margin-left for the middle column ("Content"). Any margin-left will suck once "Left" is gone, as "Content" will just stay in its place. I'm just about to switch to old-school table-based layout for this problem, so I'm hoping someone has some idea - I don't care about excess markup, wrappers and the like; I would just like to know how to solve this with plain CSS.

    Btw: look at how easy equal-height columns are... Cheers

    PS: No CSS3 please

  • Is the recent trend toward widescreen (16:9) computer monitors a plus or minus for programmers?

    - by DanM
    It's almost gotten to the point where you can't buy a conventional (4:3) monitor anymore. Pretty much everything is widescreen. This is fine for watching movies or TV, but is it good or bad for programming? My initial thoughts on the issue are that widescreens are a net negative for programmers. Here are some of the disadvantages I see:

    Poor space utilization. One disadvantage of widescreens you can't argue with is that they offer poor space utilization for the number of total pixels you get. For example, my ThinkPad, which I bought just before the widescreen craze, has a 15" monitor with a native resolution of 1600 x 1200. The newer 15.4" ThinkPads run at most 1680 x 1050. So (if you do the math) you get fewer pixels in a wider (but not shorter) package. With desktop monitors, you pay a price in terms of desk space used: two 1680 x 1050 monitors will simply take up more of your desk than two 1600 x 1200 monitors (assuming equal dot pitch).

    More scrolling. If you compare a 1680 x 1050 monitor to a 1600 x 1200 monitor, you get 80 extra pixels of width but 150 fewer pixels of height. The height reduction means you lose approximately 11 lines of code. That's less you can see on the screen at one time and more scrolling you have to do. This harms productivity, maybe not dramatically, but insidiously.

    Less room for wide panels. Widescreens also mean you lose space for the wide but short panels common in programming environments. If you use Visual Studio, for example, your code window will be that much shorter when viewing the Find Results, Task List, or Error List (all of which I use frequently). This isn't to say the 80 pixels of extra width you get with widescreen would never be useful, but I tend to keep my lines of code short, so seeing more lines would be more valuable to me than seeing fewer, longer lines.

    What do you think? Do you agree/disagree? Are you now using one or more widescreen monitors for development? What resolution are you running on each? Do you ever miss the height of the traditional 4:3 monitor? Would you complain if your monitors were one inch narrower but two inches taller?

  • Can I store a Queue in ViewState? It only stores the first item I add to the queue

    - by Mausimo
    Hey, as the question states, I am trying to store a Queue in ViewState (to track postbacks and refreshes, to stop a form from resubmitting). Here is just the ViewState code:

        private Queue<string> p_tempQue
        {
            set { ViewState["sTemp"] = value; }
            get { return (Queue<string>)ViewState["sTemp"]; }
        }

        // BasePage constructor
        public BasePage()
        {
            // create a Queue of string
            // sTemp = new Queue();
            this.Load += new EventHandler(this.Page_Load);
            this.Init += new EventHandler(this.Page_Init);
        }

        // In the 'Page_Init' event we create a simple hidden field named 'hdnGuid'
        // which is attached to the page on the first hit itself.
        protected void Page_Init(object sender, EventArgs e)
        {
            // initializing the hidden field: create a hidden field with an ID
            HiddenField hdnGuid = new HiddenField();
            hdnGuid.ID = "hdnGuid";

            // if it is the first time the page is loaded, create a new guid
            // and assign it as the hidden field value
            if (!Page.IsPostBack)
                hdnGuid.Value = Guid.NewGuid().ToString();

            // add the hidden field to the page
            Page.Form.Controls.Add(hdnGuid);
        }

        // In the 'Page_Load' event we check if the hidden field value is the same
        // as the old value. If the value is not the same, that means it's a
        // 'postback'; if the value is the same, it's a 'refresh'. As per the
        // situation we set the 'httpContent.Items["Refresh"]' value.
        protected void Page_Load(object sender, EventArgs e)
        {
            if (p_tempQue != null)
                sTemp = p_tempQue;
            else
                sTemp = new Queue<string>();

            // The hdnGuid will be set the first time the page is loaded; otherwise
            // the hdnGuid will be set after each time the form is submitted, using
            // javascript. Assign the hidden field currently on the page for manipulation.
            HiddenField h1 = (HiddenField)(Page.Form.FindControl("hdnGuid"));

            // create an instance of the GuidClass
            GuidClass currentGuid = new GuidClass();

            // set the GuidClass Guid property to the value of the hidden field
            currentGuid.Guid = h1.Value;

            // Check whether the Queue of strings contains the string which is the
            // current Guid property of the GuidClass. If they are equal, then the
            // page was refreshed.
            if (sTemp.Contains<string>(currentGuid.Guid))
            {
                // adds item as key/value pair to share data between an System.Web.IHttpModule
                // interface and an System.Web.IHttpHandler interface during an HTTP request
                System.Web.HttpContext.Current.Items.Add("IsRefresh", true);
            }
            // if they are not equal, the page was not refreshed
            else
            {
                // if the current Guid property in the GuidClass is not null and not
                // an empty string, add the new Guid to the Queue
                if (!(currentGuid.Guid.Equals(null) || currentGuid.Guid.Equals("")))
                    sTemp.Enqueue(currentGuid.Guid);

                System.Web.HttpContext.Current.Items.Add("IsRefresh", false);
            }

            p_tempQue = sTemp;
        }

  • Interoperability between two AES algorithms

    - by lpfavreau
    Hello, I'm new to cryptography and I'm building some test applications to try and understand the basics of it. I'm not trying to build the algorithms from scratch, but I'm trying to make two different AES-256 implementations talk to each other.

    I've got a database that was populated with this Javascript implementation, stored in Base64. Now, I'm trying to get an Objective-C method to decrypt its content, but I'm a little lost as to where the differences in the implementations are. I'm able to encrypt/decrypt in Javascript and I'm able to encrypt/decrypt in Cocoa, but cannot make a string encrypted in Javascript decrypt in Cocoa or vice-versa. I'm guessing it's related to the initialization vector, nonce, counter mode of operation, or all of these, which quite frankly doesn't speak to me at the moment.

    Here's what I'm using in Objective-C, adapted mainly from this and this:

        @implementation NSString (Crypto)

        - (NSString *)encryptAES256:(NSString *)key
        {
            NSData *input = [self dataUsingEncoding:NSUTF8StringEncoding];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:TRUE];
            return [Base64 encode:output];
        }

        - (NSString *)decryptAES256:(NSString *)key
        {
            NSData *input = [Base64 decode:self];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:FALSE];
            return [[[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding] autorelease];
        }

        + (NSData *)cryptoAES256:(NSData *)input key:(NSString *)key doEncrypt:(BOOL)doEncrypt
        {
            // 'key' should be 32 bytes for AES256, will be null-padded otherwise
            char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));     // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [input length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here.
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);

            size_t numBytesCrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(doEncrypt ? kCCEncrypt : kCCDecrypt,
                                                  kCCAlgorithmAES128,
                                                  kCCOptionECBMode | kCCOptionPKCS7Padding,
                                                  keyPtr,
                                                  kCCKeySizeAES256,
                                                  nil, // initialization vector (optional)
                                                  [input bytes],
                                                  dataLength,  // input
                                                  buffer,
                                                  bufferSize,  // output
                                                  &numBytesCrypted);

            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesCrypted];
            }

            free(buffer); // free the buffer
            return nil;
        }

        @end
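
    Of course, the input is Base64 decoded beforehand. I see that each encryption with the same key and same content in Javascript gives a different encrypted string, which is not the case with the Objective-C implementation, which always gives the same encrypted string. I've read the answers of this post and it makes me believe I'm right about something along the lines of vector initialization, but I'd need your help to pinpoint what's going on exactly. Thank you!

    That last observation is itself the clue: the Objective-C code passes nil as the IV and sets kCCOptionECBMode, and ECB is deterministic (same key and plaintext always give the same ciphertext), while the JavaScript side evidently mixes a random IV/nonce into each encryption (CBC or CTR). The mode mismatch can be reproduced in a few lines of Python with PyCryptodome - used here purely as an illustration; it is not part of either implementation:

        from os import urandom
        from Crypto.Cipher import AES            # PyCryptodome
        from Crypto.Util.Padding import pad

        key = urandom(32)                         # AES-256 key
        plaintext = pad(b"same message", AES.block_size)

        # ECB (what the Objective-C code above uses): no IV, so encrypting
        # the same plaintext under the same key always matches.
        same = (AES.new(key, AES.MODE_ECB).encrypt(plaintext) ==
                AES.new(key, AES.MODE_ECB).encrypt(plaintext))
        print(same)                               # True

        # CBC with a fresh random IV: the ciphertext differs on every run,
        # which is the behavior observed from the JavaScript library.
        c1 = AES.new(key, AES.MODE_CBC, urandom(16)).encrypt(plaintext)
        c2 = AES.new(key, AES.MODE_CBC, urandom(16)).encrypt(plaintext)
        print(c1 == c2)                           # False (different IVs)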

  • Moving a unit precisely along a path in x,y coordinates

    - by Adam Eberbach
    I am playing around with a strategy game where squads move around a map. Each turn a certain amount of movement is allocated to a squad, and if the squad has a destination the points are applied each turn until the destination is reached. Actual distance is used, so if a squad moves one position in the x or y direction it uses one point, but moving diagonally takes ~1.4 points. The squad maintains its actual position as floats, which are then rounded to allow drawing the position on the map.

    The path is described by touching the squad and dragging to the end position, then lifting the pen or finger. (I'm doing this on an iPhone now, but Android/Qt/Windows Mobile would work the same.) As the pointer moves, x, y points are recorded, so that the squad gains a list of intermediate destinations on the way to the final destination. I'm finding that the destinations are not evenly spaced but can be further apart depending on the speed of the pointer movement. Following the path is important because obstacles and terrain matter in this game. I'm not trying to remake Flight Control, but that's a similar mechanic.

    Here's what I've been doing, but it just seems too complicated (pseudocode):

        getDestination() {
            - self.nextDestination = remove_from_array(destinations)
            - self.gradient = delta y to destination / delta x to destination
            - self.angle = atan(self.gradient)
            - self.cosAngle = cos(self.angle)
            - self.sinAngle = sin(self.angle)
        }

        move() {
            - get movement allocation for this turn
            - if self.nextDestination not valid
            - - getNextDestination()
            - while (nextDestination valid) && (movement allocation remains) {
            - - find xStep and yStep using movement allocation and sinAngle/cosAngle
                calculated for current self.nextDestination
            - - if current position + xStep crosses the destination
            - - - find x movement remaining after self.nextDestination reached
            - - - calculate remaining direct path movement allocation (xStep remaining / cosAngle)
            - - - make self.position equal to self.nextDestination
            - - else
            - - - apply xStep and yStep to current position
            - }
            - round squad's float coordinates to integer screen coordinates
            - draw squad image on map
        }

    That's simplified, of course; stuff like sign needs to be tweaked to ensure movement is in the right direction. If trig is the best way to do it, then lookup tables can be used, or maybe it doesn't matter on modern devices like it used to. Suggestions for a better way to do it?

    An update - the iPhone has zero issues with trig and with tracking tens of positions and tracks implemented as described above, and it draws in floats anyway. The Bresenham method is more efficient; trig is more precise. If I were to use integer Bresenham I would want to multiply by ten or so to maintain a little more positional accuracy, to benefit collision/terrain detection.
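
    One simplification worth weighing: all of the gradient/atan/sin/cos bookkeeping can be replaced by stepping along the normalized direction vector toward the next waypoint, which handles signs and diagonals automatically. A Python sketch of the per-turn update (names and numbers are illustrative; the waypoint list is consumed as destinations are reached):

        import math

        def advance(pos, waypoints, budget):
            """Move pos along waypoints, spending up to `budget` distance."""
            x, y = pos
            while waypoints and budget > 0:
                tx, ty = waypoints[0]
                dx, dy = tx - x, ty - y
                dist = math.hypot(dx, dy)      # same metric: a diagonal costs ~1.4
                if dist <= budget:
                    x, y = tx, ty              # reach the waypoint exactly
                    budget -= dist
                    waypoints.pop(0)
                else:
                    # Partial step: scale the unit direction vector by the budget.
                    x += dx / dist * budget
                    y += dy / dist * budget
                    budget = 0.0
            return (x, y)                      # round only when drawing

        print(advance((0.0, 0.0), [(3.0, 4.0), (3.0, 10.0)], 6.0))   # (3.0, 5.0)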

  • Checking to see if a number is evenly divisible by other numbers with recursion in Python

    - by Ernesto
    At the risk of receiving negative votes, I will preface this by saying this is a midterm problem for a programming class. However, I have already submitted the code and passed the question. I changed the name of the function(s) so that someone can't immediately do a search and find the correct code, as that is not my purpose. I am actually trying to figure out which of the two pieces I wrote is actually MORE CORRECT.

    The problem tells us that a certain fast food place sells bite-sized pieces of chicken in packs of 6, 9, and 20. It wants us to create a function that will tell if a given number of bite-sized pieces of chicken can be obtained by buying different packs. For example, 15 can be bought, because 6 + 9 is 15, but 16 cannot be bought, because no combination of the packs will equal 16.

    The code I submitted, and was marked correct on, was:

        def isDivisible(n):
            """
            n is an int
            Returns True if some integer combination of 6, 9 and 20 equals n
            Otherwise returns False.
            """
            a, b, c = 20, 9, 6
            if n == 0:
                return True
            elif n < 0:
                return False
            elif isDivisible(n - a) or isDivisible(n - b) or isDivisible(n - c):
                return True
            else:
                return False

    However, I got to thinking: if the initial number is 0, it will return True. Would an initial number of 0 be considered "buying that amount using 6, 9, and/or 20"? I cannot view the test cases the grader used, so I don't know if the grader checked 0 as a test case and decided that True was an acceptable answer or not. I also can't just enter the new code, because it is a midterm. I decided to create a second piece of code that would handle an initial case of 0, assuming 0 is actually False:

        def isDivisible(n):
            """
            n is an int
            Returns True if some integer combination of 6, 9 and 20 equals n
            Otherwise returns False.
            """
            a, b, c = 20, 9, 6
            if n == 0:
                return False
            else:
                def helperDivisible(n):
                    if n == 0:
                        return True
                    elif n < 0:
                        return False
                    elif helperDivisible(n - a) or helperDivisible(n - b) or helperDivisible(n - c):
                        return True
                    else:
                        return False
                return helperDivisible(n)

    As you can see, my second function had to use a "helper" function in order to work. My overall question, though, is which function do you think would provide the correct answer, if the grader had tested 0 as an initial input?
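
    As an aside, both versions recompute the same subproblems over and over; a memoized variant (a sketch that keeps the first version's base case of 0 => True) makes the recursion cheap and makes it easy to experiment with how 0 should be treated:

        from functools import lru_cache

        PACKS = (6, 9, 20)

        @lru_cache(maxsize=None)
        def is_buyable(n):
            # Base cases: an empty purchase "sums" to 0; negatives are dead ends.
            if n == 0:
                return True
            if n < 0:
                return False
            return any(is_buyable(n - p) for p in PACKS)

        print([m for m in range(1, 30) if not is_buyable(m)])
        # [1, 2, 3, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 22, 23, 25, 28]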
