Search Results

Search found 3111 results on 125 pages for 'labeled statements'.

Page 102/125 | < Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >

  • Fastest way to clamp a real (fixed/floating point) value?

    - by Niklas
    Hi. Is there a more efficient way to clamp real numbers than using if statements or ternary operators? I want to do this both for doubles and for a 32-bit fixed-point implementation (16.16). I'm not asking for code that can handle both cases; they will be handled in separate functions. Obviously, I can do something like:

        double a = calculate();
        double clampedA;
        clampedA = a > MY_MAX ? MY_MAX : a;
        clampedA = clampedA < MY_MIN ? MY_MIN : clampedA;

    or

        double a = calculate();
        double clampedA = a;
        if (clampedA > MY_MAX)
            clampedA = MY_MAX;
        else if (clampedA < MY_MIN)
            clampedA = MY_MIN;

    The fixed-point version would use functions/macros for comparisons. This is done in a performance-critical part of the code, so I'm looking for the most efficient way to do it (which I suspect would involve bit manipulation). EDIT: It has to be standard/portable C; platform-specific functionality is not of any interest here. Also, MY_MIN and MY_MAX are the same type as the value I want clamped (doubles in the examples above).
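
    A minimal sketch of two portable options often compared here (assuming C99 for fmin/fmax); whether either actually beats the plain branching version depends entirely on the compiler and target, so it is worth measuring:

        #include <math.h>
        #include <stdint.h>

        /* Doubles: fmin/fmax usually compile to branch-free min/max
           instructions where the hardware provides them. */
        static inline double clamp_d(double v, double lo, double hi)
        {
            return fmax(lo, fmin(v, hi));
        }

        /* 16.16 fixed point held in an int32_t: the ordinary comparison
           form; modern compilers turn this into conditional moves, and
           bit tricks rarely win. */
        static inline int32_t clamp_fx(int32_t v, int32_t lo, int32_t hi)
        {
            return v < lo ? lo : (v > hi ? hi : v);
        }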

    Read the article

  • Better way of coding this if statement?

    - by HadlowJ
    Hi. I have another question for you very helpful people. I use a lot of if statements, many of which are repetitive and I'm sure could be shortened. This is my current bit of code:

        if (Globals.TotalStands <= 1)
        {
            ScoreUpdate.StandNo2.Visible = false;
            ScoreUpdate.ScoreStand2.Visible = false;
            ScoreUpdate.ScoreOutOf2.Visible = false;
        }
        if (Globals.TotalStands <= 2)
        {
            ScoreUpdate.StandNo3.Visible = false;
            ScoreUpdate.ScoreStand3.Visible = false;
            ScoreUpdate.ScoreOutOf3.Visible = false;
        }
        if (Globals.TotalStands <= 3)
        {
            ScoreUpdate.StandNo4.Visible = false;
            ScoreUpdate.ScoreStand4.Visible = false;
            ScoreUpdate.ScoreOutOf4.Visible = false;
        }
        if (Globals.TotalStands <= 4)
        {
            ScoreUpdate.StandNo5.Visible = false;
            ScoreUpdate.ScoreStand5.Visible = false;
            ScoreUpdate.ScoreOutOf5.Visible = false;
        }
        if (Globals.TotalStands <= 5)
        {
            ScoreUpdate.StandNo6.Visible = false;
            ScoreUpdate.ScoreStand6.Visible = false;
            ScoreUpdate.ScoreOutOf6.Visible = false;
        }
        if (Globals.TotalStands <= 6)
        {
            ScoreUpdate.StandNo7.Visible = false;
            ScoreUpdate.ScoreStand7.Visible = false;
            ScoreUpdate.ScoreOutOf7.Visible = false;
        }
        if (Globals.TotalStands <= 7)
        {
            ScoreUpdate.StandNo8.Visible = false;
            ScoreUpdate.ScoreStand8.Visible = false;
            ScoreUpdate.ScoreOutOf8.Visible = false;
        }

    As you can see, there is a huge amount of code to do something simple (which I also do on a few other forms), and I'm sure there must be a better way of coding this that gets the same result. I'm a code noob so please be gentle. The code is C# and the IDE is Visual Studio 2008 Pro. Thanks
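
    One common refactor, sketched here on the assumption that the per-stand controls can be gathered into an array once (control and property names are taken from the question). Note it also re-shows controls when enough stands exist, which the original code never does:

        // Anonymous-type array works in C# 3.0 / VS 2008.
        var stands = new[]
        {
            new { No = ScoreUpdate.StandNo2, Score = ScoreUpdate.ScoreStand2, OutOf = ScoreUpdate.ScoreOutOf2 },
            new { No = ScoreUpdate.StandNo3, Score = ScoreUpdate.ScoreStand3, OutOf = ScoreUpdate.ScoreOutOf3 },
            new { No = ScoreUpdate.StandNo4, Score = ScoreUpdate.ScoreStand4, OutOf = ScoreUpdate.ScoreOutOf4 },
            new { No = ScoreUpdate.StandNo5, Score = ScoreUpdate.ScoreStand5, OutOf = ScoreUpdate.ScoreOutOf5 },
            new { No = ScoreUpdate.StandNo6, Score = ScoreUpdate.ScoreStand6, OutOf = ScoreUpdate.ScoreOutOf6 },
            new { No = ScoreUpdate.StandNo7, Score = ScoreUpdate.ScoreStand7, OutOf = ScoreUpdate.ScoreOutOf7 },
            new { No = ScoreUpdate.StandNo8, Score = ScoreUpdate.ScoreStand8, OutOf = ScoreUpdate.ScoreOutOf8 }
        };

        for (int i = 0; i < stands.Length; i++)
        {
            // stands[i] holds the controls for stand i + 2;
            // they are visible only when that stand actually exists.
            bool visible = Globals.TotalStands > i + 1;
            stands[i].No.Visible = visible;
            stands[i].Score.Visible = visible;
            stands[i].OutOf.Visible = visible;
        }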

    Read the article

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables / columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run an arbitrary query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert and there are surely features of MySQL that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? For example:

        explain select * from atable where class = somefunction(...)

    where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might 'explain' evaluate somefunction(...)? And then could somefunction(...) be written such that it modifies data?

    Read the article

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests I would like to automatically associate data files with each test. To clarify: if my tests each require an XML file as input data, and then some XPath statements to validate the responses they get back, I would like to externalize the XML and XPath as files and have the testing framework associate them with the particular test being run, by using the unique ID of the test as the file name(s). I tried to get this behavior, but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly. If I have an RSpec example that looks like this:

        describe "Basic functions of this server I'm testing" do
          it "should give me back a response" do
            # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response"
            TestHelper::who_am_i __FILE__, description
            ...
          end
        end

    Is there some better/cleaner/slicker way I can get a unique ID for each test that I could use to associate data files with? Perhaps something built into RSpec I'm unaware of? Thank you, -Bill
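
    A sketch of one cleaner option, assuming a version of RSpec that yields the example object to the block (its metadata then names the data files, with no helper or global needed); the file layout and naming scheme below are illustrative only:

        describe "Basic functions of this server I'm testing" do
          it "should give me back a response" do |example|
            test_id    = example.full_description.tr(" ", "_")
            xml_file   = File.join("data",  "#{test_id}.xml")
            xpath_file = File.join("xpath", "#{test_id}.txt")
            # load the files and run the assertions against them here
          end
        end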

    Read the article

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature:

        Create an empty database
        Tasks -> Import Data...
        .NET Framework Data Provider for Odbc
        Valid DSN (verified it connects)
        Copy data from one or more tables or views
        Check 1 VERY simple table
        Click Preview

    I then get the error "The preview data could not be retrieved." Additional information:

        ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an
        error in your SQL syntax; check the manual that corresponds to your MySQL
        server version for the right syntax to use near '"table_name"' at line 1
        (myodbc5.dll)

    A similar error occurs if I go through the rest of the wizard and perform the operation. The failed step is "Setting Source Connection"; the error refers to retrieving column information and then lists the above error. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
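
    One hedged observation: the syntax error around '"table_name"' suggests the wizard is quoting identifiers with ANSI double quotes, which MySQL rejects unless ANSI_QUOTES is enabled. A sketch of the server-side setting to try before re-running the wizard (this is an assumption about the cause, not a confirmed fix):

        -- run against the MySQL source before starting the import
        SET GLOBAL sql_mode = 'ANSI_QUOTES';

        -- or persistently in my.cnf / my.ini:
        -- [mysqld]
        -- sql-mode = "ANSI_QUOTES"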

    Read the article

  • memcmp, strcmp, strncmp in C

    - by el10780
    I wrote this small piece of code in C to test the memcmp(), strncmp(), and strcmp() functions:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char** argv)
        {
            char *word1 = "apple", *word2 = "atoms";
            if (strncmp(word1, word2, 5) == 0)
                printf("strncmp result.\n");
            if (memcmp(word1, word2, 5) == 0)
                printf("memcmp result.\n");
            if (strcmp(word1, word2) == 0)
                printf("strcmp result.\n");
        }

    Can somebody explain the differences to me, because I am confused by these three functions? My main problem is that I have a file whose lines I tokenize; when I tokenize the word "atoms" in the file, I have to stop the tokenizing process. I first tried strcmp(), but unfortunately when it reached the point in the file where the word "atoms" was placed it didn't stop and continued. When I used either memcmp() or strncmp() it stopped, and I was happy. But then I thought: what if there is a string whose first five letters are a, t, o, m, s and they are followed by other letters? Unfortunately, my concern was right: I tested it with the above code by initializing word1 to "atomsaaaaa" and word2 to "atoms", and memcmp() and strncmp() in the if statements returned 0 (compared equal). On the other hand, strcmp() did not, so it seems I must use strcmp(). I have done Google searches but only got more confused, as I have seen sites and forums define these three differently. If someone could give me correct explanations/definitions so I can use them correctly in my source code, I would be really grateful.
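
    A short sketch of the practical difference (strcmp compares whole NUL-terminated strings; strncmp compares at most n characters but still stops at a NUL; memcmp compares exactly n raw bytes and ignores NUL entirely):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *a = "atomsaaaaa";
            const char *b = "atoms";

            /* strcmp: whole strings; "atomsaaaaa" differs from "atoms" */
            printf("strcmp  equal? %d\n", strcmp(a, b) == 0);       /* 0 */

            /* strncmp: first 5 characters only; they match */
            printf("strncmp equal? %d\n", strncmp(a, b, 5) == 0);   /* 1 */

            /* memcmp: exactly 5 bytes, never stops at '\0'; they match */
            printf("memcmp  equal? %d\n", memcmp(a, b, 5) == 0);    /* 1 */

            return 0;
        }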

    Read the article

  • How can I unit test my custom validation attribute

    - by MightyAtom
    I have a custom ASP.NET MVC class-level validation attribute. My question is: how can I unit test it? It would be one thing to test that the class has the attribute, but that would not actually exercise the logic inside it, which is what I want to test.

        [Serializable]
        [EligabilityStudentDebtsAttribute(ErrorMessage = "You must answer yes or no to all questions")]
        public class Eligability
        {
            [BooleanRequiredToBeTrue(ErrorMessage = "You must agree to the statements listed")]
            public bool StatementAgree { get; set; }

            [Required(ErrorMessage = "Please choose an option")]
            public bool? Income { get; set; }

            // .....removed for brevity
        }

        [AttributeUsage(AttributeTargets.Class)]
        public class EligabilityStudentDebtsAttribute : ValidationAttribute
        {
            // If AnyDebts is true then StudentDebts must be true or false
            public override bool IsValid(object value)
            {
                Eligability elig = (Eligability)value;
                bool ok = true;
                if (elig.AnyDebts == true)
                {
                    if (elig.StudentDebts == null)
                    {
                        ok = false;
                    }
                }
                return ok;
            }
        }

    I have tried to write a test as follows, but it does not work:

        [TestMethod]
        public void Eligability_model_StudentDebts_is_required_if_AnyDebts_is_true()
        {
            // Arrange
            var eligability = new Eligability();
            var controller = new ApplicationController();

            // Act
            controller.ModelState.Clear();
            controller.ValidateModel(eligability);
            var actionResult = controller.Section2(eligability, null, string.Empty);

            // Assert
            Assert.IsInstanceOfType(actionResult, typeof(ViewResult));
            Assert.AreEqual(string.Empty, ((ViewResult)actionResult).ViewName);
            Assert.AreEqual(eligability, ((ViewResult)actionResult).ViewData.Model);
            Assert.IsFalse(((ViewResult)actionResult).ViewData.ModelState.IsValid);
        }

    The ModelStateDictionary does not contain the key for this custom attribute; it only contains entries for the standard validation attributes. Why is this? What is the best way to test these custom attributes? Thanks
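
    A sketch of testing the attribute in isolation instead of going through controller model binding; class and property names are taken from the question (AnyDebts and StudentDebts are among the members trimmed for brevity):

        [TestMethod]
        public void IsValid_returns_false_when_AnyDebts_true_and_StudentDebts_null()
        {
            var attribute = new EligabilityStudentDebtsAttribute();
            var model = new Eligability { AnyDebts = true, StudentDebts = null };

            Assert.IsFalse(attribute.IsValid(model));
        }

        [TestMethod]
        public void IsValid_returns_true_when_AnyDebts_is_false()
        {
            var attribute = new EligabilityStudentDebtsAttribute();
            var model = new Eligability { AnyDebts = false };

            Assert.IsTrue(attribute.IsValid(model));
        }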

    Read the article

  • Any alternative way of writing to a file other than ofstream?

    - by Aditya
    Hi all, I am performing file operations (writeToFile) which fetch data from an XML file and write it into an output file (a1.txt). I am using MS Visual C++ 2008 on Windows XP. Currently I write to the output file like this:

        ofstream hdrOutputFile;
        /* few other stmts */
        hdrOutputFile.open(fileName, std::ios::out);

        hdrOutputFile << "#include \"commondata.h\"" << endl;
        hdrOutputFile << "#include \"Commonconfig.h\"" << endl;
        hdrOutputFile << "#include \"commontable.h\"" << endl << endl;
        hdrOutputFile << "#pragma pack(push,1)" << endl;
        hdrOutputFile << "typedef struct \n {" << endl;
        /* similar hdrOutputFile statements... */

    I have around 250 lines to write. Is there a better way to perform this task? I want to reduce the repeated hdrOutputFile statements and use a buffer instead, something like:

        buff = "#include \"commontable.h\"" + "typedef struct \n {" + .......;
        hdrOutputFile << buff;

    Is this possible? Thanks, Ramm
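
    A minimal sketch of the buffer approach. Note that two adjacent string literals cannot be joined with + in C++ (that would be pointer arithmetic) unless at least one operand is a std::string; appending everything to one std::string and writing it once side-steps that:

        #include <fstream>
        #include <string>

        void writeHeader(const std::string& fileName)
        {
            std::string buf;
            buf += "#include \"commondata.h\"\n";
            buf += "#include \"Commonconfig.h\"\n";
            buf += "#include \"commontable.h\"\n\n";
            buf += "#pragma pack(push,1)\n";
            buf += "typedef struct\n{\n";
            // ...append the remaining ~250 lines the same way...

            std::ofstream hdrOutputFile(fileName.c_str());
            hdrOutputFile << buf;
        }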

    Read the article

  • What is the right path for PHP includes on a Mac?

    - by skorned
    Running Mac OS X 10.5.8, with PHP 5.2.11 pre-installed, using Coda 1.6.10. I'm writing PHP files and then previewing them running from file, not from a server. This was working fine until I tried PHP includes. These don't work as a relative path, only as an absolute path from the root of the drive. Is there any way I can use statements like

        include_once "common/header.php";

    without specifying my entire file path like so:

        include_once "/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0/common/base.php";

    where ColoredLists_v1.0 is the directory with all the website files in it? I tried solutions like prepending $_SERVER['DOCUMENT_ROOT'] or dirname(__FILE__) to the file paths, but that didn't work as the variables were not set. Is there an easy way to do this, or a configuration I can change so that it looks in a specific directory by default instead of looking at the drive root? Currently, echoing the include path shows ".:". When I include this line at the start of the script, it works:

        set_include_path('/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0');

    However, if I want to do this for all my scripts, I can't seem to make the change permanent. Even after editing the Unix include_path in my php.ini, it doesn't seem to work.
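
    A sketch of the usual workaround: anchor each include to the including script's own directory rather than to the include_path (dirname(__FILE__) is always defined, unlike $_SERVER['DOCUMENT_ROOT'] when running outside a web server):

        <?php
        // relative to the directory of the current file
        require_once dirname(__FILE__) . '/common/header.php';

        // or set the include path once in a bootstrap file that every page includes
        set_include_path(dirname(__FILE__) . PATH_SEPARATOR . get_include_path());
        include_once 'common/header.php';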

    Read the article

  • Vector does reallocation on every push_back

    - by Amrish
    IDE: Visual Studio 2008, Visual C++. I have a custom class Class1 with a copy constructor, and a vector of Class1. Data is inserted using the following code:

        Class1* objClass1;
        vector<Class1> vClass1;

        for (int i = 0; i < 1000; i++)
        {
            objClass1 = new Class1();
            vClass1.push_back(*objClass1);
            delete objClass1;
        }

    Now on every insert the vector gets reallocated and all the existing contents are copied to new locations. For example, if the vector has 5 elements and I insert the 6th one, the previous 5 elements along with the new one get copied to a new location (I figured this out by adding log statements to the copy constructor). When using reserve(), however, this does not happen, as expected. I have the following questions: Is it mandatory to always use reserve()? Does the vector really reallocate on every push_back, or does it happen because I am debugging?
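
    A small sketch that makes the growth policy visible: printing capacity() shows that reallocation happens only when the current capacity is exhausted, and implementations grow it geometrically, so push_back is amortized constant time even without reserve() (reserve simply avoids the intermediate reallocations when the final size is known up front):

        #include <iostream>
        #include <vector>

        int main()
        {
            std::vector<int> v;
            std::vector<int>::size_type lastCap = 0;

            for (int i = 0; i < 1000; ++i)
            {
                v.push_back(i);
                if (v.capacity() != lastCap)   // only prints on actual reallocations
                {
                    lastCap = v.capacity();
                    std::cout << "size " << v.size()
                              << " -> capacity " << lastCap << '\n';
                }
            }
            return 0;
        }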

    Read the article

  • PHP: How do I rename a directory where the parent directory is variable?

    - by gsquare567
    I would like to move the files inside uploads/pension/#SOME_VARIABLE_NUMBER#/#SOME_CONSTANT_NUMBER#/. Here is my code:

        // move pension statements
        // located at uploads/pension/%COMPANY_ID%/%USER_ID%/%HASH%
        // so just move the %USER_ID% folder to the new company
        $oldPensionDir = "uploads/pension/" . $demo_user[Users::companyID] . "/" . $demo_user[Users::userID] . "/";
        $newPensionDir = "uploads/pension/" . $newCompanyID . "/" . $demo_user[Users::userID] . "/";

        // see if the user had any files, and if so, move them
        if (file_exists($oldPensionDir))
        {
            // if it doesn't exist, make it
            if (!file_exists($newPensionDir))
                mkdir($newPensionDir);

            // move the folder
            rename($oldPensionDir, $newPensionDir);
        }

    However, when I need to make the directory with the mkdir function, I get:

        mkdir() [function.mkdir]: No such file or directory

    OK, maybe the mkdir won't work, but what about the rename? Perhaps that will make the directory if it's not there... nope:

        rename(uploads/pension/1001/783/,uploads/pension/1000/783/)
        [function.rename]: The system cannot find the path specified. (code: 3)

    So there are two errors. I'm pretty sure that if the renaming works I won't even need the mkdir, but who knows... Can anyone tell me why these errors occur and how to fix them? Thanks!
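
    A hedged reading of both errors: the parent directory for the new company (uploads/pension/1000/) does not exist yet, and neither mkdir() without the recursive flag nor rename() will create missing parent directories. A sketch of the fix under that assumption:

        <?php
        // create the new company's directory tree first (recursive = true),
        // then move the user's folder into it
        $newCompanyDir = "uploads/pension/" . $newCompanyID . "/";
        if (!file_exists($newCompanyDir)) {
            mkdir($newCompanyDir, 0755, true);
        }

        if (file_exists($oldPensionDir) && !file_exists($newPensionDir)) {
            rename($oldPensionDir, $newPensionDir);
        }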

    Read the article

  • PHP sql with foreach loop variable problem

    - by anthony
    This is really getting frustrating. I have a text file that I'm reading for a list of part numbers, which goes into an array. I'm using the following foreach loop to search a database for matching numbers:

        $file = file('parts_array.txt');
        foreach ($file as $newPart)
        {
            $sql = "SELECT products_sku FROM products WHERE products_sku='" . $newPart . "'";
            $rs = mysql_query($sql);
            $num_rows = mysql_num_rows($rs);
            echo $num_rows;
            echo "<br />";
        }

    The problem is that I'm getting 0 rows returned from mysql_num_rows. I can type the SQL statement without the variable and it works perfectly. I can even echo out the SQL statement from this script, copy and paste the statement from the browser, and it works. But for some reason I'm not getting any records when I'm using the variable. I've used variables in SQL statements tons of times, but this really has me stumped.
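
    A sketch of the most common cause and fix: file() keeps the trailing newline on each line, so the query is really searching for "12345\n"; trimming (and escaping) the value, or asking file() to strip newlines, usually resolves it. The explanation is an assumption, but it matches the symptoms (the echoed statement looks right because the newline is invisible in the browser):

        <?php
        $file = file('parts_array.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        foreach ($file as $newPart) {
            $sku = mysql_real_escape_string(trim($newPart));
            $sql = "SELECT products_sku FROM products WHERE products_sku='" . $sku . "'";
            $rs  = mysql_query($sql);
            echo mysql_num_rows($rs) . "<br />";
        }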

    Read the article

  • Zend, slow load, "waiting for response" for 20-80 seconds on local site

    - by Tony C.
    So I have several sites running under the same Zend setup. All of the sites run pretty normally except one. Upon loading or reloading this one site, regardless of which page you're on (excluding the 404 page, explanation later), you get a serious pause before any content begins to download. Using Firebug's Net panel you can see that the first request, www.(siteaddress).com.local, shows a "waiting for response" bar (purple) lasting anywhere from 20 to sometimes 80+ seconds, and this isn't on a dev site, this is a local site under localhost. What I've managed to figure out so far is that all the pages do this except my 404 page. The reason the 404 page doesn't succumb to this is that it uses a separate controller (the error controller) and therefore bypasses much of the controller code and functions the other parts of the site use. Using exit statements I've managed to figure out that the problem happens somewhere between my postDispatch and my main (topmost) controller's init function. If I exit in the main controller's init, the page loads (then exits instantly, no wait). If I do the same in preDispatch or postDispatch, the page waits the 20-80 seconds and then exits. Is there a diagram or explanation somewhere, or a way for me to find out what events fire between postDispatch and the main controller's init function? Or does anyone have any clue what might cause this? Any help would be greatly appreciated...
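
    A sketch of one way to see the dispatch hooks firing (Zend Framework 1): a front controller plugin that logs a timestamp at each hook, which narrows down which stage the 20-80 second pause falls into. Registering it in the bootstrap is assumed:

        <?php
        class TimingPlugin extends Zend_Controller_Plugin_Abstract
        {
            private function mark($label)
            {
                error_log(sprintf('%-22s %.4f', $label, microtime(true)));
            }

            public function routeStartup(Zend_Controller_Request_Abstract $request)        { $this->mark('routeStartup'); }
            public function routeShutdown(Zend_Controller_Request_Abstract $request)       { $this->mark('routeShutdown'); }
            public function dispatchLoopStartup(Zend_Controller_Request_Abstract $request) { $this->mark('dispatchLoopStartup'); }
            public function preDispatch(Zend_Controller_Request_Abstract $request)         { $this->mark('preDispatch'); }
            public function postDispatch(Zend_Controller_Request_Abstract $request)        { $this->mark('postDispatch'); }
            public function dispatchLoopShutdown()                                         { $this->mark('dispatchLoopShutdown'); }
        }

        // in the bootstrap:
        // Zend_Controller_Front::getInstance()->registerPlugin(new TimingPlugin());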

    Read the article

  • give feedback on this pointer program

    - by JohnWong
    This is a relatively simple program, but I want to get some feedback about how I can improve it (if at all), for example unnecessary statements:

        #include <iostream>
        #include <fstream>
        using namespace std;

        double Average(double*, int);

        int main()
        {
            ifstream inFile("data2.txt");
            const int SIZE = 4;
            double *array = new double(SIZE);
            double *temp;
            temp = array;
            for (int i = 0; i < SIZE; i++)
            {
                inFile >> *array++;
            }
            cout << "Average is: " << Average(temp, SIZE) << endl;
        }

        double Average(double *pointer, int x)
        {
            double sum = 0;
            for (int i = 0; i < x; i++)
            {
                sum += *pointer++;
            }
            return (sum/x);
        }

    The code is valid and the program works fine, but I just want to hear what you guys think, since most of you have more experience than I do (well, I am only a freshman... lol). Thanks.
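
    A sketch of the two things a reviewer would most likely flag: new double(SIZE) allocates a single double initialized to 4.0 rather than an array of four (that would be new double[SIZE]), and the allocation is never freed. With the size known at compile time, no heap allocation is needed at all:

        #include <fstream>
        #include <iostream>

        double Average(const double* values, int count)
        {
            double sum = 0;
            for (int i = 0; i < count; ++i)
                sum += values[i];
            return sum / count;
        }

        int main()
        {
            std::ifstream inFile("data2.txt");
            const int SIZE = 4;
            double values[SIZE];

            for (int i = 0; i < SIZE; ++i)
                inFile >> values[i];

            std::cout << "Average is: " << Average(values, SIZE) << std::endl;
        }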

    Read the article

  • Sqlite3 prompting `...>` instead of `sqlite>`

    - by Imray
    I'm following a beginner's tutorial on sqlite3. The first step is creating a new database, so I enter a name (movies.db). I'm expecting to get another sqlite> prompt on the next line and continue with the tutorial, but instead I get a lame ...> after which I can type any gibberish I want. Clearly, this is not good. What my command prompt looks like:

        SQLite version 3.8.1 2013-10-17 12:57:35
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> $ sqlite3 movies.db
           ...> gibberish
           ...> dsds
           ...> sdada
           ...> gfgys
           ...> a
           ...> Aaaaarrrgh!
           ...>

    How do I get sqlite3 to work normally for me? Pardon my newbie-ness. I hope I've phrased this question in a way that might help other newbs too.
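
    A hedged reading of that transcript: sqlite3 was started with no arguments, and "$ sqlite3 movies.db" was then typed at the sqlite> prompt, so the shell treats it as the beginning of an SQL statement and shows the ...> continuation prompt until a terminating ";" arrives. A sketch of the expected flow:

        $ sqlite3 movies.db          (run at the OS shell, not inside sqlite)
        sqlite> CREATE TABLE movies (id INTEGER PRIMARY KEY, title TEXT);
        sqlite> .tables
        movies
        sqlite> .quit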

    Read the article

  • Matching on search attributes selected by customer on front end

    - by CodeNinja1974
    I have a method in a class that returns results based on a set of customer-specified criteria. The method matches what the customer specifies on the front end against each item in a collection that comes from the database. In cases where the customer does not specify one of the attributes, the ID of that attribute is passed into the method as 0 (the database has an identity on all tables that is seeded at 1 and is incremental). In that case the attribute should be ignored; for example, if the customer does not specify the location, then customerSearchCriteria.LocationID = 0 coming into the method. The matching should then use the other attributes and return everything matching them, for example:

        public IEnumerable<Pet> FindPetsMatchingCustomerCriteria(CustomerPetSearchCriteria customerSearchCriteria)
        {
            if (customerSearchCriteria.LocationID == 0)
            {
                return repository.GetAllPetsLinkedCriteria()
                    .Where(x => x.TypeID == customerSearchCriteria.TypeID &&
                                x.FeedingMethodID == customerSearchCriteria.FeedingMethodID &&
                                x.FlyAblityID == customerSearchCriteria.FlyAblityID)
                    .Select(y => y.Pet);
            }
        }

    The code for when all criteria are specified is shown below:

        private PetsRepository repository = new PetsRepository();

        public IEnumerable<Pet> FindPetsMatchingCustomerCriteria(CustomerPetSearchCriteria customerSearchCriteria)
        {
            return repository.GetAllPetsLinkedCriteria()
                .Where(x => x.TypeID == customerSearchCriteria.TypeID &&
                            x.FeedingMethodID == customerSearchCriteria.FeedingMethodID &&
                            x.FlyAblityID == customerSearchCriteria.FlyAblityID &&
                            x.LocationID == customerSearchCriteria.LocationID)
                .Select(y => y.Pet);
        }

    I want to avoid a whole set of if and else statements to cater for each case where the customer does not explicitly select an attribute of the results they are looking for. What is the most succinct and efficient way I could achieve this?
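
    A sketch of composing the filters conditionally, treating 0 as "not specified"; this assumes GetAllPetsLinkedCriteria() returns an IQueryable/IEnumerable that can be filtered incrementally before it is enumerated:

        public IEnumerable<Pet> FindPetsMatchingCustomerCriteria(CustomerPetSearchCriteria c)
        {
            var query = repository.GetAllPetsLinkedCriteria();

            if (c.TypeID != 0)          query = query.Where(x => x.TypeID == c.TypeID);
            if (c.FeedingMethodID != 0) query = query.Where(x => x.FeedingMethodID == c.FeedingMethodID);
            if (c.FlyAblityID != 0)     query = query.Where(x => x.FlyAblityID == c.FlyAblityID);
            if (c.LocationID != 0)      query = query.Where(x => x.LocationID == c.LocationID);

            return query.Select(y => y.Pet);
        }

    An equivalent single-expression form folds the "0 means ignore" test into each predicate, e.g. (c.LocationID == 0 || x.LocationID == c.LocationID).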

    Read the article

  • jQuery: problems using .live()

    - by TeddTedd
    I have the following jQuery code:

        $("#myDIV li:eq(0)").live('click', function(){ funcA(); });
        $("#myDIV li:eq(1)").live('click', function(){ funcB(); });
        $("#myDIV li:eq(2)").live('click', function(){ funcC(); });
        $("#myDIV li:eq(3)").live('click', function(){ funcD(); });

    and realized it's really inefficient. So I tried the following, which I believe should be more efficient; however, the code does not work:

        var tab_node = $("#myDIV li");
        tab_node.eq(0).live('click', function(){ funcA(); });
        tab_node.eq(1).live('click', function(){ funcB(); });
        tab_node.eq(2).live('click', function(){ funcC(); });
        tab_node.eq(3).live('click', function(){ funcD(); });

    Any idea how I can make my code more efficient while still having it work? UPDATE: From the answers below, it sounds like these two statements are not equivalent. New question: is there any way to make my original code more efficient?
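
    A sketch of one delegated handler for the whole list. .live() works from the selector string rather than the already-matched elements (one plausible reason the cached tab_node.eq(n) form stops firing), so a single handler that looks at the clicked item's position avoids both the repetition and that pitfall. Function names are the ones from the question:

        var handlers = [funcA, funcB, funcC, funcD];

        $("#myDIV li").live("click", function () {
            var index = $(this).index();   // position of the clicked <li> among its siblings
            if (handlers[index]) {
                handlers[index]();
            }
        });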

    Read the article

  • What's the cleanest way to do byte-level manipulation?

    - by Jurily
    I have the following C struct from the source code of a server, and many similar ones:

        // preprocessing magic: 1-byte alignment
        typedef struct AUTH_LOGON_CHALLENGE_C
        {
            // 4 byte header
            uint8  cmd;
            uint8  error;
            uint16 size;

            // 30 bytes
            uint8  gamename[4];
            uint8  version1;
            uint8  version2;
            uint8  version3;
            uint16 build;
            uint8  platform[4];
            uint8  os[4];
            uint8  country[4];
            uint32 timezone_bias;
            uint32 ip;
            uint8  I_len;

            // I_len bytes
            uint8  I[1];
        } sAuthLogonChallenge_C;

        // usage (the actual code that will read my packets):
        sAuthLogonChallenge_C *ch = (sAuthLogonChallenge_C*)&buf[0]; // where buf is a raw byte array

    These are TCP packets, and I need to implement something that emits and reads them in C#. What's the cleanest way to do this? My current approach involves

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        unsafe struct foo { ... }

    and a lot of fixed statements to read and write it, but it feels really clunky, and since the packet itself is not fixed length, I don't feel comfortable using it. Also, it's a lot of work. However, it does describe the data structure nicely, and the protocol may change over time, so this may be ideal for maintenance. What are my options? Would it be easier to just write it in C++ and use some .NET magic to use that?
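
    A sketch of reading the packet with a BinaryReader instead of an unsafe fixed-layout struct, which copes naturally with the variable-length tail (field names follow the C struct; little-endian byte order on the wire is assumed, since that is what BinaryReader produces):

        using System.IO;

        class AuthLogonChallenge
        {
            public byte Cmd, Error;
            public ushort Size;
            public byte[] GameName;
            public byte Version1, Version2, Version3;
            public ushort Build;
            public byte[] Platform, Os, Country;
            public uint TimezoneBias, Ip;
            public byte[] I;

            public static AuthLogonChallenge Read(byte[] buf)
            {
                // a matching Write method using BinaryWriter would emit the packet
                using (var r = new BinaryReader(new MemoryStream(buf)))
                {
                    var p = new AuthLogonChallenge();
                    p.Cmd          = r.ReadByte();
                    p.Error        = r.ReadByte();
                    p.Size         = r.ReadUInt16();
                    p.GameName     = r.ReadBytes(4);
                    p.Version1     = r.ReadByte();
                    p.Version2     = r.ReadByte();
                    p.Version3     = r.ReadByte();
                    p.Build        = r.ReadUInt16();
                    p.Platform     = r.ReadBytes(4);
                    p.Os           = r.ReadBytes(4);
                    p.Country      = r.ReadBytes(4);
                    p.TimezoneBias = r.ReadUInt32();
                    p.Ip           = r.ReadUInt32();
                    p.I            = r.ReadBytes(r.ReadByte());   // I_len, then I bytes
                    return p;
                }
            }
        }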

    Read the article

  • How can I concisely copy multiple SQL rows, with minor modifications?

    - by Steve Jessop
    I'm copying a subset of some data, so that the copy will be independently modifiable in future. One of my SQL statements looks something like this (I've changed table and column names):

        INSERT Product (ProductRangeID, Name, Weight, Price, Color, And, So, On)
        SELECT @newrangeid AS ProductRangeID, Name, Weight, Price, Color, And, So, On
        FROM Product
        WHERE ProductRangeID = @oldrangeid AND Color = 'Blue'

    That is, we're launching a new product range which initially just consists of all the blue items in a specified current range, under new SKUs. In future we may change the "blue-range" versions of the products independently of the old ones. I'm pretty new at SQL: is there something clever I should do to avoid listing all those columns, or at least avoid listing them twice? I can live with the current code, but I'd rather not have to come back and modify it whenever new columns are added to Product. In its current form it would just silently fail to copy a new column if I forget to do that, which should show up in testing but isn't great. I am copying every column except for the ProductRangeID (which I modify), the ProductID (incrementing primary key) and two DateCreated and timestamp columns (which take their auto-generated values for the new row). By the way, I suspect I should probably have a separate join table between ProductID and ProductRangeID; I didn't define the tables. This is in a T-SQL stored procedure on SQL Server 2008, if that makes any difference.
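
    A sketch of generating the column list from the catalog so it tracks schema changes; the names in the exclusion list (identity, range, and auto-populated columns) are assumptions based on the description and would need to match the real table:

        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        SELECT @cols = STUFF((
            SELECT ', ' + QUOTENAME(name)
            FROM sys.columns
            WHERE object_id = OBJECT_ID('dbo.Product')
              AND name NOT IN ('ProductID', 'ProductRangeID', 'DateCreated', 'LastModified')
            ORDER BY column_id
            FOR XML PATH('')), 1, 2, '');

        SET @sql = N'INSERT dbo.Product (ProductRangeID, ' + @cols + N')
                     SELECT @newrangeid, ' + @cols + N'
                     FROM dbo.Product
                     WHERE ProductRangeID = @oldrangeid AND Color = ''Blue'';';

        EXEC sp_executesql @sql,
             N'@newrangeid int, @oldrangeid int',
             @newrangeid = @newrangeid, @oldrangeid = @oldrangeid;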

    Read the article

  • How do I get 2-way data binding to work for nested asp.net Repeater controls

    - by jimblanchard
    I have the following (trimmed) markup:

        <asp:Repeater ID="CostCategoryRepeater" runat="server">
            <ItemTemplate>
                <div class="costCategory">
                    <asp:Repeater ID="CostRepeater" runat="server" DataSource='<%# Eval("Costs")%>'>
                        <ItemTemplate>
                            <tr class="oddCostRows">
                                <td class="costItemTextRight"><span><%# Eval("Variance", "{0:c0}")%></span></td>
                                <td class="costItemTextRight"><input id="SupplementAmount" class="costEntryRight" type="text" value='<%# Bind("SupplementAmount")%>' runat="server" /></td>
                            </tr>
                        </ItemTemplate>
                    </asp:Repeater>
                </div>
            </ItemTemplate>
        </asp:Repeater>

    The outer Repeater's DataSource is set in the code-behind. I've snipped them, but there are Eval statements that wire up to the properties in the outer Repeater. Anyway, one of the fields in the inner Repeater needs to be a Bind instead of an Eval, as I want to get the values the user types in. The SupplementAmount input element correctly receives its value when the page loads, but on the other side, when I inspect the contents of the Costs list when the form posts back, the changes the user made aren't present. Thanks.
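
    For context, a sketch of reading the edited values back by hand on postback: the Repeater is generally a read-only data-bound control (unlike FormView/ListView edit templates), so Bind alone will not push values back into the Costs list; walking the items with FindControl is one workaround (IDs are taken from the markup above):

        protected void Save_Click(object sender, EventArgs e)
        {
            foreach (RepeaterItem category in CostCategoryRepeater.Items)
            {
                var costRepeater = (Repeater)category.FindControl("CostRepeater");
                foreach (RepeaterItem row in costRepeater.Items)
                {
                    var amountBox = (System.Web.UI.HtmlControls.HtmlInputText)row.FindControl("SupplementAmount");
                    decimal amount;
                    if (decimal.TryParse(amountBox.Value, out amount))
                    {
                        // apply 'amount' back onto the matching Cost object here
                    }
                }
            }
        }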

    Read the article

  • How to report progress of a JavaScript function?

    - by LambyPie
    I have a JavaScript function which is quite long and performs a number of tasks. I would like to report progress to the user by updating the contents of a SPAN element with a message as I go. I tried adding document.getElementById('spnProgress').innerText = ... statements throughout the function code. However, while the function is executing the UI will not update, so you only ever see the last message written to the SPAN, which is not very helpful. My current solution is to break the task up into a number of functions; at the end of each I set the SPAN message and then trigger the next one with a window.setTimeout call with a very short delay (say 10ms). This yields control and allows the browser to repaint the SPAN with the updated message before starting the next step. However, I find this very messy and the code difficult to follow, so I'm thinking there must be a better way. Does anyone have any suggestions? Is there any way to force the SPAN to repaint without having to leave the context of the function? Thanks
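
    A sketch of keeping the setTimeout approach but tidying it into one generic step runner, so each piece of work only declares its label and function ('spnProgress' comes from the question; the step functions are hypothetical placeholders):

        // run an array of { label, work } steps, yielding to the browser
        // between steps so the progress SPAN can repaint
        function runSteps(steps) {
            var i = 0;
            (function next() {
                if (i >= steps.length) { return; }
                document.getElementById('spnProgress').innerHTML = steps[i].label;
                setTimeout(function () {
                    steps[i].work();   // do this chunk of the long task
                    i++;
                    next();            // schedule the following step
                }, 10);
            })();
        }

        runSteps([
            { label: 'Loading data...', work: loadData },
            { label: 'Crunching...',    work: crunch },
            { label: 'Done.',           work: function () {} }
        ]);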

    Read the article

  • What is the difference between these two linq implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's "Reimplementing LINQ to Objects" series. In the article on implementing Where, I found the following snippets, but I don't get what advantage we gain by splitting the original method into two.

    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it's for eager validation while deferring the rest, but I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validation is performed eagerly in one and not in the other?

    Conclusion/Solution: I got confused because of my lack of understanding of which functions are treated as iterator blocks. I assumed it was determined by a method's signature, such as returning IEnumerable<T>. But, based on the answers, now I get it: a method is an iterator block if it uses yield statements.
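
    A small sketch of the observable difference (the names WhereNaive and WhereSplit are hypothetical stand-ins for the two versions above): because the naive version is itself an iterator block, its argument checks only run when enumeration begins, so the exception fires far from the buggy call site.

        IEnumerable<int> numbers = null;

        // Naive single-method version: no exception on this line...
        var lazy = numbers.WhereNaive(x => x > 0);
        // ...ArgumentNullException is only thrown here, when iteration starts:
        // foreach (var n in lazy) { }

        // Split version: the public method is an ordinary method, so it
        // validates immediately and the exception points at the real caller:
        // var eager = numbers.WhereSplit(x => x > 0);   // throws right away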

    Read the article

  • Python: problem with tiny script to delete files

    - by Rosarch
    I have a project that used to be under SVN, but now I'm moving it to Mercurial, so I want to clear out all the .svn files. It's a little too big to do by hand, so I decided to write a Python script to do it. But it isn't working.

        def cleandir(dir_path):
            print "Cleaning %s\n" % dir_path
            toDelete = []
            files = os.listdir(dir_path)
            for filedir in files:
                print "considering %s" % filedir
                # continue
                if filedir == '.' or filedir == '..':
                    print "skipping %s" % filedir
                    continue
                path = dir_path + os.sep + filedir
                if os.path.isdir(path):
                    cleandir(path)
                else:
                    print "not dir: %s" % path
                if 'svn' in filedir:
                    toDelete.append(path)
            print "Files to be deleted:"
            for candidate in toDelete:
                print candidate
            print "Delete all? [y|n]"
            choice = raw_input()
            if choice == 'y':
                for filedir in toDelete:
                    if os.path.isdir(filedir):
                        os.rmdir(filedir)
                    else:
                        os.unlink(filedir)
            exit()

        if __name__ == "__main__":
            cleandir(dir)

    The print statements show that it's only "considering" the filedirs whose names start with ".". However, if I uncomment the continue statement, all the filedirs are "considered". Why is this? Or is there some other utility that already exists to recursively de-SVN-ify a directory tree?
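
    For the "existing utility" part, a sketch of the usual approach with Python's standard library: walk the tree and remove each .svn directory together with its contents via shutil.rmtree (os.rmdir only removes empty directories, which is one reason deleting them file by file gets fiddly):

        import os
        import shutil

        def strip_svn(root):
            for dirpath, dirnames, filenames in os.walk(root):
                if '.svn' in dirnames:
                    shutil.rmtree(os.path.join(dirpath, '.svn'))
                    dirnames.remove('.svn')   # don't descend into what was just removed

        strip_svn('path/to/project')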

    Read the article

  • Porting Symbian C++ to Android NDK

    - by Donal Rafferty
    I've been given some Symbian C++ code to port over for use with the Android NDK. The code has lots of Symbian-specific constructs in it, and I have very little experience with C++, so it's not going very well. The main thing slowing me down is figuring out the alternatives in normal C++ for the Symbian-specific code. At the minute the compiler is throwing out all sorts of errors for unrecognised types. From my recent research, these are the types that I believe are Symbian-specific: TInt, TBool, TDesc8, RSocket, TInetAddress, TBuf, HBufC, RPointerArray. Changing TInt and TBool to int and bool respectively works in the compiler, but I am unsure what to use for the other types. Can anyone help me out with them, especially TDesc, TBuf and HBuf? Also, Symbian has a two-phase constructor using NewL and NewLC; would changing this to a normal C++ constructor be OK? Finally, Symbian uses the cleanup stack to help eliminate memory leaks, I believe; would removing the cleanup stack code be acceptable? I presume it should be replaced with try/catch statements?
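
    A sketch of rough stand-ins for the types named above; these are porting assumptions rather than exact equivalents (Symbian descriptors carry their own length and are not NUL-terminated, so conversions need care):

        #include <stdint.h>
        #include <string>
        #include <vector>

        typedef int32_t TInt;    // TInt  -> 32-bit signed integer
        typedef bool    TBool;   // TBool -> bool

        // TDesC8 / TBuf8 / HBufC8 (8-bit descriptors and buffers) -> std::string
        // 16-bit descriptor variants                              -> std::wstring or UTF-8 std::string
        // RPointerArray<T>                                        -> std::vector<T*>
        // RSocket / TInetAddress have no standard C++ equivalent; on Android/NDK
        // the usual replacement is POSIX sockets (<sys/socket.h>, <netinet/in.h>).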

    Read the article

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
            id         NUMBER(3),
            name       VARCHAR2(40),
            archive_id NUMBER(3)
        )

        tb_suppliers (
            id         NUMBER(3),
            name       VARCHAR2(40),
            contact    VARCHAR2(40),
            xxx,
            xxx,
            archive_id NUMBER(3)
        )

    The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key. My problem is with the SELECT statements to do the actual duplication of the data. Because the columns vary by table, I am struggling to come up with a unified SELECT statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do:

        CREATE TABLE temp AS (SELECT * FROM original_table);
        UPDATE temp SET archive_id = something;
        INSERT INTO original_table (SELECT * FROM temp);
        DROP TABLE temp;

    I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone else have any solution?
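
    A sketch of doing the copy without temporary tables or DDL, by generating the column list from the data dictionary for every table that has an archive_id column (LISTAGG assumes Oracle 11gR2 or later; the archive numbers are example values):

        DECLARE
            v_cols VARCHAR2(4000);
            v_old  NUMBER := 1;   -- archive being copied
            v_new  NUMBER := 2;   -- archive being created
        BEGIN
            FOR t IN (SELECT table_name
                        FROM user_tab_columns
                       WHERE column_name = 'ARCHIVE_ID'
                       GROUP BY table_name)
            LOOP
                SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
                  INTO v_cols
                  FROM user_tab_columns
                 WHERE table_name = t.table_name
                   AND column_name <> 'ARCHIVE_ID';

                EXECUTE IMMEDIATE
                    'INSERT INTO ' || t.table_name ||
                    ' (archive_id, ' || v_cols || ')' ||
                    ' SELECT :new_id, ' || v_cols ||
                    '   FROM ' || t.table_name ||
                    '  WHERE archive_id = :old_id'
                    USING v_new, v_old;
            END LOOP;
        END;
        /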

    Read the article
