Search Results

Search found 7418 results on 297 pages for 'argument passing'.

Page 263/297

  • Why can it be compiled in GNU/C++ but not in VC++ 2010 RTM?

    - by volnet
    Here is the code:

        #include #include #include #include "copy_of_auto_ptr.h"
        #ifdef _MSC_VER
        #pragma message("#include ")
        #include // http://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html#Diagnostic-Pragmas
        #endif

        /* cases 1-4 are the requirements of auto_ptr, which are from
           http://ptgmedia.pearsoncmg.com/images/020163371X/autoptrupdate/auto_ptr_update.html */

        /* case 1. (1) Direct-initialization, same type, e.g. */
        std::auto_ptr<int> source_int() {
            // return std::auto_ptr<int>(new int(3));
            std::auto_ptr<int> tmp(new int(3));
            return tmp;
        }

        /* case 2. (2) Copy-initialization, same type, e.g. */
        void sink_int(std::auto_ptr<int> p) { std::cout << *p; }

        /* case 3. (3) Direct-initialization, base-from-derived, e.g. */
        std::auto_ptr<Derived> source_derived() {
            // return std::auto_ptr<Derived>(new Derived());
            std::auto_ptr<Derived> tmp(new Derived());
            return tmp;
        }

        /* case 4. (4) Copy-initialization, base-from-derived, e.g. */
        void sink_base(std::auto_ptr<Base> p) { p->go(); }

        int main(void) {
            // case 1.
            std::auto_ptr<int> p_int(source_int());
            std::cout << *p_int;
            // case 3.
            std::auto_ptr<Derived> p_derived(source_derived());
            p_derived->go();
            // case 4.
            sink_base(source_derived());
            return 0;
        }

    In Eclipse (GNU C++, gcc version 3.4.5, mingw-vista special r3) there are two compile errors:

        initializing argument 1 of `void sink_base(std::auto_ptr<Base>)' from result of
        `std::auto_ptr<_Tp>::operator std::auto_ptr<_Tp1>() [with _Tp1 = Base, _Tp = Derived]'
        auto_ptr_ref_research.cpp, line 190

        no matching function for call to `std::auto_ptr<Base>::auto_ptr(std::auto_ptr<Derived>)'
        auto_ptr_ref_research.cpp, line 190

    But it is accepted by VS2010 RTM. Questions: which compiler follows the ISO C++ standard here? And is case 4 the problem that auto_ptr and auto_ptr_ref are meant to solve?

    Read the article

  • ForEach loop in Mathematica.

    - by dreeves
    I'd like something like this:

        ForEach[i_, {1,2,3},
          Print[i]
        ]

    Or, more generally, to destructure arbitrary stuff in the list you're looping over, like:

        ForEach[{i_, j_}, {{1,10}, {2,20}, {3,30}},
          Print[i*j]
        ]

    (Meta-question: is that a good way to call a ForEach loop, with the first argument a pattern like that?)

    ADDED: Some answerers have rightly pointed out that usually you want to use Map or other purely functional constructs and eschew a non-functional programming style where you use side effects. I agree! But here's an example where I think this ForEach construct is supremely useful: Say I have a list of options (rules) that pair symbols with expressions, like

        attrVals = {a -> 7, b -> 8, c -> 9}

    Now I want to make a hash table where I do the obvious mapping of those symbols to those numbers. I don't think there's a cleaner way to do that than

        ForEach[a_ -> v_, attrVals, h[a] = v]

    ADDED: I just realized that to do ForEach properly, it should support Break[] and Continue[]. I'm not sure how to implement that. Perhaps it will need to somehow be implemented in terms of For, While, or Do since those are the only loop constructs that support Break[] and Continue[]. If anyone interested in this wants to ask about that as a separate question, please do!
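
    For reference, a minimal sketch of one way such a construct could be defined (an assumption, not an endorsed implementation; it ignores the Break[]/Continue[] issue raised above): hold the pattern and body, then Scan over the list, matching each element against the pattern.

        (* Hypothetical ForEach: the name and approach are illustrative only. *)
        SetAttributes[ForEach, HoldAll];
        ForEach[pat_, lst_, body_] := Scan[Replace[#, pat :> body] &, lst];

        (* usage *)
        ForEach[{i_, j_}, {{1, 10}, {2, 20}, {3, 30}}, Print[i*j]]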

    Read the article

  • C#: creating a database query method

    - by Sinaesthetic
    I'm not sure if I'm deluded, but what I would like to do is create a method that returns the results of a query, so that I can reuse the connection code. As I understand it, a query returns an object, but how do I pass that object back? I want to send the query into the method as a string argument and have it return the results so that I can use them. Here's what I have, which was a stab in the dark; it obviously doesn't work. This example is me trying to populate a listbox with the results of a query; the sheet name is Employees and the field/column is name. The error I get is "Complex DataBinding accepts as a data source either an IList or an IListSource." Any ideas?

        public Form1()
        {
            InitializeComponent();
            openFileDialog1.ShowDialog();
            openedFile = openFileDialog1.FileName;
            lbxEmployeeNames.DataSource = Query("Select [name] FROM [Employees$]");
        }

        public object Query(string sql)
        {
            System.Data.OleDb.OleDbConnection MyConnection;
            System.Data.OleDb.OleDbCommand myCommand = new System.Data.OleDb.OleDbCommand();
            string connectionPath;

            // build connection string
            connectionPath = "provider=Microsoft.Jet.OLEDB.4.0;Data Source='" + openedFile + "';Extended Properties=Excel 8.0;";
            MyConnection = new System.Data.OleDb.OleDbConnection(connectionPath);
            MyConnection.Open();
            myCommand.Connection = MyConnection;
            myCommand.CommandText = sql;
            return myCommand.ExecuteNonQuery();
        }
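
    One sketch of how the method could return something bindable (an assumption, not the poster's intended design): ExecuteNonQuery only returns an affected-row count, whereas a DataTable filled by an OleDbDataAdapter is a valid ListBox data source. The QueryTable name is hypothetical; openedFile is the field from the post.

        // Sketch: return a DataTable instead of ExecuteNonQuery()'s row count.
        public System.Data.DataTable QueryTable(string sql)
        {
            string connectionPath = "provider=Microsoft.Jet.OLEDB.4.0;Data Source='" + openedFile + "';Extended Properties=Excel 8.0;";
            using (var connection = new System.Data.OleDb.OleDbConnection(connectionPath))
            using (var adapter = new System.Data.OleDb.OleDbDataAdapter(sql, connection))
            {
                var table = new System.Data.DataTable();
                adapter.Fill(table); // Fill opens and closes the connection as needed
                return table;
            }
        }

        // usage:
        // lbxEmployeeNames.DataSource = QueryTable("SELECT [name] FROM [Employees$]");
        // lbxEmployeeNames.DisplayMember = "name";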

    Read the article

  • Generic Singleton Facade design pattern

    - by Paul
    Hi, I'm trying to write a singleton facade pattern with generics. My problem is: how can I call a method on a generic variable? Something like this:

        T1 t1 = new T1();
        // call a method on t1
        t1.Method();

    In the method SingletonFasadeMethod I get a compile error:

        Error 1 'T1' does not contain a definition for 'Method' and no extension method 'Method'
        accepting a first argument of type 'T1' could be found (are you missing a using directive
        or an assembly reference?)

    Any advice? Thanks, I am a beginner in C#. All the code is here:

        namespace GenericSingletonFasade
        {
            public interface IMyInterface { string Method(); }

            internal class ClassA : IMyInterface { public string Method() { return " Calling MethodA "; } }
            internal class ClassB : IMyInterface { public string Method() { return " Calling MethodB "; } }
            internal class ClassC : IMyInterface { public string Method() { return "Calling MethodC"; } }
            internal class ClassD : IMyInterface { public string Method() { return "Calling MethodD"; } }

            public class SingletonFasade<T1,T2,T3>
                where T1 : class, new()
                where T2 : class, new()
                where T3 : class, new()
            {
                private static T1 t1;
                private static T2 t2;
                private static T3 t3;

                private SingletonFasade()
                {
                    t1 = new T1();
                    t2 = new T2();
                    t3 = new T3();
                }

                class SingletonCreator
                {
                    static SingletonCreator() { }
                    internal static readonly SingletonFasade<T1,T2,T3> uniqueInstace =
                        new SingletonFasade<T1,T2,T3>();
                }

                public static SingletonFasade<T1,T2,T3> UniqueInstace
                {
                    get { return SingletonCreator.uniqueInstace; }
                }

                public string SingletonFasadeMethod()
                {
                    // Problem is here
                    return t1.Method() + t2.Method() + t3.Method();
                }
            }
        }
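
    A minimal sketch of one way to make this compile (an assumption, not the only design): constrain the type parameters to IMyInterface so the compiler knows every T has a Method(). The class and member names below are illustrative.

        public interface IMyInterface { string Method(); }

        public class FasadeSketch<T1, T2, T3>
            where T1 : class, IMyInterface, new()
            where T2 : class, IMyInterface, new()
            where T3 : class, IMyInterface, new()
        {
            private readonly T1 t1 = new T1();
            private readonly T2 t2 = new T2();
            private readonly T3 t3 = new T3();

            public string CallAll()
            {
                // With the IMyInterface constraint, Method() resolves on T1/T2/T3.
                return t1.Method() + t2.Method() + t3.Method();
            }
        }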

    Read the article

  • Identity alternative for SQL Azure Federation : are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    Like many developers, I'm looking for a way to integrate my existing app with SQL Azure Federations, and replacing the Identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the debate about GUIDs or not, it's not my question: I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id, which is "farm-ready".

    I've read the SO post "ID Generation for Sharded Database (Azure Federated Database)" and "Synchronizing Multiple Nodes in Windows Azure" from MSDN Magazine, but that solution sounds a bit complicated to me. I'm thinking about creating (automatically) one Azure queue for each SQL table, which contains a pre-loaded list of consecutive integers. When I want an Id value, I just have to get a message from the queue (which becomes invisible and is deleted on the way), which gives me the next available Id. About the choice between "Windows Azure Queues" and "Windows Azure Service Bus Queues", I prefer "Windows Azure Queues", due to the "high" latency of Service Bus Queues. I don't think that the lack of an ordering guarantee in Azure Queues is a problem.

    What do you think about that idea of using Azure Queues to provide Id values? Do you see any argument to give up that idea? Do you have a better idea, or even a good practice, for providing integer ids in SQL Azure Federation databases? Thanks.

    Read the article

  • ASP.NET DropDownList control doesn't postback correctly inside of UserControl

    - by RichardAZ
    I have a situation where a DropDownList control is not posting back correctly. The AutoPostBack property is set to true, so the postback does happen, but the SelectedValue is not set to the correct value. In addition, the OnSelectedIndexChanged event doesn't fire. The exact same code works perfectly fine on an ASPX page, but does not work in an ASCX control. I have tried all the obvious things, I hope, trying to figure this one out, but no luck so far. I have even investigated what comes back in Request.Form["__EVENTTARGET"] and __EVENTARGUMENT. __EVENTTARGET does point to the drop-down list, but the argument is empty. Can the folks of StackOverflow help lead me in the right direction to debug this issue? Of course, it is further complicated by master pages and the usual overcomplication of ASP.NET. Here is the code:

        <div>
            <asp:DropDownList ID="testDrop" runat="server" AutoPostBack="true" EnableViewState="true"
                onselectedindexchanged="testDrop_SelectedIndexChanged">
                <asp:ListItem Value="1" Text="1">1</asp:ListItem>
                <asp:ListItem Value="2" Text="2">2</asp:ListItem>
            </asp:DropDownList>
        </div>

    And here is the generated html:

        <select id="ctl00_MainContent_rptAccordion_ctl00_statControl_testDrop"
                onchange="javascript:setTimeout('__doPostBack(\'ctl00$MainContent$rptAccordion$ctl00$statControl$testDrop\',\'\')', 0)"
                name="ctl00$MainContent$rptAccordion$ctl00$statControl$testDrop">
            <option value="1" selected="selected">1</option>
            <option value="2">2</option>

    THANKS!

    Read the article

  • Why in the world is this Moq + NUnit test failing?

    - by Dave Falkner
    I have this dataAccess mock object and I'm trying to verify that one of its methods is being invoked, and that the argument passed into this method fulfills certain constraints. As best I can tell, this method is indeed being invoked, and with the constraints fulfilled. This line of the test throws a MockException:

        data.Verify(d => d.InsertInvoice(It.Is<Invoice>(i => i.TermPaymentAmount == 0m)), Times.Once());

    However, removing the constraint and accepting any invoice passes the test:

        data.Verify(d => d.InsertInvoice(It.IsAny<Invoice>()), Times.Once());

    I've created a test Windows form that instantiates this test class, runs its .Setup() method, and then calls the method which I am wishing to test. I insert a breakpoint on the line of code where the mock object is failing the test,

        data.InsertInvoice(invoice);

    to actually hover over the invoice, and I can confirm that its .TermPaymentAmount decimal property is indeed zero at the time the method is invoked. Out of desperation, I even added a callback to my dataAccess mock:

        data.Setup(d => d.InsertInvoice(It.IsAny<Invoice>()))
            .Callback((Invoice inv) => System.Windows.Forms.MessageBox.Show(inv.TermPaymentAmount.ToString("G17")));

    And this gives me a message box showing "0". This is really baffling me, and no one else in my shop has been able to figure this out. Any help would be appreciated.

    A barely related question, which I should probably ask independently, is whether it is preferable to use Mock.Verify(...) as I have here, or to use Mock.Expect(...).Verifiable followed by Mock.VerifyAll() as I have seen other people doing? If the answer is situational, which situations would warrant the use of one over the other?

    Read the article

  • Does ReleaseStringUTF do more than free memory?

    - by Bayou Bob
    Consider the following C code segments.

    Segment 1:

        char * getSomeString(JNIEnv *env, jstring jstr)
        {
            char * retString;
            retString = (*env)->GetStringUTFChars(env, jstr, NULL);
            return retString;
        }

        void useSomeString(JNIEnv *env, jobject jobj, char *mName)
        {
            jclass cl = (*env)->GetObjectClass(env, jobj);
            jmethodID mId = (*env)->GetMethodID(env, cl, mName, "()Ljava/lang/String;");
            jstring jstr = (*env)->CallObjectMethod(env, jobj, mId, NULL);
            char * myString = getSomeString(env, jstr);
            /* ... use myString without modifying it */
            free(myString);
        }

    Because myString is freed in useSomeString, I do not think I am creating a memory leak; however, I am not sure. The JNI spec specifically requires the use of ReleaseStringUTFChars. Since I am getting a C-style 'char *' pointer from GetStringUTFChars, I believe the memory reference exists on the C stack and not in the Java heap, so it is not in danger of being garbage collected; however, I am not sure. I know that changing getSomeString as follows would be safer (and probably preferable).

    Segment 2:

        char * getSomeString(JNIEnv *env, jstring jstr)
        {
            char * retString;
            char * intermedString;
            intermedString = (*env)->GetStringUTFChars(env, jstr, NULL);
            retString = strdup(intermedString);
            (*env)->ReleaseStringUTFChars(env, jstr, intermedString);
            return retString;
        }

    Because of our 'process' I need to build an argument on why getSomeString in Segment 2 is preferable to Segment 1. Is anyone aware of any documentation or references which detail the behavior of GetStringUTFChars and ReleaseStringUTFChars in relation to where memory is allocated or what (if any) additional bookkeeping is done (i.e. a local reference pointer to the Java heap being created, etc.)? What are the specific consequences of ignoring that bookkeeping? Thanks in advance.

    Read the article

  • The sign of zero with float2

    - by JackOLantern
    Consider the following code performing operations on complex numbers with C/C++'s float:

        float real_part = log(3.f);
        float imag_part = 0.f;
        float real_part2 = (imag_part)*(imag_part)-(real_part*real_part);
        float imag_part2 = (imag_part)*(real_part)+(real_part*imag_part);

    The result will be

        real_part2 = -1.20695    imag_part2 = 0    angle = 3.14159

    where angle is the phase of the complex number and, in this case, is pi. Now consider the following code:

        float real_part = log(3.f);
        float imag_part = 0.f;
        float real_part2 = (-imag_part)*(-imag_part)-(real_part)*(real_part);
        float imag_part2 = (-imag_part)*(real_part)+(real_part)*(-imag_part);

    The result will be

        real_part2 = -1.20695    imag_part2 = 0    angle = -3.14159

    The imaginary part of the result is -0, which makes the phase of the result -pi. Although this is still consistent with the principal argument of a complex number and with the signedness of floating-point 0, the change is a problem when one is defining functions of complex numbers. For example, if one defines sqrt of a complex number by de Moivre's formula, this will change the sign of the imaginary part of the result to a wrong value. How should one deal with this effect?
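
    A small sketch illustrating the effect and one possible (assumed, not the only) normalization: under the default round-to-nearest mode, adding +0.f to a negative zero yields +0.f, so the phase of a negative real number always comes out as +pi.

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            float re = -1.20695f;
            /* atan2f keeps the sign of a zero imaginary part: */
            printf("%f\n", atan2f(+0.f, re));  /* prints  3.141593 */
            printf("%f\n", atan2f(-0.f, re));  /* prints -3.141593 */

            /* Possible normalization before taking the phase:
               -0.f + 0.f == +0.f in round-to-nearest. */
            float im = -0.f;
            printf("%f\n", atan2f(im + 0.f, re));  /* prints 3.141593 */
            return 0;
        }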

    Read the article

  • Strange type-related error

    - by vsb
    I wrote the following program:

        isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt x)]]
        primes = filter isPrime [1 .. ]

    It should construct the list of prime numbers, but I got this error:

        [1 of 1] Compiling Main             ( 7/main.hs, interpreted )

        7/main.hs:3:16:
            Ambiguous type variable `a' in the constraints:
              `Floating a' arising from a use of `isPrime' at 7/main.hs:3:16-22
              `RealFrac a' arising from a use of `isPrime' at 7/main.hs:3:16-22
              `Integral a' arising from a use of `isPrime' at 7/main.hs:3:16-22
            Possible cause: the monomorphism restriction applied to the following:
              primes :: [a] (bound at 7/main.hs:3:0)
            Probable fix: give these definition(s) an explicit type signature
                          or use -XNoMonomorphismRestriction
        Failed, modules loaded: none.

    If I specify the signature of the isPrime function explicitly:

        isPrime :: Integer -> Bool
        isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt x)]]

    I can't even compile the isPrime function:

        [1 of 1] Compiling Main             ( 7/main.hs, interpreted )

        7/main.hs:2:45:
            No instance for (RealFrac Integer)
              arising from a use of `truncate' at 7/main.hs:2:45-61
            Possible fix: add an instance declaration for (RealFrac Integer)
            In the expression: truncate (sqrt x)
            In the expression: [2 .. truncate (sqrt x)]
            In a stmt of a list comprehension: i <- [2 .. truncate (sqrt x)]

        7/main.hs:2:55:
            No instance for (Floating Integer)
              arising from a use of `sqrt' at 7/main.hs:2:55-60
            Possible fix: add an instance declaration for (Floating Integer)
            In the first argument of `truncate', namely `(sqrt x)'
            In the expression: truncate (sqrt x)
            In the expression: [2 .. truncate (sqrt x)]
        Failed, modules loaded: none.

    Can you help me understand why I am getting these errors?
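
    A minimal sketch of one common way to reconcile the constraints (an assumption about the intended fix, not the only one): keep x Integral for `mod` and convert it only where sqrt needs a Floating argument.

        -- fromIntegral bridges the Integral x and the Floating argument of sqrt.
        isPrime :: Integer -> Bool
        isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt (fromIntegral x :: Double))]]

        primes :: [Integer]
        primes = filter isPrime [1 ..]

        main :: IO ()
        main = print (take 10 primes)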

    Read the article

  • Return multiple IDs from a function and use the result in a query

    - by NewK
    I have this function that returns all children of a tree node:

        CREATE OR REPLACE FUNCTION fn_category_get_childs_v2(id_pai integer)
          RETURNS integer[] AS
        $BODY$
        DECLARE
            ids_filhos integer array;
        BEGIN
            SELECT array (
                SELECT category_id FROM category WHERE category_id IN (
                    (WITH RECURSIVE parent AS
                    (
                        SELECT category_id, parent_id FROM category WHERE category_id = id_pai
                        UNION ALL
                        SELECT t.category_id, t.parent_id FROM parent
                        INNER JOIN category t ON parent.category_id = t.parent_id
                    )
                    SELECT category_id FROM parent WHERE category_id <> id_pai)
                )
            ) INTO ids_filhos;
            RETURN ids_filhos;
        END;

    and I would like to use it in a select statement like this:

        select * from teste1_elements
         where category_id in (select * from fn_category_get_childs_v2(12))

    I've also tried this way, with the same result:

        select * from teste1_elements
         where category_id = any(select * from fn_category_get_childs_v2(12))

    But I get the following error:

        ERROR: operator does not exist: integer = integer[]
        LINE 1: select * from teste1_elements where category_id in (select *...
                                                                 ^
        HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.

    The function returns an integer array; is that the problem? SELECT * from fn_category_get_childs_v2(12) retrieves the following array (integer[]):

        '{30,32,34,20,19,18,17,16,15,14}'
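
    A sketch of two ways the query could be written so the comparison is against the array's elements rather than the array value itself (assuming PostgreSQL 8.4+ for unnest):

        -- expand the returned integer[] into rows ...
        select * from teste1_elements
         where category_id in (select unnest(fn_category_get_childs_v2(12)));

        -- ... or apply = ANY directly to the array value (no sub-select around it)
        select * from teste1_elements
         where category_id = any (fn_category_get_childs_v2(12));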

    Read the article

  • MySQL connection attempt works fine in 5.2.9 but not in 5.3.0 - Help?

    - by Rich
    Hi, I'm having trouble making a secondary MySQL connection (to a separate, external DB) in my code. It works fine in PHP 5.2.9 but fails to connect in PHP 5.3.0. I'm aware of (at least some of) the changes needed to make successful MySQL connections in the newer version of PHP, and have succeeded before, so I'm not sure why it isn't working this time.

    I already have a connection open to a local database. The function below is then used to make an additional connection to a separate, remote database. The included config file simply contains the external database details (host, user, pass and name). I have checked, and it is being included correctly.

        function connectDP() {
            global $dpConnection;

            include("secondary_db_config.php");

            $dpConnection = mysql_connect($dp_dbHost, $dp_dbUser, $dp_dbPass, true)
                or DIE("ERROR: Unable to connect to Deployment Platform");
            mysql_select_db($dp_dbName, $dpConnection)
                or DIE("ERROR 006: Unable to select Deployment Platform Database");
        }

    I then attempt to make this new connection simply by calling this function externally:

        connectDP();

    But when loading the page (in 5.3.0), I get the message:

        ERROR: Unable to connect to Deployment Platform

    I'm using the optional new_link boolean flag as the fourth argument of mysql_connect() and it's still not working. I've been wracking my brain this morning trying to figure out why this connection doesn't work (while I've done something very similar elsewhere to a separate second database that does work). Any help would be appreciated. Thanks! Rich

    Read the article

  • Porting VB6 app to VB.Net: Can anyone ballpark how much effort this is?

    - by Robusto
    In 2002 I did a pretty large VB6 app for a client. It used a lot of UserControls and a 3rd party menu control (for putting icons next to menu names). It had dynamically "splittable" panels, TreeViews with multi-state checkboxes, etc. A very rich UI. My total time on the project was about 500 hours, which the client graciously let me spread over a whole month. (Yeah, it was that kind of job.) They were very happy, though, and they paid the bill on time with no argument.

    So after having no contact with them for years, they suddenly call and wonder if I can update the app to .Net for them. My initial reaction is just to decline, since I don't use VB.Net. And having read a bunch of posts on SO about the difficulties of porting, etc., etc., I'm even more inclined to decline, so to speak. Still, before I tell them no I am interested in roughly quantifying the effort it would take. I would love to hear from anyone who has done this kind of thing and has a feel for how much work it is. Was it:

    - Significantly less than the effort you used on the original?
    - Somewhat less than the effort you used on the original?
    - The same as the effort you used on the original?
    - More?
    - A lot more?

    Please only respond if you have actually done this kind of port. And the answer doesn't have to be exact, since I really am only trying to ballpark this. My feeling is that the effort will be at least as much as it took for the original, if not more. But I could be wrong. Thanks for any help.

    Read the article

  • Adding and sorting a linked list in C

    - by user1202963
    In my assignment, I have to write a function that takes as arguments a pointer to an "LNode" structure and an integer. Then, I have to not only add that integer to the linked list, but also place it so that the list is in proper ascending order. I've tried several attempts at this, and this is my code as of posting.

        LNode* AddItem(LNode *headPtr, int newItem)
        {
            auto LNode *ptr = headPtr;
            ptr = malloc(sizeof(LNode));

            if (headPtr == NULL)
            {
                ptr->value = newItem;
                ptr->next = headPtr;
                return ptr;
            }
            else
            {
                while (headPtr->value > newItem || ptr->next != NULL)
                {
                    printf("While\n"); // This is simply to let me know how many times the loop runs
                    headPtr = headPtr->next;
                }
                ptr->value = newItem;
                ptr->next = headPtr;
                return ptr;
            }
        } // end of "AddItem"

    When I run it and try to insert, say, a 5 and then a 3, the 5 gets inserted, but then the while loop runs once and I get a segmentation fault. Also, I cannot change the arguments, as they're part of the skeleton code for this project. Thanks to anyone who can help. If it helps, this is what the structure looks like:

        typedef struct LNode
        {
            int value;
            struct LNode *next;
        } LNode;
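
    For reference, a sketch of a sorted insert under the same LNode definition (the AddItemSorted name is illustrative; it is not the assignment's required interface):

        #include <stdlib.h>

        /* Insert newItem so the list stays in ascending order;
           returns the (possibly new) head pointer. */
        LNode* AddItemSorted(LNode *headPtr, int newItem)
        {
            LNode *node = malloc(sizeof(LNode));
            if (node == NULL)
                return headPtr;                /* allocation failed: list unchanged */
            node->value = newItem;

            /* New head when the list is empty or newItem sorts before the head. */
            if (headPtr == NULL || newItem <= headPtr->value) {
                node->next = headPtr;
                return node;
            }

            /* Otherwise walk to the last node whose successor is still < newItem. */
            LNode *prev = headPtr;
            while (prev->next != NULL && prev->next->value < newItem)
                prev = prev->next;

            node->next = prev->next;
            prev->next = node;
            return headPtr;
        }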

    Read the article

  • jQuery: How do I rewrite .after( content, content )?

    - by Evan Carroll
    I've got this form working, but according to my previous question it might not be supported: it isn't in the docs either way -- but the intention is pretty obvious in the code.

        $(".section.warranty .warranty_checks :last").after(
            $('<div class="little check" />').click( function () { alert('hi') } )
          , $('<span>OEM</span>') /* Notice this (a second) argument */
        );

    What this does is insert <div class="little check"> with a simple .click() callback, followed by a sibling of <span>OEM</span>. How else can I write this then? I'm having difficulty conjuring something working by chaining any combination of .after() and .insertAfter(). I would expect this to work, but it doesn't:

        $(".section.warranty .warranty_checks :last").after(
            $('<div class="little check" />').click( function () { alert('hi') } ).after (
                $('<span>OEM</span>')
            )
        );

    I would also expect this to work, but it doesn't:

        $(".section.warranty .warranty_checks :last").after(
            $('<span>OEM</span>').insertAfter(
                $('<div class="little check" />').click( function () { alert('hi') } )
            );
        );
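
    One sketch of an alternative that avoids the two-argument form (an assumption, not a statement of the documented API): build both elements, merge them into a single jQuery set with .add(), and pass that one set to .after().

        // Both elements are created detached, combined, then inserted together.
        var check = $('<div class="little check" />').click(function () { alert('hi'); });
        var label = $('<span>OEM</span>');
        $(".section.warranty .warranty_checks :last").after(check.add(label));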

    Read the article

  • create CLLocationCoordinate2D from array

    - by shani
    I have a plist with a dictionary of arrays of coordinates (stored as strings). I want to create a CLLocationCoordinate2D from every array and create an overlay for the map. I did this:

        NSString *thePath = [[NSBundle mainBundle] pathForResource:@"Roots" ofType:@"plist"];
        NSDictionary *pointsDic = [[NSDictionary alloc] initWithContentsOfFile:thePath];
        NSArray *pointsArray = [NSArray arrayWithArray:[pointsDic objectForKey:@"roade1"]];

        CLLocationCoordinate2D pointsToUse[256];

        for(int i = 0; i < 256; i++) {
            CGPoint p = CGPointFromString([pointsArray objectAtIndex:i]);
            pointsToUse[i] = CLLocationCoordinate2DMake(p.x, p.y);
            NSLog(@"coord %f", pointsToUse[i].longitude);
            NSLog(@"coord %f", pointsToUse[i].latitude);
        }

        MKPolyline *myPolyline = [MKPolyline polylineWithCoordinates:pointsToUse count:256];
        [[self mv] addOverlay:myPolyline];

    but the app is crashing without any error. (BTW, when I remove the addOverlay call the app does not crash.) I have two questions:

    1. What am I doing wrong?
    2. I have tried to set the pointsArray count as the size of the CLLocationCoordinate2D array, like this:

        CLLocationCoordinate2D pointsToUse[[pointsArray count]];

    and I am getting an error. How can I size the CLLocationCoordinate2D array dynamically?

    Thanks for any help. Shani
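
    A sketch of one way to size the buffer from the plist at run time (an assumption about intent; mv and pointsArray are taken from the post): allocate the coordinate buffer on the heap. MKPolyline copies the points, so the buffer can be freed afterwards.

        NSUInteger count = [pointsArray count];
        CLLocationCoordinate2D *coords = calloc(count, sizeof(CLLocationCoordinate2D));
        for (NSUInteger i = 0; i < count; i++) {
            CGPoint p = CGPointFromString([pointsArray objectAtIndex:i]);
            coords[i] = CLLocationCoordinate2DMake(p.x, p.y);
        }
        MKPolyline *line = [MKPolyline polylineWithCoordinates:coords count:count];
        free(coords);   // safe: the polyline keeps its own copy of the points
        [[self mv] addOverlay:line];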

    Read the article

  • trouble with state monad composition

    - by user1308560
    I was trying out the example given at http://www.haskell.org/haskellwiki/State_Monad#Complete_and_Concrete_Example_1

    How this makes the solution composable is beyond my understanding. Here is what I tried, but I get compile errors as follows:

        Couldn't match expected type `GameValue
                                      -> StateT GameState Data.Functor.Identity.Identity b0'
                    with actual type `State GameState GameValue'
        In the second argument of `(>>=)', namely `g2'
        In the expression: g1 >>= g2
        In an equation for `g3': g3 = g1 >>= g2
        Failed, modules loaded: none.

    Here is the code (see the g1/g2/g3 lines):

        module StateGame where

        import Control.Monad.State

        type GameValue = Int
        type GameState = (Bool, Int)

        -- suppose I want to play one game after the other
        g1 = playGame "abcaaacbbcabbab"
        g2 = playGame "abcaaacbbcabb"
        g3 = g1 >>= g2

        m2 = print $ evalState g3 startState

        playGame :: String -> State GameState GameValue
        playGame [] = do
            (_, score) <- get
            return score

        playGame (x:xs) = do
            (on, score) <- get
            case x of
                 'a' | on -> put (on, score + 1)
                 'b' | on -> put (on, score - 1)
                 'c'      -> put (not on, score)
                 _        -> put (on, score)
            playGame xs

        startState = (False, 0)

        main str = print $ evalState (playGame str) startState
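
    A minimal sketch of the likely fix (an assumption based on the error: (>>=) expects a function on its right, while g2 is already a State action). Sequence the two games instead, reusing the definitions above:

        -- run g1 for its state effect and keep g2's result ...
        g3 :: State GameState GameValue
        g3 = g1 >> g2

        -- ... or keep both results explicitly
        g3' :: State GameState (GameValue, GameValue)
        g3' = do v1 <- g1
                 v2 <- g2
                 return (v1, v2)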

    Read the article

  • Do you use the cParseCSV source on iPhone?

    - by Beomseok
    link! I'm using cParseCSV. I want to read the text as NSUTF8StringEncoding, but it does not work:

        2010-04-05 08:53:16.616 test[12086:207] Fine thanks.
        2010-04-05 08:53:16.617 test[12086:207] ????, ????.
        2010-04-05 08:53:16.617 test[12086:207] Thank you.
        2010-04-05 08:53:16.618 test[12086:207] ?????.
        2010-04-05 08:53:16.618 test[12086:207] I'm all right.
        2010-04-05 08:53:16.619 test[12086:207] ?? ????.
        2010-04-05 08:53:16.620 test[12086:207] Oh, pretty good.
        2010-04-05 08:53:16.620 test[12086:207] ?, ?? ????
        2010-04-05 08:53:16.621 test[12086:207] Alive and kicking.
        2010-04-05 08:53:16.621 test[12086:207] ???????.
        2010-04-05 08:53:16.622 test[12086:207] So so.
        2010-04-05 08:53:16.623 test[12086:207] ?? ???.
        2010-04-05 08:53:16.623 test[12086:207] Not too bad.
        2010-04-05 08:53:16.624 test[12086:207] (null)
        2010-04-05 08:53:16.625 test[12086:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSPlaceholderString initWithString:]: nil argument'

    I don't know why this happens... Please advise.

    Read the article

  • Coding the R-ight way - avoiding the for loop

    - by mropa
    I am going through one of my .R files and, by cleaning it up a little bit, I am trying to get more familiar with writing the code the r-ight way. As a beginner, one of my favorite starting points is to get rid of the for() loops and try to transform the expression into a functional programming form.

    So here is the scenario: I am assembling a bunch of data.frames into a list for later usage.

        dataList <- list(dataA, dataB, dataC, dataD, dataE)

    Now I'd like to take a look at each data.frame's column names and substitute certain character strings, e.g. substitute each "foo" and "bar" with "baz".

        colnames(dataList[[1]])
        [1] "foo"  "code" "lp15" "bar"  "lh15"
        colnames(dataList[[2]])
        [1] "a"    "code" "lp50" "ls50" "foo"

    At the moment I am getting the job done with a for() loop, which looks a bit awkward:

        matchVec <- c("foo", "bar")

        for (i in seq(dataList)) {
          for (j in seq(matchVec)) {
            colnames(dataList[[i]])[grep(pattern=matchVec[j], x=colnames(dataList[[i]]))] <- c("baz")
          }
        }

    Since I am working here with a list, I thought about the lapply function. My attempts at handling the job with lapply all seem to look alright, but only at first sight. If I write

        f <- function(i, xList) {
          gsub(pattern=c("foo"), replacement=c("baz"), x=colnames(xList[[i]]))
        }
        lapply(seq(dataList), f, xList=dataList)

    the last line prints out almost what I am looking for. However, if I take another look at the actual names of the data.frames in dataList:

        lapply(dataList, colnames)

    I see that no changes have been made to the initial character strings. So how can I rewrite the for() loop and transform it into a functional programming form? And how do I substitute both strings, "foo" and "bar", in an efficient way, given that the gsub() function takes only a character vector of length one as its pattern argument?
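
    A sketch of one possible rewrite (an assumption, not the only idiom): lapply over the data.frames themselves, return each modified frame, and assign the result back; an alternation pattern handles "foo" and "bar" in a single gsub() call.

        # Rename matching columns in every data.frame of the list.
        dataList <- lapply(dataList, function(df) {
          colnames(df) <- gsub("foo|bar", "baz", colnames(df))
          df
        })

        lapply(dataList, colnames)  # the changed names are now visible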

    Read the article

  • How to find sum of node's value for given depth in binary tree?

    - by masato-san
    I've been scratching my head for several hours over this problem. Given the binary tree

            (0)           depth 0
           /   \
         10     20        depth 1
        /  \   /  \
      30   40 50   60     depth 2

    I am trying to write a function that takes a depth as an argument and returns the sum of the values of the nodes at that depth. For instance, if I pass 2, it should return 180 (i.e. 30+40+50+60). I decided to use breadth-first search and, when I find a node with the desired depth, add its value to the sum, but I just can't figure out how to determine which depth a node is at. With this approach I feel like I'm going in totally the wrong direction.

        function level_order($root, $targetDepth) {
            $q = new Queue();
            $q->enqueue($root);
            while(!$q->isEmpty) {
                //how to determine the depth of the node???
                $node = $q->dequeue();
                if($currentDepth == $targetDepth) {
                    $sum = $node->value;
                }
                if($node->left != null) {
                    $q->enqueue($node->left);
                }
                if($node->right != null) {
                    $q->enqueue($node->right);
                }
                //need to reset this somehow
                $currentDepth ++;
            }
        }
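
    A sketch of one way to track depth (an assumption about the structure: nodes expose ->value, ->left, ->right): process the tree one whole level at a time, so the loop counter is the depth and no per-node bookkeeping is needed.

        function sum_at_depth($root, $targetDepth) {
            if ($root === null) return 0;
            $level = array($root);
            // Descend targetDepth times; after the loop, $level holds that depth's nodes.
            for ($depth = 0; $depth < $targetDepth && count($level) > 0; $depth++) {
                $next = array();
                foreach ($level as $node) {
                    if ($node->left  !== null) $next[] = $node->left;
                    if ($node->right !== null) $next[] = $node->right;
                }
                $level = $next;
            }
            $sum = 0;
            foreach ($level as $node) {
                $sum += $node->value;
            }
            return $sum;
        }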

    Read the article

  • Imbricated C++ template

    - by gregseth
    I have the following pattern:

        template <int a, int b>
        class MyClass
        {
            template <int c>
            MyClass<a, c> &operator*(MyClass<c, b> const &other) const;
        };

        // ../..

        template <int a, int b>
        template <int c>
        MyClass<a, c> &MyClass<a, b>::operator*(MyClass<c, b> const &other) const
        {
            MyClass<a, c> result;
            // ..do stuff..
            return result;
        }

    It doesn't compile; the error message is

        Error C2975: invalid template argument 'number', constant expression expected.

    If I replace template <int c> by template <int c, int d> and use it accordingly, it works fine. But I want d to be the same value as b. My questions:

    1. Why doesn't the example work?
    2. How can I enforce that d is the same as b?

    Thanks.

    Read the article

  • OcaIDE doesn't see JoCaml tools

    - by Surikator
    I'm having a problem while using OcaIDE in ocamlbuild mode. I'm trying to compile my own JoCaml sources. According to the JoCaml manual (bottom of page), to use ocamlbuild with JoCaml I just need to add the -use-jocaml argument to ocamlbuild. Indeed, if I go to the root of my project and write

        ocamlbuild -use-jocaml foo.native

    it generates my executable just fine. However, in OcaIDE I get

        /bin/sh: jocamldep: command not found

    In OcaIDE, the -use-jocaml flag is passed in the "Other Flags" box (in Project Properties). And that certainly is working, as the complaint is precisely that it doesn't find the jocaml tools.

    The puzzling thing is that jocaml is installed and can be accessed from any random terminal window. For example, running

        jocamldep -modules foo.ml > foo.ml.depends

    on my project does generate the desired dependency file. So it would seem I have to configure OcaIDE and tell it where the JoCaml executables are, or something. This is done for OCaml, for example, but there is no place to do that for JoCaml. And it's really strange that, if jocamldep/jocamlc/etc. are all accessible from anywhere, OcaIDE wouldn't be able to pick them up. Any ideas?

    (I am aware I can write an ocamlbuild plugin and pass the flag in a "myocamlbuild.ml" file. I'll probably use that at a later stage, after I get familiar with ocamlbuild plugins. But here the question is about OcaIDE. EDIT: Actually, ocamlbuild plugins don't seem to be a solution: although there is a -use-jocaml option in ocamlbuild to enforce jocaml use (and it works fine), the plugin system doesn't support it, i.e. jocaml is not in the list of options.)

    Read the article

  • Why can't I pass a form field of type file to a CFFUNCTION using structure syntax?

    - by Eric Belair
    I'm trying to pass a form field of type "file" to a CFFUNCTION. The argument type is "any". Here is the syntax I am trying to use (pseudocode):

        <cfloop from="1" to="5" index="i">
            <cfset fieldname = "attachment" & i />
            <cfinvoke component="myComponent" method="attachFile">
                <cfinvokeargument name="attachment" value="#FORM[fieldname]#" />
            </cfinvoke>
        </cfloop>

    The loop is being done because there are five form fields named "attachment1", "attachment2", et al. This throws an exception in the function:

        coldfusion.tagext.io.FileTag$FormFileNotFoundException: The form field
        C:\ColdFusion8\...\neotmp25080.tmp did not contain a file.

    However, this syntax DOES work:

        <cfloop from="1" to="5" index="i">
            <cfinvoke component="myComponent" method="attachFile">
                <cfinvokeargument name="attachment" value="FORM.attachment#i#" />
            </cfinvoke>
        </cfloop>

    I don't like writing code like that in the second example; it just seems like bad practice to me. So, can anyone tell me how to use structure syntax to properly pass a file-type form field to a CFFUNCTION?

    Read the article

  • SYN receives RST,ACK very frequently

    - by user1289508
    Hi socket programming experts, I am writing a proxy server on Linux for a SQL database server running on Windows. The proxy is written in C using BSD sockets, and it is working just fine. When I use a database client (written in Java, and running on a Linux box) to fire queries (with a concurrency of 100 or more) directly at the database server, I do not experience connection resets. But through my proxy I experience many connection resets. Digging deeper, I found that the connection from the DB client to the proxy always succeeds, but when the proxy tries to connect to the DB server the connection sometimes fails because the SYN packet gets a RST,ACK.

    That was to give some background. The question is: why does a SYN sometimes receive a RST,ACK?

        DB client (Linux) -> Server (Windows)                    ---- works fine
        DB client (Linux) -> Proxy (Linux) -> Server (Windows)   ---- problematic

    I am aware that this can happen in the "connection refused" case, but this definitely is not that one. SYN flooding might be another scenario, but that does not explain the fine behavior when firing at the server directly. I suspect some socket option may need to be set, something the client does before connecting that my proxy does not. Please shed some light on this. Any help (links or pointers) is most appreciated.

    Additional info: I wrote a C client that makes concurrent connections, taking the concurrency as an argument. Here are my observations:

    - At a concurrency of 5000 and above, some connects failed with 'connection refused'.
    - Below 2000, it works fine.
    - But the actual problem is observed even at a concurrency of 100 or more.

    Note: the problem is time-dependent; sometimes it never appears at all and sometimes it is very frequent, while the DB client (directly to the server) works fine at all times.

    Read the article

  • How to solve a weird SWIG Python/C++ interfacing type error

    - by user2981648
    I want to use SWIG to expose a simple C++ function to Python and use the scipy.integrate.quadrature function to calculate the integral, but Python 2.7 reports a type error. Do you know what is going on here? Thanks a lot. Furthermore, scipy.integrate.quad runs smoothly, so is there something special about the quadrature function?

    The code is the following.

    File "testfunctions.h":

        #ifndef TESTFUNCTIONS_H
        #define TESTFUNCTIONS_H
        double test_square(double x);
        #endif

    File "testfunctions.cpp":

        #include "testfunctions.h"
        double test_square(double x)
        {
            return x * x;
        }

    File "swig_test.i":

        /* File : swig_test.i */
        %module swig_test
        %{
        #include "testfunctions.h"
        %}
        /* Let's just grab the original header file here */
        %include "testfunctions.h"

    File "test.py":

        import scipy.integrate
        import _swig_test

        print scipy.integrate.quadrature(_swig_test.test_square, 0., 1.)

    Error info:

        UMD has deleted: _swig_test
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 523, in runfile
            execfile(filename, namespace)
          File "D:\data\haitaliu\Desktop\Projects\swig_test\Release\test.py", line 4, in <module>
            print scipy.integrate.quadrature(_swig_test.test_square, 0., 1.)
          File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 161, in quadrature
            newval = fixed_quad(vfunc, a, b, (), n)[0]
          File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 61, in fixed_quad
            return (b-a)/2.0*sum(w*func(y,*args),0), None
          File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 90, in vfunc
            return func(x, *args)
        TypeError: in method 'test_square', argument 1 of type 'double'
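
    A sketch of two workarounds, assuming the cause suggested by the traceback: quadrature() hands the integrand a whole NumPy array (quad() passes scalars), while the SWIG wrapper only accepts a single C double.

        import numpy as np
        import scipy.integrate
        import _swig_test

        # Option 1: tell quadrature the integrand is scalar-only.
        print scipy.integrate.quadrature(_swig_test.test_square, 0., 1., vec_func=False)

        # Option 2: vectorize the SWIG wrapper so it accepts arrays.
        vec_square = np.vectorize(_swig_test.test_square)
        print scipy.integrate.quadrature(vec_square, 0., 1.)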

    Read the article
