Search Results

Search found 4232 results on 170 pages for 'curious bob'.


  • VBA: Difference in two ways of declaring a new object? (Trying to understand why my solution works)

    - by Matt
    I was creating a new object within a loop and adding that object to a collection, but when I read the collection back afterwards, it was always filled entirely with the last object I had added. I've come up with two ways around this, but I simply do not understand why my initial implementation was wrong. Original:

        Dim oItem As Variant
        Dim sOutput As String
        Dim i As Integer
        Dim oCollection As New Collection

        For i = 0 To 10
            Dim oMatch As New clsMatch
            oMatch.setLineNumber i
            oCollection.Add oMatch
        Next

        For Each oItem In oCollection
            sOutput = sOutput & "[" & oItem.lineNumber & "]"
        Next
        MsgBox sOutput

    This resulted in every lineNumber being 10; I was obviously not creating new objects, but instead using the same one each time through the loop, despite the declaration being inside the loop. So I added Set oMatch = Nothing immediately before the Next line, and this fixed the problem: it was now 0 to 10. So if the old object was explicitly destroyed, then it was willing to create a new one? I would have thought the next iteration through the loop would cause anything declared within the loop to be destroyed due to scope? Curious, I tried another way of declaring a new object: Dim oMatch As clsMatch: Set oMatch = New clsMatch. This, too, results in 0 to 10. Can anyone explain to me why the first implementation was wrong?
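
    What the asker observed is consistent with VBA's auto-instantiation rules: Dim is a compile-time declaration rather than a statement that executes on every pass, and a variable declared As New is only (re-)instantiated when it is touched while holding Nothing. A minimal sketch of the working form, for contrast (clsMatch and setLineNumber come from the question):

        ' Declaration happens once, no matter where the Dim sits.
        Dim oMatch As clsMatch
        For i = 0 To 10
            ' Executable statement: runs every pass, creating a genuinely new object.
            Set oMatch = New clsMatch
            oMatch.setLineNumber i
            oCollection.Add oMatch
        Next

    Setting the variable to Nothing inside the loop "fixed" the original version for the same reason: the next touch of an As New variable that is Nothing triggers a fresh instantiation.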


  • create an independent hidden process

    - by Jessica
    I'm creating an application with its main window hidden by using the following code:

        STARTUPINFO siStartupInfo;
        PROCESS_INFORMATION piProcessInfo;
        memset(&siStartupInfo, 0, sizeof(siStartupInfo));
        memset(&piProcessInfo, 0, sizeof(piProcessInfo));
        siStartupInfo.cb = sizeof(siStartupInfo);
        siStartupInfo.dwFlags = STARTF_USESHOWWINDOW | STARTF_FORCEOFFFEEDBACK | STARTF_USESTDHANDLES;
        siStartupInfo.wShowWindow = SW_HIDE;

        if (CreateProcess(MyApplication, "", 0, 0, FALSE, 0, 0, 0, &siStartupInfo, &piProcessInfo) == FALSE)
        {
            // blah
            return 0;
        }

    Everything works correctly, except that my main application's window (the one calling this code) loses focus when I open the new program. I tried lowering the priority of the new process, but the focus problem is still there. Is there any way to avoid this? Furthermore, is there any way to create another process without using CreateProcess (or any of the APIs that call CreateProcess, like ShellExecute)? My guess is that my app is losing focus because focus was given to the new process, even though it's hidden. To those of you curious out there who will certainly ask the usual "why do you want to do this": because I have a watchdog process that cannot be a service, and it gets started whenever I open my main application. Satisfied? Thanks for the help. Code will be appreciated. Jess.
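
    One detail worth flagging, offered as a guess rather than a verified fix: the snippet sets STARTF_USESTDHANDLES without filling in the handle fields it governs. A hedged variation that drops that flag, asks for no window at all, and adds the usual handle cleanup:

        STARTUPINFO si = { sizeof(si) };
        PROCESS_INFORMATION pi = { 0 };
        si.dwFlags = STARTF_USESHOWWINDOW | STARTF_FORCEOFFFEEDBACK;
        si.wShowWindow = SW_HIDE;

        // CREATE_NO_WINDOW suppresses the console window for console children;
        // GUI children still decide their own window visibility via wShowWindow.
        if (!CreateProcess(MyApplication, NULL, NULL, NULL, FALSE,
                           CREATE_NO_WINDOW, NULL, NULL, &si, &pi))
        {
            return 0;
        }
        CloseHandle(pi.hThread);    // avoid leaking the handles
        CloseHandle(pi.hProcess);   // CreateProcess hands back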


  • Socket.Receive Failing When Multithreaded

    - by Qua
    The following piece of code runs fine when parallelized to 4-5 threads, but starts to fail as the number of threads increases somewhere beyond 10 concurrent threads:

        int totalRecieved = 0;
        int recieved;
        StringBuilder contentSB = new StringBuilder(4000);
        while ((recieved = socket.Receive(buffer, SocketFlags.None)) > 0)
        {
            contentSB.Append(Encoding.ASCII.GetString(buffer, 0, recieved));
            totalRecieved += recieved;
        }

    The Receive method returns with zero bytes read, and if I continue calling Receive then I eventually get an 'An established connection was aborted by the software in your host machine' exception. So I'm assuming that the host actually sent data and then closed the connection, but for some reason I never received it. I'm curious as to why this problem arises when there are a lot of threads. I'm thinking it must have something to do with the fact that each thread doesn't get as much execution time, so there is some idle time for the threads, which causes this error. I just can't figure out why idle time would cause the socket not to receive any data.
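
    For context (general Winsock/.NET behaviour, not something stated in the thread): Receive returning 0 is the peer's graceful close, so a return of 0 ends the conversation, and calling Receive again on the dead connection is what surfaces as the aborted-connection exception. A minimal sketch of a loop that treats 0 as end-of-stream (names borrowed from the snippet above):

        int received;
        var contentSB = new StringBuilder(4000);
        while ((received = socket.Receive(buffer, SocketFlags.None)) > 0)
        {
            contentSB.Append(Encoding.ASCII.GetString(buffer, 0, received));
        }
        // received == 0: the peer closed its half of the connection.
        // Stop reading here rather than calling Receive again.
        socket.Shutdown(SocketShutdown.Send);
        socket.Close();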


  • Extending the .NET type system so the compiler enforces semantic meaning of primitive values in cert

    - by Drew Noakes
    I'm working with geometry a bit at the moment and am converting a lot between degrees and radians. Unfortunately, both of these are represented by double, so there's no compile-time warning/error if I try to pass a value in degrees where radians are expected. I believe F# has a compile-time solution for this (called units of measure). I'd like to do something similar in C#. As another example, imagine a SQL library that accepts various query parameters as strings. It'd be good to have a way of enforcing that only clean strings were allowed to be passed in at runtime, and the only way to get a clean string was to pass through some SQL-injection-prevention logic. The obvious solution is to wrap the double/string/whatever in a new type to give it the type information the compiler needs. I'm curious if anyone has an alternative solution. If you do think wrapping is the only/best way, then please go into some of the downsides of the pattern (and any upsides I haven't mentioned too). I'm especially concerned about the performance impact of abstracted primitive numeric types on my calculations at runtime.
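
    A hedged sketch of the wrapping pattern under discussion, with the type names invented here; using a struct keeps the value unboxed, which is also the usual answer to the performance worry:

        using System;

        public struct Radians
        {
            public readonly double Value;
            public Radians(double value) { Value = value; }
        }

        public struct Degrees
        {
            public readonly double Value;
            public Degrees(double value) { Value = value; }

            // Conversion is explicit, so mixing the two units cannot compile silently.
            public static explicit operator Radians(Degrees d)
            {
                return new Radians(d.Value * Math.PI / 180.0);
            }
        }

    A call site then reads Radians r = (Radians)new Degrees(90.0); and passing a Degrees where a Radians is expected is a compile error.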


  • Moving from a non-clustered PK to a clustered PK in SQL 2005

    - by adaptr
    Hi all, I recently asked this question in another thread, and thought I would reproduce it here with my solution: what if I have an auto-increment INT as my non-clustered primary key, and there are about 15 foreign keys defined against it? (snide comment about the original designer being braindead in the original :) ) This is a 15M-row table on a live database, SQL Standard, so dropping indexes is out of the question. Even temporarily dropping the foreign key constraints will be difficult. I'm curious if anybody has a solution that causes minimal downtime. I tested this in our testing environment and finally found that the downtime wasn't as severe as I had originally feared. I ended up writing a script that drops all FK constraints, then drops the non-clustered key, re-creates the PK as a clustered index, and finally re-creates all FKs WITH NOCHECK to avoid trawling through all FKs to check constraint compliance. Then I just enable the CHECK constraints to enforce them from that point onwards, and all is dandy :) The most important thing to realize is that during the time the FKs are absent, there MUST NOT be any INSERTs or DELETEs on the parent table, as this may break the constraints and cause issues in the future. The total time taken for clustering a 15M-row, 800MB index was ~4 minutes :)
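
    A hedged T-SQL sketch of the sequence described, with table, column, and constraint names invented for illustration:

        -- Repeat per referencing table/constraint:
        ALTER TABLE child DROP CONSTRAINT FK_child_parent;

        -- Swap the PK from non-clustered to clustered (rebuilds the table):
        ALTER TABLE parent DROP CONSTRAINT PK_parent;
        ALTER TABLE parent ADD CONSTRAINT PK_parent PRIMARY KEY CLUSTERED (id);

        -- Re-create each FK without validating existing rows:
        ALTER TABLE child WITH NOCHECK
            ADD CONSTRAINT FK_child_parent FOREIGN KEY (parent_id) REFERENCES parent (id);

        -- Enforce the constraint from this point onwards
        -- (existing rows stay unvalidated, as in the poster's approach):
        ALTER TABLE child CHECK CONSTRAINT FK_child_parent;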


  • CGContextDrawImage returning bad access

    - by Marcelo
    Hello guys, I've been trying to blend two UIImages for about 2 days now and I've been getting some BAD_ACCESS errors. First of all, the two images have the same orientation; basically I'm using Core Graphics to do the blending. One curious detail: every time I modify the code, the first time I compile and run it on the device, I get to do everything I want without any sort of trouble. Once I restart the application, I get the error and the program shuts down. Can anyone give me a light? I tried accessing the baseImage sizes dynamically, but that gives me a bad access too. Here's a snippet of how I'm doing the blending:

        UIGraphicsBeginImageContext(CGSizeMake(320, 480));
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 0, 480);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, rect, [baseImage CGImage]);
        CGContextSetBlendMode(context, kCGBlendModeOverlay);
        CGContextDrawImage(context, rect, [tmpImage CGImage]);
        [transformationView setImage:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
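
    A guess at the failure mode rather than a confirmed diagnosis: works-once-then-crashes patterns in pre-ARC code like this are often an over-released image, e.g. baseImage or tmpImage assigned from an autoreleased factory method and never retained, so a later pass draws from a freed CGImage. A sketch of the retained-property variant (property names mirror the snippet; the paths are invented):

        @property (nonatomic, retain) UIImage *baseImage;   // retain keeps the backing
        @property (nonatomic, retain) UIImage *tmpImage;    // CGImage alive between runs

        // ...assign through the setters, not the bare ivars:
        self.baseImage = [UIImage imageWithContentsOfFile:basePath];   // autoreleased source
        self.tmpImage  = [UIImage imageWithContentsOfFile:tmpPath];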


  • What is different about C++ math.h abs() compared to my abs()

    - by moka
    I am currently writing some GLSL-like vector math classes in C++, and I just implemented an abs() function like this:

        template<class T>
        static inline T abs(T _a)
        {
            return _a < 0 ? -_a : _a;
        }

    I compared its speed to the default C++ abs from math.h like this:

        clock_t begin = clock();
        for (int i = 0; i < 10000000; ++i)
        {
            float a = abs(-1.25);
        }
        clock_t end = clock();
        unsigned long time1 = (unsigned long)((float)(end - begin) / ((float)CLOCKS_PER_SEC / 1000.0));

        begin = clock();
        for (int i = 0; i < 10000000; ++i)
        {
            float a = myMath::abs(-1.25);
        }
        end = clock();
        unsigned long time2 = (unsigned long)((float)(end - begin) / ((float)CLOCKS_PER_SEC / 1000.0));

        std::cout << time1 << std::endl;
        std::cout << time2 << std::endl;

    Now the default abs takes about 25ms while mine takes 60. I guess there is some low-level optimisation going on. Does anybody know how math.h abs works internally? The performance difference is nothing dramatic, but I am just curious!
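
    A plausible explanation, offered as a sketch rather than a verified disassembly: the absolute value of an IEEE-754 float can be computed by clearing the sign bit, which is branch-free, and compilers typically emit exactly that for the library's floating-point abs, while the ternary version compares and branches. Something like:

        #include <cmath>
        #include <cstdint>
        #include <cstring>

        static inline double abs_branchless(double a)
        {
            std::uint64_t bits;
            std::memcpy(&bits, &a, sizeof bits);    // well-defined type pun
            bits &= ~(std::uint64_t(1) << 63);      // clear the IEEE-754 sign bit
            std::memcpy(&a, &bits, sizeof a);
            return a;
        }

    Two caveats worth checking in the benchmark itself: with only math.h included, abs(-1.25) can resolve to the integer abs(int) overload on some setups (silently truncating), and a loop whose result is never used is easy prey for constant folding.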


  • Writing a VM - well formed bytecode?

    - by David Titarenco
    Hi, I'm writing a virtual machine in C just for fun. Lame, I know, but luckily I'm on SO, so hopefully no one will make fun :) I wrote a really quick'n'dirty VM that reads lines of (my own) ASM and does stuff. Right now, I only have 3 instructions: add, jmp, end. All is well, and it's actually pretty cool being able to feed it lines, something like:

        write_line(&prog[1], "jmp", regA, regB, 0);

    and then running the program:

        while (machine.code_pointer <= BOUNDS && DONE != true)
        {
            run_line(&prog[machine.cp]);
        }

    I'm using an opcode lookup table (which may not be efficient, but it's elegant) in C, and everything seems to be working OK. My question is more of a "best practices" question, but I do think there's a correct answer to it. I'm making the VM able to read binary files (storing bytes in unsigned char[]) and execute bytecode. My question is: is it the VM's job to make sure the bytecode is well formed, or is it just the compiler's job to make sure the binary file it spits out is well formed? I only ask this because of what would happen if someone edited a binary file and screwed stuff up (deleted arbitrary parts of it, etc.). Clearly, the program would be buggy and probably not functional. Is this even the VM's problem? I'm sure that people much smarter than me have figured out solutions to these problems, I'm just curious what they are!
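
    For what it's worth, the common answer in real VMs (the JVM's bytecode verifier being the textbook case) is a separate validation pass at load time, so the interpreter loop can stay fast and trusting. A hedged C sketch, with structures invented to match the description above:

        #include <stdbool.h>
        #include <stddef.h>

        enum opcode { OP_ADD, OP_JMP, OP_END };

        struct line { int op; int a; int b; int c; };

        /* Reject malformed programs before execution ever starts. */
        static bool verify(const struct line *prog, size_t n)
        {
            for (size_t i = 0; i < n; i++) {
                switch (prog[i].op) {
                case OP_ADD:
                case OP_END:
                    break;
                case OP_JMP:
                    if (prog[i].a < 0 || (size_t)prog[i].a >= n)
                        return false;   /* jump target outside the program */
                    break;
                default:
                    return false;       /* unknown opcode */
                }
            }
            return true;
        }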


  • Correct Delphi compiler switches to stop in the user's code, not my component's

    - by Jeremy Mullin
    I'm modifying our VCL components so the end user's application links to our dcu files, instead of building our source code each time. We have everything working, but I want the debugger to stop on the user's code when an exception is raised. At first it would stop in our dcu and open the CPU window. I was able to prevent that by removing debug info from the dcu files. But now it still doesn't stop in the user's code (like DevExpress libraries and others do). The following screencast is a short example. The first time, I cause an exception in the DevExpress code, and the debugger correctly stops in my button event. The second time, I cause an exception in my components, but the debugger doesn't have my button event on the call stack and doesn't show me where the problem was. Any ideas why? http://screencast.com/t/NjhlOTRk Currently building the DCUs with these options: -$W+ -$D- -h -w -q Update: The TDataSet methods in between my component and the button event seem to cause this behavior. If I instead call a direct method of my table, I get the expected behavior. I'm guessing there isn't anything I can do about this, but I'm still curious why it happens.


  • How to make std::vector's operator[] compile doing bounds checking in DEBUG but not in RELEASE

    - by Edison Gustavo Muenz
    I'm using Visual Studio 2008. I'm aware that std::vector has bounds checking with the at() function, and has undefined behaviour if you try to access something out of range using operator[]. I'm curious if it's possible to compile my program with bounds checking in debug mode. This way operator[] would use the at() function and throw a std::out_of_range whenever something is out of bounds, while release mode would be compiled without bounds checking for operator[], so performance doesn't degrade. I started thinking about this because I'm migrating an app that was written using Borland C++ to Visual Studio, and in a small part of the code I have this (with i=0, j=1):

        v[i][j]; // v is a std::vector<std::vector<int> >

    The size of the vector 'v' is [0][1] (so element 0 of the vector has only one element). This is undefined behaviour, I know, but Borland is returning 0 here while VS is crashing. I like the crash better than returning 0, so if I can get more 'crashes' by the std::out_of_range exception being thrown, the migration will be completed faster (it will expose more bugs that Borland was hiding).
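
    Worth noting as background (true of the VC9 library, though details vary by version): Visual Studio 2008 debug builds already assert on out-of-range operator[] through _HAS_ITERATOR_DEBUGGING, and _SECURE_SCL adds checks even in release. If an exception rather than an assertion is wanted, a thin wrapper is one way; a hedged sketch (the class name is mine, and the constructor-inheriting line is C++11 shorthand used for brevity):

        #include <vector>
        #include <stdexcept>

        template <class T>
        class checked_vector : public std::vector<T> {
        public:
            using std::vector<T>::vector;   // inherit constructors (C++11)

            typename std::vector<T>::reference
            operator[](typename std::vector<T>::size_type i)
            {
        #ifdef _DEBUG
                return this->at(i);   // throws std::out_of_range in debug builds
        #else
                return std::vector<T>::operator[](i);   // unchecked in release
        #endif
            }
        };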


  • Semantically linking to code snippets

    - by Tim
    What's the most simple and semantic way of presenting code snippets in HTML?

    Possible XHTML syntax:

        <a href="code_sample.php" type="text/x-php">
            Example of widget creation
        </a>

    Example of linked file (code_sample.php):

        // Create a new widget
        $widget = new widget();

    Pros:
    - Semantically uses title to describe the source code being referenced
    - Up to the client to render the snippet
    - Having very many custom server-side implementations tells me it should be standardized
    - Browsers can have plug-ins for copy+paste, download, etc.
    - Seems to me this is where it belongs (not in Javascript)
    - Degradation: non-compliant browsers receive a link to the associated content

    Cons:
    - Not semantic enough?
    - Seems wrong to replace hyperlinks with source code for presentation
    - <object> might be better, but wouldn't degrade as nicely

    Background: I'm trying to create a "personal" XHTML standard for storing notes (wow, this is probably among the nerdiest things I've said). Since notes are just "scratch" it needs to be very lightweight. SO's markdown is very lightweight but not semantic enough for my needs. Plus, now I'm just curious. What's the most ideal syntax for linking to client-rendered code snippets?
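
    For the <object> route mentioned in the cons, a hedged sketch; the nested link is the degradation path for clients that won't render the data:

        <object data="code_sample.php" type="text/x-php">
            <a href="code_sample.php">Example of widget creation</a>
        </object>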


  • Why is F# member not found when used in subclass

    - by James Black
    I have a base type that I want all my DAO objects to inherit from, but this member gets the error further down about not being defined:

        type BaseDAO() =
            member v.ExecNonQuery2(conn)(sqlStr) =
                let comm = new MySqlCommand(sqlStr, conn, CommandTimeout = 10)
                comm.ExecuteNonQuery |> ignore
                comm.Dispose |> ignore

    I inherit in this type:

        type CreateDatabase() =
            inherit BaseDAO()
            member private self.createDatabase(conn) =
                self.ExecNonQuery2 conn "DROP DATABASE IF EXISTS restaurant"

    This is what I see when my script runs in the interactive shell:

        --> Referenced 'C:\Program Files\MySQL\MySQL Connector Net 6.2.3\Assemblies\MySql.Data.dll'
        [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\BaseDAO.fs]
        namespace FSI_0106.RestaurantServiceDAO
          type BaseDAO =
            class
              new : unit -> BaseDAO
              member ExecNonQuery2 : conn:MySql.Data.MySqlClient.MySqlConnection -> sqlStr:string -> unit
              member execNonQuery : sqlStr:string -> unit
              member execQuery : sqlStr:string * selectFunc:(MySql.Data.MySqlClient.MySqlDataReader -> 'a list) -> 'a list
              member f : x:obj -> string
              member Conn : MySql.Data.MySqlClient.MySqlConnection
            end
        [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs]
        C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs(56,14): error FS0039: The field, constructor or member 'ExecNonQuery2' is not defined

    I am curious what I am doing wrong. I have tried not inheriting and just instantiating the BaseDAO type in the function, but I get the same error. I started down this path because I had a property with the same error, so it seems there may be a problem with how I am defining my BaseDAO type. But it compiles with no error, which further confuses me about this problem.
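
    A hedged guess at the cause rather than a confirmed diagnosis: when files are pulled into F# Interactive with separate #load directives, each load is compiled into its own FSI_00xx wrapper (the FSI_0106 prefix in the output above is that wrapper), and a later file can end up bound against a stale copy of the earlier one. Loading both files in a single directive keeps them in one compilation unit:

        #load "BaseDAO.fs" "CreateDatabase.fs";;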


  • Python raw strings and trailing backslashes

    - by dash-tom-bang
    I ran across something once upon a time and wondered if it was a Python "bug" or at least a misfeature. I'm curious if anyone knows of any justifications for this behavior. I thought of it just now while reading "Code Like a Pythonista," which has been enjoyable so far. I'm only familiar with the 2.x line of Python. Raw strings are strings that are prefixed with an r. This is great because I can use backslashes in regular expressions and I don't need to double everything everywhere. It's also handy for writing throwaway scripts on Windows, so I can use backslashes there also. (I know I can also use forward slashes, but throwaway scripts often contain content cut&pasted from elsewhere in Windows.) So great! Unless, of course, you really want your string to end with a backslash. There's no way to do that in a 'raw' string:

        In [9]: r'\n'
        Out[9]: '\\n'

        In [10]: r'abc\n'
        Out[10]: 'abc\\n'

        In [11]: r'abc\'
        ------------------------------------------------
           File "<ipython console>", line 1
             r'abc\'
                   ^
        SyntaxError: EOL while scanning string literal

        In [12]: r'abc\\'
        Out[12]: 'abc\\\\'

    So one slash before the closing quote is an error, but two slashes give you two slashes! Certainly I'm not the only one that is bothered by this? Thoughts on why 'raw' strings are 'raw, except for slash-quote'? I mean, if I wanted to embed a single quote in there I'd just use double quotes around the string, and vice versa. If I wanted both, I'd just triple-quote. If I really wanted three quotes in a row in a raw string, well, I guess I'd have to deal, but is this considered "proper behavior"?
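
    For the record, this part is documented behavior rather than speculation: the tokenizer treats a backslash as escaping the quote even in raw strings, so a raw string cannot end in an odd number of backslashes. The usual workarounds look like this:

        # Concatenate the trailing backslash as a regular string:
        path = r'C:\some\dir' + '\\'

        # Or skip the raw prefix and double everything:
        path = 'C:\\some\\dir\\'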


  • Is there an ORM that supports composition w/o Joins

    - by Ken Downs
    EDIT: Changed title from "inheritance" to "composition". Left body of question unchanged. I'm curious if there is an ORM tool that supports inheritance w/o creating separate tables that have to be joined. Simple example. Assume a table of customers, with a Bill-to address, and a table of vendors, with a remit-to address. Keep it simple and assume one address each, not a child table of addresses for each. These addresses will have a handful of values in common: address 1, address 2, city, state/province, postal code. So let's say I'd have a class "addressBlock" and I want the customers and vendors to inherit from this class, and possibly from other classes. But I do not want separate tables that have to be joined, I want the columns in the customer and vendor tables respectively. Is there an ORM that supports this? The closest question I have found on StackOverflow that might be the same question is linked below, but I can't quite figure if the OP is asking what I am asking. He seems to be asking about foregoing inheritance precisely because there will be multiple tables. I'm looking for the case where you can use inheritance w/o generating the multiple tables. Model inheritance approach with Django's ORM
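
    One concrete fit, offered as a sketch rather than a recommendation: Django's abstract base classes do exactly this. The inherited fields are materialized as columns in each child's own table, and no separate table or join is created (field names here are invented):

        from django.db import models

        class AddressBlock(models.Model):
            address_1 = models.CharField(max_length=100)
            address_2 = models.CharField(max_length=100, blank=True)
            city = models.CharField(max_length=60)
            state = models.CharField(max_length=30)
            postal_code = models.CharField(max_length=12)

            class Meta:
                abstract = True   # no table is created for AddressBlock itself

        class Customer(AddressBlock):   # the customers table carries its own address columns
            name = models.CharField(max_length=100)

        class Vendor(AddressBlock):     # the vendors table carries its own copies too
            name = models.CharField(max_length=100)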


  • Kernighan & Ritchie word count example program in a functional language

    - by Frank
    I have been reading a little bit about functional programming on the web lately, and I think I have a basic idea of the concepts behind it. I'm curious how everyday programming problems which involve some kind of state are solved in a pure functional programming language. For example: how would the word count program from the book 'The C Programming Language' be implemented in a pure functional language? Any contributions are welcome as long as the solution is in a pure functional style. Here's the word count C code from the book:

        #include <stdio.h>

        #define IN  1 /* inside a word */
        #define OUT 0 /* outside a word */

        /* count lines, words, and characters in input */
        main()
        {
            int c, nl, nw, nc, state;

            state = OUT;
            nl = nw = nc = 0;
            while ((c = getchar()) != EOF) {
                ++nc;
                if (c == '\n')
                    ++nl;
                if (c == ' ' || c == '\n' || c == '\t')
                    state = OUT;
                else if (state == OUT) {
                    state = IN;
                    ++nw;
                }
            }
            printf("%d %d %d\n", nl, nw, nc);
        }
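
    One hedged sketch of an answer, in Haskell: the mutable counters and state flag of the C version become pure functions over the whole input, with I/O only at the edges.

        -- Counts lines, words, and characters from standard input.
        main :: IO ()
        main = do
            s <- getContents                          -- lazily read all input
            let nl = length (filter (== '\n') s)      -- lines, as '\n' counts
                nw = length (words s)                 -- words, split on whitespace
                nc = length s                         -- characters
            putStrLn (unwords (map show [nl, nw, nc]))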


  • jQuery and jQuery UI (Dual Licensing)

    - by John Hartsock
    OK, I have read many posts regarding dual licensing using the MIT and GPL licenses, but I'm still curious, as the wording seems to be inclusive. Many of the dual licenses state that the software is licensed using "MIT AND GPL". The "AND" is what confuses me. It seems to me that the word "AND" in the terms means you will be licensing the product under both licenses. Most of the posts here on Stack Overflow say you can license the software under one "OR" the other. jQuery specifically states "OR", whereas jQuery UI specifically states "AND". Another instance of the "AND" would be jqGrid. I'm not a lawyer, but it seems to me that a legal interpretation of this would be that use of the software means you're using it under both licenses. Has anyone who has contacted a lawyer gotten clarification or a definitive answer as to what is true? Can you use dually licensed software products that state "AND" in the terms of agreement under either license?


  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, pdf files, word documents, etc. In the past, we had the user doing this on our live server, and as a result, our test and staging servers had to be manually synchronized. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers, EITHER on a scheduled basis OR as the files get uploaded.

    I was planning on writing my own component and either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists. On our web site, there is a limited number of folders that we want to keep synchronized. For these folders, it is all or nothing: we want the folders to be EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple BATCH files.)

    So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open source solution where we can get the code and modify it as needed? (preferably .NET)

    Added: Also, I DID google this first, but there are so many options. I am interested mostly in knowing what actually works well vs. what they SAY works, which is why I'm asking here.
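
    For the roll-your-own route mentioned above, a hedged .NET sketch (paths are placeholders, and a production version would need debouncing and retry, since FileSystemWatcher events fire while files are still being written):

        using System;
        using System.IO;

        class FolderMirror
        {
            static void Main()
            {
                var watcher = new FileSystemWatcher(@"D:\staging\uploads")
                {
                    IncludeSubdirectories = true,
                    EnableRaisingEvents = true
                };
                watcher.Created += (s, e) => Mirror(e.FullPath);
                watcher.Changed += (s, e) => Mirror(e.FullPath);
                Console.ReadLine();   // keep the watcher alive
            }

            static void Mirror(string sourcePath)
            {
                foreach (var root in new[] { @"\\test\uploads", @"\\live\uploads" })
                {
                    var target = Path.Combine(root, Path.GetFileName(sourcePath));
                    File.Copy(sourcePath, target, true);
                    // The change log the poster asked for:
                    Console.WriteLine("{0:u} copied {1} -> {2}", DateTime.Now, sourcePath, target);
                }
            }
        }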


  • Is there a difference between starting an application from the OS or from adb

    - by aruwen
    I have a curious error in my application. My app crashes (never mind the crash itself, I roughly know why: the classloader) when I start the application from the OS directly, then kill it from the background via any task killer (this is one of the few ways to reproduce the crash consistently, simulating the OS freeing memory and closing the application), and then try to restart it. The thing is, if I start the application via adb shell using the following command:

        adb shell am start -a android.intent.action.MAIN -n com.my.packagename/myLaunchActivity

    I cannot reproduce the crash. So is there any difference in how the Android OS calls the application, as opposed to the above call?

    EDIT: added the manifest (just changed names):

        <?xml version="1.0" ?>
        <manifest android:versionCode="5" android:versionName="1.05" package="com.my.sample"
                  xmlns:android="http://schemas.android.com/apk/res/android">
            <uses-sdk android:minSdkVersion="7"/>
            <application android:icon="@drawable/square_my_logo" android:label="@string/app_name">
                <activity android:label="@string/app_name" android:name="com.my.InfoActivity"
                          android:screenOrientation="landscape"></activity>
                <activity android:label="@string/app_name" android:name="com.my2.KickStart"
                          android:screenOrientation="landscape"/>
                <activity android:label="@string/app_name" android:name="com.my2.Launcher"
                          android:screenOrientation="landscape">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN"/>
                        <category android:name="android.intent.category.LAUNCHER"/>
                    </intent-filter>
                </activity>
            </application>
            <uses-permission android:name="android.permission.INTERNET"/>
            <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
        </manifest>

    I am starting com.my2.Launcher from the adb shell.
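
    One difference worth noting, offered as a hedged observation: the home screen launches the activity with the LAUNCHER category and in a new task, while the bare am start above sends only the MAIN action. Something closer to the home screen's launch would be:

        adb shell am start -a android.intent.action.MAIN \
            -c android.intent.category.LAUNCHER \
            -f 0x10000000 \
            -n com.my.packagename/myLaunchActivity
        # -f 0x10000000 is Intent.FLAG_ACTIVITY_NEW_TASK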


  • Non-english domain naming issues in programming

    - by Svend
    Most programming code, I imagine, is written in English. But I'm curious how people handle the issue of naming here. A lot of programming is done within some business domain, usually with well-established terms for certain procedures and items. I'm from Denmark, for instance, and something I work a lot with has a term called "indblikskode", which sort of translates to "insight code". So, do I use the line "string indblikskode = ..." in the C# code for some web service related to this? Or do I try to use a translation, such as "insightcode"? The business I'm in isn't even consistent in its language, for instance using the term "organisatorisk enhed" (organizational unit), but just as often using the abbreviation "OU", which is obviously abbreviated from the English. How do other people handle this naming issue while keeping consistent and sane (in everything from simple variable names in your code, to database tables, to server names)?

    Duplicates:
    - Should identifiers and comments be always in English or in the native language of the application and developers?
    - Do you use another language instead of English?


  • git rebase without changing commit timestamps

    - by Olivier
    Would it make sense to perform git rebase while preserving the commit timestamps? I believe a consequence would be that the new branch will not necessarily have commit dates in chronological order. Is that theoretically possible at all? (e.g. using plumbing commands; just curious here) If it is theoretically possible, then is it possible in practice with rebase not to change the timestamps? For example, assume I have the following tree:

        master <jun 2010>
        |
        :
        :    oldbranch <feb 1984>
        :   /
        oldcommit <jan 1984>

    Now, if I rebase oldbranch onto master, the date of the commit changes from feb 1984 to jun 2010. Is it possible to change that behaviour so that the commit timestamp is not changed? In the end I would thus obtain:

          oldbranch <feb 1984>
         /
        master <jun 2010>
        |
        :

    Would that make sense at all? Is it even allowed in git to have a history where an old commit has a more recent commit as a parent?

    Edit: A crucial question from VonC helped me understand what is going on: when you rebase, the committer's timestamp changes, but not the author's timestamp, which suddenly makes it all make sense. So my question was actually not precise enough. The answer is that rebase doesn't change the author's timestamps (you don't need to do anything for that), which suits me perfectly.
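
    For the complementary case (making the new committer date match the preserved author date too), a hedged pointer: git rebase forwards an option to the underlying machinery for exactly this, though support varies across git versions and rebase backends:

        # Rebase oldbranch onto master, stamping each rewritten commit's
        # committer date with its original author date:
        git rebase --committer-date-is-author-date master oldbranch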


  • php OOP function declarations

    - by kris
    I'm a big fan of OOP in PHP, but I feel like defining class methods gets disorganized so fast. I have a pretty good background in OOP in C++ and am pretty comfortable with how it is handled there, and I am curious if there are ways to do it similarly in PHP. To be more specific, here is what I mean. I like how in C++ you can define a class header (myclass.h) and then define the actual details of the functions in the implementation file (myclass.cc). I've found that this can easily be replicated using interfaces in PHP, but I haven't found a good solution for the following: I like to organize my code in C++ in different files based on how they are accessed, so, for example, public methods that can be called outside of the class would be in one place, and private methods would be organized somewhere else. This is personal preference. I've tried to define class methods in PHP like:

        private function MyPHPClass::myFunction() {
        }

    when the definition isn't directly inside the class block ({ }), but I haven't had any success doing this. I've been through all of the pages on php.net but couldn't find anything like this. I'm assuming that there is no support for something like this, but I thought I would ask anyway. Thanks.
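
    As far as I know, PHP has no out-of-class method definitions at all; the closest native approximation for splitting one class across files by access or role is traits (PHP 5.4+). A hedged sketch, with all names invented here:

        <?php
        // Intended for its own file: the public surface of the class.
        trait MyPHPClassPublicApi
        {
            public function myFunction()
            {
                $this->helper();
            }
        }

        // Intended for a second file: the private internals.
        trait MyPHPClassInternals
        {
            private function helper()
            {
                echo "doing the work\n";
            }
        }

        // The class itself just composes the pieces.
        class MyPHPClass
        {
            use MyPHPClassPublicApi;
            use MyPHPClassInternals;
        }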


  • Why can't I pass self as a named argument to an instance method in Python?

    - by Joseph Garvin
    This works:

        >>> def bar(x, y):
        ...     print x, y
        ...
        >>> bar(y=3, x=1)
        1 3

    And this works:

        >>> class foo(object):
        ...     def bar(self, x, y):
        ...         print x, y
        ...
        >>> z = foo()
        >>> z.bar(y=3, x=1)
        1 3

    And even this works:

        >>> foo.bar(z, y=3, x=1)
        1 3

    But why doesn't this work?

        >>> foo.bar(self=z, y=3, x=1)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unbound method bar() must be called with foo instance as first argument (got nothing instead)

    This makes metaprogramming more difficult, because it requires special-case handling. I'm curious if it's somehow necessary by Python's semantics or just an artifact of implementation.
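
    A hedged explanation of what the transcript shows: in Python 2, foo.bar is not the plain function but an unbound-method wrapper, and the wrapper's instance check only looks at the first positional argument. The underlying function has no such restriction, so for metaprogramming one workaround is to call it directly:

        # im_func (spelled __func__ from 2.6 on) is the wrapped plain function;
        # to it, self is an ordinary keyword parameter.
        foo.bar.im_func(self=z, y=3, x=1)    # prints: 1 3

    In Python 3 the wrapper is gone and foo.bar is the plain function, so the original foo.bar(self=z, y=3, x=1) works as written.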


  • When do instance variables get initialized and values assigned?

    - by AKh
    When does an instance variable get initialized and its value assigned? Is it after the constructor block is done, or before it? Consider this example:

        public abstract class Parent {
            public Parent() {
                System.out.println("Parent Constructor");
                init();
            }

            public void init() {
                System.out.println("parent Init()");
            }
        }

        public class Child extends Parent {
            private Integer attribute1;
            private Integer attribute2 = null;

            public Child() {
                super();
                System.out.println("Child Constructor");
            }

            public void init() {
                System.out.println("Child init()");
                super.init();
                attribute1 = new Integer(100);
                attribute2 = new Integer(200);
            }

            public void print() {
                System.out.println("attribute 1 : " + attribute1);
                System.out.println("attribute 2 : " + attribute2);
            }
        }

        public class Tester {
            public static void main(String[] args) {
                Parent c = new Child();
                ((Child) c).print();
            }
        }

    OUTPUT:

        Parent Constructor
        Child init()
        parent Init()
        Child Constructor
        attribute 1 : 100
        attribute 2 : null

    When is the memory for attribute1 and attribute2 allocated on the heap? I am curious to know why attribute2 is null. Are there any design flaws here?
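
    A hedged reading of the output (the ordering itself is standard Java semantics): the overridden init() runs during the Parent constructor and sets both fields, but a subclass's field initializers execute only after the superclass constructor returns. attribute1 has no initializer, so it keeps the 100; attribute2's "= null" initializer then runs and wipes out the 200. A stripped-down sketch of the same ordering:

        public class InitOrder {
            static class Parent {
                Parent() { touch(); }   // runs before Child's field initializers
                void touch() {}
            }

            static class Child extends Parent {
                Integer a;              // no initializer: keeps touch()'s value
                Integer b = null;       // initializer runs after Parent(), wiping it

                @Override void touch() { a = 100; b = 200; }
            }

            public static void main(String[] args) {
                Child c = new Child();
                System.out.println(c.a + " " + c.b);   // prints: 100 null
            }
        }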


  • A Few Basic Questions on Overriding

    - by Dahlia
    I have a few problems with my basics and would be thankful if someone could clear them up. What does it mean when I say base *b = new derived;? Why would one go for this? We can very well create objects for class base and class derived separately and then call the functions accordingly. I know that base *b = new derived; is called object slicing, but why and when would one go for this? I know why it is not advisable to convert a base class object to a derived class object (because the base class is not aware of the derived class members and methods). I even read in other StackOverflow threads that if this is the case then we have to change/re-visit our design. I understand all that; however, I am just curious: is there any way to do this?

        class base {
        public:
            void f() { cout << "In Base"; }
        };

        class derived : public base {
        public:
            void f() { cout << "In Derived"; }
        };

        int _tmain(int argc, _TCHAR* argv[]) {
            base b1, b2;
            derived d1, d2;
            b2 = d1;
            d2 = reinterpret_cast<derived*>(b1); // gives error C2440
            b1.f();          // Prints In Base
            d1.f();          // Prints In Derived
            b2.f();          // Prints In Base
            d1.base::f();    // Prints In Base
            d2.f();
            getch();
            return 0;
        }

    In the case of my above example, is there any way I could call the base class f() using a derived class object? I used d1.base::f(). I just want to know if there is any way to do it without using the scope resolution operator? Thanks a lot for your time in helping me out!
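
    On the last question, a hedged observation: with a non-virtual f(), the static type of the expression decides which implementation runs, so viewing the derived object through a base pointer or reference reaches base::f without the scope resolution operator:

        derived d;
        base* bp = &d;
        bp->f();                      // prints "In Base": f is not virtual
        static_cast<base&>(d).f();    // same effect through a reference cast

    (Declaring f() virtual would flip both of these to "In Derived", which is the usual reason for writing base *b = new derived; in the first place.)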


  • pure/const functions in C++

    - by Albert
    Hi, I'm thinking of using pure/const functions more heavily in my C++ code (the pure/const attributes in GCC). However, I am curious how strict I should be about it and what could possibly break. The most obvious case is debug output (in whatever form: on cout, in some file, or in some custom debug class). I will probably have a lot of functions which have no side effects apart from this sort of debug output, and no matter whether the debug output is made or not, it has absolutely no effect on the rest of my application. Another case I'm thinking of is the use of my own SmartPointer class. In debug mode, my SmartPointer class has a global register where it does some extra checks. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory will probably differ), but these should not amount to any real side effects (in the sense that the behaviour differs in any way). Similarly for mutexes and other such things. I can think of many complex cases where a function has some side effects (in the sense that some memory will differ, maybe some threads are created, some filesystem manipulation is done, etc.) but makes no computational difference (all those side effects could very well be left out, and I would even prefer that). How does this work out in practice? If I mark such functions as pure/const, could it break anything (assuming the code is otherwise correct)?
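
    For reference, a minimal sketch of the attributes under discussion (the functions themselves are invented examples): GCC's "const" promises the result depends only on the argument values, while "pure" additionally allows reading, but not writing, global memory. The optimizer may then fold or elide repeated calls, which is exactly why hidden side effects such as debug output can silently disappear.

        // Result depends only on x: eligible for __attribute__((const)).
        __attribute__((const)) static int square(int x) { return x * x; }

        // Reads global memory but writes nothing: pure, not const.
        static int counter_snapshot;
        __attribute__((pure)) static int snapshot_plus(int x) { return counter_snapshot + x; }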

