Search Results

Search found 7086 results on 284 pages for 'explain'.

Page 263/284 | < Previous Page | 259 260 261 262 263 264 265 266 267 268 269 270  | Next Page >

  • Template function as a template argument

    - by Kos
    I've just got confused about how to implement something in a generic way in C++. It's a bit convoluted, so let me explain step by step. Consider such code: void a(int) { // do something } void b(int) { // something else } void function1() { a(123); a(456); } void function2() { b(123); b(456); } void test() { function1(); function2(); } It's easy to notice that function1 and function2 do the same thing, with the only difference being the internal function. Therefore, I want to make function generic to avoid code redundancy. I can do it using function pointers or templates. Let me choose the latter for now. My thinking is that it's better since the compiler will surely be able to inline the functions - am I correct? Can compilers still inline the calls if they are made via function pointers? This is a side question. OK, back to the original point... A solution with templates: void a(int) { // do something } void b(int) { // something else } template<void (*param)(int) > void function() { param(123); param(456); } void test() { function<a>(); function<b>(); } All OK. But I'm running into a problem: can I still do that if a and b are generic themselves? template<typename T> void a(T t) { // do something } template<typename T> void b(T t) { // something else } template< ...param... > // ??? void function() { param<SomeType>(someobj); param<AnotherType>(someotherobj); } void test() { function<a>(); function<b>(); } I know that a template parameter can be one of: a type, a template type, or a value of a type. None of those seems to cover my situation. My main question is hence: how do I solve that, i.e. define function() in the last example? (Yes, function pointers seem to be a workaround in this exact case - provided they can also be inlined - but I'm looking for a general solution for this class of problems).
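
    A sketch of one common workaround (not from the original post; the names A, B and function are reused from the question for illustration): pass a function object whose call operator is a template instead of a function template. The object is an ordinary value, so it can be taken as a normal deduced parameter, and compilers can typically still inline the calls.

        #include <string>

        struct A {
            template<typename T> void operator()(T t) const { /* do something */ }
        };

        struct B {
            template<typename T> void operator()(T t) const { /* something else */ }
        };

        template<typename F>
        void function(F f) {
            f(123);                      // instantiates F::operator()<int>
            f(std::string("someobj"));   // instantiates F::operator()<std::string>
        }

        void test() {
            function(A());
            function(B());
        }

    The explicit param<SomeType>(someobj) calls from the question become plain calls with the type deduced from the argument, which sidesteps the need for a "template function as a template argument" entirely.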

    Read the article

  • Releasing the keyboard stops shake events. Why?

    - by Moshe
    1) How do I make a UITextField resign the keyboard and hide it? The keyboard is in a dynamically created subview whose superview looks for shake events. Resigning first responder seems to break the shake event handler. 2) how do you make the view holding the keyboard transparent, like see through glass? I have seen this done before. This part has been taken care of thanks guys. As always, code samples are appreciated. I've added my own to help explain the problem. EDIT: Basically, - (void)motionBegan:(UIEventSubtype)motion withEvent:(UIEvent *)event; gets called in my main view controller to handle shaking. When a user taps on the "edit" icon (a pen, in the bottom of the screen - not the traditional UINavigationBar edit button), the main view adds a subview to itself and animates it on to the screen using a custom animation. This subview contains a UINavigationController which holds a UITableView. The UITableView, when a cell is tapped on, loads a subview into itself. This second subview is the culprit. For some reason, a UITextField in this second subview is causing problems. When a user taps on the view, the main view will not respond to shakes unless the UITextField is active (in editing mode?). Additional info: My Motion Event Handler: - (void)motionBegan:(UIEventSubtype)motion withEvent:(UIEvent *)event { NSLog(@"%@", [event description]); SystemSoundID SoundID; NSString *soundFile = [[NSBundle mainBundle] pathForResource:@"shake" ofType:@"aif"]; AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:soundFile], &SoundID); AudioServicesPlayAlertSound(SoundID); [self genRandom:TRUE]; } The genRandom: Method: /* Generate random label and apply it */ -(void)genRandom:(BOOL)deviceWasShaken{ if(deviceWasShaken == TRUE){ decisionText.text = [NSString stringWithFormat: (@"%@", [shakeReplies objectAtIndex:(arc4random() % [shakeReplies count])])]; }else{ SystemSoundID SoundID; NSString *soundFile = [[NSBundle mainBundle] pathForResource:@"string" ofType:@"aif"]; AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:soundFile], &SoundID); AudioServicesPlayAlertSound(SoundID); decisionText.text = [NSString stringWithFormat: (@"%@", [pokeReplies objectAtIndex:(arc4random() % [pokeReplies count])])]; } } shakeReplies and pokeReplies are both NSArrays of strings. One is used for when a certain part of the screen is poked and one is for when the device is shaken. The app will randomly choose a string from the NSArray and display onscreen. For those of you who work graphically, here is a diagram of the view hierarchy: Root View -> UINavigationController -> UITableView -> Edit View -> Problem UITextfield

    Read the article

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi Everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question that I have about calculating on the fly vs storing fields in a database. I read a few articles that suggested it was preferable to calculate when you can, but I would just like to know if that still applies to the following 2 examples. Example 1. Say you are storing data relating to a car. You store the fuel tank size in litres, and how many litres it uses per 100km. You also want to know how many KMs it can travel, which can be calculated from the tank size and economy. I see 2 ways of doing this: When a car is added or updated, calculate the amount of KMs and store this as a static field in the database. Every time a car is accessed, calculate the amount of KMs on the fly. Because the cars economy/tank size doesn't change (although it could be edited), the KMs is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste cpu time as opposed to simply storing it in a separate field in the database and calculating only when a car is added or updated? My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Let's say we have a app which has categories and items. We have a view where we display all the categories, and a count of all the items inside each category. Again, I'm wondering what's better. To perform a MySQL query to count all the items in each category every single time the page is accessed? Or store the count in a field in the categories table and update when an item is added / deleted? I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow as opposed to storing the data in a field. If it's not then please let me know, I just want to learn about when to use either method. On a small scale I guess it wouldn't matter either way, but apps like Facebook, would they really count the amount of friends you have every time someone views your profile or would they just store it as a field? I'd appreciate any responses to both of these scenarios, and any resource that might explain the benefits of calculating vs storing. Thanks in advance, Christian

    Read the article

  • What is a GC hole?

    - by tianyi
    I wrote a long-lived TCP connection socket server in C#. A memory spike happens in my server. I used dotNet Memory Profiler (a tool) to detect where the memory leaks are. Memory Profiler indicates the private heap is huge, and the memory looks something like below (the numbers are not real; what I want to show is that the GC0 and GC2 "Holes" are very, very large, while the data size is normal): Managed heaps - 1,500,000KB Normal heap - 1400,000KB Generation #0 - 600,000KB Data - 100,000KB "Holes" - 500,000KB Generation #1 - xxKB Data - 0KB "Holes" - xKB Generation #2 - xxxxxxxxxxxxxKB Data - 100,000KB "Holes" - 700,000KB Large heap - 131072KB Large heap - 83KB Overhead/unused - 130989KB Overhead - 0KB However, what is a GC hole? I read an article about the hole: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html The author said: The code snippet below is the simplest way to introduce a GC hole into the system. //OBJECTREF is a typedef for Object*. { PointerTable *pTBL = o_pObjectClass->GetPointerTable(); OBJECTREF aObj = AllocateObjectMemory(pTBL); OBJECTREF bObj = AllocateObjectMemory(pTBL); //WRONG!!! “aObj” may point to garbage if the second //“AllocateObjectMemory” triggered a GC. DoSomething (aOb, bObj); } All it does is allocate two managed objects, and then does something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably “work.” But this code will crash eventually. Why? If the second call to “AllocateObjectMemory” triggers a GC, that GC discards the object instance you just assigned to “aObj”. This code, like all C++ code inside the CLR, is compiled by a non-managed compiler and the GC cannot know that “aObj” holds a root reference to an object you want kept live. ======================================================================== I can't understand his explanation. Does the sample mean that aObj becomes a wild pointer after the GC? Does it mean something like: aObj = malloc(sizeof(object)); free(aObj); function(aObj); ? I hope somebody can explain it.
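
    For what it's worth, a loose analogy in ordinary C++ (this is not CLR code, just an illustration of the same failure mode) is a raw pointer held across an operation that is allowed to move or free the underlying storage:

        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> v;
            v.push_back(1);
            int *p = &v[0];   // raw pointer into storage the container manages
            v.push_back(2);   // may reallocate: p now dangles, much like aObj after a GC
            std::cout << *p;  // undefined behaviour - may appear to work, may crash
        }

    So yes, aObj effectively becomes a dangling ("wild") reference after a collection; it is not that anyone called free on it, but that the unmanaged code never told the GC to treat aObj as a live root, so the collector was free to discard or move the object it pointed to.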

    Read the article

  • quick java question

    - by j-unit-122
    private static char[] quicksort (char[] array , int left , int right) { if (left < right) { int p = partition(array , left, right); quicksort(array, left, p - 1 ); quicksort(array, p + 1 , right); } for (char i : array) System.out.print(i + " "); System.out.println(); return array; } private static int partition(char[] a, int left, int right) { char p = a[left]; int l = left + 1, r = right; while (l < r) { while (l < right && a[l] < p) l++; while (r > left && a[r] >= p) r--; if (l < r) { char temp = a[l]; a[l] = a[r]; a[r] = temp; } } a[left] = a[r]; a[r] = p; return r; } } Hi guys, just a quick question regarding the above code. I know that it prints the following output B I G C O M P U T E R B C E G I M P U T O R B C E G I M P U T O R B C E G I M P U T O R B C E G I M P U T O R B C E G I M O P T U R B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U when the sequence BIGCOMPUTER is used, but my question is: can someone explain to me what is happening in the code and how? I know a bit about the quicksort algorithm, but it doesn't seem to be the same in the above example.

    Read the article

  • devise register confirmation

    - by mattherick
    Hello! I have a user and an admin role in my project. I created my authentication with Devise - a really nice and good tool for handling authentication. In my admin role I don't have any confirmation or anything like that; it is really simple and doesn't cause problems. But in my user model I have the following things: model: devise :database_authenticatable, :confirmable, :recoverable, :rememberable, :trackable, :validatable, :timeoutable, :registerable # Setup accessible (or protected) attributes for your model attr_accessible :email, :username, :prename, :surname, :phone, :street, :number, :location, :password, :password_confirmation and a few validations, but they aren't relevant this time. My migration looks like the following one: class DeviseCreateUsers < ActiveRecord::Migration def self.up create_table(:users) do |t| t.database_authenticatable :null => false t.confirmable t.recoverable t.rememberable t.trackable t.timeoutable t.validateable t.string :username t.string :prename t.string :surname t.string :phone t.string :street t.integer :number t.string :location t.timestamps end add_index :users, :email, :unique => true add_index :users, :confirmation_token, :unique => true add_index :users, :reset_password_token, :unique => true add_index :users, :username, :unique => true add_index :users, :prename, :unique => false add_index :users, :surname, :unique => false add_index :users, :phone, :unique => false add_index :users, :street, :unique => false add_index :users, :number, :unique => false add_index :users, :location, :unique => false end def self.down drop_table :users end end Into my routes.rb I added the following statements: map.devise_for :admins map.devise_for :users, :path_names => { :sign_up => "register", :sign_in => "login" } map.root :controller => "main" And now my problem: if I register a new user, I fill in all my data in the registration form and submit it. After that I get redirected to the main controller with the flash notice "You have signed up successfully." and I am logged in. But I don't want to be logged in, because I haven't confirmed my new user account yet. If I open the console I see the last entries in the logs, and there I see the confirmation mail with its text and all that stuff, but I am already logged in... I can't explain why. Does anybody have an idea? If I copy the confirmation token out of the logs and confirm my account, I can log in - but if I don't confirm, I can also log in.

    Read the article

  • Why do sockets not die when server dies? Why does a socket die when server is alive?

    - by Roman
    I try to play with sockets a bit. For that I wrote very simple "client" and "server" applications. Client: import java.net.*; public class client { public static void main(String[] args) throws Exception { InetAddress localhost = InetAddress.getLocalHost(); System.out.println("before"); Socket clientSideSocket = null; try { clientSideSocket = new Socket(localhost,12345,localhost,54321); } catch (ConnectException e) { System.out.println("Connection Refused"); } System.out.println("after"); if (clientSideSocket != null) { clientSideSocket.close(); } } } Server: import java.net.*; public class server { public static void main(String[] args) throws Exception { ServerSocket listener = new ServerSocket(12345); while (true) { Socket serverSideSocket = listener.accept(); System.out.println("A client-request is accepted."); } } } And I found a behavior that I cannot explain: I start a server, than I start a client. Connection is successfully established (client stops running and server is running). Then I close the server and start it again in a second. After that I start a client and it writes "Connection Refused". It seems to me that the server "remember" the old connection and does not want to open the second connection twice. But I do not understand how it is possible. Because I killed the previous server and started a new one! I do not start the server immediately after the previous one was killed (I wait like 20 seconds). In this case the server "forget" the socket from the previous server and accepts the request from the client. I start the server and then I start the client. Connection is established (server writes: "A client-request is accepted"). Then I wait a minute and start the client again. And server (which was running the whole time) accept the request again! Why? The server should not accept the request from the same client-IP and client-port but it does!

    Read the article

  • Abnormally disconnected TCP sockets and write timeout

    - by James
    Hello, I will try to explain the problem in the shortest possible words. I am using C++ Builder 2010. I am using TIdTCPServer and sending voice packets to a list of connected clients. Everything works fine until a client is disconnected abnormally, for example by a power failure. I can reproduce a similar disconnect by cutting the Ethernet connection of a connected client. So now we have a disconnected socket, but as you know it is not yet detected on the server side, so the server will continue trying to send data to that client too. But when the server tries to write data to that disconnected client, Write() or WriteLn() HANGS there trying to write, as if it is waiting for some kind of write timeout. This hangs the whole packet distribution process, creating a lag in data transmission to all other clients. After a few seconds a "Socket Connection Closed" exception is raised and data flow continues. Here is the code: try { EnterCriticalSection(&SlotListenersCriticalSection); for(int i=0;i<SlotListeners->Count;i++) { try { //Here the process will HANG for several seconds on a disconnected socket ((TIdContext*) SlotListeners->Objects[i])->Connection->IOHandler->WriteLn("Some DATA"); }catch(Exception &e) { SlotListeners->Delete(i); } } }__finally { LeaveCriticalSection(&SlotListenersCriticalSection); } OK, I already have a keep-alive mechanism which disconnects the socket after n seconds of inactivity. But as you can imagine, this mechanism can't sync exactly with the broadcasting loop, because the broadcasting loop is running almost all the time. So is there any write timeout I can specify, maybe through the IOHandler or something? I have seen many, many threads about "detecting a disconnected TCP socket", but my problem is a little different: I need to avoid that hang-up of a few seconds during the write attempt. So is there any solution? Or should I consider using a different mechanism for this data broadcasting, for example having the broadcasting loop put the data packet into some kind of FIFO buffer while client threads continuously check for available data and deliver it to themselves? This way, if one thread hangs it will not stop or delay the overall distribution loop. Any ideas, please? Thanks for your time and help. Regards, James
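
    The per-client FIFO idea mentioned at the end can be sketched roughly like this (modern C++ for brevity, not C++ Builder/Indy specific, and all names are made up): the broadcast loop only enqueues, and each client gets its own writer thread blocking on its own queue, so a stalled socket can only delay that one client.

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <string>

        class ClientQueue {
        public:
            void push(const std::string &packet) {        // called by the broadcast loop
                { std::lock_guard<std::mutex> lock(m_); q_.push(packet); }
                cv_.notify_one();
            }
            std::string pop() {                           // called by the per-client writer thread
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !q_.empty(); });
                std::string packet = q_.front();
                q_.pop();
                return packet;                            // the writer thread then does the blocking Write()
            }
        private:
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<std::string> q_;
        };

    The broadcast loop calls push for every connected client and never touches the sockets itself; if one client's Write hangs, only that client's queue grows while the others keep flowing.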

    Read the article

  • Learning... anything really

    - by WebDevHobo
    I'm particularly interested in Windows PowerShell, but here's a somewhat more general complaint: When asking for help on learning something new, be it a small subject on PHP or understanding a class in Java, what usually happens is that people direct me towards the documentation pages. What I'm looking for is somewhat of a course. A deep explanation of why something works the way it does. I know my basic programming, like Java and C#. I've never seen C or C++, though I have seen a bit of assembler. I know what the Stack and Heap are, how boxing and unboxing works, why you have to deep-copy an array instead of copying the pointer and some other things. Windows PowerShell on the other hand, I know nothing about. And I notice that when reading the small document or some code, I usually forget what it does or why it works. What I am looking for is preferably, a nice tutorial that explains the beginnings, the concepts, and goes to more difficult things at a steady pace. The only thing documentation can do is explain what a function does. That's no good to me since I don't know what I want to do yet. I could read about a thousand functions, and forget about most of them, because I don't need to implement them right after it. Randomly wandering through the documentation doesn't do me any good. So conclude, what is a good tutorial on Windows Powershell? One which explains in clear language what is happening, one which builds on previous things learned. I don't think googling this is a good idea. Doing a Google search on this would turn up numerous tutorials. And experience tells me that you have to look long and hard to find the gem you're looking for. That's why I'm asking here. Because this is the place where you can find more experienced people. Many of the PowerShell guys among you will know the good ones already, and by asking you, I avoid wasting time that could be spent learning. So to summarize: I will not google this!

    Read the article

  • What database table structure should I use for versions, codebases, deployables?

    - by Zac Thompson
    I'm having doubts about my table structure, and I wonder if there is a better approach. I've got a little database for version control repositories (e.g. SVN), the packages (e.g. Linux RPMs) built therefrom, and the versions (e.g. 1.2.3-4) thereof. A given repository might produce no packages, or several, but if there are more than one for a given repository then a particular version for that repository will indicate a single "tag" of the codebase. A particular version "string" might be used to tag a version of the source code in more than one repository, but there may be no relationship between "1.0" for two different repos. So if packages P and Q both come from repo R, then P 1.0 and Q 1.0 are both built from the 1.0 tag of repo R. But if package X comes from repo Y, then X 1.0 has no relationship to P 1.0. In my (simplified) model, I have the following tables (the x_id columns are auto-incrementing surrogate keys; you can pretend I'm using a different primary key if you wish, it's not really important): repository - repository_id - repository_name (unique) ... version - version_id - version_string (unique for a particular repository) - repository_id ... package - package_id - package_name (unique) - repository_id ... This makes it easy for me to see, for example, what are valid versions of a given package: I can join with the version table using the repository_id. However, suppose I would like to add some information to this database, e.g., to indicate which package versions have been approved for release. I certainly need a new table: package_version - version_id - package_id - package_version_released ... Again, the nature of the keys that I use are not really important to my problem, and you can imagine that the data column is "promotion_level" or something if that helps. My doubts arise when I realize that there's really a very close relationship between the version_id and the package_id in my new table ... they must share the same repository_id. Only a small subset of package/version combinations are valid. So I should have some kind of constraint on those columns, enforcing that ... ... I don't know, it just feels off, somehow. Like I'm including somehow more information than I really need? I don't know how to explain my hesitance here. I can't figure out which (if any) normal form I'm violating, but I also can't find an example of a schema with this sort of structure ... not being a DBA by profession I'm not sure where to look. So I'm asking: am I just being overly sensitive?

    Read the article

  • Why does WebSharingAppDemo-CEProviderEndToEnd sample still need a client db connection after scope c

    - by Don
    I'm researching a way to build an n-tierd sync solution. From the WebSharingAppDemo-CEProviderEndToEnd sample it seems almost feasable however for some reason, the app will only sync if the client has a live SQL db connection. Can some one explain what I'm missing and how to sync without exposing SQL to the internet? The problem I'm experiencing is that when I provide a Relational sync provider that has an open SQL connection from the client, then it works fine but when I provide a Relational sync provider that has a closed but configured connection string, as in the example, I get an error from the WCF stating that the server did not receive the batch file. So what am I doing wrong? SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(); builder.DataSource = hostName; builder.IntegratedSecurity = true; builder.InitialCatalog = "mydbname"; builder.ConnectTimeout = 1; provider.Connection = new SqlConnection(builder.ToString()); // provider.Connection.Open(); **** un-commenting this causes the code to work** //create anew scope description and add the appropriate tables to this scope DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription(SyncUtils.ScopeName); //class to be used to provision the scope defined above SqlSyncScopeProvisioning serverConfig = new SqlSyncScopeProvisioning(); .... The error I get occurs in this part of the WCF code: public SyncSessionStatistics ApplyChanges(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, object changeData) { Log("ProcessChangeBatch: {0}", this.peerProvider.Connection.ConnectionString); DbSyncContext dataRetriever = changeData as DbSyncContext; if (dataRetriever != null && dataRetriever.IsDataBatched) { string remotePeerId = dataRetriever.MadeWithKnowledge.ReplicaId.ToString(); //Data is batched. The client should have uploaded this file to us prior to calling ApplyChanges. //So look for it. //The Id would be the DbSyncContext.BatchFileName which is just the batch file name without the complete path string localBatchFileName = null; if (!this.batchIdToFileMapper.TryGetValue(dataRetriever.BatchFileName, out localBatchFileName)) { //Service has not received this file. Throw exception throw new FaultException<WebSyncFaultException>(new WebSyncFaultException("No batch file uploaded for id " + dataRetriever.BatchFileName, null)); } dataRetriever.BatchFileName = localBatchFileName; } Any ideas?

    Read the article

  • What is the best practice to segment c#.net projects based on a single base project

    - by Anthony
    Honestly, I can't word my question any better without describing it. I have a base project (with all its glory, DLLs, resources etc.) which is a CMS. I need to use this project as a base for other custom-bake projects. This base project is to be maintained and updated among all custom-bake projects. I use Subversion (CollabNet and TortoiseSVN). I have two questions: 1 - Can I use Subversion to share the base project among other projects? What I mean here is: can I "check out" the base project into another "checked out" project and have both update and commit separately? So, to paint a picture, let's say I am working on a custom project and I modify the core/base project in some way (which I know will suit the others): can I then commit those changes and, upon doing so, when I update the base project in the other "checked out" resources, will it pull the changes? In short, I would like not to have to manually deploy updated core files to each separate project whenever I make changes. 2 - If I create a custom file (let's say a web control or aspx page etc.), can I have it compile separately from the base project? Another tricky one to explain. When I publish my web application it creates DLLs based on the namespaces of the projects attached to it. So I may have a number of DLLs including the "Website's" namespace DLL, which could simply be Website. I want to be able to make a separate, custom control which does not compile into those DLLs, as the custom files should not rely on those DLLs to run. Is it as simple as setting a separate namespace for those files, like CustomFiles.ProjectName for example? Think of the whole idea as adding modules to the .NET project: I don't want the module's code in any of the core DLLs, but I do need the module to be able to access the core DLLs. (There is no need for the core project to access the module code, as it should be one way only in theory, though I reckon it would not be possible anyway without using JSON/SOAP or something like that - maybe I am wrong.) I want to create a pluggable environment much like that of Joomla/WordPress; since PHP generally doesn't have to be compiled first, I see that as the reason why all this is possible/easy there. The idea is to allow pluggable themes, modules etc. (I haven't tried simply adding .NET themes after compile/publish, but I am assuming this is possible anyway? Or does the compiler need to reference items in the files?)

    Read the article

  • Backbone.js (model instanceof Model) via Chrome Extension

    - by Leoncelot
    Hey guys, This is my first time ever posting on this site and the problem I'm about to pose is difficult to articulate due to the set of variables required to arrive at it. Let me just quickly explain the framework I'm working with. I'm building a Chrome Extension using jQuery, jQuery-ui, and Backbone The entire JS suite for the extension is written in CoffeeScript and I'm utilizing Rails and the asset pipeline to manage it all. This means that when I want to deploy my extension code I run rake assets:precompile and copy the resulting compressed JS to my extensions Directory. The nice thing about this approach is that I can actually run the extension js from inside my Rails app by including the library. This is basically the same as my extensions background.js file which injects the js as a content script. Anyway, the problem I've recently encountered was when I tried testing my extension on my buddy's site, whiskeynotes.com. What I was noticing is that my backbone models were being mangled upon adding them to their respective collections. So something like this.collection.add(new SomeModel) created some nonsense version of my model. This code eventually runs into Backbone's prepareModel code _prepareModel: function(model, options) { options || (options = {}); if (!(model instanceof Model)) { var attrs = model; options.collection = this; model = new this.model(attrs, options); if (!model._validate(model.attributes, options)) model = false; } else if (!model.collection) { model.collection = this; } return model; }, Now, in most of the sites on which I've tested the extension, the result is normal, however on my buddy's site the !(model instance Model) evaluates to true even though it is actually an instance of the correct class. The consequence is a super messed up version of the model where the model's attributes is a reference to the models collection (strange right?). Needless to say, all kinds of crazy things were happening afterward. Why this is occurring is beyond me. However changing this line (!(model instanceof Model)) to (!(model instanceof Backbone.Model)) seems to fix the problem. I thought maybe it had something to do with the Flot library (jQuery graph library) creating their own version of 'Model' but looking through the source yielded no instances of it. I'm just curious as to why this would happen. And does it make sense to add this little change to the Backbone source? Update: I just realized that the "fix" doesn't actually work. I can also add that my backbone Models are namespaced in a wrapping object so that declaration looks something like class SomeNamespace.SomeModel extends Backbone.Model

    Read the article

  • How do I 'globally' catch exceptions thrown in object instances.

    - by SleepyBobos
    I am currently writing a winforms application (C#). I am making use of the Enterprise Library Exception Handling Block, following a fairly standard approach from what I can see. IE : In the Main method of Program.cs I have wired up event handler to Application.ThreadException event etc. This approach works well and handles the applications exceptional circumstances. In one of my business objects I throw various exceptions in the Set accessor of one of the objects properties set { if (value > MaximumTrim) throw new CustomExceptions.InvalidTrimValue("The value of the minimum trim..."); if (!availableSubMasterWidthSatisfiesAllPatterns(value)) throw new CustomExceptions.InvalidTrimValue("Another message..."); _minimumTrim = value; } My logic for this approach (without turning this into a 'when to throw exceptions' discussion) is simply that the business objects are responsible for checking business rule constraints and throwing an exception that can bubble up and be caught as required. It should be noted that in the UI of my application I do explictly check the values that the public property is being set to (and take action there displaying friendly dialog etc) but with throwing the exception I am also covering the situation where my business object may not be used by a UI eg : the Property is being set by another business object for example. Anyway I think you all get the idea. My issue is that these exceptions are not being caught by the handler wired up to Application.ThreadException and I don't understand why. From other reading I have done the Application.ThreadException event and it handler "... catches any exception that occurs on the main GUI thread". Are the exceptions being raised in my business object not in this thread? I have not created any new threads. I can get the approach to work if I update the code as follows, explicity calling the event handler that is wired to Application.ThreadException. This is the approach outlined in Enterprise Library samples. However this approach requires me to wrap any exceptions thrown in a try catch, something I was trying to avoid by using a 'global' handler to start with. try { if (value > MaximumTrim) throw new CustomExceptions.InvalidTrimValue("The value of the minimum..."); if (!availableSubMasterWidthSatisfiesAllPatterns(value)) throw new CustomExceptions.InvalidTrimValue("Another message"); _minimumTrim = value; } catch (Exception ex) { Program.ThreadExceptionHandler.ProcessUnhandledException(ex); } I have also investigated using wiring a handler up to AppDomain.UnhandledException event but this does not catch the exceptions either. I would be good if someone could explain to me why my exceptions are not being caught by my global exception handler in the first code sample. Is there another approach I am missing or am I stuck with wrapping code in try catch, shown above, as required?

    Read the article

  • vector related memory allocation question

    - by memC
    Hi all, I am encountering the following bug. I have a class Foo. Instances of this class are stored in a std::vector vec of class B. In class Foo, I am creating an instance of class A by allocating memory using new and deleting that object in ~Foo(). The code compiles, but I get a crash at runtime. If I remove delete my_a from the destructor of class Foo, the code runs fine (but there is going to be a memory leak). Could someone please explain what is going wrong here and suggest a fix? Thank you! #include <iostream> #include <vector> class A{ public: A(int val); ~A(){}; int val_a; }; A::A(int val){ val_a = val; }; class Foo { public: Foo(); ~Foo(); void createA(); A* my_a; }; Foo::Foo(){ createA(); }; void Foo::createA(){ my_a = new A(20); }; Foo::~Foo(){ delete my_a; }; class B { public: std::vector<Foo> vec; void createFoo(); B(){}; ~B(){}; }; void B::createFoo(){ vec.push_back(Foo()); }; int main(){ B b; int i =0; for (i = 0; i < 5; i ++){ std::cout<<"\n creating Foo"; b.createFoo(); std::cout<<"\n Foo created"; } std::cout<<"\nDone with Foo creation"; std::cout << "\nPress RETURN to continue..."; std::cin.get(); return 0; }
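
    For what it's worth, the crash pattern here is what you would expect from the rule of three: vec.push_back(Foo()) copies the temporary Foo with the compiler-generated copy constructor, so the stored element and the temporary share the same my_a pointer and the same address ends up deleted more than once (vector reallocations add yet more shallow copies). A minimal sketch of one fix, reusing the Foo and A definitions from the question (the matching declarations would also need to be added to class Foo):

        // deep-copy my_a instead of sharing the raw pointer
        Foo::Foo(const Foo &other) : my_a(new A(*other.my_a)) {}

        Foo &Foo::operator=(const Foo &other) {
            if (this != &other) {
                delete my_a;
                my_a = new A(*other.my_a);   // A's implicit copy constructor copies val_a
            }
            return *this;
        }

    Alternatively, store the A by value (A my_a;) or behind a copy-friendly smart pointer and let the compiler-generated copy operations do the right thing.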

    Read the article

  • thread management in nbody code of cuda-sdk

    - by xnov
    When I read the nbody code in Cuda-SDK, I went through some lines in the code and I found that it is a little bit different than their paper in GPUGems3 "Fast N-Body Simulation with CUDA". My questions are: First, why the blockIdx.x is still involved in loading memory from global to share memory as written in the following code? for (int tile = blockIdx.y; tile < numTiles + blockIdx.y; tile++) { sharedPos[threadIdx.x+blockDim.x*threadIdx.y] = multithreadBodies ? positions[WRAP(blockIdx.x + q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] : //this line positions[WRAP(blockIdx.x + tile, gridDim.x) * p + threadIdx.x]; //this line __syncthreads(); // This is the "tile_calculation" function from the GPUG3 article. acc = gravitation(bodyPos, acc); __syncthreads(); } isn't it supposed to be like this according to paper? I wonder why sharedPos[threadIdx.x+blockDim.x*threadIdx.y] = multithreadBodies ? positions[WRAP(q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] : positions[WRAP(tile, gridDim.x) * p + threadIdx.x]; Second, in the multiple threads per body why the threadIdx.x is still involved? Isn't it supposed to be a fix value or not involving at all because the sum only due to threadIdx.y if (multithreadBodies) { SX_SUM(threadIdx.x, threadIdx.y).x = acc.x; //this line SX_SUM(threadIdx.x, threadIdx.y).y = acc.y; //this line SX_SUM(threadIdx.x, threadIdx.y).z = acc.z; //this line __syncthreads(); // Save the result in global memory for the integration step if (threadIdx.y == 0) { for (int i = 1; i < blockDim.y; i++) { acc.x += SX_SUM(threadIdx.x,i).x; //this line acc.y += SX_SUM(threadIdx.x,i).y; //this line acc.z += SX_SUM(threadIdx.x,i).z; //this line } } } Can anyone explain this to me? Is it some kind of optimization for faster code?

    Read the article

  • invalid postback event instead of dropdown to datagrid

    - by rima
    I am facing a funny situation. I created a page which has some values; I set these values and also control my postback event. The problem happens when I change a component's index (e.g. reselect a combo box which is not inside my DataGrid): then, I don't know why, without my page calling Page_Load it goes into the create-new-row-in-grid function and all of my parameters are null! I just receive a null exception. So, in other words, let me try to explain the situation: when I load my page I initialize some parameters, and then everything works fine. When I change the selected item of my combo box, the page is supposed to go and run the function related to that combo box and call Page_Load, but it does not go there and instead goes to the RowCreated function. I am illustrating part of my page below. Please help me, because I am not receiving any error except the null exception, and it triggers the wrong event, which seems very complicated to me. public partial class W_CM_FRM_02 : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (Page.IsPostBack && !loginFail) return; InitializeItems(); } } private void InitializeItems() { cols = new string[] { "v_classification_code", "v_classification_name" }; arrlstCMMM_CLASSIFICATION = (ArrayList)db.Select(cols, "CMMM_CLASSIFICATION", "v_classification_code <> 'N'", " ORDER BY v_classification_name"); } } protected void DGV_RFA_DETAILS_RowCreated(object sender, GridViewRowEventArgs e) { //db = (Database)Session["oCon"]; foreach (DataRow dr in arrlstCMMM_CLASSIFICATION) ((DropDownList)DGV_RFA_DETAILS.Rows[index].Cells[4].FindControl("OV_RFA_CLASSIFICATION")).Items.Add(new ListItem(dr["v_classification_name"].ToString(), dr["v_classification_code"].ToString())); } protected void V_CUSTOMER_SelectedIndexChanged(object sender, EventArgs e) { if (V_CUSTOMER.SelectedValue == "xxx" || V_CUSTOMER.SelectedValue == "ddd") V_IMPACTED_FUNCTIONS.Enabled = true; } } My form: <%@ Page Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="W_CM_FRM_02.aspx.cs" Inherits="W_CM_FRM_02" Title="W_CM_FRM_02" enableeventvalidation="false" EnableViewState="true"%> <td>Project name*</td> <td><asp:DropDownList ID="V_CUSTOMER" runat="server" AutoPostBack="True" onselectedindexchanged="V_CUSTOMER_SelectedIndexChanged" /></td> <td colspan = "8"> <asp:GridView ID="DGV_RFA_DETAILS" runat="server" ShowFooter="True" AutoGenerateColumns="False" CellPadding="1" ForeColor="#333333" GridLines="None" OnRowDeleting="grvRFADetails_RowDeleting" Width="100%" Style="text-align: left" onrowcreated="DGV_RFA_DETAILS_RowCreated"> <RowStyle BackColor="#FFFBD6" ForeColor="#333333" /> <Columns> <asp:BoundField DataField="ON_RowNumber" HeaderText="SNo" /> <asp:TemplateField HeaderText="RFA/RAD/Ticket No*"> <ItemTemplate> <asp:TextBox ID="OV_RFA_NO" runat="server" Width="120"></asp:TextBox> </ItemTemplate> </asp:TemplateField>

    Read the article

  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt if my chosen solution was correct for the problem, so this time I will explain the problem and ask if a) I am on the right track and b) if there is a way around my current brick wall. I am currently building a web interface to enable an existing database to be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this: class Musician(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) dob = models.DateField() class Album(models.Model): artist = models.ForeignKey(Musician) name = models.CharField(max_length=100) class Instrument(models.Model): artist = models.ForeignKey(Musician) name = models.CharField(max_length=100) Where I have one central table (Musician) and several tables of associated data that are related by either ForeignKey or OneToOneFields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on data the data on the main or related tables. Likewise, the users can then select what piece of data is used to rank results that are presented to them. The results are then viewed initially as a 2 dimensional table with a single row per Musician with selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data. Up to here is fine and I have a working implementation. However, it is important that I have the ability for a given user to upload there own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I had tried to do this was to add the models: class UserDataSets(models.Model): user = models.ForeignKey(User) name = models.CharField(max_length=100) description = models.CharField(max_length=64) results = models.ManyToManyField(Musician, through='UserData') class UserData(models.Model): artist = models.ForeignKey(Musician) dataset = models.ForeignKey(UserDataSets) score = models.IntegerField() class Meta: unique_together = (("artist", "dataset"),) I have a simple upload mechanism enabling users to upload a data set file that consists of 1 to 1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent from each other and will often contain entries for the same musician. This worked fine for displaying the data, starting from a given artist I can do something like this: artist = Musician.objects.get(pk=1) dataset = UserDataSets.objects.get(pk=5) print artist.userdata_set.get(dataset=dataset.pk) However, this approach fell over when I came to implement the filtering and ordering of query set of musicians based on the data contained in a single user data set. For example, I could easily order the query set based on all of the data in the UserData table like this: artists = Musician.objects.all().order_by(userdata__score) But that does not help me order by the results of a given single user dataset. Likewise I need to be able to filter the query set based on the "scores" from different user data sets (eg find all musicians with a score 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?

    Read the article

  • How to infer the type of a derived class in base class?

    - by enzi
    I want to create a method that allows me to change arbitrary properties of classes that derive from my base class, the result should look like this: SetPropertyValue("size.height", 50); – where size is a property of my derived class and height is a property of size. I'm almost done with my implementation but there's one final obstacle that I want to solve before moving on, to describe this I will first have to explain my implementation a bit: Properties that can be modified are decorated with an attribute There's a method in my base class that searches for all derived classes and their decorated properties For each property I generate a "property modifier", a class that contains 2 delegates: one to set and one to get the value of the property. Property Modifiers are stored in a dictionary, with the name of the property as key In my base class, there is another dictionary that contains all property-modifier-dictionaries, with the Type of the respective class as key. What the SetPropertyValue method does is this: Get the correct property-modifier-dictionary, using the concrete type of the derived class (<- yet to solve) Get the property modifier of the property to change (e.g. of the property size) Use the get or set delegate to modify the property's value Some example code to clarify further: private static Dictionary<RuntimeTypeHandle, object> EditableTypes; //property-modifier-dictionary protected void SetPropertyValue<T>(EditablePropertyMap<T> map, string property, object value) { var property = map[property]; // get the property modifier property.Set((T)this, value); // use the set delegate (encapsulated in a method) } In the above code, T is the Type of the actual (derived) class. I need this type for the get/set delegates. The problem is how to get the EditablePropertyMap<T> when I don't know what T is. My current (ugly) solution is to pass the map in an overriden virtual method in the derived class: public override void SetPropertyValue(string property, object value) { base.SetPropertyValue((EditablePropertyMap<ExampleType>)EditableTypes[typeof(ExampleType)], property, value); } What this does is: get the correct dictionary containing the property modifiers of this class using the class's type, cast it to the appropiate type and pass it to the SetPropertyValue method. I want to get rid of the SetPropertyValue method in my derived class (since there are a lot of derived classes), but don't know yet how to accomplish that. I cannot just make a virtual GetEditablePropertyMap<T> method because I cannot infer a concrete type for T then. I also cannot acces my dictionary directly with a type and retrieve an EditablePropertyMap<T> from it because I cannot cast to it from object in the base class, since again I do not know T. I found some neat tricks to infere types (e.g. by adding a dummy T parameter), but cannot apply them to my specific problem. I'd highly appreciate any suggestions you may have for me.

    Read the article

  • Correlation formula explanation needed d3.js

    - by divakar
    function getCorrelation(xArray, yArray) { alert(xArray); alert(yArray); function sum(m, v) {return m + v;} function sumSquares(m, v) {return m + v * v;} function filterNaN(m, v, i) {isNaN(v) ? null : m.push(i); return m;} // clean the data (because we know that some values are missing) var xNaN = _.reduce(xArray, filterNaN , []); var yNaN = _.reduce(yArray, filterNaN , []); var include = _.intersection(xNaN, yNaN); var fX = _.map(include, function(d) {return xArray[d];}); var fY = _.map(include, function(d) {return yArray[d];}); var sumX = _.reduce(fX, sum, 0); var sumY = _.reduce(fY, sum, 0); var sumX2 = _.reduce(fX, sumSquares, 0); var sumY2 = _.reduce(fY, sumSquares, 0); var sumXY = _.reduce(fX, function(m, v, i) {return m + v * fY[i];}, 0); var n = fX.length; var ntor = ( ( sumXY ) - ( sumX * sumY / n) ); var dtorX = sumX2 - ( sumX * sumX / n); var dtorY = sumY2 - ( sumY * sumY / n); var r = ntor / (Math.sqrt( dtorX * dtorY )); // Pearson ( http://www.stat.wmich.edu/s216/book/node122.html ) var m = ntor / dtorX; // y = mx + b var b = ( sumY - m * sumX ) / n; // console.log(r, m, b); return {r: r, m: m, b: b}; } I am finding the correlation between the points I plot using this function, which was not written by me. My xArray = [120,110,130,132,120,118,134,105,120,0,0,0,0,137,125,120,127,120,160,120,148] and my yArray = [80,70,70,80,70,62,69,70,70,62,90,42,80,72,0,0,0,0,78,82,68,60,58,82,60,76,86,82,70]. I can't understand the function completely. Can anybody explain it using the data I pasted here? I also want to remove the zeros from being included in the calculation.
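
    Reading the code itself: sum and sumSquares are reducers, filterNaN collects the indices whose values are actual numbers, include is the set of indices valid in both arrays, and fX/fY are the cleaned arrays of length n. The function then computes the standard Pearson correlation r plus the least-squares line y = mx + b. In conventional notation (this is simply the code transcribed, written out as formulas):

        r = \frac{\sum xy - \frac{\sum x \, \sum y}{n}}
                 {\sqrt{\left(\sum x^2 - \frac{(\sum x)^2}{n}\right)\left(\sum y^2 - \frac{(\sum y)^2}{n}\right)}},
        \qquad
        m = \frac{\sum xy - \frac{\sum x \, \sum y}{n}}{\sum x^2 - \frac{(\sum x)^2}{n}},
        \qquad
        b = \frac{\sum y - m \sum x}{n}

    Note that the code only filters out NaN values; zeros are perfectly legitimate numbers to it, so excluding them would need an extra filter analogous to filterNaN that drops index i whenever xArray[i] or yArray[i] is 0, applied before include is computed.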

    Read the article

  • Generics vs Object performance

    - by Risho
    I'm doing practice problems from the MCTS Exam 70-536 (Microsoft .NET Framework Application Development Foundation) book, and one of the problems is to create two classes, one generic and one object-based, that both do the same thing, where a loop uses each class and iterates thousands of times; then, using a timer, time the performance of both. There was another post, "C# generics question", that asked the same question but no one replied. Basically, if in my code I run the generic class first, it takes longer to process. If I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster. I used the original user's code to save myself some time. I didn't see anything particularly wrong with the code and was puzzled by the outcome. Can someone explain these unusual results? Thanks, Risho. Here is the code: class Program { class Object_Sample { public Object_Sample() { Console.WriteLine("Object_Sample Class"); } public long getTicks() { return DateTime.Now.Ticks; } public void display(Object a) { Console.WriteLine("{0}", a); } } class Generics_Samle<T> { public Generics_Samle() { Console.WriteLine("Generics_Sample Class"); } public long getTicks() { return DateTime.Now.Ticks; } public void display(T a) { Console.WriteLine("{0}", a); } } static void Main(string[] args) { long ticks_initial, ticks_final, diff_generics, diff_object; Object_Sample OS = new Object_Sample(); Generics_Samle<int> GS = new Generics_Samle<int>(); //Generic Sample ticks_initial = 0; ticks_final = 0; ticks_initial = GS.getTicks(); for (int i = 0; i < 50000; i++) { GS.display(i); } ticks_final = GS.getTicks(); diff_generics = ticks_final - ticks_initial; //Object Sample ticks_initial = 0; ticks_final = 0; ticks_initial = OS.getTicks(); for (int j = 0; j < 50000; j++) { OS.display(j); } ticks_final = OS.getTicks(); diff_object = ticks_final - ticks_initial; Console.WriteLine("\nPerformance of Generics {0}", diff_generics); Console.WriteLine("Performance of Object {0}", diff_object); Console.ReadKey(); } }

    Read the article

  • ACL implementation

    - by Kirzilla
    First question: please, could you explain to me how the simplest ACL could be implemented in MVC? Here is the first approach, using the Acl in the controller... <?php class MyController extends Controller { public function myMethod() { //It is just abstract code $acl = new Acl(); $acl->setController('MyController'); $acl->setMethod('myMethod'); $acl->getRole(); if (!$acl->allowed()) die("You're not allowed to do it!"); ... } } ?> It is a very bad approach, and its downside is that we have to add a piece of Acl code to each controller method - but we don't need any additional dependencies! The next approach is to make all controller methods private and add the ACL code to the controller's __call method. <?php class MyController extends Controller { private function myMethod() { ... } public function __call($name, $params) { //It is just abstract code $acl = new Acl(); $acl->setController(__CLASS__); $acl->setMethod($name); $acl->getRole(); if (!$acl->allowed()) die("You're not allowed to do it!"); ... } } ?> It is better than the previous code, but the main downsides are: all controller methods must be private, and we have to add ACL code to each controller's __call method. The next approach is to put the Acl code into a parent Controller, but we still need to keep all child controller methods private. What is the solution? And what is the best practice? Where should I call the Acl functions to decide whether to allow or disallow a method to be executed? Second question: the second question is about getting the role using the Acl. Let's imagine that we have guests, users and users' friends. A user has restricted access to viewing his profile, so that only friends can view it. Guests can't view this user's profile at all. So, here is the logic: we have to ensure that the method being called is profile; we have to detect the owner of this profile; we have to detect whether the viewer is the owner of this profile or not; we have to read the restriction rules for this profile; and we have to decide whether or not to execute the profile method. The main question is about detecting the owner of the profile. We can detect who the owner of the profile is only by executing the model's method $model->getOwner(), but the Acl does not have access to the model. How can we implement this? I hope that my thoughts are clear. Sorry for my English. Thank you.

    Read the article

  • Android Notification with AlarmManager, Broadcast and Service

    - by user2435829
    this is my code for menage a single notification: myActivity.java public class myActivity extends Activity { protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.mylayout); cal = Calendar.getInstance(); // it is set to 10.30 cal.set(Calendar.HOUR, 10); cal.set(Calendar.MINUTE, 30); cal.set(Calendar.SECOND, 0); long start = cal.getTimeInMillis(); if(cal.before(Calendar.getInstance())) { start += AlarmManager.INTERVAL_FIFTEEN_MINUTES; } Intent mainIntent = new Intent(this, myReceiver.class); pIntent = PendingIntent.getBroadcast(this, 0, mainIntent, PendingIntent.FLAG_UPDATE_CURRENT); AlarmManager myAlarm = (AlarmManager)getSystemService(ALARM_SERVICE); myAlarm.setRepeating(AlarmManager.RTC_WAKEUP, start, AlarmManager.INTERVAL_FIFTEEN_MINUTES, pIntent); } } myReceiver.java public class myReceiver extends BroadcastReceiver { @Override public void onReceive(Context c, Intent i) { Intent myService1 = new Intent(c, myAlarmService.class); c.startService(myService1); } } myAlarmService.java public class myAlarmService extends Service { @Override public IBinder onBind(Intent arg0) { return null; } @Override public void onCreate() { super.onCreate(); } @SuppressWarnings("deprecation") @Override public void onStart(Intent intent, int startId) { super.onStart(intent, startId); displayNotification(); } @Override public void onDestroy() { super.onDestroy(); } public void displayNotification() { Intent mainIntent = new Intent(this, myActivity.class); PendingIntent pIntent = PendingIntent.getActivity(this, 0, mainIntent, PendingIntent.FLAG_UPDATE_CURRENT); NotificationManager nm = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE); Notification.Builder builder = new Notification.Builder(this); builder.setContentIntent(pIntent) .setAutoCancel(true) .setSmallIcon(R.drawable.ic_noti) .setTicker(getString(R.string.notifmsg)) .setContentTitle(getString(R.string.app_name)) .setContentText(getString(R.string.notifmsg)); nm.notify(0, builder.build()); } } AndroidManifest.xml <uses-permission android:name="android.permission.WAKE_LOCK" /> ... ... ... <service android:name=".myAlarmService" android:enabled="true" /> <receiver android:name=".myReceiver"/> IF the time has NOT past yet everything works perfectly. The notification appears when it must appear. BUT if the time HAS past (let's assume it is 10.31 AM) the notification fires every time... when I close and re-open the app, when I click on the notification... it has a really strange behavior. I can't figure out what's wrong in it. Can you help me please (and explain why, if you find a solution), thanks in advance :)

    Read the article

  • Memory leaks getting sub-images from video (cvGetSubRect)

    - by dnul
    Hi, I'm trying to do video windowing, that is: show all frames from a video and also a sub-image from each frame. This sub-image can change size and be taken from a different position in the original frame. So, the code I've written basically does this: cvQueryFrame to get a new image from the video; create a new IplImage (img) with the sub-image dimensions (window.height, window.width); create a new CvMat (mat) with the sub-image dimensions (window.height, window.width); cvGetSubRect(originalImage, mat, window) grabs the sub-image; transform mat (CvMat) to img (IplImage) using cvGetImage. My problem is that for each frame I create a new IplImage and CvMat, which take a lot of memory, and when I try to free the allocated memory I get a segmentation fault, or in the case of the CvMat the allocated space does not get freed (valgrind keeps telling me it is "definitely lost" space). The following code does it: int main(void){ CvCapture* capture; CvRect window; CvMat * tmp; //window size window.x=0;window.y=0;window.height=100;window.width=100; IplImage * src=NULL,*bk=NULL,* sub=NULL; capture=cvCreateFileCapture( "somevideo.wmv"); while((src=cvQueryFrame(capture))!=NULL){ cvShowImage("common",src); //get sub-image sub=cvCreateImage(cvSize(window.height,window.width),8,3); tmp =cvCreateMat(window.height, window.width,CV_8UC1); cvGetSubRect(src, tmp , window); sub=cvGetImage(tmp, sub); cvShowImage("Window",sub); //free space if(bk!=NULL) cvReleaseImage(&bk); bk=sub; cvReleaseMat(&tmp); cvWaitKey(20); //window dimensions changes window.width++; window.height++; } } cvReleaseMat(&tmp); does not seem to have any effect on the total amount of lost memory; valgrind reports the same amount of "definitely lost" memory whether I comment or uncomment this line. cvReleaseImage(&bk); produces a segmentation fault. Notice I'm trying to free the previous sub-frame, which I'm backing up in the bk variable. If I comment this line out the program runs smoothly, but with lots of memory leaks. I really need to get rid of the memory leaks. Can anyone explain to me how to correct this, or even better, how to correctly perform image windowing? Thank you
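
    A sketch of one way around the leak and the crash (my reading of the loop, not verified against the full program, using the same OpenCV C API): cvGetSubRect only fills a header that points into the source frame, so pre-allocating the CvMat with cvCreateMat leaks its buffer, and cvGetImage makes the IplImage header share the mat's data, which is why releasing that image later frees memory it does not own. Copying the ROI into an image you do own sidesteps both problems:

        CvMat sub_hdr;                                       // plain header on the stack, no allocation
        cvGetSubRect(src, &sub_hdr, window);                 // points into src's data, nothing to free
        IplImage *sub = cvCreateImage(cvSize(window.width, window.height),  // cvSize takes width, then height
                                      src->depth, src->nChannels);
        cvCopy(&sub_hdr, sub);                               // own copy of the pixels
        cvShowImage("Window", sub);
        cvReleaseImage(&sub);                                // safe: sub owns its buffer

    If the copy is too costly, the alternative is to keep sub as a header only (cvCreateImageHeader / cvReleaseImageHeader) and never release the shared pixel data through it.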

    Read the article

  • Strange behavior of std::cout &operator<<...

    - by themoondothshine
    Hey ppl, I came across something weird today, and I was wondering if any of you here could explain what's happening... Here's a sample: #include <iostream> #include <cassert> using namespace std; #define REQUIRE_STRING(s) assert(s != 0) #define REQUIRE_STRING_LEN(s, n) assert(s != 0 || n == 0) class String { public: String(const char *str, size_t len) : __data(__construct(str, len)), __len(len) {} ~String() { __destroy(__data); } const char *toString() const { return const_cast<const char *>(__data); } String &toUpper() { REQUIRE_STRING_LEN(__data, __len); char *it = __data; while(it < __data + __len) { if(*it >= 'a' && *it <= 'z') *it -= 32; ++it; } return *this; } String &toLower() { REQUIRE_STRING_LEN(__data, __len); char *it = __data; while(it < __data + __len) { if(*it >= 'A' && *it <= 'Z') *it += 32; ++it; } return *this; } private: char *__data; size_t __len; protected: static char *__construct(const char *str, size_t len) { REQUIRE_STRING_LEN(str, len); char *data = new char[len]; std::copy(str, str + len, data); return data; } static void __destroy(char *data) { REQUIRE_STRING(data); delete[] data; } }; int main() { String s("Hello world!", __builtin_strlen("Hello world!")); cout << s.toLower().toString() << endl; cout << s.toUpper().toString() << endl; cout << s.toLower().toString() << endl << s.toUpper().toString() << endl; return 0; } Now, I had expected the output to be: hello world! HELLO WORLD! hello world! HELLO WORLD! but instead I got this: hello world! HELLO WORLD! hello world! hello world! I can't really understand why the second toUpper didn't have any effect.
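
    A likely explanation (my reading, not from the original post): toLower and toUpper both mutate the same __data buffer in place, and toString hands back a pointer into that buffer. In the single chained statement, the order in which the two mutations run relative to each other and to the actual stream writes is unspecified (pre-C++17 sequencing), so whichever mutation happens to run last decides what both pointers print - here, lower case twice. Giving each mutation its own full expression pins the order down:

        // one full-expression per mutation removes the unspecified ordering
        cout << s.toLower().toString() << endl;   // buffer is lower-case when printed
        cout << s.toUpper().toString() << endl;   // buffer is upper-case when printed
        cout << s.toLower().toString() << endl;
        cout << s.toUpper().toString() << endl;

    Having toString return a std::string copy would also help readers spot the aliasing, but separate statements are what actually guarantees the expected four-line output.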

    Read the article

< Previous Page | 259 260 261 262 263 264 265 266 267 268 269 270  | Next Page >